Science.gov

Sample records for accurate face recognition

  1. A Highly Accurate Face Recognition System Using Filtering Correlation

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Sayuri; Kodate, Kashiko

    2007-09-01

    The authors previously constructed a highly accurate fast face recognition optical correlator (FARCO) [E. Watanabe and K. Kodate: Opt. Rev. 12 (2005) 460], and subsequently developed an improved, super high-speed FARCO (S-FARCO), which is able to process several hundred thousand frames per second. The principal advantage of our new system is its wide applicability to any correlation scheme. Three different configurations were proposed, each depending on correlation speed. This paper describes and evaluates a software correlation filter. The face recognition function proved highly accurate even with low-resolution facial images (64 × 64 pixels). An operation time of less than 10 ms was achieved using a personal computer with a 3 GHz central processing unit (CPU) and 2 GB of memory. When we applied the software correlation filter to a high-security cellular phone face recognition system, experiments on 30 female students over a period of three months yielded low error rates: a 0% false acceptance rate and a 2% false rejection rate. Therefore, the filtering correlation works effectively when applied to low-resolution images such as web-based images or faces captured by a monitoring camera.
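
    The abstract does not spell out the correlation filter itself; as an illustrative sketch only, Fourier-domain phase-only correlation (a common optical-correlator-style matcher, not necessarily FARCO's exact filter) can be written in a few lines of NumPy:

```python
import numpy as np

def phase_only_correlation(img_a, img_b):
    """Correlate two equal-size grayscale images in the Fourier domain.

    Returns the peak correlation value; a high, sharp peak indicates
    that the two faces match.
    """
    Fa = np.fft.fft2(img_a)
    Fb = np.fft.fft2(img_b)
    # Phase-only filtering: discard magnitudes, keep phase spectra.
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    return corr.max()

# Two identical 64x64 "faces" correlate near-perfectly; an unrelated
# image gives a much weaker peak.
rng = np.random.default_rng(0)
face = rng.random((64, 64))
other = rng.random((64, 64))
print(phase_only_correlation(face, face) > phase_only_correlation(face, other))  # True
```

    Thresholding the peak value yields the accept/reject decision that the false acceptance and false rejection rates above describe.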

  2. A cross-race effect in metamemory: Predictions of face recognition are more accurate for members of our own race.

    PubMed

    Hourihan, Kathleen L; Benjamin, Aaron S; Liu, Xiping

    2012-09-01

    The Cross-Race Effect (CRE) in face recognition is the well-replicated finding that people are better at recognizing faces from their own race, relative to other races. The CRE reveals systematic limitations on eyewitness identification accuracy and suggests that some caution is warranted in evaluating cross-race identification. The CRE is a problem because jurors value eyewitness identification highly in verdict decisions. In the present paper, we explore how accurate people are in predicting their ability to recognize own-race and other-race faces. Caucasian and Asian participants viewed photographs of Caucasian and Asian faces, and made immediate judgments of learning during study. An old/new recognition test replicated the CRE: both groups displayed superior discriminability of own-race faces, relative to other-race faces. Importantly, relative metamnemonic accuracy was also greater for own-race faces, indicating that the accuracy of predictions about face recognition is influenced by race. This result indicates another source of concern when eliciting or evaluating eyewitness identification: people are less accurate in judging whether they will or will not recognize a face when that face is of a different race than they are. This new result suggests that a witness's claim of being likely to recognize a suspect from a lineup should be interpreted with caution when the suspect is of a different race than the witness.
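
    The "superior discriminability" reported here is conventionally quantified with the signal-detection index d′ computed from hit and false-alarm rates. A minimal sketch with made-up rates (not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection discriminability index: d' = z(H) - z(FA)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical old/new recognition data: own-race faces discriminated
# better than other-race faces at the same false-alarm rate.
own_race = d_prime(hit_rate=0.80, false_alarm_rate=0.20)
other_race = d_prime(hit_rate=0.65, false_alarm_rate=0.20)
print(own_race > other_race)  # True
```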

  3. The Cambridge Face Tracker: Accurate, Low Cost Measurement of Head Posture Using Computer Vision and Face Recognition Software

    PubMed Central

    Thomas, Peter B. M.; Baltrušaitis, Tadas; Robinson, Peter; Vivian, Anthony J.

    2016-01-01

    Purpose We validate a video-based method of head posture measurement. Methods The Cambridge Face Tracker uses neural networks (constrained local neural fields) to recognize facial features in video. The relative position of these facial features is used to calculate head posture. First, we assess the accuracy of this approach against videos in three research databases where each frame is tagged with a precisely measured head posture. Second, we compare our method to a commercially available mechanical device, the Cervical Range of Motion device: four subjects each adopted 43 distinct head postures that were measured using both methods. Results The Cambridge Face Tracker achieved confident facial recognition in 92% of the approximately 38,000 frames of video from the three databases. The respective mean error in absolute head posture was 3.34°, 3.86°, and 2.81°, with a median error of 1.97°, 2.16°, and 1.96°. The accuracy decreased with more extreme head posture. Comparing The Cambridge Face Tracker to the Cervical Range of Motion Device gave correlation coefficients of 0.99 (P < 0.0001), 0.96 (P < 0.0001), and 0.99 (P < 0.0001) for yaw, pitch, and roll, respectively. Conclusions The Cambridge Face Tracker performs well under real-world conditions and within the range of normally-encountered head posture. It allows useful quantification of head posture in real time or from precaptured video. Its performance is similar to that of a clinically validated mechanical device. It has significant advantages over other approaches in that subjects do not need to wear any apparatus, and it requires only low cost, easy-to-setup consumer electronics. Translational Relevance Noncontact assessment of head posture allows more complete clinical assessment of patients, and could benefit surgical planning in future. PMID:27730008

  4. Toward hyperspectral face recognition

    NASA Astrophysics Data System (ADS)

    Robila, Stefan A.

    2008-02-01

    Face recognition continues to meet significant challenges in reaching accurate results and still remains one of the activities where humans outperform technology. An attractive approach to improving face identification is the fusion of multiple imaging sources such as visible and infrared images. Hyperspectral data, i.e. images collected over hundreds of narrow contiguous light spectrum intervals, constitute a natural choice for expanding face recognition image fusion, especially since they may provide information beyond the normal visible range, thus exceeding normal human sensing. In this paper we investigate the efficiency of hyperspectral face recognition through an in-house experiment that collected data in over 120 bands within the visible and near infrared range. The imagery was produced using an off-the-shelf sensor, both indoors and outdoors, with the subjects being photographed from various angles. Further processing included spectra collection and feature extraction. Human matching performance based on spectral properties is discussed.

  5. Symmetry, probability, and recognition in face space.

    PubMed

    Sirovich, Lawrence; Meytlis, Marsha

    2009-04-28

    The essential midline symmetry of human faces is shown to play a key role in facial coding and recognition. This also has deep and important connections with recent explorations of the organization of primate cortex, as well as human psychophysical experiments. Evidence is presented that the dimension of face recognition space for human faces is dramatically lower than previous estimates. One result of the present development is the construction of a probability distribution in face space that produces an interesting and realistic range of (synthetic) faces. Another is a recognition algorithm that by reasonable criteria is nearly 100% accurate.

  6. [Comparative studies of face recognition].

    PubMed

    Kawai, Nobuyuki

    2012-07-01

    Every human being is proficient in face recognition. However, the reason for and the manner in which humans have attained such an ability remain unknown. These questions can best be answered through comparative studies of face recognition in non-human animals. Studies in both primates and non-primates show that not only primates, but also non-primates, possess the ability to extract information from their conspecifics and from human experimenters. Neural specialization for face recognition is shared with mammals in distant taxa, suggesting that face recognition evolved earlier than the emergence of mammals. A recent study indicated that a social insect, the golden paper wasp, can distinguish the faces of its conspecifics, whereas a closely related species, which has a less complex social lifestyle with just one queen ruling a nest of underlings, did not show strong face recognition for its conspecifics. Social complexity and the need to differentiate between one another likely led humans to evolve their face recognition abilities.

  7. Face recognition performance with superresolution.

    PubMed

    Hu, Shuowen; Maschal, Robert; Young, S Susan; Hong, Tsai Hong; Phillips, P Jonathon

    2012-06-20

    With the prevalence of surveillance systems, face recognition is crucial to aiding the law enforcement community and homeland security in identifying suspects and suspicious individuals on watch lists. However, face recognition performance is severely affected by the low face resolution of individuals in typical surveillance footage, oftentimes due to the distance of individuals from the cameras as well as the small pixel count of low-cost surveillance systems. Superresolution image reconstruction has the potential to improve face recognition performance by using a sequence of low-resolution images of an individual's face in the same pose to reconstruct a more detailed high-resolution facial image. This work conducts an extensive performance evaluation of superresolution for a face recognition algorithm using a methodology and experimental setup consistent with real world settings at multiple subject-to-camera distances. Results show that superresolution image reconstruction improves face recognition performance considerably at the examined midrange and close range. PMID:22722306
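
    The abstract does not describe the reconstruction algorithm; the simplest member of the family, shift-and-add superresolution with known subpixel offsets, can be sketched as follows (illustrative only):

```python
import numpy as np

def shift_and_add(frames, offsets, scale=2):
    """Naive superresolution: place each low-res frame onto a finer grid
    at its (known) subpixel offset, then average overlapping samples.

    frames  : list of HxW arrays of the same scene in the same pose
    offsets : list of (dy, dx) shifts measured in high-res pixels
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, offsets):
        # Each low-res pixel maps to one high-res site, offset by (dy, dx).
        ys = (np.arange(h) * scale + dy) % (h * scale)
        xs = (np.arange(w) * scale + dx) % (w * scale)
        acc[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    cnt[cnt == 0] = 1  # leave unobserved sites at zero
    return acc / cnt

# Toy example: four shifted samplings of a 2x-finer image tile the grid,
# so the high-res image is recovered exactly.
rng = np.random.default_rng(1)
hi = rng.random((8, 8))
frames = [hi[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)]
offsets = [(dy, dx) for dy in (0, 1) for dx in (0, 1)]
rec = shift_and_add(frames, offsets, scale=2)
print(np.allclose(rec, hi))  # True
```

    Real pipelines must first estimate the offsets by subpixel registration and usually add deblurring; this sketch assumes the offsets are known.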

  9. Genetic specificity of face recognition.

    PubMed

    Shakeshaft, Nicholas G; Plomin, Robert

    2015-10-13

    Specific cognitive abilities in diverse domains are typically found to be highly heritable and substantially correlated with general cognitive ability (g), both phenotypically and genetically. Recent twin studies have found the ability to memorize and recognize faces to be an exception, being similarly heritable but phenotypically substantially uncorrelated both with g and with general object recognition. However, the genetic relationships between face recognition and other abilities (the extent to which they share a common genetic etiology) cannot be determined from phenotypic associations. In this, to our knowledge, first study of the genetic associations between face recognition and other domains, 2,000 18- and 19-year-old United Kingdom twins completed tests assessing their face recognition, object recognition, and general cognitive abilities. Results confirmed the substantial heritability of face recognition (61%), and multivariate genetic analyses found that most of this genetic influence is unique and not shared with other cognitive abilities.
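
    The 61% figure comes from formal multivariate twin modeling; as a rough illustration of the underlying logic, the classical Falconer approximation estimates heritability from monozygotic and dizygotic twin correlations (the correlations below are hypothetical, chosen to give 0.61):

```python
def falconer_h2(r_mz, r_dz):
    """Falconer's estimate of narrow-sense heritability from twin
    correlations: h^2 = 2 * (r_MZ - r_DZ)."""
    return 2.0 * (r_mz - r_dz)

# Hypothetical twin correlations that would imply ~61% heritability.
print(falconer_h2(r_mz=0.70, r_dz=0.395))  # ≈ 0.61
```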

  10. Face recognition based tensor structure

    NASA Astrophysics Data System (ADS)

    Yang, De-qiang; Ye, Zhi-xia; Zhao, Yang; Liu, Li-mei

    2012-01-01

    Face recognition has broad applications, and it is a difficult problem since face images can change with photographic conditions, such as different illumination conditions, pose changes and camera angles. How to obtain invariant features for a face image is the key issue for a face recognition algorithm. In this paper, a novel tensor structure of the face image is proposed to represent image features with eight directions for each pixel value. The invariant feature of the face image is then obtained from the gradient decomposition that makes up the tensor structure. The singular value decomposition (SVD) and principal component analysis (PCA) of this tensor structure are then used for face recognition. The experimental results from this study show that many previously hard-to-recognize samples are correctly recognized, and the recognition rate is increased by 9%-11% in comparison with algorithms of the same type.
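
    The paper's exact tensor construction is not given in the abstract; as a loose sketch of the idea, one can stack pixel differences in the eight neighbour directions (such differences cancel any uniform brightness offset) and compress the unfolded tensor with an SVD:

```python
import numpy as np

def direction_tensor(img):
    """Stack pixel differences in the 8 neighbour directions into an
    H x W x 8 tensor (a simple stand-in for the paper's structure).
    Adding a constant to img cancels out, so the tensor is invariant
    to uniform brightness shifts."""
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]
    layers = [np.roll(img, s, axis=(0, 1)) - img for s in shifts]
    return np.stack(layers, axis=-1)

def svd_features(tensor, k=4):
    """Unfold the tensor into a matrix and keep the k leading singular
    values as a compact descriptor."""
    mat = tensor.reshape(tensor.shape[0], -1)
    _, s, _ = np.linalg.svd(mat, full_matrices=False)
    return s[:k]

rng = np.random.default_rng(2)
img = rng.random((16, 16))
feats = svd_features(direction_tensor(img))
print(feats.shape)  # (4,)
```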

  11. [Faces affect recognition in schizophrenia].

    PubMed

    Prochwicz, Katarzyna; Rózycka, Jagoda

    2012-01-01

    Clinical observations and the results of many experimental studies indicate that individuals suffering from schizophrenia reveal difficulties in the recognition of emotional states experienced by other people; however, the causes and the range of these problems have not been clearly described. Although early results suggested that difficulties in emotion recognition are related only to negative emotions, the results of studies conducted over the last 30 years indicate that emotion recognition problems are a manifestation of a general cognitive deficit, and they do not concern specific emotions. The article contains a review of the research on face affect recognition in schizophrenia. It discusses the causes of these difficulties, the differences in the accuracy of the recognition of specific emotions, the relationship between the symptoms of schizophrenia and the severity of problems with face perception, and the types of cognitive processes which influence the disturbances in face affect recognition. Particular attention is paid to the methodology of research on face affect recognition, including the methods used in control tasks relying on the identification of neutral faces, designed to assess the range of the deficit underlying face affect recognition problems. The analysis of methods used in particular studies revealed some weaknesses. The article also deals with the possibilities of improving the ability to recognize emotions, and briefly discusses the efficiency of emotion recognition training programs designed for patients suffering from schizophrenia.

  12. Holistic processing predicts face recognition.

    PubMed

    Richler, Jennifer J; Cheung, Olivia S; Gauthier, Isabel

    2011-04-01

    The concept of holistic processing is a cornerstone of face-recognition research. In the study reported here, we demonstrated that holistic processing predicts face-recognition abilities on the Cambridge Face Memory Test and on a perceptual face-identification task. Our findings validate a large body of work that relies on the assumption that holistic processing is related to face recognition. These findings also reconcile the study of face recognition with the perceptual-expertise work it inspired; such work links holistic processing of objects with people's ability to individuate them. Our results differ from those of a recent study showing no link between holistic processing and face recognition. This discrepancy can be attributed to the use in prior research of a popular but flawed measure of holistic processing. Our findings salvage the central role of holistic processing in face recognition and cast doubt on a subset of the face-perception literature that relies on a problematic measure of holistic processing.

  13. Semantic information can facilitate covert face recognition in congenital prosopagnosia.

    PubMed

    Rivolta, Davide; Schmalzl, Laura; Coltheart, Max; Palermo, Romina

    2010-11-01

    People with congenital prosopagnosia have never developed the ability to accurately recognize faces. This single case investigation systematically investigates covert and overt face recognition in "C.," a 69 year-old woman with congenital prosopagnosia. Specifically, we: (a) describe the first assessment of covert face recognition in congenital prosopagnosia using multiple tasks; (b) show that semantic information can contribute to covert recognition; and (c) provide a theoretical explanation for the mechanisms underlying covert face recognition.

  14. Accuracy enhanced thermal face recognition

    NASA Astrophysics Data System (ADS)

    Lin, Chun-Fu; Lin, Sheng-Fuu

    2013-11-01

    Human face recognition has been researched extensively over the last three decades. Face recognition with thermal images has gradually begun to attract significant attention, since the illumination of the environment does not affect recognition performance. However, the recognition performance of traditional thermal face recognizers is still insufficient in practical applications. This study presents a novel thermal face recognizer employing not only thermal features but also critical facial geometric features that are not influenced by hair style, to improve recognition performance. A three-layer back-propagation feed-forward neural network is applied as the classifier. Traditional thermal face recognizers use only indirect information about the topography of blood vessels, such as the thermogram, as features. To overcome this limitation, the proposed thermal face recognizer can use not only this indirect information but also direct information about the topography of blood vessels, which is unique to every human. Moreover, the recognition performance of the proposed thermal features does not decrease even if the hair over the forehead varies, the eyes blink, or the subject breathes through the nose. Experimental results show that the proposed features are significantly more effective than traditional thermal features and that the recognition performance of the thermal face recognizer is improved.
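
    The classifier named here, a three-layer back-propagation feed-forward network, can be sketched in plain NumPy. The "thermal plus geometric" feature vectors below are synthetic stand-ins, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in features: 6 measurements per face, two identities,
# well separated with some noise (hypothetical data).
X = np.vstack([rng.normal(0.0, 0.3, (50, 6)), rng.normal(1.0, 0.3, (50, 6))])
y = np.array([0] * 50 + [1] * 50).reshape(-1, 1)

# Three-layer feed-forward network (input -> hidden -> output), trained
# with plain batch back-propagation on a squared-error loss.
W1 = rng.normal(0, 0.5, (6, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    h = sigmoid(X @ W1 + b1)             # forward pass
    out = sigmoid(h @ W2 + b2)
    g_out = (out - y) * out * (1 - out)  # output-layer error signal
    g_h = (g_out @ W2.T) * h * (1 - h)   # back-propagated hidden error
    W2 -= 0.5 * h.T @ g_out / len(X); b2 -= 0.5 * g_out.mean(axis=0)
    W1 -= 0.5 * X.T @ g_h / len(X);   b1 -= 0.5 * g_h.mean(axis=0)

out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
accuracy = ((out > 0.5) == y).mean()
print("train accuracy:", accuracy)
```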

  15. Emotion-independent face recognition

    NASA Astrophysics Data System (ADS)

    De Silva, Liyanage C.; Esther, Kho G. P.

    2000-12-01

    Current face recognition techniques tend to work well when recognizing faces under small variations in lighting, facial expression and pose, but deteriorate under more extreme conditions. In this paper, a face recognition system to recognize faces of known individuals, despite variations in facial expression due to different emotions, is developed. The eigenface approach is used for feature extraction. Classification methods include Euclidean distance, back propagation neural network and generalized regression neural network. These methods yield 100% recognition accuracy when the training database is representative, containing one image representing the peak expression for each emotion of each person apart from the neutral expression. The feature vectors used for comparison in the Euclidean distance method and for training the neural network must be all the feature vectors of the training set. These results are obtained for a face database consisting of only four persons.
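
    The eigenface-plus-Euclidean-distance pipeline described above is straightforward to sketch (toy random data, not the paper's four-person database):

```python
import numpy as np

def eigenfaces(train, k):
    """PCA on vectorized face images; returns the mean face and the
    top-k principal components (the 'eigenfaces')."""
    mean = train.mean(axis=0)
    centered = train - mean
    # SVD of the centered data gives the principal axes directly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(faces, mean, components):
    return (faces - mean) @ components.T

def nearest_neighbor(query, gallery, labels):
    """Euclidean-distance classifier in eigenface space."""
    d = np.linalg.norm(gallery - query, axis=1)
    return labels[np.argmin(d)]

# Toy gallery: two "people", several noisy images each.
rng = np.random.default_rng(3)
people = rng.random((2, 64))           # one prototype face per person
train = np.vstack([p + 0.05 * rng.standard_normal((5, 64)) for p in people])
labels = np.array([0] * 5 + [1] * 5)
mean, comps = eigenfaces(train, k=4)
gallery = project(train, mean, comps)
probe = project(people[1] + 0.05 * rng.standard_normal(64), mean, comps)
print(nearest_neighbor(probe, gallery, labels))  # 1
```

    The neural-network classifiers mentioned in the abstract would replace the nearest-neighbor step while keeping the same eigenface feature vectors.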

  16. Face Processing: Models For Recognition

    NASA Astrophysics Data System (ADS)

    Turk, Matthew A.; Pentland, Alexander P.

    1990-03-01

    The human ability to process faces is remarkable. We can identify perhaps thousands of faces learned throughout our lifetime and read facial expression to understand such subtle qualities as emotion. These skills are quite robust, despite sometimes large changes in the visual stimulus due to expression, aging, and distractions such as glasses or changes in hairstyle or facial hair. Computers which model and recognize faces will be useful in a variety of applications, including criminal identification, human-computer interface, and animation. We discuss models for representing faces and their applicability to the task of recognition, and present techniques for identifying faces and detecting eye blinks.

  17. Lateralized processes in face recognition.

    PubMed

    Rhodes, G

    1985-05-01

    In this paper a model is presented in which face recognition is analysed into several stages, each of which may be independently lateralized. Evidence is reviewed which suggests that lateralization is important at all stages of processing a face. Early visuospatial processing, and the creation and comparison of facial representations, appear to be carried out more efficiently by the right hemisphere. Comparisons based on discrete, namable features of faces may yield a left hemisphere advantage. It is also proposed that faces may activate semantic information, including names, more efficiently in the left hemisphere. The model is useful in resolving inconsistencies in the degree and direction of asymmetries found in face-recognition tasks. Suggestions are also made for future research.

  18. Face recognition for uncontrolled environments

    NASA Astrophysics Data System (ADS)

    Podilchuk, Christine; Hulbert, William; Flachsbart, Ralph; Barinov, Lev

    2010-04-01

    A new face recognition algorithm has been proposed which is robust to variations in pose, expression, illumination and occlusions such as sunglasses. The algorithm is motivated by the Edit Distance used to determine the similarity between strings of one-dimensional data such as DNA and text. The key to this approach is how to extend the concept of an Edit Distance on one-dimensional data to two-dimensional image data. The algorithm is based on mapping one image into another and using the characteristics of the mapping to determine a two-dimensional Pictorial Edit Distance, or P-Edit Distance. We show how the properties of the mapping are similar to the insertion, deletion and substitution errors defined in an Edit Distance. This algorithm is particularly well suited for face recognition in uncontrolled environments such as stand-off and other surveillance applications. We describe an entire system designed for face recognition at a distance, including face detection, pose estimation, multi-sample fusion of video frames and identification. Here we describe how the algorithm is used for face recognition at a distance, present some initial results and describe future research directions.
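
    The two-dimensional P-Edit Distance itself is the paper's contribution; its one-dimensional ancestor, the classic Levenshtein edit distance over strings, is the standard dynamic program:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance: the minimum number of insertions,
    deletions and substitutions turning string a into string b."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i            # delete all of a's prefix
    for j in range(n + 1):
        d[0][j] = j            # insert all of b's prefix
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

print(edit_distance("kitten", "sitting"))  # 3
```

    The paper's generalization replaces symbol comparisons with properties of a mapping between two images, so that insertion, deletion and substitution errors acquire pictorial analogues.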

  19. Face Recognition Using Local Quantized Patterns and Gabor Filters

    NASA Astrophysics Data System (ADS)

    Khryashchev, V.; Priorov, A.; Stepanova, O.; Nikitin, A.

    2015-05-01

    The problem of face recognition in a natural or artificial environment has received a great deal of researchers' attention over the last few years. Many methods for accurate face recognition have been proposed. Nevertheless, these methods often fail to accurately recognize the person in difficult scenarios, e.g. low resolution, low contrast, pose variations, etc. We therefore propose an approach for accurate and robust face recognition using local quantized patterns and Gabor filters. Estimation of the eye centers is used as a preprocessing stage. The evaluation of our algorithm on different samples from the standardized FERET database shows that our method is invariant to general variations of lighting, expression, occlusion and aging. The proposed approach improves correct recognition accuracy by about 20% compared with the known face recognition algorithms from the OpenCV library. The additional use of Gabor filters can significantly improve the robustness to changes in lighting conditions.
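
    The Gabor filters mentioned here are Gaussian-windowed sinusoids; a minimal bank over several orientations can be built as follows (the parameter values are typical defaults, not necessarily the paper's):

```python
import numpy as np

def gabor_kernel(size=21, wavelength=6.0, theta=0.0, sigma=4.0, gamma=0.5):
    """Real part of a 2-D Gabor filter: a Gaussian envelope multiplied
    by a sinusoidal carrier oriented at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

# A bank of 8 orientations, as typically used for face features;
# each kernel is convolved with the image and the responses are pooled.
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 8, endpoint=False)]
print(len(bank), bank[0].shape)  # 8 (21, 21)
```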

  20. Face Recognition Incorporating Ancillary Information

    NASA Astrophysics Data System (ADS)

    Kim, Sang-Ki; Toh, Kar-Ann; Lee, Sangyoun

    2007-12-01

    Owing to vast variations in extrinsic and intrinsic imaging conditions, face recognition remains a challenging computer vision problem even today. This is particularly true when the passive imaging approach is considered for robust applications. To advance existing face recognition systems, numerous techniques and methods have been proposed to overcome the almost inevitable performance degradation due to external factors such as pose, expression, occlusion, and illumination. In particular, the recent part-based method has provided noticeable room for verification performance improvement, based on localized features which have good tolerance to variation in external conditions. The part-based method, however, does not reach its full performance without incorporating global information from the holistic method. In view of the need to fuse local and global information in an adaptive manner for reliable recognition, in this paper we investigate whether such external factors can be explicitly estimated and used to boost verification performance during fusion of the holistic and part-based methods. Our empirical evaluations show noticeable performance improvement with the proposed method.

  1. Recognition of Faces of Ingroup and Outgroup Children and Adults

    ERIC Educational Resources Information Center

    Corenblum, B.; Meissner, Christian A.

    2006-01-01

    People are often more accurate in recognizing faces of ingroup members than in recognizing faces of outgroup members. Although own-group biases in face recognition are well established among adults, less attention has been given to such biases among children. This is surprising considering how often children give testimony in criminal and civil…

  2. Unaware person recognition from the body when face identification fails.

    PubMed

    Rice, Allyson; Phillips, P Jonathon; Natu, Vaidehi; An, Xiaobo; O'Toole, Alice J

    2013-11-01

    How does one recognize a person when face identification fails? Here, we show that people rely on the body but are unaware of doing so. State-of-the-art face-recognition algorithms were used to select images of people with almost no useful identity information in the face. Recognition of the face alone in these cases was near chance level, but recognition of the person was accurate. Accuracy in identifying the person without the face was identical to that in identifying the whole person. Paradoxically, people reported relying heavily on facial features over noninternal face and body features in making their identity decisions. Eye movements indicated otherwise, with gaze duration and fixations shifting adaptively toward the body and away from the face when the body was a better indicator of identity than the face. This shift occurred with no cost to accuracy or response time. Human identity processing may be partially inaccessible to conscious awareness.

  3. Bayesian Face Recognition and Perceptual Narrowing in Face-Space

    ERIC Educational Resources Information Center

    Balas, Benjamin

    2012-01-01

    During the first year of life, infants' face recognition abilities are subject to "perceptual narrowing", the end result of which is that observers lose the ability to distinguish previously discriminable faces (e.g. other-race faces) from one another. Perceptual narrowing has been reported for faces of different species and different races, in…

  4. The neural speed of familiar face recognition.

    PubMed

    Barragan-Jason, G; Cauchoix, M; Barbeau, E J

    2015-08-01

    Rapidly recognizing familiar people from their faces appears critical for social interactions (e.g., to differentiate friend from foe). However, the actual speed at which the human brain can distinguish familiar from unknown faces still remains debated. In particular, it is not clear whether familiarity can be extracted from rapid face individualization or whether it requires additional time-consuming processing. We recorded scalp EEG activity in 28 subjects performing a go/no-go, famous/non-famous, unrepeated, face recognition task. Speed constraints were used to encourage subjects to use the earliest familiarity information available. Event-related potential (ERP) analyses show that both the N170 and the N250 components were modulated by familiarity. The N170 modulation was related to behaviour: subjects presenting the strongest N170 modulation were also faster but less accurate than those who showed only weak N170 modulation. A complementary Multi-Variate Pattern Analysis (MVPA) confirmed the ERP results and provided further insight into the dynamics of face recognition, as the N170 differential effect appeared to be related to a first transitory phase (a transitory bump of decoding power) starting at around 140 ms, which returned to baseline afterwards. This bump of activity was followed by an increase of decoding power starting around 200 ms after stimulus onset. Overall, our results suggest that rather than being a simple single process, familiarity for faces may rely on a cascade of neural processes, including a coarse and fast stage starting at 140 ms and a more refined but slower stage occurring after 200 ms. PMID:26100560
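
    Time-resolved decoding of the kind used in such MVPA analyses can be sketched with a leave-one-out nearest-centroid decoder on simulated epochs. Real studies typically use cross-validated linear classifiers on recorded EEG; this is a dependency-free stand-in on made-up data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated EEG: trials x channels x timepoints; a class-specific signal
# appears only after "stimulus processing" begins at timepoint 50.
n_trials, n_ch, n_t = 100, 16, 100
X = rng.standard_normal((n_trials, n_ch, n_t))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1, :, 50:] += 1.0  # hypothetical late familiarity signal

def decode_timecourse(X, y):
    """Leave-one-out nearest-centroid decoding accuracy per timepoint."""
    acc = np.zeros(X.shape[2])
    for t in range(X.shape[2]):
        correct = 0
        for i in range(len(y)):
            mask = np.arange(len(y)) != i  # hold out trial i
            c0 = X[mask & (y == 0), :, t].mean(axis=0)
            c1 = X[mask & (y == 1), :, t].mean(axis=0)
            pred = int(np.linalg.norm(X[i, :, t] - c1)
                       < np.linalg.norm(X[i, :, t] - c0))
            correct += pred == y[i]
        acc[t] = correct / len(y)
    return acc

acc = decode_timecourse(X, y)
# Decoding power stays at chance early and rises once the signal appears.
print(acc[:50].mean() < acc[50:].mean())  # True
```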

  6. Neural microgenesis of personally familiar face recognition

    PubMed Central

    Ramon, Meike; Vizioli, Luca; Liu-Shuang, Joan; Rossion, Bruno

    2015-01-01

    Despite a wealth of information provided by neuroimaging research, the neural basis of familiar face recognition in humans remains largely unknown. Here, we isolated the discriminative neural responses to unfamiliar and familiar faces by slowly increasing visual information (i.e., high-spatial frequencies) to progressively reveal faces of unfamiliar or personally familiar individuals. Activation in ventral occipitotemporal face-preferential regions increased with visual information, independently of long-term face familiarity. In contrast, medial temporal lobe structures (perirhinal cortex, amygdala, hippocampus) and anterior inferior temporal cortex responded abruptly when sufficient information for familiar face recognition was accumulated. These observations suggest that following detailed analysis of individual faces in core posterior areas of the face-processing network, familiar face recognition emerges categorically in medial temporal and anterior regions of the extended cortical face network. PMID:26283361

  9. Face recognition based on fringe pattern analysis

    NASA Astrophysics Data System (ADS)

    Guo, Hong; Huang, Peisen

    2010-03-01

    Two-dimensional face-recognition techniques suffer from facial texture and illumination variations. Although 3-D techniques can overcome these limitations, the reconstruction and storage expenses of 3-D information are extremely high. We present a novel face-recognition method that directly utilizes 3-D information encoded in face fringe patterns without having to reconstruct 3-D geometry. In the proposed method, a digital video projector is employed to sequentially project three phase-shifted sinusoidal fringe patterns onto the subject's face. Meanwhile, a camera is used to capture the distorted fringe patterns from an offset angle. Afterward, the face fringe images are analyzed by the phase-shifting method and the Fourier transform method to obtain a spectral representation of the 3-D face. Finally, the eigenface algorithm is applied to the face-spectrum images to perform face recognition. Simulation and experimental results demonstrate that the proposed method achieved satisfactory recognition rates with reduced computational complexity and storage expenses.
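
    The final stage described above, the eigenface algorithm, can be sketched in a few lines. This is a minimal illustration assuming each gallery face (here, a face-spectrum image) is flattened to a vector; the function names and the number of components are our own choices, not the authors':

```python
import numpy as np

def train_eigenfaces(gallery, n_components=4):
    # PCA on flattened gallery images (one row per face) via SVD,
    # which avoids forming the covariance matrix explicitly.
    mean = gallery.mean(axis=0)
    centered = gallery - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]          # the "eigenfaces"
    weights = centered @ basis.T       # projection of each gallery face
    return mean, basis, weights

def identify(probe, mean, basis, weights):
    # Nearest neighbour in eigenface space.
    w = (probe - mean) @ basis.T
    return int(np.argmin(np.linalg.norm(weights - w, axis=1)))
```

    A probe drawn from the gallery projects onto exactly its stored weights, so it is identified as itself; unseen probes are assigned to the nearest gallery identity.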

  10. The hierarchical brain network for face recognition.

    PubMed

    Zhen, Zonglei; Fang, Huizhen; Liu, Jia

    2013-01-01

    Numerous functional magnetic resonance imaging (fMRI) studies have identified multiple cortical regions that are involved in face processing in the human brain. However, few studies have characterized the face-processing network as a functioning whole. In this study, we used fMRI to identify face-selective regions in the entire brain and then explore the hierarchical structure of the face-processing network by analyzing functional connectivity among these regions. We identified twenty-five regions mainly in the occipital, temporal and frontal cortex that showed a reliable response selective to faces (versus objects) across participants and across scan sessions. Furthermore, these regions were clustered into three relatively independent sub-networks in a face-recognition task on the basis of the strength of functional connectivity among them. The functionality of the sub-networks likely corresponds to the recognition of individual identity, retrieval of semantic knowledge and representation of emotional information. Interestingly, when the task was switched to object recognition from face recognition, the functional connectivity between the inferior occipital gyrus and the rest of the face-selective regions were significantly reduced, suggesting that this region may serve as an entry node in the face-processing network. In sum, our study provides empirical evidence for cognitive and neural models of face recognition and helps elucidate the neural mechanisms underlying face recognition at the network level.

  11. Face aftereffects predict individual differences in face recognition ability.

    PubMed

    Dennett, Hugh W; McKone, Elinor; Edwards, Mark; Susilo, Tirta

    2012-01-01

    Face aftereffects are widely studied on the assumption that they provide a useful tool for investigating face-space coding of identity. However, a long-standing issue concerns the extent to which face aftereffects originate in face-level processes as opposed to earlier stages of visual processing. For example, some recent studies failed to find atypical face aftereffects in individuals with clinically poor face recognition. We show that in individuals within the normal range of face recognition abilities, there is an association between face memory ability and a figural face aftereffect that is argued to reflect the steepness of broadband-opponent neural response functions in underlying face-space. We further show that this correlation arises from face-level processing, by reporting results of tests of nonface memory and nonface aftereffects. We conclude that face aftereffects can tap high-level face-space, and that face-space coding differs in quality between individuals and contributes to face recognition ability.

  12. Face recognition increases during saccade preparation.

    PubMed

    Lin, Hai; Rizak, Joshua D; Ma, Yuan-ye; Yang, Shang-chuan; Chen, Lin; Hu, Xin-tian

    2014-01-01

    Face perception is integral to the human perceptual system, as it underlies social interactions. Saccadic eye movements are frequently made to bring interesting visual information, such as faces, onto the fovea for detailed processing. Just before eye movement onset, the processing of some basic features of an object, such as its orientation, improves at the saccade landing point. Interestingly, there is also evidence indicating that faces are processed in early visual processing stages, similar to basic features. However, it is not known whether this early enhancement of processing includes face recognition. In this study, three experiments were performed that mapped the timing of face presentation to the beginning of the eye movement in order to evaluate pre-saccadic face recognition. Faces were found to be processed similarly to simple objects immediately prior to saccadic movements. Starting ∼120 ms before a saccade to a target face, independent of whether or not the face was surrounded by other faces, face recognition gradually improved and the critical spacing of the crowding decreased as saccade onset approached. These results suggest that an upcoming saccade prepares the visual system for new information about faces at the saccade landing site and may reduce the background in a crowd to target the intended face. This indicates an important role of pre-saccadic eye movement signals in human face recognition.

  13. Partial face recognition: alignment-free approach.

    PubMed

    Liao, Shengcai; Jain, Anil K; Li, Stan Z

    2013-05-01

    Numerous methods have been developed for holistic face recognition with impressive performance. However, few studies have tackled how to recognize an arbitrary patch of a face image. Partial faces frequently appear in unconstrained scenarios, with images captured by surveillance cameras or handheld devices (e.g., mobile phones) in particular. In this paper, we propose a general partial face recognition approach that does not require face alignment by eye coordinates or any other fiducial points. We develop an alignment-free face representation method based on Multi-Keypoint Descriptors (MKD), where the descriptor size of a face is determined by the actual content of the image. In this way, any probe face image, holistic or partial, can be sparsely represented by a large dictionary of gallery descriptors. A new keypoint descriptor called Gabor Ternary Pattern (GTP) is also developed for robust and discriminative face recognition. Experimental results are reported on four public domain face databases (FRGCv2.0, AR, LFW, and PubFig) under both the open-set identification and verification scenarios. Comparisons with two leading commercial face recognition SDKs (PittPatt and FaceVACS) and two baseline algorithms (PCA+LDA and LBP) show that the proposed method, overall, is superior in recognizing both holistic and partial faces without requiring alignment. PMID:23520259

  15. Extraversion predicts individual differences in face recognition.

    PubMed

    Li, Jingguang; Tian, Moqian; Fang, Huizhen; Xu, Miao; Li, He; Liu, Jia

    2010-07-01

    In daily life, one of the most common social tasks we perform is to recognize faces. However, the relation between face recognition ability and social activities is largely unknown. Here we ask whether individuals with better social skills are also better at recognizing faces. We found that extraverts who have better social skills correctly recognized more faces than introverts. However, this advantage was absent when extraverts were asked to recognize non-social stimuli (e.g., flowers). In particular, the underlying facet that makes extraverts better face recognizers is the gregariousness facet that measures the degree of inter-personal interaction. In addition, the link between extraversion and face recognition ability was independent of general cognitive abilities. These findings provide the first evidence that links face recognition ability to our daily activity in social communication, supporting the hypothesis that extraverts are better at decoding social information than introverts.

  16. Recognition of Unfamiliar Talking Faces at Birth

    ERIC Educational Resources Information Center

    Coulon, Marion; Guellai, Bahia; Streri, Arlette

    2011-01-01

    Sai (2005) investigated the role of speech in newborns' recognition of their mothers' faces. Her results revealed that, when presented with both their mother's face and that of a stranger, newborns preferred looking at their mother only if she had previously talked to them. The present study attempted to extend these findings to any other faces.…

  17. Face recognition system and method using face pattern words and face pattern bytes

    SciTech Connect

    Zheng, Yufeng

    2014-12-23

    The present invention provides a novel system and method for identifying individuals and for face recognition utilizing facial features for face identification. The system and method of the invention comprise creating facial features or face patterns, called face pattern words and face pattern bytes, for face identification. The invention also provides for pattern recognition for identification other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals by utilizing computer software implemented by instructions on a computer or computer system and a computer-readable medium containing instructions on a computer system for face recognition and identification.

  18. Contextual Modulation of Biases in Face Recognition

    PubMed Central

    Felisberti, Fatima Maria; Pavey, Louisa

    2010-01-01

    Background The ability to recognize the faces of potential cooperators and cheaters is fundamental to social exchanges, given that cooperation for mutual benefit is expected. Studies addressing biases in face recognition have so far proved inconclusive, with reports of biases towards faces of cheaters, biases towards faces of cooperators, or no biases at all. This study attempts to uncover possible causes underlying such discrepancies. Methodology and Findings Four experiments were designed to investigate biases in face recognition during social exchanges when behavioral descriptors (prosocial, antisocial or neutral) embedded in different scenarios were tagged to faces during memorization. Face recognition, measured as accuracy and response latency, was tested with modified yes-no, forced-choice and recall tasks (N = 174). An enhanced recognition of faces tagged with prosocial descriptors was observed when the encoding scenario involved financial transactions and the rules of the social contract were not explicit (experiments 1 and 2). Such bias was eliminated or attenuated by making participants explicitly aware of “cooperative”, “cheating” and “neutral/indifferent” behaviors via a pre-test questionnaire and then adding such tags to behavioral descriptors (experiment 3). Further, in a social judgment scenario with descriptors of salient moral behaviors, recognition of antisocial and prosocial faces was similar, but significantly better than neutral faces (experiment 4). Conclusion The results highlight the relevance of descriptors and scenarios of social exchange in face recognition, when the frequency of prosocial and antisocial individuals in a group is similar. Recognition biases towards prosocial faces emerged when descriptors did not state the rules of a social contract or the moral status of a behavior, and they point to the existence of broad and flexible cognitive abilities finely tuned to minor changes in social context. PMID:20886086

  19. Bayesian face recognition and perceptual narrowing in face-space.

    PubMed

    Balas, Benjamin

    2012-07-01

    During the first year of life, infants' face recognition abilities are subject to 'perceptual narrowing', the end result of which is that observers lose the ability to distinguish previously discriminable faces (e.g. other-race faces) from one another. Perceptual narrowing has been reported for faces of different species and different races, in developing humans and primates. Though the phenomenon is highly robust and replicable, there have been few efforts to model the emergence of perceptual narrowing as a function of the accumulation of experience with faces during infancy. The goal of the current study is to examine how perceptual narrowing might manifest as statistical estimation in 'face-space', a geometric framework for describing face recognition that has been successfully applied to adult face perception. Here, I use a computer vision algorithm for Bayesian face recognition to study how the acquisition of experience in face-space and the presence of race categories affect performance for own and other-race faces. Perceptual narrowing follows from the establishment of distinct race categories, suggesting that the acquisition of category boundaries for race is a key computational mechanism in developing face expertise.

  20. [Big five personality factors related to face recognition].

    PubMed

    Saito, Takako; Nakamura, Tomoyasu; Endo, Toshihiko

    2005-02-01

    The present study examined whether scores on the big five personality factors correlated with face-recognition response time in a visual search paradigm. Sixty adjectives were used to measure personality scores of 60 participants along the five factors of Extroversion, Neuroticism, Openness to Experience, Agreeableness, and Conscientiousness. Pictures of human faces or geometrical figures in a 4 × 4 array were used as stimuli. The sixteen faces or figures were either all identical (target-absent condition) or comprised one randomly placed target among 15 identical distracters (target-present condition). Participants were asked to respond 'present' or 'absent' as quickly and accurately as possible. Results showed that the response time differed significantly between high and low groups on each personality factor except Agreeableness. For Extroversion, Neuroticism, and Conscientiousness, the response time difference was observed only for human face recognition. The results suggested that personality differences and face recognition are related. PMID:15782589

  1. Real-time, face recognition technology

    SciTech Connect

    Brady, S.

    1995-11-01

    The Institute for Scientific Computing Research (ISCR) at Lawrence Livermore National Laboratory recently developed the real-time, face recognition technology KEN. KEN uses novel imaging devices such as silicon retinas developed at Caltech or off-the-shelf CCD cameras to acquire images of a face and to compare them to a database of known faces in a robust fashion. The KEN-Online project makes that recognition technology accessible through the World Wide Web (WWW), an internet service that has recently seen explosive growth. A WWW client can submit face images, add them to the database of known faces and submit other pictures that the system tries to recognize. KEN-Online serves to evaluate the recognition technology and grow a large face database. KEN-Online includes the use of public domain tools such as mSQL for its name-database and perl scripts to assist the uploading of images.

  2. [Face recognition in patients with schizophrenia].

    PubMed

    Doi, Hirokazu; Shinohara, Kazuyuki

    2012-07-01

    It is well known that patients with schizophrenia show severe deficiencies in social communication skills. These deficiencies are believed to be partly derived from abnormalities in face recognition. However, the exact nature of these abnormalities exhibited by schizophrenic patients with respect to face recognition has yet to be clarified. In the present paper, we review the main findings on face recognition deficiencies in patients with schizophrenia, particularly focusing on abnormalities in the recognition of facial expression and gaze direction, which are the primary sources of information of others' mental states. The existing studies reveal that the abnormal recognition of facial expression and gaze direction in schizophrenic patients is attributable to impairments in both perceptual processing of visual stimuli, and cognitive-emotional responses to social information. Furthermore, schizophrenic patients show malfunctions in distributed neural regions, ranging from the fusiform gyrus recruited in the structural encoding of facial stimuli, to the amygdala which plays a primary role in the detection of the emotional significance of stimuli. These findings were obtained from research in patient groups with heterogeneous characteristics. Because previous studies have indicated that impairments in face recognition in schizophrenic patients might vary according to the types of symptoms, it is of primary importance to compare the nature of face recognition deficiencies and the impairments of underlying neural functions across sub-groups of patients.

  3. Very low resolution face recognition problem.

    PubMed

    Zou, Wilman W W; Yuen, Pong C

    2012-01-01

    This paper addresses the very low resolution (VLR) problem in face recognition, in which the resolution of the face image to be recognized is lower than 16 × 16 pixels. With the increasing demand for surveillance-camera-based applications, the VLR problem arises in many face application systems. Existing face recognition algorithms are not able to give satisfactory performance on VLR face images. While face super-resolution (SR) methods can be employed to enhance the resolution of the images, existing learning-based face SR methods do not perform well on such VLR face images. To overcome this problem, this paper proposes a novel approach to learn the relationship between the high-resolution image space and the VLR image space for face SR. Based on this new approach, two constraints, namely, new data and discriminative constraints, are designed for good visuality and face recognition applications under the VLR problem, respectively. Experimental results show that the proposed SR algorithm based on relationship learning outperforms the existing algorithms on public face databases. PMID:21775262

  4. The own-age face recognition bias is task dependent.

    PubMed

    Proietti, Valentina; Macchi Cassia, Viola; Mondloch, Catherine J

    2015-08-01

    The own-age bias (OAB) in face recognition (more accurate recognition of own-age than other-age faces) is robust among young adults but not older adults. We investigated the OAB under two different task conditions. In Experiment 1 young and older adults (who reported more recent experience with own than other-age faces) completed a match-to-sample task with young and older adult faces; only young adults showed an OAB. In Experiment 2 young and older adults completed an identity detection task in which we manipulated the identity strength of target and distracter identities by morphing each face with an average face in 20% steps. Accuracy increased with identity strength and facial age influenced older adults' (but not younger adults') strategy, but there was no evidence of an OAB. Collectively, these results suggest that the OAB depends on task demands and may be absent when searching for one identity.

  5. Face recognition using ensemble string matching.

    PubMed

    Chen, Weiping; Gao, Yongsheng

    2013-12-01

    In this paper, we present a syntactic string matching approach to solve the frontal face recognition problem. String matching is a powerful partial matching technique, but is not suitable for frontal face recognition due to its requirement of globally sequential representation and the complex nature of human faces, containing discontinuous and non-sequential features. Here, we build a compact syntactic Stringface representation, which is an ensemble of strings. A novel ensemble string matching approach that can perform non-sequential string matching between two Stringfaces is proposed. It is invariant to the sequential order of strings and the direction of each string. The embedded partial matching mechanism enables our method to automatically use every piece of non-occluded region, regardless of shape, in the recognition process. The encouraging results demonstrate the feasibility and effectiveness of using syntactic methods for face recognition from a single exemplar image per person, breaking the barrier that prevents string matching techniques from being used for addressing complex image recognition problems. The proposed method not only achieved significantly better performance in recognizing partially occluded faces, but also showed its ability to perform direct matching between sketch faces and photo faces.
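
    The order- and direction-invariance described above can be illustrated with a toy sketch. This uses difflib's similarity ratio as a stand-in for the paper's syntactic string matching; the function names and scoring scheme are ours, not the authors':

```python
from difflib import SequenceMatcher

def string_sim(a, b):
    # Direction-invariant similarity between two feature strings:
    # compare against the string and its reversal, keep the best.
    forward = SequenceMatcher(None, a, b).ratio()
    backward = SequenceMatcher(None, a, b[::-1]).ratio()
    return max(forward, backward)

def stringface_sim(face_a, face_b):
    # Order-invariant ensemble similarity: each string in face_a is
    # scored against its best-matching counterpart in face_b, so
    # shuffling or reversing strings does not change the result.
    return sum(max(string_sim(s, t) for t in face_b) for s in face_a) / len(face_a)
```

    Because each string seeks its own best partner, occluded or missing strings in one ensemble only reduce the score locally rather than breaking a global sequential alignment.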

  6. Specialized face learning is associated with individual recognition in paper wasps.

    PubMed

    Sheehan, Michael J; Tibbetts, Elizabeth A

    2011-12-01

    We demonstrate that the evolution of facial recognition in wasps is associated with specialized face-learning abilities. Polistes fuscatus can differentiate among normal wasp face images more rapidly and accurately than nonface images or manipulated faces. A close relative lacking facial recognition, Polistes metricus, however, lacks specialized face learning. Similar specializations for face learning are found in primates and other mammals, although P. fuscatus represents an independent evolution of specialization. Convergence toward face specialization in distant taxa as well as divergence among closely related taxa with different recognition behavior suggests that specialized cognition is surprisingly labile and may be adaptively shaped by species-specific selective pressures such as face recognition.

  7. A novel thermal face recognition approach using face pattern words

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng

    2010-04-01

    A reliable thermal face recognition system can enhance national security applications such as prevention against terrorism, surveillance, monitoring and tracking, especially at nighttime. The system can be applied at airports, customs or high-alert facilities (e.g., a nuclear power plant) 24 hours a day. In this paper, we propose a novel face recognition approach utilizing thermal (long-wave infrared) face images that can automatically identify a subject at both daytime and nighttime. With a properly acquired thermal image (as a query image) in the monitoring zone, the following processes are employed: normalization and denoising, face detection, face alignment, face masking, Gabor wavelet transform, face pattern word (FPW) creation, and face identification by similarity measure (Hamming distance). If eyeglasses are present on a subject's face, an eyeglasses mask is automatically extracted from the query face image and then applied to all FPWs being compared (no further transforms). A high identification rate (97.44% with Top-1 match) has been achieved with the proposed approach on our preliminary face dataset (39 subjects), regardless of operating time and glasses-wearing condition.
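
    The masked Hamming-distance comparison can be sketched as follows. This is a hypothetical simplification of our own: the actual Gabor-based FPW encoding is not reproduced, and the mask semantics (1 = compare, 0 = skip) are an assumption:

```python
def masked_hamming_similarity(fpw_a, fpw_b, mask=None):
    """Fraction of matching bits between two binary face pattern words.
    Bit positions where mask is 0 (e.g., an eyeglasses region) are
    excluded from the comparison."""
    if mask is None:
        mask = [1] * len(fpw_a)
    compared = sum(mask)
    matches = sum(m and a == b for a, b, m in zip(fpw_a, fpw_b, mask))
    return matches / compared
```

    Masking out the eyeglasses region means a subject is scored only on the facial bits that are visible in both images, which is why glasses-wearing does not penalize the match.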

  8. Recognition of own-race and other-race caricatures: implications for models of face recognition.

    PubMed

    Byatt, G; Rhodes, G

    1998-08-01

    Valentine's (Valentine T. Q J Exp Psychol 1991;43A:161-204) face recognition framework supports both a norm-based coding model (NBC) and an exemplar-only, absolute-coding model (ABC). According to NBC: (1) faces are represented in terms of deviations from a prototype or norm; (2) caricatures are effective because they exaggerate this norm-deviation information; and (3) other-race faces are coded relative to the (only available) own-race norm. Therefore NBC predicts that, for European subjects, caricatures of Chinese faces made by distorting differences from the European norm would be more effective than caricatures made relative to the Chinese norm. According to ABC: (1) faces are encoded as absolute values on a set of shared dimensions, with the norm playing no role in recognition; (2) caricatures are effective because they minimise exemplar density; and (3) the dimensions of face-space are inappropriate for other-race faces, leaving them relatively densely clustered. ABC predicts that all faces would be recognised more accurately when caricatured against their own-race norm. We tested European subjects' identification of European and Chinese faces, caricatured against both race norms. The ABC model's prediction was supported. European faces were also rated as more distinctive and recognised more easily than Chinese faces. However, the own-race recognition bias held even when the races were equated for distinctiveness, which suggests that the ABC model may not provide a complete account of race effects in recognition.
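
    The norm-deviation manipulation at the heart of the caricature paradigm can be written in a few lines. This is a schematic of our own, treating a face as a vector of landmark coordinates, with the exaggeration factor k chosen for illustration:

```python
def caricature(face, norm, k=1.5):
    # Exaggerate each feature's deviation from the norm (average)
    # face by a factor k: k > 1 caricatures, 0 < k < 1
    # anti-caricatures, and k = 1 returns the veridical face.
    return [n + k * (f - n) for f, n in zip(face, norm)]
```

    The two models differ only in which `norm` is supplied: the own-race average (always, under NBC) or the face's own-race average (under ABC).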

  9. Face-space: A unifying concept in face recognition research.

    PubMed

    Valentine, Tim; Lewis, Michael B; Hills, Peter J

    2016-10-01

    The concept of a multidimensional psychological space, in which faces can be represented according to their perceived properties, is fundamental to the modern theorist in face processing. Yet the idea was not clearly expressed until 1991. The background that led to the development of face-space is explained, and its continuing influence on theories of face processing is discussed. Research that has explored the properties of the face-space and sought to understand caricature, including facial adaptation paradigms, is reviewed. Face-space as a theoretical framework for understanding the effect of ethnicity and the development of face recognition is evaluated. Finally, two applications of face-space in the forensic setting are discussed. From initially being presented as a model to explain distinctiveness, inversion, and the effect of ethnicity, face-space has become a central pillar in many aspects of face processing. It is currently being developed to help us understand adaptation effects with faces. While being in principle a simple concept, face-space has shaped, and continues to shape, our understanding of face perception.

  10. Robust kernel collaborative representation for face recognition

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong

    2015-05-01

    One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to capture the varieties of high-dimensional face images caused by illumination, facial expression, and posture. When the test sample is significantly different from the training samples of the same subject, recognition performance is sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We reason that the virtual training set conveys some plausible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. To further improve robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training samples can be used in our method. We use noised face images to obtain virtual face samples; the noise can be approximately viewed as a reflection of the varieties of illumination, facial expression, and posture. Our work offers a simple and feasible way to obtain virtual face samples: Gaussian noise (or other types of noise) is imposed on the original training samples to obtain possible variations of them. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and kernel CRC.
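
    The virtual-sample step can be sketched as follows. This is a minimal illustration; the noise level `sigma` and the number of copies are free parameters of our choosing, not values from the paper:

```python
import random

def make_virtual_samples(training_set, copies=2, sigma=0.05, seed=0):
    """Create noisy 'virtual' copies of each (vector, label) pair to
    approximate variations in illumination, expression, and pose."""
    rng = random.Random(seed)
    virtual = []
    for sample, label in training_set:
        for _ in range(copies):
            noisy = [x + rng.gauss(0.0, sigma) for x in sample]
            virtual.append((noisy, label))
    return virtual
```

    The augmented set (originals plus virtual copies) then feeds the collaborative representation, so a test face that deviates from its subject's originals may still be well represented by a nearby virtual copy.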

  11. Self-face recognition in social context.

    PubMed

    Sugiura, Motoaki; Sassa, Yuko; Jeong, Hyeonjeong; Wakusawa, Keisuke; Horie, Kaoru; Sato, Shigeru; Kawashima, Ryuta

    2012-06-01

    The concept of "social self" is often described as a representation of the self-reflected in the eyes or minds of others. Although the appearance of one's own face has substantial social significance for humans, neuroimaging studies have failed to link self-face recognition and the likely neural substrate of the social self, the medial prefrontal cortex (MPFC). We assumed that the social self is recruited during self-face recognition under a rich social context where multiple other faces are available for comparison of social values. Using functional magnetic resonance imaging (fMRI), we examined the modulation of neural responses to the faces of the self and of a close friend in a social context. We identified an enhanced response in the ventral MPFC and right occipitoparietal sulcus in the social context specifically for the self-face. Neural response in the right lateral parietal and inferior temporal cortices, previously claimed as self-face-specific, was unaffected for the self-face but unexpectedly enhanced for the friend's face in the social context. Self-face-specific activation in the pars triangularis of the inferior frontal gyrus, and self-face-specific reduction of activation in the left middle temporal gyrus and the right supramarginal gyrus, replicating a previous finding, were not subject to such modulation. Our results thus demonstrated the recruitment of a social self during self-face recognition in the social context. At least three brain networks for self-face-specific activation may be dissociated by different patterns of response-modulation in the social context, suggesting multiple dynamic self-other representations in the human brain.

  12. About-face on face recognition ability and holistic processing.

    PubMed

    Richler, Jennifer J; Floyd, R Jackie; Gauthier, Isabel

    2015-01-01

    Previous work found a small but significant relationship between holistic processing measured with the composite task and face recognition ability measured by the Cambridge Face Memory Test (CFMT; Duchaine & Nakayama, 2006). Surprisingly, recent work using a different measure of holistic processing (Vanderbilt Holistic Face Processing Test [VHPT-F]; Richler, Floyd, & Gauthier, 2014) and a larger sample found no evidence for such a relationship. In Experiment 1 we replicate this unexpected result, finding no relationship between holistic processing (VHPT-F) and face recognition ability (CFMT). A key difference between the VHPT-F and other holistic processing measures is that unique face parts are used on each trial in the VHPT-F, unlike in other tasks where a small set of face parts repeat across the experiment. In Experiment 2, we test the hypothesis that correlations between the CFMT and holistic processing tasks are driven by stimulus repetition that allows for learning during the composite task. Consistent with our predictions, CFMT performance was correlated with holistic processing in the composite task when a small set of face parts repeated over trials, but not when face parts did not repeat. A meta-analysis confirms that relationships between the CFMT and holistic processing depend on stimulus repetition. These results raise important questions about what is being measured by the CFMT, and challenge current assumptions about why faces are processed holistically.

  13. Hyperspectral face recognition under variable outdoor illumination

    NASA Astrophysics Data System (ADS)

    Pan, Zhihong; Healey, Glenn E.; Prasad, Manish; Tromberg, Bruce J.

    2004-08-01

    We examine the performance of illumination-invariant face recognition in outdoor hyperspectral images using a database of 200 subjects. The hyperspectral camera acquires 31 bands over the 700-1000nm spectral range. Faces are represented by local spectral information for several tissue types. Illumination variation is modeled by low-dimensional spectral radiance subspaces. Invariant subspace projection over multiple tissue types is used for recognition. The experiments consider various face orientations and expressions. The analysis includes experiments for images synthesized using face reflectance images of 200 subjects and a database of over 7,000 outdoor illumination spectra. We also consider experiments that use a set of face images that were acquired under outdoor illumination conditions.

  14. Video face recognition against a watch list

    NASA Astrophysics Data System (ADS)

    Abbas, Jehanzeb; Dagli, Charlie K.; Huang, Thomas S.

    2007-10-01

    Due to the recent large increase in video surveillance data, collected in an effort to maintain high security at public places, more robust systems are needed to analyze this data and make tasks like face recognition a realistic possibility in challenging environments. In this paper we explore a watch-list scenario in which an appearance-based model classifies query faces from low-resolution videos as either watch-list or non-watch-list faces, where the watch-list comprises those people we are interested in recognizing. We then use our simple yet powerful face recognition system to recognize the faces classified as watch-list faces. Our system uses simple feature machine algorithms from our previous work to match video faces against still images. To test our approach, we match video faces against a large database of still images collected from Yahoo News over a period of time in a previous work in the field. We perform this matching efficiently, yielding a faster, nearly real-time system. This system can be incorporated into a larger surveillance system equipped with advanced algorithms for anomalous event detection and activity recognition. This is a step towards more secure and robust surveillance systems and efficient video data analysis.

  15. Cellular Phone Face Recognition System Based on Optical Phase Correlation

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Sayuri; Ohta, Maiko; Kodate, Kashiko

    We propose a high-security facial recognition system using a cellular phone on the mobile network. The system is composed of a face recognition engine based on optical phase correlation, which uses phase information with emphasis on the Fourier domain; a control server; and the cellular phone, with a compact camera for taking pictures, as a portable terminal. Compared with various correlation methods, our face recognition engine achieved the most accurate EER, of less than 1%. Using a JAVA interface, we implemented a stable picture-taking system with functions to prevent spoofing while transferring images. The recognition system was tested on 300 female students and the results proved it effective.
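    The optical correlator itself is hardware, but the phase-emphasis idea can be imitated digitally with phase-only correlation: keep only the phase of the cross-power spectrum, and a genuine match produces a sharp peak whose location gives the shift between the images. A minimal NumPy sketch (names invented for the example):

```python
import numpy as np

def phase_correlation(a, b):
    """Inverse FFT of the normalized cross-power spectrum of two
    equal-sized images: magnitude is discarded, phase is kept."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12       # phase-only normalization
    return np.real(np.fft.ifft2(cross))

rng = np.random.default_rng(1)
face = rng.random((32, 32))
shifted = np.roll(face, shift=(3, 5), axis=(0, 1))
poc = phase_correlation(shifted, face)
peak = np.unravel_index(np.argmax(poc), poc.shape)
print(peak)                              # (3, 5): the circular shift
```

    A mismatched pair produces no dominant peak, which is what makes the peak height usable as a match score.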

  16. FaceID: A face detection and recognition system

    SciTech Connect

    Shah, M.B.; Rao, N.S.V.; Olman, V.; Uberbacher, E.C.; Mann, R.C.

    1996-12-31

    A face detection system that automatically locates faces in gray-level images is described, together with a system that matches a given face image against faces in a database. Face detection in an image is performed by template matching, using templates derived from a selected set of normalized faces. Instead of the original gray-level images, vertical gradient images are calculated and used to make the system more robust against variations in lighting conditions and skin color. Faces of different sizes are detected by processing the image at several scales. Further, a coarse-to-fine strategy is used to speed up processing, and a combination of whole-face and face-component templates is used to ensure low false detection rates. The input to the face recognition system is a normalized vertical gradient image of a face, which is compared against a database using a set of pretrained feedforward neural networks with a winner-take-all fuser. Training is performed using an adaptation of the backpropagation algorithm. The system has been developed and tested using images from the FERET database and sets of images obtained from Rowley et al. and from Sung and Poggio.
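    The vertical-gradient preprocessing reduces to a row-wise difference; because a constant illumination offset cancels in the subtraction, the representation is insensitive to overall brightness. A minimal sketch:

```python
import numpy as np

def vertical_gradient(img):
    """Forward difference down the columns: emphasizes horizontal
    structure (eyebrows, eyes, mouth) and cancels constant offsets."""
    g = np.zeros_like(img, dtype=float)
    g[:-1, :] = img[1:, :] - img[:-1, :]
    return g

img = np.array([[10., 10., 10.],
                [10., 10., 10.],
                [50., 50., 50.]])        # bright band at the bottom
print(vertical_gradient(img))            # nonzero only at the band edge
```

    Brightening the whole image leaves the gradient unchanged, which is the robustness property the system relies on.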

  17. Influence of motion on face recognition.

    PubMed

    Bonfiglio, Natale S; Manfredi, Valentina; Pessa, Eliano

    2012-02-01

    The influence of motion information and temporal associations on recognition of non-familiar faces was investigated in two groups performing a face recognition task. One group was presented with regular temporal sequences of face views designed to produce the impression of the face rotating in depth; the other group saw random sequences of the same views. In one condition, participants viewed the sequences in rapid succession with a negligible interstimulus interval (ISI), at three different presentation times. In another condition, participants were presented the sequences with a 1-sec. ISI between views. Regular sequences of views with a negligible ISI and a shorter presentation time were hypothesized to give rise to better recognition, reflecting a stronger impression of face rotation. Analysis of data from 45 participants showed that a shorter presentation time was associated with significantly better accuracy on the recognition task; however, differences between performance on regular and random sequences were not significant.

  18. Holistic face processing can inhibit recognition of forensic facial composites.

    PubMed

    McIntyre, Alex H; Hancock, Peter J B; Frowd, Charlie D; Langton, Stephen R H

    2016-04-01

    Facial composite systems help eyewitnesses to show the appearance of criminals. However, likenesses created by unfamiliar witnesses will not be completely accurate, and people familiar with the target can find them difficult to identify. Faces are processed holistically; we explore whether this impairs identification of inaccurate composite images and whether recognition can be improved. In Experiment 1 (n = 64) an imaging technique was used to make composites of celebrity faces more accurate and identification was contrasted with the original composite images. Corrected composites were better recognized, confirming that errors in production of the likenesses impair identification. The influence of holistic face processing was explored by misaligning the top and bottom parts of the composites (cf. Young, Hellawell, & Hay, 1987). Misalignment impaired recognition of corrected composites but identification of the original, inaccurate composites significantly improved. This effect was replicated with facial composites of noncelebrities in Experiment 2 (n = 57). We conclude that, like real faces, facial composites are processed holistically: recognition is impaired because unlike real faces, composites contain inaccuracies and holistic face processing makes it difficult to perceive identifiable features. This effect was consistent across composites of celebrities and composites of people who are personally familiar. Our findings suggest that identification of forensic facial composites can be enhanced by presenting composites in a misaligned format.

  19. A novel orientation code for face recognition

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng

    2011-06-01

    A novel orientation code is proposed for face recognition applications. The Gabor wavelet transform is a common tool for orientation analysis in a 2D image, while the Hamming distance is an efficient distance measure for multiclass classification tasks such as face identification. Specifically, at each frequency band, an index number representing the strongest orientational response is selected and then encoded in binary format to facilitate the Hamming distance calculation. Multiple-band orientation codes are then organized into a face pattern byte (FPB) using order statistics. With the FPB, Hamming distances are calculated and compared to achieve face identification. The FPB has a dimensionality of 8 bits per pixel, and its performance is compared with that of the face pattern word (FPW, 32 bits per pixel). The dimensionality of the FPB can be further reduced to 4 bits per pixel, called the face pattern nibble (FPN). Experimental results on visible and thermal face databases show that the proposed orientation code is very promising compared with classical methods such as PCA.
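    The encoding can be sketched with toy numbers: per pixel, take the index of the strongest of eight orientation responses and compare codes with the bitwise Hamming distance. (The actual FPB additionally combines several frequency bands via order statistics; the response values below are invented.)

```python
def orientation_code(responses):
    """Index (0-7) of the strongest of eight orientation responses;
    three bits per pixel in this simplified single-band sketch."""
    return max(range(len(responses)), key=lambda i: responses[i])

def hamming(a, b):
    """Bitwise Hamming distance between two equal-length code lists."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

# Two toy "faces", each a list of per-pixel orientation responses.
face_a = [[0.1, 0.9, 0.2, 0.0, 0.3, 0.1, 0.0, 0.2],
          [0.7, 0.1, 0.0, 0.2, 0.1, 0.0, 0.3, 0.1]]
face_b = [[0.2, 0.8, 0.1, 0.1, 0.2, 0.0, 0.1, 0.3],
          [0.9, 0.0, 0.1, 0.1, 0.2, 0.1, 0.2, 0.0]]
code_a = [orientation_code(r) for r in face_a]
code_b = [orientation_code(r) for r in face_b]
print(code_a, code_b, hamming(code_a, code_b))   # [1, 0] [1, 0] 0
```

    A zero distance here means the two images agree on the dominant orientation at every pixel, the cue the FPB exploits.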

  20. Suitable models for face geometry normalization in facial expression recognition

    NASA Astrophysics Data System (ADS)

    Sadeghi, Hamid; Raie, Abolghasem A.

    2015-01-01

    Recently, facial expression recognition has attracted much attention in machine vision research because of its various applications. Accordingly, many facial expression recognition systems have been proposed. However, the majority of existing systems suffer from a critical problem: geometric variability. It directly affects the performance of geometric feature-based facial expression recognition approaches, and it is a crucial challenge in appearance feature-based techniques. This variability appears in both neutral faces and facial expressions. Appropriate face geometry normalization can improve the accuracy of any facial expression recognition system. Therefore, this paper proposes different geometric models, or shapes, for normalization. Face geometry normalization removes the geometric variability of facial images, so that appearance feature extraction methods can be utilized accurately to represent facial images. Thus, several expression-based geometric models are proposed for facial image normalization. Next, local binary patterns and local phase quantization are used for appearance feature extraction. A combination of an effective geometric normalization with accurate appearance representations results in more than a 4% accuracy improvement over several state-of-the-art methods in facial expression recognition. Moreover, utilizing the model of facial expressions with larger mouth and eye regions gives higher accuracy, owing to the importance of these regions in facial expressions.
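    Local binary patterns, one of the appearance descriptors used above, threshold each pixel's 3x3 neighbourhood against its centre; a minimal sketch for a single patch:

```python
def lbp_code(patch):
    """8-neighbour local binary pattern of the centre of a 3x3 patch:
    each neighbour contributes one bit (1 if >= centre), read
    clockwise from the top-left corner."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2],
                  patch[1][2], patch[2][2], patch[2][1],
                  patch[2][0], patch[1][0]]
    return sum(1 << i for i, n in enumerate(neighbours) if n >= c)

patch = [[9, 8, 1],
         [7, 5, 2],
         [6, 4, 3]]
print(lbp_code(patch))   # 195: bits 0, 1, 6, 7 are set
```

    In a full system, histograms of these codes over image regions form the feature vector, which is why accurate geometric normalization of the regions matters.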

  1. Mirror Self-Recognition beyond the Face

    ERIC Educational Resources Information Center

    Nielsen, Mark; Suddendorf, Thomas; Slaughter, Virginia

    2006-01-01

    Three studies (N=144) investigated how toddlers aged 18 and 24 months pass the surprise-mark test of self-recognition. In Study 1, toddlers were surreptitiously marked in successive conditions on their legs and faces with stickers visible only in a mirror. Rates of sticker touching did not differ significantly between conditions. In Study 2,…

  2. Wavelet-based multispectral face recognition

    NASA Astrophysics Data System (ADS)

    Liu, Dian-Ting; Zhou, Xiao-Dan; Wang, Cheng-Wen

    2008-09-01

    This paper proposes a novel wavelet-based face recognition method using thermal infrared (IR) and visible-light face images. The method applies a combination of Gabor filtering and the Fisherfaces method to the reconstructed IR and visible images derived from wavelet frequency subbands. Our objective is to search for the subbands that are insensitive to variation in expression and illumination. The classification performance is improved by combining the multispectral information from the subbands that individually attain a low equal error rate. Experimental results on the Notre Dame face database show that the proposed wavelet-based algorithm outperforms a previous multispectral image fusion method as well as monospectral methods.
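    The frequency subbands searched above come from repeated average/difference splits of the image; a one-level 1-D Haar split shows the idea (applying it along rows and then columns yields the usual LL/LH/HL/HH subbands of a 2-D image):

```python
def haar_1d(x):
    """One level of the 1-D Haar transform: pairwise averages form
    the approximation subband, pairwise differences the detail
    subband."""
    a = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    d = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return a, d

row = [9, 7, 3, 5]
approx, detail = haar_1d(row)
print(approx, detail)   # [8.0, 4.0] [1.0, -1.0]
```

    Discarding or keeping particular subbands before reconstruction is what lets a method suppress components that are sensitive to expression or illumination.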

  3. Face recognition using spectral and spatial information

    NASA Astrophysics Data System (ADS)

    Robila, Stefan A.; Chang, Marco; D'Amico, Nisha B.

    2011-09-01

    We present a novel unsupervised method for facial recognition using hyperspectral imaging and decision fusion. In previous work we separately investigated spectra matching and image-based matching. In spectra matching, face spectra are classified based on spectral similarities. In image-based matching, we investigated various approaches based on orthogonal subspaces (such as PCA and OSP). In the current work we provide an automated unsupervised method that starts by detecting the face in the image and then proceeds to perform both spectral and image-based matching. The results are fused into a single classification decision. The algorithm is tested on an experimental hyperspectral image database of 17 subjects, each with five different facial expressions and viewing angles. Our results show that decision fusion improves recognition accuracy compared with the individual approaches as well as with recognition based on regular imaging.

  4. Face recognition with L1-norm subspaces

    NASA Astrophysics Data System (ADS)

    Maritato, Federica; Liu, Ying; Colonnese, Stefania; Pados, Dimitris A.

    2016-05-01

    We consider the problem of representing individual faces by maximum L1-norm projection subspaces calculated from available face-image ensembles. In contrast to conventional L2-norm subspaces, L1-norm subspaces are seen to offer significant robustness to image variations, disturbances, and rank selection. Face recognition becomes then the problem of associating a new unknown face image to the "closest," in some sense, L1 subspace in the database. In this work, we also introduce the concept of adaptively allocating the available number of principal components to different face image classes, subject to a given total number/budget of principal components. Experimental studies included in this paper illustrate and support the theoretical developments.
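    Once per-class subspaces are available, recognition is a nearest-subspace rule: project the query onto each class's subspace and pick the smallest residual. In the sketch below an ordinary SVD (L2) basis stands in for the paper's maximum-L1-norm subspaces, whose computation is more involved; all names and sizes are invented:

```python
import numpy as np

def class_basis(X, k):
    """Orthonormal basis (k leading left singular vectors) for one
    class's image ensemble X, one vectorized face per column."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :k]

def nearest_subspace(y, bases):
    """Assign y to the class whose subspace leaves the smallest
    projection residual."""
    residuals = [np.linalg.norm(y - U @ (U.T @ y)) for U in bases]
    return int(np.argmin(residuals))

rng = np.random.default_rng(0)
A = rng.random((20, 6))        # class 0: 6 images of 20 pixels
B = rng.random((20, 6))        # class 1
bases = [class_basis(A, 6), class_basis(B, 6)]
query = A[:, 0] + 0.01 * rng.standard_normal(20)   # noisy class-0 image
print(nearest_subspace(query, bases))              # 0
```

    The paper's adaptive budget allocation amounts to choosing a different k per class under a fixed total number of components.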

  5. Independent component representations for face recognition

    NASA Astrophysics Data System (ADS)

    Stewart Bartlett, Marian; Lades, Martin H.; Sejnowski, Terrence J.

    1998-07-01

    In a task such as face recognition, much of the important information may be contained in the high-order relationships among the image pixels. A number of face recognition algorithms employ principal component analysis (PCA), which is based on the second-order statistics of the image set, and does not address high-order statistical dependencies such as the relationships among three or more pixels. Independent component analysis (ICA) is a generalization of PCA which separates the high-order moments of the input in addition to the second-order moments. ICA was performed on a set of face images by an unsupervised learning algorithm derived from the principle of optimal information transfer through sigmoidal neurons. The algorithm maximizes the mutual information between the input and the output, which produces statistically independent outputs under certain conditions. ICA was performed on the face images under two different architectures. The first architecture provided a statistically independent basis set for the face images that can be viewed as a set of independent facial features. The second architecture provided a factorial code, in which the probability of any combination of features can be obtained from the product of their individual probabilities. Both ICA representations were superior to representations based on principal components analysis for recognizing faces across sessions and changes in expression.

  6. Robust face recognition via sparse representation.

    PubMed

    Wright, John; Yang, Allen Y; Ganesh, Arvind; Sastry, S Shankar; Ma, Yi

    2009-02-01

    We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models and argue that new theory from sparse signal representation offers the key to addressing this problem. Based on a sparse representation computed by l1-minimization, we propose a general classification algorithm for (image-based) object recognition. This new framework provides new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. For feature extraction, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficiently large and whether the sparse representation is correctly computed. Unconventional features such as downsampled images and random projections perform just as well as conventional features such as Eigenfaces and Laplacianfaces, as long as the dimension of the feature space surpasses a certain threshold predicted by the theory of sparse representation. This framework can handle errors due to occlusion and corruption uniformly by exploiting the fact that these errors are often sparse with respect to the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. We conduct extensive experiments on publicly available databases to verify the efficacy of the proposed algorithm and corroborate the above claims.
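    The classification rule can be sketched compactly. Greedy orthogonal matching pursuit stands in here for the paper's l1-minimization (which needs a convex-optimization solver), but the decision rule is the same: code the query over the whole training dictionary, then pick the class whose atoms alone give the smallest reconstruction residual. Data and names are invented:

```python
import numpy as np

def omp(D, y, n_nonzero, tol=1e-10):
    """Greedy sparse coding: repeatedly pick the atom most correlated
    with the residual, then refit on the selected support."""
    residual, support = y.copy(), []
    x = np.zeros(D.shape[1])
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        if np.linalg.norm(residual) < tol:
            break
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def src_classify(D, labels, y, n_nonzero=3):
    """Sparse-representation classification by class-wise residual."""
    x = omp(D, y, n_nonzero)
    best, best_r = None, np.inf
    for c in sorted(set(labels)):
        mask = np.array(labels) == c
        r = np.linalg.norm(y - D[:, mask] @ x[mask])
        if r < best_r:
            best, best_r = c, r
    return best

rng = np.random.default_rng(2)
D = rng.standard_normal((30, 8))
D /= np.linalg.norm(D, axis=0)       # unit-norm training atoms
labels = [0, 0, 0, 0, 1, 1, 1, 1]    # two subjects, four images each
y = 0.6 * D[:, 1] + 0.4 * D[:, 2]    # query built from subject 0
print(src_classify(D, labels, y))    # 0
```

    Occlusion handling in the paper adds an identity block to the dictionary so sparse pixel errors are absorbed by extra atoms rather than corrupting the class code.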

  7. Near-infrared face recognition utilizing OpenCV software

    NASA Astrophysics Data System (ADS)

    Sellami, Louiza; Ngo, Hau; Fowler, Chris J.; Kearney, Liam M.

    2014-06-01

    Commercially available hardware, freely available algorithms, and software developed by the authors are combined successfully to detect and recognize subjects in an environment without visible light. This project integrates three major components: an illumination device operating in the near-infrared (NIR) spectrum, a NIR-capable camera, and a software algorithm capable of performing image manipulation, facial detection, and recognition. Focusing our efforts on the near-infrared spectrum allows the low-budget system to operate covertly while still allowing for accurate face recognition. In doing so, a valuable capability has been developed which presents potential benefits for future civilian and military security and surveillance operations.

  8. Tolerance for distorted faces: challenges to a configural processing account of familiar face recognition.

    PubMed

    Sandford, Adam; Burton, A Mike

    2014-09-01

    Face recognition is widely held to rely on 'configural processing', an analysis of spatial relations between facial features. We present three experiments in which viewers were shown distorted faces, and asked to resize these to their correct shape. Based on configural theories appealing to metric distances between features, we reason that this should be an easier task for familiar than unfamiliar faces (whose subtle arrangements of features are unknown). In fact, participants were inaccurate at this task, making between 8% and 13% errors across experiments. Importantly, we observed no advantage for familiar faces: in one experiment participants were more accurate with unfamiliars, and in two experiments there was no difference. These findings were not due to general task difficulty - participants were able to resize blocks of colour to target shapes (squares) more accurately. We also found an advantage of familiarity for resizing other stimuli (brand logos). If configural processing does underlie face recognition, these results place constraints on the definition of 'configural'. Alternatively, familiar face recognition might rely on more complex criteria - based on tolerance to within-person variation rather than highly specific measurement.

  10. Tensor discriminant color space for face recognition.

    PubMed

    Wang, Su-Jing; Yang, Jian; Zhang, Na; Zhou, Chun-Guang

    2011-09-01

    Recent research efforts reveal that color may provide useful information for face recognition. For different visual tasks, the choice of a color space is generally different. How can a color space be sought for the specific face recognition problem? To address this problem, this paper represents a color image as a third-order tensor and presents the tensor discriminant color space (TDCS) model. The model can keep the underlying spatial structure of color images. With the definition of n-mode between-class and within-class scatter matrices, TDCS constructs an iterative procedure to obtain one color space transformation matrix and two discriminant projection matrices by maximizing the ratio of these two scatter matrices. Experiments are conducted on two color face databases, the AR and Georgia Tech face databases, and the results show that both the performance and the efficiency of the proposed method are better than those of the state-of-the-art color image discriminant model, which involves one color space transformation matrix and one discriminant projection matrix, particularly on a complicated face database with various pose variations.

  11. Finding Faces Among Faces: Human Faces are Located More Quickly and Accurately than Other Primate and Mammal Faces

    PubMed Central

    Simpson, Elizabeth A.; Buchin, Zachary; Werner, Katie; Worrell, Rey; Jakobsen, Krisztina V.

    2014-01-01

    We tested the specificity of human face search efficiency by examining whether there is a broad window of detection for various face-like stimuli—human and animal faces—or whether own-species faces receive greater attentional allocation. We assessed the strength of the own-species face detection bias by testing whether human faces are located more efficiently than other animal faces, when presented among various other species’ faces, in heterogeneous 16-, 36-, and 64-item arrays. Across all array sizes, we found that, controlling for distractor type, human faces were located faster and more accurately than primate and mammal faces, and that, controlling for target type, searches were faster when distractors were human faces compared to animal faces, revealing more efficient processing of human faces regardless of their role as targets or distractors (Experiment 1). Critically, these effects remained when searches were for specific species’ faces (human, chimpanzee, otter), ruling out a category-level explanation (Experiment 2). Together, these results suggest that human faces may be processed more efficiently than animal faces, both when task-relevant (targets), and when task-irrelevant (distractors), even when in direct competition with other faces. These results suggest that there is not a broad window of detection for all face-like patterns, but that human adults process own-species’ faces more efficiently than other species’ faces. Such own-species search efficiencies may arise through experience with own-species faces throughout development, or may be privileged early in development, due to the evolutionary importance of conspecifics’ faces.

  12. Gender-Based Prototype Formation in Face Recognition

    ERIC Educational Resources Information Center

    Baudouin, Jean-Yves; Brochard, Renaud

    2011-01-01

    The role of gender categories in prototype formation during face recognition was investigated in 2 experiments. The participants were asked to learn individual faces and then to recognize them. During recognition, individual faces were mixed with blended faces of the same or different genders. The results of the 2 experiments showed…

  13. Super-resolution benefit for face recognition

    NASA Astrophysics Data System (ADS)

    Hu, Shuowen; Maschal, Robert; Young, S. Susan; Hong, Tsai Hong; Phillips, Jonathon P.

    2011-06-01

    Vast amounts of video footage are being continuously acquired by surveillance systems on private premises, commercial properties, government compounds, and military installations. Facial recognition systems have the potential to identify suspicious individuals on law enforcement watchlists, but accuracy is severely hampered by the low resolution of typical surveillance footage and the far distance of suspects from the cameras. To improve accuracy, super-resolution can enhance suspect details by utilizing a sequence of low resolution frames from the surveillance footage to reconstruct a higher resolution image for input into the facial recognition system. This work measures the improvement of face recognition with super-resolution in a realistic surveillance scenario. Low resolution and super-resolved query sets are generated using a video database at different eye-to-eye distances corresponding to different distances of subjects from the camera. Performance of a face recognition algorithm using the super-resolved and baseline query sets was calculated by matching against galleries consisting of frontal mug shots. The results show that super-resolution improves performance significantly at the examined mid and close ranges.
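    The reconstruction step can be illustrated in one dimension: low-resolution frames taken at known sub-pixel shifts are interleaved onto a finer grid, recovering samples that no single frame contains. A minimal shift-and-add sketch (real pipelines also estimate the shifts and deblur):

```python
def shift_and_add_1d(frames, shifts, factor=2):
    """Place each low-resolution sample onto a grid `factor` times
    finer, offset by its frame's known sub-pixel shift, and average
    where samples coincide."""
    n = len(frames[0]) * factor
    hi, count = [0.0] * n, [0] * n
    for frame, s in zip(frames, shifts):
        offset = round(s * factor)
        for i, v in enumerate(frame):
            hi[i * factor + offset] += v
            count[i * factor + offset] += 1
    return [h / c if c else 0.0 for h, c in zip(hi, count)]

truth = [1, 2, 3, 4, 5, 6, 7, 8]       # unknown high-res scan line
frames = [truth[0::2], truth[1::2]]    # two half-resolution frames
result = shift_and_add_1d(frames, shifts=[0.0, 0.5])
print(result)   # [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
```

    Feeding such a super-resolved image to the recognizer, rather than any single low-resolution frame, is the source of the accuracy gain measured above.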

  14. Face recognition: a model specific ability.

    PubMed

    Wilmer, Jeremy B; Germine, Laura T; Nakayama, Ken

    2014-01-01

    In our everyday lives, we view it as a matter of course that different people are good at different things. It can be surprising, in this context, to learn that most of what is known about cognitive ability variation across individuals concerns the broadest of all cognitive abilities; an ability referred to as general intelligence, general mental ability, or just g. In contrast, our knowledge of specific abilities, those that correlate little with g, is severely constrained. Here, we draw upon our experience investigating an exceptionally specific ability, face recognition, to make the case that many specific abilities could easily have been missed. In making this case, we derive key insights from earlier false starts in the measurement of face recognition's variation across individuals, and we highlight the convergence of factors that enabled the recent discovery that this variation is specific. We propose that the case of face recognition ability illustrates a set of tools and perspectives that could accelerate fruitful work on specific cognitive abilities. By revealing relatively independent dimensions of human ability, such work would enhance our capacity to understand the uniqueness of individual minds.

  16. Block error correction codes for face recognition

    NASA Astrophysics Data System (ADS)

    Hussein, Wafaa R.; Sellahewa, Harin; Jassim, Sabah A.

    2011-06-01

    Face recognition is one of the most desirable biometric-based authentication schemes for controlling access to sensitive information/locations and as a proof of identity to claim entitlement to services. The aim of this paper is to develop block-based mechanisms that reduce recognition errors caused by varying illumination conditions, with emphasis on using error correction codes. We investigate the modelling of error patterns in different parts/blocks of face images that arise from differences in illumination conditions, and we use appropriate error correction codes to deal with the corresponding distortion. We test the performance of our proposed schemes on the Extended Yale-B Face Database, which consists of face images belonging to five illumination subsets defined by the angle of the light source relative to the camera. In our experiments each image is divided into three horizontal regions: region 1, the three rows above the eyebrows plus the eyebrows and eyes; region 2, the nose; and region 3, the mouth and chin. By estimating statistical parameters for the errors in each region we select suitable BCH error correction codes that yield improved recognition accuracy for that particular region in comparison to applying error correction codes to the entire image. A Discrete Wavelet Transform (DWT) to a depth of 3 is used for face feature extraction, followed by global/local binarization of the coefficients in each subband. We demonstrate that the use of BCH improves the separation between the distribution of Hamming distances of client-client samples and the distribution of Hamming distances of imposter-client samples.
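    The per-block coding scheme can be illustrated with the simplest member of the BCH family, the Hamming(7,4) code (n=7, k=4, correcting one bit per block). This is a hedged sketch of the idea, not the paper's code; the generator and parity-check matrices are a standard textbook choice, and the "feature bits" stand in for binarized DWT coefficients of one face block:

```python
import numpy as np

# Generator and parity-check matrices for the Hamming(7,4) code,
# the simplest member of the BCH family (corrects 1 bit per block).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(nibble):
    """Encode 4 feature bits into a 7-bit codeword."""
    return (np.array(nibble) @ G) % 2

def decode(word):
    """Correct up to one flipped bit and return the 4 data bits."""
    word = np.array(word).copy()
    syndrome = (H @ word) % 2
    if syndrome.any():
        # The syndrome equals the column of H at the error position.
        err = int(np.where((H.T == syndrome).all(axis=1))[0][0])
        word[err] ^= 1
    return word[:4]

bits = [1, 0, 1, 1]                   # 4 binarized coefficients of one block
noisy = encode(bits)
noisy[2] ^= 1                         # illumination change flips one bit
assert list(decode(noisy)) == bits    # the flip is corrected
```

Longer BCH codes, as selected per region in the paper, correct multiple bit errors per block at the cost of more parity bits.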

  17. Face and body recognition show similar improvement during childhood.

    PubMed

    Bank, Samantha; Rhodes, Gillian; Read, Ainsley; Jeffery, Linda

    2015-09-01

    Adults are proficient in extracting identity cues from faces. This proficiency develops slowly during childhood, with performance not reaching adult levels until adolescence. Bodies are similar to faces in that they convey identity cues and rely on specialized perceptual mechanisms. However, it is currently unclear whether body recognition mirrors the slow development of face recognition during childhood. Recent evidence suggests that body recognition develops faster than face recognition. Here we measured body and face recognition in 6- and 10-year-old children and adults to determine whether these two skills show different amounts of improvement during childhood. We found no evidence that they do. Face and body recognition showed similar improvement with age, and children, like adults, were better at recognizing faces than bodies. These results suggest that the mechanisms of face and body memory mature at a similar rate or that improvement of more general cognitive and perceptual skills underlies improvement of both face and body recognition.

  18. Maximum neighborhood margin criterion in face recognition

    NASA Astrophysics Data System (ADS)

    Han, Pang Ying; Teoh, Andrew Beng Jin

    2009-04-01

    Feature extraction is a data analysis technique devoted to removing redundancy and extracting the most discriminative information. In face recognition, feature extractors are normally plagued by the small sample size problem, in which the total number of training images is much smaller than the image dimensionality. Recently, an optimized facial feature extractor, the maximum marginal criterion (MMC), was proposed. MMC computes an optimized projection by solving the generalized eigenvalue problem in a standard form that is free of inverse matrix operations, and thus it does not suffer from the small sample size problem. However, MMC is essentially a linear projection technique that relies on facial image pixel intensity to compute within- and between-class scatters. The nonlinear nature of faces restricts the discrimination of MMC. Hence, we propose an improved MMC, namely the maximum neighborhood margin criterion (MNMC). Unlike MMC, which preserves global geometric structures that do not perfectly describe the underlying face manifold, MNMC seeks a projection that preserves local geometric structures via neighborhood preservation. This objective function enhances classification capability, as confirmed by experimental results. MNMC outperforms MMC, especially on the pose, illumination, and expression (PIE) and Face Recognition Grand Challenge (FRGC) databases.
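    The MMC baseline that MNMC improves on reduces to a symmetric eigenproblem on Sb − Sw, which needs no matrix inversion and therefore sidesteps the small sample size problem. A minimal NumPy sketch (illustrative only; MNMC's neighborhood-preserving scatter is not shown):

```python
import numpy as np

def mmc_projection(X, y, dim):
    """Maximum marginal criterion: project onto the top eigenvectors of
    Sb - Sw (between-class minus within-class scatter)."""
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)   # between-class scatter
        Sw += (Xc - mc).T @ (Xc - mc)                # within-class scatter
    # Symmetric eigenproblem: no inverse needed, so no small-sample issue.
    vals, vecs = np.linalg.eigh(Sb - Sw)
    return vecs[:, np.argsort(vals)[::-1][:dim]]     # (d, dim) projection

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 1, (20, 10)) for m in (0, 5)])  # 2 toy classes
y = np.repeat([0, 1], 20)
W = mmc_projection(X, y, 2)
assert W.shape == (10, 2)
```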

  19. Maximum Correntropy Criterion for Robust Face Recognition.

    PubMed

    He, Ran; Zheng, Wei-Shi; Hu, Bao-Gang

    2011-08-01

    In this paper, we present a sparse correntropy framework for computing robust sparse representations of face images for recognition. Compared with the state-of-the-art l1-norm-based sparse representation classifier (SRC), which assumes that noise also has a sparse representation, our sparse algorithm is developed based on the maximum correntropy criterion, which is much more insensitive to outliers. In order to develop a more tractable and practical approach, we in particular impose a nonnegativity constraint on the variables in the maximum correntropy criterion and develop a half-quadratic optimization technique to approximately maximize the objective function in an alternating way, so that the complex optimization problem is reduced to learning a sparse representation through a weighted linear least squares problem with a nonnegativity constraint at each iteration. Our extensive experiments demonstrate that the proposed method is more robust and efficient than the related state-of-the-art methods in dealing with the occlusion and corruption problems in face recognition. In particular, the proposed method improves both recognition accuracy and receiver operating characteristic (ROC) curves, while its computational cost is much lower than that of the SRC algorithms.
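    A minimal sketch of the half-quadratic alternation described above, assuming a Gaussian correntropy kernel; the l2 regularizer and the clipping step here stand in for the paper's full sparse, nonnegative least-squares formulation:

```python
import numpy as np

def correntropy_code(A, yvec, sigma=1.0, lam=0.01, iters=20):
    """Half-quadratic loop: Gaussian-kernel weights on residuals, then a
    weighted ridge solve; clipping crudely enforces nonnegativity."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        r = yvec - A @ x
        w = np.exp(-r ** 2 / (2 * sigma ** 2))   # outlier pixels get ~0 weight
        Aw = A * w[:, None]
        x = np.linalg.solve(A.T @ Aw + lam * np.eye(A.shape[1]), Aw.T @ yvec)
        x = np.clip(x, 0, None)                  # crude stand-in for true NNLS
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 5))      # 5 training samples as dictionary columns
yvec = A[:, 2].copy()             # probe generated by training sample 2
yvec[0] += 8.0                    # one grossly corrupted "pixel"
x = correntropy_code(A, yvec)
assert np.argmax(x) == 2          # coefficient lands on the right sample
```

Because the Gaussian weight decays fast in the residual, the corrupted entry is effectively ignored after the first iteration.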

  20. Face recognition: a model specific ability

    PubMed Central

    Wilmer, Jeremy B.; Germine, Laura T.; Nakayama, Ken

    2014-01-01

    In our everyday lives, we view it as a matter of course that different people are good at different things. It can be surprising, in this context, to learn that most of what is known about cognitive ability variation across individuals concerns the broadest of all cognitive abilities; an ability referred to as general intelligence, general mental ability, or just g. In contrast, our knowledge of specific abilities, those that correlate little with g, is severely constrained. Here, we draw upon our experience investigating an exceptionally specific ability, face recognition, to make the case that many specific abilities could easily have been missed. In making this case, we derive key insights from earlier false starts in the measurement of face recognition’s variation across individuals, and we highlight the convergence of factors that enabled the recent discovery that this variation is specific. We propose that the case of face recognition ability illustrates a set of tools and perspectives that could accelerate fruitful work on specific cognitive abilities. By revealing relatively independent dimensions of human ability, such work would enhance our capacity to understand the uniqueness of individual minds. PMID:25346673

  1. The significance of hair for face recognition.

    PubMed

    Toseeb, Umar; Keeble, David R T; Bryant, Eleanor J

    2012-01-01

    Hair is a feature of the head that frequently changes in different situations. For this reason much research in the area of face perception has employed stimuli without hair. To investigate the effect of the presence of hair we used faces with and without hair in a recognition task. Participants took part in trials in which the state of the hair either remained consistent (Same) or switched between learning and test (Switch). It was found that in the Same trials performance did not differ for stimuli presented with and without hair. This implies that there is sufficient information in the internal features of the face for optimal performance in this task. It was also found that performance in the Switch trials was substantially lower than in the Same trials. This drop in accuracy when the stimuli were switched suggests that faces are represented in a holistic manner and that manipulation of the hair causes disruption to this, with implications for the interpretation of some previous studies.

  2. Regularized robust coding for face recognition.

    PubMed

    Yang, Meng; Zhang, Lei; Yang, Jian; Zhang, David

    2013-05-01

    Recently the sparse representation based classification (SRC) has been proposed for robust face recognition (FR). In SRC, the testing image is coded as a sparse linear combination of the training samples, and the representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Such a sparse coding model assumes that the coding residual follows a Gaussian or Laplacian distribution, which may not be effective enough to describe the coding residual in practical FR systems. Meanwhile, the sparsity constraint on the coding coefficients makes the computational cost of SRC very high. In this paper, we propose a new face coding model, namely regularized robust coding (RRC), which could robustly regress a given signal with regularized regression coefficients. By assuming that the coding residual and the coding coefficient are respectively independent and identically distributed, the RRC seeks a maximum a posteriori solution of the coding problem. An iteratively reweighted regularized robust coding (IR(3)C) algorithm is proposed to solve the RRC model efficiently. Extensive experiments on representative face databases demonstrate that the RRC is much more effective and efficient than state-of-the-art sparse representation based methods in dealing with face occlusion, corruption, lighting, and expression changes.
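    The reweighting loop at the heart of IR(3)C can be sketched as below; the logistic weight function follows the spirit of the paper, but the plain l2 regularizer and the parameter values are illustrative simplifications, not the authors' implementation:

```python
import numpy as np

def ir3c_code(D, yvec, mu=3.0, delta=1.0, lam=0.01, iters=15):
    """Iteratively reweighted coding: logistic weights on squared residuals,
    then an l2-regularized weighted least-squares coefficient update."""
    alpha = np.zeros(D.shape[1])
    for _ in range(iters):
        r2 = (yvec - D @ alpha) ** 2
        z = np.clip(mu * (r2 - delta), -50, 50)   # avoid exp overflow
        w = 1.0 / (1.0 + np.exp(z))               # big residual -> weight ~ 0
        Dw = D * w[:, None]
        alpha = np.linalg.solve(D.T @ Dw + lam * np.eye(D.shape[1]),
                                Dw.T @ yvec)
    return alpha

rng = np.random.default_rng(1)
D = rng.normal(size=(60, 4))            # 4 training samples as columns
yvec = D[:, 1].copy()                   # probe built from training sample 1
yvec[3] += 9.0                          # one corrupted "pixel" (occlusion)
alpha = ir3c_code(D, yvec)
assert np.argmax(np.abs(alpha)) == 1    # corruption is down-weighted away
```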

  3. Recognition of faces of ingroup and outgroup children and adults.

    PubMed

    Corenblum, B; Meissner, Christian A

    2006-03-01

    People are often more accurate in recognizing faces of ingroup members than in recognizing faces of outgroup members. Although own-group biases in face recognition are well established among adults, less attention has been given to such biases among children. This is surprising considering how often children give testimony in criminal and civil cases. In the current two studies, Euro-Canadian children attending public school and young adults enrolled in university-level classes were asked whether previously presented photographs of Euro-American and African American adults (Study 1) or photographs of Native Canadian, Euro-Canadian, and African American children (Study 2) were new or old. In both studies, own-group biases were found on measures of discrimination accuracy and response bias as well as on estimates of reaction time, confidence, and confidence-accuracy relations. Results of both studies were consistent with predictions derived from multidimensional face space theory of face recognition. Implications of the current studies for the validity of children's eyewitness testimony are also discussed. PMID:16243349

  4. Individual differences in holistic processing predict face recognition ability.

    PubMed

    Wang, Ruosi; Li, Jingguang; Fang, Huizhen; Tian, Moqian; Liu, Jia

    2012-02-01

    Why do some people recognize faces easily and others frequently make mistakes in recognizing faces? Classic behavioral work has shown that faces are processed in a distinctive holistic manner that is unlike the processing of objects. In the study reported here, we investigated whether individual differences in holistic face processing have a significant influence on face recognition. We found that the magnitude of face-specific recognition accuracy correlated with the extent to which participants processed faces holistically, as indexed by the composite-face effect and the whole-part effect. This association is due to face-specific processing in particular, not to a more general aspect of cognitive processing, such as general intelligence or global attention. This finding provides constraints on computational models of face recognition and may elucidate mechanisms underlying cognitive disorders, such as prosopagnosia and autism, that are associated with deficits in face recognition.

  5. Towards Robust Face Recognition from Video

    SciTech Connect

    Price, JR

    2001-10-18

    A novel, template-based method for face recognition is presented. The goals of the proposed method are to integrate multiple observations for improved robustness and to provide auxiliary confidence data for subsequent use in an automated video surveillance system. The proposed framework consists of a parallel system of classifiers, referred to as observers, where each observer is trained on one face region. The observer outputs are combined to yield the final recognition result. Three of the four confounding factors--expression, illumination, and decoration--are specifically addressed in this paper. The extension of the proposed approach to address the fourth confounding factor--pose--is straightforward and well supported in previous work. A further contribution of the proposed approach is the computation of a revealing confidence measure. This confidence measure will aid the subsequent application of the proposed method to video surveillance scenarios. Results are reported for a database comprising 676 images of 160 subjects under a variety of challenging circumstances. These results indicate significant performance improvements over previous methods and demonstrate the usefulness of the confidence data.

  6. Spectral face recognition using orthogonal subspace bases

    NASA Astrophysics Data System (ADS)

    Wimberly, Andrew; Robila, Stefan A.; Peplau, Tansy

    2010-04-01

    We present an efficient method for facial recognition using hyperspectral imaging and orthogonal subspaces. Projecting the data into orthogonal subspaces has the advantages of compactness and reduction of redundancy. We focus on two approaches: Principal Component Analysis and Orthogonal Subspace Projection. Our work proceeded in three stages. First, we designed an experimental setup that allowed us to create a hyperspectral image database of 17 subjects under different facial expressions and viewing angles. Second, we investigated approaches that employ spectral information for the generation of fused grayscale images. Third, we designed and tested a recognition system based on the methods described above. The experimental results show that spectral fusion improves recognition accuracy compared to regular imaging. The work expands on previous band extraction research and has the distinct advantage of being one of the first to combine spatial information (i.e., face characteristics) with spectral information. In addition, the techniques are general enough to accommodate differences in skin spectra.
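    Spectral fusion to a grayscale image can be sketched with PCA: project each pixel's spectrum onto the dominant spectral direction. This is a generic illustration of band fusion, not the authors' exact fusion rule:

```python
import numpy as np

def fuse_bands(cube):
    """Fuse a hyperspectral cube (H, W, B) into one grayscale image by
    projecting each pixel's spectrum onto the first principal component."""
    H, W, B = cube.shape
    spectra = cube.reshape(-1, B).astype(float)
    spectra -= spectra.mean(axis=0)
    # First right-singular vector = dominant spectral direction.
    _, _, Vt = np.linalg.svd(spectra, full_matrices=False)
    fused = spectra @ Vt[0]
    return fused.reshape(H, W)

cube = np.random.default_rng(6).random((8, 8, 5))   # toy 5-band face patch
img = fuse_bands(cube)
assert img.shape == (8, 8)
```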

  7. Impaired face recognition is associated with social inhibition.

    PubMed

    Avery, Suzanne N; VanDerKlok, Ross M; Heckers, Stephan; Blackford, Jennifer U

    2016-02-28

    Face recognition is fundamental to successful social interaction. Individuals with deficits in face recognition are likely to have social functioning impairments that may lead to heightened risk for social anxiety. A critical component of social interaction is how quickly a face is learned during initial exposure to a new individual. Here, we used a novel Repeated Faces task to assess how quickly memory for faces is established. Face recognition was measured over multiple exposures in 52 young adults ranging from low to high in social inhibition, a core dimension of social anxiety. High social inhibition was associated with a smaller slope of change in recognition memory over repeated face exposure, indicating participants with higher social inhibition showed smaller improvements in recognition memory after seeing faces multiple times. We propose that impaired face learning is an important mechanism underlying social inhibition and may contribute to, or maintain, social anxiety.

  8. Random-profiles-based 3D face recognition system.

    PubMed

    Kim, Joongrock; Yu, Sunjin; Lee, Sangyoun

    2014-01-01

    In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of the acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50 k 3D points and that the system achieves a reliable recognition rate under pose variation.

  9. Random-Profiles-Based 3D Face Recognition System

    PubMed Central

    Joongrock, Kim; Sunjin, Yu; Sangyoun, Lee

    2014-01-01

    In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of the acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50 k 3D points and that the system achieves a reliable recognition rate under pose variation. PMID:24691101

  10. Combination of direct matching and collaborative representation for face recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Chongyang

    2013-06-01

    It has been shown that representation-based classification (RBC) can achieve high accuracy in face recognition. However, conventional RBC has a very high computational cost. Collaborative representation, proposed in [1], not only has the advantages of RBC but is also computationally very efficient. In this paper, a combination of direct matching of images and collaborative representation is proposed for face recognition. Experimental results show that the proposed method consistently classifies more accurately than collaborative representation alone. The underlying reason is that direct matching of images and collaborative representation use different ways to calculate the dissimilarity between the test sample and the training samples. As a result, the score obtained using direct matching of images is very complementary to the score obtained using collaborative representation. Indeed, the analysis shows that the matching scores generated from direct matching of images and collaborative representation always have a low correlation. This allows the proposed method to exploit more information for face recognition and to produce a better result.
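    The fusion of the two scores can be sketched as follows; the ridge-regularized collaborative representation, the min-max normalization, and the equal weighting are assumed details not specified in the abstract:

```python
import numpy as np

def fused_scores(train, labels, probe, lam=0.01):
    """Per-class dissimilarities from (a) direct matching and (b) collaborative
    representation residuals, combined after min-max normalization."""
    classes = np.unique(labels)
    # (a) direct matching: distance from the probe to the nearest class sample
    direct = np.array([np.linalg.norm(train[labels == c] - probe, axis=1).min()
                       for c in classes])
    # (b) collaborative representation: ridge-code the probe over ALL samples,
    # then score each class by its partial reconstruction residual
    A = train.T                                        # (features, samples)
    rho = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ probe)
    cr = np.array([np.linalg.norm(probe - A[:, labels == c] @ rho[labels == c])
                   for c in classes])
    scale = lambda s: (s - s.min()) / (s.max() - s.min() + 1e-12)
    return scale(direct) + scale(cr)                   # lower = better match

rng = np.random.default_rng(2)
train = np.vstack([rng.normal(0, 0.5, (10, 8)), rng.normal(4, 0.5, (10, 8))])
labels = np.repeat([0, 1], 10)
probe = rng.normal(0, 0.5, 8)                          # drawn near class 0
scores = fused_scores(train, labels, probe)
assert np.argmin(scores) == 0
```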

  11. [Face recognition in patients with autism spectrum disorders].

    PubMed

    Kita, Yosuke; Inagaki, Masumi

    2012-07-01

    The present study aimed to review previous research on face recognition in patients with autism spectrum disorders (ASD). Face recognition is a key question in the ASD research field because it can provide clues for elucidating the neural substrates responsible for the social impairment of these patients. Historically, behavioral studies have reported low performance and/or unique strategies of face recognition among ASD patients. However, depending on the experimental situation or developmental stage, the performance and strategies of ASD patients can be comparable to those of control groups, suggesting that face recognition in ASD patients is not entirely impaired. Recent brain function studies, including event-related potential and functional magnetic resonance imaging studies, have investigated the cognitive process of face recognition in ASD patients and revealed impaired function in the brain's neural network comprising the fusiform gyrus and amygdala. This impaired function is potentially involved in the diminished preference for faces and in the atypical development of face recognition, eliciting the unstable behavioral characteristics observed in these patients. Additionally, face recognition in ASD patients has been examined from other perspectives, namely self-face recognition and facial emotion recognition. While the former topic is intimately linked to basic social abilities such as self-other discrimination, the latter is closely associated with mentalizing. Further research on face recognition in ASD patients should investigate the connection between behavioral and neurological specifics in these patients by considering developmental changes and the spectrum nature of the clinical condition of ASD.

  12. Direct Gaze Modulates Face Recognition in Young Infants

    ERIC Educational Resources Information Center

    Farroni, Teresa; Massaccesi, Stefano; Menon, Enrica; Johnson, Mark H.

    2007-01-01

    From birth, infants prefer to look at faces that engage them in direct eye contact. In adults, direct gaze is known to modulate the processing of faces, including the recognition of individuals. In the present study, we investigate whether direction of gaze has any effect on face recognition in four-month-old infants. Four-month infants were shown…

  13. Multiview face recognition: from TensorFace to V-TensorFace and K-TensorFace.

    PubMed

    Tian, Chunna; Fan, Guoliang; Gao, Xinbo; Tian, Qi

    2012-04-01

    Face images captured under uncontrolled environments suffer from changes in multiple factors such as camera view, illumination, and expression. Tensor analysis provides a way of analyzing the influence of different factors on facial variation. However, the TensorFace model has difficulty representing the nonlinearity of the view subspace. In this paper, to overcome this limitation, we present a view-manifold-based TensorFace (V-TensorFace), in which the latent view manifold preserves the local distances in the multiview face space. Moreover, a kernelized TensorFace (K-TensorFace) for multiview face recognition is proposed to preserve the structure of the latent manifold in the image space. Both methods provide a generative model that involves a continuous view manifold for unseen view representation. Most importantly, we propose a unified framework to generalize TensorFace, V-TensorFace, and K-TensorFace. Finally, an expectation-maximization-like algorithm is developed to estimate the identity and view parameters iteratively for a face image of an unknown/unseen view. The experiment on the PIE database shows the effectiveness of the manifold construction method. Extensive comparison experiments on the Weizmann and Oriental Face databases for multiview face recognition demonstrate the superiority of the proposed V- and K-TensorFace methods over view-based principal component analysis and other state-of-the-art approaches. PMID:22318490
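    TensorFace-style models begin with a higher-order SVD of an (identity × view × pixel) face tensor, whose factor subspaces are the left singular vectors of the tensor's mode unfoldings. A minimal sketch of that first step (illustrative only; the manifold and kernel extensions above are not shown):

```python
import numpy as np

def mode_unfold(T, mode):
    """Mode-n matricization: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_factors(T):
    """Factor subspaces (identity, view, pixel, ...) as the left singular
    vectors of each mode unfolding, as in TensorFace-style models."""
    return [np.linalg.svd(mode_unfold(T, m), full_matrices=False)[0]
            for m in range(T.ndim)]

# toy face tensor: 4 identities x 3 views x 50 pixels
T = np.random.default_rng(3).random((4, 3, 50))
U = hosvd_factors(T)
assert [u.shape[0] for u in U] == [4, 3, 50]
```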

  14. Familiar Person Recognition: Is Autonoetic Consciousness More Likely to Accompany Face Recognition Than Voice Recognition?

    NASA Astrophysics Data System (ADS)

    Barsics, Catherine; Brédart, Serge

    2010-11-01

    Autonoetic consciousness is a fundamental property of human memory, enabling us to experience mental time travel, to recollect past events with a feeling of self-involvement, and to project ourselves into the future. Autonoetic consciousness is a characteristic of episodic memory. By contrast, awareness of the past associated with a mere feeling of familiarity or knowing relies on noetic consciousness, which depends on semantic memory integrity. The present research aimed to evaluate whether conscious recollection of episodic memories is more likely to occur following the recognition of a familiar face than following the recognition of a familiar voice. Recall of semantic (biographical) information was also assessed. Previous studies that investigated the recall of biographical information following person recognition used faces and voices of famous people as stimuli. In this study, the participants were presented with personally familiar people's voices and faces, thus avoiding the presence of identity cues in the spoken extracts and allowing stricter control of exposure frequency for both types of stimuli (voices and faces). The rate of retrieved episodic memories, associated with autonoetic awareness, was significantly higher for familiar faces than for familiar voices, even though the level of overall recognition was similar for the two stimulus domains. The same pattern was observed for semantic information retrieval. These results and their implications for current Interactive Activation and Competition person recognition models are discussed.

  15. Image preprocessing study on KPCA-based face recognition

    NASA Astrophysics Data System (ADS)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural, and convenient advantages, has attracted more and more attention. This paper studies a face recognition system comprising face detection, feature extraction, and recognition, focusing on the related theory and key techniques of the various preprocessing methods used in the face detection process and, using the KPCA method, on how different preprocessing methods affect the recognition results. We choose the YCbCr color space for skin segmentation and integral projection for face location. We preprocess face images using the erosion and dilation of opening and closing operations together with an illumination compensation method, and then apply a face recognition method based on kernel principal component analysis; experiments were carried out on a typical face database. The algorithms were implemented on the MATLAB platform. Experimental results show that, under certain conditions, the kernel method based on the PCA algorithm makes the extracted features represent the original image information better, owing to its nonlinear feature extraction, and thereby obtains a higher recognition rate. In the image preprocessing stage, we found that different operations on the images can yield different results, and hence different recognition rates in the recognition stage. At the same time, in the kernel principal component analysis, the power of the polynomial kernel function affects the recognition result.
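    The KPCA step with a polynomial kernel, whose power affects the recognition result as noted above, can be sketched generically (this is a textbook KPCA illustration, not the paper's implementation):

```python
import numpy as np

def kpca_features(X, degree=2, ncomp=4):
    """Kernel PCA with a polynomial kernel: double-center the Gram matrix,
    eigendecompose, and return the projected training features."""
    K = (X @ X.T + 1.0) ** degree            # polynomial kernel matrix
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                           # centering in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:ncomp]
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                       # (n_samples, ncomp) features

X = np.random.default_rng(4).random((12, 30))   # 12 tiny "face" vectors
F = kpca_features(X, degree=2, ncomp=4)
assert F.shape == (12, 4)
```

Raising `degree` changes the implicit feature space, which is the sense in which the kernel's power influences recognition accuracy.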

  16. Face shape and face identity processing in behavioral variant fronto-temporal dementia: A specific deficit for familiarity and name recognition of famous faces.

    PubMed

    De Winter, François-Laurent; Timmers, Dorien; de Gelder, Beatrice; Van Orshoven, Marc; Vieren, Marleen; Bouckaert, Miriam; Cypers, Gert; Caekebeke, Jo; Van de Vliet, Laura; Goffin, Karolien; Van Laere, Koen; Sunaert, Stefan; Vandenberghe, Rik; Vandenbulcke, Mathieu; Van den Stock, Jan

    2016-01-01

    Deficits in face processing have been described in the behavioral variant of fronto-temporal dementia (bvFTD), primarily regarding the recognition of facial expressions. Less is known about face shape and face identity processing. Here we used a hierarchical strategy targeting face shape and face identity recognition in bvFTD and matched healthy controls. Participants performed three psychophysical experiments targeting face shape detection (Experiment 1), unfamiliar face identity matching (Experiment 2), and familiarity categorization and famous face-name matching (Experiment 3). The results revealed group differences only in Experiment 3, with a deficit in the bvFTD group for both familiarity categorization and famous face-name matching. Voxel-based morphometry regression analyses in the bvFTD group revealed an association between grey matter volume of the left ventral anterior temporal lobe and familiarity recognition, while face-name matching correlated with grey matter volume of the bilateral ventral anterior temporal lobes. Subsequently, we quantified familiarity-specific and name-specific recognition deficits as the number of celebrities for which, respectively, only the name or only the familiarity was accurately recognized. Both indices were associated with grey matter volume of the bilateral anterior temporal cortices. These findings extend previous results by documenting the involvement of the left anterior temporal lobe (ATL) in familiarity detection and of the right ATL in name recognition deficits in fronto-temporal lobar degeneration.

  17. Comparison of computer-based and optical face recognition paradigms

    NASA Astrophysics Data System (ADS)

    Alorf, Abdulaziz A.

    The main objectives of this thesis are: to validate an improved principal components analysis (IPCA) algorithm on images; to design and simulate a digital model for image compression, face recognition, and image detection using the principal components analysis (PCA) and IPCA algorithms; to design and simulate an optical model for face recognition and object detection using the joint transform correlator (JTC); to establish detection and recognition thresholds for each model; to compare the performance of the PCA and IPCA algorithms in compression, recognition, and detection; and to compare the performance of the digital and optical models in recognition and detection. MATLAB was used to simulate the models. PCA is a technique for identifying patterns in data and representing the data so as to highlight similarities and differences. Identifying patterns in data of high dimension (more than three dimensions) is difficult because such data cannot be represented graphically; PCA is therefore a powerful method for analyzing such data. IPCA is another statistical tool for identifying patterns in data; it uses information theory to improve on PCA. The JTC is an optical correlator used for synthesizing a frequency-plane filter for coherent optical systems. In most applications, the IPCA algorithm behaves better than the PCA algorithm. It is better than PCA in image compression because it achieves higher compression, more accurate reconstruction, and faster processing with acceptable error; it is also better than PCA in real-time image detection because it achieves the smallest error rate as well as remarkable speed. On the other hand, the PCA algorithm performs better than the IPCA algorithm in face recognition because it offers
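    The PCA stage described above (project images onto a few principal components, then reconstruct or compare in that reduced space) can be sketched generically. This is a minimal NumPy illustration of standard PCA on flattened images, not the thesis's IPCA variant or its MATLAB simulation; the toy data and dimensions are illustrative only.

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA on row-vector samples X (n_samples x n_features).
    Returns the sample mean and the top-k principal components."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centred data avoids forming the covariance matrix explicitly.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:k]

def pca_project(x, mean, components):
    """Compress one sample to k coefficients."""
    return components @ (x - mean)

def pca_reconstruct(coeffs, mean, components):
    """Approximate the original sample from its k coefficients."""
    return mean + components.T @ coeffs

# Toy example: 10 "images" of 64 pixels each, compressed to 3 coefficients.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 64))
mean, comps = pca_fit(X, k=3)
coeffs = pca_project(X[0], mean, comps)
approx = pca_reconstruct(coeffs, mean, comps)
```

    Compression here is the reduction from 64 pixel values to 3 coefficients; reconstruction error shrinks as k grows, which is the trade-off the thesis measures.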

  18. A developmental study of the own-age face recognition bias in children.

    PubMed

    Hills, Peter J

    2012-03-01

    The own-age bias is the finding that people recognize faces of people their own age more accurately than faces of other ages (e.g., Anastasi & Rhodes, 2005, 2006) and appears to be at least partially based on experience (Harrison & Hole, 2009). Indeed, Hills and Lewis (2011a) have shown that 8-year-old faces are more accurately recognized by 8-year-old children than by 6- or 11-year-old children, suggesting that the own-age bias develops rapidly. The present study explores the own-age bias in a developmental study of participants aged 6-10 years. Ninety participants (divided into 3 groups of 30 on the basis of their age at the first time of testing) undertook a standard old/new recognition paradigm in which their recognition accuracy was measured for 8- and 20-year-old faces. Results showed that when the participants were 8 years old, they recognized 8-year-old faces more accurately than when they were 7 or 9 years old. This effect was found to be based on mechanisms that differ from simple developmental improvement. This is the first study to show the development of the own-age bias in face recognition using a longitudinal design. These results show that the face recognition system is updated on the basis of recent experience and/or motivation to process faces, creating recognition biases.

  19. Passive and active recognition of one's own face.

    PubMed

    Sugiura, M; Kawashima, R; Nakamura, K; Okada, K; Kato, T; Nakamura, A; Hatano, K; Itoh, K; Kojima, S; Fukuda, H

    2000-01-01

    Facial identity recognition has been studied mainly with explicit discrimination requirement and faces of social figures in previous human brain imaging studies. We performed a PET activation study with normal volunteers in facial identity recognition tasks using the subject's own face as visual stimulus. Three tasks were designed so that the activation of the visual representation of the face and the effect of sustained attention to the representation could be separately examined: a control-face recognition task (C), a passive own-face recognition task (no explicit discrimination was required) (P), and an active own-face recognition task (explicit discrimination was required) (A). Increased skin conductance responses during recognition of own face were seen in both task P and task A, suggesting the occurrence of psychophysiological changes during recognition of one's own face. The left fusiform gyrus, the right supramarginal gyrus, the left putamen, and the right hypothalamus were activated in tasks P and A compared with task C. The left fusiform gyrus and the right supramarginal gyrus are considered to be involved in the representation of one's own face. The activation in the right supramarginal gyrus may be associated with the representation of one's own face as a part of one's own body. The prefrontal cortices, the right anterior cingulate, the right presupplementary motor area, and the left insula were specifically activated during task A compared with tasks C and P, indicating that these regions may be involved in the sustained attention to the representation of one's own face. PMID:10686115

  20. Covert face recognition relies on affective valence in congenital prosopagnosia.

    PubMed

    Bate, Sarah; Haslam, Catherine; Jansari, Ashok; Hodgson, Timothy L

    2009-06-01

    Dominant accounts of covert recognition in prosopagnosia assume subthreshold activation of face representations created prior to onset of the disorder. Yet, such accounts cannot explain covert recognition in congenital prosopagnosia, where the impairment is present from birth. Alternatively, covert recognition may rely on affective valence, yet no study has explored this possibility. The current study addressed this issue in 3 individuals with congenital prosopagnosia, using measures of the scanpath to indicate recognition. Participants were asked to memorize 30 faces paired with descriptions of aggressive, nice, or neutral behaviours. In a later recognition test, eye movements were monitored while participants discriminated studied from novel faces. Sampling was reduced for studied-nice compared to studied-aggressive faces, and performance for studied-neutral and novel faces fell between these two conditions. This pattern of findings suggests that (a) positive emotion can facilitate processing in prosopagnosia, and (b) covert recognition may rely on emotional valence rather than familiarity.

  1. Face age and sex modulate the other-race effect in face recognition.

    PubMed

    Wallis, Jennifer; Lipp, Ottmar V; Vanman, Eric J

    2012-11-01

    Faces convey a variety of socially relevant cues that have been shown to affect recognition, such as age, sex, and race, but few studies have examined the interactive effect of these cues. White participants of two distinct age groups were presented with faces that differed in race, age, and sex in a face recognition paradigm. Replicating the other-race effect, young participants recognized young own-race faces better than young other-race faces. However, recognition performance did not differ across old faces of different races (Experiments 1, 2A). In addition, participants showed an other-age effect, recognizing White young faces better than White old faces. Sex affected recognition performance only when age was not varied (Experiment 2B). Overall, older participants showed a similar recognition pattern (Experiment 3) as young participants, displaying an other-race effect for young, but not old, faces. However, they recognized young and old White faces on a similar level. These findings indicate that face cues interact to affect recognition performance such that age and sex information reliably modulate the effect of race cues. These results extend accounts of face recognition that explain recognition biases (such as the other-race effect) as a function of dichotomous ingroup/outgroup categorization, in that outgroup characteristics are not simply additive but interactively determine recognition performance.

  3. Newborns' Face Recognition: Role of Inner and Outer Facial Features

    ERIC Educational Resources Information Center

    Turati, Chiara; Macchi Cassia, Viola; Simion, Francesca; Leo, Irene

    2006-01-01

    Existing data indicate that newborns are able to recognize individual faces, but little is known about what perceptual cues drive this ability. The current study showed that either the inner or outer features of the face can act as sufficient cues for newborns' face recognition (Experiment 1), but the outer part of the face enjoys an advantage…

  4. Neural Substrates for Episodic Encoding and Recognition of Unfamiliar Faces

    ERIC Educational Resources Information Center

    Hofer, Alex; Siedentopf, Christian M.; Ischebeck, Anja; Rettenbacher, Maria A.; Verius, Michael; Golaszewski, Stefan M.; Felber, Stephan; Fleischhacker, W. Wolfgang

    2007-01-01

    Functional MRI was used to investigate brain activation in healthy volunteers during encoding of unfamiliar faces as well as during correct recognition of newly learned faces (CR) compared to correct identification of distractor faces (CF), missed alarms (not recognizing previously presented faces, MA), and false alarms (incorrectly recognizing…

  5. Isolating the Special Component of Face Recognition: Peripheral Identification and a Mooney Face

    ERIC Educational Resources Information Center

    McKone, Elinor

    2004-01-01

    A previous finding argues that, for faces, configural (holistic) processing can operate even in the complete absence of part-based contributions to recognition. Here, this result is confirmed using 2 methods. In both, recognition of inverted faces (parts only) was removed altogether (chance identification of faces in the periphery; no perception…

  6. Effects of compression and individual variability on face recognition performance

    NASA Astrophysics Data System (ADS)

    McGarry, Delia P.; Arndt, Craig M.; McCabe, Steven A.; D'Amato, Donald P.

    2004-08-01

    The Enhanced Border Security and Visa Entry Reform Act of 2002 requires that the Visa Waiver Program be available only to countries that have a program to issue to their nationals machine-readable passports incorporating biometric identifiers complying with applicable standards established by the International Civil Aviation Organization (ICAO). In June 2002, the New Technologies Working Group of ICAO unanimously endorsed the use of face recognition (FR) as the globally interoperable biometric for machine-assisted identity confirmation with machine-readable travel documents (MRTDs), although Member States may elect to use fingerprint and/or iris recognition as additional biometric technologies. The means and formats through which biometric information might be stored in the constrained space of integrated circuit chips embedded within travel documents are still being developed. Such information will be stored in an open, yet unalterable and very compact format, probably as digitally signed and efficiently compressed images. The objective of this research is to characterize the many factors that affect FR system performance with respect to the legislated mandates concerning FR. A photograph acquisition environment and a commercial face recognition system have been installed at Mitretek, and over 1,400 images of volunteers have been collected. The image database and FR system are being used to analyze the effects of lossy image compression, individual differences (such as eyeglasses and facial hair), and the acquisition environment on FR system performance. Images are compressed at varying ratios using JPEG2000 to determine the trade-off points between recognition accuracy and compression ratio. The various acquisition factors that contribute to differences in FR system performance among individuals are also being measured.
The results of this study will be used to refine and test efficient face image interchange standards that ensure highly accurate recognition, both
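    The trade-off study described above (compress at several ratios, then measure identification accuracy at each) can be illustrated with a toy sweep. Everything here is a stand-in: uniform quantisation substitutes for JPEG2000, nearest-neighbour matching on synthetic arrays substitutes for the commercial FR system, and the resulting numbers are purely illustrative.

```python
import numpy as np

def quantise(img, step):
    """Crude lossy-compression stand-in: larger steps discard more detail."""
    return np.round(img / step) * step

def recognition_accuracy(gallery, probes, step):
    """Rank-1 nearest-neighbour identification accuracy after both the
    gallery and the probes are degraded by quantisation."""
    g = quantise(gallery, step).reshape(len(gallery), -1)
    p = quantise(probes, step).reshape(len(probes), -1)
    dists = np.linalg.norm(p[:, None, :] - g[None, :, :], axis=2)
    return float(np.mean(np.argmin(dists, axis=1) == np.arange(len(probes))))

rng = np.random.default_rng(4)
gallery = rng.normal(size=(20, 16, 16))                  # 20 enrolled "faces"
probes = gallery + 0.1 * rng.normal(size=(20, 16, 16))   # noisy re-captures
# Sweep from mild to severe degradation and record the accuracy curve.
curve = {step: recognition_accuracy(gallery, probes, step)
         for step in (0.01, 1.0, 100.0)}
```

    The knee of such a curve (where accuracy starts dropping as compression increases) is the trade-off point the study sets out to locate for real imagery.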

  7. Familiar Face Recognition in Children with Autism: The Differential Use of Inner and Outer Face Parts

    ERIC Educational Resources Information Center

    Wilson, Rebecca; Pascalis, Olivier; Blades, Mark

    2007-01-01

    We investigated whether children with autistic spectrum disorders (ASD) have a deficit in recognising familiar faces. Children with ASD were given a forced choice familiar face recognition task with three conditions: full faces, inner face parts and outer face parts. Control groups were children with developmental delay (DD) and typically…

  8. Transfer between Pose and Illumination Training in Face Recognition

    ERIC Educational Resources Information Center

    Liu, Chang Hong; Bhuiyan, Md. Al-Amin; Ward, James; Sui, Jie

    2009-01-01

    The relationship between pose and illumination learning in face recognition was examined in a yes-no recognition paradigm. The authors assessed whether pose training can transfer to a new illumination or vice versa. Results show that an extensive level of pose training through a face-name association task was able to generalize to a new…

  9. Children's Recognition of Unfamiliar Faces: Developments and Determinants.

    ERIC Educational Resources Information Center

    Soppe, H. J. G.

    1986-01-01

    Eight- to 12-year-old primary school children and 13-year-old secondary school children were given a live and photographed face recognition task and several other figural tasks. While scores on most tasks increased with age, face recognition scores were affected by age, decreasing at age 12 (puberty onset). (Author/BB)

  10. When the face fits: recognition of celebrities from matching and mismatching faces and voices.

    PubMed

    Stevenage, Sarah V; Neil, Greg J; Hamlin, Iain

    2014-01-01

    The results of two experiments are presented in which participants engaged in a face-recognition or a voice-recognition task. The stimuli were face-voice pairs in which the face and voice were co-presented and were either "matched" (same person), "related" (two highly associated people), or "mismatched" (two unrelated people). Analysis in both experiments confirmed that accuracy and confidence in face recognition were consistently high regardless of the identity of the accompanying voice. However, accuracy of voice recognition was increasingly affected as the relationship between voice and accompanying face declined. Moreover, when considering self-reported confidence in voice recognition, confidence remained high for correct responses despite the proportion of these responses declining across conditions. These results converge with existing evidence indicating the vulnerability of voice recognition as a relatively weak signaller of identity, and results are discussed in the context of a person-recognition framework.

  12. Electrophysiological markers of covert face recognition in developmental prosopagnosia.

    PubMed

    Eimer, Martin; Gosling, Angela; Duchaine, Bradley

    2012-02-01

    To study the existence and neural basis of covert face recognition in individuals with developmental prosopagnosia, we tested a group of 12 participants with developmental prosopagnosia in a task that required them to judge the familiarity of successively presented famous or non-famous faces. Electroencephalography was recorded during task performance, and event-related brain potentials were computed for recognized famous faces, non-recognized famous faces and non-famous faces. In six individuals with developmental prosopagnosia, non-recognized famous faces triggered an occipito-temporal N250 component, which is thought to reflect the activation of stored visual memory traces of known individual faces. In contrast to the N250, the P600f component, which is linked to late semantic stages of face identity processing, was not triggered by non-recognized famous faces. Event-related potential correlates of explicit face recognition, obtained on those few trials where participants with developmental prosopagnosia classified famous faces as known or familiar, were similar to the effects previously found in participants with intact face recognition abilities, suggesting that face recognition mechanisms in individuals with developmental prosopagnosia are not qualitatively different from those of unimpaired individuals. Overall, these event-related potential results provide the first neurophysiological evidence for covert face recognition in developmental prosopagnosia, and suggest this phenomenon results from disconnected links between intact identity-specific visual memory traces and later semantic face processing stages. They also imply that the activation of stored visual representations of familiar faces is not sufficient for conscious explicit face recognition.

  13. Face averages enhance user recognition for smartphone security.

    PubMed

    Robertson, David J; Kramer, Robin S S; Burton, A Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average', a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.
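    The enrolment idea in this entry (store a pixel-wise average of several images of the user rather than a single snapshot, then verify probes against that average) can be sketched abstractly. This is a generic illustration on synthetic arrays: normalised correlation is a stand-in similarity score, not the smartphone's actual matcher, and the threshold is arbitrary.

```python
import numpy as np

def enrol_average(images):
    """Enrol a user as the pixel-wise average of several aligned images;
    averaging cancels view-specific noise while keeping stable structure."""
    return np.mean(np.stack(images), axis=0)

def verify(probe, template, threshold):
    """Accept if the probe is close enough to the stored template,
    using normalised correlation as a stand-in similarity score."""
    a = (probe - probe.mean()).ravel()
    b = (template - template.mean()).ravel()
    score = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return score >= threshold, score

rng = np.random.default_rng(1)
identity = rng.normal(size=(32, 32))  # the user's stable "true" face
# Eight enrolment captures, each corrupted by view-specific variation.
views = [identity + 0.5 * rng.normal(size=(32, 32)) for _ in range(8)]
template = enrol_average(views)
probe = identity + 0.5 * rng.normal(size=(32, 32))  # a new capture
accepted, score = verify(probe, template, threshold=0.3)
```

    Because the averaged template's noise shrinks with the number of enrolled views, a fresh probe correlates with it more reliably than with any single enrolled image, which is the intuition behind the reported unlock improvement.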

  15. Developmental Commonalities between Object and Face Recognition in Adolescence

    PubMed Central

    Jüttner, Martin; Wakui, Elley; Petters, Dean; Davidoff, Jules

    2016-01-01

    In the visual perception literature, the recognition of faces has often been contrasted with that of non-face objects, in terms of differences with regard to the role of parts, part relations and holistic processing. However, recent evidence from developmental studies has begun to blur this sharp distinction. We review evidence for a protracted development of object recognition that is reminiscent of the well-documented slow maturation observed for faces. The prolonged development manifests itself in a retarded processing of metric part relations as opposed to that of individual parts and offers surprising parallels to developmental accounts of face recognition, even though the interpretation of the data is less clear with regard to holistic processing. We conclude that such results might indicate functional commonalities between the mechanisms underlying the recognition of faces and non-face objects, which are modulated by different task requirements in the two stimulus domains. PMID:27014176

  16. Pose-Invariant Face Recognition via RGB-D Images.

    PubMed

    Sang, Gaoli; Li, Jing; Zhao, Qijun

    2016-01-01

    Three-dimensional (3D) face models can intrinsically handle the large-pose face recognition problem. In this paper, we propose a novel pose-invariant face recognition method based on RGB-D images. By employing depth, our method is able to handle self-occlusion and deformation, both of which are challenging problems in two-dimensional (2D) face recognition. Texture images in the gallery can be rendered to the same view as the probe via depth. Meanwhile, depth is also used for similarity measurement via frontalization and symmetric filling. Finally, both texture and depth contribute to the final identity estimation. Experiments on the Bosphorus, CurtinFaces, Eurecom, and Kiwi databases demonstrate that the additional depth information improves the performance of face recognition with large pose variations and under even more challenging conditions. PMID:26819581

  18. Graph Laplace for occluded face completion and recognition.

    PubMed

    Deng, Yue; Dai, Qionghai; Zhang, Zengke

    2011-08-01

    This paper proposes a spectral-graph-based algorithm for face image repair that improves recognition performance on occluded faces. The proposed face completion algorithm comprises three main procedures: 1) sparse representation for partially occluded face classification; 2) image-based data mining; and 3) graph Laplace (GL) for face image completion. The novel part of the framework is GL, named after graphical models and the Laplace equation, which achieves high-quality repair of damaged or occluded faces. The relationship between GL and the traditional Poisson equation is proven. We apply the face repair algorithm to produce completed faces, and use face recognition to evaluate its performance. Experimental results verify the effectiveness of the GL method for occluded face completion.
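    The graph Laplace method itself is only described at a high level here, but the classical idea it builds on (fill occluded pixels by solving the discrete Laplace equation, so each unknown pixel converges to the average of its neighbours) can be sketched as follows. This is generic harmonic inpainting by Jacobi iteration, not the authors' GL algorithm, and the ramp image is a toy stand-in for a face.

```python
import numpy as np

def harmonic_inpaint(img, mask, iters=500):
    """Fill masked (occluded) pixels by solving the discrete Laplace
    equation: iterate until each unknown pixel equals the average of its
    4-neighbours, giving a smooth completion anchored to known pixels."""
    out = img.copy()
    out[mask] = img[~mask].mean()  # neutral initial guess for the hole
    for _ in range(iters):
        avg = 0.25 * (np.roll(out, 1, 0) + np.roll(out, -1, 0)
                      + np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out[mask] = avg[mask]      # update unknowns only (Jacobi step)
    return out

# A linear ramp is exactly harmonic, so the hole should be repaired exactly.
img = np.fromfunction(lambda i, j: i + j, (16, 16), dtype=float)
mask = np.zeros((16, 16), dtype=bool)
mask[6:10, 6:10] = True            # occluded square in the interior
repaired = harmonic_inpaint(img, mask)
```

    Poisson-based methods like GL generalise this by matching a target gradient field rather than a zero Laplacian, which is why the paper proves the relationship to the Poisson equation.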

  19. Face recognition using improved-LDA with facial combined feature

    NASA Astrophysics Data System (ADS)

    Zhou, Dake; Yang, Xin; Peng, Ningsong

    2005-06-01

    Face recognition under varying conditions is a challenging task. This paper presents a combined-feature improved Fisher classifier method for face recognition, in which both holistic and local facial information are used for face representation. In addition, improved linear discriminant analysis (I-LDA) is employed for good generalization capability. Experiments show that the method is not only robust to moderate changes in illumination, pose, and facial expression but also superior to traditional methods such as eigenfaces and Fisherfaces.
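    The Fisher-classifier idea underlying this entry can be illustrated in its simplest two-class form: find the projection direction that maximises between-class scatter relative to within-class scatter, then classify by thresholding the projection. This sketch is standard Fisher LDA on synthetic vectors, not the paper's I-LDA or its combined facial features.

```python
import numpy as np

def fisher_direction(X0, X1):
    """Two-class Fisher linear discriminant: the direction w maximising
    between-class over within-class scatter, w = Sw^-1 (m1 - m0)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter: biased covariance times sample count per class.
    Sw = np.cov(X0.T, bias=True) * len(X0) + np.cov(X1.T, bias=True) * len(X1)
    # Small ridge keeps the solve stable if Sw is near-singular.
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(2)
X0 = rng.normal(loc=0.0, size=(50, 5))   # class 0 samples
X1 = rng.normal(loc=1.0, size=(50, 5))   # class 1 samples, shifted means
w = fisher_direction(X0, X1)
# Classify by projecting onto w and thresholding at the midpoint of
# the projected class means.
mid = ((X0 @ w).mean() + (X1 @ w).mean()) / 2
preds = np.concatenate([X0 @ w, X1 @ w]) > mid
```

    Eigenfaces (PCA) and Fisherfaces (PCA followed by LDA) extend this same criterion to many classes in image space, which is the baseline the paper compares against.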

  20. The activation of visual face memory and explicit face recognition are delayed in developmental prosopagnosia.

    PubMed

    Parketny, Joanna; Towler, John; Eimer, Martin

    2015-08-01

    Individuals with developmental prosopagnosia (DP) are strongly impaired in recognizing faces, but the causes of this deficit are not well understood. We employed event-related brain potentials (ERPs) to study the time-course of neural processes involved in the recognition of previously unfamiliar faces in DPs and in age-matched control participants with normal face recognition abilities. Faces of different individuals were presented sequentially in one of three possible views, and participants had to detect a specific Target Face ("Joe"). EEG was recorded during task performance to Target Faces, Nontarget Faces, or the participants' Own Face (which had to be ignored). The N250 component was measured as a marker of the match between a seen face and a stored representation in visual face memory. The subsequent P600f was measured as an index of attentional processes associated with the conscious awareness and recognition of a particular face. Target Faces elicited reliable N250 and P600f in the DP group, but both of these components emerged later in DPs than in control participants. This shows that the activation of visual face memory for previously unknown learned faces and the subsequent attentional processing and conscious recognition of these faces are delayed in DP. N250 and P600f components to Own Faces did not differ between the two groups, indicating that the processing of long-term familiar faces is less affected in DP. However, P600f components to Own Faces were absent in two participants with DP who failed to recognize their Own Face during the experiment. These results provide new evidence that face recognition deficits in DP may be linked to a delayed activation of visual face memory and explicit identity recognition mechanisms.

  2. A Spatial Frequency Account of the Detriment that Local Processing of Navon Letters Has on Face Recognition

    ERIC Educational Resources Information Center

    Hills, Peter J.; Lewis, Michael B.

    2009-01-01

    Five minutes of processing the local features of a Navon letter causes a detriment in subsequent face-recognition performance (Macrae & Lewis, 2002). We hypothesize a perceptual-aftereffect explanation of this effect, in which face recognition is less accurate after adaptation to high spatial frequencies at high contrast. Five experiments were…

  3. Impaired processing of self-face recognition in anorexia nervosa.

    PubMed

    Hirot, France; Lesage, Marine; Pedron, Lya; Meyer, Isabelle; Thomas, Pierre; Cottencin, Olivier; Guardia, Dewi

    2016-03-01

    Body image disturbances and massive weight loss are major clinical symptoms of anorexia nervosa (AN). The aim of the present study was to examine the influence of body changes and eating attitudes on self-face recognition ability in AN. Twenty-seven subjects suffering from AN and 27 control participants performed a self-face recognition task (SFRT). During the task, digital morphs between their own face and a gender-matched unfamiliar face were presented in a random sequence. Participants' self-face recognition failures, cognitive flexibility, body concern and eating habits were assessed with the Self-Face Recognition Questionnaire (SFRQ), Trail Making Test (TMT), Body Shape Questionnaire (BSQ) and Eating Disorder Inventory-2 (EDI-2), respectively. Subjects suffering from AN exhibited significantly greater difficulties than control participants in identifying their own face (p = 0.028). No significant difference was observed between the two groups for TMT (all p > 0.1, non-significant). Regarding predictors of self-face recognition skills, there was a negative correlation between SFRT and body mass index (p = 0.01) and a positive correlation between SFRQ and EDI-2 (p < 0.001) or BSQ (p < 0.001). Among factors involved, nutritional status and intensity of eating disorders could play a part in impaired self-face recognition. PMID:26420298

  4. Partial least squares regression on DCT domain for infrared face recognition

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua

    2014-09-01

    Compact and discriminative feature extraction is a challenging task for infrared face recognition. In this paper, we propose an infrared face recognition method using Partial Least Squares (PLS) regression on Discrete Cosine Transform (DCT) coefficients. Owing to its strong data de-correlation and energy compaction properties, the DCT is used to obtain compact features from infrared faces. To extract the discriminative information in the DCT coefficients, a class-specific one-to-rest PLS classifier is learned for accurate classification. The infrared data were collected with a ThermoVision A40 infrared camera supplied by FLIR Systems Inc. The experimental results show that the proposed algorithm reaches a recognition rate of 95.8%, outperforming state-of-the-art infrared face recognition methods based on Linear Discriminant Analysis (LDA) and the DCT.
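
    As a rough sketch of the feature-extraction step described above (not the authors' implementation; the orthonormal DCT construction and the 8×8 low-frequency block size are assumptions), the compact DCT feature of a face image can be computed as follows:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (rows are cosine basis vectors).
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2)
    return m * np.sqrt(2.0 / n)

def dct2_features(img, keep=8):
    # 2D DCT of the image; the low-frequency top-left block is the compact feature.
    h, w = img.shape
    coeffs = dct_matrix(h) @ img @ dct_matrix(w).T
    return coeffs[:keep, :keep].ravel()

face = np.random.default_rng(0).random((64, 64))  # stand-in for an infrared face
print(dct2_features(face).shape)  # (64,)
```

    Keeping only the low-frequency block exploits the energy-compaction property mentioned in the abstract: most of the face's energy concentrates in the top-left DCT coefficients.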

  5. Covert recognition and the neural system for face processing.

    PubMed

    Schweinberger, Stefan R; Burton, A Mike

    2003-02-01

    In this viewpoint, we discuss the new evidence on covert face recognition in prosopagnosia presented by Bobes et al. (2003, this issue) and by Sperber and Spinnler (2003, this issue). Contrary to earlier hypotheses, both papers agree that covert and overt face recognition are based on the same mechanism. In line with this suggestion, an analysis of reported cases with prosopagnosia indicates that a degree of successful encoding of facial representations is a prerequisite for covert recognition to occur. While we agree with this general conclusion as far as Bobes et al.'s and Sperber and Spinnler's data are concerned, we also discuss evidence for a dissociation between different measures of covert recognition. Specifically, studies in patients with Capgras delusion and patients with prosopagnosia suggest that skin conductance and behavioural indexes of covert face recognition are mediated by partially different mechanisms. We also discuss implications of the new data for models of normal face recognition that have been successful in simulating covert recognition phenomena (e.g., Young and Burton, 1999, and O'Reilly et al., 1999). Finally, in reviewing recent neurophysiological and brain imaging evidence concerning the neural system for face processing, we argue that the relationship between ERP components (specifically, N170, N250r, and N400) and different cognitive processes in face recognition is beginning to emerge. PMID:12627750

  6. Tolerance of geometric distortions in infant's face recognition.

    PubMed

    Yamashita, Wakayo; Kanazawa, So; Yamaguchi, Masami K

    2014-02-01

    The aim of the current study was to reveal the effect of global linear transformations (shearing, horizontal stretching, and vertical stretching) on the recognition of familiar faces (e.g., a mother's face) in 6- to 7-month-old infants. In this experiment, we applied the global linear transformations to both the infant's own mother's face and a stranger's face, and we tested infants' preference between these faces. We found that only 7-month-old infants maintained a preference for their own mother's face during the presentation of vertical stretching, while the preference for the mother's face disappeared during the presentation of shearing or horizontal stretching. These findings suggest that 7-month-old infants might not recognize faces by computing absolute distances between facial features, and that the vertical dimension of facial features may be more relevant to infants' face recognition than the horizontal dimension.

  7. Research on face recognition based on singular value decomposition

    NASA Astrophysics Data System (ADS)

    Liang, Yixiong; Gong, Weiguo; Pan, Yingjun; Liu, Jiamin; Li, Weihong; Zhang, Hongmei

    2004-08-01

    Singular value (SV) feature vectors of face images have recently been used as features for face recognition. Although SVs have important algebraic and geometric invariance properties and are insensitive to noise, each is a representation of a face image in its own eigen-space, spanned by the two orthogonal matrices of its singular value decomposition (SVD), and therefore contains little information useful for face recognition. This study concentrates on extracting a more informative feature from frontal, upright face images based on the SVD and proposes an improved face recognition method. After intensity normalization, all training and testing face images are projected onto a uniform eigen-space obtained from the SVD of a standard face image. For greater computational efficiency, the dimension of the uniform eigen-space is reduced by discarding the eigenvectors whose corresponding eigenvalues are close to zero. A Euclidean distance classifier is adopted for recognition. Two standard databases, from Yale University and the Olivetti Research Laboratory, were selected to evaluate the recognition accuracy of the proposed method. These databases include face images with different expressions, small occlusions, different illumination conditions, and different poses. Experimental results on the two face databases show the effectiveness of the method and its insensitivity to facial expression, illumination, and pose.
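
    A minimal sketch of the projection-and-classify pipeline (assuming a mean face as the "standard" face and toy random images; not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)
gallery = {name: rng.random((32, 32)) for name in ["a", "b", "c"]}

# Uniform eigen-space from the SVD of a "standard" (here: mean) face.
standard = np.mean(list(gallery.values()), axis=0)
U, s, Vt = np.linalg.svd(standard)
k = int(np.sum(s > 1e-10))     # drop directions with near-zero singular values
U_k, V_k = U[:, :k], Vt[:k].T

def project(img):
    # Project a face image onto the shared eigen-space.
    return (U_k.T @ img @ V_k).ravel()

def recognize(probe):
    # Euclidean nearest neighbour in the projected space.
    p = project(probe)
    return min(gallery, key=lambda n: np.linalg.norm(project(gallery[n]) - p))

probe = gallery["b"] + 0.01 * rng.random((32, 32))
print(recognize(probe))  # b
```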

  8. Multi-feature fusion for thermal face recognition

    NASA Astrophysics Data System (ADS)

    Bi, Yin; Lv, Mingsong; Wei, Yangjie; Guan, Nan; Yi, Wang

    2016-07-01

    Human face recognition has been researched for the last three decades. Face recognition with thermal images now attracts significant attention because it can be used in low-light or unlit environments. However, thermal face recognition performance is still insufficient for practical applications. One main reason is that most existing work leverages only a single feature to characterize a face in a thermal image. To address this, we propose multi-feature fusion, a technique that combines multiple features in thermal face characterization and recognition. In this work, we designed a systematic way to combine four features: the local binary pattern, the Gabor jet descriptor, the Weber local descriptor, and a down-sampling feature. Experimental results show that our approach outperforms methods that leverage only a single feature and is robust to noise, occlusion, expression, low resolution, and different l1-minimization methods.
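
    Two of the four features are easy to sketch. The following toy example (hypothetical parameter choices; the paper's exact descriptors differ) computes a basic local-binary-pattern histogram and a block-average down-sampling feature, then concatenates them as a simple fusion:

```python
import numpy as np

def lbp_histogram(img):
    # 8-neighbour local binary pattern; histogram over all 256 codes.
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.uint8) << bit
    return np.bincount(code.ravel(), minlength=256) / code.size

def downsample(img, f=4):
    # Block-average down-sampling feature.
    h, w = img.shape
    return img[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3)).ravel()

def fused_features(img):
    # Simple fusion: concatenate the per-feature vectors.
    return np.concatenate([lbp_histogram(img), downsample(img)])

img = np.random.default_rng(2).random((32, 32))  # stand-in for a thermal face
print(fused_features(img).shape)  # (320,)
```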

  9. 3D face recognition based on matching of facial surfaces

    NASA Astrophysics Data System (ADS)

    Echeagaray-Patrón, Beatriz A.; Kober, Vitaly

    2015-09-01

    Face recognition is an important task in pattern recognition and computer vision. In this work a method for 3D face recognition in the presence of facial expression and poses variations is proposed. The method uses 3D shape data without color or texture information. A new matching algorithm based on conformal mapping of original facial surfaces onto a Riemannian manifold followed by comparison of conformal and isometric invariants computed in the manifold is suggested. Experimental results are presented using common 3D face databases that contain significant amount of expression and pose variations.

  10. Perception and recognition of faces in adolescence

    PubMed Central

    Fuhrmann, D.; Knoll, L. J.; Sakhardande, A. L.; Speekenbrink, M.; Kadosh, K. C.; Blakemore, S. -J.

    2016-01-01

    Most studies on the development of face cognition abilities have focussed on childhood, with early maturation accounts contending that face cognition abilities are mature by 3–5 years. Late maturation accounts, in contrast, propose that some aspects of face cognition are not mature until at least 10 years. Here, we measured face memory and face perception, two core face cognition abilities, in 661 participants (397 females) in four age groups (younger adolescents (11.27–13.38 years); mid-adolescents (13.39–15.89 years); older adolescents (15.90–18.00 years); and adults (18.01–33.15 years)) while controlling for differences in general cognitive ability. We showed that both face cognition abilities mature relatively late, at around 16 years, with a female advantage in face memory, but not in face perception, both in adolescence and adulthood. Late maturation in the face perception task was driven mainly by protracted development in identity perception, while gaze perception abilities were already comparatively mature in early adolescence. These improvements in the ability to memorize, recognize and perceive faces during adolescence may be related to increasing exploratory behaviour and exposure to novel faces during this period of life. PMID:27647477

  11. Perception and recognition of faces in adolescence.

    PubMed

    Fuhrmann, D; Knoll, L J; Sakhardande, A L; Speekenbrink, M; Kadosh, K C; Blakemore, S-J

    2016-01-01

    Most studies on the development of face cognition abilities have focussed on childhood, with early maturation accounts contending that face cognition abilities are mature by 3-5 years. Late maturation accounts, in contrast, propose that some aspects of face cognition are not mature until at least 10 years. Here, we measured face memory and face perception, two core face cognition abilities, in 661 participants (397 females) in four age groups (younger adolescents (11.27-13.38 years); mid-adolescents (13.39-15.89 years); older adolescents (15.90-18.00 years); and adults (18.01-33.15 years)) while controlling for differences in general cognitive ability. We showed that both face cognition abilities mature relatively late, at around 16 years, with a female advantage in face memory, but not in face perception, both in adolescence and adulthood. Late maturation in the face perception task was driven mainly by protracted development in identity perception, while gaze perception abilities were already comparatively mature in early adolescence. These improvements in the ability to memorize, recognize and perceive faces during adolescence may be related to increasing exploratory behaviour and exposure to novel faces during this period of life. PMID:27647477

  12. The Impact of Early Bilingualism on Face Recognition Processes.

    PubMed

    Kandel, Sonia; Burfin, Sabine; Méary, David; Ruiz-Tada, Elisa; Costa, Albert; Pascalis, Olivier

    2016-01-01

    Early linguistic experience has an impact on the way we decode audiovisual speech in face-to-face communication. The present study examined whether differences in visual speech decoding could be linked to a broader difference in face processing. To identify a phoneme we have to do an analysis of the speaker's face to focus on the relevant cues for speech decoding (e.g., locating the mouth with respect to the eyes). Face recognition processes were investigated through two classic effects in face recognition studies: the Other-Race Effect (ORE) and the Inversion Effect. Bilingual and monolingual participants did a face recognition task with Caucasian faces (own race), Chinese faces (other race), and cars that were presented in an Upright or Inverted position. The results revealed that monolinguals exhibited the classic ORE. Bilinguals did not. Overall, bilinguals were slower than monolinguals. These results suggest that bilinguals' face processing abilities differ from monolinguals'. Early exposure to more than one language may lead to a perceptual organization that goes beyond language processing and could extend to face analysis. We hypothesize that these differences could be due to the fact that bilinguals focus on different parts of the face than monolinguals, making them more efficient in other race face processing but slower. However, more studies using eye-tracking techniques are necessary to confirm this explanation. PMID:27486422

  13. The Impact of Early Bilingualism on Face Recognition Processes

    PubMed Central

    Kandel, Sonia; Burfin, Sabine; Méary, David; Ruiz-Tada, Elisa; Costa, Albert; Pascalis, Olivier

    2016-01-01

    Early linguistic experience has an impact on the way we decode audiovisual speech in face-to-face communication. The present study examined whether differences in visual speech decoding could be linked to a broader difference in face processing. To identify a phoneme we have to do an analysis of the speaker’s face to focus on the relevant cues for speech decoding (e.g., locating the mouth with respect to the eyes). Face recognition processes were investigated through two classic effects in face recognition studies: the Other-Race Effect (ORE) and the Inversion Effect. Bilingual and monolingual participants did a face recognition task with Caucasian faces (own race), Chinese faces (other race), and cars that were presented in an Upright or Inverted position. The results revealed that monolinguals exhibited the classic ORE. Bilinguals did not. Overall, bilinguals were slower than monolinguals. These results suggest that bilinguals’ face processing abilities differ from monolinguals’. Early exposure to more than one language may lead to a perceptual organization that goes beyond language processing and could extend to face analysis. We hypothesize that these differences could be due to the fact that bilinguals focus on different parts of the face than monolinguals, making them more efficient in other race face processing but slower. However, more studies using eye-tracking techniques are necessary to confirm this explanation. PMID:27486422

  14. Effective face recognition using bag of features with additive kernels

    NASA Astrophysics Data System (ADS)

    Yang, Shicai; Bebis, George; Chu, Yongjie; Zhao, Lindu

    2016-01-01

    In past decades, many techniques have been used to improve face recognition performance. The most common and well-studied approaches use the whole face image to build a subspace through dimensionality reduction. Differing from these methods, we treat face recognition as an image classification problem: face images of the same person are considered to fall into the same category, and each category and each face image can be represented by a simple pyramid histogram. Dense scale-invariant feature transform features and the bag-of-features method are used to build categories and face representations. To make the method more efficient, a linear support vector machine solver, Pegasos, is used for classification in the kernel space with additive kernels instead of nonlinear SVMs. Our experimental results demonstrate that the proposed method achieves very high recognition accuracy on the ORL, YALE, and FERET databases.
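
    The Pegasos solver mentioned above is a stochastic sub-gradient method for the linear SVM objective. A minimal sketch on toy two-class data (hyperparameters and data are illustrative, not from the paper):

```python
import numpy as np

def pegasos(X, y, lam=0.01, iters=2000, seed=0):
    # Pegasos: at each step, sample one example and take a sub-gradient
    # step on the regularized hinge-loss objective with rate 1/(lam * t).
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for t in range(1, iters + 1):
        i = rng.integers(len(X))
        eta = 1.0 / (lam * t)
        if y[i] * (X[i] @ w) < 1:          # margin violated: hinge term active
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:
            w = (1 - eta * lam) * w
    return w

# Toy linearly separable "histogram-like" features for two identities.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(1, 0.2, (20, 5)), rng.normal(-1, 0.2, (20, 5))])
y = np.array([1] * 20 + [-1] * 20)
w = pegasos(X, y)
acc = float(np.mean(np.sign(X @ w) == y))
print(acc)  # high accuracy on this separable toy set
```

    In the paper's setting, the additive kernels would be handled by an explicit feature mapping applied to the pyramid histograms before running this linear solver.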

  15. Implementation of perceptual aspects in a face recognition algorithm

    NASA Astrophysics Data System (ADS)

    Crenna, F.; Zappa, E.; Bovio, L.; Testa, R.; Gasparetto, M.; Rossi, G. B.

    2013-09-01

    Automatic face recognition is a biometric technique particularly appreciated in security applications. Face recognition offers the opportunity to operate at a low level of invasiveness, without the collaboration of the subjects under test, using face images gathered either from surveillance systems or from dedicated cameras located at strategic points. Automatic recognition algorithms measure a set of specific characteristics of the subject in the face images and provide a recognition decision based on the measurement results. Unfortunately, several quantities may influence the measurement of face geometry, such as orientation, lighting conditions, and expression, affecting the recognition rate. Human face recognition, on the other hand, is a very robust process, far less influenced by the surrounding conditions. For this reason it may be interesting to incorporate perceptual aspects into an automatic face-based recognition algorithm to improve its robustness. This paper presents a first study in this direction, investigating the correlation between the results of a perception experiment and the facial geometry, estimated by means of the positions of a set of landmark (repère) points.

  16. Newborns' face recognition: role of inner and outer facial features.

    PubMed

    Turati, Chiara; Macchi Cassia, Viola; Simion, Francesca; Leo, Irene

    2006-01-01

    Existing data indicate that newborns are able to recognize individual faces, but little is known about what perceptual cues drive this ability. The current study showed that either the inner or outer features of the face can act as sufficient cues for newborns' face recognition (Experiment 1), but the outer part of the face enjoys an advantage over the inner part (Experiment 2). Inversion of the face stimuli disrupted recognition when only the inner portion of the face was shown, but not when the whole face was fully visible or only the outer features were presented (Experiment 3). The results enhance our picture of what information newborns actually process and encode when they discriminate, learn, and recognize faces.

  17. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition

    PubMed Central

    Wang, Rong

    2015-01-01

    In real-world applications, face images vary with illumination, facial expression, and pose. More training samples can reveal more of the possible appearances of a face. Although minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from the limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples: we generate the mirror faces from the original training samples and combine the two kinds of samples into a new training set. Face recognition experiments show that our method achieves high classification accuracy. PMID:26576452
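
    The mirror-augmentation idea can be sketched as follows (toy random "faces" and a plain least-squares MSEC formulation; sizes and names are illustrative, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(4)
train = {c: rng.random((8, 8)) for c in range(3)}  # one toy face per class

def with_mirrors(samples):
    # Augment each face with its horizontal mirror as a virtual training sample.
    X, y = [], []
    for c, img in samples.items():
        for v in (img, img[:, ::-1]):
            X.append(v.ravel())
            y.append(c)
    return np.array(X), np.array(y)

X, y = with_mirrors(train)
T = np.eye(3)[y]                            # one-hot class targets
W = np.linalg.lstsq(X, T, rcond=None)[0]    # minimum squared error solution

def classify(img):
    # Predict the class whose target the projection approximates best.
    return int(np.argmax(img.ravel() @ W))

print(classify(train[1]))  # 1
```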

  18. A new accurate pill recognition system using imprint information

    NASA Astrophysics Data System (ADS)

    Chen, Zhiyuan; Kamata, Sei-ichiro

    2013-12-01

    Great achievements in modern medicine benefit human beings, but they have also brought about explosive growth in the number of pharmaceuticals on the market. In daily life, unlabeled pills can be difficult to identify. In this paper, we propose an automatic pill recognition technique to solve this problem. It functions mainly on the basis of the imprint feature of the pills, which is extracted by the proposed modified stroke width transform (MSWT) and described by a weighted shape context (WSC). Experiments show that the proposed pill recognition method reaches an accuracy of 92.03% within the top 5 ranks when classifying more than 10 thousand query pill images into around 2000 categories.

  19. Face recognition in newly hatched chicks at the onset of vision.

    PubMed

    Wood, Samantha M W; Wood, Justin N

    2015-04-01

    How does face recognition emerge in the newborn brain? To address this question, we used an automated controlled-rearing method with a newborn animal model: the domestic chick (Gallus gallus). This automated method allowed us to examine chicks' face recognition abilities at the onset of both face experience and object experience. In the first week of life, newly hatched chicks were raised in controlled-rearing chambers that contained no objects other than a single virtual human face. In the second week of life, we used an automated forced-choice testing procedure to examine whether chicks could distinguish that familiar face from a variety of unfamiliar faces. Chicks successfully distinguished the familiar face from most of the unfamiliar faces; for example, chicks were sensitive to changes in the face's age, gender, and orientation (upright vs. inverted). Thus, chicks can build an accurate representation of the first face they see in their life. These results show that the initial state of face recognition is surprisingly powerful: Newborn visual systems can begin encoding and recognizing faces at the onset of vision.

  20. Understanding eye movements in face recognition using hidden Markov models.

    PubMed

    Chuk, Tim; Chan, Antoni B; Hsiao, Janet H

    2014-09-16

    We used a hidden Markov model (HMM)-based approach to analyze eye movement data in face recognition. HMMs are statistical models specialized in handling time-series data. We conducted a face recognition task with Asian participants and modeled each participant's eye movement pattern with an HMM, which summarized the participant's scan paths in face recognition with both regions of interest and the transition probabilities among them. By clustering these HMMs, we showed that participants' eye movements could be categorized into holistic or analytic patterns, demonstrating significant individual differences even within the same culture. Participants with the analytic pattern had longer response times but did not differ significantly in recognition accuracy from those with the holistic pattern. We also found that correct and incorrect recognitions were associated with distinctive eye movement patterns; the difference between the two patterns lies in the transitions rather than in the locations of the fixations alone.
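
    A toy version of such an HMM (two hypothetical ROI states with 1-D Gaussian emissions; all parameters invented for illustration) shows how a scan path's likelihood is evaluated with the forward algorithm:

```python
import numpy as np

# Hypothetical two-ROI HMM: transitions between ROIs and Gaussian emissions
# over a normalized fixation coordinate.
A = np.array([[0.7, 0.3],          # ROI-to-ROI transition probabilities
              [0.4, 0.6]])
pi = np.array([0.8, 0.2])          # initial ROI probabilities
means, stds = np.array([0.3, 0.8]), np.array([0.1, 0.1])

def gaussian(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def log_likelihood(fixations):
    # Scaled forward algorithm: log-probability of a scan path under the HMM.
    alpha = pi * gaussian(fixations[0], means, stds)
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for x in fixations[1:]:
        alpha = (alpha @ A) * gaussian(x, means, stds)
        ll += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return ll

# A path that stays near the first ROI fits this HMM better than one that jumps.
print(log_likelihood([0.3, 0.3]) > log_likelihood([0.3, 0.8]))  # True
```

    Clustering participants would then compare the HMMs themselves (e.g., by the likelihood each model assigns to the other participants' scan paths).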

  1. Face engagement during infancy predicts later face recognition ability in younger siblings of children with autism.

    PubMed

    de Klerk, Carina C J M; Gliga, Teodora; Charman, Tony; Johnson, Mark H

    2014-07-01

    Face recognition difficulties are frequently documented in children with autism spectrum disorders (ASD). It has been hypothesized that these difficulties result from a reduced interest in faces early in life, leading to decreased cortical specialization and atypical development of the neural circuitry for face processing. However, a recent study by our lab demonstrated that infants at increased familial risk for ASD, irrespective of their diagnostic status at 3 years, exhibit a clear orienting response to faces. The present study was conducted as a follow-up on the same cohort to investigate how measures of early engagement with faces relate to face-processing abilities later in life. We also investigated whether face recognition difficulties are specifically related to an ASD diagnosis, or whether they are present at a higher rate in all those at familial risk. At 3 years we found a reduced ability to recognize unfamiliar faces in the high-risk group that was not specific to those children who received an ASD diagnosis, consistent with face recognition difficulties being an endophenotype of the disorder. Furthermore, we found that longer looking at faces at 7 months was associated with poorer performance on the face recognition task at 3 years in the high-risk group. These findings suggest that longer looking at faces in infants at risk for ASD might reflect early face-processing difficulties and predicts difficulties with recognizing faces later in life.

  2. Culture moderates the relationship between interdependence and face recognition

    PubMed Central

    Ng, Andy H.; Steele, Jennifer R.; Sasaki, Joni Y.; Sakamoto, Yumiko; Williams, Amanda

    2015-01-01

    Recent theory suggests that face recognition accuracy is affected by people’s motivations, with people being particularly motivated to remember ingroup versus outgroup faces. In the current research we suggest that those higher in interdependence should have a greater motivation to remember ingroup faces, but this should depend on how ingroups are defined. To examine this possibility, we used a joint individual difference and cultural approach to test (a) whether individual differences in interdependence would predict face recognition accuracy, and (b) whether this effect would be moderated by culture. In Study 1 European Canadians higher in interdependence demonstrated greater recognition for same-race (White), but not cross-race (East Asian) faces. In Study 2 we found that culture moderated this effect. Interdependence again predicted greater recognition for same-race (White), but not cross-race (East Asian) faces among European Canadians; however, interdependence predicted worse recognition for both same-race (East Asian) and cross-race (White) faces among first-generation East Asians. The results provide insight into the role of motivation in face perception as well as cultural differences in the conception of ingroups. PMID:26579011

  3. A recurrent dynamic model for correspondence-based face recognition.

    PubMed

    Wolfrum, Philipp; Wolff, Christian; Lücke, Jörg; von der Malsburg, Christoph

    2008-01-01

    Our aim here is to create a fully neural, functionally competitive, and correspondence-based model for invariant face recognition. By recurrently integrating information about feature similarities, spatial feature relations, and facial structure stored in memory, the system evaluates face identity ("what"-information) and face position ("where"-information) using explicit representations for both. The network consists of three functional layers of processing, (1) an input layer for image representation, (2) a middle layer for recurrent information integration, and (3) a gallery layer for memory storage. Each layer consists of cortical columns as functional building blocks that are modeled in accordance with recent experimental findings. In numerical simulations we apply the system to standard benchmark databases for face recognition. We find that recognition rates of our biologically inspired approach lie in the same range as recognition rates of recent and purely functionally motivated systems. PMID:19146266

  4. Face recognition in simulated prosthetic vision: face detection-based image processing strategies

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Wu, Xiaobei; Lu, Yanyu; Wu, Hao; Kan, Han; Chai, Xinyu

    2014-08-01

    Objective. Given the limited visual percepts elicited by current prosthetic devices, it is essential to optimize image content in order to assist implant wearers to achieve better performance of visual tasks. This study focuses on recognition of familiar faces using simulated prosthetic vision. Approach. Combined with region-of-interest (ROI) magnification, three face extraction strategies based on a face detection technique were used: the Viola-Jones face region, the statistical face region (SFR) and the matting face region. Main results. These strategies significantly enhanced recognition performance compared to directly lowering resolution (DLR) with Gaussian dots. The inclusion of certain external features, such as hairstyle, was beneficial for face recognition. Given the high recognition accuracy achieved and applicable processing speed, SFR-ROI was the preferred strategy. DLR processing resulted in significant face gender recognition differences (i.e. females were more easily recognized than males), but these differences were not apparent with other strategies. Significance. Face detection-based image processing strategies improved visual perception by highlighting useful information. Their use is advisable for face recognition when using low-resolution prosthetic vision. These results provide information for the continued design of image processing modules for use in visual prosthetics, thus maximizing the benefits for future prosthesis wearers.
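
    The ROI-based pipeline can be caricatured in a few lines: crop the detected face region, then resample it onto the low-resolution phosphene grid (the ROI coordinates and grid size below are illustrative; the paper's Gaussian-dot rendering is omitted):

```python
import numpy as np

def roi_magnified_phosphenes(img, roi, grid=(16, 16)):
    # Crop the detected face ROI, then down-sample it to the phosphene grid,
    # so the face fills the limited number of percepts the prosthesis elicits.
    y0, y1, x0, x1 = roi
    face = img[y0:y1, x0:x1]
    gy, gx = grid
    h, w = face.shape
    # Nearest-neighbour resampling onto the low-resolution grid.
    ys = np.arange(gy) * h // gy
    xs = np.arange(gx) * w // gx
    return face[np.ix_(ys, xs)]

img = np.random.default_rng(5).random((128, 128))   # stand-in camera frame
low_res = roi_magnified_phosphenes(img, (32, 96, 32, 96))
print(low_res.shape)  # (16, 16)
```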

  5. What Types of Visual Recognition Tasks Are Mediated by the Neural Subsystem that Subserves Face Recognition?

    ERIC Educational Resources Information Center

    Brooks, Brian E.; Cooper, Eric E.

    2006-01-01

    Three divided visual field experiments tested current hypotheses about the types of visual shape representation tasks that recruit the cognitive and neural mechanisms underlying face recognition. Experiment 1 found a right hemisphere advantage for subordinate but not basic-level face recognition. Experiment 2 found a right hemisphere advantage for…

  6. Face Recognition System with Holographic Memory and Stereovision Technology

    NASA Astrophysics Data System (ADS)

    Honma, Satoshi; Yagisawa, Yasuaki; Momose, Hidetomo; Sekiguchi, Toru

    2011-09-01

    We have proposed a face recognition system with holographic memory and stereovision technology (FARSHAS). In this system, facial three-dimensional data are captured by stereovision technology, and the facial images at a position in front of a virtual camera are then reconstructed automatically. Using the corrected facial images, we theoretically estimated the error rate of the face recognition system.

  7. Face recognition performance of individuals with Asperger syndrome on the Cambridge Face Memory Test.

    PubMed

    Hedley, Darren; Brewer, Neil; Young, Robyn

    2011-12-01

    Although face recognition deficits in individuals with Autism Spectrum Disorder (ASD), including Asperger syndrome (AS), are widely acknowledged, the empirical evidence is mixed. This in part reflects the failure to use standardized and psychometrically sound tests. We contrasted standardized face recognition scores on the Cambridge Face Memory Test (CFMT) for 34 individuals with AS with those for 42 IQ-matched non-ASD individuals, and with age-standardized scores from a large Australian cohort. We also examined the influence of IQ, autistic traits, and negative affect on face recognition performance. Overall, participants with AS performed significantly worse on the CFMT than the non-ASD participants and when evaluated against standardized test norms. However, while 24% of participants with AS presented with severe face recognition impairment (>2 SDs below the mean), many individuals performed at or above the typical level for their age: 53% scored within +/- 1 SD of the mean and 9% demonstrated superior performance (>1 SD above the mean). Regression analysis provided no evidence that IQ, autistic traits, or negative affect significantly influenced face recognition: diagnostic group membership was the only significant predictor of face recognition performance. In sum, face recognition performance in ASD is on a continuum, but with average levels significantly below non-ASD levels of performance.

  8. Fast face recognition by using an inverted index

    NASA Astrophysics Data System (ADS)

    Herrmann, Christian; Beyerer, Jürgen

    2015-02-01

    This contribution addresses the task of searching for faces in large video datasets. Despite vast progress in the field, face recognition remains a challenge for uncontrolled large-scale applications such as searching for persons in surveillance footage or internet videos. Current productive systems focus on the best-shot approach, where only one representative frame from a given face track is selected, thus sacrificing recognition performance; systems achieving state-of-the-art recognition performance, like the recently published DeepFace, ignore recognition speed, which makes them impractical for large-scale applications. We suggest a set of measures to address the problem. First, considering the feature location allows collecting the extracted features into corresponding sets. Second, the inverted-index approach, which became popular in the area of image retrieval, is applied to these feature sets. A face track is thus described by a set of locally indexed visual words, which enables a fast search. In this way, all information from a face track is retained, allowing better recognition performance than best-shot approaches, and the inverted index permits consistently high recognition speeds. Evaluation on a dataset of several thousand videos shows the validity of the proposed approach.
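
    A minimal inverted-index sketch for face tracks (synthetic descriptors and a random codebook; real systems learn the codebook and score votes more carefully):

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(3)
codebook = rng.random((16, 8))     # 16 visual words over 8-D local descriptors

def quantize(descriptors):
    # Assign each local descriptor to its nearest visual word.
    d = np.linalg.norm(descriptors[:, None] - codebook[None], axis=2)
    return d.argmin(axis=1)

index = defaultdict(set)           # inverted index: visual word -> face-track ids

def add_track(track_id, descriptors):
    for w in quantize(descriptors):
        index[w].add(track_id)

def search(descriptors):
    # Vote for tracks sharing visual words with the query; only the posting
    # lists of the query's words are touched, which keeps lookup fast.
    votes = defaultdict(int)
    for w in set(quantize(descriptors)):
        for t in index[w]:
            votes[t] += 1
    return max(votes, key=votes.get) if votes else None

# Two synthetic tracks whose local features cluster around disjoint words.
alice = codebook[:8] + rng.normal(0, 0.01, (8, 8))
bob = codebook[8:] + rng.normal(0, 0.01, (8, 8))
add_track("alice", alice)
add_track("bob", bob)
print(search(alice + 0.001))  # alice
```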

  9. Illumination-invariant face recognition in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Pan, Zhihong; Healey, Glenn E.; Prasad, Manish; Tromberg, Bruce J.

    2003-09-01

    We examine the performance of illumination-invariant face recognition in hyperspectral images on a database of 200 subjects. The images are acquired over the near-infrared spectral range of 0.7-1.0 microns. Each subject is imaged over a range of facial orientations and expressions. Faces are represented by local spectral information for several tissue types. Illumination variation is modeled by low-dimensional linear subspaces of reflected radiance spectra. One hundred outdoor illumination spectra measured at Boulder, Colorado are used to synthesize the radiance spectra for the face tissue types. Weighted invariant subspace projection over multiple tissue types is used for recognition. Illumination-invariant face recognition is tested for various face rotations as well as different facial expressions.

  10. Newborns' Face Recognition over Changes in Viewpoint

    ERIC Educational Resources Information Center

    Turati, Chiara; Bulf, Hermann; Simion, Francesca

    2008-01-01

    The study investigated the origins of the ability to recognize faces despite rotations in depth. Four experiments are reported that tested, using the habituation technique, whether 1-to-3-day-old infants are able to recognize the invariant aspects of a face over changes in viewpoint. Newborns failed to recognize facial perceptual invariances…

  11. Robust textural features for real time face recognition

    NASA Astrophysics Data System (ADS)

    Cui, Chen; Asari, Vijayan K.; Braun, Andrew D.

    2015-03-01

    Automatic face recognition in a real-life environment is challenged by various issues such as object motion, lighting conditions, poses, and expressions. In this paper, we present the development of a system based on a refined Enhanced Local Binary Pattern (ELBP) feature set and a Support Vector Machine (SVM) classifier to perform face recognition in a real-life environment. Instead of counting the number of 1's in the ELBP, we use the 8-bit code of the thresholded data as per the ELBP rule, then binarize the image with a predefined threshold value and remove the small connections on the binarized image. The proposed system is currently trained with several people's face images obtained from video sequences captured by a surveillance camera. One test set contains disjoint images of the trained people's faces, to test accuracy, and a second test set contains images of non-trained people's faces, to test the rate of false positives. The recognition rate among 570 images of 9 trained faces is around 94%, and the false positive rate on 2600 images of 34 non-trained faces is around 1%. Work is progressing toward the recognition of partially occluded faces as well, where an appropriate weighting strategy will be applied to different parts of the face area to achieve better performance.
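    The 8-bit thresholding code mentioned above is the standard Local Binary Pattern operator that ELBP refines: each of the 8 neighbors in a 3×3 patch is thresholded against the center pixel and the resulting bits are packed into one byte. A minimal sketch of that basic operator (my illustration, not the paper's refined ELBP):

```python
def lbp_code(patch):
    """Basic 8-bit Local Binary Pattern code for a 3x3 patch (list of rows).
    Each neighbor is thresholded against the center pixel; the 8 resulting
    bits are packed clockwise into one byte in [0, 255]."""
    center = patch[1][1]
    # neighbor coordinates in clockwise order starting at top-left
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(coords):
        if patch[r][c] >= center:
            code |= 1 << bit
    return code

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
code = lbp_code(patch)  # bits 0, 4, 5, 6, 7 set -> 241
```

    A face image is typically described by histograms of such codes over sub-regions, which is what makes the representation robust to monotonic lighting changes.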

  12. Lateralization of kin recognition signals in the human face

    PubMed Central

    Dal Martello, Maria F.; Maloney, Laurence T.

    2010-01-01

    When human subjects view photographs of faces, their judgments of identity, gender, emotion, age, and attractiveness depend more on one side of the face than the other. We report an experiment testing whether allocentric kin recognition (the ability to judge the degree of kinship between individuals other than the observer) is also lateralized. One hundred and twenty-four observers judged whether or not pairs of children were biological siblings by looking at photographs of their faces. In three separate conditions, (1) the right hemi-face was masked, (2) the left hemi-face was masked, or (3) the face was fully visible. The d′ measures for the masked left hemi-face and masked right hemi-face were 1.024 and 1.004, respectively (no significant difference), and the d′ measure for the unmasked face was 1.079, not significantly greater than that for either of the masked conditions. We conclude, first, that there is no superiority of one or the other side of the observed face in kin recognition; second, that the information present in the left and right hemi-faces relevant to recognizing kin is completely redundant; and last, that symmetry cues are not used for kin recognition. PMID:20884584
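    The d′ values reported above come from signal detection theory: d′ is the difference between the z-transformed hit rate and the z-transformed false-alarm rate. A minimal computation using the standard normal inverse CDF (the example rates are hypothetical, chosen to land near the reported d′ ≈ 1.0):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: d' = z(hits) - z(false alarms)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical example: 69% hits against 31% false alarms
# yield d' of roughly 0.99, close to the values reported above.
sensitivity = d_prime(0.69, 0.31)
```

    A d′ of 0 means performance is at chance; larger values mean the observer separates sibling from non-sibling pairs more reliably, independent of response bias.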

  13. The Complete Gabor-Fisher Classifier for Robust Face Recognition

    NASA Astrophysics Data System (ADS)

    Štruc, Vitomir; Pavešić, Nikola

    2010-12-01

    This paper develops a novel face recognition technique called Complete Gabor Fisher Classifier (CGFC). Different from existing techniques that use Gabor filters for deriving the Gabor face representation, the proposed approach does not rely solely on Gabor magnitude information but effectively uses features computed based on Gabor phase information as well. It represents one of the few successful attempts found in the literature of combining Gabor magnitude and phase information for robust face recognition. The novelty of the proposed CGFC technique comes from (1) the introduction of a Gabor phase-based face representation and (2) the combination of the recognition technique using the proposed representation with classical Gabor magnitude-based methods into a unified framework. The proposed face recognition framework is assessed in a series of face verification and identification experiments performed on the XM2VTS, Extended YaleB, FERET, and AR databases. The results of the assessment suggest that the proposed technique clearly outperforms state-of-the-art face recognition techniques from the literature and that its performance is almost unaffected by the presence of partial occlusions of the facial area, changes in facial expression, or severe illumination changes.

  14. Error Rates in Users of Automatic Face Recognition Software

    PubMed Central

    White, David; Dunn, James D.; Schmid, Alexandra C.; Kemp, Richard I.

    2015-01-01

    In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated "candidate lists" selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared the performance of student participants to trained passport officers, who use the system in their daily work, and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced "facial examiners" outperformed these groups by 20 percentage points. We conclude that human performance curtails the accuracy of face recognition systems, potentially reducing benchmark estimates by 50% in operational settings. Mere practice does not attenuate these limits, but the superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems. PMID:26465631

  15. Developmental Changes in Face Recognition during Childhood: Evidence from Upright and Inverted Faces

    ERIC Educational Resources Information Center

    de Heering, Adelaide; Rossion, Bruno; Maurer, Daphne

    2012-01-01

    Adults are experts at recognizing faces but there is controversy about how this ability develops with age. We assessed 6- to 12-year-olds and adults using a digitized version of the Benton Face Recognition Test, a sensitive tool for assessing face perception abilities. Children's response times for correct responses did not decrease between ages 6…

  16. Face Engagement during Infancy Predicts Later Face Recognition Ability in Younger Siblings of Children with Autism

    ERIC Educational Resources Information Center

    de Klerk, Carina C. J. M.; Gliga, Teodora; Charman, Tony; Johnson, Mark H.

    2014-01-01

    Face recognition difficulties are frequently documented in children with autism spectrum disorders (ASD). It has been hypothesized that these difficulties result from a reduced interest in faces early in life, leading to decreased cortical specialization and atypical development of the neural circuitry for face processing. However, a recent study…

  17. Cross-modal face recognition using multi-matcher face scores

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2015-05-01

    The performance of face recognition can be improved by information fusion of multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, a probe face may be a thermal image (especially at nighttime), while only visible face images are available in the gallery database. Matching a thermal probe face to the visible gallery faces requires cross-modal matching approaches. A few such studies have been implemented in facial feature space with medium recognition performance. In this paper, we propose a cross-modal recognition approach, where multimodal faces are cross-matched in feature space and the recognition performance is enhanced with stereo fusion at the image, feature, and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed from the three cross-matched face scores of the aforementioned algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logistic regression [BLR]) is trained and then tested with the score vectors using 10-fold cross-validation. The proposed approach was validated on a multispectral stereo face dataset of 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84% and FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces onto the fused visible faces using three face scores and the BLR classifier.
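    The score-level fusion described above can be sketched compactly: each probe-gallery comparison yields a vector of three matcher scores, and a classifier labels that vector as genuine or impostor. Here a simple 1-nearest-neighbor rule stands in for the k-NN/SVM/BLR classifiers named in the abstract, and all score values are made up for illustration.

```python
def nearest_neighbor(train, labels, probe):
    """Label a probe score vector by its closest training vector
    (squared Euclidean distance)."""
    dists = [sum((a - b) ** 2 for a, b in zip(vec, probe)) for vec in train]
    return labels[dists.index(min(dists))]

# training score vectors: [algorithm1, algorithm2, algorithm3]
train = [[0.90, 0.80, 0.85], [0.95, 0.90, 0.90],   # genuine pairs
         [0.20, 0.30, 0.25], [0.10, 0.20, 0.15]]   # impostor pairs
labels = ["genuine", "genuine", "impostor", "impostor"]

decision = nearest_neighbor(train, labels, [0.88, 0.82, 0.80])  # "genuine"
```

    Fusing at the score level lets each matcher contribute its own notion of similarity while the classifier learns how the three scores jointly separate genuine from impostor comparisons.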

  18. Face-Recognition Memory: Implications for Children's Eyewitness Testimony.

    ERIC Educational Resources Information Center

    Chance, June E.; Goldstein, Alvin G.

    1984-01-01

    Reviews studies of face-recognition memory and considers implications for assessing the dependability of children's performances as eyewitnesses. Considers personal factors (age, intellectual differences, and gender) and situational factors (familiarity of face, retention interval, and others). Also identifies developmental questions for future…

  19. The Development of Spatial Frequency Biases in Face Recognition

    ERIC Educational Resources Information Center

    Leonard, Hayley C.; Karmiloff-Smith, Annette; Johnson, Mark H.

    2010-01-01

    Previous research has suggested that a mid-band of spatial frequencies is critical to face recognition in adults, but few studies have explored the development of this bias in children. We present a paradigm adapted from the adult literature to test spatial frequency biases throughout development. Faces were presented on a screen with particular…

  20. Development of Face Recognition in Infant Chimpanzees (Pan Troglodytes)

    ERIC Educational Resources Information Center

    Myowa-Yamakoshi, M.; Yamaguchi, M.K.; Tomonaga, M.; Tanaka, M.; Matsuzawa, T.

    2005-01-01

    In this paper, we assessed the developmental changes in face recognition by three infant chimpanzees aged 1-18 weeks, using preferential-looking procedures that measured the infants' eye- and head-tracking of moving stimuli. In Experiment 1, we prepared photographs of the mother of each infant and an ''average'' chimpanzee face using…

  1. Recognition Profile of Emotions in Natural and Virtual Faces

    PubMed Central

    Dyck, Miriam; Winbeck, Maren; Leiberg, Susanne; Chen, Yuhan; Gur, Ruben C.; Mathiak, Klaus

    2008-01-01

    Background Computer-generated virtual faces have become increasingly realistic, including the simulation of emotional expressions. These faces can be used as well-controlled, realistic, and dynamic stimuli in emotion research. However, the validity of virtual facial expressions in comparison to natural emotion displays still needs to be shown for different emotions and different age groups. Methodology/Principal Findings Thirty-two healthy volunteers between the ages of 20 and 60 rated pictures of natural human faces and faces of virtual characters (avatars) with respect to the expressed emotions: happiness, sadness, anger, fear, disgust, and neutral. Results indicate that virtual emotions were recognized comparably to natural ones. Recognition differences between virtual and natural faces depended on the specific emotion: whereas disgust was difficult to convey with the current avatar technology, virtual sadness and fear achieved better recognition results than natural faces. Furthermore, emotion recognition rates decreased for virtual but not natural faces in participants over the age of 40. This specific age effect suggests that media exposure has an influence on emotion recognition. Conclusions/Significance Virtual and natural facial displays of emotion may be equally effective. Improved technology (e.g., better modelling of the naso-labial area) may lead to even better results as compared to trained actors. Due to the ease with which virtual human faces can be animated and manipulated, validated artificial emotional expressions will be of major relevance in future research and therapeutic applications. PMID:18985152

  2. Supervised Filter Learning for Representation Based Face Recognition

    PubMed Central

    Bi, Chao; Zhang, Lei; Qi, Miao; Zheng, Caixia; Yi, Yugen; Wang, Jianzhong; Zhang, Baoxue

    2016-01-01

    Representation based classification methods, such as Sparse Representation Classification (SRC) and Linear Regression Classification (LRC) have been developed for face recognition problem successfully. However, most of these methods use the original face images without any preprocessing for recognition. Thus, their performances may be affected by some problematic factors (such as illumination and expression variances) in the face images. In order to overcome this limitation, a novel supervised filter learning algorithm is proposed for representation based face recognition in this paper. The underlying idea of our algorithm is to learn a filter so that the within-class representation residuals of the faces' Local Binary Pattern (LBP) features are minimized and the between-class representation residuals of the faces' LBP features are maximized. Therefore, the LBP features of filtered face images are more discriminative for representation based classifiers. Furthermore, we also extend our algorithm for heterogeneous face recognition problem. Extensive experiments are carried out on five databases and the experimental results verify the efficacy of the proposed algorithm. PMID:27416030

  4. Visual speech information for face recognition.

    PubMed

    Rosenblum, Lawrence D; Yakel, Deborah A; Baseer, Naser; Panchal, Anjani; Nodarse, Brynn C; Niehus, Ryan P

    2002-02-01

    Two experiments test whether isolated visible speech movements can be used for face matching. Visible speech information was isolated with a point-light methodology. Participants were asked to match articulating point-light faces to a fully illuminated articulating face in an XAB task. The first experiment tested single-frame static face stimuli as a control. The results revealed that the participants were significantly better at matching the dynamic face stimuli than the static ones. Experiment 2 tested whether the observed dynamic advantage was based on the movement itself or on the fact that the dynamic stimuli consisted of many more static and ordered frames. For this purpose, frame rate was reduced, and the frames were shown in a random order, a correct order with incorrect relative timing, or a correct order with correct relative timing. The results revealed better matching performance with the correctly ordered and timed frame stimuli, suggesting that matches were based on the actual movement itself. These findings suggest that speaker-specific visible articulatory style can provide information for face matching.

  5. Thermal-to-visible face recognition using partial least squares.

    PubMed

    Hu, Shuowen; Choi, Jonghyun; Chan, Alex L; Schwartz, William Robson

    2015-03-01

    Although visible face recognition has been an active area of research for several decades, cross-modal face recognition has only been explored by the biometrics community relatively recently. Thermal-to-visible face recognition is one of the most difficult cross-modal face recognition challenges, because of the difference in phenomenology between the thermal and visible imaging modalities. We address the cross-modal recognition problem using a partial least squares (PLS) regression-based approach consisting of preprocessing, feature extraction, and PLS model building. The preprocessing and feature extraction stages are designed to reduce the modality gap between the thermal and visible facial signatures, and facilitate the subsequent one-vs-all PLS-based model building. We incorporate multi-modal information into the PLS model building stage to enhance cross-modal recognition. The performance of the proposed recognition algorithm is evaluated on three challenging datasets containing visible and thermal imagery acquired under different experimental scenarios: time-lapse, physical tasks, mental tasks, and subject-to-camera range. These scenarios represent difficult challenges relevant to real-world applications. We demonstrate that the proposed method performs robustly for the examined scenarios. PMID:26366654
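    The one-vs-all model building described above means fitting one model per gallery subject and picking the subject whose model scores the probe highest. The sketch below shows only that decision structure; a toy nearest-centroid scorer stands in for the paper's PLS regression, and the feature vectors are invented for illustration.

```python
def fit_one_vs_all(gallery):
    """gallery: dict subject -> list of feature vectors.
    Returns one 'model' per subject (here, the class centroid,
    standing in for a per-subject PLS model)."""
    models = {}
    for subject, feats in gallery.items():
        n = len(feats)
        models[subject] = [sum(col) / n for col in zip(*feats)]
    return models

def identify(models, probe):
    """Pick the subject whose model scores the probe highest
    (negative squared distance serves as the match score)."""
    def score(centroid):
        return -sum((a - b) ** 2 for a, b in zip(centroid, probe))
    return max(models, key=lambda s: score(models[s]))

gallery = {"alice": [[1.0, 0.0], [0.9, 0.1]],
           "bob":   [[0.0, 1.0], [0.1, 0.9]]}
models = fit_one_vs_all(gallery)
match = identify(models, [0.8, 0.2])  # "alice"
```

    In the actual method, the per-subject scorer is a PLS regression trained on preprocessed thermal and visible features, which is what bridges the modality gap; the argmax decision rule is the same.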

  7. TMS to the "occipital face area" affects recognition but not categorization of faces.

    PubMed

    Solomon-Harris, Lily M; Mullin, Caitlin R; Steeves, Jennifer K E

    2013-12-01

    The human cortical system for face perception comprises a network of connected regions including the middle fusiform gyrus ("fusiform face area" or FFA), the inferior occipital cortex ("occipital face area" or OFA), and the superior temporal sulcus. The traditional hierarchical feedforward model of visual processing suggests that information flows from early visual cortex to the OFA for initial face feature analysis, and on to higher-order regions including the FFA for identity recognition. However, patient data suggest an alternative model. Patients with acquired prosopagnosia, an inability to visually recognize faces, have been documented who have lesions to the OFA but nevertheless show face-selective activation in the FFA. Moreover, their ability to categorize faces remains intact. This suggests that the FFA is not solely responsible for face recognition and that the network is not strictly hierarchical, but may be organized in a reverse hierarchical fashion. We used transcranial magnetic stimulation (TMS) to temporarily disrupt processing in the OFA in neurologically intact individuals and found that participants' ability to categorize intact versus scrambled faces was unaffected; however, face identity discrimination was significantly impaired. This suggests that face categorization, but not recognition, can occur without the "earlier" OFA being online, and indicates that "lower-level" face category processing may be assumed by other intact face network regions such as the FFA. These results are consistent with the patient data and support a non-hierarchical, global-to-local model with re-entrant connections between the OFA and other face processing areas.

  8. Recognition memory in developmental prosopagnosia: electrophysiological evidence for abnormal routes to face recognition

    PubMed Central

    Burns, Edwin J.; Tree, Jeremy J.; Weidemann, Christoph T.

    2014-01-01

    Dual process models of recognition memory propose two distinct routes for recognizing a face: recollection and familiarity. Recollection is characterized by the remembering of some contextual detail from a previous encounter with a face whereas familiarity is the feeling of finding a face familiar without any contextual details. The Remember/Know (R/K) paradigm is thought to index the relative contributions of recollection and familiarity to recognition performance. Despite researchers measuring face recognition deficits in developmental prosopagnosia (DP) through a variety of methods, none have considered the distinct contributions of recollection and familiarity to recognition performance. The present study examined recognition memory for faces in eight individuals with DP and a group of controls using an R/K paradigm while recording electroencephalogram (EEG) data at the scalp. Those with DP were found to produce fewer correct “remember” responses and more false alarms than controls. EEG results showed that posterior “remember” old/new effects were delayed and restricted to the right posterior (RP) area in those with DP in comparison to the controls. A posterior “know” old/new effect commonly associated with familiarity for faces was only present in the controls whereas individuals with DP exhibited a frontal “know” old/new effect commonly associated with words, objects and pictures. These results suggest that individuals with DP do not utilize normal face-specific routes when making face recognition judgments but instead process faces using a pathway more commonly associated with objects. PMID:25177283

  9. Sparse representation based face recognition using weighted regions

    NASA Astrophysics Data System (ADS)

    Bilgazyev, Emil; Yeniaras, E.; Uyanik, I.; Unan, Mahmut; Leiss, E. L.

    2013-12-01

    Face recognition is a challenging research topic, especially when the training (gallery) and recognition (probe) images are acquired with different cameras under varying conditions. Even small noise or occlusion in the images can compromise recognition accuracy. Lately, sparse-encoding-based classification algorithms have given promising results for such uncontrolled scenarios. In this paper, we introduce a novel methodology that models the sparse encoding with weighted patches to increase the robustness of face recognition even further. In the training phase, we define a mask (i.e., a weight matrix) using a sparse representation selecting the facial regions, and in the recognition phase, we perform comparison on the selected facial regions. The algorithm was evaluated both quantitatively and qualitatively using two comprehensive surveillance facial image databases, SCface and MFPV, with results clearly superior to common state-of-the-art methodologies in different scenarios.

  10. Face Image Gender Recognition Based on Gabor Transform and SVM

    NASA Astrophysics Data System (ADS)

    Yan, Chunjuan

    In order to overcome the disturbance of non-essential information such as illumination variation and changing facial expressions, a new algorithm for face image gender recognition is proposed in this paper. First, the 2-D Gabor transform is used to extract face features; next, a new method is put forward to reduce the dimensionality of the Gabor transform output in order to speed up SVM training; finally, gender recognition is accomplished with an SVM classifier. Good gender classification performance is achieved on a relatively large-scale, low-resolution face database.
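    The 2-D Gabor features used above come from convolving the face image with a bank of Gabor kernels at several orientations and scales. A minimal pure-Python sketch of the real part of one such kernel (a Gaussian envelope modulating a cosine carrier); the parameter values are illustrative, not the paper's:

```python
import math

def gabor_kernel(size, sigma, theta, lambd, gamma=1.0):
    """Real part of a 2-D Gabor kernel as a (2*size+1)-square list of rows.
    sigma: envelope width; theta: orientation; lambd: carrier wavelength;
    gamma: spatial aspect ratio of the Gaussian envelope."""
    kernel = []
    for y in range(-size, size + 1):
        row = []
        for x in range(-size, size + 1):
            # rotate coordinates into the filter's orientation
            xp = x * math.cos(theta) + y * math.sin(theta)
            yp = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xp * xp + gamma * gamma * yp * yp)
                                / (2 * sigma * sigma))
            row.append(envelope * math.cos(2 * math.pi * xp / lambd))
        kernel.append(row)
    return kernel

k = gabor_kernel(size=2, sigma=2.0, theta=0.0, lambd=4.0)
# the kernel responds maximally at its center: k[2][2] == 1.0
```

    Convolving a face with kernels at, say, 8 orientations and 5 scales yields the high-dimensional response vector whose dimensionality the paper's method then reduces before SVM training.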

  11. Individual differences in cortical face selectivity predict behavioral performance in face recognition

    PubMed Central

    Huang, Lijie; Song, Yiying; Li, Jingguang; Zhen, Zonglei; Yang, Zetian; Liu, Jia

    2014-01-01

    In functional magnetic resonance imaging studies, object selectivity is defined as a higher neural response to an object category than to other object categories. Importantly, object selectivity is widely considered a neural signature of a functionally specialized area in processing its preferred object category in the human brain. However, the behavioral significance of object selectivity remains unclear. In the present study, we used the individual differences approach to correlate participants' face selectivity in the face-selective regions with their behavioral performance in face recognition measured outside the scanner in a large sample of healthy adults. Face selectivity was defined as the z score of activation with the contrast of faces vs. non-face objects, and face recognition ability was indexed as the normalized residual of the accuracy in recognizing previously learned faces after regressing out that for non-face objects in an old/new memory task. We found that participants with higher face selectivity in the fusiform face area (FFA) and the occipital face area (OFA), but not in the posterior part of the superior temporal sulcus (pSTS), possessed higher face recognition ability. Importantly, the association between face selectivity in the FFA and face recognition ability cannot be accounted for by FFA response to objects or behavioral performance in object recognition, suggesting that the association is domain-specific. Finally, the association is reliable, as confirmed by replication in an independent participant group. In sum, our finding provides empirical evidence on the validity of using object selectivity as a neural signature in defining object-selective regions in the human brain. PMID:25071513
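    The ability index described above is a regression residual: face-recognition accuracy is regressed on object-recognition accuracy across participants, and what remains is the face-specific component. A pure-Python simple linear regression makes the construction concrete; the accuracy values are invented for illustration.

```python
def residuals(x, y):
    """Residuals of y after ordinary least-squares regression on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    intercept = my - slope * mx
    return [b - (intercept + slope * a) for a, b in zip(x, y)]

# hypothetical per-participant accuracies
obj_acc  = [0.70, 0.80, 0.90]   # object recognition
face_acc = [0.60, 0.85, 0.80]   # face recognition
face_specific = residuals(obj_acc, face_acc)
# residuals sum to ~0 by construction; the second participant's positive
# residual marks face performance above what object performance predicts
```

    Correlating these residuals with FFA/OFA selectivity is what lets the study claim the brain-behavior association is specific to faces rather than to recognition ability in general.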

  12. [Neural basis of self-face recognition: social aspects].

    PubMed

    Sugiura, Motoaki

    2012-07-01

    Considering the importance of the face in social survival, and evidence from evolutionary psychology on visual self-recognition, it is reasonable to expect neural mechanisms for higher social-cognitive processes to underlie self-face recognition. A decade of neuroimaging studies has, however, not provided an encouraging finding in this respect. Self-face-specific activation has typically been reported in areas for sensory-motor integration in the right lateral cortices. This observation appears to reflect the physical nature of the self-face, whose representation is developed via the detection of contingency between one's own action and sensory feedback. We have recently revealed that the medial prefrontal cortex, implicated in socially nuanced self-referential processing, is activated during self-face recognition in a rich social context where multiple other faces are available for reference. The posterior cingulate cortex has also exhibited this activation modulation and, in a separate experiment, responded to an attractively manipulated self-face, suggesting its relevance to positive self-value. Furthermore, the regions in the right lateral cortices that typically show self-face-specific activation have also responded to the face of a close friend in the rich social context. This observation is potentially explained by the fact that the contingency detection underlying physical self-recognition also plays a role in physical social interaction, which characterizes the representation of personally familiar people. These findings demonstrate that neuroscientific exploration reveals multiple facets of the relationship between self-face recognition and social-cognitive processing, and that, technically, the manipulation of social context is key to its success.

  13. Most information feature extraction (MIFE) approach for face recognition

    NASA Astrophysics Data System (ADS)

    Zhao, Jiali; Ren, Haibing; Wang, Haitao; Kee, Seokcheol

    2005-03-01

    We present a MIFE (Most Information Feature Extraction) approach, which extracts as much information as possible for the face classification task. In the MIFE approach, a facial image is separated into sub-regions, and each sub-region makes an individual contribution to face recognition. Specifically, each sub-region is subjected to sub-region-based adaptive gamma (SadaGamma) correction or sub-region-based histogram equalization (SHE) in order to account for different illuminations and expressions. Experimental results show that the proposed SadaGamma/SHE correction approach provides an efficient de-lighting solution for face recognition. MIFE and SadaGamma/SHE correction together achieve a lower error rate in face recognition under different illumination and expression.
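    Plain gamma correction on a sub-region, the operation that SadaGamma adapts per region, maps each 8-bit pixel through out = 255·(in/255)^γ, with γ < 1 brightening dark regions. A minimal sketch of the fixed-γ case (the paper's contribution is choosing γ adaptively per sub-region, which is not shown here):

```python
def gamma_correct(region, gamma):
    """Gamma-correct an 8-bit sub-region given as a list of rows.
    out = 255 * (in / 255) ** gamma; gamma < 1 brightens dark pixels,
    gamma > 1 darkens bright ones."""
    return [[round(255 * (p / 255) ** gamma) for p in row] for row in region]

region = [[64, 128],
          [192, 255]]
brightened = gamma_correct(region, 0.5)  # [[128, 181], [221, 255]]
```

    Applying the correction per sub-region rather than globally lets each part of the face be compensated for its own local illumination, which is the motivation for the sub-region design above.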

  14. On the facilitative effects of face motion on face recognition and its development

    PubMed Central

    Xiao, Naiqi G.; Perrotta, Steve; Quinn, Paul C.; Wang, Zhe; Sun, Yu-Hao P.; Lee, Kang

    2014-01-01

    For the past century, researchers have extensively studied human face processing and its development. These studies have advanced our understanding of not only face processing, but also visual processing in general. However, most of what we know about face processing was investigated using static face images as stimuli. Therefore, an important question arises: to what extent does our understanding of static face processing generalize to face processing in real-life contexts in which faces are mostly moving? The present article addresses this question by examining recent studies on moving face processing to uncover the influence of facial movements on face processing and its development. First, we describe evidence on the facilitative effects of facial movements on face recognition and two related theoretical hypotheses: the supplementary information hypothesis and the representation enhancement hypothesis. We then highlight several recent studies suggesting that facial movements optimize face processing by activating specific face processing strategies that accommodate to task requirements. Lastly, we review the influence of facial movements on the development of face processing in the first year of life. We focus on infants' sensitivity to facial movements and explore the facilitative effects of facial movements on infants' face recognition performance. We conclude by outlining several future directions to investigate moving face processing and emphasize the importance of including dynamic aspects of facial information to further understand face processing in real-life contexts. PMID:25009517

  16. Recognition of face and non-face stimuli in autistic spectrum disorder.

    PubMed

    Arkush, Leo; Smith-Collins, Adam P R; Fiorentini, Chiara; Skuse, David H

    2013-12-01

    The ability to remember faces is critical for the development of social competence. From childhood to adulthood, we acquire a high level of expertise in the recognition of facial images, and neural processes become dedicated to sustaining competence. Many people with autism spectrum disorder (ASD) have poor face recognition memory; changes in hairstyle or other non-facial features in an otherwise familiar person affect their recollection skills. This observation implies that they may not use the configuration of the inner face to achieve memory competence, but bolster performance in other ways. We aimed to test this hypothesis by comparing the performance of a group of high-functioning unmedicated adolescents with ASD and a matched control group on a "surprise" face recognition memory task. We compared their memory for unfamiliar faces with their memory for images of houses. To evaluate the role played by peripheral cues in assisting recognition memory, we cropped both sets of pictures, retaining only the most salient central features. ASD adolescents had poorer recognition memory for faces than typical controls, but their recognition memory for houses was unimpaired. Cropping images of faces did not disproportionately influence their recall accuracy, relative to controls. House recognition skills (cropped and uncropped) were similar in both groups. In the ASD group only, performance on both sets of tasks was closely correlated, implying that memory for faces and other complex pictorial stimuli is achieved by domain-general (non-dedicated) cognitive mechanisms. Adolescents with ASD apparently do not use domain-specialized processing of inner facial cues to support face recognition memory. PMID:23894016

  18. A Benchmark and Comparative Study of Video-Based Face Recognition on COX Face Database.

    PubMed

    Huang, Zhiwu; Shan, Shiguang; Wang, Ruiping; Zhang, Haihong; Lao, Shihong; Kuerban, Alifu; Chen, Xilin

    2015-12-01

    Face recognition with still face images has been widely studied, while research on video-based face recognition is relatively inadequate, especially in terms of benchmark datasets and comparisons. Real-world video-based face recognition applications require techniques for three distinct scenarios: 1) Video-to-Still (V2S); 2) Still-to-Video (S2V); and 3) Video-to-Video (V2V), respectively taking video or still images as query or target. To the best of our knowledge, few datasets and evaluation protocols have been benchmarked for all three scenarios. In order to facilitate the study of this specific topic, this paper contributes a benchmarking and comparative study based on a newly collected still/video face database, named COX Face DB. Specifically, we make three contributions. First, we collect and release a large-scale still/video face database to simulate video surveillance with three different video-based face recognition scenarios (i.e., V2S, S2V, and V2V). Second, for benchmarking the three scenarios designed on our database, we review and experimentally compare a number of existing set-based methods. Third, we further propose a novel Point-to-Set Correlation Learning (PSCL) method, and experimentally show that it can be used as a promising baseline method for V2S/S2V face recognition on COX Face DB. Extensive experimental results clearly demonstrate that video-based face recognition needs more effort, and that our COX Face DB is a good benchmark database for evaluation.

  19. Robust 3D face recognition by local shape difference boosting.

    PubMed

    Wang, Yueming; Liu, Jianzhuang; Tang, Xiaoou

    2010-10-01

    This paper proposes a new 3D face recognition approach, the Collective Shape Difference Classifier (CSDC), to meet practical application requirements, i.e., high recognition performance, high computational efficiency, and easy implementation. We first present a fast posture alignment method which is self-dependent and avoids registering an input face against every face in the gallery. Then, a Signed Shape Difference Map (SSDM) is computed between two aligned 3D faces as an intermediate representation for shape comparison. Based on the SSDMs, three kinds of features are used to encode both the local similarity and the change characteristics between facial shapes. The most discriminative local features are selected optimally by boosting and trained as weak classifiers for assembling three collective strong classifiers, namely, CSDCs with respect to the three kinds of features. Different schemes are designed for verification and identification to pursue high performance in both recognition and computation. The experiments, carried out on FRGC v2 with the standard protocol, yield three verification rates all better than 97.9 percent at an FAR of 0.1 percent and rank-1 recognition rates above 98 percent. Each recognition against a gallery of 1,000 faces takes only about 3.6 seconds. These experimental results demonstrate that our algorithm is not only effective but also time efficient. PMID:20724762
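
    The Signed Shape Difference Map step can be illustrated in miniature. The sketch below shows only the point-wise signed subtraction of two aligned depth maps; the 3x3 range values are invented, and the paper's alignment, feature encoding, and boosting stages are omitted.

```python
# Miniature Signed Shape Difference Map: point-wise signed subtraction of
# two aligned depth maps, per the abstract. The 3x3 range values are
# invented; alignment, feature encoding, and boosting are omitted.

def ssdm(depth_a, depth_b):
    """Signed per-point depth difference between two aligned 3D faces."""
    return [[a - b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(depth_a, depth_b)]

face_a = [[1.0, 1.2, 1.0], [1.1, 1.5, 1.1], [1.0, 1.2, 1.0]]
face_b = [[1.0, 1.1, 1.0], [1.2, 1.4, 1.2], [1.0, 1.3, 1.0]]
print(ssdm(face_a, face_b))
```

    Retaining the sign (rather than the absolute difference) is what lets downstream features encode the direction of local shape change.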

  20. Can massive but passive exposure to faces contribute to face recognition abilities?

    PubMed

    Yovel, Galit; Halsband, Keren; Pelleg, Michel; Farkash, Naomi; Gal, Bracha; Goshen-Gottstein, Yonatan

    2012-04-01

    Recent studies have suggested that individuation of other-race faces is more crucial for enhancing recognition performance than exposure that involves categorization of these faces according to an identity-irrelevant criterion. These findings were primarily based on laboratory training protocols that dissociated exposure and individuation by using categorization tasks. However, the absence of enhanced recognition following categorization may not simulate key aspects of real-life massive exposure without individuation to other-race faces. Real-life exposure spans years of seeing a multitude of faces under varying conditions, including expression, view, lighting and gaze, albeit with no subcategory individuation. However, in most real-life settings, massive exposure operates in concert with individuation. An exception is neonatology nurses, a unique population that is exposed to--but does not individuate--massive numbers of newborn faces. Our findings show that recognition of newborn faces by nurses does not differ from that of adults who are rarely exposed to newborn faces. A control study showed that the absence of enhanced recognition cannot be attributed to the relatively short exposure to each newborn face in the neonatology unit or to newborns' apparently homogeneous appearance. It is therefore the quality--not the quantity--of exposure that determines recognition abilities.

  1. Face and emotion recognition in MCDD versus PDD-NOS.

    PubMed

    Herba, Catherine M; de Bruin, Esther; Althaus, Monika; Verheij, Fop; Ferdinand, Robert F

    2008-04-01

    Previous studies indicate that Multiple Complex Developmental Disorder (MCDD) children differ from PDD-NOS and autistic children on a symptom level and in psychophysiological functioning. Children with MCDD (n = 21) and PDD-NOS (n = 62) were compared on two facets of social-cognitive functioning: identification of neutral faces and of facial expressions. Few significant group differences emerged. Children with PDD-NOS demonstrated a more attention-demanding strategy of face processing and processed neutral faces more like complex patterns, whereas children with MCDD showed an advantage for face recognition over complex patterns. Results further suggested that any disadvantage in face recognition was related more to the autistic features of the PDD-NOS group than to characteristics specific to MCDD. No significant group differences emerged for identifying facial expressions.

  2. Equivalent activation of the hippocampus by face-face and face-laugh paired associate learning and recognition.

    PubMed

    Holdstock, J S; Crane, J; Bachorowski, J-A; Milner, B

    2010-11-01

    The human hippocampus is known to play an important role in relational memory. Both patient lesion studies and functional-imaging studies have shown that it is involved in the encoding and retrieval from memory of arbitrary associations. Two recent patient lesion studies, however, have found dissociations between spared and impaired memory within the domain of relational memory. Recognition of associations between information of the same kind (e.g., two faces) was spared, whereas recognition of associations between information of different kinds (e.g., face-name or face-voice associations) was impaired by hippocampal lesions. Thus, recognition of associations between information of the same kind may not be mediated by the hippocampus. Few imaging studies have directly compared activation at encoding and recognition of associations between same and different types of information; those that have done so have shown mixed findings and been open to alternative interpretation. We used fMRI to compare hippocampal activation while participants studied and later recognized face-face and face-laugh paired associates. We found no differences in hippocampal activation between our two types of stimulus materials during either study or recognition. Study of both types of paired associate activated the hippocampus bilaterally, but the hippocampus was not activated by either condition during recognition. Our findings suggest that the human hippocampus is normally engaged to a similar extent by study and recognition of associations between information of the same kind and associations between information of different kinds.

  3. Perspective projection for variance pose face recognition from camera calibration

    NASA Astrophysics Data System (ADS)

    Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.

    2016-04-01

    Variance pose is an important research topic in face recognition. The alteration of distance parameters across variance-pose face features is a challenging problem. We provide a solution to this problem using perspective projection for variance-pose face recognition. Our method infers the intrinsic camera parameters of the image, which enable the projection of the image plane into 3D. After this, face-box tracking and eye-centre detection can be performed using our novel technique to verify the virtual face-feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. Training on frontal images and the remaining poses from the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes, then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work, with our algorithm outperforming other state-of-the-art algorithms, thus enabling stable measurement in variance pose for each individual.
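
    The perspective projection the method relies on is the standard pinhole camera model, which maps a 3D point through the intrinsic parameters onto the image plane. The sketch below is a generic illustration, not the authors' code; the focal lengths and principal point are made-up calibration values.

```python
# Generic pinhole (perspective) projection with intrinsic camera
# parameters; an illustration, not the authors' code. fx, fy (focal
# lengths in pixels) and cx, cy (principal point) are made-up values.

def project(point3d, fx, fy, cx, cy):
    """Project a 3D camera-frame point (metres) onto the image plane."""
    X, Y, Z = point3d
    return (fx * X / Z + cx, fy * Y / Z + cy)

# A point 3 cm right of the optical axis, half a metre from the camera:
print(project((0.03, 0.0, 0.5), fx=800, fy=800, cx=320, cy=240))
```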

  4. Face recognition using facial expression: a novel approach

    NASA Astrophysics Data System (ADS)

    Singh, Deepak Kumar; Gupta, Priya; Tiwary, U. S.

    2008-04-01

    Facial expressions are undoubtedly the most effective nonverbal communication. The face has always been the equation of a person's identity. The face draws the demarcation line between identity and extinction. Each line on the face adds an attribute to the identity. These lines become prominent when we experience an emotion, and they do not change completely with age. In this paper we propose a new technique for face recognition which focuses on the facial expressions of the subject to identify his face. This area has received little attention in earlier work. According to earlier research it is difficult to alter one's natural expression, so our technique will be beneficial for identifying occluded or intentionally disguised faces. The results of the experiments conducted indicate that this technique can give a new direction in the field of face recognition. This technique will provide a strong base to the area of face recognition and can serve as a core method for critical defense and security related issues.

  5. Face recognition with multiple eigenface spaces

    NASA Astrophysics Data System (ADS)

    Jiang, Ming; Zhang, Guilin; Chen, Zhaoyang; Zhang, Zheng

    2001-09-01

    This paper addresses an ameliorative version of traditional eigenface methods. Much of the previous work on eigenspace methods builds only one eigenface space from the eigenfaces of different persons, utilizing only one or a very limited number of faces per individual. The information in one facial image is very limited, so traditional methods have difficulty coping with differences among facial images caused by changes in age, emotion, illumination, and hairstyle. We took advantage of facial images of the same person obtained at different ages, under different conditions, and with different emotions. For every individual we constructed an eigenface subspace separately; that is, multiple eigenface spaces were constructed for a face database. Experiments illustrated that the ameliorative algorithm is distortion-invariant to some extent.

  6. Face Recognition by Metropolitan Police Super-Recognisers

    PubMed Central

    Robertson, David J.; Noyes, Eilidh; Dowsett, Andrew J.; Jenkins, Rob; Burton, A. Mike

    2016-01-01

    Face recognition is used to prove identity across a wide variety of settings. Despite this, research consistently shows that people are typically rather poor at matching faces to photos. Some professional groups, such as police and passport officers, have been shown to perform just as poorly as the general public on standard tests of face recognition. However, face recognition skills are subject to wide individual variation, with some people showing exceptional ability—a group that has come to be known as ‘super-recognisers’. The Metropolitan Police Force (London) recruits ‘super-recognisers’ from within its ranks, for deployment on various identification tasks. Here we test four working super-recognisers from within this police force, and ask whether they are really able to perform at levels above control groups. We consistently find that the police ‘super-recognisers’ perform at well above normal levels on tests of unfamiliar and familiar face matching, with degraded as well as high quality images. Recruiting employees with high levels of skill in these areas, and allocating them to relevant tasks, is an efficient way to overcome some of the known difficulties associated with unfamiliar face recognition. PMID:26918457

  7. A wavelet-based method for multispectral face recognition

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Zhang, Chaoyang; Zhou, Zhaoxian

    2012-06-01

    A wavelet-based method is proposed for multispectral face recognition in this paper. The Gabor wavelet transform is a common tool for orientation analysis of a 2D image, whereas Hamming distance is an efficient distance measurement for face identification. Specifically, at each frequency band, an index number representing the strongest orientational response is selected and then encoded in binary format to favor the Hamming distance calculation. Multiband orientation bit codes are then organized into a face pattern byte (FPB) by using order statistics. With the FPB, Hamming distances are calculated and compared to achieve face identification. The FPB algorithm was initially created using thermal images, while the EBGM method originated with visible images. When two or more spectral images from the same subject are available, the identification accuracy and reliability can be enhanced using score fusion. We compare the identification performance of applying five recognition algorithms to the three-band (visible, near infrared, thermal) face images, and explore the fusion performance of combining the multiple scores from three recognition algorithms and from three-band face images, respectively. The experimental results show that the FPB is the best recognition algorithm, the HMM yields the best fusion result, and the thermal dataset results in the best fusion performance compared to the other two datasets.
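
    The encoding idea (keep the index of the strongest orientation response per band, write it in binary, compare faces by Hamming distance) can be sketched as below. This is a toy illustration, not the published FPB algorithm; the band responses are invented numbers.

```python
# Toy sketch of the abstract's encoding (not the published FPB algorithm):
# per frequency band, take the index of the strongest orientation
# response, write it in binary, and compare faces by Hamming distance.
# The band responses below are invented numbers.

def bit_code(band_responses, bits=3):
    """Concatenate the binary index of each band's max response."""
    code = []
    for responses in band_responses:
        idx = max(range(len(responses)), key=responses.__getitem__)
        code.extend(int(b) for b in format(idx, f'0{bits}b'))
    return code

def hamming(a, b):
    """Number of differing bits between two equal-length codes."""
    return sum(x != y for x, y in zip(a, b))

face_a = [[0.1, 0.9, 0.2, 0.3], [0.7, 0.1, 0.1, 0.2]]  # 2 bands x 4 orientations
face_b = [[0.2, 0.8, 0.1, 0.1], [0.1, 0.1, 0.9, 0.3]]
code_a, code_b = bit_code(face_a), bit_code(face_b)
print(hamming(code_a, code_b))
```

    Binary index codes make matching a matter of cheap bit comparisons, which is why Hamming distance suits this representation.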

  8. Face Recognition by Metropolitan Police Super-Recognisers.

    PubMed

    Robertson, David J; Noyes, Eilidh; Dowsett, Andrew J; Jenkins, Rob; Burton, A Mike

    2016-01-01

    Face recognition is used to prove identity across a wide variety of settings. Despite this, research consistently shows that people are typically rather poor at matching faces to photos. Some professional groups, such as police and passport officers, have been shown to perform just as poorly as the general public on standard tests of face recognition. However, face recognition skills are subject to wide individual variation, with some people showing exceptional ability-a group that has come to be known as 'super-recognisers'. The Metropolitan Police Force (London) recruits 'super-recognisers' from within its ranks, for deployment on various identification tasks. Here we test four working super-recognisers from within this police force, and ask whether they are really able to perform at levels above control groups. We consistently find that the police 'super-recognisers' perform at well above normal levels on tests of unfamiliar and familiar face matching, with degraded as well as high quality images. Recruiting employees with high levels of skill in these areas, and allocating them to relevant tasks, is an efficient way to overcome some of the known difficulties associated with unfamiliar face recognition.

  9. Face Recognition by Metropolitan Police Super-Recognisers.

    PubMed

    Robertson, David J; Noyes, Eilidh; Dowsett, Andrew J; Jenkins, Rob; Burton, A Mike

    2016-01-01

    Face recognition is used to prove identity across a wide variety of settings. Despite this, research consistently shows that people are typically rather poor at matching faces to photos. Some professional groups, such as police and passport officers, have been shown to perform just as poorly as the general public on standard tests of face recognition. However, face recognition skills are subject to wide individual variation, with some people showing exceptional ability-a group that has come to be known as 'super-recognisers'. The Metropolitan Police Force (London) recruits 'super-recognisers' from within its ranks, for deployment on various identification tasks. Here we test four working super-recognisers from within this police force, and ask whether they are really able to perform at levels above control groups. We consistently find that the police 'super-recognisers' perform at well above normal levels on tests of unfamiliar and familiar face matching, with degraded as well as high quality images. Recruiting employees with high levels of skill in these areas, and allocating them to relevant tasks, is an efficient way to overcome some of the known difficulties associated with unfamiliar face recognition. PMID:26918457

  10. Eye contrast polarity is critical for face recognition by infants.

    PubMed

    Otsuka, Yumiko; Motoyoshi, Isamu; Hill, Harold C; Kobayashi, Megumi; Kanazawa, So; Yamaguchi, Masami K

    2013-07-01

    Just as faces share the same basic arrangement of features, with two eyes above a nose above a mouth, human eyes all share the same basic contrast polarity relations, with a sclera lighter than an iris and a pupil, and this is unique among primates. The current study examined whether this bright-dark relationship of sclera to iris plays a critical role in face recognition from early in development. Specifically, we tested face discrimination in 7- and 8-month-old infants while independently manipulating the contrast polarity of the eye region and of the rest of the face. This gave four face contrast polarity conditions: fully positive condition, fully negative condition, positive face with negated eyes ("negative eyes") condition, and negated face with positive eyes ("positive eyes") condition. In a familiarization and novelty preference procedure, we found that 7- and 8-month-olds could discriminate between faces only when the contrast polarity of the eyes was preserved (positive) and that this did not depend on the contrast polarity of the rest of the face. This demonstrates the critical role of eye contrast polarity for face recognition in 7- and 8-month-olds and is consistent with previous findings for adults. PMID:23499321

  11. Embedded wavelet-based face recognition under variable position

    NASA Astrophysics Data System (ADS)

    Cotret, Pascal; Chevobbe, Stéphane; Darouich, Mehdi

    2015-02-01

    For several years, face recognition has been a hot topic in the image processing field: the technique is applied in several domains such as CCTV and electronic device unlocking. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of subject-position robustness and performance on various systems. The use of the wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale Face Database B), that subject position in 3D space can vary up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on the approximation coefficients of the image wavelet transform: results are still satisfying after 3 levels of decomposition. Furthermore, the face database size can be divided by a factor of 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed; that is the reason why compression techniques such as the wavelet transform are interesting. Furthermore, it leads to a low-complexity face detection stage compliant with the limited computation resources available on such systems. The approach described in this work is tested on three platforms, from a standard x86-based computer to nanocomputers such as the Raspberry Pi and SECO boards. For K = 3 and a database with 40 faces, the mean execution time for one frame is 0.64 ms on an x86-based computer, 9 ms on a SECO board and 26 ms on a Raspberry Pi (model B).
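
    The factor-of-64 compression follows from keeping only the approximation band: each decomposition level halves both image dimensions, so K levels shrink the stored coefficient count by 2^(2K) = 4^K. A minimal sketch, using plain Haar 2x2 averaging as a stand-in for the paper's (unspecified) wavelet filter and an invented 8x8 ramp image:

```python
# Why K decomposition levels shrink the stored face by a factor of 2^(2K):
# keeping only the approximation band halves each image dimension per
# level. Plain Haar 2x2 averaging stands in for the paper's (unspecified)
# wavelet filter; the 8x8 ramp image is an invented example.

def haar_approx(img):
    """One decomposition level: average each 2x2 block."""
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

img = [[float(x + y) for x in range(8)] for y in range(8)]
level = img
for _ in range(3):          # K = 3 decomposition levels
    level = haar_approx(level)

orig = len(img) * len(img[0])          # 64 coefficients stored originally
kept = len(level) * len(level[0])      # 1 approximation coefficient kept
print(orig // kept)                    # compression factor: 4^3 = 64
```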

  12. Faces are special but not too special: spared face recognition in amnesia is based on familiarity.

    PubMed

    Aly, Mariam; Knight, Robert T; Yonelinas, Andrew P

    2010-11-01

    Most current theories of human memory are material-general in the sense that they assume that the medial temporal lobe (MTL) is important for retrieving the details of prior events, regardless of the specific type of materials. Recent studies of amnesia have challenged the material-general assumption by suggesting that the MTL may be necessary for remembering words, but is not involved in remembering faces. We examined recognition memory for faces and words in a group of amnesic patients, which included hypoxic patients and patients with extensive left or right MTL lesions. Recognition confidence judgments were used to plot receiver operating characteristics (ROCs) in order to more fully quantify recognition performance and to estimate the contributions of recollection and familiarity. Consistent with the extant literature, an analysis of overall recognition accuracy showed that the patients were impaired at word memory but had spared face memory. However, the ROC analysis indicated that the patients were generally impaired at high confidence recognition responses for faces and words, and they exhibited significant recollection impairments for both types of materials. Familiarity for faces was preserved in all patients, but extensive left MTL damage impaired familiarity for words. These results show that face recognition may appear to be spared because performance tends to rely heavily on familiarity, a process that is relatively well preserved in amnesia. In addition, the findings challenge material-general theories of memory, and suggest that both material and process are important determinants of memory performance in amnesia.
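
    The confidence-based ROC construction mentioned above works by cumulating hit and false-alarm proportions from the strictest to the most lenient response criterion. The sketch below shows the general method with invented response counts, not the patients' data; ratings run from 1 (sure new) to 6 (sure old).

```python
# Sketch of how recognition ROCs are built from confidence ratings (the
# general method, not the patients' data). Ratings run from 1 (sure new)
# to 6 (sure old); all counts below are invented.

old_items = {6: 40, 5: 20, 4: 10, 3: 10, 2: 10, 1: 10}   # studied items
new_items = {6: 5, 5: 10, 4: 15, 3: 20, 2: 20, 1: 30}    # unstudied lures

def roc_points(old, new):
    """Cumulate hits and false alarms from the strictest criterion down."""
    n_old, n_new = sum(old.values()), sum(new.values())
    hits = fas = 0
    points = []
    for rating in sorted(old, reverse=True):
        hits += old[rating]
        fas += new[rating]
        points.append((fas / n_new, hits / n_old))  # (false-alarm, hit) rate
    return points

print(roc_points(old_items, new_items))
```

    The shape of the resulting curve (curvilinear vs. linear components) is what dual-process models use to separate familiarity from recollection.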

  13. The hows and whys of face memory: level of construal influences the recognition of human faces

    PubMed Central

    Wyer, Natalie A.; Hollins, Timothy J.; Pahl, Sabine; Roper, Jean

    2015-01-01

    Three experiments investigated the influence of level of construal (i.e., the interpretation of actions in terms of their meaning or their details) on different stages of face memory. We employed a standard multiple-face recognition paradigm, with half of the faces inverted at test. Construal level was manipulated prior to recognition (Experiment 1), during study (Experiment 2) or both (Experiment 3). The results support a general advantage for high-level construal over low-level construal at both study and at test, and suggest that matching processing style between study and recognition has no advantage. These experiments provide additional evidence in support of a link between semantic processing (i.e., construal) and visual (i.e., face) processing. We conclude with a discussion of implications for current theories relating to both construal and face processing. PMID:26500586

  14. The advantages of stereo vision in a face recognition system

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2014-06-01

    Humans can recognize a face with binocular vision, while computers typically use a single face image. It is known that the performance of face recognition (by a computer) can be improved using the score fusion of multimodal images and multiple algorithms. A question is: can we apply stereo vision to a face recognition system? We know that human binocular vision has many advantages such as stereopsis (3D vision), binocular summation, and singleness of vision including fusion of binocular images (the cyclopean image). For face recognition, a 3D face or 3D facial features are typically computed from a pair of stereo images. In human visual processes, binocular summation and singleness of vision are similar to image fusion processes. In this paper, we propose an advanced face recognition system with stereo imaging capability, which comprises two 2-in-1 multispectral (visible and thermal) cameras and three recognition algorithms (circular Gaussian filter, face pattern byte, and linear discriminant analysis [LDA]). Specifically, we present and compare stereo fusion at three levels (images, features, and scores) by using stereo images (from the left and right cameras). Image fusion is achieved with three methods (Laplacian pyramid, wavelet transform, average); feature fusion is done with three logical operations (AND, OR, XOR); and score fusion is implemented with four classifiers (LDA, k-nearest neighbor, support vector machine, binomial logistic regression). The system performance is measured by the probability of correct classification (PCC) rate (reported as accuracy rate in this paper) and the false accept rate (FAR). The proposed approaches were validated with a multispectral stereo face dataset from 105 subjects. Experimental results show that any type of stereo fusion can improve the PCC while reducing the FAR. It seems that stereo image/feature fusion is superior to stereo score fusion in terms of recognition performance. Further score fusion after image
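
    The three feature-fusion operators named in the abstract (AND, OR, XOR) act bitwise on binary feature codes from the left and right cameras. A toy illustration with invented 8-bit codes, not the authors' pipeline:

```python
# The three feature-fusion operators named in the abstract, applied
# bitwise to binary feature codes from the left and right cameras. A toy
# illustration with invented 8-bit codes, not the authors' pipeline.

left = 0b1100_1010    # hypothetical feature code from the left camera
right = 0b1010_1010   # hypothetical feature code from the right camera

fused = {
    'AND': left & right,   # keep bits both views agree are set
    'OR': left | right,    # keep bits set in either view
    'XOR': left ^ right,   # keep bits where the views disagree
}
for op, code in fused.items():
    print(op, format(code, '08b'))
```

    AND acts as a conservative consensus, OR as a permissive union, and XOR isolates view-dependent bits, which is why the three operators behave differently as fusion rules.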

  15. Image quality-based adaptive illumination normalisation for face recognition

    NASA Astrophysics Data System (ADS)

    Sellahewa, Harin; Jassim, Sabah A.

    2009-05-01

    Automatic face recognition is a challenging task due to intra-class variations. Changes in lighting conditions during the enrolment and identification stages contribute significantly to these intra-class variations. A common approach to addressing the effects of such varying conditions is to pre-process the biometric samples in order to normalise intra-class variations. Histogram equalisation is a widely used illumination normalisation technique in face recognition. However, a recent study has shown that applying histogram equalisation to well-lit face images can lead to a decrease in recognition accuracy. This paper presents a dynamic approach to illumination normalisation, based on face image quality. The quality of a given face image is measured in terms of its luminance distortion by comparing the image against a known reference face image. Histogram equalisation is applied to a probe image only if its luminance distortion is higher than a predefined threshold. We tested the proposed adaptive illumination normalisation method on the widely used Extended Yale Face Database B. Identification results demonstrate that our adaptive normalisation produces better identification accuracy than the conventional approach, where every image is normalised irrespective of the lighting conditions under which it was acquired.
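
    The adaptive scheme can be sketched as follows. The luminance-distortion measure here (absolute difference of mean luminance against the reference) and the threshold value are stand-ins, not the paper's exact definitions.

```python
# Hedged sketch of the adaptive scheme in the abstract: equalise a probe
# image only when its luminance "distortion" versus a reference exceeds a
# threshold. The distortion measure (absolute difference of mean
# luminance) and the threshold are stand-ins, not the paper's definitions.

def mean_luminance(img):
    flat = [p for row in img for p in row]
    return sum(flat) / len(flat)

def equalize(img):
    """Global histogram equalisation to the 0..255 range."""
    flat = sorted(p for row in img for p in row)
    n = len(flat)
    cdf = {p: flat.index(p) + flat.count(p) for p in set(flat)}  # items <= p
    return [[round(255 * cdf[p] / n) for p in row] for row in img]

def adaptive_normalise(probe, reference, threshold=30.0):
    distortion = abs(mean_luminance(probe) - mean_luminance(reference))
    if distortion <= threshold:
        return probe              # well lit: leave untouched
    return equalize(probe)        # poorly lit: equalise

reference = [[100, 115], [105, 125]]
well_lit  = [[100, 120], [110, 130]]
dark      = [[5, 10], [8, 12]]
print(adaptive_normalise(well_lit, reference))
print(adaptive_normalise(dark, reference))
```

    The point of the gate is visible in the demo: the well-lit probe passes through untouched, while the dark probe is stretched across the full range.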

  16. Direct structural connections between voice- and face-recognition areas.

    PubMed

    Blank, Helen; Anwander, Alfred; von Kriegstein, Katharina

    2011-09-01

    Currently, there are two opposing models for how voice and face information is integrated in the human brain to recognize person identity. The conventional model assumes that voice and face information is only combined at a supramodal stage (Bruce and Young, 1986; Burton et al., 1990; Ellis et al., 1997). An alternative model posits that areas encoding voice and face information also interact directly and that this direct interaction is behaviorally relevant for optimizing person recognition (von Kriegstein et al., 2005; von Kriegstein and Giraud, 2006). To disambiguate between the two different models, we tested for evidence of direct structural connections between voice- and face-processing cortical areas by combining functional and diffusion magnetic resonance imaging. We localized, at the individual subject level, three voice-sensitive areas in anterior, middle, and posterior superior temporal sulcus (STS) and face-sensitive areas in the fusiform gyrus [fusiform face area (FFA)]. Using probabilistic tractography, we show evidence that the FFA is structurally connected with voice-sensitive areas in STS. In particular, our results suggest that the FFA is more strongly connected to middle and anterior than to posterior areas of the voice-sensitive STS. This specific structural connectivity pattern indicates that direct links between face- and voice-recognition areas could be used to optimize human person recognition.

  17. A Markov Random Field Groupwise Registration Framework for Face Recognition.

    PubMed

    Liao, Shu; Shen, Dinggang; Chung, Albert C S

    2014-04-01

    In this paper, we propose a new framework for tackling the face recognition problem, formulated as a groupwise deformable image registration and feature matching problem. The main contributions of the proposed method lie in the following aspects: (1) Each pixel in a facial image is represented by an anatomical signature obtained from its corresponding most salient scale local region, determined by the survival exponential entropy (SEE) information-theoretic measure. (2) Based on the anatomical signature calculated at each pixel, a novel Markov random field based groupwise registration framework is proposed to formulate face recognition as a feature-guided deformable image registration problem. The similarity between different facial images is measured on a nonlinear Riemannian manifold based on the deformable transformations. (3) The proposed method does not suffer from the generalizability problem that commonly exists in learning-based algorithms. The proposed method has been extensively evaluated on four publicly available databases: FERET, CAS-PEAL-R1, FRGC ver 2.0, and LFW. It is also compared with several state-of-the-art face recognition approaches, and experimental results demonstrate that the proposed method consistently achieves the highest recognition rates among all the methods under comparison.

  19. Collaborative Face Recognition Using a Network of Embedded Cameras

    NASA Astrophysics Data System (ADS)

    Kulathumani, Vinod; Parupati, Srikanth; Ross, Arun; Jillela, Raghavender

    In this chapter, we describe the design and implementation of a distributed real-time face recognition system using a network of embedded cameras. We consider a scenario that simulates typical corridors and passages in airports and other indoor public spaces, where real-time human identification is of prime significance. We characterize system performance on an embedded camera network testbed assembled from commercial off-the-shelf components. We quantify the impact of multiple views on the accuracy of face recognition, and describe how distributed pre-processing and local filtering help reduce both the network load and the overall processing time.

  20. Expression-invariant face recognition in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Wang, Han; Bau, Tien C.; Healey, Glenn

    2014-10-01

    The performance of a face recognition system degrades when the expression in the probe set is different from the expression in the gallery set. Previous studies use either spatial or spectral information to address this problem. We propose an algorithm that uses spatial and spectral information for expression-invariant face recognition. The algorithm uses a set of three-dimensional Gabor filters to exploit spatial and spectral correlations, while principal-component analysis is used to model expression variation. We demonstrate the effectiveness of the algorithm on a database of 200 subjects with neutral and smiling expressions and explore the dependence of the performance on image spatial resolution and training set size.

  1. Expression-invariant face recognition in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Wang, Han; Bau, Tien C.; Healey, Glenn

    2011-10-01

    The performance of a face recognition system degrades when the expression in the probe set is different from the expression in the gallery set. Previous studies use either spatial or spectral information to address this problem. In this paper, we propose an algorithm that uses spatial and spectral information for expression-invariant face recognition. The algorithm uses a set of 3D Gabor filters to exploit spatial and spectral correlations, and a principal-component analysis (PCA) to model expression variation. We demonstrate the effectiveness of the algorithm on a database of 200 subjects.

  2. FACELOCK-Lock Control Security System Using Face Recognition-

    NASA Astrophysics Data System (ADS)

    Hirayama, Takatsugu; Iwai, Yoshio; Yachida, Masahiko

    A security system using biometric person authentication technologies is suited to various high-security situations. Technology based on face recognition has advantages such as lower user resistance and lower stress. However, facial appearance changes with pose, expression, lighting, and age. We have developed the FACELOCK security system based on our face recognition methods, which are robust to such variations in facial appearance, with the exception of facial pose. Our system consists of clients and a server; the client communicates with the server through our protocol over a LAN. Users of our system do not need to be careful about their facial appearance.

  3. Maximal likelihood correspondence estimation for face recognition across pose.

    PubMed

    Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang

    2014-10-01

    Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in cross-pose scenarios. To address this problem, many image matching-based methods have been proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems of previous image matching-based correspondence learning methods: 1) failure to fully exploit face-specific structure information in correspondence estimation and 2) failure to learn a personalized correspondence for each probe image. To this end, we first build a model, termed the morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on a maximal likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using a linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of the novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in complex wild environments, i.e., the Labeled Faces in the Wild database. PMID:25163062

  4. Robust face recognition from multi-view videos.

    PubMed

    Ming Du; Sankaranarayanan, Aswin C; Chellappa, Rama

    2014-03-01

    Multiview face recognition has become an active research area in the last few years. In this paper, we present an approach for video-based face recognition in camera networks. Our goal is to handle pose variations by exploiting the redundancy in the multiview video data. However, unlike traditional approaches that explicitly estimate the pose of the face, we propose a novel feature for robust face recognition in the presence of diffuse lighting and pose variations. The proposed feature is developed using the spherical harmonic representation of the face texture-mapped onto a sphere; the texture map itself is generated by back-projecting the multiview video data. Video plays an important role in this scenario. First, it provides an automatic and efficient way for feature extraction. Second, the data redundancy renders the recognition algorithm more robust. We measure the similarity between feature sets from different videos using the reproducing kernel Hilbert space. We demonstrate that the proposed approach outperforms traditional algorithms on a multiview video database. PMID:24723517

  5. Efficient Detection of Occlusion prior to Robust Face Recognition

    PubMed Central

    Min, Rui; Hadid, Abdenour; Dugelay, Jean-Luc

    2014-01-01

    While there has been an enormous amount of research on face recognition under pose/illumination/expression changes and image degradations, problems caused by occlusions have attracted relatively less attention. Facial occlusions, due, for example, to sunglasses, hats/caps, scarves, and beards, can significantly deteriorate the performance of face recognition systems in uncontrolled environments such as video surveillance. The goal of this paper is to explore face recognition in the presence of partial occlusions, with an emphasis on real-world scenarios (e.g., sunglasses and scarves). We propose an efficient approach that consists of first analysing the presence of potential occlusions on a face and then conducting face recognition on the non-occluded facial regions based on selective local Gabor binary patterns. Experiments demonstrate that the proposed method outperforms state-of-the-art works including KLD-LGBPHS, S-LNMF, OA-LBP, and RSC. Furthermore, evaluations of the proposed approach under illumination changes and extreme facial expression changes also yield significant results. PMID:24526902

  7. A mosaicing scheme for pose-invariant face recognition.

    PubMed

    Singh, Richa; Vatsa, Mayank; Ross, Arun; Noore, Afzel

    2007-10-01

    Mosaicing entails the consolidation of information represented by multiple images through the application of a registration and blending procedure. We describe a face mosaicing scheme that generates a composite face image during enrollment based on the evidence provided by frontal and semiprofile face images of an individual. Face mosaicing obviates the need to store multiple face templates representing multiple poses of a user's face image. In the proposed scheme, the side profile images are aligned with the frontal image using a hierarchical registration algorithm that exploits neighborhood properties to determine the transformation relating the two images. Multiresolution splining is then used to blend the side profiles with the frontal image, thereby generating a composite face image of the user. A texture-based face recognition technique that is a slightly modified version of the C2 algorithm proposed by Serre et al. is used to compare a probe face image with the gallery face mosaic. Experiments conducted on three different databases indicate that face mosaicing, as described in this paper, offers significant benefits by accounting for the pose variations that are commonly observed in face images. PMID:17926704
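    The multiresolution splining step can be sketched with a small Laplacian-pyramid blend. The `blur`/`splice` helpers and the flat test patches below are illustrative stand-ins; the paper first registers real frontal and semiprofile images before blending them.

```python
import numpy as np

def blur(img):
    # Cheap separable 3-tap [1, 2, 1]/4 blur with edge padding.
    p = np.pad(img, 1, mode='edge')
    h = (p[1:-1, :-2] + 2 * p[1:-1, 1:-1] + p[1:-1, 2:]) / 4.0
    p = np.pad(h, 1, mode='edge')
    return (p[:-2, 1:-1] + 2 * p[1:-1, 1:-1] + p[2:, 1:-1]) / 4.0

def upsample(img, shape):
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]
    return blur(up)

def laplacian_pyramid(img, levels):
    gauss = [img]
    for _ in range(levels - 1):
        gauss.append(blur(gauss[-1])[::2, ::2])
    lap = [gauss[i] - upsample(gauss[i + 1], gauss[i].shape)
           for i in range(levels - 1)]
    lap.append(gauss[-1])  # coarsest level keeps the residual image
    return lap

def splice(a, b, mask, levels=4):
    """Blend image `a` (where mask==1) with `b` (where mask==0) by merging
    their Laplacian pyramids under a blurred mask pyramid."""
    la, lb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    gm = [mask.astype(float)]
    for _ in range(levels - 1):
        gm.append(blur(gm[-1])[::2, ::2])
    out = la[-1] * gm[-1] + lb[-1] * (1 - gm[-1])
    for i in range(levels - 2, -1, -1):
        out = upsample(out, la[i].shape) + la[i] * gm[i] + lb[i] * (1 - gm[i])
    return out

# Splice the left half of a bright patch onto a dark patch.
frontal = np.full((64, 64), 1.0)
profile = np.zeros((64, 64))
mask = np.zeros((64, 64)); mask[:, :32] = 1.0
composite = splice(frontal, profile, mask)
print(composite[0, 0] > composite[0, -1])  # smooth bright-to-dark transition
```

    Blending per pyramid level is what avoids the visible seam a hard cut-and-paste at the mask boundary would produce.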

  8. Generating virtual training samples for sparse representation of face images and face recognition

    NASA Astrophysics Data System (ADS)

    Du, Yong; Wang, Yu

    2016-03-01

    There are many challenges in face recognition. In real-world scenes, images of the same face vary with changing illumination, different expressions and poses, multiform ornaments, or even altered mental status. A limited number of available training samples cannot sufficiently convey these possible changes in the training phase, and this has become one of the obstacles to improving face recognition accuracy. In this article, we view the multiplication of two images of a face as a virtual face image with which to expand the training set, and devise a representation-based method to perform face recognition. The generated virtual samples reflect possible appearance and pose variations of the face. By multiplying a training sample with another sample from the same subject, we can strengthen the facial contour features and greatly suppress noise, so that more essential information is retained. Moreover, uncertainty in the training data is reduced as the number of training samples increases, which benefits the training phase. The devised representation-based classifier uses both the original and the newly generated samples to perform classification. In the classification phase, we first determine the K nearest training samples for the current test sample by calculating the Euclidean distances between the test sample and the training samples. Then, a linear combination of these selected training samples is used to represent the test sample, and the representation result is used to classify the test sample. The experimental results show that the proposed method outperforms some state-of-the-art face recognition methods.
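    A toy sketch of the pipeline on random vectors: virtual samples are formed by element-wise multiplication of two images of the same subject, and a K-nearest-neighbour linear-representation classifier assigns the class whose partial reconstruction of the probe has the smallest residual. All sizes and the least-squares solver are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy training set: 3 subjects x 2 images each, flattened to vectors in [0, 1].
n_subjects, per_subject, dim = 3, 2, 100
X = rng.random((n_subjects * per_subject, dim))
y = np.repeat(np.arange(n_subjects), per_subject)

# Virtual samples: element-wise product of two images of the same subject.
virt_X, virt_y = [], []
for s in range(n_subjects):
    a, b = X[y == s]
    virt_X.append(a * b)
    virt_y.append(s)
X_aug = np.vstack([X, np.array(virt_X)])
y_aug = np.concatenate([y, np.array(virt_y)])

def classify(test, X, y, k=4):
    # Keep the K nearest training samples (Euclidean distance)...
    d = np.linalg.norm(X - test, axis=1)
    idx = np.argsort(d)[:k]
    Xk, yk = X[idx], y[idx]
    # ...represent the test sample as their least-squares linear combination...
    coef, *_ = np.linalg.lstsq(Xk.T, test, rcond=None)
    # ...and assign the class whose partial reconstruction fits best.
    best, best_err = None, np.inf
    for s in np.unique(yk):
        recon = Xk[yk == s].T @ coef[yk == s]
        err = np.linalg.norm(test - recon)
        if err < best_err:
            best, best_err = s, err
    return int(best)

probe = X[0] + 0.01 * rng.standard_normal(dim)
print(classify(probe, X_aug, y_aug))
```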

  9. Can Massive but Passive Exposure to Faces Contribute to Face Recognition Abilities?

    ERIC Educational Resources Information Center

    Yovel, Galit; Halsband, Keren; Pelleg, Michel; Farkash, Naomi; Gal, Bracha; Goshen-Gottstein, Yonatan

    2012-01-01

    Recent studies have suggested that individuation of other-race faces is more crucial for enhancing recognition performance than exposure that involves categorization of these faces to an identity-irrelevant criterion. These findings were primarily based on laboratory training protocols that dissociated exposure and individuation by using…

  10. The Role of Higher Level Adaptive Coding Mechanisms in the Development of Face Recognition

    ERIC Educational Resources Information Center

    Pimperton, Hannah; Pellicano, Elizabeth; Jeffery, Linda; Rhodes, Gillian

    2009-01-01

    Developmental improvements in face identity recognition ability are widely documented, but the source of children's immaturity in face recognition remains unclear. Differences in the way in which children and adults visually represent faces might underlie immaturities in face recognition. Recent evidence of a face identity aftereffect (FIAE),…

  11. Face recognition with histograms of fractional differential gradients

    NASA Astrophysics Data System (ADS)

    Yu, Lei; Ma, Yan; Cao, Qi

    2014-05-01

    Fractional differentiation has been shown to enhance edge information and nonlinearly preserve textural detail in an image. This paper investigates its utility for face recognition and presents a local descriptor called histograms of fractional differential gradients (HFDG) to extract facial visual features. HFDG encodes a face image into gradient patterns using multi-orientation fractional differential masks, from which histograms of gradient directions are computed as the face representation. Experimental results on the Yale, face recognition technology (FERET), Carnegie Mellon University pose, illumination, and expression (CMU PIE), and A. Martinez and R. Benavente (AR) databases validate the feasibility of the proposed method and show that HFDG outperforms local binary patterns (LBP), histograms of oriented gradients (HOG), enhanced local directional patterns (ELDP), and Gabor feature-based methods.
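    A rough sketch of the HFDG idea, assuming the fractional masks are built from truncated Grünwald-Letnikov coefficients applied along the two image axes. The tap count, fractional order v, and bin count are illustrative; the paper's masks are multi-orientation 2-D masks.

```python
import numpy as np

def gl_coeffs(v, n):
    """First n Grunwald-Letnikov coefficients (-1)^k * C(v, k) for order v."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * -(v - k + 1) / k)
    return np.array(w)

def frac_gradient(img, v=0.5, taps=3):
    """Fractional differentials along x and y via 1-D correlation with the
    GL mask; returns magnitude and orientation maps."""
    w = gl_coeffs(v, taps)
    pad = taps - 1
    px = np.pad(img, ((0, 0), (pad, 0)), mode='edge')
    py = np.pad(img, ((pad, 0), (0, 0)), mode='edge')
    gx = sum(w[k] * px[:, pad - k: px.shape[1] - k] for k in range(taps))
    gy = sum(w[k] * py[pad - k: py.shape[0] - k, :] for k in range(taps))
    return np.hypot(gx, gy), np.arctan2(gy, gx)

def hfdg_histogram(img, v=0.5, bins=8):
    """Histogram of fractional-gradient directions, weighted by magnitude."""
    mag, ang = frac_gradient(img, v)
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-12)

rng = np.random.default_rng(3)
face = rng.random((32, 32))
h = hfdg_histogram(face)
print(h.shape, float(h.sum()))
```

    In a full descriptor these histograms would be computed per local cell and concatenated, as with HOG or LBP histograms.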

  12. Why the long face? The importance of vertical image structure for biological "barcodes" underlying face recognition.

    PubMed

    Spence, Morgan L; Storrs, Katherine R; Arnold, Derek H

    2014-07-29

    Humans are experts at face recognition. The mechanisms underlying this complex capacity are not fully understood. Recently, it has been proposed that face recognition is supported by a coarse-scale analysis of visual information contained in horizontal bands of contrast distributed along the vertical image axis: a biological facial "barcode" (Dakin & Watt, 2009). A critical prediction of the facial barcode hypothesis is that the distribution of image contrast along the vertical axis will be more important for face recognition than image distributions along the horizontal axis. Using a novel paradigm involving dynamic image distortions, a series of experiments is presented examining impairments in famous-face recognition caused by selectively disrupting image distributions along the vertical or horizontal image axes. Results show that disrupting the image distribution along the vertical image axis is more damaging to recognition than matched distortions along the horizontal axis. Consistent with the facial barcode hypothesis, these results suggest that human face recognition relies disproportionately on appropriately scaled distributions of image contrast along the vertical image axis.
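    The axis-specific manipulation can be sketched with a crude stand-in for the paper's dynamic distortions: permuting rows disturbs how contrast is distributed down the vertical axis (the hypothesised "barcode"), while permuting columns leaves the vertical contrast profile untouched.

```python
import numpy as np

rng = np.random.default_rng(4)
face = rng.random((64, 64))

def scramble_vertical(img, rng):
    # Permute rows: destroys how contrast is distributed DOWN the image.
    return img[rng.permutation(img.shape[0]), :]

def scramble_horizontal(img, rng):
    # Permute columns: disrupts the horizontal distribution instead.
    return img[:, rng.permutation(img.shape[1])]

def vertical_contrast_profile(img):
    # Contrast of each horizontal band, indexed down the vertical axis.
    return img.std(axis=1)

before = vertical_contrast_profile(face)
after_cols = vertical_contrast_profile(scramble_horizontal(face, rng))
# Column scrambling leaves each row's pixel multiset intact, so the profile
# along the vertical axis is untouched; row scrambling reorders it.
print(bool(np.allclose(before, after_cols)))
```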

  13. Part-based set matching for face recognition in surveillance

    NASA Astrophysics Data System (ADS)

    Zheng, Fei; Wang, Guijin; Lin, Xinggang

    2013-12-01

    Face recognition in surveillance is a hot topic in computer vision due to the strong demand for public security, and it remains a challenging task owing to large variations in camera viewpoint and illumination. In surveillance, image sets are the most natural form of input once tracking is incorporated. Recent advances in set-based matching also show its great potential for exploiting the feature space for face recognition by making use of multiple samples of each subject. In this paper, we propose a novel method that exploits salient facial parts (the eyes, nose, and mouth) in set-based matching. To represent image sets, we adopt the affine hull model, which can generate unseen appearances in the form of affine combinations of sample images. In our proposal, a robust part detector is first used to find four salient parts in each face image: the two eyes, the nose, and the mouth. For each part, we construct an affine hull model from the local binary pattern histograms of multiple samples of that part, and we construct a further affine hull model for the whole face region. Then, we find the closest distance between corresponding affine hull models to measure the similarity between parts/face regions, and a weighting scheme is introduced to combine the five distances (four parts and the whole face region) into a final distance between two subjects. In the recognition phase, a nearest neighbor classifier is used. Experiments on the public ChokePoint dataset and our own dataset demonstrate the superior performance of our method.
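    The closest-distance-between-hulls computation can be sketched as an unconstrained least-squares problem: write each hull as a mean plus a basis of the centred samples, then minimise the distance between a point on one hull and a point on the other. The synthetic parallel-hyperplane sets below are illustrative; the paper builds its hulls from LBP histograms of face parts.

```python
import numpy as np

def affine_hull(X):
    """Affine hull of the rows of X: mean plus an orthonormal basis of the
    span of the centred samples."""
    mu = X.mean(axis=0)
    U, s, _ = np.linalg.svd((X - mu).T, full_matrices=False)
    return mu, U[:, s > 1e-10]

def hull_distance(X1, X2):
    """Closest distance between the affine hulls of two sample sets,
    found by an unconstrained least-squares fit."""
    mu1, U1 = affine_hull(X1)
    mu2, U2 = affine_hull(X2)
    A = np.hstack([U1, -U2])
    coef, *_ = np.linalg.lstsq(A, mu2 - mu1, rcond=None)
    return np.linalg.norm((mu2 - mu1) - A @ coef)

rng = np.random.default_rng(7)
# Two sets of 4 points confined to parallel hyperplanes 5 apart in 4-D.
set1 = rng.random((4, 4)); set1[:, -1] = 0.0
set2 = rng.random((4, 4)); set2[:, -1] = 5.0
print(round(hull_distance(set1, set2), 6))  # → 5.0
```

    Unconstrained affine hulls can be overly permissive in practice, which is why the combination weights over parts matter for discriminability.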

  14. Recognition advantage of happy faces: tracing the neurocognitive processes.

    PubMed

    Calvo, Manuel G; Beltrán, David

    2013-09-01

    The present study aimed to identify the brain processes, and their time course, underlying the typical behavioral recognition advantage for happy facial expressions. To this end, we recorded EEG activity during an expression categorization task for happy, angry, fearful, sad, and neutral faces, and assessed the correlation between event-related potential (ERP) patterns and recognition performance. N170 (150-180 ms) was enhanced for angry, fearful, and sad faces; N2 was reduced and early posterior negativity (EPN; both 200-320 ms) was enhanced for happy and angry faces; P3b (350-450 ms) was reduced for happy and neutral faces; and the slow positive wave (SPW; 700-800 ms) was reduced for happy faces. This reveals (a) early processing (N170) of negative affective valence (i.e., angry, fearful, and sad), (b) discrimination (N2 and EPN) of affective intensity or arousal (i.e., angry and happy), and (c) facilitated categorization (P3b) and decision (SPW) due to expressive distinctiveness (i.e., happy). In addition, N2, EPN, P3b, and SPW were related to categorization accuracy and speed. This suggests that conscious expression recognition and the typical happy-face advantage depend on the encoding of expressive intensity and, especially, on later response selection, rather than on the early processing of affective valence.

  15. Neural and genetic foundations of face recognition and prosopagnosia.

    PubMed

    Grüter, Thomas; Grüter, Martina; Carbon, Claus-Christian

    2008-03-01

    Faces are of essential importance for human social life. They provide valuable information about the identity, expression, gaze, health, and age of a person. Recent face-processing models assume highly interconnected neural structures between different temporal, occipital, and frontal brain areas with several feedback loops. A selective deficit in the visual learning and recognition of faces is known as prosopagnosia, which can be found in both acquired and congenital form. Recently, a hereditary sub-type of congenital prosopagnosia with a very high prevalence rate of 2.5% has been identified. Recent research results show that hereditary prosopagnosia is a clearly circumscribed face-processing deficit with a characteristic set of clinical symptoms. Comparing the face processing of people with prosopagnosia with that of controls can help to develop a more conclusive and integrated model of face processing. Here, we provide a summary of the current state of face-processing research. We also describe the different types of prosopagnosia and present the set of typical symptoms found in the hereditary type. Finally, we discuss the implications for future face recognition research.

  16. Efficient live face detection to counter spoof attack in face recognition systems

    NASA Astrophysics Data System (ADS)

    Biswas, Bikram Kumar; Alam, Mohammad S.

    2015-03-01

    Face recognition is a critical tool used in almost all major biometrics based security systems. But recognition, authentication and liveness detection of the face of an actual user is a major challenge because an imposter or a non-live face of the actual user can be used to spoof the security system. In this research, a robust technique is proposed which detects liveness of faces in order to counter spoof attacks. The proposed technique uses a three-dimensional (3D) fast Fourier transform to compare spectral energies of a live face and a fake face in a mathematically selective manner. The mathematical model involves evaluation of energies of selective high frequency bands of average power spectra of both live and non-live faces. It also carries out proper recognition and authentication of the face of the actual user using the fringe-adjusted joint transform correlation technique, which has been found to yield the highest correlation output for a match. Experimental tests show that the proposed technique yields excellent results for identifying live faces.
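    A simplified 2-D sketch of the spectral-energy comparison: a re-imaged photo tends to be low-pass relative to a live capture, so the share of power-spectrum energy in a high-frequency band separates the two. The paper's actual method uses a 3-D FFT over video volumes and a fringe-adjusted joint transform correlator; the cutoff, frame sizes, and blur-based "spoof" here are illustrative.

```python
import numpy as np

def highfreq_energy_ratio(frames, cutoff=0.25):
    """Share of spectral energy above a radial frequency cutoff, averaged
    over the power spectra of a stack of frames."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(frames), axes=(-2, -1))) ** 2
    avg = spec.mean(axis=0)
    h, w = avg.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
    high = np.hypot(fy, fx) > cutoff
    return avg[high].sum() / avg.sum()

rng = np.random.default_rng(8)
live = rng.random((5, 64, 64))              # detail-rich "live" frames
# Simulate a re-imaged photo by heavy box-filter smoothing (circular conv).
kernel = np.ones((5, 5)) / 25.0
spoof = np.stack([
    np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(kernel, s=f.shape)))
    for f in live
])
print(highfreq_energy_ratio(live) > highfreq_energy_ratio(spoof))  # True
```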

  17. Three faces of self-face recognition: potential for a multi-dimensional diagnostic tool.

    PubMed

    Sugiura, Motoaki

    2015-01-01

    The recognition of self-face is a unique and complex phenomenon in many aspects, including its associated perceptual integration process, its emergence during development, and its socio-motivational effect. This may explain the failure of classical attempts to identify the cortical areas specifically responsive to self-face and designate them as a unique system related to 'self'. Neuroimaging findings regarding self-face recognition seem to be explained comprehensively by a recent forward-model account of the three categories of self: the physical, interpersonal, and social selves. Self-face-specific activation in the sensory and motor association cortices may reflect cognitive scrutiny due to prediction error or task-induced top-down attention in the physical internal schema related to the self-face. Self-face-specific deactivation in some amodal association cortices in the dorsomedial frontal and lateral posterior cortices may reflect adaptive suppression of the default recruitment of the social-response system during face recognition. Self-face-specific activation under a social context in the ventral aspect of the medial prefrontal cortex and the posterior cingulate cortex may reflect cognitive scrutiny of the internal schema related to the social value of the self. The multi-facet nature of self-face-specific activation may hold potential as the basis for a multi-dimensional diagnostic tool for the cognitive system. PMID:25450313

  18. Orienting to face expression during encoding improves men's recognition of own gender faces.

    PubMed

    Fulton, Erika K; Bulluck, Megan; Hertzog, Christopher

    2015-10-01

    It is unclear why women have superior episodic memory of faces, but the benefit may be partially the result of women engaging in superior processing of facial expressions. Therefore, we hypothesized that orienting instructions to attend to facial expression at encoding would significantly improve men's memory of faces and possibly reduce gender differences. We directed 203 college students (122 women) to study 120 faces under instructions to orient to either the person's gender or their emotional expression. They later took a recognition test of these faces by either judging whether they had previously studied the same person or that person with the exact same expression; the latter test evaluated recollection of specific facial details. Orienting to facial expressions during encoding significantly improved men's recognition of own-gender faces and eliminated the advantage that women had for male faces under gender orienting instructions. Although gender differences in spontaneous strategy use when orienting to faces cannot fully account for gender differences in face recognition, orienting men to facial expression during encoding is one way to significantly improve their episodic memory for male faces.

  19. Emotional Recognition in Autism Spectrum Conditions from Voices and Faces

    ERIC Educational Resources Information Center

    Stewart, Mary E.; McAdam, Clair; Ota, Mitsuhiko; Peppe, Sue; Cleland, Joanne

    2013-01-01

    The present study reports on a new vocal emotion recognition task and assesses whether people with autism spectrum conditions (ASC) perform differently from typically developed individuals on tests of emotional identification from both the face and the voice. The new test of vocal emotion contained trials in which the vocal emotion of the sentence…

  20. Impact of Intention on the ERP Correlates of Face Recognition

    ERIC Educational Resources Information Center

    Guillaume, Fabrice; Tiberghien, Guy

    2013-01-01

    The present study investigated the impact of study-test similarity on face recognition by manipulating, in the same experiment, the expression change (same vs. different) and the task-processing context (inclusion vs. exclusion instructions) as within-subject variables. Consistent with the dual-process framework, the present results showed that…

  1. Evolutionary-Rough Feature Selection for Face Recognition

    NASA Astrophysics Data System (ADS)

    Mazumdar, Debasis; Mitra, Soma; Mitra, Sushmita

    Elastic Bunch Graph Matching (EBGM) is a feature-based face recognition algorithm that has been used to determine facial attributes from an image. However, the dimension of the feature vectors in EBGM is quite high. Feature selection is a useful preprocessing step for reducing dimensionality, removing irrelevant data, improving learning accuracy, and enhancing output comprehensibility.

  2. An Inner Face Advantage in Children's Recognition of Familiar Peers

    ERIC Educational Resources Information Center

    Ge, Liezhong; Anzures, Gizelle; Wang, Zhe; Kelly, David J.; Pascalis, Olivier; Quinn, Paul C.; Slater, Alan M.; Yang, Zhiliang; Lee, Kang

    2008-01-01

    Children's recognition of familiar own-age peers was investigated. Chinese children (4-, 8-, and 14-year-olds) were asked to identify their classmates from photographs showing the entire face, the internal facial features only, the external facial features only, or the eyes, nose, or mouth only. Participants from all age groups were familiar with…

  3. Quaternion-Based Discriminant Analysis Method for Color Face Recognition

    PubMed Central

    Xu, Yong

    2012-01-01

    Pattern recognition techniques have been used to automatically recognize objects and personal identities, predict protein function and cancer category, identify lesions, perform product inspection, and so on. In this paper, we propose a novel quaternion-based discriminant method, which represents and classifies color images in a simple and mathematically tractable way. The proposed method is suitable for a large variety of real-world applications, such as color face recognition and classification of ground targets shown in multispectrum remote images. The method first uses quaternion numbers to denote the pixels of a color image and a quaternion vector to represent the whole image. It then uses the linear discriminant analysis algorithm to transform the quaternion vector into a lower-dimensional quaternion vector and classifies it in this space. Experimental results show that the proposed method obtains very high accuracy for color face recognition. PMID:22937054
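    The quaternion encoding of a colour image can be sketched as follows: each RGB pixel becomes a pure quaternion R*i + G*j + B*k with zero real part, and the image becomes a vector of such quaternions. The [w, x, y, z] component layout is an illustrative storage choice, and the subsequent quaternion LDA step is omitted.

```python
import numpy as np

def to_quaternion_vector(rgb):
    """Encode an HxWx3 colour image as a vector of pure quaternions,
    one per pixel: q = R*i + G*j + B*k (real part 0).
    Stored as an (N, 4) array of [w, x, y, z] components."""
    n = rgb.shape[0] * rgb.shape[1]
    q = np.zeros((n, 4))
    q[:, 1:] = rgb.reshape(n, 3)
    return q

def q_modulus(q):
    """Per-pixel quaternion modulus sqrt(w^2 + x^2 + y^2 + z^2)."""
    return np.linalg.norm(q, axis=1)

img = np.zeros((2, 2, 3))
img[0, 0] = [1.0, 0.0, 0.0]   # pure red pixel
img[1, 1] = [0.0, 3.0, 4.0]   # green-blue pixel
q = to_quaternion_vector(img)
print(q.shape, q_modulus(q))
```

    The appeal of this representation is that the three colour channels are carried jointly through any subsequent linear transform, rather than processed as three independent grayscale planes.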

  4. Undersampled face recognition via robust auxiliary dictionary learning.

    PubMed

    Wei, Chia-Po; Wang, Yu-Chiang Frank

    2015-06-01

    In this paper, we address the problem of robust face recognition with undersampled training data. Given only one or a few training images per subject, we present a novel recognition approach that not only handles test images with large intraclass variations, such as illumination and expression, but also handles images corrupted by occlusion or disguise not present during training. This is achieved by learning a robust auxiliary dictionary from subjects not of interest. Together with the undersampled training data, both intra- and interclass variations can thus be successfully handled, while unseen occlusions are automatically disregarded for improved recognition. Our experiments on four face image datasets confirm the effectiveness and robustness of our approach, which is shown to outperform state-of-the-art sparse representation-based methods.

  5. Quaternion-based discriminant analysis method for color face recognition.

    PubMed

    Xu, Yong

    2012-01-01

    Pattern recognition techniques have been used to automatically recognize objects and personal identities, predict protein function and cancer category, identify lesions, perform product inspection, and so on. In this paper we propose a novel quaternion-based discriminant method. This method represents and classifies color images in a simple and mathematically tractable way. The proposed method is suitable for a large variety of real-world applications, such as color face recognition and classification of ground targets shown in multispectral remote images. The method first uses quaternion numbers to denote the pixels in a color image and exploits a quaternion vector to represent the color image. It then uses the linear discriminant analysis algorithm to transform the quaternion vector into a lower-dimensional quaternion vector and classifies it in this space. The experimental results show that the proposed method can obtain very high accuracy for color face recognition. PMID:22937054

  6. Face Recognition in Unrestricted Posture using Invariant Image Information

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Jun'Ichi; Seike, Hiroshi

    In face recognition (face verification, facial expression analysis, etc.), a full or near-full face is generally used, and the face image is of roughly fixed size; in particular, the eyes, nose, and mouth are usually located from the upper to the lower part of the input image. To recognize a face in an arbitrary posture, however, it is important to remove the influence of the position and three-dimensional rotation of the face. The authors propose a method for detecting the face position in an unknown posture using invariant image information. First, we show that the spectrum obtained by a polar transform followed by a Fourier transform of the image is shift- and rotation-invariant, and is also invariant to shifts in depth. Next, we describe the detection of the face position in unrestricted posture using correlation of the spectra. In this paper, the proposed method is explained, and experimental results verifying its efficacy are presented.
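The invariance the scheme relies on can be demonstrated in one line: circularly shifting a signal changes only the phase of its DFT, never the magnitudes, so after the polar transform (which turns image rotation into a shift) the magnitude spectrum is rotation-invariant. A 1-D sketch of the shift-invariance property, not the authors' full pipeline:

```python
import numpy as np

# Circular shift leaves the Fourier magnitude spectrum unchanged:
# shifting multiplies each DFT coefficient by a unit-modulus phase factor.
x = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 0.0, 1.0, 2.0])
x_shifted = np.roll(x, 3)                 # circular shift by 3 samples

mag = np.abs(np.fft.fft(x))
mag_shifted = np.abs(np.fft.fft(x_shifted))
```

Correlating such magnitude spectra, rather than raw pixels, is what makes the face-position search tolerant of in-plane rotation.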

  7. Fixation patterns during recognition of personally familiar and unfamiliar faces.

    PubMed

    van Belle, Goedele; Ramon, Meike; Lefèvre, Philippe; Rossion, Bruno

    2010-01-01

    Previous studies recording eye gaze during face perception have rendered somewhat inconclusive findings with respect to fixation differences between familiar and unfamiliar faces. This can be attributed to a number of factors that differ across studies: the type and extent of familiarity with the faces presented, the definition of areas of interest subject to analyses, as well as a lack of consideration for the time course of scan patterns. Here we sought to address these issues by recording fixations in a recognition task with personally familiar and unfamiliar faces. After a first common fixation on a central superior location of the face in between features, suggesting initial holistic encoding, and a subsequent left eye bias, local features were focused and explored more for familiar than unfamiliar faces. Although the number of fixations did not differ for un-/familiar faces, the locations of fixations began to differ before familiarity decisions were provided. This suggests that in the context of familiarity decisions without time constraints, differences in processing familiar and unfamiliar faces arise relatively early - immediately upon initiation of the first fixation to identity-specific information - and that the local features of familiar faces are processed more than those of unfamiliar faces. PMID:21607074

  8. Face recognition using local gradient binary count pattern

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaochao; Lin, Yaping; Ou, Bo; Yang, Junfeng; Wu, Zhelun

    2015-11-01

    A local feature descriptor, the local gradient binary count pattern (LGBCP), is proposed for face recognition. Unlike some current methods that extract features directly from a face image in the spatial domain, LGBCP encodes the local gradient information of the face's texture in an effective way and provides a more discriminative code than other methods. We compute the gradient information of a face image through convolutions with compass masks. The gradient information is encoded using the local binary count operator. We divide a face into several subregions and extract the distribution of the LGBCP codes from each subregion. Then all the histograms are concatenated into a vector, which is used for face description. For recognition, the chi-square statistic is used to measure the similarity of different feature vectors. Besides directly calculating the similarity of two feature vectors, we provide a weighted matching scheme in which different weights are assigned to different subregions. The nearest-neighborhood classifier is exploited for classification. Experiments are conducted on the FERET, CAS-PEAL, and AR face databases. LGBCP achieves 96.15% on the Fb set of FERET. For CAS-PEAL, LGBCP gets 96.97%, 98.91%, and 90.89% on the aging, distance, and expression sets, respectively.
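Two building blocks of the abstract above can be sketched directly: the local binary count operator (here applied to raw intensities for brevity; the paper applies it to compass-mask gradient responses) and the chi-square histogram distance used for matching. Both functions below are illustrative simplifications.

```python
import numpy as np

def local_binary_count(img):
    """Local binary count: for each interior pixel, count how many of its
    8 neighbours are >= the centre value (codes 0..8). Border pixels are
    skipped for simplicity."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=int)
    centre = img[1:h - 1, 1:w - 1]
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            out += (neigh >= centre).astype(int)
    return out

def chi_square(h1, h2, eps=1e-12):
    """Chi-square statistic between two histograms, used as the
    dissimilarity measure between concatenated subregion histograms."""
    return float(((h1 - h2) ** 2 / (h1 + h2 + eps)).sum())

img = np.array([[9, 9, 9],
                [9, 1, 9],
                [9, 9, 9]], dtype=float)
codes = local_binary_count(img)   # centre pixel: all 8 neighbours are larger
```

In the full descriptor, code histograms from each subregion are concatenated and optionally weighted before the chi-square comparison.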

  9. Face recognition system for set-top box-based intelligent TV.

    PubMed

    Lee, Won Oh; Kim, Yeong Gon; Hong, Hyung Gil; Park, Kang Ryoung

    2014-01-01

    Despite the prevalence of smart TVs, many consumers continue to use conventional TVs with supplementary set-top boxes (STBs) because of the high cost of smart TVs. However, because the processing power of a STB is quite low, the smart TV functionalities that can be implemented in a STB are very limited. Because of this, negligible research has been conducted regarding face recognition for conventional TVs with supplementary STBs, even though many such studies have been conducted with smart TVs. In terms of camera sensors, previous face recognition systems have used high-resolution cameras, cameras with high magnification zoom lenses, or camera systems with panning and tilting devices that can be used for face recognition from various positions. However, these cameras and devices cannot be used in intelligent TV environments because of limitations related to size and cost, and only small, low cost web-cameras can be used. The resulting face recognition performance is degraded because of the limited resolution and quality levels of the images. Therefore, we propose a new face recognition system for intelligent TVs in order to overcome the limitations associated with low resource set-top box and low cost web-cameras. We implement the face recognition system using a software algorithm that does not require special devices or cameras. 
Our research has the following four novelties: first, the candidate regions in a viewer's face are detected in an image captured by a camera connected to the STB via low processing background subtraction and face color filtering; second, the detected candidate regions of face are transmitted to a server that has high processing power in order to detect face regions accurately; third, in-plane rotations of the face regions are compensated based on similarities between the left and right half sub-regions of the face regions; fourth, various poses of the viewer's face region are identified using five templates obtained during the initial user

  10. Face Recognition System for Set-Top Box-Based Intelligent TV

    PubMed Central

    Lee, Won Oh; Kim, Yeong Gon; Hong, Hyung Gil; Park, Kang Ryoung

    2014-01-01

    Despite the prevalence of smart TVs, many consumers continue to use conventional TVs with supplementary set-top boxes (STBs) because of the high cost of smart TVs. However, because the processing power of a STB is quite low, the smart TV functionalities that can be implemented in a STB are very limited. Because of this, negligible research has been conducted regarding face recognition for conventional TVs with supplementary STBs, even though many such studies have been conducted with smart TVs. In terms of camera sensors, previous face recognition systems have used high-resolution cameras, cameras with high magnification zoom lenses, or camera systems with panning and tilting devices that can be used for face recognition from various positions. However, these cameras and devices cannot be used in intelligent TV environments because of limitations related to size and cost, and only small, low cost web-cameras can be used. The resulting face recognition performance is degraded because of the limited resolution and quality levels of the images. Therefore, we propose a new face recognition system for intelligent TVs in order to overcome the limitations associated with low resource set-top box and low cost web-cameras. We implement the face recognition system using a software algorithm that does not require special devices or cameras. 
Our research has the following four novelties: first, the candidate regions in a viewer's face are detected in an image captured by a camera connected to the STB via low processing background subtraction and face color filtering; second, the detected candidate regions of face are transmitted to a server that has high processing power in order to detect face regions accurately; third, in-plane rotations of the face regions are compensated based on similarities between the left and right half sub-regions of the face regions; fourth, various poses of the viewer's face region are identified using five templates obtained during the initial user
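The first of the four steps above (low-cost, STB-side candidate detection) can be sketched as background subtraction followed by a color filter. All thresholds and the RGB "skin-tone box" bounds below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def candidate_face_mask(frame, background, rgb_low, rgb_high, diff_thresh=30):
    """Mark pixels that both (a) differ from the stored background by more
    than diff_thresh in some channel and (b) fall inside an RGB box that
    loosely approximates skin tones. Cheap enough for a low-power STB."""
    moving = np.abs(frame.astype(int) - background.astype(int)).max(axis=2) > diff_thresh
    in_box = np.all((frame >= np.array(rgb_low)) & (frame <= np.array(rgb_high)), axis=2)
    return moving & in_box

bg = np.zeros((4, 4, 3), dtype=np.uint8)
frame = bg.copy()
frame[1, 1] = (200, 150, 120)   # a "skin-coloured" moving pixel
frame[2, 2] = (0, 0, 255)       # moving, but not skin-coloured
mask = candidate_face_mask(frame, bg, rgb_low=(90, 40, 20), rgb_high=(255, 220, 180))
```

Regions surviving this mask would then be cropped and shipped to the server for the accurate (and expensive) face detection step.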

  11. New Robust Face Recognition Methods Based on Linear Regression

    PubMed Central

    Mi, Jian-Xun; Liu, Jin-Xing; Wen, Jiajun

    2012-01-01

    Nearest subspace (NS) classification based on the linear regression technique is a very straightforward and efficient method for face recognition. A recently developed NS method, linear regression-based classification (LRC), uses downsampled face images as features to perform face recognition. The basic assumption behind this kind of method is that samples from a certain class lie on their own class-specific subspace. Since there are only a few training samples for each class, the small sample size (SSS) problem arises, giving rise to misclassification in previous NS methods. In this paper, we propose two novel LRC methods using the idea that every class-specific subspace has its unique basis vectors. Thus, we consider each class-specific subspace to be spanned by two kinds of basis vectors: common basis vectors shared by many classes and class-specific basis vectors owned by one class only. Based on this concept, two classification methods, robust LRC 1 and 2 (RLRC 1 and 2), are given to achieve more robust face recognition. Unlike some previous methods which need to extract class-specific basis vectors, the proposed methods are developed merely on the basis of the existence of the class-specific basis vectors, without actually calculating them. Experiments on three well-known face databases demonstrate very good performance of the new methods compared with other state-of-the-art methods. PMID:22879992
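The baseline LRC scheme the paper builds on can be sketched compactly: regress the test vector onto each class's training columns and pick the class with the smallest reconstruction residual. This is the standard LRC idea, not the paper's robust variants:

```python
import numpy as np

def lrc_classify(test, class_samples):
    """Linear regression classification (LRC) sketch: model the test vector
    as a linear combination of each class's training vectors and choose the
    class with the smallest reconstruction residual.
    class_samples maps label -> (dim x n_c) matrix of training columns."""
    best, best_err = None, np.inf
    for label, X in class_samples.items():
        beta, *_ = np.linalg.lstsq(X, test, rcond=None)  # least-squares fit
        err = np.linalg.norm(test - X @ beta)            # residual to subspace
        if err < best_err:
            best, best_err = label, err
    return best

rng = np.random.default_rng(0)
A = rng.normal(size=(10, 3))                 # spans class "a" subspace
B = rng.normal(size=(10, 3))                 # spans class "b" subspace
probe = A @ np.array([0.5, -1.0, 2.0])       # lies exactly in class "a" span
```

The SSS problem discussed above appears here when n_c is too small for the columns of X to span the true class subspace; the RLRC variants address that without computing the shared/class-specific bases explicitly.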

  12. Anti Theft Mechanism Through Face recognition Using FPGA

    NASA Astrophysics Data System (ADS)

    Sundari, Y. B. T.; Laxminarayana, G.; Laxmi, G. Vijaya

    2012-11-01

    The use of a vehicle is a must for everyone, and at the same time, protection from theft is also very important. Prevention of vehicle theft can be done remotely by an authorized person. The location of the car can be found using GPS and GSM controlled by an FPGA. In this paper, face recognition is used to identify persons, and comparison is done with preloaded faces for authorization. The vehicle will start only when an authorized person's face is identified. In the event of a theft attempt or an unauthorized person's trial to drive the vehicle, an MMS/SMS will be sent to the owner along with the location. The authorized person can then alert security personnel to track and catch the vehicle. For face recognition, a Principal Component Analysis (PCA) algorithm is developed using MATLAB. The control technique for GPS and GSM is developed in VHDL on a Spartan-3E FPGA. The MMS sending method is written in VB6.0. The proposed application can be implemented, with some modifications, in systems wherever face recognition or detection is needed, such as airports, international borders, and banking applications.

  13. Objective 3D face recognition: Evolution, approaches and challenges.

    PubMed

    Smeets, Dirk; Claes, Peter; Vandermeulen, Dirk; Clement, John Gerald

    2010-09-10

    Face recognition is a natural human ability and a widely accepted identification and authentication method. In modern legal settings, a lot of credence is placed on identifications made by eyewitnesses. However, such identifications are based on human perception, which is often flawed and can lead to situations where identity is disputed. Therefore, there is a clear need to secure identifications in an objective way based on anthropometric measures. Anthropometry has existed for many years and has evolved with each advent of new technology and computing power. As a result, face recognition methodology has shifted from a purely 2D image-based approach to the use of 3D facial shape. However, one of the main challenges still remaining is the non-rigid structure of the face, which can change permanently over varying time-scales and briefly with facial expressions. The majority of face recognition methods have been developed by scientists with a very technical background such as biometry, pattern recognition and computer vision. This article strives to bridge the gap between these communities and the forensic science end-users. A concise review of face recognition using 3D shape is given. Methods using 3D shape applied to data embodying facial expressions are tabulated for reference. From this list a categorization of different strategies to deal with expressions is presented. The underlying concepts and practical issues relating to the application of each strategy are given, without going into technical details. The discussion clearly articulates the justification for establishing archival reference databases to compare and evaluate different strategies. PMID:20395086

  15. Face recognition system using multiple face model of hybrid Fourier feature under uncontrolled illumination variation.

    PubMed

    Hwang, Wonjun; Wang, Haitao; Kim, Hyunwoo; Kee, Seok-Cheol; Kim, Junmo

    2011-04-01

    The authors present a robust face recognition system for large-scale data sets taken under uncontrolled illumination variations. The proposed face recognition system consists of a novel illumination-insensitive preprocessing method, hybrid Fourier-based facial feature extraction, and a score fusion scheme. First, in the preprocessing stage, a face image is transformed into an illumination-insensitive image, called an "integral normalized gradient image," by normalizing and integrating the smoothed gradients of a facial image. Then, for feature extraction by complementary classifiers, multiple face models based upon hybrid Fourier features are applied. The hybrid Fourier features are extracted from different Fourier domains in different frequency bandwidths, and each feature is individually classified by linear discriminant analysis. In addition, multiple face models are generated from plural normalized face images that have different eye distances. Finally, to combine scores from the multiple complementary classifiers, a log-likelihood-ratio-based score fusion scheme is applied. The proposed system is evaluated using the Face Recognition Grand Challenge (FRGC) experimental protocols; FRGC is a large publicly available data set. Experimental results on the FRGC version 2.0 data sets show that the proposed method achieves an average verification rate of 81.49% on 2-D face images under various environmental variations such as illumination changes, expression changes, and elapsed time. PMID:20923738
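The "different frequency bandwidths" idea can be sketched as splitting a centred magnitude spectrum at a radius cutoff into a low band and a high band, each feeding its own classifier. The two-band split and raw magnitudes below are a simplification; the paper combines several Fourier domains and applies LDA per feature.

```python
import numpy as np

def fourier_band_features(img, cutoff):
    """Split the centred 2-D Fourier magnitude spectrum of img at radius
    `cutoff` (in frequency bins) into low-band and high-band feature
    vectors. Illustrative stand-in for the paper's hybrid Fourier features."""
    spec = np.fft.fftshift(np.abs(np.fft.fft2(img)))
    h, w = spec.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h // 2, xx - w // 2)   # distance from the DC component
    return spec[r <= cutoff], spec[r > cutoff]

img = np.outer(np.hanning(16), np.hanning(16))   # smooth 16x16 test image
low, high = fourier_band_features(img, cutoff=3.0)
```

Low-band features capture coarse facial structure and are relatively illumination-stable; high-band features carry finer texture, which is why fusing scores from both improves robustness.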

  16. Face recognition with the Karhunen-Loeve transform

    NASA Astrophysics Data System (ADS)

    Suarez, Pedro F.

    1991-12-01

    The major goal of this research was to investigate machine recognition of faces. The approach taken to achieve this goal was to investigate the use of the Karhunen-Loève Transform (KLT) by implementing flexible and practical code. The KLT utilizes the eigenvectors of the covariance matrix as a basis set. Faces were projected onto the eigenvectors, called eigenfaces, and the resulting projection coefficients were used as features. Face recognition accuracies for the KLT coefficients were superior to Fourier-based techniques. Additionally, this thesis demonstrated the image compression and reconstruction capabilities of the KLT. This thesis also developed the use of the KLT as a facial feature detector. The ability to differentiate between facial features provides a computer communications interface for non-vocal people with cerebral palsy. Lastly, this thesis developed a KLT-based axis system for laser scanner data of human heads. The scanner data axis system provides the anthropometric community a more precise method of fitting custom helmets.
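The KLT/eigenface pipeline described above reduces to: centre the face vectors, take the top-k eigenvectors of their covariance (computed here via SVD), and use the projection coefficients as features. A minimal sketch on random toy "faces":

```python
import numpy as np

def klt_features(faces, k):
    """KLT/eigenface sketch. faces is (n_samples x dim); returns the mean
    face, the top-k eigenvectors of the sample covariance ("eigenfaces",
    rows of Vt from the SVD of the centred data), and the projection
    coefficients used as recognition features."""
    mean = faces.mean(axis=0)
    centred = faces - mean
    _, _, Vt = np.linalg.svd(centred, full_matrices=False)
    basis = Vt[:k]                    # k orthonormal eigenfaces
    coeffs = centred @ basis.T        # k-dimensional feature vectors
    return mean, basis, coeffs

rng = np.random.default_rng(1)
faces = rng.normal(size=(6, 64))      # six toy 8x8 "faces", flattened
mean, basis, coeffs = klt_features(faces, k=3)
recon = mean + coeffs @ basis         # compressed reconstruction
```

The same projection gives both the recognition features and the compression/reconstruction capability the thesis demonstrates: `recon` is the best rank-k approximation of the centred data.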

  17. Multi-stream face recognition for crime-fighting

    NASA Astrophysics Data System (ADS)

    Jassim, Sabah A.; Sellahewa, Harin

    2007-04-01

    Automatic face recognition (AFR) is a challenging task that is increasingly becoming the preferred biometric trait for identification and has the potential of becoming an essential tool in the fight against crime and terrorism. Closed-circuit television (CCTV) cameras have increasingly been used over the last few years for surveillance in public places such as airports, train stations and shopping centers. They are used to detect and prevent crime, shoplifting, public disorder and terrorism. The work of law-enforcement and intelligence agencies is becoming more reliant on the use of databases of biometric data for large sections of the population. The face is one of the most natural biometric traits that can be used for identification and surveillance. However, variations in lighting conditions, facial expressions, face size and pose are a great obstacle to AFR. This paper is concerned with using wavelet-based face recognition schemes in the presence of variations of expression and illumination. In particular, we investigate the use of a combination of wavelet frequency channels for multi-stream face recognition, using various wavelet subbands as different face signal streams. The proposed schemes extend our recently developed face verification scheme for implementation on mobile devices. We present experimental results on the performance of our proposed schemes for a number of face databases, including a new AV database recorded on a PDA. By analyzing the various experimental data, we demonstrate that the multi-stream approach is more robust against variations in illumination and facial expression than the previous single-stream approach.
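The wavelet subbands that feed the separate streams can be produced by one level of a 2-D Haar transform, splitting the image into LL/LH/HL/HH bands. A library such as PyWavelets would normally be used; the dependency-free numpy version below is an illustrative sketch, not the paper's implementation.

```python
import numpy as np

def haar_subbands(img):
    """One level of a 2-D Haar wavelet transform: average/detail along rows,
    then along columns, yielding the LL, LH, HL, HH subbands. Each subband
    can then drive its own recognition stream."""
    a = (img[0::2, :] + img[1::2, :]) / 2   # row averages (low-pass)
    d = (img[0::2, :] - img[1::2, :]) / 2   # row details  (high-pass)
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar_subbands(img)
```

The LL band is a smoothed, illumination-dominated version of the face, while the detail bands emphasize edges; scoring them as separate streams and fusing the results is what gives the multi-stream scheme its robustness.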

  18. Always on My Mind? Recognition of Attractive Faces May Not Depend on Attention

    PubMed Central

    Silva, André; Macedo, António F.; Albuquerque, Pedro B.; Arantes, Joana

    2016-01-01

    Little research has examined what happens to attention and memory as a whole when humans see someone attractive. Hence, we investigated whether attractive stimuli gather more attention and are better remembered than unattractive stimuli. Participants took part in an attention task – in which matrices containing attractive and unattractive male naturalistic photographs were presented to 54 females, and measures of eye-gaze location and fixation duration using an eye-tracker were taken – followed by a recognition task. Eye-gaze was higher for the attractive stimuli compared to unattractive stimuli. Also, attractive photographs produced more hits and false recognitions than unattractive photographs which may indicate that regardless of attention allocation, attractive photographs produce more correct but also more false recognitions. We present an evolutionary explanation for this, as attending to more attractive faces but not always remembering them accurately and differentially compared with unseen attractive faces, may help females secure mates with higher reproductive value. PMID:26858683

  19. Neural Mechanism for Mirrored Self-face Recognition.

    PubMed

    Sugiura, Motoaki; Miyauchi, Carlos Makoto; Kotozaki, Yuka; Akimoto, Yoritaka; Nozawa, Takayuki; Yomogida, Yukihito; Hanawa, Sugiko; Yamamoto, Yuki; Sakuma, Atsushi; Nakagawa, Seishu; Kawashima, Ryuta

    2015-09-01

    Self-face recognition in the mirror is considered to involve multiple processes that integrate 2 perceptual cues: temporal contingency of the visual feedback on one's action (contingency cue) and matching with self-face representation in long-term memory (figurative cue). The aim of this study was to examine the neural bases of these processes by manipulating 2 perceptual cues using a "virtual mirror" system. This system allowed online dynamic presentations of real-time and delayed self- or other facial actions. Perception-level processes were identified as responses to only a single perceptual cue. The effect of the contingency cue was identified in the cuneus. The regions sensitive to the figurative cue were subdivided by the response to a static self-face, which was identified in the right temporal, parietal, and frontal regions, but not in the bilateral occipitoparietal regions. Semantic- or integration-level processes, including amodal self-representation and belief validation, which allow modality-independent self-recognition and the resolution of potential conflicts between perceptual cues, respectively, were identified in distinct regions in the right frontal and insular cortices. The results are supportive of the multicomponent notion of self-recognition and suggest a critical role for contingency detection in the co-emergence of self-recognition and empathy in infants.

  20. Locally linear regression for pose-invariant face recognition.

    PubMed

    Chai, Xiujuan; Shan, Shiguang; Chen, Xilin; Gao, Wen

    2007-07-01

    The variation of facial appearance due to viewpoint (pose) degrades face recognition systems considerably and is one of the bottlenecks in face recognition. One possible solution is to generate a virtual frontal view from any given nonfrontal view to obtain a virtual gallery/probe face. Following this idea, this paper proposes a simple but efficient novel locally linear regression (LLR) method, which generates the virtual frontal view from a given nonfrontal face image. We first justify the basic assumption of the paper that there exists an approximate linear mapping between a nonfrontal face image and its frontal counterpart. Then, by formulating the estimation of the linear mapping as a prediction problem, we present the regression-based solution, i.e., globally linear regression. To improve the prediction accuracy in the case of coarse alignment, LLR is further proposed. In LLR, we first perform dense sampling in the nonfrontal face image to obtain many overlapped local patches. Then, the linear regression technique is applied to each small patch to predict its virtual frontal patch. Through the combination of all these patches, the virtual frontal view is generated. The experimental results on the CMU PIE database show a distinct advantage of the proposed method over the Eigen light-field method.
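The globally linear regression step can be sketched as learning one matrix W that minimises ||frontal - W @ nonfrontal|| over training pairs of flattened face vectors; LLR proper applies the same fit to many small overlapping patches instead of the whole image. The toy dimensions below are illustrative:

```python
import numpy as np

def learn_frontal_mapping(nonfrontal, frontal):
    """Least-squares estimate of the linear map W with frontal ~ W @ nonfrontal.
    Columns of both matrices are corresponding training vectors (whole faces
    for global regression, patches for LLR)."""
    # Solve nonfrontal.T @ W.T ~ frontal.T column-block-wise.
    Wt, *_ = np.linalg.lstsq(nonfrontal.T, frontal.T, rcond=None)
    return Wt.T

rng = np.random.default_rng(2)
W_true = rng.normal(size=(5, 5))          # ground-truth mapping for the toy data
X = rng.normal(size=(5, 20))              # 20 nonfrontal training vectors
Y = W_true @ X                            # their exactly-linear frontal counterparts
W = learn_frontal_mapping(X, Y)

probe = rng.normal(size=5)                # an unseen nonfrontal vector
virtual_frontal = W @ probe               # predicted virtual frontal view
```

With exactly linear toy data the mapping is recovered perfectly; on real faces the relation is only approximately linear, which is precisely why the patch-wise (locally linear) refinement helps under coarse alignment.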

  1. Observation distance and recognition of photographs of celebrities' faces.

    PubMed

    Greene, Ernest; Fraser, Scott C

    2002-10-01

    Subjects were tested to assess the distance at which they could recognize the faces of celebrities (more specifically, a set of 44 portrait photographs of movie and television actors). The set of test photographs was shown initially at a distance of 200 ft. and then closer in increments of 20 ft. When the actor in a given photograph was identified, either by name, character role, or by the movie or television show in which the actor had starred, the recognition-distance was recorded and the photograph was removed from the test set. Those which were not recognized (even at the closest distance) were not included in the data summaries or statistical analysis. In calculating recognition-distance for each photograph, the values were adjusted to reflect the distance at which recognition would have occurred if all the faces were of normal size. The upper limit for recognition, as defined by the distance above which only 10% of the faces are identified, was just over 160 ft. for women, and just under 200 ft. for men. There was also a significant difference in mean recognition distance between women and men. The large range of recognition-distance (across photographs and across subjects) argues that the distance is not controlled primarily by the feature detail provided in a given photograph or by the discrimination and recall skills of the observer. More likely it is a function of diverse memory associations, so that the distance at which each photograph is recognized will depend on such factors as frequency and recency of exposure, perceived attractiveness, and how much the subject admires the celebrity. PMID:12434863

  2. Improving representation-based classification for robust face recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Hongzhi; Zhang, Zheng; Li, Zhengming; Chen, Yan; Shi, Jian

    2014-06-01

    The sparse representation classification (SRC) method proposed by Wright et al. is considered a breakthrough in face recognition because of its good performance. Nevertheless, it still cannot perfectly address the face recognition problem. The main reason is that the variation of poses, facial expressions, and illumination in facial images can be rather severe, and the number of available facial images is smaller than the dimensionality of the facial image, so a linear combination of all the training samples is not able to fully represent the test sample. In this study, we propose a novel framework to improve representation-based classification (RBC). The framework first runs the sparse representation algorithm and determines the unavoidable deviation between the test sample and the optimal linear combination of all the training samples. It then exploits the deviation and all the training samples to re-solve the linear combination coefficients. Finally, the classification rule, the training samples, and the renewed linear combination coefficients are used to classify the test sample. Generally, the proposed framework can work for most RBC methods. From the viewpoint of regression analysis, the proposed framework has solid theoretical soundness. Because it can, to an extent, identify the bias effect of the RBC method, it enables RBC to obtain more robust face recognition results. Experimental results on a variety of face databases demonstrate that the proposed framework can improve collaborative representation classification, SRC, and the nearest neighbor classifier.
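A heavily hedged sketch of the deviation idea on top of a (ridge-regularised) collaborative representation: (1) solve y ~ X a and record the deviation e = y - X a; (2) re-solve with e appended as an extra column so the bias term is explicit; (3) score each class by the residual using its own columns plus the deviation term. The solver, regulariser, and scoring rule here are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def deviation_aware_crc(y, X, labels, lam=0.01):
    """Illustrative deviation-augmented representation-based classifier.
    X is (dim x n) with one training column per sample; labels has length n."""
    def ridge(A, b):
        # Regularised least squares: (A^T A + lam I)^-1 A^T b
        return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

    a = ridge(X, y)
    e = y - X @ a                         # the "unavoidable deviation"
    Xe = np.hstack([X, e[:, None]])
    coef = ridge(Xe, y)                   # renewed combination coefficients
    scores = {}
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        recon = X[:, mask] @ coef[:-1][mask] + e * coef[-1]
        scores[c] = np.linalg.norm(y - recon)
    return min(scores, key=scores.get)    # smallest residual wins

rng = np.random.default_rng(3)
X = rng.normal(size=(12, 6))
labels = ["a", "a", "a", "b", "b", "b"]
y = 0.7 * X[:, 0] + 0.3 * X[:, 1]         # built from class "a" columns only
```

Because y lies in the span of class "a" columns, its class-"a" residual is near zero while the class-"b" residual stays large, so the classifier recovers the correct label.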

  3. Impact of intention on the ERP correlates of face recognition.

    PubMed

    Guillaume, Fabrice; Tiberghien, Guy

    2013-02-01

    The present study investigated the impact of study-test similarity on face recognition by manipulating, in the same experiment, the expression change (same vs. different) and the task-processing context (inclusion vs. exclusion instructions) as within-subject variables. Consistent with the dual-process framework, the present results showed that participants performed better on the inclusion task than on the exclusion task, with no response bias. A mid-frontal FN400 old/new effect and a parietal old/new effect were found in both tasks. However, modulations of the ERP old/new effects generated by the expression change on recognized faces differed across tasks. The modulations of the ERP old/new effects were proportional to the degree of matching between the study face and the recognition face in the inclusion task, but not in the exclusion task. The observed modulation of the FN400 old/new effect by the task instructions when familiarity and conceptual priming were kept constant indicates that these early ERP correlates of recognition depend on voluntary task-related control. The present results question the idea that FN400 reflects implicit memory processes such as conceptual priming and show that the extent to which the FN400 discriminates between conditions depends on the retrieval orientation at test. They are discussed in relation to recent controversies about the ERP correlates of familiarity in face recognition. This study suggests that while both conceptual and perceptual information can contribute to the familiarity signal reflected by the FN400 effect, their relative contributions vary with the task demands.

  4. Deep learning and face recognition: the state of the art

    NASA Astrophysics Data System (ADS)

    Balaban, Stephen

    2015-05-01

    Deep Neural Networks (DNNs) have established themselves as a dominant technique in machine learning. DNNs have been top performers on a wide variety of tasks, including image classification, speech recognition, and face recognition [1-3]. Convolutional neural networks (CNNs) have been used in nearly all of the top-performing methods on the Labeled Faces in the Wild (LFW) dataset [3-6]. In this talk and accompanying paper, I attempt to provide a review and summary of the deep learning techniques used in the state of the art. In addition, I highlight the need for both larger and more challenging public datasets to benchmark these systems. Despite the ability of DNNs and autoencoders to perform unsupervised feature learning, modern facial recognition pipelines still require domain-specific engineering in the form of re-alignment. For example, in Facebook's recent DeepFace paper, a 3D "frontalization" step lies at the beginning of the pipeline. This step creates a 3D face model for the incoming image and then uses a series of affine transformations of the fiducial points to "frontalize" the image. This step enables the DeepFace system to use a neural network architecture with locally connected layers without weight sharing, as opposed to standard convolutional layers [6]. Deep learning techniques combined with large datasets have allowed research groups to surpass human-level performance on the LFW dataset [3, 5]. The high accuracy (99.63% for FaceNet at the time of publishing) and utilization of outside data (hundreds of millions of images in the case of Google's FaceNet) suggest that current face verification benchmarks such as LFW may not be challenging enough, nor provide enough data, for current techniques [3, 5]. There exist a variety of organizations with mobile photo-sharing applications that would be capable of releasing a very large-scale and highly diverse dataset of facial images captured on mobile devices. Such an "ImageNet for Face Recognition" would likely receive a warm reception.

  5. Neural correlates of impaired emotional face recognition in cerebellar lesions.

    PubMed

    Adamaszek, Michael; Kirkby, Kenneth C; D'Agata, Fedrico; Olbrich, Sebastian; Langner, Sönke; Steele, Christopher; Sehm, Bernhard; Busse, Stefan; Kessler, Christof; Hamm, Alfons

    2015-07-10

    Clinical and neuroimaging data indicate a cerebellar contribution to emotional processing, which may account for affective-behavioral disturbances in patients with cerebellar lesions. We studied the neurophysiology of cerebellar involvement in recognition of emotional facial expression. Participants comprised eight patients with discrete ischemic cerebellar lesions and eight control patients without any cerebrovascular stroke. Event-related potentials (ERP) were used to measure responses to faces from the Karolinska Directed Emotional Faces Database (KDEF), interspersed in a stream of images with salient contents. Images of faces augmented N170 in both groups, but increased late positive potential (LPP) only in control patients without brain lesions. Dipole analysis revealed altered activation patterns for negative emotions in patients with cerebellar lesions, including activation of the left inferior prefrontal area to images of faces showing fear, contralateral to controls. Correlation analysis indicated that lesions of cerebellar area Crus I contribute to ERP deviations. Overall, our results implicate the cerebellum in integrating emotional information at different higher order stages, suggesting distinct cerebellar contributions to the proposed large-scale cerebral network of emotional face recognition. PMID:25912431

  6. Still-to-video face recognition in unconstrained environments

    NASA Astrophysics Data System (ADS)

    Wang, Haoyu; Liu, Changsong; Ding, Xiaoqing

    2015-02-01

    Face images from video sequences captured in unconstrained environments usually contain several kinds of variation, e.g. pose, facial expression, illumination, image resolution, and occlusion. Motion blur and compression artifacts also deteriorate recognition performance. Moreover, in many practical systems such as law enforcement, video surveillance, and e-passport identification, only a single still image per person is enrolled as the gallery set. Many existing methods may fail to work because of the variations in face appearance and the limited number of gallery samples. In this paper, we propose a novel approach to still-to-video face recognition in unconstrained environments. By assuming that faces from still images and video frames share the same identity space, a regularized least squares regression method is used to tackle the multi-modality problem. Regularization terms based on heuristic assumptions are introduced to avoid overfitting. To deal with the single-image-per-person problem, we exploit face variations learned from training sets to synthesize virtual samples for the gallery samples. We adopt a learning algorithm that combines an affine/convex hull-based approach with regularization to match image sets. Experimental results on a real-world dataset consisting of unconstrained video sequences demonstrate that our method clearly outperforms state-of-the-art methods.
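    The regularized least squares regression step can be illustrated with a small sketch. The closed-form ridge solution and the function names below are assumptions for illustration; the paper's actual regularization terms are heuristic and richer than a plain Frobenius-norm penalty.

    ```python
    import numpy as np

    def learn_mapping(V, S, lam=0.1):
        """Learn W minimizing ||W V - S||_F^2 + lam ||W||_F^2, mapping
        video-frame features (columns of V) into the still-image feature
        space (columns of S), so both modalities share one identity space."""
        d = V.shape[0]
        return S @ V.T @ np.linalg.inv(V @ V.T + lam * np.eye(d))

    def identify(frame, gallery, W):
        """Project one video-frame feature into still space and return the
        index of the nearest gallery (still) feature."""
        z = W @ frame
        dists = np.linalg.norm(gallery - z[:, None], axis=0)
        return int(np.argmin(dists))
    ```

    With a shared identity space learned this way, each incoming video frame can be matched against the single enrolled still per person by nearest-neighbor search.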

  8. IR Fringe Projection for 3D Face Recognition

    NASA Astrophysics Data System (ADS)

    Spagnolo, Giuseppe Schirripa; Cozzella, Lorenzo; Simonetti, Carla

    2010-04-01

    Facial recognition can be used to identify individuals or to verify identity, e.g. for access control. The process requires that facial data be captured and then compared with stored reference data. In contrast to traditional methods, which use 2D images to recognize human faces, this article applies a known shape-extraction methodology to the acquisition of 3D human faces, combined with a non-conventional optical system able to work in an "invisible" (infrared) way. The proposed method is experimentally simple and has a low-cost setup.

  9. Friends with Faces: How Social Networks Can Enhance Face Recognition and Vice Versa

    NASA Astrophysics Data System (ADS)

    Mavridis, Nikolaos; Kazmi, Wajahat; Toulis, Panos

    The "friendship" relation, a social relation among individuals, is one of the primary relations modeled in some of the world's largest online social networking sites, such as "Facebook." On the other hand, the "co-occurrence" relation among faces appearing in pictures is easily detectable using modern face detection techniques. These two relations, though belonging to different realms (social vs. visual-sensory), are strongly correlated: faces that co-occur in photos often belong to individuals who are friends. We use real-world data gathered from "Facebook" as part of the "FaceBots" project, which built the world's first physical robot that recognizes faces, converses, and can utilize and publish information on "Facebook." We present here methods as well as results for utilizing this correlation in both directions: algorithms that use knowledge of the social context for faster and better face recognition, and algorithms that estimate the friendship network of a number of individuals given photos containing their faces. The results are quite encouraging. In the primary example, a doubling of recognition accuracy as well as a sixfold improvement in speed is demonstrated. Various improvements, interesting statistics, and an empirical investigation leading to predictions of scalability to much larger datasets are discussed.

  10. Estimating missing tensor data by face synthesis for expression recognition

    NASA Astrophysics Data System (ADS)

    Tan, Huachun; Chen, Hao; Zhang, Jie

    2009-01-01

    In this paper, a new method of facial expression recognition is proposed for missing tensor data. The missing tensor data are estimated by facial expression synthesis in order to construct the full tensor, which is then used for multi-factorization face analysis. The full tensor allows the information in a given database to be used in full, and hence improves the performance of face analysis. Compared with the EM algorithm for missing-data estimation, the proposed method avoids the iterative process and reduces the estimation complexity. The proposed missing-tensor-data estimation is applied to expression recognition. The experimental results show that the proposed method performs better than using only the original, smaller tensor.

  11. Determination of candidate subjects for better recognition of faces

    NASA Astrophysics Data System (ADS)

    Wang, Xuansheng; Chen, Zhen; Teng, Zhongming

    2016-05-01

    In order to improve the accuracy of face recognition and address the problem of pose variation, we present an improved collaborative representation classification (CRC) algorithm that uses the original training samples together with their mirror images. First, the mirror images are generated from the original training samples. Second, both the original training samples and their mirror images are used simultaneously to represent the test sample via improved collaborative representation. Then, classes that are "close" to the test sample are coarsely selected as candidate classes. Finally, the candidate classes are used to represent the test sample again, and the class most similar to the test sample is determined precisely. The experimental results show that our proposed algorithm is more robust than the original CRC algorithm and can effectively improve the accuracy of face recognition.
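    A compact sketch of the coarse-to-fine idea, under the assumption that mirroring amounts to flipping each training image left-to-right; the regularized least squares solver below is a standard CRC stand-in, not the authors' exact formulation.

    ```python
    import numpy as np

    def crc_mirror_classify(imgs, labels, test, lam=0.01, n_candidates=2):
        """Coarse-to-fine CRC with a mirror-augmented training set (a sketch).

        imgs   : array (n, h, w) of training face images
        labels : length-n class labels
        test   : (h, w) test image
        """
        # Augment the dictionary with horizontally mirrored copies
        aug = np.concatenate([imgs, imgs[:, :, ::-1]], axis=0)
        aug_labels = np.concatenate([labels, labels])
        X = aug.reshape(len(aug), -1).T        # columns are vectorized images
        y = test.ravel()

        def residuals(Xd, lab):
            # Regularized least squares representation, then per-class residual
            c = np.linalg.solve(Xd.T @ Xd + lam * np.eye(Xd.shape[1]), Xd.T @ y)
            return {k: np.linalg.norm(y - Xd[:, lab == k] @ c[lab == k])
                    for k in np.unique(lab)}

        # Coarse stage: keep the classes "close" to the test sample
        r = residuals(X, aug_labels)
        keep = sorted(r, key=r.get)[:n_candidates]
        mask = np.isin(aug_labels, keep)
        # Fine stage: represent the test sample with candidate classes only
        r2 = residuals(X[:, mask], aug_labels[mask])
        return min(r2, key=r2.get)
    ```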

  12. An integrated modeling approach to age invariant face recognition

    NASA Astrophysics Data System (ADS)

    Alvi, Fahad Bashir; Pears, Russel

    2015-03-01

    This study proposes a novel method for face recognition based on anthropometric features, using an integrated approach that comprises a global and a personalized model. The system is aimed at situations where lighting, illumination, and pose variations cause problems for face recognition. A personalized model covers the individual aging patterns, while a global model captures the general aging patterns in the database. We introduce a de-aging factor that de-ages each individual in the database's test and training sets. We use the k-nearest-neighbor approach for building the personalized and global models, with regression analysis applied to build the models. During the test phase, we resort to voting on different features. We used the FG-NET database for evaluating our technique and achieved a 65% rank-1 identification rate.

  13. Sorted Index Numbers for Privacy Preserving Face Recognition

    NASA Astrophysics Data System (ADS)

    Wang, Yongjin; Hatzinakos, Dimitrios

    2009-12-01

    This paper presents a novel approach to changeable and privacy-preserving face recognition. We first introduce a new method of biometric matching using the sorted index numbers (SINs) of feature vectors. Since the transformation from the original features to the SIN vectors discards the exact feature values, it is noninvertible: none of the original values can be recovered from a template. To address the otherwise irrevocable nature of biometric signals while obtaining stronger privacy protection, a random projection-based method is employed in conjunction with the SIN approach to generate changeable and privacy-preserving biometric templates. The effectiveness of the proposed method is demonstrated on a large generic dataset containing images from several well-known face databases. Extensive experimentation shows that the proposed solution may also improve recognition accuracy.
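    The SIN idea is simple enough to sketch directly. The matching rule below (fraction of agreeing index positions) is one plausible choice, not necessarily the paper's; the random projection matrix plays the role of the changeable key, and replacing it revokes and reissues the template.

    ```python
    import numpy as np

    def sin_template(feature, key_matrix):
        """Changeable template: random-project the feature vector, then keep
        only the sorted index numbers (ranks). The argsort discards the exact
        projected values, so the template is noninvertible."""
        return np.argsort(key_matrix @ feature)

    def sin_score(t1, t2):
        """Similarity between two SIN templates: the fraction of positions
        whose index numbers agree."""
        return float(np.mean(t1 == t2))
    ```

    Genuine comparisons (same face, same key) keep most index positions intact, while impostor comparisons agree only at chance level.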

  14. Design and implementation of face recognition system based on Windows

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Liu, Ting; Li, Ailan

    2015-07-01

    Because the standard Windows password login is neither secure nor convenient to operate, we introduce a biometric technology, face recognition, into the computer login system. It not only secures the computer system but can also identify administrators at different privilege levels. System security is enhanced while users are spared both cumbersome password entry and the risk of password theft.

  15. Familiar and unfamiliar face recognition in crested macaques (Macaca nigra)

    PubMed Central

    Micheletta, Jérôme; Whitehouse, Jamie; Parr, Lisa A.; Marshman, Paul; Engelhardt, Antje; Waller, Bridget M.

    2015-01-01

    Many species use facial features to identify conspecifics, which is necessary to navigate a complex social environment. The fundamental mechanisms underlying face processing are starting to be well understood in a variety of primate species. However, most studies focus on a limited subset of species tested with unfamiliar faces. As well as limiting our understanding of how widely distributed across species these skills are, this also limits our understanding of how primates process faces of individuals they know, and whether social factors (e.g. dominance and social bonds) influence how readily they recognize others. In this study, socially housed crested macaques voluntarily participated in a series of computerized matching-to-sample tasks investigating their ability to discriminate (i) unfamiliar individuals and (ii) members of their own social group. The macaques performed above chance on all tasks. Familiar faces were not easier to discriminate than unfamiliar faces. However, the subjects were better at discriminating higher ranking familiar individuals, but not unfamiliar ones. This suggests that our subjects applied their knowledge of their dominance hierarchies to the pictorial representation of their group mates. Faces of high-ranking individuals garner more social attention, and therefore might be more deeply encoded than other individuals. Our results extend the study of face recognition to a novel species, and consequently provide valuable data for future comparative studies. PMID:26064665

  16. Emotion recognition: the role of featural and configural face information.

    PubMed

    Bombari, Dario; Schmid, Petra C; Schmid Mast, Marianne; Birri, Sandra; Mast, Fred W; Lobmaier, Janek S

    2013-01-01

    Several studies have investigated the role of featural and configural information in processing facial identity, but much less is known about their contribution to emotion recognition. In this study, we addressed this issue by inducing either a featural or a configural processing strategy (Experiment 1) and by investigating the attentional strategies in response to emotional expressions (Experiment 2). In Experiment 1, participants identified emotional expressions in faces that were presented in three different versions (intact, blurred, and scrambled) and in two orientations (upright and inverted). Blurred faces contain mainly configural information, and scrambled faces contain mainly featural information; inversion is known to selectively hinder configural processing. Analyses of the discriminability measure (A') and response times (RTs) revealed that configural processing plays a more prominent role in expression recognition than featural processing, but their relative contribution varies depending on the emotion. In Experiment 2, we qualified these differences between emotions by investigating the relative importance of specific features by means of eye movements. Participants had to match intact expressions with the emotional cues that preceded the stimulus. The analysis of eye movements confirmed that the recognition of different emotions relies on different types of information: while the mouth is important for the detection of happiness and fear, the eyes are more relevant for anger, fear, and sadness. PMID:23679155

  17. Using Regression to Measure Holistic Face Processing Reveals a Strong Link with Face Recognition Ability

    ERIC Educational Resources Information Center

    DeGutis, Joseph; Wilmer, Jeremy; Mercado, Rogelio J.; Cohan, Sarah

    2013-01-01

    Although holistic processing is thought to underlie normal face recognition ability, widely discrepant reports have recently emerged about this link in an individual differences context. Progress in this domain may have been impeded by the widespread use of subtraction scores, which lack validity due to their contamination with control condition…

  18. Effects of Lateral Reversal on Recognition Memory for Photographs of Faces.

    ERIC Educational Resources Information Center

    McKelvie, Stuart J.

    1983-01-01

    Examined recognition memory for photographs of faces in four experiments using students and adults. Results supported a feature (rather than Gestalt) model of facial recognition in which the two sides of the face are different in its memory representation. (JAC)

  19. Recognition of Faces in Unconstrained Environments: A Comparative Study

    NASA Astrophysics Data System (ADS)

    Ruiz-del-Solar, Javier; Verschae, Rodrigo; Correa, Mauricio

    2009-12-01

    The aim of this work is to carry out a comparative study of face recognition methods suitable for unconstrained environments. The analyzed methods were selected for their performance in former comparative studies, in addition to being real-time, requiring just one image per person, and being fully online. The study covers two local-matching methods (histograms of LBP features and Gabor jet descriptors), one holistic method (generalized PCA), and two image-matching methods (SIFT-based and ERCF-based). The methods are compared using the FERET, LFW, UCHFaceHRI, and FRGC databases, which allows them to be evaluated under real-world conditions that include variations in scale, pose, lighting, focus, resolution, facial expression, accessories, makeup, occlusions, background, and photographic quality. The main conclusions of this study are: the methods depend heavily on the amount of face and background information included in the face images, and the performance of all methods decreases considerably under outdoor illumination. The analyzed methods are, to a large degree, robust to inaccurate alignment, face occlusions, and variations in expression. LBP-based methods are an excellent choice when real-time operation as well as high recognition rates are needed.
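    The LBP features used by the local-matching methods can be sketched as follows. This is the basic 8-neighbor, 3x3 operator with a 256-bin histogram, without the uniform-pattern and multi-region refinements a full system would use.

    ```python
    import numpy as np

    def lbp_histogram(img):
        """Basic 3x3 LBP: threshold each pixel's 8 neighbours at the centre
        value, read the resulting bits as a byte, and histogram the codes.
        Local-matching face recognisers compare such histograms per region."""
        c = img[1:-1, 1:-1]
        neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                      img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                      img[2:, :-2], img[1:-1, :-2]]
        codes = np.zeros_like(c, dtype=np.uint8)
        for bit, n in enumerate(neighbours):
            codes |= ((n >= c) << bit).astype(np.uint8)
        hist = np.bincount(codes.ravel(), minlength=256)
        return hist / hist.sum()
    ```

    A face image is typically divided into a grid of regions, with one such histogram per region; the concatenated, normalized histograms are then compared with a histogram distance such as chi-square.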

  20. A novel polar-based human face recognition computational model.

    PubMed

    Zana, Y; Mena-Chalco, J P; Cesar, R M

    2009-07-01

    Motivated by a recently proposed biologically inspired face recognition approach, we investigated the relation between human behavior and a computational model based on Fourier-Bessel (FB) spatial patterns. We measured human recognition performance on FB-filtered face images using an 8-alternative forced-choice method. Test stimuli were generated by converting the images from the spatial to the FB domain, filtering the resulting coefficients with a band-pass filter, and finally taking the inverse FB transformation of the filtered coefficients. The performance of the computational models was tested using a simulation of the psychophysical experiment. In the FB model, face images were first filtered by simulated V1-type neurons and later analyzed globally for their content of FB components. In general, human contrast sensitivity was higher for radially than for angularly filtered images, but both functions peaked at the 11.3-16 frequency interval. The FB-based model presented similar behavior with regard to peak position and relative sensitivity, but had a wider frequency bandwidth and a narrower response range. The response patterns of two alternative models, based on local FB analysis and on raw luminance, strongly diverged from the human behavior patterns. These results suggest that human performance can be constrained by the type of information conveyed by polar patterns, and consequently that humans might use FB-like spatial patterns in face processing. PMID:19578643

  1. A motivational determinant of facial emotion recognition: regulatory focus affects recognition of emotions in faces.

    PubMed

    Sassenrath, Claudia; Sassenberg, Kai; Ray, Devin G; Scheiter, Katharina; Jarodzka, Halszka

    2014-01-01

    Two studies examined an unexplored motivational determinant of facial emotion recognition: observer regulatory focus. It was predicted that a promotion focus would enhance facial emotion recognition relative to a prevention focus, because the attentional strategies associated with a promotion focus enhance performance on well-learned or innate tasks such as facial emotion recognition. In Study 1, a promotion or a prevention focus was experimentally induced, and better facial emotion recognition was observed under a promotion focus than under a prevention focus. In Study 2, individual differences in chronic regulatory focus were assessed, and attention allocation was measured using eye tracking during the facial emotion recognition task. Results indicated that the positive relation between a promotion focus and facial emotion recognition is mediated by shorter fixation durations on the face, which reflect a pattern of attention allocation matched to the eager strategy of a promotion focus (i.e., striving to make hits). A prevention focus had an effect on neither perceptual processing nor facial emotion recognition. Taken together, these findings demonstrate important mechanisms and consequences of observer motivational orientation for facial emotion recognition.

  2. The Effect of Inversion on Face Recognition in Adults with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Hedley, Darren; Brewer, Neil; Young, Robyn

    2015-01-01

    Face identity recognition has widely been shown to be impaired in individuals with autism spectrum disorders (ASD). In this study we examined the influence of inversion on face recognition in 26 adults with ASD and 33 age and IQ matched controls. Participants completed a recognition test comprising upright and inverted faces. Participants with ASD…

  3. The Role of Active Exploration of 3D Face Stimuli on Recognition Memory of Facial Information

    ERIC Educational Resources Information Center

    Liu, Chang Hong; Ward, James; Markall, Helena

    2007-01-01

    Research on face recognition has mainly relied on methods in which observers are relatively passive viewers of face stimuli. This study investigated whether active exploration of three-dimensional (3D) face stimuli could facilitate recognition memory. A standard recognition task and a sequential matching task were employed in a yoked design.…

  4. Presentation attack detection for face recognition using light field camera.

    PubMed

    Raghavendra, R; Raja, Kiran B; Busch, Christoph

    2015-03-01

    The vulnerability of face recognition systems is a growing concern that has drawn interest from both the academic and research communities. Despite the availability of a broad range of face presentation attack detection (PAD) (or countermeasure, or anti-spoofing) schemes, no single PAD technique is superior, owing to the evolution of sophisticated presentation (spoof) attacks. In this paper, we present a new perspective on face presentation attack detection by introducing the light field camera (LFC). Since an LFC records the direction of each incoming ray in addition to its intensity, it has the unique characteristic of rendering multiple depth (or focus) images in a single capture. Thus, we present a novel approach that explores the variation of focus between the multiple depth (or focus) images rendered by the LFC, which in turn can be used to reveal presentation attacks. To this end, we first collected a new face artefact database using the LFC, comprising 80 subjects. Face artefacts were generated by simulating two widely used attacks: photo print and electronic screen. Extensive experiments carried out on the light field face artefact database reveal the outstanding performance of the proposed PAD scheme when benchmarked against various well-established state-of-the-art schemes. PMID:25622320

  5. Cortical Thickness in Fusiform Face Area Predicts Face and Object Recognition Performance

    PubMed Central

    McGugin, Rankin W.; Van Gulick, Ana E.; Gauthier, Isabel

    2016-01-01

    The fusiform face area (FFA) is defined by its selectivity for faces. Several studies have shown that the response of FFA to non-face objects can predict behavioral performance for these objects. However, one possible account is that experts pay more attention to objects in their domain of expertise, driving signals up. Here we show an effect of expertise with non-face objects in FFA that cannot be explained by differential attention to objects of expertise. We explore the relationship between cortical thickness of FFA and face and object recognition using the Cambridge Face Memory Test and Vanderbilt Expertise Test, respectively. We measured cortical thickness in functionally-defined regions in a group of men who evidenced functional expertise effects for cars in FFA. Performance with faces and objects together accounted for approximately 40% of the variance in cortical thickness of several FFA patches. While subjects with a thicker FFA cortex performed better with vehicles, those with a thinner FFA cortex performed better with faces and living objects. The results point to a domain-general role of FFA in object perception and reveal an interesting double dissociation that does not contrast faces and objects, but rather living and non-living objects. PMID:26439272

  6. T2 relaxation time correlates of face recognition deficits in temporal lobe epilepsy.

    PubMed

    Bengner, Thomas; Siemonsen, Susanne; Stodieck, Stefan; Fiehler, Jens

    2008-11-01

    This study explored structural correlates of immediate and delayed face recognition in 22 nonsurgical patients with nonlesional, unilateral mesial temporal lobe epilepsy (TLE, 10 left/12 right). We measured T2 relaxation time bilaterally in the hippocampus, the amygdala, and the fusiform gyrus. Apart from raised T2 values in the ipsilateral hippocampus, we found increased T2 values in the ipsilateral amygdala. Patients with right TLE exhibited impaired face recognition as a result of a decrease from immediate to delayed recognition. Higher T2 values in the right than left fusiform gyrus or hippocampus were related to worse immediate face recognition, but did not correlate with 24-hour face recognition. These preliminary results indicate that structural changes in the fusiform gyrus and hippocampus may influence immediate face recognition deficits, but have no linear influence on long-term face recognition in TLE. We suggest that long-term face recognition depends on a right hemispheric network encompassing structures outside the temporal lobe.

  8. Individual differences in adaptive coding of face identity are linked to individual differences in face recognition ability.

    PubMed

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Hayward, William G; Ewing, Louise

    2014-06-01

    Despite their similarity as visual patterns, we can discriminate and recognize many thousands of faces. This expertise has been linked to 2 coding mechanisms: holistic integration of information across the face and adaptive coding of face identity using norms tuned by experience. Recently, individual differences in face recognition ability have been discovered and linked to differences in holistic coding. Here we show that they are also linked to individual differences in adaptive coding of face identity, measured using face identity aftereffects. Identity aftereffects correlated significantly with several measures of face-selective recognition ability. They also correlated marginally with own-race face recognition ability, suggesting a role for adaptive coding in the well-known other-race effect. More generally, these results highlight the important functional role of adaptive face-coding mechanisms in face expertise, taking us beyond the traditional focus on holistic coding mechanisms. PMID:24684315

  9. Thermal-to-visible face recognition using multiple kernel learning

    NASA Astrophysics Data System (ADS)

    Hu, Shuowen; Gurram, Prudhvi; Kwon, Heesung; Chan, Alex L.

    2014-06-01

    Recognizing faces acquired in the thermal spectrum from a gallery of visible face images is a desired capability for the military and homeland security, especially for nighttime surveillance and intelligence gathering. However, thermal-to-visible face recognition is a highly challenging problem due to the large modality gap between thermal and visible imaging. In this paper, we propose a thermal-to-visible face recognition approach based on multiple kernel learning (MKL) with support vector machines (SVMs). We first subdivide the face into non-overlapping spatial regions, or blocks, using a method based on coalitional game theory. For comparison purposes, we also investigate uniform spatial subdivisions. Following this subdivision, histogram of oriented gradients (HOG) features are extracted from each block and used to compute a kernel for each region. We apply sparse multiple kernel learning (SMKL), an MKL-based approach that learns a set of sparse kernel weights, as well as the decision function of a one-vs-all SVM classifier, for each of the subjects in the gallery. We also apply equal (non-sparse) kernel weights to obtain one-vs-all SVM models for the same subjects. Only visible images of each subject are used for MKL training, while thermal images are used as probe images during testing. With the subdivision generated by game theory, we achieved a rank-1 identification rate of 50.7% for SMKL and 93.6% for equal kernel weighting on a multimodal dataset of 65 subjects. With uniform subdivisions, we achieved a rank-1 identification rate of 88.3% for SMKL but 92.7% for equal kernel weighting.
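    The per-block HOG feature extraction can be sketched as below. This simplified version (unsigned orientations, per-block L2 normalization, no cell/block overlap) is an assumption-laden stand-in for the full HOG pipeline used in the paper.

    ```python
    import numpy as np

    def hog_block_features(img, grid=(4, 4), bins=9):
        """Blockwise HOG-style features: split the image into a grid of
        non-overlapping blocks and histogram gradient orientations (weighted
        by gradient magnitude) inside each block."""
        gy, gx = np.gradient(img.astype(float))
        mag = np.hypot(gx, gy)
        ang = np.mod(np.arctan2(gy, gx), np.pi)     # unsigned orientation
        h, w = img.shape
        bh, bw = h // grid[0], w // grid[1]
        feats = []
        for i in range(grid[0]):
            for j in range(grid[1]):
                sl = (slice(i*bh, (i+1)*bh), slice(j*bw, (j+1)*bw))
                hist, _ = np.histogram(ang[sl], bins=bins, range=(0, np.pi),
                                       weights=mag[sl])
                feats.append(hist / (np.linalg.norm(hist) + 1e-8))
        return np.concatenate(feats)
    ```

    In the paper's setup, each block's feature vector would feed a separate kernel, with MKL learning how to weight the blocks.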

  10. Information Theory for Gabor Feature Selection for Face Recognition

    NASA Astrophysics Data System (ADS)

    Shen, Linlin; Bai, Li

    2006-12-01

    A discriminative and robust feature, the kernel-enhanced informative Gabor feature, is proposed in this paper for face recognition. Mutual information is applied to select a set of informative and nonredundant Gabor features, which are then further enhanced by kernel methods for recognition. Compared with one of the top-performing methods in the 2004 Face Verification Competition (FVC2004), our method demonstrates a clear advantage over existing methods in accuracy, computational efficiency, and memory cost. The proposed method has been fully tested on the FERET database using the FERET evaluation protocol, and significant improvements on three of the test data sets are observed. Compared with classical Gabor wavelet-based approaches that use a huge number of features, our method requires less than 4 milliseconds to retrieve a few hundred features. Due to the substantially reduced feature dimension, only 4 seconds are required to recognize 200 face images. The paper also unifies different Gabor filter definitions and proposes a training-sample generation algorithm to reduce the effects caused by the unbalanced numbers of samples available in different classes.
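    The informative-and-nonredundant selection step can be sketched as a greedy mutual-information ranking. This is an mRMR-style illustration over assumed discrete features, not the authors' exact algorithm:

```python
import numpy as np

def mutual_info(x, y):
    # Mutual information (in nats) between two discrete arrays,
    # estimated from their joint histogram.
    xs, xi = np.unique(x, return_inverse=True)
    ys, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xs), len(ys)))
    np.add.at(joint, (xi, yi), 1)
    p = joint / joint.sum()
    px = p.sum(1, keepdims=True)
    py = p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def select_informative(features, labels, k):
    # Greedy: favour features with high MI to the class label
    # (informative) and low MI to already-chosen features (nonredundant).
    chosen = []
    rest = list(range(features.shape[1]))
    while len(chosen) < k and rest:
        def score(j):
            rel = mutual_info(features[:, j], labels)
            red = max((mutual_info(features[:, j], features[:, c])
                       for c in chosen), default=0.0)
            return rel - red
        best = max(rest, key=score)
        chosen.append(best)
        rest.remove(best)
    return chosen
```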

  11. Log-Gabor Weber descriptor for face recognition

    NASA Astrophysics Data System (ADS)

    Li, Jing; Sang, Nong; Gao, Changxin

    2015-09-01

    The Log-Gabor transform, which is suitable for analyzing gradually changing data such as iris and face images, has been widely used in image processing, pattern recognition, and computer vision. In most cases, only the magnitude or the phase information of the Log-Gabor transform is considered. However, the complementary effect of combining magnitude and phase information simultaneously for image-feature extraction has not been systematically explored in existing works. We propose a local image descriptor for face recognition, called the Log-Gabor Weber descriptor (LGWD). The novelty of our LGWD is twofold: (1) to fully utilize the information in both the magnitude and phase features of the multiscale, multi-orientation Log-Gabor transform, we apply the Weber local binary pattern operator to each transform response; (2) the encoded Log-Gabor magnitude and phase information are fused at the feature level using a kernel canonical correlation analysis strategy, since feature-level fusion is effective when the modalities are correlated. Experimental results on the AR, Extended Yale B, and UMIST face databases, compared with those available from recent experiments reported in the literature, show that our descriptor yields better performance than state-of-the-art methods.

  12. Driver face recognition as a security and safety feature

    NASA Astrophysics Data System (ADS)

    Vetter, Volker; Giefing, Gerd-Juergen; Mai, Rudolf; Weisser, Hubert

    1995-09-01

    We present a driver face recognition system for comfortable access control and individual settings of automobiles. The primary goals are the prevention of car thefts and of heavy accidents caused by unauthorized use (joy-riders), as well as an increase in safety through optimal settings, e.g. of the mirrors and the seat position. The person sitting in the driver's seat is observed automatically by a small video camera in the dashboard. All the driver has to do is behave cooperatively, i.e. look into the camera. A classification system validates the driver's access. Only after a positive identification can the car be used, and the driver-specific environment (e.g. seat position, mirrors) be set up to ensure the driver's comfort and safety. The driver identification system has been integrated into a Volkswagen research car. Recognition results are presented.

  13. Facial emotion recognition deficits: The new face of schizophrenia.

    PubMed

    Behere, Rishikesh V

    2015-01-01

    Schizophrenia has classically been described as having positive, negative, and cognitive symptom dimensions. Emerging evidence strongly supports a fourth dimension of social cognitive symptoms, with facial emotion recognition deficits (FERD) representing a new face in our understanding of this complex disorder. FERD have been described as one of the important deficits in schizophrenia and could be trait markers for the disorder. FERD are associated with socio-occupational dysfunction and hence are of important clinical relevance. This review discusses FERD in schizophrenia, challenges in its assessment in our cultural context, its implications for understanding neurobiological mechanisms, and clinical applications. PMID:26600574

  14. Affect and face perception: odors modulate the recognition advantage of happy faces.

    PubMed

    Leppanen, Jukka M; Hietanen, Jari K

    2003-12-01

    Previous choice reaction time studies have provided consistent evidence for faster recognition of positive (e.g., happy) than negative (e.g., disgusted) facial expressions. A predominance of positive emotions in normal contexts may partly explain this effect. The present study used pleasant and unpleasant odors to test whether emotional context affects the happy face advantage. Results from 2 experiments indicated that happiness was recognized faster than disgust in a pleasant context, but this advantage disappeared in an unpleasant context because of the slow recognition of happy faces. Odors may modulate the functioning of those emotion-related brain structures that participate in the formation of the perceptual representations of the facial expressions and in the generation of the conceptual knowledge associated with the signaled emotion.

  15. Face recognition: Eigenface, elastic matching, and neural nets

    SciTech Connect

    Zhang, J.; Yan, Y.; Lades, M.

    1997-09-01

    This paper is a comparative study of three recently proposed algorithms for face recognition: eigenface, autoassociation and classification neural nets, and elastic matching. After these algorithms were analyzed under a common statistical decision framework, they were evaluated experimentally on four individual databases, each with a moderate subject size, and a combined database with more than a hundred different subjects. Analysis and experimental results indicate that the eigenface algorithm, which is essentially a minimum distance classifier, works well when lighting variation is small. Its performance deteriorates significantly as lighting variation increases. The elastic matching algorithm, on the other hand, is insensitive to lighting, face position, and expression variations and therefore is more versatile. The performance of the autoassociation and classification nets is upper bounded by that of the eigenface but is more difficult to implement in practice.

  16. Fashioning the Face: Sensorimotor Simulation Contributes to Facial Expression Recognition.

    PubMed

    Wood, Adrienne; Rychlowska, Magdalena; Korb, Sebastian; Niedenthal, Paula

    2016-03-01

    When we observe a facial expression of emotion, we often mimic it. This automatic mimicry reflects underlying sensorimotor simulation that supports accurate emotion recognition. Why this is so is becoming more obvious: emotions are patterns of expressive, behavioral, physiological, and subjective feeling responses. Activation of one component can therefore automatically activate other components. When people simulate a perceived facial expression, they partially activate the corresponding emotional state in themselves, which provides a basis for inferring the underlying emotion of the expresser. We integrate recent evidence in favor of a role for sensorimotor simulation in emotion recognition. We then connect this account to a domain-general understanding of how sensory information from multiple modalities is integrated to generate perceptual predictions in the brain. PMID:26876363

  17. A Comparative Study of 2D PCA Face Recognition Method with Other Statistically Based Face Recognition Methods

    NASA Astrophysics Data System (ADS)

    Senthilkumar, R.; Gnanamurthy, R. K.

    2016-09-01

    In this paper, two-dimensional principal component analysis (2D PCA) is compared with other algorithms used for image representation and face recognition: 1D PCA, Fisher discriminant analysis (FDA), independent component analysis (ICA), and kernel PCA (KPCA). As opposed to PCA, 2D PCA is based on 2D image matrices rather than 1D vectors, so the image matrix does not need to be transformed into a vector prior to feature extraction. Instead, an image covariance matrix is constructed directly from the original image matrices, and its eigenvectors are derived for image feature extraction. To test 2D PCA and evaluate its performance, a series of experiments is performed on three face image databases: the ORL, Senthil, and Yale face databases. The recognition rate across all trials was higher using 2D PCA than PCA, FDA, ICA, and KPCA. The experimental results also indicate that the extraction of image features is computationally more efficient using 2D PCA than PCA.
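    The 2D PCA construction described here is concrete enough to sketch. A minimal numpy version (illustrative only, not the authors' implementation) builds the image covariance matrix directly from the 2D images and projects each image onto its top eigenvectors:

```python
import numpy as np

def image_covariance(images):
    # Images stay as 2-D matrices; the image covariance matrix is
    # G = mean over the set of (A - mean)^T (A - mean), an n x n
    # matrix for h x n images (no vectorization needed).
    mean = images.mean(axis=0)
    G = np.zeros((images.shape[2], images.shape[2]))
    for A in images:
        D = A - mean
        G += D.T @ D
    return G / len(images)

def project_2dpca(images, d):
    # Feature matrix for each image: A @ W, where W holds the top-d
    # eigenvectors of the image covariance matrix.
    G = image_covariance(images)
    vals, vecs = np.linalg.eigh(G)   # eigenvalues in ascending order
    W = vecs[:, -d:]                 # top-d eigenvectors
    return np.array([A @ W for A in images]), W
```

    Note the eigenproblem is only n x n (the image width), which is why 2D PCA is cheaper than vectorized PCA on h*n-dimensional vectors.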

  18. Quality labeled faces in the wild (QLFW): a database for studying face recognition in real-world environments

    NASA Astrophysics Data System (ADS)

    Karam, Lina J.; Zhu, Tong

    2015-03-01

    The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.

  19. Cortical Thickness in Fusiform Face Area Predicts Face and Object Recognition Performance.

    PubMed

    McGugin, Rankin W; Van Gulick, Ana E; Gauthier, Isabel

    2016-02-01

    The fusiform face area (FFA) is defined by its selectivity for faces. Several studies have shown that the response of FFA to nonface objects can predict behavioral performance for these objects. However, one possible account is that experts pay more attention to objects in their domain of expertise, driving signals up. Here, we show an effect of expertise with nonface objects in FFA that cannot be explained by differential attention to objects of expertise. We explore the relationship between cortical thickness of FFA and face and object recognition using the Cambridge Face Memory Test and Vanderbilt Expertise Test, respectively. We measured cortical thickness in functionally defined regions in a group of men who evidenced functional expertise effects for cars in FFA. Performance with faces and objects together accounted for approximately 40% of the variance in cortical thickness of several FFA patches. Whereas participants with a thicker FFA cortex performed better with vehicles, those with a thinner FFA cortex performed better with faces and living objects. The results point to a domain-general role of FFA in object perception and reveal an interesting double dissociation that does not contrast faces and objects but rather living and nonliving objects. PMID:26439272

  1. Face recognition using 4-PSK joint transform correlation

    NASA Astrophysics Data System (ADS)

    Moniruzzaman, Md.; Alam, Mohammad S.

    2016-04-01

    This paper presents an efficient phase-encoded and 4-phase-shift-keying (PSK)-based fringe-adjusted joint transform correlation (FJTC) technique for face recognition applications. The proposed technique uses phase encoding and a 4-channel phase shifting method on the reference image, which can be pre-calculated without affecting the system processing speed. The 4-channel PSK step eliminates the unwanted zero-order term and the autocorrelation among multiple similar input-scene objects, while yielding enhanced cross-correlation output. For each channel, discrete wavelet decomposition preprocessing has been used to accommodate the impact of various 3D facial expressions, noise, and illumination variations. The performance of the proposed technique has been tested on various image datasets, such as Yale and Extended Yale B, under different conditions such as illumination variation and 3D changes in facial expression. The test results show that the proposed technique yields significantly better performance than existing JTC-based face recognition techniques.
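    For context, the baseline the authors improve on can be sketched: a classical joint transform correlator places reference and scene side by side, Fourier-transforms the plane, and inverse-transforms the joint power spectrum. This illustrative numpy sketch retains the zero-order term that the proposed 4-PSK scheme is designed to eliminate; it is not the authors' implementation:

```python
import numpy as np

def jtc_correlation(reference, scene):
    # Classical JTC: reference and scene side by side in the input
    # plane; the squared magnitude of the 2-D Fourier transform is the
    # joint power spectrum, whose inverse transform contains the
    # zero-order (auto) terms at the centre and cross-correlation peaks
    # at the separation offset between the two images.
    h, w = reference.shape
    plane = np.zeros((h, 4 * w))
    plane[:, :w] = reference
    plane[:, 2 * w:3 * w] = scene
    jps = np.abs(np.fft.fft2(plane)) ** 2
    return np.fft.fftshift(np.abs(np.fft.ifft2(jps)))
```

    A matching scene produces a much sharper cross-correlation peak (away from the central zero-order region) than a non-matching one, which is the decision signal a JTC-based recognizer thresholds.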

  2. Infrared face recognition based on binary particle swarm optimization and SVM-wrapper model

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Liu, Guodong

    2015-10-01

    Infrared facial imaging, being light-independent and not vulnerable to skin condition, expression, or posture, can avoid or limit the drawbacks of face recognition in visible light. Robust feature selection and representation is a key issue for infrared face recognition research. This paper proposes a novel infrared face recognition method based on local binary patterns (LBP). LBP can improve the robustness of infrared face recognition under different environmental conditions. How to make full use of the discriminative ability of LBP patterns is an important problem. A search algorithm combining binary particle swarm optimization with an SVM wrapper is used to find the most discriminative subset of LBP features. Experimental results show that the proposed method outperforms traditional LBP-based infrared face recognition methods and significantly improves recognition performance.
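    As a hedged sketch of the basic LBP operator the method builds on (illustration only; the paper's PSO/SVM feature-subset search is not reproduced), each pixel is encoded by thresholding its 8 neighbours against the centre:

```python
import numpy as np

def lbp_image(gray):
    # Basic 3x3 LBP: for each interior pixel, set one bit per neighbour
    # that is >= the centre value, giving an 8-bit code per pixel.
    g = np.asarray(gray, dtype=float)
    c = g[1:-1, 1:-1]
    shifts = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]  # clockwise neighbours
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = g[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code
```

    Histograms of these codes over face blocks are the features a wrapper search would then select among.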

  3. Using regression to measure holistic face processing reveals a strong link with face recognition ability.

    PubMed

    DeGutis, Joseph; Wilmer, Jeremy; Mercado, Rogelio J; Cohan, Sarah

    2013-01-01

    Although holistic processing is thought to underlie normal face recognition ability, widely discrepant reports have recently emerged about this link in an individual differences context. Progress in this domain may have been impeded by the widespread use of subtraction scores, which lack validity due to their contamination with control condition variance. Regressing, rather than subtracting, a control condition from a condition of interest corrects this validity problem by statistically removing all control condition variance, thereby producing a specific measure that is uncorrelated with the control measure. Using 43 participants, we measured the relationships amongst the Cambridge Face Memory Test (CFMT) and two holistic processing measures, the composite task (CT) and the part-whole task (PW). For the holistic processing measures (CT and PW), we contrasted the results for regressing vs. subtracting the control conditions (parts for PW; misaligned congruency effect for CT) from the conditions of interest (wholes for PW; aligned congruency effect for CT). The regression-based holistic processing measures correlated with each other and with CFMT, supporting the idea of a unitary holistic processing mechanism that is involved in skilled face recognition. Subtraction scores yielded weaker correlations, especially for the PW. Together, the regression-based holistic processing measures predicted more than twice the amount of variance in CFMT (R² = .21) than their respective subtraction measures (R² = .10). We conclude that holistic processing is robustly linked to skilled face recognition. In addition to confirming this theoretically significant link, these results provide a case in point for the inappropriateness of subtraction scores when requiring a specific individual differences measure that removes the variance of a control task.
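    The statistical point here is easy to demonstrate. A minimal numpy sketch (illustrative only) residualizes the condition of interest on the control condition; unlike a subtraction score, the residual is exactly uncorrelated with the control measure:

```python
import numpy as np

def residualize(score, control):
    # Regress the control condition out of the condition of interest.
    # The OLS residual is orthogonal to the control measure, which a
    # simple subtraction score (score - control) is not.
    X = np.column_stack([np.ones_like(control), control])
    beta, *_ = np.linalg.lstsq(X, score, rcond=None)
    return score - X @ beta
```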

  4. Ambient temperature normalization for infrared face recognition based on the second-order polynomial model

    NASA Astrophysics Data System (ADS)

    Wang, Zhengzi

    2015-08-01

    The influence of ambient temperature is a big challenge to robust infrared face recognition. This paper proposes a new ambient temperature normalization algorithm to improve the performance of infrared face recognition under variable ambient temperatures. Based on statistical regression theory, a second-order polynomial model is learned to describe the impact of ambient temperature on the infrared face image. The infrared image is then normalized to a reference ambient temperature using this model. Finally, the normalization method is applied to infrared face recognition to verify its efficiency. The experiments demonstrate that the proposed temperature normalization method is feasible and can significantly improve the robustness of infrared face recognition.
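    A minimal sketch of the idea, assuming a per-pixel quadratic model of intensity versus ambient temperature fit by least squares (an illustration, not the authors' code):

```python
import numpy as np

def fit_temperature_model(temps, images):
    # Per-pixel quadratic fit: I(x, T) ~ a(x) T^2 + b(x) T + c(x),
    # learned from training images captured at known temperatures.
    T = np.asarray(temps, dtype=float)
    X = np.column_stack([T ** 2, T, np.ones_like(T)])
    flat = images.reshape(len(T), -1)
    coef, *_ = np.linalg.lstsq(X, flat, rcond=None)
    return coef.reshape(3, *images.shape[1:])  # (a, b, c) maps

def normalize_to(image, t_src, t_ref, coef):
    # Shift an image from its capture temperature to the reference one
    # by moving along the fitted per-pixel polynomial.
    a, b, c = coef
    pred_src = a * t_src ** 2 + b * t_src + c
    pred_ref = a * t_ref ** 2 + b * t_ref + c
    return image - pred_src + pred_ref
```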

  5. Knowledge scale effects in face recognition: an electrophysiological investigation.

    PubMed

    Abdel Rahman, Rasha; Sommer, Werner

    2012-03-01

    Although the amount or scale of biographical knowledge held in store about a person may differ widely, little is known about whether and how these differences may affect the retrieval processes triggered by the person's face. In a learning paradigm, we manipulated the scale of biographical knowledge while controlling for a common set of minimal knowledge and perceptual experience with the faces. A few days after learning, and again after 6 months, knowledge effects were assessed in three tasks, none of which concerned the additional knowledge. Whereas the performance effects of additional knowledge were small, event-related brain potentials recorded during testing showed amplitude modulations in the time range of the N400 component (indicative of knowledge access), but also at a much earlier latency in the P100 component (reflecting early stages of visual analysis). However, no effects were found in the N170 component, which is taken to reflect structural analyses of faces. The present findings replicate knowledge scale effects in object recognition and suggest that enhanced knowledge affects both early visual processes and the later processes associated with semantic processing, even when this knowledge is not task-relevant.

  6. The impact of specular highlights on 3D-2D face recognition

    NASA Astrophysics Data System (ADS)

    Christlein, Vincent; Riess, Christian; Angelopoulou, Elli; Evangelopoulos, Georgios; Kakadiaris, Ioannis

    2013-05-01

    One of the most popular forms of biometrics is face recognition. Face recognition techniques typically assume that a face exhibits Lambertian reflectance. However, a face often exhibits prominent specularities, especially in outdoor environments. These specular highlights can compromise identity authentication. In this work, we analyze the impact of such highlights on a 3D-2D face recognition system. First, we investigate three different specularity removal methods as preprocessing steps for face recognition. Then, we explicitly model facial specularities within the face recognition system using the Cook-Torrance reflectance model. In our experiments, specularity removal increases the recognition rate on an outdoor face database by about 5% at a false alarm rate (FAR) of 10^-3. The integration of the Cook-Torrance model further improves these results, increasing the verification rate by 19% at a FAR of 10^-3.

  7. Influence of Emotional Facial Expressions on 3-5-Year-Olds' Face Recognition

    ERIC Educational Resources Information Center

    Freitag, Claudia; Schwarzer, Gudrun

    2011-01-01

    Three experiments examined 3- and 5-year-olds' recognition of faces in constant and varied emotional expressions. Children were asked to identify repeatedly presented target faces, distinguishing them from distractor faces, during an immediate recognition test and during delayed assessments after 10 min and one week. Emotional facial expression…

  8. The Effects of Inversion and Familiarity on Face versus Body Cues to Person Recognition

    ERIC Educational Resources Information Center

    Robbins, Rachel A.; Coltheart, Max

    2012-01-01

    Extensive research has focused on face recognition, and much is known about this topic. However, much of this work seems to be based on an assumption that faces are the most important aspect of person recognition. Here we test this assumption in two experiments. We show that when viewers are forced to choose, they "do" use the face more than the…

  9. The effect of inversion on face recognition in adults with autism spectrum disorder.

    PubMed

    Hedley, Darren; Brewer, Neil; Young, Robyn

    2015-05-01

    Face identity recognition has widely been shown to be impaired in individuals with autism spectrum disorders (ASD). In this study we examined the influence of inversion on face recognition in 26 adults with ASD and 33 age and IQ matched controls. Participants completed a recognition test comprising upright and inverted faces. Participants with ASD performed worse than controls on the recognition task but did not show an advantage for inverted face recognition. Both groups directed more visual attention to the eye than the mouth region and gaze patterns were not found to be associated with recognition performance. These results provide evidence of a normal effect of inversion on face recognition in adults with ASD.

  10. Formal Implementation of a Performance Evaluation Model for the Face Recognition System

    PubMed Central

    Shin, Yong-Nyuo; Kim, Jason; Lee, Yong-Jun; Shin, Woochang; Choi, Jin-Young

    2008-01-01

    Due to its usability, practical applications, and lack of intrusiveness, face recognition technology, based on information derived from individuals' facial features, has been attracting considerable attention recently. Reported recognition rates of commercialized face recognition systems cannot be accepted as official recognition rates, as they are based on assumptions that are beneficial to the specific system and face database. Therefore, performance evaluation methods and tools are necessary to objectively measure the accuracy and performance of any face recognition system. In this paper, we propose and formalize a performance evaluation model for biometric recognition systems, implementing an evaluation tool for face recognition systems based on the proposed model. Furthermore, we performed evaluations objectively by providing guidelines for the design and implementation of a performance evaluation system, formalizing the performance test process. PMID:18317524
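    The basic quantities such an evaluation tool reports can be computed directly from match scores. A minimal illustration (not the paper's formal model) of false acceptance and false rejection rates at a fixed decision threshold:

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    # FRR: fraction of genuine comparisons whose score falls below the
    # acceptance threshold (wrongly rejected).
    # FAR: fraction of impostor comparisons whose score reaches the
    # threshold (wrongly accepted).
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    frr = float((genuine < threshold).mean())
    far = float((impostor >= threshold).mean())
    return far, frr
```

    Sweeping the threshold and plotting FAR against FRR (or 1 - FRR) yields the DET/ROC curves on which systems are objectively compared.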

  11. A framework for the recognition of 3D faces and expressions

    NASA Astrophysics Data System (ADS)

    Li, Chao; Barreto, Armando

    2006-04-01

    Face recognition technology has been a focus in both academia and industry for the last couple of years because of its wide potential applications and its importance in meeting the security needs of today's world. Most of the systems developed are based on 2D face recognition technology, which uses pictures for data processing. With the development of 3D imaging technology, 3D face recognition emerges as an alternative to overcome the difficulties inherent in 2D face recognition, i.e., sensitivity to illumination conditions and to the orientation and positioning of the subject. But 3D face recognition still needs to tackle the problem of deformation of facial geometry that results from the expression changes of a subject. To deal with this issue, a 3D face recognition framework is proposed in this paper. It is composed of three subsystems: an expression recognition system, a system for the identification of faces with expression, and a neutral face recognition system. A system for the recognition of faces with one type of expression (happiness) and neutral faces was implemented and tested on a database of 30 subjects. The results demonstrated the feasibility of this framework.

  12. Local ICA for the Most Wanted face recognition

    NASA Astrophysics Data System (ADS)

    Guan, Xin; Szu, Harold H.; Markowitz, Zvi

    2000-04-01

    Facial disguises of FBI Most Wanted criminals are inevitable and anticipated in our design of automatic/aided target recognition (ATR) imaging systems. For example, a man's facial hair may hide his mouth and chin but not necessarily the nose and eyes. Sunglasses will cover the eyes but not the nose, mouth, and chin. This fact motivates us to build sets of independent component analysis bases separately for each facial region of the entire alleged criminal group. Then, given an alleged criminal face, collective votes ('yes', 'no', 'abstain') are obtained from all facial regions and tallied for a potential alarm. Moreover, an innocent outsider should fall below the alarm threshold and be allowed to pass the checkpoint. A PD versus FAR (ROC) curve is thus obtained.

  13. New nonlinear features for inspection, robotics, and face recognition

    NASA Astrophysics Data System (ADS)

    Casasent, David P.; Talukder, Ashit

    1999-10-01

    Classification of real-time X-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items (pistachio nuts). This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k-nearest-neighbor classifier. In this work, the MRDF is applied to standard features (rather than iconic data). The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC (receiver operating characteristic) data. Other applications of these new feature spaces in robotics and face recognition are also noted.

  14. Face recognition using multiple maximum scatter difference discrimination dictionary learning

    NASA Astrophysics Data System (ADS)

    Zhu, Yanyong; Dong, Jiwen; Li, Hengjian

    2015-10-01

    Based on multiple maximum scatter difference discrimination dictionary learning, a novel face recognition algorithm is proposed. The dictionary used for sparse coding plays a key role in sparse representation classification. In this paper, a multiple maximum scatter difference discriminant criterion is used for dictionary learning. During dictionary learning, the multiple maximum scatter difference criterion computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The proposed algorithm is theoretically elegant and easy to compute. Extensive experiments on the AR database and the Extended Yale B database, in comparison with basic sparse representation and other classification methods, show that its performance is slightly better than that of the original sparse representation methods, with lower complexity.

  15. Facial soft biometric features for forensic face recognition.

    PubMed

    Tome, Pedro; Vera-Rodriguez, Ruben; Fierrez, Julian; Ortega-Garcia, Javier

    2015-12-01

    This paper proposes a functional feature-based approach useful for real forensic casework, based on the shape, orientation, and size of facial traits, which can be considered a soft biometric approach. The motivation of this work is to provide a set of facial features that can be understood by non-experts such as judges and that support the work of forensic examiners who, in practice, carry out a thorough manual comparison of face images, paying special attention to the similarities and differences in shape and size of various facial traits. This new approach constitutes a tool that automatically converts a set of facial landmarks to a set of features (shape and size) corresponding to facial regions of forensic value. These features are furthermore evaluated in a population to generate statistics to support forensic examiners. The proposed features can also be used as additional information to improve the performance of traditional face recognition systems. These features follow the forensic methodology and are obtained in both a continuous and a discrete manner from raw images. A statistical analysis is also carried out to study the stability, discrimination power, and correlation of the proposed facial features on two realistic databases: MORPH and ATVS Forensic DB. Finally, the performance of both continuous and discrete features is analyzed using different similarity measures. Experimental results show high discrimination power and good recognition performance, especially for continuous features. A final fusion of the best system configurations achieves rank-10 match results of 100% for the ATVS database and 75% for the MORPH database, demonstrating the benefits of using this information in practice.

  16. Non-intrusive gesture recognition system combining with face detection based on Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Jin, Jing; Wang, Yuanqing; Xu, Liujing; Cao, Liqun; Han, Lei; Zhou, Biye; Li, Minggao

    2014-11-01

    A non-intrusive gesture recognition human-machine interaction system is proposed in this paper. In order to solve the hand-positioning problem, a difficulty for current algorithms, face detection is used as a preprocessing step to narrow the search area and find the user's hand quickly and accurately. A Hidden Markov Model (HMM) is used for gesture recognition: a number of basic gesture units are trained as HMM models. At the same time, an improved 8-direction feature vector is proposed and used to quantify gesture characteristics in order to improve detection accuracy. The proposed system can be applied to interactive equipment without special training for users, such as household interactive televisions.

  17. Face learning and the emergence of view-independent face recognition: an event-related brain potential study.

    PubMed

    Zimmermann, Friederike G S; Eimer, Martin

    2013-06-01

    Recognizing unfamiliar faces is more difficult than familiar face recognition, and this has been attributed to qualitative differences in the processing of familiar and unfamiliar faces. Familiar faces are assumed to be represented by view-independent codes, whereas unfamiliar face recognition depends mainly on view-dependent low-level pictorial representations. We employed an electrophysiological marker of visual face recognition processes in order to track the emergence of view-independence during the learning of previously unfamiliar faces. Two face images showing either the same or two different individuals in the same or two different views were presented in rapid succession, and participants had to perform an identity-matching task. On trials where both faces showed the same view, repeating the face of the same individual triggered an N250r component at occipito-temporal electrodes, reflecting the rapid activation of visual face memory. A reliable N250r component was also observed on view-change trials. Crucially, this view-independence emerged as a result of face learning. In the first half of the experiment, N250r components were present only on view-repetition trials but were absent on view-change trials, demonstrating that matching unfamiliar faces was initially based on strictly view-dependent codes. In the second half, the N250r was triggered not only on view-repetition trials but also on view-change trials, indicating that face recognition had now become more view-independent. This transition may be due to the acquisition of abstract structural codes of individual faces during face learning, but could also reflect the formation of associative links between sets of view-specific pictorial representations of individual faces.

  18. Supervised orthogonal discriminant subspace projects learning for face recognition.

    PubMed

    Chen, Yu; Xu, Xiao-Hong

    2014-02-01

    In this paper, a new linear dimension reduction method called supervised orthogonal discriminant subspace projection (SODSP) is proposed, which addresses the high dimensionality of data and the small sample size problem. More specifically, given a set of data points in the ambient space, a novel weight matrix that describes the relationship between the data points is first built. In order to model the manifold structure, the class information is incorporated into the weight matrix. Based on this weight matrix, local and non-local scatter matrices are defined such that the neighborhood structure can be preserved. To enhance recognition ability, we impose an orthogonality constraint on a graph-based maximum margin analysis, seeking a projection that maximizes the difference, rather than the ratio, between the non-local scatter and the local scatter. In this way, SODSP naturally avoids the singularity problem. Further, we develop an efficient and stable algorithm for implementing SODSP, especially on high-dimensional data sets. Moreover, theoretical analysis shows that LPP is a special instance of SODSP under certain constraints. Experiments on the ORL, Yale, Extended Yale B and FERET face databases are performed to test and evaluate the proposed algorithm. The results demonstrate the effectiveness of SODSP.
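    The difference-rather-than-ratio criterion is what sidesteps singularity: maximizing tr(W' (S_nonlocal - S_local) W) under an orthogonality constraint needs only an eigendecomposition of the scatter difference, never a matrix inverse. A minimal numpy sketch of that core step follows; it is not the authors' full SODSP pipeline (the class-aware weight-matrix construction is omitted), just the projection step under those assumptions.

    ```python
    import numpy as np

    def difference_projection(S_local, S_nonlocal, d):
        """Orthogonal projection W maximizing tr(W.T @ (S_nonlocal - S_local) @ W).

        Because the criterion is a difference of scatters rather than a ratio,
        no matrix inversion is required, so the small-sample-size singularity
        problem never arises."""
        D = S_nonlocal - S_local
        D = (D + D.T) / 2  # enforce symmetry for a stable eigendecomposition
        evals, evecs = np.linalg.eigh(D)
        # eigh returns eigenvalues in ascending order; keep the top-d eigenvectors
        W = evecs[:, np.argsort(evals)[::-1][:d]]
        return W  # columns are orthonormal: W.T @ W = I
    ```

    By contrast, a ratio criterion such as Fisher's would require inverting the local scatter, which is singular whenever the sample count is below the data dimensionality.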

  19. Automatic recognition of facial movement for paralyzed face.

    PubMed

    Wang, Ting; Dong, Junyu; Sun, Xin; Zhang, Shu; Wang, Shengke

    2014-01-01

    Facial nerve paralysis is a common disease due to nerve damage. Most approaches for evaluating the degree of facial paralysis rely on a set of different facial movements as commanded by doctors. Therefore, automatic recognition of the patterns of facial movement is fundamental to the evaluation of the degree of facial paralysis. In this paper, a novel method named Active Shape Models plus Local Binary Patterns (ASMLBP) is presented for recognizing facial movement patterns. Firstly, the Active Shape Models (ASMs) are used in the method to locate facial key points. According to these points, the face is divided into eight local regions. Then the descriptors of these regions are extracted by using Local Binary Patterns (LBP) to recognize the patterns of facial movement. The proposed ASMLBP method is tested on both the collected facial paralysis database with 57 patients and another publicly available database named the Japanese Female Facial Expression (JAFFE). Experimental results demonstrate that the proposed method is efficient for both paralyzed and normal faces.
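    The per-region descriptors in a pipeline like the one above are typically histograms of Local Binary Pattern codes. The following is a minimal numpy sketch of the basic 8-neighbour LBP operator and a region histogram descriptor; it is a generic illustration, and the paper's exact sampling scheme and eight-region layout are not reproduced here.

    ```python
    import numpy as np

    def lbp_image(gray):
        """Basic 8-neighbour Local Binary Patterns: each interior pixel gets an
        8-bit code, one bit per neighbour that is >= the centre pixel."""
        g = np.asarray(gray, dtype=np.int32)
        c = g[1:-1, 1:-1]
        # neighbours in a fixed clockwise order starting at the top-left
        offsets = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
        code = np.zeros_like(c)
        for bit, (i, j) in enumerate(offsets):
            nb = g[i:i + c.shape[0], j:j + c.shape[1]]
            code |= (nb >= c).astype(np.int32) << bit
        return code

    def lbp_histogram(gray, bins=256):
        """Region descriptor: normalized histogram of LBP codes."""
        h, _ = np.histogram(lbp_image(gray), bins=bins, range=(0, bins))
        return h / max(h.sum(), 1)
    ```

    Concatenating the histograms of the eight ASM-defined facial regions would then give the feature vector used for movement-pattern classification.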

  20. A reciprocal model of face recognition and autistic traits: evidence from an individual differences perspective.

    PubMed

    Halliday, Drew W R; MacDonald, Stuart W S; Scherf, K Suzanne; Tanaka, James W

    2014-01-01

    Although not a core symptom of the disorder, individuals with autism often exhibit selective impairments in their face processing abilities. Importantly, the reciprocal connection between autistic traits and face perception has rarely been examined within the typically developing population. In this study, university participants from the social sciences, physical sciences, and humanities completed a battery of measures that assessed face, object and emotion recognition abilities, general perceptual-cognitive style, and sub-clinical autistic traits (the Autism Quotient (AQ)). We employed separate hierarchical multiple regression analyses to evaluate which factors could predict face recognition scores and AQ scores. Gender, object recognition performance, and AQ scores predicted face recognition behaviour. Specifically, males, individuals with more autistic traits, and those with lower object recognition scores performed more poorly on the face recognition test. Conversely, university major, gender and face recognition performance reliably predicted AQ scores. Science majors, males, and individuals with poor face recognition skills showed more autistic-like traits. These results suggest that the broader autism phenotype is associated with lower face recognition abilities, even among typically developing individuals.

  3. An event-related brain potential study of explicit face recognition.

    PubMed

    Gosling, Angela; Eimer, Martin

    2011-07-01

    To determine the time course of face recognition and its links to face-sensitive event-related potential (ERP) components, ERPs elicited by faces of famous individuals and ERPs to non-famous control faces were compared in a task that required explicit judgements of facial identity. As expected, the face-selective N170 component was unaffected by the difference between famous and non-famous faces. In contrast, the occipito-temporal N250 component was linked to face recognition, as it was selectively triggered by famous faces. Importantly, this component was present for famous faces that were judged to be definitely known relative to famous faces that just appeared familiar, demonstrating that it is associated with the explicit identification of a particular face. The N250 is likely to reflect early perceptual stages of face recognition where long-term memory traces of familiar faces in ventral visual cortex are activated by matching on-line face representations. Famous faces also triggered a broadly distributed longer-latency positivity (P600f) that showed a left-hemisphere bias and was larger for definitely known faces, suggesting links between this component and name generation. These results show that successful face recognition is predicted by ERP components over face-specific visual areas that emerge within 230 ms after stimulus onset.

  4. Structural attributes of the temporal lobe predict face recognition ability in youth.

    PubMed

    Li, Jun; Dong, Minghao; Ren, Aifeng; Ren, Junchan; Zhang, Jinsong; Huang, Liyu

    2016-04-01

    The face recognition ability varies across individuals. However, it remains elusive how brain anatomical structure is related to face recognition ability in healthy subjects. In this study, we adopted voxel-based morphometry analysis and a machine learning approach to investigate the neural basis of individual face recognition ability using anatomical magnetic resonance imaging. We demonstrated that the gray matter volume (GMV) of the right ventral anterior temporal lobe (vATL), an area sensitive to face identity, is significantly positively correlated with face recognition ability as measured by the Cambridge face memory test (CFMT) score. Furthermore, a predictive model established by balanced cross-validation combined with linear regression revealed that right vATL GMV can predict subjects' face recognition ability. However, subjects' CFMT scores could not be predicted by the GMV of the core regions of the face processing network, including the right occipital face area (OFA) and the right fusiform face area (FFA). Our results suggest that the right vATL may play an important role in face recognition and might provide insight into the neural mechanisms underlying face recognition deficits in patients with pathophysiological conditions such as prosopagnosia.

  5. Face recognition ability matures late: evidence from individual differences in young adults.

    PubMed

    Susilo, Tirta; Germine, Laura; Duchaine, Bradley

    2013-10-01

    Does face recognition ability mature early in childhood (early maturation hypothesis) or does it continue to develop well into adulthood (late maturation hypothesis)? This fundamental issue in face recognition is typically addressed by comparing child and adult participants. However, the interpretation of such studies is complicated by children's inferior test-taking abilities and general cognitive functions. Here we examined the developmental trajectory of face recognition ability in an individual differences study of 18-33 year-olds (n = 2,032), an age interval in which participants are competent test takers with comparable general cognitive functions. We found a positive association between age and face recognition, controlling for nonface visual recognition, verbal memory, sex, and own-race bias. Our study supports the late maturation hypothesis in face recognition, and illustrates how individual differences investigations of young adults can address theoretical issues concerning the development of perceptual and cognitive abilities.

  6. Orientation and Affective Expression Effects on Face Recognition in Williams Syndrome and Autism

    ERIC Educational Resources Information Center

    Rose, Fredric E.; Lincoln, Alan J.; Lai, Zona; Ene, Michaela; Searcy, Yvonne M.; Bellugi, Ursula

    2007-01-01

    We sought to clarify the nature of the face processing strength commonly observed in individuals with Williams syndrome (WS) by comparing the face recognition ability of persons with WS to that of persons with autism and to healthy controls under three conditions: Upright faces with neutral expressions, upright faces with varying affective…

  7. The Cambridge Face Memory Test for Children (CFMT-C): a new tool for measuring face recognition skills in childhood.

    PubMed

    Croydon, Abigail; Pimperton, Hannah; Ewing, Louise; Duchaine, Brad C; Pellicano, Elizabeth

    2014-09-01

    Face recognition ability follows a lengthy developmental course, not reaching maturity until well into adulthood. Valid and reliable assessments of face recognition memory ability are necessary to examine patterns of ability and disability in face processing, yet there is a dearth of such assessments for children. We modified a well-known test of face memory in adults, the Cambridge Face Memory Test (Duchaine & Nakayama, 2006, Neuropsychologia, 44, 576-585), to make it developmentally appropriate for children. To establish its utility, we administered either the upright or inverted versions of the computerised Cambridge Face Memory Test - Children (CFMT-C) to 401 children aged between 5 and 12 years. Our results show that the CFMT-C is sufficiently sensitive to demonstrate age-related gains in the recognition of unfamiliar upright and inverted faces, does not suffer from ceiling or floor effects, generates robust inversion effects, and is capable of detecting difficulties in face memory in children diagnosed with autism. Together, these findings indicate that the CFMT-C constitutes a new valid assessment tool for children's face recognition skills.

  8. The effect of gaze direction on three-dimensional face recognition in infants.

    PubMed

    Yamashita, Wakayo; Kanazawa, So; Yamaguchi, Masami K

    2012-09-01

    Eye gaze is an important tool for social contact. In this study, we investigated whether direct gaze facilitates the recognition of three-dimensional face images in infants. We presented artificially produced face images in rotation to 6- to 8-month-old infants. The eye gaze of the face images was either direct or averted. Sixty-one sequential images of each face were created by rotating the face about its vertical axis from frontal view to ±30°. The recognition performance of the infants was then compared between faces with direct gaze and faces with averted gaze. Infants showed evidence of discriminating the novel face from the familiarized face by 8 months of age, and only when gaze was direct. These results suggest that gaze direction may affect three-dimensional face recognition in infants.

  9. Development of Face Recognition in 5- to 15-Year-Olds

    ERIC Educational Resources Information Center

    Kinnunen, Suna; Korkman, Marit; Laasonen, Marja; Lahti-Nuuttila, Pekka

    2013-01-01

    This study focuses on the development of face recognition in typically developing preschool- and school-aged children (aged 5 to 15 years old, "n" = 611, 336 girls). Social predictors include sex differences and own-sex bias. At younger ages, the development of face recognition was rapid and became more gradual as the age increased up…

  10. Tracking and recognition face in videos with incremental local sparse representation model

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang

    2013-10-01

    This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First, a robust face tracking algorithm is proposed that employs local sparse appearance and covariance pooling. In the subsequent face recognition stage, a novel template update strategy that incorporates incremental subspace learning allows the recognition algorithm to adapt the template to appearance changes and to reduce the influence of occlusion and illumination variation. This leads to robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition in real-world noisy videos on the YouTube database, which includes 47 celebrities. Our proposed method achieves a high face recognition rate of 95% across all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. On the challenging dataset in which faces undergo occlusion and illumination variation, and in tracking and recognition experiments under significant pose variation on the University of California, San Diego (Honda/UCSD) database, our proposed method also consistently demonstrates a high recognition rate.

  11. Image Description with Local Patterns: An Application to Face Recognition

    NASA Astrophysics Data System (ADS)

    Zhou, Wei; Ahrary, Alireza; Kamata, Sei-Ichiro

    In this paper, we propose a novel approach for representing the local features of a digital image using 1D Local Patterns by Multi-Scans (1DLPMS). We also consider extensions and simplifications of the proposed approach for facial image analysis. The proposed approach consists of three steps. In the first step, the gray values of pixels in the image are represented as a vector giving the local neighborhood intensity distributions of the pixels. Then, multi-scans are applied to capture different spatial information in the image, with the advantage of less computation than traditional methods such as Local Binary Patterns (LBP). The second step encodes the local features based on different encoding rules using 1D local patterns. This transformation is expected to be less sensitive to illumination variations while preserving the appearance of images embedded in the original gray scale. In the final step, Grouped 1D Local Patterns by Multi-Scans (G1DLPMS) is applied to make the proposed approach computationally simpler and easier to extend. We further formulate a boosted algorithm to extract the most discriminant local features. The evaluation results demonstrate that the proposed approach outperforms conventional approaches in terms of accuracy in face recognition, gender estimation and facial expression recognition.

  12. Face recognition: a convolutional neural-network approach.

    PubMed

    Lawrence, S; Giles, C L; Tsoi, A C; Back, A D

    1997-01-01

    We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.
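    The two building blocks this architecture relies on, local convolution for feature extraction and subsampling for partial translation invariance, can be illustrated with a bare-bones numpy sketch. This is a didactic stand-in, not the authors' network (which stacks several such layers after the SOM quantization stage).

    ```python
    import numpy as np

    def conv2d(image, kernel):
        """Valid-mode 2D convolution: slide `kernel` over `image` and sum products."""
        kh, kw = kernel.shape
        H, W = image.shape
        out = np.empty((H - kh + 1, W - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    def max_pool(x, size=2):
        """Non-overlapping max pooling: small shifts of the input barely change
        the output, giving the partial translation invariance described above."""
        H, W = (x.shape[0] // size) * size, (x.shape[1] // size) * size
        x = x[:H, :W].reshape(H // size, size, W // size, size)
        return x.max(axis=(1, 3))
    ```

    Stacking conv2d and max_pool layers yields "successively larger features in a hierarchical set of layers", exactly the progression the abstract describes.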

  13. Peak Shift but Not Range Effects in Recognition of Faces

    ERIC Educational Resources Information Center

    Spetch, Marcia L.; Cheng, Ken; Clifford, Colin W. G.

    2004-01-01

    University students were trained to discriminate between two gray-scale images of faces that varied along a continuum from a unique face to an average face created by morphing. Following training, participants were tested without feedback for their ability to recognize the positive face (S+) within a range of faces along the continuum. In…

  14. Size determines whether specialized expert processes are engaged for recognition of faces.

    PubMed

    Yang, Nan; Shafai, Fakhri; Oruc, Ipek

    2014-07-22

    Many influential models of face recognition postulate specialized expert processes that are engaged when viewing upright, own-race faces, as opposed to a general-purpose recognition route used for nonface objects and inverted or other-race faces. In contrast, others have argued that empirical differences do not stem from qualitatively distinct processing. We offer a potential resolution to this ongoing controversy. We hypothesize that faces engage specialized processes at large sizes only. To test this, we measured recognition efficiencies for a wide range of sizes. Upright face recognition efficiency increased with size. This was not due to better visibility of basic image features at large sizes. We ensured this by calculating efficiency relative to a specialized ideal observer unique to each individual that incorporated size-related changes in visibility and by measuring inverted efficiencies across the same range of face sizes. Inverted face recognition efficiencies did not change with size. A qualitative face inversion effect, defined as the ratio of relative upright and inverted efficiencies, showed a complete lack of inversion effects for small sizes up to 6°. In contrast, significant face inversion effects were found for all larger sizes. Size effects may stem from predominance of larger faces in the overall exposure to faces, which occur at closer viewing distances typical of social interaction. Our results offer a potential explanation for the contradictory findings in the literature regarding the special status of faces.

  15. Understanding gender bias in face recognition: effects of divided attention at encoding.

    PubMed

    Palmer, Matthew A; Brewer, Neil; Horry, Ruth

    2013-03-01

    Prior research has demonstrated a female own-gender bias in face recognition, with females better at recognizing female faces than male faces. We explored the basis for this effect by examining the effect of divided attention during encoding on females' and males' recognition of female and male faces. For female participants, divided attention impaired recognition performance for female faces to a greater extent than male faces in a face recognition paradigm (Study 1; N=113) and an eyewitness identification paradigm (Study 2; N=502). Analysis of remember-know judgments (Study 2) indicated that divided attention at encoding selectively reduced female participants' recollection of female faces at test. For male participants, divided attention selectively reduced recognition performance (and recollection) for male stimuli in Study 2, but had similar effects on recognition of male and female faces in Study 1. Overall, the results suggest that attention at encoding contributes to the female own-gender bias by facilitating the later recollection of female faces.

  16. Face Recognition Is Affected by Similarity in Spatial Frequency Range to a Greater Degree Than Within-Category Object Recognition

    ERIC Educational Resources Information Center

    Collin, Charles A.; Liu, Chang Hong; Troje, Nikolaus F.; McMullen, Patricia A.; Chaudhuri, Avi

    2004-01-01

    Previous studies have suggested that face identification is more sensitive to variations in spatial frequency content than object recognition, but none have compared how sensitive the 2 processes are to variations in spatial frequency overlap (SFO). The authors tested face and object matching accuracy under varying SFO conditions. Their results…

  17. Experience moderates overlap between object and face recognition, suggesting a common ability.

    PubMed

    Gauthier, Isabel; McGugin, Rankin W; Richler, Jennifer J; Herzmann, Grit; Speegle, Magen; Van Gulick, Ana E

    2014-07-03

    Some research finds that face recognition is largely independent from the recognition of other objects; a specialized and innate ability to recognize faces could therefore have little or nothing to do with our ability to recognize objects. We propose a new framework in which recognition performance for any category is the product of domain-general ability and category-specific experience. In Experiment 1, we show that the overlap between face and object recognition depends on experience with objects. In 256 subjects we measured face recognition, object recognition for eight categories, and self-reported experience with these categories. Experience predicted neither face recognition nor object recognition but moderated their relationship: Face recognition performance is increasingly similar to object recognition performance with increasing object experience. If a subject has a lot of experience with objects and is found to perform poorly, they also prove to have a low ability with faces. In a follow-up survey, we explored the dimensions of experience with objects that may have contributed to self-reported experience in Experiment 1. Different dimensions of experience appear to be more salient for different categories, with general self-reports of expertise reflecting judgments of verbal knowledge about a category more than judgments of visual performance. The complexity of experience and current limitations in its measurement support the importance of aggregating across multiple categories. Our findings imply that both face and object recognition are supported by a common, domain-general ability expressed through experience with a category and best measured when accounting for experience.

  19. From theory to implementation: building a multidimensional space for face recognition.

    PubMed

    Catz, Or; Kampf, Michal; Nachson, Israel; Babkoff, Harvey

    2009-06-01

    The purpose of the present study was to empirically construct a multidimensional model of face space based upon Valentine's [Valentine, T. (1991). A unified account of the effects of distinctiveness, inversion, and race in face recognition. Quarterly Journal of Experimental Psychology, 43A, 161-204; Valentine, T. (2001). Face-space models of face recognition. In M. J. Wenger, & J. T. Townsend, (Eds.). Computational, geometric, and process perspectives on facial cognition: Contexts and challenges. Scientific psychology series (pp. 83-113). Mahwah, NJ: Erlbaum] metaphoric model. Two-hundred and ten participants ranked 200 faces on a 21-dimensional space composed of internal facial features. On the basis of these dimensions an index of distance from the center of the dimensional space was calculated. A factor analysis revealed six factors which highlighted the importance of both featural and holistic processes in face recognition. Testing the model in relation to facial distinctiveness and face recognition strengthened its validity by emphasizing the relevance of the constructed multidimensional space for face recognition. The data are discussed within the framework of theoretical models of face recognition.
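    In Valentine-style face-space models, distinctiveness is naturally operationalized as the distance of a face from the centre of the space, which is how an index like the one described above can be computed. A minimal sketch under that assumption (Euclidean distance from the centroid of the rated dimensions; the study's actual index construction may differ):

    ```python
    import numpy as np

    def distinctiveness(face_vectors):
        """Distance of each face from the centre of the face space.

        `face_vectors` is an (n_faces, n_dims) array of ratings on the
        dimensions spanning the space; larger distances indicate more
        distinctive faces, which face-space models predict are easier
        to recognize."""
        centre = face_vectors.mean(axis=0)
        return np.linalg.norm(face_vectors - centre, axis=1)
    ```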

  20. Accurate Iris Recognition at a Distance Using Stabilized Iris Encoding and Zernike Moments Phase Features.

    PubMed

    Tan, Chun-Wei; Kumar, Ajay

    2014-07-10

    Accurate iris recognition from distantly acquired face or eye images requires effective strategies that can account for significant variations in segmented iris image quality. Such variations can be highly correlated with the consistency of the encoded iris features, and knowledge of such fragile bits can be exploited to improve matching accuracy. A non-linear approach is proposed to simultaneously account for both the local consistency of iris bits and the overall quality of the weight map. Our approach therefore more effectively penalizes the fragile bits while simultaneously rewarding more consistent bits. In order to achieve a more stable characterization of local iris features, a Zernike moment-based phase encoding of iris features is proposed. These Zernike moment-based phase features are computed from partially overlapping regions to more effectively accommodate local pixel region variations in the normalized iris images. A joint strategy is adopted to simultaneously extract and combine both the global and localized iris features. The superiority of the proposed iris matching strategy is ascertained by comparison with several state-of-the-art iris matching algorithms on three publicly available databases: UBIRIS.v2, FRGC, and CASIA.v4-distance. Our experimental results suggest that the proposed strategy achieves a significant improvement in iris matching accuracy over competing approaches in the literature, i.e., average improvements of 54.3%, 32.7% and 42.6% in equal error rate, respectively, for UBIRIS.v2, FRGC and CASIA.v4-distance. PMID:25029459

  1. A new method of NIR face recognition using kernel projection DCV and neural networks

    NASA Astrophysics Data System (ADS)

    Qiao, Ya; Lu, Yuan; Feng, Yun-song; Li, Feng; Ling, Yongshun

    2013-09-01

    A new face recognition system is proposed that uses an active near-infrared imaging system (ANIRIS) to acquire face images, kernel discriminative common vector (KDCV) analysis for feature extraction, and a neural network for recognition. The ANIRIS consists of 40 NIR LEDs serving as the active light source and an HWB800-IR-80 near-infrared filter used together with a CCD camera as the imaging detector; its role in reducing the influence of varying illumination on the recognition rate is discussed. The KDCV feature extraction and neural-network recognition stages were implemented in Matlab. Experiments on the HITSZ Lab2 face database and a self-built face database show that the average recognition rate exceeded 95%, demonstrating the effectiveness of the proposed system.

  2. Using eye movements as an index of implicit face recognition in autism spectrum disorder.

    PubMed

    Hedley, Darren; Young, Robyn; Brewer, Neil

    2012-10-01

    Individuals with an autism spectrum disorder (ASD) typically show impairment on face recognition tasks. Performance has usually been assessed using overt, explicit recognition tasks. Here, a complementary method involving eye tracking was used to examine implicit face recognition in participants with ASD and in an intelligence quotient-matched non-ASD control group. Differences in eye movement indices between target and foil faces were used as an indicator of implicit face recognition. Explicit face recognition was assessed using old-new discrimination and reaction time measures. Stimuli were faces of studied (target) or unfamiliar (foil) persons. Target images at test were either identical to the images presented at study or altered by changing the lighting, pose, or by masking with visual noise. Participants with ASD performed worse than controls on the explicit recognition task. Eye movement-based measures, however, indicated that implicit recognition may not be affected to the same degree as explicit recognition. Autism Res 2012, 5: 363-379. © 2012 International Society for Autism Research, Wiley Periodicals, Inc.

  3. Component Structure of Individual Differences in True and False Recognition of Faces

    ERIC Educational Resources Information Center

    Bartlett, James C.; Shastri, Kalyan K.; Abdi, Herve; Neville-Smith, Marsha

    2009-01-01

    Principal-component analyses of 4 face-recognition studies uncovered 2 independent components. The first component was strongly related to false-alarm errors with new faces as well as to facial "conjunctions" that recombine features of previously studied faces. The second component was strongly related to hits as well as to the conjunction/new…

  4. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance.

    PubMed

    Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J

    2015-09-30

    To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT ("face patches") did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. Significance statement: We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. To achieve this, we designed a
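
    The winning linking hypothesis — a learned weighted sum of mean firing rates — amounts to linear regression from a population response matrix to behavioural performance. The following numpy sketch illustrates the idea on synthetic data; the array sizes, noise level, and variable names are arbitrary assumptions, not the study's recordings:

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, n_neurons = 200, 50
rates = rng.normal(size=(n_images, n_neurons))     # mean firing rate per image/neuron
true_w = rng.normal(size=n_neurons)                # hypothetical readout weights
behaviour = rates @ true_w + 0.1 * rng.normal(size=n_images)  # noisy "performance"

# learn the weighted sum on half the images, predict the held-out half
w, *_ = np.linalg.lstsq(rates[:100], behaviour[:100], rcond=None)
pred = rates[100:] @ w
r = float(np.corrcoef(pred, behaviour[100:])[0, 1])   # predicted vs. measured
```

The claim in the abstract is that exactly this kind of simple linear readout over distributed IT rates suffices to reproduce the human behavioural pattern, whereas richer codes add no predictive power.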

  7. Fusion of Visible and Thermal Descriptors Using Genetic Algorithms for Face Recognition Systems.

    PubMed

    Hermosilla, Gabriel; Gallardo, Francisco; Farias, Gonzalo; San Martin, Cesar

    2015-01-01

    The aim of this article is to present a new face recognition system based on the fusion of visible and thermal features obtained from the most current local matching descriptors by maximizing face recognition rates through the use of genetic algorithms. The article considers a comparison of the performance of the proposed fusion methodology against five current face recognition methods and classic fusion techniques used commonly in the literature. These were selected by considering their performance in face recognition. The five local matching methods and the proposed fusion methodology are evaluated using the standard visible/thermal database, the Equinox database, along with a new database, the PUCV-VTF, designed for visible-thermal studies in face recognition and described for the first time in this work. The latter is created considering visible and thermal image sensors with different real-world conditions, such as variations in illumination, facial expression, pose, occlusion, etc. The main conclusions of this article are that two variants of the proposed fusion methodology surpass current face recognition methods and the classic fusion techniques reported in the literature, attaining recognition rates of over 97% and 99% for the Equinox and PUCV-VTF databases, respectively. The fusion methodology is very robust to illumination and expression changes, as it combines thermal and visible information efficiently by using genetic algorithms, thus allowing it to choose optimal face areas where one spectrum is more representative than the other. PMID:26213932
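
    The role of the genetic algorithm — searching for fusion weights that maximize the recognition rate — can be illustrated with a toy GA over a single visible/thermal mixing weight. The population sizes, synthetic score matrices, and all names below are assumptions for illustration, not the article's actual method:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40                                         # gallery/probe identities
truth = np.arange(n)

def make_scores(noise):
    """Synthetic match-score matrix: the correct identity scores higher on average."""
    s = rng.normal(0.0, noise, (n, n))
    s[truth, truth] += 1.0
    return s

vis, thr = make_scores(0.8), make_scores(0.4)  # visible noisier than thermal here

def fitness(alpha):
    """Rank-1 recognition rate of the fused score matrix."""
    fused = alpha * vis + (1.0 - alpha) * thr
    return float(np.mean(fused.argmax(axis=1) == truth))

# tiny genetic algorithm over the single fusion weight alpha in [0, 1]
pop = rng.random(20)
for _ in range(30):
    fit = np.array([fitness(a) for a in pop])
    parents = pop[np.argsort(fit)[-10:]]                    # selection
    kids = (parents[rng.integers(0, 10, 10)] +
            parents[rng.integers(0, 10, 10)]) / 2.0         # crossover
    kids += rng.normal(0.0, 0.05, 10)                       # mutation
    pop = np.clip(np.concatenate([parents, kids]), 0.0, 1.0)

best = float(pop[np.argmax([fitness(a) for a in pop])])
```

The article's method evolves per-region weights over local descriptors rather than one global weight, but the selection/crossover/mutation loop maximizing recognition rate is the same principle.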

  10. Fearful contextual expression impairs the encoding and recognition of target faces: an ERP study

    PubMed Central

    Lin, Huiyan; Schulz, Claudia; Straube, Thomas

    2015-01-01

    Previous event-related potential (ERP) studies have shown that the N170 to faces is modulated by the emotion of the face and its context. However, it is unclear how the encoding of emotional target faces as reflected in the N170 is modulated by the preceding contextual facial expression when temporal onset and identity of target faces are unpredictable. In addition, no study as yet has investigated whether contextual facial expression modulates later recognition of target faces. To address these issues, participants in the present study were asked to identify target faces (fearful or neutral) that were presented after a sequence of fearful or neutral contextual faces. The number of sequential contextual faces was random and contextual and target faces were of different identities so that temporal onset and identity of target faces were unpredictable. Electroencephalography (EEG) data was recorded during the encoding phase. Subsequently, participants had to perform an unexpected old/new recognition task in which target face identities were presented in either the encoded or the non-encoded expression. ERP data showed a reduced N170 to target faces in fearful as compared to neutral context regardless of target facial expression. In the later recognition phase, recognition rates were reduced for target faces in the encoded expression when they had been encountered in fearful as compared to neutral context. The present findings suggest that fearful compared to neutral contextual faces reduce the allocation of attentional resources towards target faces, which results in limited encoding and recognition of target faces. PMID:26388751
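
    Quantifying the N170 from epoched EEG typically means averaging trials and locating the most negative deflection in a window around 170 ms. A minimal numpy sketch (the window, sampling rate, and function name are assumptions, not the study's pipeline):

```python
import numpy as np

def n170_peak(epochs, sfreq=500.0, tmin=-0.1, window=(0.13, 0.20)):
    """Peak amplitude and latency of the N170 from an (n_trials, n_samples)
    single-channel epoch array: average across trials, then take the most
    negative sample inside the search window (seconds relative to stimulus)."""
    erp = np.asarray(epochs, dtype=float).mean(axis=0)
    times = tmin + np.arange(erp.size) / sfreq
    sel = (times >= window[0]) & (times <= window[1])
    idx = np.flatnonzero(sel)[np.argmin(erp[sel])]
    return erp[idx], times[idx]
```

Comparing these two numbers across conditions (fearful vs. neutral context) is the kind of N170 amplitude/latency contrast the abstract reports.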

  11. Gabor-based kernel PCA with doubly nonlinear mapping for face recognition with a single face image.

    PubMed

    Xie, Xudong; Lam, Kin-Man

    2006-09-01

    In this paper, a novel Gabor-based kernel principal component analysis (PCA) with doubly nonlinear mapping is proposed for human face recognition. In our approach, the Gabor wavelets are used to extract facial features, then a doubly nonlinear mapping kernel PCA (DKPCA) is proposed to perform feature transformation and face recognition. The conventional kernel PCA nonlinearly maps an input image into a high-dimensional feature space in order to make the mapped features linearly separable. However, this method does not consider the structural characteristics of the face images, and it is difficult to determine which nonlinear mapping is more effective for face recognition. In this paper, a new method of nonlinear mapping, which is performed in the original feature space, is defined. The proposed nonlinear mapping not only considers the statistical property of the input features, but also adopts an eigenmask to emphasize those important facial feature points. Therefore, after this mapping, the transformed features have a higher discriminating power, and the relative importance of the features adapts to the spatial importance of the face images. This new nonlinear mapping is combined with the conventional kernel PCA to be called "doubly" nonlinear mapping kernel PCA. The proposed algorithm is evaluated based on the Yale database, the AR database, the ORL database and the YaleB database by using different face recognition methods such as PCA, Gabor wavelets plus PCA, and Gabor wavelets plus kernel PCA with fractional power polynomial models. Experiments show that consistent and promising results are obtained.
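
    The two building blocks — Gabor feature extraction and kernel PCA — can be sketched in plain numpy. This is a generic illustration, not the paper's doubly nonlinear mapping; the filter parameters, the RBF kernel choice, and all function names are assumptions:

```python
import numpy as np

def conv2_same(img, ker):
    """'same'-sized 2-D convolution via FFT (numpy only)."""
    s0, s1 = img.shape[0] + ker.shape[0] - 1, img.shape[1] + ker.shape[1] - 1
    out = np.fft.irfft2(np.fft.rfft2(img, (s0, s1)) * np.fft.rfft2(ker, (s0, s1)),
                        (s0, s1))
    r0, r1 = ker.shape[0] // 2, ker.shape[1] // 2
    return out[r0:r0 + img.shape[0], r1:r1 + img.shape[1]]

def gabor_kernel(size=15, theta=0.0, lam=6.0, sigma=3.0):
    """Real part of a Gabor wavelet at orientation `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, n_orient=4):
    """Concatenate downsampled Gabor response magnitudes into a feature vector."""
    feats = [np.abs(conv2_same(img, gabor_kernel(theta=k * np.pi / n_orient)))[::4, ::4].ravel()
             for k in range(n_orient)]
    return np.concatenate(feats)

def kernel_pca(X, n_components=10, gamma=1e-4):
    """Project the rows of X onto the leading components of a centred RBF kernel."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = K.shape[0]
    J = np.eye(n) - 1.0 / n            # centering matrix I - (1/n) 11^T
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    return Kc @ vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
```

The paper's contribution is the extra, eigenmask-weighted nonlinear mapping applied in the original feature space before this kernel step; the sketch above shows only the conventional Gabor-plus-kernel-PCA baseline it builds on.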

  12. Face Recognition in Low-Light Environments Using Fusion of Thermal Infrared and Intensified Imagery

    NASA Astrophysics Data System (ADS)

    Socolinsky, Diego A.; Wolff, Lawrence B.

    This chapter presents a study of face recognition performance as a function of light level using intensified near infrared imagery in conjunction with thermal infrared imagery. Intensification technology is the most prevalent in both civilian and military night vision equipment and provides enough enhancement for human operators to perform standard tasks under extremely low light conditions. We describe a comprehensive data collection effort undertaken to image subjects under carefully controlled illumination and quantify the performance of standard face recognition algorithms on visible, intensified, and thermal imagery as a function of light level. Performance comparisons for automatic face recognition are reported using the standardized implementations from the Colorado State University Face Identification Evaluation System, as well as Equinox's algorithms. The results contained in this chapter should constitute the initial step for analysis and deployment of face recognition systems designed to work in low-light conditions.

  13. Face recognition in low-light environments using fusion of thermal infrared and intensified imagery

    NASA Astrophysics Data System (ADS)

    Socolinsky, Diego A.; Wolff, Lawrence B.; Lundberg, Andrew J.

    2006-05-01

    This paper presents a study of face recognition performance as a function of light level using intensified near infrared imagery in conjunction with thermal infrared imagery. Intensification technology is the most prevalent in both civilian and military night vision equipment, and provides enough enhancement for human operators to perform standard tasks under extremely low-light conditions. We describe a comprehensive data collection effort undertaken by the authors to image subjects under carefully controlled illumination and quantify the performance of standard face recognition algorithms on visible, intensified and thermal imagery as a function of light level. Performance comparisons for automatic face recognition are reported using the standardized implementations from the CSU Face Identification Evaluation System, as well as Equinox's own algorithms. The results contained in this paper should constitute the initial step for analysis and deployment of face recognition systems designed to work in low-light conditions.

  14. Single-sample face recognition based on intra-class differences in a variation model.

    PubMed

    Cai, Jun; Chen, Jing; Liang, Xing

    2015-01-01

    In this paper, a novel random facial variation modeling system for sparse representation face recognition is presented. Although Sparse Representation-based Classification (SRC) has recently represented a breakthrough in the field of face recognition due to its good performance and robustness, it suffers from the critical problem that it needs sufficiently large training samples to achieve good performance. To address this issue, we tackle the single-sample face recognition problem with intra-class differences of variation in a facial image model based on random projection and sparse representation. We present a facial variation modeling system composed only of various facial variations, and further propose a novel facial random noise dictionary learning method that is invariant to different faces. Experimental results on the AR, Yale B, Extended Yale B, MIT and FEI databases validate that our method leads to substantial improvements, particularly in single-sample face recognition problems. PMID:25580904
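
    The sparse-representation step underlying SRC can be illustrated with a small numpy implementation that codes a probe over a gallery dictionary and assigns the class with the lowest reconstruction residual. This generic sketch uses orthogonal matching pursuit for the sparse solve (an assumption; the paper's method additionally learns a variation dictionary for the single-sample case):

```python
import numpy as np

def omp(D, y, n_nonzero=5):
    """Orthogonal matching pursuit: greedy sparse code of y over columns of D."""
    resid, idx = y.astype(float).copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        idx.append(int(np.argmax(np.abs(D.T @ resid))))   # best-correlated atom
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        resid = y - D[:, idx] @ coef                      # re-fit, update residual
    x[idx] = coef
    return x

def src_classify(gallery, labels, probe, n_nonzero=5):
    """Sparse-representation classification: pick the class whose selected
    atoms reconstruct the probe with the smallest residual."""
    D = gallery / np.linalg.norm(gallery, axis=0, keepdims=True)
    x = omp(D, probe, n_nonzero)
    labels = np.asarray(labels)
    errs = {c: np.linalg.norm(probe - D[:, labels == c] @ x[labels == c])
            for c in np.unique(labels)}
    return min(errs, key=errs.get)
```

With only one gallery image per person, the paper augments this dictionary with a shared variation term, which is exactly the gap the sketch leaves open.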

  16. Good match exploration for thermal infrared face recognition based on YWF-SIFT with multi-scale fusion

    NASA Astrophysics Data System (ADS)

    Bai, Junfeng; Ma, Yong; Li, Jing; Li, Hao; Fang, Yu; Wang, Rui; Wang, Hongyuan

    2014-11-01

    Stable local feature detection is a critical prerequisite in infrared (IR) face recognition. Recently, the Scale Invariant Feature Transform (SIFT) was introduced for feature detection in infrared face frames, using a simple and effective averaging window with SIFT termed the Y-styled Window Filter (YWF). However, thermal IR face frames intrinsically lack feature points (keypoints), so the performance of the YWF-SIFT method is inevitably limited when it is used for IR face recognition. In this paper, we propose a novel method that combines multi-scale fusion with YWF-SIFT to find more good feature matches. The multi-scale fusion is performed on a thermal IR frame and a corresponding auxiliary visible frame captured by an off-the-shelf low-cost visible camera. The fused image is more informative and typically contains many more stable features, and the YWF-SIFT method then establishes feature correspondences more accurately. Quantitative experimental results demonstrate that our algorithm increases the number of feature points by approximately 38%; as a result, the performance of YWF-SIFT with multi-scale fusion is improved by about 12% in infrared face recognition.
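
    Pixel-level fusion of a thermal frame with an auxiliary visible frame can be sketched with a simple two-scale scheme: average the low-pass layers and keep the stronger high-pass coefficient per pixel. This is an illustrative stand-in for multi-scale fusion, not the paper's algorithm; the blur kernel and selection rule are assumptions:

```python
import numpy as np

def blur(img):
    """Cheap separable 1-2-1 low-pass filter (numpy only)."""
    k = np.array([0.25, 0.5, 0.25])
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 0, img)
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)

def fuse(ir, vis):
    """Two-scale fusion of registered IR and visible frames: average the base
    (low-pass) layers, keep the stronger detail (high-pass) coefficient per pixel."""
    base_i, base_v = blur(ir), blur(vis)
    det_i, det_v = ir - base_i, vis - base_v
    detail = np.where(np.abs(det_i) >= np.abs(det_v), det_i, det_v)
    return (base_i + base_v) / 2.0 + detail
```

The fused frame inherits fine detail from whichever modality expresses it more strongly at each pixel, which is the property that lets a keypoint detector such as YWF-SIFT find more stable features afterwards.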

  17. Solving the Border Control Problem: Evidence of Enhanced Face Matching in Individuals with Extraordinary Face Recognition Skills.

    PubMed

    Bobak, Anna Katarzyna; Dowsett, Andrew James; Bate, Sarah

    2016-01-01

    Photographic identity documents (IDs) are commonly used despite clear evidence that unfamiliar face matching is a difficult and error-prone task. The current study set out to examine the performance of seven individuals with extraordinary face recognition memory, so-called "super recognisers" (SRs), on two face matching tasks resembling border control identity checks. In Experiment 1, the SRs as a group outperformed control participants on the "Glasgow Face Matching Test", and some case-by-case comparisons also reached significance. In Experiment 2, a perceptually difficult face matching task was used: the "Models Face Matching Test". Once again, SRs outperformed controls at the group level and in most case-by-case analyses. These findings suggest that SRs are considerably better at face matching than typical perceivers, and would make proficient personnel for border control agencies. PMID:26829321

  20. Neurophysiology study of early visual processing of face and non-face recognition under simulated prosthetic vision.

    PubMed

    Yang, Yuan; Guo, Hong; Tong, Shanbao; Zhu, Yisheng; Qiu, Yihong

    2009-01-01

    Behavioral research has shown that visual function can be partly restored by phosphene-based prosthetic vision in the non-congenitally blind. However, the early visual processing mechanisms of phosphene object recognition are still unclear. This paper aimed to investigate the electro-neurophysiology underlying phosphene face and non-face recognition. The modulations of latency and amplitude of the N170 component of the event-related potential (ERP) were analyzed. Our preliminary results showed that (1) both normal and phosphene face stimuli elicited a prominent N170; nevertheless, phosphene stimuli caused a notable latency delay and amplitude suppression of the N170 compared with normal stimuli and (2) under phosphene non-face stimuli, a slight but significant latency delay occurred compared with normal stimuli, while amplitude suppression was not observed. Therefore, it is suggested that (1) phosphene perception disrupts the early visual processing of non-canonical images of objects, and this disruption is more profound in phosphene face processing; (2) face-specific processing is preserved under prosthetic vision and (3) holistic processing is the major stage in the early visual processing of phosphene face recognition, while part-based processing is attenuated due to the loss of detail.

  1. Locality constrained joint dynamic sparse representation for local matching based face recognition.

    PubMed

    Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun

    2014-01-01

    Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will degrade the performance of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC. PMID:25419662
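
    The partition-and-aggregate structure of local matching-based recognition can be sketched as follows. Nearest-neighbour matching per sub-image plus majority voting is a deliberate simplification (LCJDSRC itself solves a joint dynamic sparse representation across sub-images); all names and the grid size are assumptions:

```python
import numpy as np
from collections import Counter

def blocks(img, grid=4):
    """Split an image into grid x grid non-overlapping sub-images (flattened)."""
    h, w = img.shape[0] // grid, img.shape[1] // grid
    return [img[i*h:(i+1)*h, j*w:(j+1)*w].ravel()
            for i in range(grid) for j in range(grid)]

def local_match(gallery, labels, probe, grid=4):
    """Independent nearest-neighbour match per sub-image, then majority vote.
    This is the baseline the paper improves on by coupling the sub-images."""
    probe_b = blocks(probe, grid)
    gal_b = [blocks(g, grid) for g in gallery]
    votes = []
    for k, pb in enumerate(probe_b):
        dists = [np.linalg.norm(pb - gb[k]) for gb in gal_b]
        votes.append(labels[int(np.argmin(dists))])
    return Counter(votes).most_common(1)[0][0]
```

Because each block votes separately, an occluded or badly lit region only costs a few votes instead of corrupting the whole match, which is the core appeal of local matching.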

  3. Characterisation of the sympathetic skin response evoked by own-face recognition in healthy subjects.

    PubMed

    Bagnato, Sergio; Boccagni, Cristina; Prestandrea, Caterina; Galardi, Giuseppe

    2010-01-01

    The ability to recognise one's own face is crucial for self-identity formation and it plays a key role in the development of social interactions. Our starting hypothesis was that own-face recognition may be a psychophysiological phenomenon capable of activating the vegetative system in a peculiar manner, via sympathetic pathways. To test this hypothesis we studied the sympathetic skin responses (SSRs) evoked in 18 healthy subjects by the image of their own faces and by six other different visual stimuli. The SSRs were enhanced when participants were shown their own faces. Both SSR area and SSR amplitude contributed to this phenomenon. This work may offer new insights into the psychophysiological processes involved in own-face recognition; moreover, the SSR could be a useful tool for future studies of patients affected by neuropsychiatric disorders presenting impairment of own-face recognition or representation of self-identity.

  4. Feature-organized sparseness for efficient face recognition from multiple poses

    NASA Astrophysics Data System (ADS)

    Iwamura, Tomo

    2013-05-01

    Automatic, real-time face recognition has many attractive applications. At a checkpoint, for example, it should place no burden on either the passing person or the security guard, in addition to being low cost. Normally a unique 3D person is projected into 2D images with information loss, which means a person is no longer unique in 2D space. Furthermore, varying conditions such as pose, illumination and expression make face recognition difficult. In order to separate one person from another, his or her subspace should contain several faces and be redundant, which is why the database naturally becomes large. Under these circumstances, efficient face recognition is key to a surveillance system. Face recognition by sparse representation classification (SRC) is one of the most promising candidates for rapid face recognition. This method can be understood in a similar way to compressive sensing (CS). In this paper, we propose an efficient approach to face recognition by SRC for multiple poses from the viewpoint of CS. The part-cropped database (PCD) is suggested to avoid position misalignments by discarding the information on topological linkages among the eyes, nose and mouth. Although topological linkages are important for face recognition in general, they cause position misalignments among multiple poses, which decrease the recognition rate. Our approach resolves the trade-off between keeping topological linkages and avoiding position misalignments. According to simulated experiments, PCD works well to avoid position misalignments and achieves correct recognition despite carrying less information on topological linkages.

  5. Impairments in Monkey and Human Face Recognition in 2-Year-Old Toddlers with Autism Spectrum Disorder and Developmental Delay

    ERIC Educational Resources Information Center

    Chawarska, Katarzyna; Volkmar, Fred

    2007-01-01

    Face recognition impairments are well documented in older children with Autism Spectrum Disorders (ASD); however, the developmental course of the deficit is not clear. This study investigates the progressive specialization of face recognition skills in children with and without ASD. Experiment 1 examines human and monkey face recognition in…

  6. Effects of exposure to facial expression variation in face learning and recognition.

    PubMed

    Liu, Chang Hong; Chen, Wenfeng; Ward, James

    2015-11-01

    Facial expression is a major source of image variation in face images. Linking numerous expressions to the same face can be a huge challenge for face learning and recognition. It remains largely unknown what level of exposure to this image variation is critical for expression-invariant face recognition. We examined this issue in a recognition memory task, where the number of facial expressions of each face being exposed during a training session was manipulated. Faces were either trained with multiple expressions or a single expression, and they were later tested in either the same or different expressions. We found that recognition performance after learning three emotional expressions showed no improvement over learning a single emotional expression (Experiments 1 and 2). However, learning three emotional expressions improved recognition compared to learning a single neutral expression (Experiment 3). These findings reveal both the limitation and the benefit of multiple exposures to variations of emotional expression in achieving expression-invariant face recognition. The transfer of expression training to a new type of expression is likely to depend on a relatively extensive level of training and a certain degree of variation across the types of expressions.

  7. Integrating face and gait for human recognition at a distance in video.

    PubMed

    Zhou, Xiaoli; Bhanu, Bir

    2007-10-01

    This paper introduces a new video-based recognition method to recognize noncooperating individuals at a distance in video who expose side views to the camera. Information from two biometric sources, side face and gait, is utilized and integrated for recognition. For side face, an enhanced side-face image (ESFI), a higher resolution image compared with the image directly obtained from a single video frame, is constructed, which integrates face information from multiple video frames. For gait, the gait energy image (GEI), a spatio-temporal compact representation of gait in video, is used to characterize human-walking properties. The features of face and gait are obtained separately using the combined principal component analysis and multiple discriminant analysis method from ESFI and GEI, respectively. They are then integrated at the match score level by using different fusion strategies. The approach is tested on a database of video sequences, corresponding to 45 people, collected over seven months. The different fusion methods are compared and analyzed. The experimental results show that: 1) the idea of constructing ESFI from multiple frames is promising for human recognition in video, and better face features are extracted from ESFI than from the original side-face images (OSFIs); 2) the synchronization of face and gait is not necessary for the face template ESFI and the gait template GEI, since the synthetic match scores combine information from both; and 3) integrated information from side face and gait is effective for human recognition in video. PMID:17926696
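
    Match-score-level fusion of the kind described above can be illustrated with a minimal sketch. The min-max normalization and weighted-sum rule below are standard fusion strategies of the sort such systems compare; the scores and weight are illustrative, not taken from the paper.

```python
import numpy as np

def minmax_norm(scores):
    """Map raw matcher scores onto [0, 1] so modalities are comparable."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse_scores(face_scores, gait_scores, w_face=0.5):
    """Weighted-sum fusion at the match score level: normalise each
    modality's scores, combine, and return (best gallery index, fused)."""
    fused = (w_face * minmax_norm(face_scores)
             + (1.0 - w_face) * minmax_norm(gait_scores))
    return int(np.argmax(fused)), fused

# Toy gallery of 4 identities (higher score = better match)
face = [0.2, 0.9, 0.4, 0.1]   # face matcher favours identity 1
gait = [0.3, 0.7, 0.8, 0.2]   # gait matcher slightly favours identity 2
best, fused = fuse_scores(face, gait)
print(best)   # → 1
```

    Because fusion operates on scores rather than on raw frames, the two matchers need not be synchronized, which is the point made in finding 2 above.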

  8. Holistic processing, contact, and the other-race effect in face recognition.

    PubMed

    Zhao, Mintao; Hayward, William G; Bülthoff, Isabelle

    2014-12-01

    Face recognition, holistic processing, and processing of configural and featural facial information are known to be influenced by face race, with better performance for own- than other-race faces. However, whether these various other-race effects (OREs) arise from the same underlying mechanisms or from different processes remains unclear. The present study addressed this question by measuring the OREs in a set of face recognition tasks, and testing whether these OREs are correlated with each other. Participants performed different tasks probing (1) face recognition, (2) holistic processing, (3) processing of configural information, and (4) processing of featural information for both own- and other-race faces. Their contact with other-race people was also assessed with a questionnaire. The results show significant OREs in tasks testing face memory and processing of configural information, but not in tasks testing either holistic processing or processing of featural information. Importantly, there was no cross-task correlation between any of the measured OREs. Moreover, the level of other-race contact predicted only the OREs obtained in tasks testing face memory and processing of configural information. These results indicate that these various cross-race differences originate from different aspects of face processing, contrary to the view that the ORE in face recognition is due to cross-race differences in holistic processing.

  9. KD-tree based clustering algorithm for fast face recognition on large-scale data

    NASA Astrophysics Data System (ADS)

    Wang, Yuanyuan; Lin, Yaping; Yang, Junfeng

    2015-07-01

    This paper proposes an acceleration method for large-scale face recognition systems. When dealing with a large-scale database, face recognition is time-consuming. In order to tackle this problem, we employ the k-means clustering algorithm to partition the face data. Specifically, the data in each cluster are stored in the form of a kd-tree, and face feature matching is conducted with kd-tree based nearest-neighbour search. Experiments on the CAS-PEAL and a self-collected database show the effectiveness of our proposed method.
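
    The two-stage index described above, k-means to partition the gallery and then one kd-tree per cluster for nearest-neighbour matching, can be sketched as follows. This is a minimal illustration over generic feature vectors, not the authors' implementation; it assumes SciPy's cKDTree and uses a small hand-rolled Lloyd's k-means.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_index(features, k=4, iters=20, seed=0):
    """Partition the gallery with a small Lloyd's k-means, then store
    each cluster in its own kd-tree for fast nearest-neighbour search."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((features[:, None] - centers) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(assign == c):
                centers[c] = features[assign == c].mean(axis=0)
    # final assignment against the final centers, so queries route consistently
    assign = np.argmin(((features[:, None] - centers) ** 2).sum(-1), axis=1)
    trees = {c: (cKDTree(features[assign == c]), np.flatnonzero(assign == c))
             for c in range(k) if np.any(assign == c)}
    return centers, trees

def query(centers, trees, probe):
    """Route the probe to its nearest (non-empty) cluster, then search
    only that cluster's kd-tree instead of the whole gallery."""
    c = min(trees, key=lambda c: ((centers[c] - probe) ** 2).sum())
    dist, local = trees[c][0].query(probe)
    return int(trees[c][1][local]), float(dist)  # global index, distance

# Toy gallery of 200 feature vectors; probe is a jittered gallery entry
rng = np.random.default_rng(1)
gallery = rng.normal(size=(200, 8))
centers, trees = build_index(gallery)
idx, dist = query(centers, trees, gallery[17] + 0.001)
print(idx)   # → 17
```

    Each query then costs one scan over k centers plus a search in one cluster's kd-tree, rather than a scan over the whole gallery.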

  10. High and low performers differ in the use of shape information for face recognition.

    PubMed

    Kaufmann, Jürgen M; Schulz, Claudia; Schweinberger, Stefan R

    2013-06-01

    Previous findings demonstrated that increasing facial distinctiveness by means of spatial caricaturing improves face learning and results in modulations of event-related-potential (ERP) components associated with the processing of typical shape information (P200) and with face learning and recognition (N250). The current study investigated performance-based differences in the effects of spatial caricaturing: a modified version of the Bielefelder famous faces test (BFFT) was applied to subdivide a non-clinical group of 28 participants into better and worse face recognizers. Overall, a learning benefit was seen for caricatured compared to veridical faces. In addition, for learned faces we found larger caricaturing effects in response times, inverse efficiency scores, and P200 and N250 amplitudes in worse face recognizers, indicating that these individuals profited disproportionately from exaggerated idiosyncratic face shape. During learning and for novel faces at test, better and worse recognizers showed similar caricaturing effects. We suggest that spatial caricaturing helps both better and worse face recognizers access critical idiosyncratic shape information that supports identity processing and learning of unfamiliar faces. For familiarized faces, better face recognizers might depend less on exaggerated shape and make better use of texture information than worse recognizers. These results shed light on the transition from unfamiliar to familiar face processing and may also be relevant for developing training programmes for people with difficulties in face recognition.

  11. Effects of acute psychosocial stress on neural activity to emotional and neutral faces in a face recognition memory paradigm.

    PubMed

    Li, Shijia; Weerda, Riklef; Milde, Christopher; Wolf, Oliver T; Thiel, Christiane M

    2014-12-01

    Previous studies have shown that acute psychosocial stress impairs recognition of declarative memory and that emotional material is especially sensitive to this effect. Animal studies suggest a central role of the amygdala, which modulates memory processes in the hippocampus, prefrontal cortex and other brain areas. We used functional magnetic resonance imaging (fMRI) to investigate neural correlates of stress-induced modulation of emotional recognition memory in humans. Twenty-seven healthy, right-handed, non-smoking male volunteers performed an emotional face recognition task. During encoding, participants were presented with 50 fearful and 50 neutral faces. One hour later, they underwent either a stress (Trier Social Stress Test) or a control procedure outside the scanner, which was followed immediately by the recognition session inside the scanner, where participants had to discriminate between 100 old and 50 new faces. Stress increased salivary cortisol, blood pressure and pulse, and decreased the mood of participants, but did not impact recognition memory. BOLD data during recognition revealed a stress condition by emotion interaction in the left inferior frontal gyrus and right hippocampus, which was due to a stress-induced increase of neural activity to fearful and a decrease to neutral faces. Functional connectivity analyses revealed a stress-induced increase in coupling between the right amygdala and the right fusiform gyrus when processing fearful as compared to neutral faces. Our results provide evidence that acute psychosocial stress affects medial temporal and frontal brain areas differentially for neutral and emotional items, with stress-induced privileged processing of emotional stimuli.

  12. No Own-Age Advantage in Children’s Recognition of Emotion on Prototypical Faces of Different Ages

    PubMed Central

    Griffiths, Sarah; Penton-Voak, Ian S.; Jarrold, Chris; Munafò, Marcus R.

    2015-01-01

    We test whether there is an own-age advantage in emotion recognition using prototypical younger child, older child and adult faces displaying emotional expressions. Prototypes were created by averaging photographs of individuals from 6 different age and sex categories (male 5–8 years, male 9–12 years, female 5–8 years, female 9–12 years, adult male and adult female), each posing 6 basic emotional expressions. In the study 5–8 year old children (n = 33), 9–13 year old children (n = 70) and adults (n = 92) labelled these expression prototypes in a 6-alternative forced-choice task. There was no evidence that children or adults recognised expressions better on faces from their own age group. Instead, child facial expression prototypes were recognised as accurately as adult expression prototypes by all age groups. This suggests there is no substantial own-age advantage in children’s emotion recognition. PMID:25978656

  13. Image-Invariant Responses in Face-Selective Regions Do Not Explain the Perceptual Advantage for Familiar Face Recognition

    PubMed Central

    Davies-Thompson, Jodie; Newling, Katherine

    2013-01-01

    The ability to recognize familiar faces across different viewing conditions contrasts with the inherent difficulty in the perception of unfamiliar faces across similar image manipulations. It is widely believed that this difference in perception and recognition is based on the neural representation for familiar faces being less sensitive to changes in the image than it is for unfamiliar faces. Here, we used a functional magnetic resonance adaptation paradigm to investigate image invariance in face-selective regions of the human brain. We found clear evidence for a degree of image-invariant adaptation to facial identity in face-selective regions, such as the fusiform face area. However, contrary to the predictions of models of face processing, comparable levels of image invariance were evident for both familiar and unfamiliar faces. This suggests that the marked differences in the perception of familiar and unfamiliar faces may not depend on differences in the way multiple images are represented in core face-selective regions of the human brain. PMID:22345357

  15. Confidence-Accuracy Calibration in Absolute and Relative Face Recognition Judgments

    ERIC Educational Resources Information Center

    Weber, Nathan; Brewer, Neil

    2004-01-01

    Confidence-accuracy (CA) calibration was examined for absolute and relative face recognition judgments as well as for recognition judgments from groups of stimuli presented simultaneously or sequentially (i.e., simultaneous or sequential mini-lineups). When the effect of difficulty was controlled, absolute and relative judgments produced…

  16. Principal patterns of fractional-order differential gradients for face recognition

    NASA Astrophysics Data System (ADS)

    Yu, Lei; Cao, Qi; Zhao, Anping

    2015-01-01

    We investigate the ability of fractional-order differentiation (FD) to represent facial texture and present a local descriptor, called the principal patterns of fractional-order differential gradients (PPFDGs), for face recognition. In PPFDG, multiple FD gradient patterns of a face image are obtained utilizing multiorientation FD masks. As a result, each pixel of the face image can be represented as a high-dimensional gradient vector. Then, by applying principal component analysis to the gradient vectors over the centered neighborhood of each pixel, we capture the principal gradient patterns and meanwhile compute the corresponding orientation patterns, from which oriented gradient magnitudes are computed. Histogram features are finally extracted from these oriented gradient magnitude patterns as the face representation using local binary patterns. Experimental results on the FERET (Face Recognition Technology), AR (A.M. Martinez and R. Benavente), Extended Yale B, and Labeled Faces in the Wild (LFW) face datasets validate the effectiveness of the proposed method.
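
    The final step above builds histogram features with local binary patterns (LBP). A minimal 8-neighbour LBP histogram, shown without the fractional-order gradient stages that precede it in PPFDG, can be sketched as:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour local binary patterns: threshold each pixel's
    ring of neighbours against the centre, read the bits as a code in
    0..255, and histogram the codes over the image."""
    img = np.asarray(img, dtype=float)
    center = img[1:-1, 1:-1]
    # neighbour offsets, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy : img.shape[0] - 1 + dy,
                    1 + dx : img.shape[1] - 1 + dx]
        codes |= (neigh >= center).astype(np.int32) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()   # normalised 256-bin descriptor

# A flat patch maps every pixel to code 255 (all neighbours >= centre)
h = lbp_histogram(np.ones((6, 6)))
print(h[255])   # → 1.0
```

    In a full pipeline such histograms would be computed per image block and concatenated; here the input is a raw patch rather than the oriented gradient magnitude patterns the paper uses.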

  17. Capturing specific abilities as a window into human individuality: the example of face recognition.

    PubMed

    Wilmer, Jeremy B; Germine, Laura; Chabris, Christopher F; Chatterjee, Garga; Gerbasi, Margaret; Nakayama, Ken

    2012-01-01

    Proper characterization of each individual's unique pattern of strengths and weaknesses requires good measures of diverse abilities. Here, we advocate combining our growing understanding of neural and cognitive mechanisms with modern psychometric methods in a renewed effort to capture human individuality through a consideration of specific abilities. We articulate five criteria for the isolation and measurement of specific abilities, then apply these criteria to face recognition. We cleanly dissociate face recognition from more general visual and verbal recognition. This dissociation stretches across ability as well as disability, suggesting that specific developmental face recognition deficits are a special case of a broader specificity that spans the entire spectrum of human face recognition performance. Item-by-item results from 1,471 web-tested participants, included as supplementary information, fuel item analyses, validation, norming, and item response theory (IRT) analyses of our three tests: (a) the widely used Cambridge Face Memory Test (CFMT); (b) an Abstract Art Memory Test (AAMT), and (c) a Verbal Paired-Associates Memory Test (VPMT). The availability of this data set provides a solid foundation for interpreting future scores on these tests. We argue that the allied fields of experimental psychology, cognitive neuroscience, and vision science could fuel the discovery of additional specific abilities to add to face recognition, thereby providing new perspectives on human individuality.

  19. Recognition Memory Measures Yield Disproportionate Effects of Aging on Learning Face-Name Associations

    PubMed Central

    James, Lori E.; Fogler, Kethera A.; Tauber, Sarah K.

    2008-01-01

    No previous research has tested whether the specific age-related deficit in learning face-name associations that has been identified using recall tasks also occurs for recognition memory measures. Young and older participants saw pictures of unfamiliar people with a name and an occupation for each person, and were tested on a matching (in Experiment 1) or multiple-choice (in Experiment 2) recognition memory test. For both recognition measures, the pattern of effects was the same as that obtained using a recall measure: more face-occupation associations were remembered than face-name associations, young adults remembered more associated information than older adults overall, and older adults had disproportionately poorer memory for face-name associations. Findings implicate age-related difficulty in forming and retrieving the association between the face and the name as the primary cause of obtained deficits in previous name learning studies. PMID:18808254

  20. Atypical Development of Face and Greeble Recognition in Autism

    ERIC Educational Resources Information Center

    Scherf, K. Suzanne; Behrmann, Marlene; Minshew, Nancy; Luna, Beatriz

    2008-01-01

    Background: Impaired face processing is a widely documented deficit in autism. Although the origin of this deficit is unclear, several groups have suggested that a lack of perceptual expertise is contributory. We investigated whether individuals with autism develop expertise in visuoperceptual processing of faces and whether any deficiency in such…

  1. Effect of Partial Occlusion on Newborns' Face Preference and Recognition

    ERIC Educational Resources Information Center

    Gava, Lucia; Valenza, Eloisa; Turati, Chiara; de Schonen, Scania

    2008-01-01

    Many studies have shown that newborns prefer (e.g. Goren, Sarty & Wu, 1975; Valenza, Simion, Macchi Cassia & Umilta, 1996) and recognize (e.g. Bushnell, Say & Mullin, 1989; Pascalis & de Schonen, 1994) faces. However, it is not known whether, at birth, faces are still preferred and recognized when some of their parts are not visible because…

  2. Using Computerized Games to Teach Face Recognition Skills to Children with Autism Spectrum Disorder: The "Let's Face It!" Program

    ERIC Educational Resources Information Center

    Tanaka, James W.; Wolf, Julie M.; Klaiman, Cheryl; Koenig, Kathleen; Cockburn, Jeffrey; Herlihy, Lauren; Brown, Carla; Stahl, Sherin; Kaiser, Martha D.; Schultz, Robert T.

    2010-01-01

    Background: An emerging body of evidence indicates that relative to typically developing children, children with autism are selectively impaired in their ability to recognize facial identity. A critical question is whether face recognition skills can be enhanced through a direct training intervention. Methods: In a randomized clinical trial,…

  3. Recognition and identification of famous faces in patients with unilateral temporal lobe epilepsy.

    PubMed

    Seidenberg, Michael; Griffith, Randall; Sabsevitz, David; Moran, Maria; Haltiner, Alan; Bell, Brian; Swanson, Sara; Hammeke, Thomas; Hermann, Bruce

    2002-01-01

    We examined the performance of 21 patients with unilateral temporal lobe epilepsy (TLE) and hippocampal damage (10 left, 11 right) and 10 age-matched controls on the recognition and identification (name and occupation) of well-known faces. Famous face stimuli were selected from four time periods: the 1970s, 1980s, 1990-1994, and 1995-1996. Differential patterns of performance were observed for the left and right TLE groups across distinct face processing components. The left TLE group showed a selective impairment in naming famous faces while performing similarly to the controls in face recognition and semantic identification (i.e. occupation). In contrast, the right TLE group was impaired across all components of face memory: face recognition, semantic identification, and face naming. Face naming impairment in the left TLE group was characterized by a temporal gradient, with better naming performance for famous faces from more distant time periods. Findings are discussed in terms of the role of the temporal lobe system in the acquisition, retention, and retrieval of face semantic networks, and the differential effects of lateralized temporal lobe lesions in this process.

  4. From face to interface recognition: a differential geometric approach to distinguish DNA from RNA binding surfaces.

    PubMed

    Shazman, Shula; Elber, Gershon; Mandel-Gutfreund, Yael

    2011-09-01

    Protein nucleic acid interactions play a critical role in all steps of the gene expression pathway. Nucleic acid (NA) binding proteins interact with their partners, DNA or RNA, via distinct regions on their surface that are characterized by an ensemble of chemical, physical and geometrical properties. In this study, we introduce a novel methodology based on differential geometry, commonly used in face recognition, to characterize and predict NA binding surfaces on proteins. Applying the method to experimentally solved three-dimensional structures of proteins, we successfully distinguish double-stranded DNA (dsDNA) from single-stranded RNA (ssRNA) binding proteins with 83% accuracy. We show that the method is insensitive to conformational changes that occur upon binding and is applicable to de novo protein-function prediction. Remarkably, when concentrating on the zinc finger motif, we successfully distinguish between RNA and DNA binding interfaces possessing the same binding motif, even within the same protein, as demonstrated for the RNA polymerase transcription factor TFIIIA. In conclusion, we present a novel methodology to characterize protein surfaces that can accurately tell apart dsDNA-binding from ssRNA-binding interfaces. The strength of our method in recognizing fine-tuned differences on NA binding interfaces makes it applicable to many other molecular recognition problems, with potential implications for drug design.

  7. A new face of sleep: The impact of post-learning sleep on recognition memory for face-name associations.

    PubMed

    Maurer, Leonie; Zitting, Kirsi-Marja; Elliott, Kieran; Czeisler, Charles A; Ronda, Joseph M; Duffy, Jeanne F

    2015-12-01

    Sleep has been demonstrated to improve consolidation of many types of new memories. However, few prior studies have examined how sleep impacts learning of face-name associations. The recognition of a new face along with the associated name is an important human cognitive skill. Here we investigated whether post-presentation sleep impacts recognition memory of new face-name associations in healthy adults. Fourteen participants were tested twice. Each time, they were presented with 20 photos of faces, each with a corresponding name. Twelve hours later, they were shown each face twice, once with the correct and once with an incorrect name, and asked if each face-name combination was correct and to rate their confidence. In one condition the 12-h interval between presentation and recall included an 8-h nighttime sleep opportunity ("Sleep"), while in the other condition they remained awake ("Wake"). There were more correct and highly confident correct responses when the interval between presentation and recall included a sleep opportunity, although improvement between the "Wake" and "Sleep" conditions was not related to duration of sleep or any sleep stage. These data suggest that a nighttime sleep opportunity improves the ability to correctly recognize face-name associations. Further studies investigating the mechanism of this improvement are important, as this finding has implications for individuals with sleep disturbances and/or memory impairments.

  8. When family looks strange and strangers look normal: a case of impaired face perception and recognition after stroke.

    PubMed

    Heutink, Joost; Brouwer, Wiebo H; Kums, Evelien; Young, Andy; Bouma, Anke

    2012-02-01

    We describe a patient (JS) with impaired recognition and distorted visual perception of faces after an ischemic stroke. Strikingly, JS reports that the faces of family members look distorted, while faces of other people look normal. After neurological and neuropsychological examination, we assessed response accuracy, response times, and skin conductance responses on a face recognition task in which photographs of close family members, celebrities and unfamiliar people were presented. JS' performance was compared to the performance of three healthy control participants. Results indicate that three aspects of face perception appear to be impaired in JS. First, she has impaired recognition of basic emotional expressions. Second, JS has poor recognition of familiar faces in general, but recognition of close family members is disproportionally impaired compared to faces of celebrities. Third, JS perceives faces of family members as distorted. In this paper we consider whether these impairments can be interpreted in terms of previously described disorders of face perception and recent models for face perception.

  9. Eye-tracking the own-race bias in face recognition: revealing the perceptual and socio-cognitive mechanisms.

    PubMed

    Hills, Peter J; Pake, J Michael

    2013-12-01

    Own-race faces are recognised more accurately than other-race faces and may even be viewed differently as measured by an eye-tracker (Goldinger, Papesh, & He, 2009). Alternatively, observer race might direct eye-movements (Blais, Jack, Scheepers, Fiset, & Caldara, 2008). Observer differences in eye-movements are likely to be based on experience of the physiognomic characteristics that are differentially discriminating for Black and White faces. Two experiments are reported that employed standard old/new recognition paradigms in which Black and White observers viewed Black and White faces with their eye-movements recorded. Experiment 1 showed that there were observer race differences in terms of the features scanned but observers employed the same strategy across different types of faces. Experiment 2 demonstrated that other-race faces could be recognised more accurately if participants had their first fixation directed to more diagnostic features using fixation crosses. These results are entirely consistent with those presented by Blais et al. (2008) and with the perceptual interpretation that the own-race bias is due to inappropriate attention allocated to the facial features (Hills & Lewis, 2006, 2011).

  10. Robust and discriminating method for face recognition based on correlation technique and independent component analysis model.

    PubMed

    Alfalou, A; Brosseau, C

    2011-03-01

    We demonstrate a novel technique for face recognition. Our approach relies on the performances of a strongly discriminating optical correlation method along with the robustness of the independent component analysis (ICA) model. Simulations were performed to illustrate how this algorithm can identify a face with images from the Pointing Head Pose Image Database. While maintaining algorithmic simplicity, this approach based on ICA representation significantly increases the true recognition rate compared to that obtained using our previously developed all-numerical ICA identity recognition method and another method based on optical correlation and a standard composite filter. PMID:21368935

  11. Verbal Overshadowing and Face Recognition in Young and Old Adults

    ERIC Educational Resources Information Center

    Kinlen, Thomas J.; Adams-Price, Carolyn E.; Henley, Tracy B.

    2007-01-01

    Verbal overshadowing has been found to disrupt recognition accuracy when hard-to-describe stimuli are used. The current study replicates previous research on verbal overshadowing with younger people and extends this research into an older population to examine the possible link between verbal expertise and verbal overshadowing. It was hypothesized…

  12. Face identity recognition in autism spectrum disorders: a review of behavioral studies.

    PubMed

    Weigelt, Sarah; Koldewyn, Kami; Kanwisher, Nancy

    2012-03-01

    Face recognition--the ability to recognize a person from their facial appearance--is essential for normal social interaction. Face recognition deficits have been implicated in the most common disorder of social interaction: autism. Here we ask: is face identity recognition in fact impaired in people with autism? Reviewing behavioral studies we find no strong evidence for a qualitative difference in how facial identity is processed between those with and without autism: markers of typical face identity recognition, such as the face inversion effect, seem to be present in people with autism. However, quantitatively--i.e., how well facial identity is remembered or discriminated--people with autism perform worse than typical individuals. This impairment is particularly clear in face memory and in face perception tasks in which a delay intervenes between sample and test, and less so in tasks with no memory demand. Although some evidence suggests that this deficit may be specific to faces, further evidence on this question is necessary.

  13. An Own-Race Advantage for Components as Well as Configurations in Face Recognition

    ERIC Educational Resources Information Center

    Hayward, William G.; Rhodes, Gillian; Schwaninger, Adrian

    2008-01-01

    The own-race advantage in face recognition has been hypothesized as being due to a superiority in the processing of configural information for own-race faces. Here we examined the contributions of both configural and component processing to the own-race advantage. We recruited 48 Caucasian participants in Australia and 48 Chinese participants in…

  14. Brief Report: Face-Specific Recognition Deficits in Young Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Bradshaw, Jessica; Shic, Frederick; Chawarska, Katarzyna

    2011-01-01

    This study used eyetracking to investigate the ability of young children with autism spectrum disorders (ASD) to recognize social (faces) and nonsocial (simple objects and complex block patterns) stimuli using the visual paired comparison (VPC) paradigm. Typically developing (TD) children showed evidence for recognition of faces and simple…

  15. Face Recognition in Children with a Pervasive Developmental Disorder Not Otherwise Specified.

    ERIC Educational Resources Information Center

    Serra, M.; Althaus, M.; de Sonneville, L. M. J.; Stant, A. D.; Jackson, A. E.; Minderaa, R. B.

    2003-01-01

    A study investigated the accuracy and speed of face recognition in 26 children (ages 7-10) with Pervasive Developmental Disorder Not Otherwise Specified. Subjects needed almost as much time to recognize the faces as they needed to recognize abstract patterns that were difficult to distinguish. (Contains references.)…

  16. The Ups and Downs of Face Recognition: A Unique Developmental Trend?

    ERIC Educational Resources Information Center

    Flin, Rhona H.

    Children's ability to recognize unfamiliar faces shows an unusual developmental trend: performance improves from 6 to 11 years, a temporary regression occurs at 12 years, and then recovery leads to adult-level performance. The first study described in this paper tested 80 children 5 to 11 years of age on a face-matching and recognition task.…

  17. Deficits in other-race face recognition: no evidence for encoding-based effects.

    PubMed

    Papesh, Megan H; Goldinger, Stephen D

    2009-12-01

    The other-race effect (ORE) in face recognition is typically observed in tasks which require long-term memory. Several studies, however, have found the effect early in face encoding (Lindsay, Jack, & Christian, 1991; Walker & Hewstone, 2006). In 6 experiments, with over 300 participants, we found no evidence that the recognition deficit associated with the ORE reflects deficits in immediate encoding. In Experiment 1, with a study-to-test retention interval of 4 min, participants were better able to recognise White faces, relative to Asian faces. Experiment 1 also validated the use of computer-generated faces in subsequent experiments. In Experiments 2 through 4, performance was virtually identical to Asian and White faces in match-to-sample, immediate recognition. In Experiment 5, decreasing target-foil similarity and disrupting the retention interval with trivia questions elicited a re-emergence of the ORE. Experiments 6A and 6B replicated this effect, and showed that memory for Asian faces was particularly susceptible to distraction; White faces were recognised equally well, regardless of trivia questions during the retention interval. The recognition deficit in the ORE apparently emerges from retention or retrieval deficits, not differences in immediate perceptual processing.

  18. Brief Report: Developing Spatial Frequency Biases for Face Recognition in Autism and Williams Syndrome

    ERIC Educational Resources Information Center

    Leonard, Hayley C.; Annaz, Dagmara; Karmiloff-Smith, Annette; Johnson, Mark H.

    2011-01-01

    The current study investigated whether contrasting face recognition abilities in autism and Williams syndrome could be explained by different spatial frequency biases over developmental time. Typically-developing children and groups with Williams syndrome and autism were asked to recognise faces in which low, middle and high spatial frequency…

  19. Face Processing and Facial Emotion Recognition in Adults with Down Syndrome

    ERIC Educational Resources Information Center

    Barisnikov, Koviljka; Hippolyte, Loyse; Van der Linden, Martial

    2008-01-01

    Face processing and facial expression recognition was investigated in 17 adults with Down syndrome, and results were compared with those of a child control group matched for receptive vocabulary. On the tasks involving faces without emotional content, the adults with Down syndrome performed significantly worse than did the controls. However, their…

  20. Sex differences in face recognition memory in patients with temporal lobe epilepsy, patients with generalized epilepsy, and healthy controls.

    PubMed

    Bengner, T; Fortmeier, C; Malina, T; Lindenau, M; Voges, B; Goebell, E; Stodieck, S

    2006-12-01

    The influence of sex on face recognition memory was studied in 49 patients with temporal lobe epilepsy, 20 patients with generalized epilepsy, and 32 healthy controls. After learning 20 faces, serially presented for 5 seconds each, subjects had to recognize the 20 among 40 faces (including 20 new faces) immediately and 24 hours later. Women had better face recognition than men, with no significant differences between groups. Women's advantage was due mainly to superior delayed recognition. Taken together, the results suggest that sex has a similar impact on face recognition in patients with epilepsy and healthy controls, and that testing delayed face recognition raises sensitivity for sex differences. The influence of sex on face recognition in patients with epilepsy should be acknowledged when evaluating individuals or comparing groups.

  1. The design and implementation of effective face detection and recognition system

    NASA Astrophysics Data System (ADS)

    Sun, Yigui

    2011-06-01

    In the paper, a face detection and recognition system (FDRS) based on video sequences and still images is proposed. It uses the AdaBoost algorithm to detect human faces in an image or frame and adopts the Discrete Cosine Transform (DCT) for feature extraction and recognition in face images. The related technologies are first outlined. Then, the system requirements and UML use case diagram are described. In addition, the paper introduces the design solution and key procedures. The FDRS source code is implemented in VC++ using the Standard Template Library (STL) and the Intel Open Source Computer Vision Library (OpenCV).
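The DCT feature-extraction step such a system uses can be sketched as follows. This is a minimal Python illustration rather than the paper's VC++/OpenCV implementation: the hand-rolled orthonormal DCT, the zig-zag low-frequency coefficient selection, and all names are assumptions, not the authors' code.

```python
import numpy as np

def dct2(block):
    """2-D type-II DCT via the orthonormal DCT matrix C, i.e. C @ block @ C.T."""
    n = block.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)  # DC row normalization for orthonormality
    return C @ block @ C.T

def dct_features(face, num_coeffs=15):
    """Keep the low-frequency DCT coefficients in JPEG zig-zag order as features."""
    d = dct2(face.astype(float))
    n = d.shape[0]
    # zig-zag: order by anti-diagonal; alternate traversal direction per diagonal
    idx = sorted(((i, j) for i in range(n) for j in range(n)),
                 key=lambda p: (p[0] + p[1], p[0] if (p[0] + p[1]) % 2 else p[1]))
    return np.array([d[i, j] for i, j in idx[:num_coeffs]])

# toy 8x8 "face" patch
face = np.arange(64, dtype=float).reshape(8, 8)
feat = dct_features(face)
```

Keeping only the first few zig-zag coefficients discards high-frequency detail, which is the usual rationale for DCT features: most face-image energy concentrates in the low frequencies.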

  2. Determining optimally orthogonal discriminant vectors in DCT domain for multiscale-based face recognition

    NASA Astrophysics Data System (ADS)

    Niu, Yanmin; Wang, Xuchu

    2011-02-01

    This paper presents a new face recognition method that extracts multiple discriminant features based on a multiscale image enhancement technique and kernel-based orthogonal feature extraction, with several interesting characteristics. First, it extracts more discriminative multiscale face features than traditional pixel-based or Gabor-based features. Second, it effectively deals with the small-sample-size problem as well as the feature correlation problem by using eigenvalue decomposition on scatter matrices. Finally, the extractor handles nonlinearity efficiently via the kernel trick. Multiple recognition experiments on an open face data set, with comparisons to several related methods, show the effectiveness and superiority of the proposed method.

  3. Social trait judgment and affect recognition from static faces and video vignettes in schizophrenia

    PubMed Central

    McIntosh, Lindsey G.; Park, Sohee

    2014-01-01

    Social impairment is a core feature of schizophrenia, present from the pre-morbid stage and predictive of outcome, but the etiology of this deficit remains poorly understood. Successful and adaptive social interactions depend on one’s ability to make rapid and accurate judgments about others in real time. Our surprising ability to form accurate first impressions from brief exposures, known as “thin slices” of behavior has been studied very extensively in healthy participants. We sought to examine affect and social trait judgment from thin slices of static or video stimuli in order to investigate the ability of schizophrenic individuals to form reliable social impressions of others. 21 individuals with schizophrenia (SZ) and 20 matched healthy participants (HC) were asked to identify emotions and social traits for actors in standardized face stimuli as well as brief video clips. Sound was removed from videos to remove all verbal cues. Clinical symptoms in SZ and delusional ideation in both groups were measured. Results showed a general impairment in affect recognition for both types of stimuli in SZ. However, the two groups did not differ in the judgments of trustworthiness, approachability, attractiveness, and intelligence. Interestingly, in SZ, the severity of positive symptoms was correlated with higher ratings of attractiveness, trustworthiness, and approachability. Finally, increased delusional ideation in SZ was associated with a tendency to rate others as more trustworthy, while the opposite was true for HC. These findings suggest that complex social judgments in SZ are affected by symptomatology. PMID:25037526

  4. Social trait judgment and affect recognition from static faces and video vignettes in schizophrenia.

    PubMed

    McIntosh, Lindsey G; Park, Sohee

    2014-09-01

    Social impairment is a core feature of schizophrenia, present from the pre-morbid stage and predictive of outcome, but the etiology of this deficit remains poorly understood. Successful and adaptive social interactions depend on one's ability to make rapid and accurate judgments about others in real time. Our surprising ability to form accurate first impressions from brief exposures, known as "thin slices" of behavior has been studied very extensively in healthy participants. We sought to examine affect and social trait judgment from thin slices of static or video stimuli in order to investigate the ability of schizophrenic individuals to form reliable social impressions of others. 21 individuals with schizophrenia (SZ) and 20 matched healthy participants (HC) were asked to identify emotions and social traits for actors in standardized face stimuli as well as brief video clips. Sound was removed from videos to remove all verbal cues. Clinical symptoms in SZ and delusional ideation in both groups were measured. Results showed a general impairment in affect recognition for both types of stimuli in SZ. However, the two groups did not differ in the judgments of trustworthiness, approachability, attractiveness, and intelligence. Interestingly, in SZ, the severity of positive symptoms was correlated with higher ratings of attractiveness, trustworthiness, and approachability. Finally, increased delusional ideation in SZ was associated with a tendency to rate others as more trustworthy, while the opposite was true for HC. These findings suggest that complex social judgments in SZ are affected by symptomatology.

  5. Social trait judgment and affect recognition from static faces and video vignettes in schizophrenia.

    PubMed

    McIntosh, Lindsey G; Park, Sohee

    2014-09-01

    Social impairment is a core feature of schizophrenia, present from the pre-morbid stage and predictive of outcome, but the etiology of this deficit remains poorly understood. Successful and adaptive social interactions depend on one's ability to make rapid and accurate judgments about others in real time. Our surprising ability to form accurate first impressions from brief exposures, known as "thin slices" of behavior has been studied very extensively in healthy participants. We sought to examine affect and social trait judgment from thin slices of static or video stimuli in order to investigate the ability of schizophrenic individuals to form reliable social impressions of others. 21 individuals with schizophrenia (SZ) and 20 matched healthy participants (HC) were asked to identify emotions and social traits for actors in standardized face stimuli as well as brief video clips. Sound was removed from videos to remove all verbal cues. Clinical symptoms in SZ and delusional ideation in both groups were measured. Results showed a general impairment in affect recognition for both types of stimuli in SZ. However, the two groups did not differ in the judgments of trustworthiness, approachability, attractiveness, and intelligence. Interestingly, in SZ, the severity of positive symptoms was correlated with higher ratings of attractiveness, trustworthiness, and approachability. Finally, increased delusional ideation in SZ was associated with a tendency to rate others as more trustworthy, while the opposite was true for HC. These findings suggest that complex social judgments in SZ are affected by symptomatology. PMID:25037526

  6. Oxytocin increases bias, but not accuracy, in face recognition line-ups.

    PubMed

    Bate, Sarah; Bennetts, Rachel; Parris, Benjamin A; Bindemann, Markus; Udale, Robert; Bussunt, Amanda

    2015-07-01

    Previous work indicates that intranasal inhalation of oxytocin improves face recognition skills, raising the possibility that it may be used in security settings. However, it is unclear whether oxytocin directly acts upon the core face-processing system itself or indirectly improves face recognition via affective or social salience mechanisms. In a double-blind procedure, 60 participants received either an oxytocin or placebo nasal spray before completing the One-in-Ten task, a standardized test of unfamiliar face recognition containing target-present and target-absent line-ups. Participants in the oxytocin condition outperformed those in the placebo condition on target-present trials, yet were more likely to make false-positive errors on target-absent trials. Signal detection analyses indicated that oxytocin induced a more liberal response bias, rather than increasing accuracy per se. These findings support a social salience account of the effects of oxytocin on face recognition and indicate that oxytocin may impede face recognition in certain scenarios. PMID:25433464
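The signal detection analysis mentioned here separates sensitivity (d') from response bias (criterion c). A minimal sketch, assuming the standard equal-variance Gaussian model with a log-linear correction for extreme rates; the counts below are hypothetical illustrations, not the study's data:

```python
from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' = z(HR) - z(FAR) and criterion c = -(z(HR) + z(FAR)) / 2.
    The +0.5 / +1 (log-linear) correction avoids infinite z-scores at rates 0 or 1."""
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hr) - z(far), -(z(hr) + z(far)) / 2

# Hypothetical counts: "oxytocin" scores more hits but also more false alarms
d_oxy, c_oxy = dprime_and_criterion(hits=18, misses=12, false_alarms=14, correct_rejections=16)
d_pla, c_pla = dprime_and_criterion(hits=14, misses=16, false_alarms=8, correct_rejections=22)
```

A more negative c indicates a more liberal bias, so the pattern the study reports would surface as a lower c in the oxytocin group with little or no gain in d'.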

  7. Emotional facial expressions differentially influence predictions and performance for face recognition.

    PubMed

    Nomi, Jason S; Rhodes, Matthew G; Cleary, Anne M

    2013-01-01

    This study examined how participants' predictions of future memory performance are influenced by emotional facial expressions. Participants made judgements of learning (JOLs) predicting the likelihood that they would correctly identify a face displaying a happy, angry, or neutral emotional expression in a future two-alternative forced-choice recognition test of identity (i.e., recognition that a person's face was seen before). JOLs were higher for studied faces with happy and angry emotional expressions than for neutral faces. However, neutral test faces with studied neutral expressions had significantly higher identity recognition rates than neutral test faces studied with happy or angry expressions. Thus, these data are the first to demonstrate that people believe happy and angry emotional expressions will lead to better identity recognition in the future relative to neutral expressions. This occurred despite the fact that neutral expressions elicited better identity recognition than happy and angry expressions. These findings contribute to the growing literature examining the interaction of cognition and emotion.

  8. Oxytocin increases bias, but not accuracy, in face recognition line-ups.

    PubMed

    Bate, Sarah; Bennetts, Rachel; Parris, Benjamin A; Bindemann, Markus; Udale, Robert; Bussunt, Amanda

    2015-07-01

    Previous work indicates that intranasal inhalation of oxytocin improves face recognition skills, raising the possibility that it may be used in security settings. However, it is unclear whether oxytocin directly acts upon the core face-processing system itself or indirectly improves face recognition via affective or social salience mechanisms. In a double-blind procedure, 60 participants received either an oxytocin or placebo nasal spray before completing the One-in-Ten task, a standardized test of unfamiliar face recognition containing target-present and target-absent line-ups. Participants in the oxytocin condition outperformed those in the placebo condition on target-present trials, yet were more likely to make false-positive errors on target-absent trials. Signal detection analyses indicated that oxytocin induced a more liberal response bias, rather than increasing accuracy per se. These findings support a social salience account of the effects of oxytocin on face recognition and indicate that oxytocin may impede face recognition in certain scenarios.

  9. Catechol-O-methyltransferase val(158)met Polymorphism Interacts with Sex to Affect Face Recognition Ability.

    PubMed

    Lamb, Yvette N; McKay, Nicole S; Singh, Shrimal S; Waldie, Karen E; Kirk, Ian J

    2016-01-01

    The catechol-O-methyltransferase (COMT) val158met polymorphism affects the breakdown of synaptic dopamine. Consequently, this polymorphism has been associated with a variety of neurophysiological and behavioral outcomes. Some of the effects have been found to be sex-specific and it appears estrogen may act to down-regulate the activity of the COMT enzyme. The dopaminergic system has been implicated in face recognition, a form of cognition for which a female advantage has typically been reported. This study aimed to investigate potential joint effects of sex and COMT genotype on face recognition. A sample of 142 university students was genotyped and assessed using the Faces I subtest of the Wechsler Memory Scale - Third Edition (WMS-III). A significant two-way interaction between sex and COMT genotype on face recognition performance was found. Of the male participants, COMT val homozygotes and heterozygotes had significantly lower scores than met homozygotes. Scores did not differ between genotypes for female participants. While male val homozygotes had significantly lower scores than female val homozygotes, no sex differences were observed in the heterozygotes and met homozygotes. This study contributes to the accumulating literature documenting sex-specific effects of the COMT polymorphism by demonstrating a COMT-sex interaction for face recognition, and is consistent with a role for dopamine in face recognition. PMID:27445927

  10. Blurred face recognition by fusing blur-invariant texture and structure features

    NASA Astrophysics Data System (ADS)

    Zhu, Mengyu; Cao, Zhiguo; Xiao, Yang; Xie, Xiaokang

    2015-10-01

    Blurred face recognition remains a challenging task with wide applications. Image blur can largely degrade recognition performance. Local phase quantization (LPQ) was proposed to extract blur-invariant texture information; it has been used for blurred face recognition and achieved good performance. However, LPQ captures only phase-based blur-invariant texture information, which is not sufficient. In addition, LPQ is extracted holistically, which cannot fully exploit its discriminative power over local spatial properties. In this paper, we propose a novel method for blurred face recognition. Texture and structure blur-invariant features are extracted and fused to generate a more complete description of the blurred image. For the texture blur-invariant feature, LPQ is extracted in a densely sampled way, and the vector of locally aggregated descriptors (VLAD) is employed to enhance its performance. For the structure blur-invariant feature, the histogram of oriented gradients (HOG) is used. To further enhance its blur invariance, we improve HOG by eliminating weak gradient magnitudes, which are more sensitive to image blur than strong ones. The improved HOG is then fused with the original HOG by canonical correlation analysis (CCA). Finally, the two features are fused by CCA to form the blur-invariant representation of the face image. Experiments on three face datasets demonstrate that our improvements and the proposed method perform well in blurred face recognition.
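The weak-gradient elimination applied to HOG can be sketched for a single cell as follows. This is a hedged illustration, not the authors' implementation: the percentile threshold, the 8x8 cell size, and the function name are assumptions introduced here.

```python
import numpy as np

def hog_cell_histogram(cell, bins=9, magnitude_percentile=50):
    """Orientation histogram for one HOG cell, discarding weak gradients
    (the idea being that weak gradients are more sensitive to blur than strong ones)."""
    gy, gx = np.gradient(cell.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation in [0, pi)
    keep = mag >= np.percentile(mag, magnitude_percentile)  # drop weak gradients
    hist, _ = np.histogram(ang[keep], bins=bins, range=(0, np.pi), weights=mag[keep])
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

cell = np.random.default_rng(1).random((8, 8))
h = hog_cell_histogram(cell)
```

In a full descriptor, such per-cell histograms would be concatenated over a grid of cells and block-normalized; only the thresholding step above reflects the modification the abstract describes.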

  11. Emotional facial expressions differentially influence predictions and performance for face recognition.

    PubMed

    Nomi, Jason S; Rhodes, Matthew G; Cleary, Anne M

    2013-01-01

    This study examined how participants' predictions of future memory performance are influenced by emotional facial expressions. Participants made judgements of learning (JOLs) predicting the likelihood that they would correctly identify a face displaying a happy, angry, or neutral emotional expression in a future two-alternative forced-choice recognition test of identity (i.e., recognition that a person's face was seen before). JOLs were higher for studied faces with happy and angry emotional expressions than for neutral faces. However, neutral test faces with studied neutral expressions had significantly higher identity recognition rates than neutral test faces studied with happy or angry expressions. Thus, these data are the first to demonstrate that people believe happy and angry emotional expressions will lead to better identity recognition in the future relative to neutral expressions. This occurred despite the fact that neutral expressions elicited better identity recognition than happy and angry expressions. These findings contribute to the growing literature examining the interaction of cognition and emotion. PMID:22712473

  12. Catechol-O-methyltransferase val158met Polymorphism Interacts with Sex to Affect Face Recognition Ability

    PubMed Central

    Lamb, Yvette N.; McKay, Nicole S.; Singh, Shrimal S.; Waldie, Karen E.; Kirk, Ian J.

    2016-01-01

    The catechol-O-methyltransferase (COMT) val158met polymorphism affects the breakdown of synaptic dopamine. Consequently, this polymorphism has been associated with a variety of neurophysiological and behavioral outcomes. Some of the effects have been found to be sex-specific and it appears estrogen may act to down-regulate the activity of the COMT enzyme. The dopaminergic system has been implicated in face recognition, a form of cognition for which a female advantage has typically been reported. This study aimed to investigate potential joint effects of sex and COMT genotype on face recognition. A sample of 142 university students was genotyped and assessed using the Faces I subtest of the Wechsler Memory Scale – Third Edition (WMS-III). A significant two-way interaction between sex and COMT genotype on face recognition performance was found. Of the male participants, COMT val homozygotes and heterozygotes had significantly lower scores than met homozygotes. Scores did not differ between genotypes for female participants. While male val homozygotes had significantly lower scores than female val homozygotes, no sex differences were observed in the heterozygotes and met homozygotes. This study contributes to the accumulating literature documenting sex-specific effects of the COMT polymorphism by demonstrating a COMT-sex interaction for face recognition, and is consistent with a role for dopamine in face recognition. PMID:27445927

  13. Gabor feature based classification using the enhanced fisher linear discriminant model for face recognition.

    PubMed

    Liu, Chengjun; Wechsler, Harry

    2002-01-01

    This paper introduces a novel Gabor-Fisher classifier (GFC) for face recognition. The GFC method, which is robust to changes in illumination and facial expression, applies the enhanced Fisher linear discriminant model (EFM) to an augmented Gabor feature vector derived from the Gabor wavelet representation of face images. The novelty of this paper comes from 1) the derivation of an augmented Gabor feature vector, whose dimensionality is further reduced using the EFM by considering both data compression and recognition (generalization) performance; 2) the development of a Gabor-Fisher classifier for multi-class problems; and 3) extensive performance evaluation studies. In particular, we performed comparative studies of different similarity measures applied to various classifiers. We also performed comparative experimental studies of various face recognition schemes, including our novel GFC method, the Gabor wavelet method, the eigenfaces method, the Fisherfaces method, the EFM method, the combination of Gabor and the eigenfaces method, and the combination of Gabor and the Fisherfaces method. The feasibility of the new GFC method has been successfully tested on face recognition using 600 FERET frontal face images corresponding to 200 subjects, which were acquired under variable illumination and facial expressions. The novel GFC method achieves 100% accuracy on face recognition using only 62 features. PMID:18244647
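The augmented Gabor feature vector at the heart of the GFC can be sketched as follows. This is a minimal numpy illustration under stated assumptions (kernel parameters, two scales, four orientations, real-part-only filters, valid-mode filtering); the paper's actual Gabor family and the EFM dimensionality reduction are not reproduced here.

```python
import numpy as np

def gabor_kernel(size=9, wavelength=4.0, theta=0.0, sigma=2.0, gamma=0.5):
    """Real part of a 2-D Gabor wavelet: a Gaussian-windowed cosine grating."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / wavelength))

def gabor_features(image, scales=(4.0, 8.0), orientations=4):
    """Concatenate filter responses over scales/orientations into one feature vector."""
    feats = []
    for wl in scales:
        for k in range(orientations):
            kern = gabor_kernel(wavelength=wl, theta=k * np.pi / orientations)
            # valid-mode 2-D correlation via sliding windows
            win = np.lib.stride_tricks.sliding_window_view(image, kern.shape)
            feats.append(np.einsum('ijkl,kl->ij', win, kern).ravel())
    return np.concatenate(feats)

img = np.random.default_rng(0).random((16, 16))
vec = gabor_features(img)
```

A vector like this grows quickly with image size and filter count, which is why the paper applies the EFM to reduce dimensionality before classification.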

  14. Unmasking a shady mirror effect: recognition of normal versus obscured faces.

    PubMed

    Vokey, John R; Hockley, William E

    2012-01-01

Hockley, Hemsworth, and Consoli (1999) found that following the study of normal faces, a recognition test of normal faces versus faces wearing sunglasses produced a mirror effect: the sunglasses manipulation decreased hit rates and increased false-alarm rates. The stimuli used by Hockley et al. (1999) consisted of separate poses of models wearing or not wearing sunglasses. In the current experiments, we separately manipulated same versus different depictions of individual faces and whether or not the faces were partially obscured. The results of a simulation and four experiments suggest that the test-based mirror effect observed by Hockley et al. (1999) is actually two separable effects.

  15. The Own-Age Bias in Face Recognition: A Meta-Analytic and Theoretical Review

    ERIC Educational Resources Information Center

    Rhodes, Matthew G.; Anastasi, Jeffrey S.

    2012-01-01

    A large number of studies have examined the finding that recognition memory for faces of one's own age group is often superior to memory for faces of another age group. We examined this "own-age bias" (OAB) in the meta-analyses reported. These data showed that hits were reliably greater for same-age relative to other-age faces (g = 0.23) and that…

  16. Self-Face Recognition in Schizophrenia: An Eye-Tracking Study

    PubMed Central

    Bortolon, Catherine; Capdevielle, Delphine; Salesse, Robin N.; Raffard, Stéphane

    2016-01-01

Self-face recognition has been shown to be impaired in schizophrenia (SZ) in studies using behavioral tasks with substantial cognitive demands. Here, we employed an eye-tracking methodology, a relevant tool for understanding self-face recognition deficits in SZ because it provides a natural, continuous, and online record of face processing. Moreover, it reveals the most relevant and informative features each individual looks at during self-face recognition. These advantages are especially relevant considering the fundamental role played by patterns of visual exploration in face processing. Thus, this paper aims to investigate self-face recognition deficits in SZ using eye-tracking methodology. Visual scan paths were monitored in 20 patients with SZ and 20 healthy controls. Self, famous, and unknown faces were morphed in steps of 20%. Location, number, and duration of fixations on relevant areas were recorded with an eye-tracking system. Participants performed a passive exploration task (no specific instruction was provided), followed by an active decision-making task (individuals were explicitly requested to recognize the different faces). Results showed that patients with SZ had fewer and longer fixations compared to controls. Nevertheless, both groups focused their attention on relevant facial features in a similar way. No significant difference was found between groups when participants were requested to recognize the faces (active task). In conclusion, using an eye-tracking methodology and two tasks with low levels of cognitive demands, our results suggest that patients with SZ are able to: (1) explore faces and focus on relevant features of the face in a similar way as controls; and (2) recognize their own face. PMID:26903833

  17. Accurate three-dimensional pose recognition from monocular images using template matched filtering

    NASA Astrophysics Data System (ADS)

    Picos, Kenia; Diaz-Ramirez, Victor H.; Kober, Vitaly; Montemayor, Antonio S.; Pantrigo, Juan J.

    2016-06-01

An accurate algorithm for three-dimensional (3-D) pose recognition of a rigid object is presented. The algorithm is based on adaptive template matched filtering and local search optimization. When a scene image is captured, a bank of correlation filters is constructed to find the best correspondence between the current view of the target in the scene and a target image synthesized by means of computer graphics. The synthetic image is created using a known 3-D model of the target and an iterative procedure based on local search. Computer simulation results obtained with the proposed algorithm in synthetic and real-life scenes are presented and discussed in terms of accuracy of pose recognition in the presence of noise, cluttered background, and occlusion. Experimental results show that the proposed method achieves high accuracy for 3-D pose estimation from monocular images.
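The correlation-filter-bank idea can be sketched in a reduced form: here the pose parameter is a single in-plane rotation angle of a synthetic bar-shaped target rather than a full 3-D pose, and the analytic template generator stands in for the paper's computer-graphics renderer. All names and parameters below are illustrative assumptions:

```python
import numpy as np

def bar_template(shape, theta, width=1.5):
    """Synthesized view of a bar-shaped target rotated in-plane by theta."""
    h, w = shape
    y, x = np.mgrid[:h, :w]
    x = x - w / 2.0
    y = y - h / 2.0
    # distance from the line through the centre with direction theta
    dist = np.abs(-x * np.sin(theta) + y * np.cos(theta))
    return (dist < width).astype(float)

def correlation_peak(scene, template):
    """Peak of the circular cross-correlation between zero-mean scene and template."""
    s = scene - scene.mean()
    t = template - template.mean()
    corr = np.fft.irfft2(np.fft.rfft2(s) * np.conj(np.fft.rfft2(t, s.shape)), s.shape)
    return corr.max()

def estimate_pose(scene, angles, shape):
    """Pick the bank template whose correlation peak is highest (best pose match)."""
    peaks = [correlation_peak(scene, bar_template(shape, a)) for a in angles]
    return angles[int(np.argmax(peaks))]
```

In the paper's setting the bank would be refreshed around the current best estimate by local search; this sketch only shows the matched-filter voting step over a fixed bank.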

  18. ERP investigation of study-test background mismatch during face recognition in schizophrenia.

    PubMed

    Guillaume, Fabrice; Guillem, François; Tiberghien, Guy; Stip, Emmanuel

    2012-01-01

    Old/new effects on event-related potentials (ERP) were explored in 20 patients with schizophrenia and 20 paired comparison subjects during unfamiliar face recognition. Extrinsic perceptual changes - which influence the overall familiarity of an item while retaining face-intrinsic features for use in structural face encoding - were manipulated between the study phase and the test. The question raised here concerns whether these perceptual incongruities would have a different effect on the sense of familiarity and the corresponding behavioral and ERP measures in the two groups. The results showed that schizophrenia patients were more inclined to consider old faces shown against a new background as distractors. This drop in face familiarity was accompanied by the disappearance of ERP old/new effects in this condition, i.e., FN400 and parietal old/new effects. Indeed, while ERP old/new recognition effects were found in both groups when the picture of the face was physically identical to the one presented for study, the ERP correlates of recognition disappeared among patients when the background behind the face was different. This difficulty in disregarding a background change suggests that recognition among patients with schizophrenia is based on a global perceptual matching strategy rather than on the extraction of configural information from the face. The correlations observed between FN400 amplitude, the rejection of faces with a different background, and the reality-distortion scores support the idea that the recognition deficit found in schizophrenia results from early anomalies that are carried over onto the parietal ERP old/new effect. Face-extrinsic perceptual variations provide an opportune situation for gaining insight into the social difficulties that patients encounter throughout their lives.

  19. On the particular vulnerability of face recognition to aging: a review of three hypotheses

    PubMed Central

    Boutet, Isabelle; Taler, Vanessa; Collin, Charles A.

    2015-01-01

Age-related face recognition deficits are characterized by high false alarms to unfamiliar faces, are not as pronounced for other complex stimuli, and are only partially related to general age-related impairments in cognition. This paper reviews some of the underlying processes likely to be implicated in these deficits by focusing on areas where contradictions abound, as a means to highlight avenues for future research. Research pertaining to the three following hypotheses is presented: (i) perceptual deterioration, (ii) encoding of configural information, and (iii) difficulties in recollecting contextual information. The evidence surveyed provides support for the idea that all three factors are likely to contribute, under certain conditions, to the deficits in face recognition seen in older adults. We discuss how these different factors might interact in the context of a generic framework of the different stages implicated in face recognition. Several suggestions for future investigations are outlined. PMID:26347670

  20. Learning deformation model for expression-robust 3D face recognition

    NASA Astrophysics Data System (ADS)

    Guo, Zhe; Liu, Shu; Wang, Yi; Lei, Tao

    2015-12-01

Expression change is the major cause of local plastic deformation of the facial surface. Under large expression changes, intra-class differences can exceed inter-class differences, making it difficult to recognize the same individual across expressions. In this paper, an expression-robust 3D face recognition method is proposed that learns an expression deformation model. The expressions of the individuals in the training set are modeled by principal component analysis, and the main components are retained to construct the facial deformation model. For a test 3D face, the shape difference between the test face and the neutral face in the training set is used to reconstruct the expression change with the constructed deformation model, and the reconstruction residual error is used for recognition. The average recognition rate on GavabDB and a self-built database reaches 85.1% and 83%, respectively, showing strong robustness to expression changes.
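The learn-then-reconstruct scheme described above can be sketched in a few lines of numpy: PCA over training deformation vectors, then the reconstruction residual of a probe's shape difference. The vector dimensions and subspace size are illustrative assumptions, not the paper's mesh representation:

```python
import numpy as np

def fit_deformation_model(deformations, k):
    """PCA of training deformations (rows: neutral-to-expression displacement vectors)."""
    mean = deformations.mean(axis=0)
    _, _, Vt = np.linalg.svd(deformations - mean, full_matrices=False)
    return mean, Vt[:k]                     # retained principal components

def expression_residual(diff, mean, components):
    """Reconstruct a probe's shape difference in the model; return the residual norm.
    A small residual means the difference is explained as expression change."""
    coeffs = (diff - mean) @ components.T
    recon = mean + coeffs @ components
    return float(np.linalg.norm(diff - recon))
```

Recognition would then compare residuals against each candidate's neutral face: a shape difference that the expression model can absorb yields a small residual, while a difference caused by a different identity does not.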

  1. Saccadic eye movements and face recognition performance in patients with central glaucomatous visual field defects.

    PubMed

    Glen, Fiona C; Smith, Nicholas D; Crabb, David P

    2013-04-19

    Patients with more advanced glaucoma are likely to experience problems with everyday visual tasks such as face recognition. However, some patients still perform well at face recognition despite their visual field (VF) defects. This study investigated whether certain eye movement patterns are associated with better performance in the Cambridge Face Memory Test. For patients with bilateral VF defects in their central 10° of VF, making larger saccades appeared to be associated with better face recognition performance (rho=0.60, p=0.001). Associations were less apparent for the patients without significant 10° defects. There were no significant associations between saccade amplitude and task performance in people with healthy vision (rho=-0.24; p=0.13). These findings suggest that some patients with likely symptomatic glaucomatous damage manifest eye movements to adapt to VF loss during certain visual activities.

  2. Face recognition in pictures is affected by perspective transformation but not by the centre of projection.

    PubMed

    Liu, Chang Hong; Ward, James

    2006-01-01

    Recognition of unfamiliar faces is susceptible to image differences caused by angular sizes subtended from the face to the camera. Research on perception of cubes suggests that apparent distortions of a shape due to large camera angle are correctable by placing the observer at the centre of projection, especially when visibility of the picture surface is low (Yang and Kubovy, 1999 Perception & Psychophysics 61 456-467). To explore the implication of this finding for face perception, observers performed recognition and matching tasks where face images with reduced visibility of picture surface were shown with observers either at the centre of projection or at other viewpoints. The results show that, unlike perception of cubes, the effect of perspective transformation on face recognition is largely unaffected by the centre of projection. Furthermore, the use of perspective cues is not affected by textured surfaces. The limitation of perspective in restoring 3-D information of faces suggests a stronger role for image-based, rather than model-based, processes in recognition of unfamiliar faces. PMID:17283930

  3. The effect of gaze direction on three-dimensional face recognition in infant brain activity.

    PubMed

    Yamashita, Wakayo; Kanazawa, So; Yamaguchi, Masami K; Kakigi, Ryusuke

    2012-09-12

In three-dimensional face recognition studies, it is well known that viewing rotating faces enhances face recognition. For infants, our previous study indicated that 8-month-old infants recognized three-dimensional rotating faces with a direct gaze but did not learn faces with an averted gaze. This suggests that gaze direction may affect three-dimensional face recognition in infants. In this experiment, we used near-infrared spectroscopy to measure infants' hemodynamic responses to averted gaze and direct gaze. We hypothesized that infants would show different neural activity for averted and direct gazes. The responses were compared with baseline activation during the presentation of non-face objects. We found that the concentration of oxyhemoglobin increased in the temporal cortex on both sides only during the presentation of averted gaze compared with the baseline period. This is the first study to show that infants' brain activity in three-dimensional face processing differs between averted gaze and direct gaze.

  4. 3D face recognition based on multiple keypoint descriptors and sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm. PMID:24940876
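A simplified sketch of the sparse-representation-based classification (SRC) step over a descriptor dictionary: it assumes greedy orthogonal matching pursuit in place of a full l1 solver, and random unit vectors in place of meshSIFT descriptors, so it illustrates the mechanism rather than reproducing 3DMKDSRC:

```python
import numpy as np

def omp(D, y, k):
    """Greedy sparse coding: select up to k unit-norm atoms (columns of D) for y."""
    idx, residual = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    return idx

def src_identity(D, labels, y, k=4):
    """SRC decision: the identity whose selected atoms leave the smallest residual."""
    idx = omp(D, y, k)
    best, best_res = None, np.inf
    for c in set(labels[i] for i in idx):
        sel = [i for i in idx if labels[i] == c]
        coef, *_ = np.linalg.lstsq(D[:, sel], y, rcond=None)
        res = np.linalg.norm(y - D[:, sel] @ coef)
        if res < best_res:
            best, best_res = c, res
    return best
```

In the full method each keypoint descriptor of the probe scan votes this way against the gallery dictionary, which is what makes the approach tolerant of missing parts and occlusion: descriptors from occluded regions simply fail to match.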

  6. Emotion recognition through static faces and moving bodies: a comparison between typically developed adults and individuals with high level of autistic traits.

    PubMed

    Actis-Grosso, Rossana; Bossi, Francesco; Ricciardelli, Paola

    2015-01-01

    We investigated whether the type of stimulus (pictures of static faces vs. body motion) contributes differently to the recognition of emotions. The performance (accuracy and response times) of 25 Low Autistic Traits (LAT group) young adults (21 males) and 20 young adults (16 males) with either High Autistic Traits or with High Functioning Autism Spectrum Disorder (HAT group) was compared in the recognition of four emotions (Happiness, Anger, Fear, and Sadness) either shown in static faces or conveyed by moving body patch-light displays (PLDs). Overall, HAT individuals were as accurate as LAT ones in perceiving emotions both with faces and with PLDs. Moreover, they correctly described non-emotional actions depicted by PLDs, indicating that they perceived the motion conveyed by the PLDs per se. For LAT participants, happiness proved to be the easiest emotion to be recognized: in line with previous studies we found a happy face advantage for faces, which for the first time was also found for bodies (happy body advantage). Furthermore, LAT participants recognized sadness better by static faces and fear by PLDs. This advantage for motion kinematics in the recognition of fear was not present in HAT participants, suggesting that (i) emotion recognition is not generally impaired in HAT individuals, (ii) the cues exploited for emotion recognition by LAT and HAT groups are not always the same. These findings are discussed against the background of emotional processing in typically and atypically developed individuals. PMID:26557101

  7. Face and Emotion Recognition in MCDD versus PDD-NOS

    ERIC Educational Resources Information Center

    Herba, Catherine M.; de Bruin, Esther; Althaus, Monika; Verheij, Fop; Ferdinand, Robert F.

    2008-01-01

    Previous studies indicate that Multiple Complex Developmental Disorder (MCDD) children differ from PDD-NOS and autistic children on a symptom level and on psychophysiological functioning. Children with MCDD (n = 21) and PDD-NOS (n = 62) were compared on two facets of social-cognitive functioning: identification of neutral faces and facial…

  8. Self-Face and Self-Body Recognition in Autism

    ERIC Educational Resources Information Center

    Gessaroli, Erica; Andreini, Veronica; Pellegri, Elena; Frassinetti, Francesca

    2013-01-01

    The advantage in responding to self vs. others' body and face-parts (the so called self-advantage) is considered to reflect the implicit access to the bodily self representation and has been studied in healthy and brain-damaged adults in previous studies. If the distinction of the self from others is a key aspect of social behaviour and is a…

  9. Face Recognition for Access Control Systems Combining Image-Difference Features Based on a Probabilistic Model

    NASA Astrophysics Data System (ADS)

    Miwa, Shotaro; Kage, Hiroshi; Hirai, Takashi; Sumi, Kazuhiko

We propose a probabilistic face recognition algorithm for Access Control Systems (ACSs). Compared with existing ACSs that use low-cost IC cards, face recognition offers advantages in usability and security: it does not require people to hold cards over scanners and does not accept impostors carrying authorized cards. Face recognition therefore attracts more interest in security markets than IC cards. But in markets where low-cost ACSs exist, price competition matters, which limits the quality of available cameras and image control. ACSs using face recognition must therefore handle much lower-quality images, such as defocused and poorly gain-controlled images, than high-security systems such as immigration control. To tackle these image-quality problems, we developed a face recognition algorithm based on a probabilistic model that combines a variety of image-difference features trained by Real AdaBoost with their prior probability distributions. This makes it possible to evaluate and use only the reliable features among those trained during each authentication, achieving high recognition rates. A field evaluation using a pseudo access control system installed in our office shows that the proposed system achieves a consistently high recognition rate independent of face image quality: about four times lower EER (Equal Error Rate) under a variety of image conditions than the same system without prior probability distributions. By contrast, using image-difference features without priors is sensitive to image quality. We also evaluated PCA, which performs worse but consistently, owing to its general optimization over all the data. Compared with PCA, Real AdaBoost without prior distributions performs twice as well under good image conditions, but degrades to PCA-level performance under poor image conditions.
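One way to read "combine image-difference features with their prior probability distributions" is reliability-gated score fusion. The per-feature Gaussian class-conditional model and the reliability threshold below are illustrative assumptions for that reading, not the paper's Real AdaBoost formulation:

```python
import numpy as np

def gaussian_llr(x, mu_g, sd_g, mu_i, sd_i):
    """Per-feature log-likelihood ratio: genuine vs. impostor Gaussian models."""
    def logpdf(v, mu, sd):
        return -0.5 * np.log(2 * np.pi * sd ** 2) - (v - mu) ** 2 / (2 * sd ** 2)
    return logpdf(x, mu_g, sd_g) - logpdf(x, mu_i, sd_i)

def fuse(features, params, reliability, tau=0.5):
    """Sum reliability-weighted LLRs, skipping features whose prior reliability
    falls below tau, so only trusted features vote in this authentication."""
    total = 0.0
    for x, (mg, sg, mi, si), r in zip(features, params, reliability):
        if r >= tau:
            total += r * gaussian_llr(x, mg, sg, mi, si)
    return total
```

A positive fused score favours the genuine hypothesis; gating by prior reliability mimics the abstract's claim that only reliable features are used when image quality degrades.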

  10. 3D fast wavelet network model-assisted 3D face recognition

    NASA Astrophysics Data System (ADS)

    Said, Salwa; Jemai, Olfa; Zaied, Mourad; Ben Amar, Chokri

    2015-12-01

In recent years, 3D shape has gained prominence in face recognition because of its robustness to pose and illumination changes. These benefits alone, however, do not guarantee a satisfactory recognition rate: challenges such as facial expressions and the computing time of matching algorithms remain. In this context, we propose a 3D face recognition approach using 3D wavelet networks. Our approach has two stages: a learning stage and a recognition stage. For training, we propose a novel algorithm based on the 3D fast wavelet transform. From the 3D coordinates of the face (x, y, z), we voxelize the data to obtain a 3D volume, which is decomposed by the 3D fast wavelet transform and then modeled with a wavelet network; the associated weights serve as the feature vector representing each training face. In the recognition stage, a face of unknown identity is projected onto all the training wavelet networks to obtain a new feature vector after each projection, and a similarity score is computed between the stored and obtained feature vectors. To show the efficiency of our approach, experiments were performed on the full FRGC v2 benchmark.
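The voxelization and wavelet-decomposition front end can be sketched as follows, substituting a single-level 3-D Haar approximation band for the full wavelet-network training and cosine similarity for the paper's matching score; the grid size and point-cloud handling are assumptions for illustration:

```python
import numpy as np

def voxelize(points, grid=8):
    """Map (x, y, z) samples of a face surface into an occupancy volume."""
    lo = points.min(axis=0)
    span = np.ptp(points, axis=0) + 1e-9
    idx = np.minimum(((points - lo) / span * grid).astype(int), grid - 1)
    vol = np.zeros((grid, grid, grid))
    for i, j, k in idx:
        vol[i, j, k] += 1.0
    return vol

def haar3_lowpass(vol):
    """One analysis level of the 3-D Haar wavelet: 2x2x2 block averages
    (the approximation band; detail bands are dropped in this sketch)."""
    g = vol.shape[0] // 2
    return vol.reshape(g, 2, g, 2, g, 2).mean(axis=(1, 3, 5))

def cosine_similarity(a, b):
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

A probe would be voxelized, decomposed the same way, and scored against each stored face; the face with the highest similarity wins.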

  11. Histogram of Gabor phase patterns (HGPP): a novel object representation approach for face recognition.

    PubMed

    Zhang, Baochang; Shan, Shiguang; Chen, Xilin; Gao, Wen

    2007-01-01

A novel object descriptor, histogram of Gabor phase pattern (HGPP), is proposed for robust face recognition. In HGPP, the quadrant-bit codes are first extracted from faces based on the Gabor transformation. Global Gabor phase pattern (GGPP) and local Gabor phase pattern (LGPP) are then proposed to encode the phase variations. GGPP captures the variations derived from the orientation changing of Gabor wavelet at a given scale (frequency), while LGPP encodes the local neighborhood variations by using a novel local XOR pattern (LXP) operator. They are both divided into the nonoverlapping rectangular regions, from which spatial histograms are extracted and concatenated into an extended histogram feature to represent the original image. Finally, the recognition is performed by using the nearest-neighbor classifier with histogram intersection as the similarity measurement. The strengths of HGPP lie in two aspects: 1) HGPP can describe the general face images robustly without the training procedure; 2) HGPP encodes the Gabor phase information, while most previous face recognition methods exploit the Gabor magnitude information. In addition, the Fisher separation criterion is further used to improve the performance of HGPP by weighting the subregions of the image according to their discriminative powers. The proposed methods are successfully applied to face recognition, and the experiment results on the large-scale FERET and CAS-PEAL databases show that the proposed algorithms significantly outperform other well-known systems in terms of recognition rate.
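A minimal sketch of the quadrant-bit / LXP / regional-histogram pipeline: it uses only one bit plane of a single complex response and histogram intersection as the similarity, so the region grid and the source of the response map are illustrative assumptions rather than the full GGPP/LGPP construction:

```python
import numpy as np

def quadrant_bits(resp):
    """Two quadrant bits per pixel from a complex Gabor response (signs of Re, Im)."""
    return (resp.real >= 0).astype(np.uint8), (resp.imag >= 0).astype(np.uint8)

def local_xor_pattern(bits):
    """8-bit LXP code: XOR of each pixel's bit with its 8 neighbours."""
    h, w = bits.shape
    code = np.zeros((h - 2, w - 2), dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    centre = bits[1:-1, 1:-1]
    for b, (dy, dx) in enumerate(shifts):
        nb = bits[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= ((centre ^ nb) << b).astype(np.uint8)
    return code

def region_histograms(code, grid=2):
    """Concatenated, normalised 256-bin histograms over a grid of regions."""
    h, w = code.shape
    hists = []
    for i in range(grid):
        for j in range(grid):
            block = code[i * h // grid:(i + 1) * h // grid,
                         j * w // grid:(j + 1) * w // grid]
            hists.append(np.bincount(block.ravel(), minlength=256))
    v = np.concatenate(hists).astype(float)
    return v / v.sum()

def hist_intersection(p, q):
    """Histogram intersection similarity (1.0 for identical normalised histograms)."""
    return float(np.minimum(p, q).sum())
```

The full descriptor repeats this over all scales and orientations (and both bit planes) and concatenates the regional histograms before nearest-neighbor matching.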

  12. Semantic and visual determinants of face recognition in a prosopagnosic patient.

    PubMed

    Dixon, M J; Bub, D N; Arguin, M

    1998-05-01

Prosopagnosia is the neuropathological inability to recognize familiar people by their faces. It can occur in isolation or can coincide with recognition deficits for other nonface objects. Often, patients whose prosopagnosia is accompanied by object recognition difficulties have more trouble identifying certain categories of objects relative to others. In previous research, we demonstrated that objects that shared multiple visual features and were semantically close posed severe recognition difficulties for a patient with temporal lobe damage. We now demonstrate that this patient's face recognition is constrained by these same parameters. The prosopagnosic patient ELM had difficulties pairing faces to names when the faces shared visual features and the names were semantically related (e.g., Tonya Harding, Nancy Kerrigan, and Josee Chouinard, three ice skaters). He made tenfold fewer errors when the exact same faces were associated with semantically unrelated people (e.g., singer Celine Dion, actress Betty Grable, and First Lady Hillary Clinton). We conclude that prosopagnosia and co-occurring category-specific recognition problems both stem from difficulties disambiguating the stored representations of objects that share multiple visual features and refer to semantically close identities or concepts. PMID:9869710

  14. Motion as a cue to face recognition: evidence from congenital prosopagnosia.

    PubMed

    Longmore, Christopher A; Tree, Jeremy J

    2013-04-01

Congenital prosopagnosia is a condition, present from an early age, that makes it difficult for an individual to recognise someone from his or her face. Typically, research into prosopagnosia has employed static images that do not contain the extra information we can obtain from moving faces and, as a result, very little is known about the role of facial motion for identity processing in prosopagnosia. Two experiments comparing the performance of four congenital prosopagnosics with that of age-matched and younger controls on their ability to learn and recognise (Experiment 1) and match (Experiment 2) novel faces are reported. It was found that younger controls' recognition memory performance increased with dynamic presentation; however, only one of the four prosopagnosics showed any improvement. Motion aided the matching performance of age-matched controls and all prosopagnosics. In addition, the face inversion effect, an effect that tends to be reduced in prosopagnosia, emerged when prosopagnosics matched moving faces. The results suggest that facial motion can be used as a cue to identity, but that this may be a complex and difficult cue to retain. As the prosopagnosics' performance improved with the dynamic presentation of faces, it would appear that prosopagnosics can use motion as a cue to recognition, and the different patterns for the face inversion effect that occurred in the prosopagnosics for static and dynamic faces suggest that the mechanisms used for dynamic facial motion recognition are dissociable from static mechanisms. PMID:23391556

  15. The cross-race effect in face recognition memory by bicultural individuals.

    PubMed

    Marsh, Benjamin U; Pezdek, Kathy; Ozery, Daphna Hausman

    2016-09-01

    Social-cognitive models of the cross-race effect (CRE) generally specify that cross-race faces are automatically categorized as an out-group, and that different encoding processes are then applied to same-race and cross-race faces, resulting in better recognition memory for same-race faces. We examined whether cultural priming moderates the cognitive categorization of cross-race faces. In Experiment 1, monoracial Latino-Americans, considered to have a bicultural self, were primed to focus on either a Latino or American cultural self and then viewed Latino and White faces. Latino-Americans primed as Latino exhibited higher recognition accuracy (A') for Latino than White faces; those primed as American exhibited higher recognition accuracy for White than Latino faces. In Experiment 2, as predicted, prime condition did not moderate the CRE in European-Americans. These results suggest that for monoracial biculturals, priming either of their cultural identities influences the encoding processes applied to same- and cross-race faces, thereby moderating the CRE. PMID:27219532

  17. Super resolution based face recognition: do we need training image set?

    NASA Astrophysics Data System (ADS)

    Al-Hassan, Nadia; Sellahewa, Harin; Jassim, Sabah A.

    2013-05-01

This paper is concerned with face recognition under uncontrolled conditions, e.g. surveillance-at-a-distance and post-riot forensic scenarios, in which captured face images are severely degraded/blurred and of low resolution. This is a tough challenge due to many factors, including capturing conditions. We present the results of our investigations into the recently developed Compressive Sensing (CS) theory to develop scalable face recognition schemes using a variety of overcomplete dictionaries that construct super-resolved face images from any input low-resolution degraded face image. We shall demonstrate that deterministic as well as non-deterministic dictionaries that do not involve the use of face image information, but satisfy some form of the Restricted Isometry Property used for CS, can achieve face recognition accuracy levels as good as, if not better than, those achieved by dictionaries proposed in the literature that are learned from face image databases using elaborate procedures. We shall elaborate on how this approach helps in fighting crime and terrorism.
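The key claim above, that dictionaries satisfying a Restricted Isometry Property can recover sparse representations without any face-image training, can be illustrated with a minimal sparse-recovery sketch: a random Gaussian dictionary and orthogonal matching pursuit. The sizes, sparsity level, and solver here are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random Gaussian dictionary satisfies a form of the Restricted
# Isometry Property with high probability and needs no face-image training.
n, m, k = 64, 256, 5                  # signal dim, number of atoms, sparsity
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)        # unit-norm atoms

# Synthesize a k-sparse code and the observed signal it explains
x_true = np.zeros(m)
x_true[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)
y = D @ x_true

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select the atom most
    correlated with the residual, then re-fit on the chosen support."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

x_hat = omp(D, y, k)
# Residual norm; near zero when the support is exactly recovered
print(float(np.linalg.norm(D @ x_hat - y)))
```

In the paper's setting the sparse code would be taken over a dictionary of high-resolution patches, so the recovered coefficients yield a super-resolved image rather than a synthetic vector.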

  18. The cross-race effect in face recognition memory by bicultural individuals.

    PubMed

    Marsh, Benjamin U; Pezdek, Kathy; Ozery, Daphna Hausman

    2016-09-01

    Social-cognitive models of the cross-race effect (CRE) generally specify that cross-race faces are automatically categorized as an out-group, and that different encoding processes are then applied to same-race and cross-race faces, resulting in better recognition memory for same-race faces. We examined whether cultural priming moderates the cognitive categorization of cross-race faces. In Experiment 1, monoracial Latino-Americans, considered to have a bicultural self, were primed to focus on either a Latino or American cultural self and then viewed Latino and White faces. Latino-Americans primed as Latino exhibited higher recognition accuracy (A') for Latino than White faces; those primed as American exhibited higher recognition accuracy for White than Latino faces. In Experiment 2, as predicted, prime condition did not moderate the CRE in European-Americans. These results suggest that for monoracial biculturals, priming either of their cultural identities influences the encoding processes applied to same- and cross-race faces, thereby moderating the CRE.
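The recognition accuracy measure A' reported in Experiment 1 is the standard nonparametric sensitivity index computed from hit and false-alarm rates; a minimal sketch of the usual formula, assuming the hit rate is at least the false-alarm rate:

```python
def a_prime(hit_rate, fa_rate):
    """Nonparametric sensitivity index A' from hit and false-alarm
    rates; 0.5 is chance, 1.0 is perfect old/new discrimination.
    Assumes 0 < rates < 1 with hit_rate >= fa_rate."""
    h, f = hit_rate, fa_rate
    if h == f:
        return 0.5
    return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))

# e.g. 80% hits on studied faces, 20% false alarms on new faces
print(round(a_prime(0.8, 0.2), 3))   # 0.875
```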

  19. Adaptive weighted local textural features for illumination, expression, and occlusion invariant face recognition

    NASA Astrophysics Data System (ADS)

    Cui, Chen; Asari, Vijayan K.

    2014-03-01

Biometric features such as fingerprints, iris patterns, and face features help to identify people and restrict access to secure areas by performing advanced pattern analysis and matching. Face recognition is one of the most promising biometric methodologies for human identification in a non-cooperative security environment. However, the recognition results obtained by face recognition systems are affected by several variations that may happen to the patterns in an unrestricted environment. As a result, several algorithms have been developed for extracting different facial features for face recognition. Due to the various possible challenges of data captured at different lighting conditions, viewing angles, facial expressions, and partial occlusions in natural environmental conditions, automatic facial recognition still remains a difficult issue that needs to be resolved. In this paper, we propose a novel approach to tackling some of these issues by analyzing the local textural descriptions for facial feature representation. The textural information is extracted by an enhanced local binary pattern (ELBP) description of all the local regions of the face. The relationship of each pixel with respect to its neighborhood is extracted and employed to calculate the new representation. ELBP reconstructs a much better textural feature extraction vector from an original gray-level image in different lighting conditions. The dimensionality of the texture image is reduced by principal component analysis performed on each local face region. Each low-dimensional vector representing a local region is then weighted based on the significance of the sub-region. The weight of each sub-region is determined by employing the local variance estimate of the respective region, which represents the significance of the region. The final facial textural feature vector is obtained by concatenating the reduced-dimensional weight sets of all the modules (sub-regions) of the face image
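The first two stages described, per-region binary-pattern codes and variance-based sub-region weights, can be sketched as follows. A basic 8-neighbour LBP is used as a stand-in, since the abstract does not specify the enhanced LBP construction.

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour local binary pattern codes (a simplified
    stand-in for the paper's enhanced LBP, whose exact construction
    is not given in the abstract)."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit   # one bit per neighbour
    return code

def region_weights(img, grid=(4, 4)):
    """Per-sub-region significance weights from the local variance
    estimate, as the abstract describes."""
    gh, gw = img.shape[0] // grid[0], img.shape[1] // grid[1]
    return np.array([[img[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw].var()
                      for j in range(grid[1])] for i in range(grid[0])])

face = np.random.default_rng(1).integers(0, 256, (64, 64))
codes = lbp_image(face)
print(codes.shape)   # (62, 62): border pixels lack a full neighbourhood
```

In the full pipeline each sub-region's LBP histogram would be reduced by PCA and scaled by its variance weight before concatenation.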

  20. Age-Related Differences in Brain Electrical Activity during Extended Continuous Face Recognition in Younger Children, Older Children and Adults

    ERIC Educational Resources Information Center

    Van Strien, Jan W.; Glimmerveen, Johanna C.; Franken, Ingmar H. A.; Martens, Vanessa E. G.; de Bruin, Eveline A.

    2011-01-01

    To examine the development of recognition memory in primary-school children, 36 healthy younger children (8-9 years old) and 36 healthy older children (11-12 years old) participated in an ERP study with an extended continuous face recognition task (Study 1). Each face of a series of 30 faces was shown randomly six times interspersed with…

  1. Correlation based efficient face recognition and color change detection

    NASA Astrophysics Data System (ADS)

    Elbouz, M.; Alfalou, A.; Brosseau, C.; Alam, M. S.; Qasmi, S.

    2013-01-01

    Identifying the human face via correlation is a topic attracting widespread interest. At the heart of this technique lies the comparison of an unknown target image to a known reference database of images. However, the color information in the target image remains notoriously difficult to interpret. In this paper, we report a new technique which: (i) is robust against illumination change, (ii) offers discrimination ability to detect color change between faces having similar shape, and (iii) is specifically designed to detect red colored stains (i.e. facial bleeding). We adopt the Vanderlugt correlator (VLC) architecture with a segmented phase filter and we decompose the color target image using normalized red, green, and blue (RGB), and hue, saturation, and value (HSV) scales. We propose a new strategy to effectively utilize color information in signatures for further increasing the discrimination ability. The proposed algorithm has been found to be very efficient for discriminating face subjects with different skin colors, and those having color stains in different areas of the facial image.
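A rough digital analogue of the colour-signature idea, correlating normalized RGB and HSV channels between a target and a reference, might look like this. This is illustrative only; the optical VLC architecture and segmented phase filter are not reproduced here.

```python
import colorsys
import numpy as np

def to_hsv(img_rgb):
    """Per-pixel RGB -> HSV via the standard library (expects values in [0, 1])."""
    flat = img_rgb.reshape(-1, 3)
    hsv = np.array([colorsys.rgb_to_hsv(*p) for p in flat])
    return hsv.reshape(img_rgb.shape)

def channel_signature(img):
    """Zero-mean, unit-norm vector per channel, so inner products
    give correlation scores in [-1, 1]."""
    sig = []
    for ch in range(img.shape[-1]):
        v = img[..., ch].ravel().astype(float)
        v = v - v.mean()
        n = np.linalg.norm(v)
        sig.append(v / n if n else v)
    return np.stack(sig)

def color_match_score(a_rgb, b_rgb):
    """Average channel-wise correlation over RGB and HSV: a crude
    digital analogue of fusing colour information in the correlator."""
    score = 0.0
    for a, b in ((a_rgb, b_rgb), (to_hsv(a_rgb), to_hsv(b_rgb))):
        score += float(np.sum(channel_signature(a) * channel_signature(b)))
    return score / 6.0   # six channels in total

rng = np.random.default_rng(2)
face = rng.random((8, 8, 3))
print(round(color_match_score(face, face), 3))   # identical images -> 1.0
```

Two faces with similar shape but different colouring would score high on shape-driven correlation yet low on this colour signature, which is the discrimination the paper targets.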

  2. Discriminative multimanifold analysis for face recognition from a single training sample per person.

    PubMed

    Lu, Jiwen; Tan, Yap-Peng; Wang, Gang

    2013-01-01

Conventional appearance-based face recognition methods usually assume that there are multiple samples per person (MSPP) available for discriminative feature extraction during the training phase. In many practical face recognition applications such as law enforcement, e-passport, and ID card identification, this assumption, however, may not hold as there is only a single sample per person (SSPP) enrolled or recorded in these systems. Many popular face recognition methods fail to work well in this scenario because there are not enough samples for discriminant learning. To address this problem, we propose in this paper a novel discriminative multimanifold analysis (DMMA) method by learning discriminative features from image patches. First, we partition each enrolled face image into several nonoverlapping patches to form an image set for each sample per person. Then, we formulate the SSPP face recognition as a manifold-manifold matching problem and learn multiple DMMA feature spaces to maximize the manifold margins of different persons. Finally, we present a reconstruction-based manifold-manifold distance to identify the unlabeled subjects. Experimental results on three widely used face databases are presented to demonstrate the efficacy of the proposed approach.
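The first step of the pipeline, partitioning each enrolled image into nonoverlapping patches to form a per-person image set, can be sketched directly (the patch size here is an illustrative assumption):

```python
import numpy as np

def to_patches(img, patch=(10, 10)):
    """Partition a face image into nonoverlapping patches, forming the
    per-person image set used for manifold-manifold matching (only the
    first step of the paper's pipeline; the DMMA learning itself is
    not sketched here)."""
    h, w = img.shape
    ph, pw = patch
    patches = [img[i:i + ph, j:j + pw].ravel()
               for i in range(0, h - ph + 1, ph)
               for j in range(0, w - pw + 1, pw)]
    return np.stack(patches)          # one row per patch

face = np.arange(40 * 40, dtype=float).reshape(40, 40)
P = to_patches(face)
print(P.shape)   # (16, 100): a 4x4 grid of 10x10 patches
```

Each row of `P` is then treated as a point, so a single enrolled image becomes a point set whose structure can be modelled as a manifold.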

  3. Face recognition across makeup and plastic surgery from real-world images

    NASA Astrophysics Data System (ADS)

    Moeini, Ali; Faez, Karim; Moeini, Hossein

    2015-09-01

A feature extraction method is proposed to handle the problem of facial appearance changes, including facial makeup and plastic surgery, in face recognition. To make face recognition robust to facial appearance changes, features are individually extracted from facial depth, on which facial makeup and plastic surgery have no effect, and these depth features are added to facial texture features. Accordingly, a three-dimensional (3-D) face is reconstructed from only a single two-dimensional (2-D) frontal image in real-world scenarios, and the facial depth is extracted from the reconstructed model. Afterward, the dual-tree complex wavelet transform (DT-CWT) is applied to both the texture and reconstructed depth images to extract feature vectors. Finally, the final feature vectors are generated by combining the 2-D and 3-D feature vectors and are classified by adopting the support vector machine. Promising results have been achieved for makeup-invariant face recognition on two available image databases (YouTube makeup and virtual makeup) and for plastic surgery-invariant face recognition on a plastic surgery face database, where the method is compared to several state-of-the-art feature extraction methods. Several real-world scenarios are also planned to evaluate the performance of the proposed method on a combination of these three databases with 1102 subjects.

  4. Variation in the Oxytocin Receptor Gene Is Associated with Face Recognition and its Neural Correlates

    PubMed Central

    Westberg, Lars; Henningsson, Susanne; Zettergren, Anna; Svärd, Joakim; Hovey, Daniel; Lin, Tian; Ebner, Natalie C.; Fischer, Håkan

    2016-01-01

    The ability to recognize faces is crucial for daily social interactions. Recent studies suggest that intranasal oxytocin administration improves social recognition in humans. Oxytocin signaling in the amygdala plays an essential role for social recognition in mice, and oxytocin administration has been shown to influence amygdala activity in humans. It is therefore possible that the effects of oxytocin on human social recognition depend on mechanisms that take place in the amygdala—a central region for memory processing also in humans. Variation in the gene encoding the oxytocin receptor (OXTR) has been associated with several aspects of social behavior. The present study examined the potential associations between nine OXTR polymorphisms, distributed across the gene, and the ability to recognize faces, as well as face-elicited amygdala activity measured by functional magnetic resonance imaging (fMRI) during incidental encoding of faces. The OXTR 3′ polymorphism rs7632287, previously related to social bonding behavior and autism risk, was associated with participants’ ability to recognize faces. Carriers of the GA genotype, associated with enhanced memory, displayed higher amygdala activity during face encoding compared to carriers of the GG genotype. In line with work in rodents, these findings suggest that, in humans, naturally occurring endogenous modulation of OXTR function affects social recognition through an amygdala-dependent mechanism. These findings contribute to the understanding of how oxytocin regulates human social behaviors. PMID:27713694

  5. Near real-time face detection and recognition using a wireless camera network

    NASA Astrophysics Data System (ADS)

    Nicolo, Francesco; Parupati, Srikanth; Kulathumani, Vinod; Schmid, Natalia A.

    2012-06-01

We present a portable wireless multi-camera network based system that quickly recognizes the faces of human subjects. The system uses low-power embedded cameras to acquire video frames of subjects in an uncontrolled environment and opportunistically extracts frontal face images in real time. The extracted images may have heavy motion blur, low resolution, and large pose variability. A quality-based selection process is first employed to discard images that are not suitable for recognition. Then, the face images are geometrically normalized to a pool of four standard resolutions using the coordinates of the detected eyes. The images are transmitted to a fusion center which holds a multi-resolution gallery of template images. An optimized double-stage recognition algorithm based on Gabor filters and the simplified Weber local descriptor is implemented to extract features from the normalized probe face images. At the fusion center, gallery images are compared against probe images acquired by a wireless network of seven embedded cameras, and a score fusion strategy is adopted to produce a single matching score. The performance of the proposed algorithm is compared to the commercial face recognition engine FaceIt G8 by L1 and other well-known methods based on local descriptors. The experiments show that the overall system provides similar or better recognition performance than the commercial engine with a shorter computational time, especially on low-resolution face images. In conclusion, the designed system is able to detect and recognize individuals in near real time.
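The Gabor-filter feature extraction stage can be sketched with a small filter bank. Kernel size, wavelength, and mean-absolute-response pooling are illustrative assumptions here; the simplified Weber local descriptor stage is not shown.

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lam):
    """Real-valued Gabor kernel: a Gaussian envelope times an
    oriented cosine carrier (illustrative parameters)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Filter-bank responses pooled to their mean absolute value, one
    number per orientation: a tiny stand-in for the full descriptor."""
    feats = []
    H, W = img.shape
    for theta in thetas:
        k = gabor_kernel(9, 2.0, theta, 4.0)
        # valid-mode 2-D correlation via explicit sliding windows
        out = np.array([[np.sum(img[i:i + 9, j:j + 9] * k)
                         for j in range(W - 8)] for i in range(H - 8)])
        feats.append(np.abs(out).mean())
    return np.array(feats)

face = np.random.default_rng(3).random((24, 24))
print(gabor_features(face).shape)   # (4,): one pooled response per orientation
```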

  6. Face ethnicity and measurement reliability affect face recognition performance in developmental prosopagnosia: evidence from the Cambridge Face Memory Test-Australian.

    PubMed

    McKone, Elinor; Hall, Ashleigh; Pidcock, Madeleine; Palermo, Romina; Wilkinson, Ross B; Rivolta, Davide; Yovel, Galit; Davis, Joshua M; O'Connor, Kirsty B

    2011-03-01

    The Cambridge Face Memory Test (CFMT, Duchaine & Nakayama, 2006) provides a validated format for testing novel face learning and has been a crucial instrument in the diagnosis of developmental prosopagnosia. Yet, some individuals who report everyday face recognition symptoms consistent with prosopagnosia, and are impaired on famous face tasks, perform normally on the CFMT. Possible reasons include measurement error, CFMT assessment of memory only at short delays, and a face set whose ethnicity is matched to only some Caucasian groups. We develop the "CFMT-Australian" (CFMT-Aus), which complements the CFMT-original by using ethnicity better matched to a different European subpopulation. Results confirm reliability (.88) and validity (convergent, divergent using cars, inversion effects). We show that face ethnicity within a race has subtle but clear effects on face processing even in normal participants (includes cross-over interaction for face ethnicity by perceiver country of origin in distinctiveness ratings). We show that CFMT-Aus clarifies diagnosis of prosopagnosia in 6 previously ambiguous cases. In 3 cases, this appears due to the better ethnic match to prosopagnosics. We also show that face memory at short (<3-min), 20-min, and 24-hr delays taps overlapping processes in normal participants. There is some suggestion that a form of prosopagnosia may exist that is long delay only and/or reflects failure to benefit from face repetition.

  7. Effects of surface materials on polarimetric-thermal measurements: applications to face recognition.

    PubMed

    Short, Nathaniel J; Yuffa, Alex J; Videen, Gorden; Hu, Shuowen

    2016-07-01

Materials, such as cosmetics, applied to the face can severely inhibit biometric face-recognition systems operating in the visible spectrum. These products are typically made up of materials having different spectral properties and color pigmentation that distorts the perceived shape of the face. The surface of the face emits thermal radiation, due to the living tissue beneath the surface of the skin. The emissivity of skin is approximately 0.99; in comparison, oil- and plastic-based materials, commonly found in cosmetics and face paints, have an emissivity range of 0.9-0.95 in the long-wavelength infrared part of the spectrum. Due to these properties, all three are good thermal emitters, and such coatings have little impact on the heat transferred from the face. Polarimetric-thermal imaging provides additional details of the face and is also dependent upon the thermal radiation from the face. In this paper, we provide a theoretical analysis of the thermal conductivity of various materials commonly applied to the face, using a metallic sphere. Additionally, we observe the impact of environmental conditions on the strength of the polarimetric signature and the ability to recover geometric details. Finally, we show how these materials degrade the performance of traditional face-recognition methods and provide an approach to mitigating this effect using polarimetric-thermal imaging.
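The emissivity figures quoted imply only a small difference in emitted thermal power between bare and coated skin, which is why such materials barely perturb the thermal signature. A back-of-envelope graybody comparison via the Stefan-Boltzmann law (the 307 K skin temperature and the 0.92 mid-range cosmetic emissivity are assumed values, not from the paper):

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power(emissivity, temp_k):
    """Graybody radiant exitance M = eps * sigma * T^4, in W/m^2."""
    return emissivity * SIGMA * temp_k**4

skin_t = 307.0                          # ~34 degC facial skin surface (assumed)
skin = radiated_power(0.99, skin_t)     # bare skin
paint = radiated_power(0.92, skin_t)    # mid-range cosmetic coating
print(round(100 * (skin - paint) / skin, 1))   # 7.1 percent difference
```

Since T^4 cancels in the ratio, the relative difference depends only on the emissivities, so even the most extreme quoted coating (0.9) changes the emitted power by under 10 percent.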

  9. Can the usage of human growth hormones affect facial appearance and the accuracy of face recognition systems?

    NASA Astrophysics Data System (ADS)

    Rose, Jake; Martin, Michael; Bourlai, Thirimachos

    2014-06-01

In law enforcement and security applications, the acquisition of face images is critical in producing key trace evidence for the successful identification of potential threats. The goal of the study is to demonstrate that steroid usage significantly affects human facial appearance and hence the performance of commercial and academic face recognition (FR) algorithms. In this work, we evaluate the performance of state-of-the-art FR algorithms on two unique face image datasets of subjects before (gallery set) and after (probe set) steroid (or human growth hormone) usage. For the purpose of this study, datasets of 73 subjects were created from multiple sources found on the Internet, containing images of men and women before and after steroid usage. Next, we geometrically pre-processed all images of both face datasets. Then, we applied image restoration techniques to the same face datasets, and finally, we applied FR algorithms to match the pre-processed face images of our probe datasets against the face images of the gallery set. Experimental results demonstrate that only a specific set of FR algorithms obtain the most accurate results (in terms of the rank-1 identification rate). This is because several factors influence the efficiency of face matchers, including (i) the time lapse between the before and after face photos, (ii) the usage of different drugs (e.g. Dianabol, Winstrol, and Decabolan), (iii) the usage of different cameras to capture face images, and (iv) the variability of standoff distance, illumination, and other noise factors (e.g. motion noise). All of these complications make clear that cross-scenario matching is a very challenging problem and, thus, further investigation is required.

  10. Stereotype Priming in Face Recognition: Interactions between Semantic and Visual Information in Face Encoding

    ERIC Educational Resources Information Center

    Hills, Peter J.; Lewis, Michael B.; Honey, R. C.

    2008-01-01

    The accuracy with which previously unfamiliar faces are recognised is increased by the presentation of a stereotype-congruent occupation label [Klatzky, R. L., Martin, G. L., & Kane, R. A. (1982a). "Semantic interpretation effects on memory for faces." "Memory & Cognition," 10, 195-206; Klatzky, R. L., Martin, G. L., & Kane, R. A. (1982b).…

  11. Subject independent facial expression recognition with robust face detection using a convolutional neural network.

    PubMed

    Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji

    2003-01-01

Reliable detection of ordinary facial expressions (e.g. smiles), despite variability among individuals as well as in face appearance, is an important step toward the realization of a perceptual user interface with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expression. The result shows reliable detection of smiles with a recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.

  12. Illumination-invariant face recognition with a contrast sensitive silicon retina

    SciTech Connect

    Buhmann, J.M.; Lades, M.; Eeckman, F.

    1993-11-29

Changes in lighting conditions strongly affect the performance and reliability of computer vision systems. We report face recognition results under drastically changing lighting conditions for a computer vision system which concurrently uses a contrast-sensitive silicon retina and a conventional, gain-controlled CCD camera. For both input devices, the face recognition system employs an elastic matching algorithm with wavelet-based features to classify unknown faces. To assess the effect of analog on-chip preprocessing by the silicon retina, the CCD images have been digitally preprocessed with a bandpass filter to adjust the power spectrum. The silicon retina, with its ability to adjust sensitivity, increases the recognition rate by up to 50 percent. These comparative experiments demonstrate that preprocessing with an analog VLSI silicon retina generates image data enriched with object-constant features.

  13. Image Generation Using Bidirectional Integral Features for Face Recognition with a Single Sample per Person

    PubMed Central

    Lee, Yonggeol; Lee, Minsik; Choi, Sang-Il

    2015-01-01

    In face recognition, most appearance-based methods require several images of each person to construct the feature space for recognition. However, in the real world it is difficult to collect multiple images per person, and in many cases there is only a single sample per person (SSPP). In this paper, we propose a method to generate new images with various illuminations from a single image taken under frontal illumination. Motivated by the integral image, which was developed for face detection, we extract the bidirectional integral feature (BIF) to obtain the characteristics of the illumination condition at the time of the picture being taken. The experimental results for various face databases show that the proposed method results in improved recognition performance under illumination variation. PMID:26414018
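The integral image that motivates the bidirectional integral feature can be computed with two cumulative sums, after which any rectangular region sum costs at most four table lookups:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y+1, :x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, top, left, bottom, right):
    """Sum over the inclusive rectangle using four table lookups."""
    s = ii[bottom, right]
    if top > 0:
        s -= ii[top - 1, right]
    if left > 0:
        s -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        s += ii[top - 1, left - 1]
    return s

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
print(box_sum(ii, 1, 1, 2, 2))   # 5 + 6 + 9 + 10 = 30
```

The paper's bidirectional variant builds on this idea in two directions to characterize the illumination of the input photograph; that extension is not sketched here.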

  14. Development of holistic vs. featural processing in face recognition.

    PubMed

    Nakabayashi, Kazuyo; Liu, Chang Hong

    2014-01-01

    According to a classic view developed by Carey and Diamond (1977), young children process faces in a piecemeal fashion before adult-like holistic processing starts to emerge at the age of around 10 years. This is known as the encoding switch hypothesis. Since then, a growing body of studies have challenged the theory. This article will provide a critical appraisal of this literature, followed by an analysis of some more recent developments. We will conclude, quite contrary to the classical view, that holistic processing is not only present in early child development, but could even precede the development of part-based processing.

  15. Face recognition via edge-based Gabor feature representation for plastic surgery-altered images

    NASA Astrophysics Data System (ADS)

    Chude-Olisah, Chollette C.; Sulong, Ghazali; Chude-Okonkwo, Uche A. K.; Hashim, Siti Z. M.

    2014-12-01

Plastic surgery procedures on the face introduce skin texture variations between images of the same person (intra-subject), thereby making the task of face recognition more difficult than in the normal scenario. Usually, in contemporary face recognition systems, the original gray-level face image is used as input to the Gabor descriptor, which translates to encoding some texture properties of the face image. The texture-encoding process significantly degrades the performance of such systems in the case of plastic surgery due to the presence of surgically induced intra-subject variations. Based on the proposition that the shape of significant facial components such as the eyes, nose, eyebrows, and mouth remains unchanged after plastic surgery, this paper employs an edge-based Gabor feature representation approach for the recognition of surgically altered face images. We use the edge information, which depends on the shapes of the significant facial components, to address the plastic surgery-induced texture variation problems. To ensure that the significant facial components carry useful edge information with little or no false edges, a simple illumination normalization technique is proposed for preprocessing. A Gabor wavelet is applied to the edge image to accentuate the uniqueness of the significant facial components for discriminating among different subjects. The performance of the proposed method is evaluated on the Georgia Tech (GT) and the Labeled Faces in the Wild (LFW) databases with illumination and expression problems, and on the plastic surgery database with texture changes. Results show that the proposed edge-based Gabor feature representation approach is robust against plastic surgery-induced face variations amidst expression and illumination problems and outperforms the existing plastic surgery face recognition methods reported in the literature.

  16. Rapid communication: Global-local processing affects recognition of distractor emotional faces.

    PubMed

    Srinivasan, Narayanan; Gupta, Rashmi

    2011-03-01

    Recent studies have shown links between happy faces and global, distributed attention as well as sad faces to local, focused attention. Emotions have been shown to affect global-local processing. Given that studies on emotion-cognition interactions have not explored the effect of perceptual processing at different spatial scales on processing stimuli with emotional content, the present study investigated the link between perceptual focus and emotional processing. The study investigated the effects of global-local processing on the recognition of distractor faces with emotional expressions. Participants performed a digit discrimination task with digits at either the global level or the local level presented against a distractor face (happy or sad) as background. The results showed that global processing associated with broad scope of attention facilitates recognition of happy faces, and local processing associated with narrow scope of attention facilitates recognition of sad faces. The novel results of the study provide conclusive evidence for emotion-cognition interactions by demonstrating the effect of perceptual processing on emotional faces. The results along with earlier complementary results on the effect of emotion on global-local processing support a reciprocal relationship between emotional processing and global-local processing. Distractor processing with emotional information also has implications for theories of selective attention.

  17. Detecting Superior Face Recognition Skills in a Large Sample of Young British Adults

    PubMed Central

    Bobak, Anna K.; Pampoulov, Philip; Bate, Sarah

    2016-01-01

    The Cambridge Face Memory Test Long Form (CFMT+) and Cambridge Face Perception Test (CFPT) are typically used to assess the face processing ability of individuals who believe they have superior face recognition skills. Previous large-scale studies have presented norms for the CFPT but not the CFMT+. However, previous research has also highlighted the necessity for establishing country-specific norms for these tests, indicating that norming data is required for both tests using young British adults. The current study addressed this issue in 254 British participants. In addition to providing the first norm for performance on the CFMT+ in any large sample, we also report the first UK specific cut-off for superior face recognition on the CFPT. Further analyses identified a small advantage for females on both tests, and only small associations between objective face recognition skills and self-report measures. A secondary aim of the study was to examine the relationship between trait or social anxiety and face processing ability, and no associations were noted. The implications of these findings for the classification of super-recognizers are discussed. PMID:27713706

  18. An in-depth cognitive examination of individuals with superior face recognition skills.

    PubMed

    Bobak, Anna K; Bennetts, Rachel J; Parris, Benjamin A; Jansari, Ashok; Bate, Sarah

    2016-09-01

    Previous work has reported the existence of "super-recognisers" (SRs), or individuals with extraordinary face recognition skills. However, the precise underpinnings of this ability have not yet been investigated. In this paper we examine (a) the face-specificity of super recognition, (b) perception of facial identity in SRs, (c) whether SRs present with enhancements in holistic processing and (d) the consistency of these findings across different SRs. A detailed neuropsychological investigation into six SRs indicated domain-specificity in three participants, with some evidence of enhanced generalised visuo-cognitive or socio-emotional processes in the remaining individuals. While superior face-processing skills were restricted to face memory in three of the SRs, enhancements to facial identity perception were observed in the others. Notably, five of the six participants showed at least some evidence of enhanced holistic processing. These findings indicate cognitive heterogeneity in the presentation of superior face recognition, and have implications for our theoretical understanding of the typical face-processing system and the identification of superior face-processing skills in applied settings. PMID:27344238

  19. Real-Time Measurement of Face Recognition in Rapid Serial Visual Presentation

    PubMed Central

    Touryan, Jon; Gibson, Laurie; Horne, James H.; Weber, Paul

    2011-01-01

    Event-related potentials (ERPs) have been used extensively to study the processes involved in recognition memory. In particular, the early familiarity component of recognition has been linked to the FN400 (mid-frontal negative deflection between 300 and 500 ms), whereas the recollection component has been linked to a later positive deflection over the parietal cortex (500–800 ms). In this study, we measured the ERPs elicited by faces with varying degrees of familiarity. Participants viewed a continuous sequence of faces with either low (novel faces), medium (celebrity faces), or high (faces of friends and family) familiarity while performing a separate face-identification task. We found that the level of familiarity was significantly correlated with the magnitude of both the early and late recognition components. Additionally, by using a single-trial classification technique, applied to the entire evoked response, we were able to distinguish between familiar and unfamiliar faces with a high degree of accuracy. The classification of high versus low familiarly resulted in areas under the curve of up to 0.99 for some participants. Interestingly, our classifier model (a linear discriminant function) was developed using a completely separate object categorization task on a different population of participants. PMID:21716601
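The single-trial classification step, a linear discriminant function applied to evoked-response features, can be sketched on synthetic data. The features, class means, and regularization below are illustrative assumptions, not the study's EEG recordings.

```python
import numpy as np

def fit_lda(X0, X1, reg=1e-3):
    """Two-class Fisher linear discriminant: w = Sw^-1 (mu1 - mu0),
    with a small ridge term for stability on high-dimensional
    single-trial features."""
    mu0, mu1 = X0.mean(0), X1.mean(0)
    Sw = np.cov(X0.T) + np.cov(X1.T) + reg * np.eye(X0.shape[1])
    w = np.linalg.solve(Sw, mu1 - mu0)
    b = -w @ (mu0 + mu1) / 2.0          # threshold midway between class means
    return w, b

rng = np.random.default_rng(4)
unfamiliar = rng.normal(0.0, 1.0, (200, 10))   # synthetic "low familiarity" trials
familiar = rng.normal(1.0, 1.0, (200, 10))     # shifted class mean
w, b = fit_lda(unfamiliar, familiar)
scores = np.r_[unfamiliar, familiar] @ w + b
labels = np.r_[np.zeros(200), np.ones(200)]
acc = ((scores > 0) == labels).mean()
print(acc > 0.8)   # True: the synthetic classes are well separated
```

The continuous `scores` also support a ROC analysis, which is how the paper reports areas under the curve of up to 0.99.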

  20. Subject-specific and pose-oriented facial features for face recognition across poses.

    PubMed

    Lee, Ping-Han; Hsu, Gee-Sern; Wang, Yun-Wen; Hung, Yi-Ping

    2012-10-01

    Most face recognition scenarios assume that frontal faces or mug shots are available for enrollment to the database, while faces of other poses are collected in the probe set. Given a face from the probe set, one needs to determine whether a match in the database exists. This is under the assumption that in forensic applications, most suspects have their mug shots available in the database, and face recognition aims at recognizing the suspects when their faces of various poses are captured by a surveillance camera. This paper considers a different scenario: given a face with multiple poses available, which may or may not include a mug shot, develop a method to recognize the face with poses different from those captured. That is, given two disjoint sets of poses of a face, one for enrollment and the other for recognition, this paper reports a method for handling such cases. The proposed method includes feature extraction and classification. For feature extraction, we first cluster the poses of each subject's face in the enrollment set into a few pose classes and then decompose the appearance of the face in each pose class using an Embedded Hidden Markov Model, which allows us to define a set of subject-specific and pose-oriented (SSPO) facial components for each subject. For classification, an Adaboost weighting scheme is used to fuse the component classifiers with SSPO component features. The proposed method is shown to outperform other approaches, including a component-based classifier with local facial features cropped manually, in an extensive performance evaluation study. PMID:22547457

  1. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    NASA Astrophysics Data System (ADS)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems in both static and real-time settings, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet-decomposition-based principal component analysis approach to face recognition. Principal component analysis is chosen over other algorithms for its relative simplicity, efficiency, and robustness. Face recognition here means identifying a person from facial features, and the approach resembles factor analysis in that it extracts the principal components of an image. Principal component analysis suffers from some drawbacks, mainly poor discriminatory power and, in particular, the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the images jointly in the spatial and frequency domains. The experimental results show that this face recognition method achieves a significant improvement in recognition rate as well as better computational efficiency.
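    The pipeline the abstract outlines, reducing each image with a wavelet decomposition and then applying PCA to the low-frequency subband, can be sketched as follows. This is an illustrative numpy-only version (the 2×2 block average is the LL subband of a one-level Haar transform), not the authors' MATLAB implementation:

```python
import numpy as np

def haar_ll(img):
    """One-level Haar LL subband: average each 2x2 block (coarse approximation)."""
    return (img[0::2, 0::2] + img[1::2, 0::2] +
            img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def pca_fit(X, k):
    """PCA via SVD on mean-centred rows; returns the mean and top-k components."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def recognize(gallery, labels, probe, k=10):
    """Project wavelet-reduced images onto k principal components,
    then classify the probe by nearest neighbour in the subspace."""
    feats = np.array([haar_ll(g).ravel() for g in gallery])
    mu, comps = pca_fit(feats, k)
    G = (feats - mu) @ comps.T
    p = (haar_ll(probe).ravel() - mu) @ comps.T
    return labels[np.argmin(np.linalg.norm(G - p, axis=1))]
```

    A real system would use a multi-level wavelet library (e.g. PyWavelets) and aligned, normalized face crops rather than raw arrays.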

  2. The Painful Face - Pain Expression Recognition Using Active Appearance Models.

    PubMed

    Ashraf, Ahmed Bilal; Lucey, Simon; Cohn, Jeffrey F; Chen, Tsuhan; Ambadar, Zara; Prkachin, Kenneth M; Solomon, Patricia E

    2009-10-01

    Pain is typically assessed by patient self-report. Self-reported pain, however, is difficult to interpret and may be impaired or in some circumstances (e.g., young children and the severely ill) not even possible. To circumvent these problems, behavioral scientists have identified reliable and valid facial indicators of pain. Hitherto, these methods have required manual measurement by highly skilled human observers. In this paper we explore an approach for automatically recognizing acute pain without the need for human observers. Specifically, our study was restricted to automatically detecting pain in adult patients with rotator cuff injuries. The system employed video input of the patients as they moved their affected and unaffected shoulder. Two types of ground truth were considered. Sequence-level ground truth consisted of Likert-type ratings by skilled observers. Frame-level ground truth was calculated from the presence/absence and intensity of facial actions previously associated with pain. Active appearance models (AAMs) were used to decouple shape and appearance in the digitized face images. Support vector machines (SVMs) were compared across several AAM-derived representations and ground-truth granularities. We explored two questions pertinent to the construction, design and development of automatic pain detection systems. First, at what level (i.e., sequence- or frame-level) should datasets be labeled in order to obtain satisfactory automatic pain detection performance? Second, how important is it, at both levels of labeling, that we non-rigidly register the face?

  3. Toward a unified model of face and object recognition in the human visual system

    PubMed Central

    Wallis, Guy

    2013-01-01

    Our understanding of the mechanisms and neural substrates underlying visual recognition has made considerable progress over the past 30 years. During this period, accumulating evidence has led many scientists to conclude that objects and faces are recognised in fundamentally distinct ways, and in fundamentally distinct cortical areas. In the psychological literature, in particular, this dissociation has led to a palpable disconnect between theories of how we process and represent the two classes of object. This paper follows a trend in part of the recognition literature to try to reconcile what we know about these two forms of recognition by considering the effects of learning. Taking a widely accepted, self-organizing model of object recognition, this paper explains how such a system is affected by repeated exposure to specific stimulus classes. In so doing, it explains how many aspects of recognition generally regarded as unusual to faces (holistic processing, configural processing, sensitivity to inversion, the other-race effect, the prototype effect, etc.) are emergent properties of category-specific learning within such a system. Overall, the paper describes how a single model of recognition learning can and does produce the seemingly very different types of representation associated with faces and objects. PMID:23966963

  5. Oxytocin eliminates the own-race bias in face recognition memory.

    PubMed

    Blandón-Gitlin, Iris; Pezdek, Kathy; Saldivar, Sesar; Steelman, Erin

    2014-09-11

    The neuropeptide Oxytocin influences a number of social behaviors, including the processing of faces. We examined whether Oxytocin facilitates the processing of out-group faces and reduces the own-race bias (ORB). The ORB is a robust phenomenon characterized by poorer recognition memory for other-race faces than for same-race faces. In Experiment 1, participants received intranasal solutions of Oxytocin or placebo prior to viewing White and Black faces. On a subsequent recognition test, whereas in the placebo condition same-race faces were better recognized than other-race faces, in the Oxytocin condition Black and White faces were equally well recognized, effectively eliminating the ORB. In Experiment 2, Oxytocin was administered after the study phase. The ORB emerged, and Oxytocin did not significantly reduce the effect. This study is the first to show that Oxytocin can enhance face memory of out-group members, and it underscores the importance of social encoding mechanisms underlying the own-race bias. This article is part of a Special Issue entitled Oxytocin and Social Behav.

  6. Are faces processed like words? A diagnostic test for recognition by parts.

    PubMed

    Martelli, Marialuisa; Majaj, Najib J; Pelli, Denis G

    2005-02-04

    Do we identify an object as a whole or by its parts? This simple question has been surprisingly hard to answer. It has been suggested that faces are recognized as wholes and words are recognized by parts. Here we answer the question by applying a test for crowding. In crowding, a target is harder to identify in the presence of nearby flankers. Previous work has described crowding between objects. We show that crowding also occurs between the parts of an object. Such internal crowding severely impairs perception, identification, and fMRI face-area activation. We apply a diagnostic test for crowding to a word and a face, and we find that the critical spacing of the parts required for recognition is proportional to distance from fixation and independent of size and kind. The critical spacing defines an isolation field around the target. Some objects can be recognized only when each part is isolated from the rest of the object by the critical spacing. In that case, recognition is by parts. Recognition is holistic if the observer can recognize the object even when the whole object fits within a critical spacing. Such an object has only one part. Multiple parts within an isolation field will crowd each other and spoil recognition. To assess the robustness of the crowding test, we manipulated familiarity through inversion and the face- and word-superiority effects. We find that threshold contrast for word and face identification is the product of two factors: familiarity and crowding. Familiarity increases sensitivity by a factor of x1.5, independent of eccentricity, while crowding attenuates sensitivity more and more as eccentricity increases. Our findings show that observers process words and faces in much the same way: The effects of familiarity and crowding do not distinguish between them. Words and faces are both recognized by parts, and their parts -- letters and facial features -- are recognized holistically. We propose that internal crowding be taken as the

  7. Recognition memory for distractor faces depends on attentional load at exposure.

    PubMed

    Jenkins, Rob; Lavie, Nilli; Driver, Jon

    2005-04-01

    Incidental recognition memory for faces previously exposed as task-irrelevant distractors was assessed as a function of the attentional load of an unrelated task performed on superimposed letter strings at exposure. In Experiment 1, subjects were told to ignore the faces and either to judge the color of the letters (low load) or to search for an angular target letter among other angular letters (high load). A surprise recognition memory test revealed that despite the irrelevance of all faces at exposure, those exposed under low-load conditions were later recognized, but those exposed under high-load conditions were not. Experiment 2 found a similar pattern when both the high- and low-load tasks required shape judgments for the letters but made differing attentional demands. Finally, Experiment 3 showed that high load in a nonface task can significantly reduce even immediate recognition of a fixated face from the preceding trial. These results demonstrate that load in a nonface domain (e.g., letter shape) can reduce face recognition, in accord with Lavie's load theory. In addition to their theoretical impact, these results may have practical implications for eyewitness testimony. PMID:16082812

  8. Face Recognition Using Sparse Representation-Based Classification on K-Nearest Subspace

    PubMed Central

    Mi, Jian-Xun; Liu, Jin-Xing

    2013-01-01

    The sparse representation-based classification (SRC) has been proven to be a robust face recognition method. However, its computational complexity is very high due to solving a complex ℓ1-minimization problem. To improve the calculation efficiency, we propose a novel face recognition method, called sparse representation-based classification on k-nearest subspace (SRC-KNS). Our method first exploits the distance between the test image and the subspace of each individual class to determine the nearest subspaces and then performs SRC on the selected classes. SRC-KNS greatly reduces the scale of the sparse representation problem, and the computation to determine the nearest subspaces is quite simple. Therefore, SRC-KNS has a much lower computational complexity than the original SRC. To better recognize occluded face images, we propose the modular SRC-KNS. For this modular method, face images are first partitioned into a number of blocks, and an indicator is proposed to remove the contaminated blocks and choose the nearest subspaces. Finally, SRC is used to classify the occluded test sample in the new feature space. Compared to the approach used in the original SRC work, our modular SRC-KNS can greatly reduce the computational load. A number of face recognition experiments show that our methods achieve at least a fivefold speed-up over the original SRC, while achieving comparable or even better recognition rates. PMID:23555671
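    The two stages of SRC-KNS, screening classes by subspace distance and then sparse-coding over the survivors, can be sketched in numpy as follows. This is an illustrative toy version: the ℓ1 step uses plain ISTA rather than the solver used in the paper, and each class is represented by a small matrix of training columns:

```python
import numpy as np

def class_residual(A_c, y):
    """Distance from y to the subspace spanned by class c's training columns."""
    x, *_ = np.linalg.lstsq(A_c, y, rcond=None)
    return np.linalg.norm(y - A_c @ x)

def ista(A, y, lam=0.01, iters=200):
    """Plain ISTA for the l1-regularised least-squares (sparse coding) step."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - y)) / L    # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

def src_kns(classes, y, k=2):
    """Keep the k classes whose subspaces are nearest to y, sparse-code y
    over their stacked columns, and return the class whose own coefficients
    give the smallest reconstruction residual."""
    near = sorted(classes, key=lambda c: class_residual(classes[c], y))[:k]
    A = np.hstack([classes[c] for c in near])
    x = ista(A, y)
    best, i = None, 0
    for c in near:
        n = classes[c].shape[1]
        r = np.linalg.norm(y - classes[c] @ x[i:i + n])
        if best is None or r < best[0]:
            best = (r, c)
        i += n
    return best[1]
```

    The screening step is the source of the speed-up: the expensive ℓ1 problem is solved over k classes instead of all of them.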

  9. Is that me or my twin? Lack of self-face recognition advantage in identical twins.

    PubMed

    Martini, Matteo; Bufalari, Ilaria; Stazi, Maria Antonietta; Aglioti, Salvatore Maria

    2015-01-01

    Despite the increasing interest in twin studies and the stunning amount of research on face recognition, the ability of adult identical twins to discriminate their own faces from those of their co-twins has been scarcely investigated. One's own face is the most distinctive feature of the bodily self, and people typically show a clear advantage in recognizing their own face even more than other very familiar identities. Given the very high level of resemblance of their faces, monozygotic twins represent a unique model for exploring self-face processing. Herein we examined the ability of monozygotic twins to distinguish their own face from the face of their co-twin and of a highly familiar individual. Results show that twins equally recognize their own face and their twin's face. This lack of self-face advantage was negatively predicted by how much they felt physically similar to their co-twin and by their anxious or avoidant attachment style. We speculate that in monozygotic twins, the visual representation of the self-face overlaps with that of the co-twin. Thus, to distinguish the self from the co-twin, monozygotic twins have to rely much more than control participants on the multisensory integration processes upon which the sense of bodily self is based. Moreover, in keeping with the notion that attachment style influences perception of self and significant others, we propose that the observed self/co-twin confusion may depend upon insecure attachment. PMID:25853249

  10. Is the Self Always Better than a Friend? Self-Face Recognition in Christians and Atheists

    PubMed Central

    Ma, Yina; Han, Shihui

    2012-01-01

    Early behavioral studies found that human adults responded faster to their own faces than faces of familiar others or strangers, a finding referred to as self-face advantage. Recent research suggests that the self-face advantage is mediated by implicit positive association with the self and is influenced by sociocultural experience. The current study investigated whether and how Christian belief and practice affect the processing of self-face in a Chinese population. Christian and Atheist participants were recruited for an implicit association test (IAT) in Experiment 1 and a face-owner identification task in Experiment 2. Experiment 1 found that atheists responded faster to self-face when it shared the same response key with positive compared to negative trait adjectives. This IAT effect, however, was significantly reduced in Christians. Experiment 2 found that atheists responded faster to self-face compared to a friend’s face, but this self-face advantage was significantly reduced in Christians. Hierarchical regression analyses further showed that the IAT effect positively predicted self-face advantage in atheists but not in Christians. Our findings suggest that Christian belief and practice may weaken implicit positive association with the self and thus decrease the advantage of the self over a friend during face recognition in the believers. PMID:22662231

  11. Differential outcomes training improves face recognition memory in children and in adults with Down syndrome.

    PubMed

    Esteban, Laura; Plaza, Victoria; López-Crespo, Ginesa; Vivas, Ana B; Estévez, Angeles F

    2014-06-01

    Previous studies have demonstrated that the differential outcomes procedure (DOP), which involves pairing a unique reward with a specific stimulus, enhances discriminative learning and memory performance in several populations. The present study aimed to further investigate whether this procedure would improve face recognition memory in 5- and 7-year-old children (Experiment 1) and adults with Down syndrome (Experiment 2). In a delayed matching-to-sample task, participants had to select the previously shown face (sample stimulus) among six alternative faces (comparison stimuli) at four different delays (1, 5, 10, or 15 s). Participants were tested in two conditions: differential outcomes, where each sample stimulus was paired with a specific outcome; and non-differential outcomes, where reinforcers were administered randomly. The results showed significantly better face recognition in the differential outcomes condition relative to the non-differential condition in both experiments. Implications for memory training programs and future research are discussed.

  12. Shades of the mirror effect: recognition of faces with and without sunglasses.

    PubMed

    Hockley, W E; Hemsworth, D H; Consoli, A

    1999-01-01

    A mirror effect was found for a stimulus manipulation introduced at test. When subjects studied a set of normal faces and then were tested with new and old faces that were normal or wearing sunglasses, the hit rate was higher and the false alarm rate was lower for normal faces. Hit rate differences were reflected in remember and sure recognition responses, whereas differences in false alarm rates were largely seen in know and unsure judgments. In contrast, when subjects studied faces wearing sunglasses, the hit rate was greater for test faces with sunglasses than for normal faces, but there was no difference in false alarm rates. These findings are problematic for single-factor theories of the mirror effect, but can be accommodated within a two-factor account.

  14. Contribution of Bodily and Gravitational Orientation Cues to Face and Letter Recognition.

    PubMed

    Barnett-Cowan, Michael; Snow, Jacqueline C; Culham, Jody C

    2015-01-01

    Sensory information provided by the vestibular system is crucial in cognitive processes such as the ability to recognize objects. The orientation at which objects are most easily recognized--the perceptual upright (PU)--is influenced by body orientation with respect to gravity as detected from the somatosensory and vestibular systems. To date, the influence of these sensory cues on the PU has been measured using a letter recognition task. Here we assessed whether gravitational influences on letter recognition also extend to human face recognition. Thirteen right-handed observers were positioned in four body orientations (upright, left-side-down, right-side-down, supine) and visually discriminated ambiguous characters ('p'-from-'d'; 'i'-from-'!') and ambiguous faces used in popular visual illusions ('young woman'-from-'old woman'; 'grinning man'-from-'frowning man') in a forced-choice paradigm. The two transition points (e.g., 'p-to-d' and 'd-to-p'; 'young woman-to-old woman' and 'old woman-to-young woman') were fit with a sigmoidal psychometric function and the average of these transitions was taken as the PU for each stimulus category. The results show that both faces and letters are more influenced by body orientation than gravity. However, faces are recognized best when closely aligned with body orientation, whereas letters are more influenced by gravity. Our results indicate that the brain does not utilize a common representation of upright that governs recognition of all object categories. Distinct areas of ventro-temporal cortex that represent faces and letters may weight bodily and gravitational cues differently--possibly to facilitate the specific demands of face and letter recognition. PMID:26595950
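    The fitting procedure, a sigmoid fit to each transition with the PU taken as the average of the two fitted transition points, can be sketched as follows. This is a hypothetical numpy-only grid-search fit for illustration, not the authors' analysis code:

```python
import numpy as np

def sigmoid(x, mu, s):
    """Sigmoidal psychometric function with 50% point mu and slope scale s."""
    return 1.0 / (1.0 + np.exp(-(x - mu) / s))

def fit_transition(angles, p_resp):
    """Grid-search least-squares fit of the sigmoid to response proportions
    measured at the given orientations; returns the 50% transition point mu."""
    mus = np.linspace(angles.min(), angles.max(), 361)
    slopes = np.linspace(1.0, 60.0, 60)
    best = min(((np.sum((sigmoid(angles, m, s) - p_resp) ** 2), m)
                for m in mus for s in slopes))
    return best[1]

def perceptual_upright(t1, t2):
    """PU as the average of the two fitted transition orientations
    (e.g. 'p-to-d' and 'd-to-p')."""
    return (t1 + t2) / 2.0
```

    A production analysis would use a proper optimizer and a binomial likelihood, but the estimate of the transition point is the same idea.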

  15. 3D face recognition under expressions, occlusions, and pose variations.

    PubMed

    Drira, Hassen; Ben Amor, Boulbaba; Srivastava, Anuj; Daoudi, Mohamed; Slama, Rim

    2013-09-01

    We propose a novel geometric framework for analyzing 3D faces, with the specific goals of comparing, matching, and averaging their shapes. Here we represent facial surfaces by radial curves emanating from the nose tips and use elastic shape analysis of these curves to develop a Riemannian framework for analyzing shapes of full facial surfaces. This representation, along with the elastic Riemannian metric, seems natural for measuring facial deformations and is robust to challenges such as large facial expressions (especially those with open mouths), large pose variations, missing parts, and partial occlusions due to glasses, hair, and so on. This framework is shown to be promising from both empirical and theoretical perspectives. In terms of the empirical evaluation, our results match or improve upon the state-of-the-art methods on three prominent databases: FRGCv2, GavabDB, and Bosphorus, each posing a different type of challenge. From a theoretical perspective, this framework allows for formal statistical inferences, such as the estimation of missing facial parts using PCA on tangent spaces and computing average shapes.
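    The curve-based representation can be illustrated with a toy numpy sketch: sample depth profiles along radial directions from the nose tip and compare two faces curve by curve. This substitutes a plain L2 distance for the paper's elastic Riemannian metric and assumes pre-aligned depth maps with a known nose-tip pixel:

```python
import numpy as np

def radial_curve(depth, tip, theta, n=40):
    """Sample the surface profile along one radial direction from the nose
    tip (nearest-pixel sampling; a real pipeline would interpolate)."""
    r = np.arange(n)
    ys = np.clip(np.round(tip[0] + r * np.sin(theta)).astype(int),
                 0, depth.shape[0] - 1)
    xs = np.clip(np.round(tip[1] + r * np.cos(theta)).astype(int),
                 0, depth.shape[1] - 1)
    return depth[ys, xs]

def face_distance(d1, d2, tip, n_curves=16):
    """Crude stand-in for the elastic metric: sum of L2 distances between
    corresponding radial curves of two aligned depth maps."""
    thetas = np.linspace(0, 2 * np.pi, n_curves, endpoint=False)
    return sum(np.linalg.norm(radial_curve(d1, tip, t) - radial_curve(d2, tip, t))
               for t in thetas)
```

    Because each curve is indexed by its angle from the nose tip, missing or occluded regions simply drop individual curves rather than invalidating the whole surface, which is what makes the representation robust to occlusion.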

  16. Neural mechanisms of context effects on face recognition: automatic binding and context shift decrements.

    PubMed

    Hayes, Scott M; Baena, Elsa; Truong, Trong-Kha; Cabeza, Roberto

    2010-11-01

    Although people do not normally try to remember associations between faces and physical contexts, these associations are established automatically, as indicated by the difficulty of recognizing familiar faces in different contexts ("butcher-on-the-bus" phenomenon). The present fMRI study investigated the automatic binding of faces and scenes. In the face-face (F-F) condition, faces were presented alone during both encoding and retrieval, whereas in the face/scene-face (FS-F) condition, they were presented overlaid on scenes during encoding but alone during retrieval (context change). Although participants were instructed to focus only on the faces during both encoding and retrieval, recognition performance was worse in the FS-F than in the F-F condition ("context shift decrement" [CSD]), confirming automatic face-scene binding during encoding. This binding was mediated by the hippocampus as indicated by greater subsequent memory effects (remembered > forgotten) in this region for the FS-F than the F-F condition. Scene memory was mediated by right parahippocampal cortex, which was reactivated during successful retrieval when the faces were associated with a scene during encoding (FS-F condition). Analyses using the CSD as a regressor yielded a clear hemispheric asymmetry in medial temporal lobe activity during encoding: Left hippocampal and parahippocampal activity was associated with a smaller CSD, indicating more flexible memory representations immune to context changes, whereas right hippocampal/rhinal activity was associated with a larger CSD, indicating less flexible representations sensitive to context change. Taken together, the results clarify the neural mechanisms of context effects on face recognition.

  17. Emotion recognition from expressions in face, voice, and body: the Multimodal Emotion Recognition Test (MERT).

    PubMed

    Bänziger, Tanja; Grandjean, Didier; Scherer, Klaus R

    2009-10-01

    Emotion recognition ability has been identified as a central component of emotional competence. We describe the development of an instrument that objectively measures this ability on the basis of actor portrayals of dynamic expressions of 10 emotions (2 variants each for 5 emotion families), operationalized as recognition accuracy in 4 presentation modes combining the visual and auditory sense modalities (audio/video, audio only, video only, still picture). Data from a large validation study, including construct validation using related tests (Profile of Nonverbal Sensitivity; Rosenthal, Hall, DiMatteo, Rogers, & Archer, 1979; Japanese and Caucasian Facial Expressions of Emotion; Biehl et al., 1997; Diagnostic Analysis of Nonverbal Accuracy; Nowicki & Duke, 1994; Emotion Recognition Index; Scherer & Scherer, 2008), are reported. The results show the utility of a test designed to measure both coarse and fine-grained emotion differentiation and modality-specific skills. Factor analysis of the data suggests 2 separate abilities, visual and auditory recognition, which seem to be largely independent of personality dispositions. PMID:19803591

  18. Emotional Faces in Context: Age Differences in Recognition Accuracy and Scanning Patterns

    PubMed Central

    Noh, Soo Rim; Isaacowitz, Derek M.

    2014-01-01

    While age-related declines in facial expression recognition are well documented, previous research relied mostly on isolated faces devoid of context. We investigated the effects of context on age differences in the recognition of facial emotions and in visual scanning patterns of emotional faces. While their eye movements were monitored, younger and older participants viewed facial expressions (i.e., anger, disgust) in contexts that were emotionally congruent, incongruent, or neutral to the facial expression to be identified. Both age groups had the highest recognition rates of facial expressions in the congruent context, followed by the neutral context, and recognition rates in the incongruent context were worst. These context effects were more pronounced for older adults. Compared to younger adults, older adults exhibited a greater benefit from congruent contextual information, regardless of facial expression. Context also influenced the pattern of visual scanning characteristics of emotional faces in a similar manner across age groups. In addition, older adults initially attended more to context overall. Our data highlight the importance of considering the role of context in understanding emotion recognition in adulthood. PMID:23163713

  19. Emotional face recognition deficit in amnestic patients with mild cognitive impairment: behavioral and electrophysiological evidence

    PubMed Central

    Yang, Linlin; Zhao, Xiaochuan; Wang, Lan; Yu, Lulu; Song, Mei; Wang, Xueyi

    2015-01-01

    Amnestic mild cognitive impairment (MCI) has been conceptualized as a transitional stage between healthy aging and Alzheimer’s disease. Thus, understanding the emotional face recognition deficit in patients with amnestic MCI could be useful in determining the progression of amnestic MCI. The purpose of this study was to investigate the features of emotional face processing in amnestic MCI by using event-related potentials (ERPs). Patients with amnestic MCI and healthy controls performed a face recognition task, giving old/new responses to previously studied and novel faces with different emotional messages as the stimulus material. Using the learning-recognition paradigm, the experiments were divided into two steps, i.e., a learning phase and a test phase. ERPs were analyzed on electroencephalographic recordings. The behavioral data indicated high emotion classification accuracy for patients with amnestic MCI and for healthy controls. The mean percentage of correct classifications was 81.19% for patients with amnestic MCI and 96.46% for controls. Our ERP data suggest that patients with amnestic MCI were still able to undertake personalizing processing for negative faces, but not for neutral or positive faces, in the early frontal processing stage. In the early time window, no differences in the frontal old/new effect were found between patients with amnestic MCI and normal controls. However, in the late time window, the three types of stimuli did not elicit any old/new parietal effects in patients with amnestic MCI, suggesting their recollection was impaired. This impairment may be closely associated with amnestic MCI disease. We conclude from our data that face recognition processing and emotional memory are impaired in patients with amnestic MCI. Such damage mainly occurred in the early coding stages. In addition, we found that patients with amnestic MCI had difficulty in the post-processing of positive and neutral facial emotions. PMID:26347065

  20. Cultural In-Group Advantage: Emotion Recognition in African American and European American Faces and Voices

    ERIC Educational Resources Information Center

    Wickline, Virginia B.; Bailey, Wendy; Nowicki, Stephen

    2009-01-01

    The authors explored whether there were in-group advantages in emotion recognition of faces and voices by culture or geographic region. Participants were 72 African American students (33 men, 39 women), 102 European American students (30 men, 72 women), 30 African international students (16 men, 14 women), and 30 European international students…

  1. A Smile Enhances 3-Month-Olds' Recognition of an Individual Face

    ERIC Educational Resources Information Center

    Turati, Chiara; Montirosso, Rosario; Brenna, Viola; Ferrara, Veronica; Borgatti, Renato

    2011-01-01

    Recent studies demonstrated that in adults and children recognition of face identity and facial expression mutually interact (Bate, Haslam, & Hodgson, 2009; Spangler, Schwarzer, Korell, & Maier-Karius, 2010). Here, using a familiarization paradigm, we explored the relation between these processes in early infancy, investigating whether 3-month-old…

  2. The prototype effect revisited: Evidence for an abstract feature model of face recognition.

    PubMed

    Wallis, Guy; Siebeck, Ulrike E; Swann, Kellie; Blanz, Volker; Bülthoff, Heinrich H

    2008-01-01

    Humans typically have a remarkable memory for faces. Nonetheless, in some cases they can be fooled. Experiments described in this paper provide new evidence for an effect in which observers falsely "recognize" a face that they have never seen before. The face is a chimera (prototype) built from parts extracted from previously viewed faces. It is known that faces of this kind can be confused with truly familiar faces, a result referred to as the prototype effect. However, recent studies have failed to find evidence for a full effect, one in which the prototype is regarded not only as familiar, but as more familiar than faces which have been seen before. This study sought to reinvestigate the effect. In a pair of experiments, evidence is reported for the full effect based on both an old/new discrimination task and a familiarity ranking task. The results are shown to be consistent with a recognition model in which faces are represented as combinations of reusable, abstract features. In a final experiment, novel predictions of the model are verified by comparing the size of the prototype effect for upright and upside-down faces. Despite the fundamentally piecewise nature of the model, an explanation is provided as to how it can also account for the sensitivity of observers to configural and holistic cues. This discussion is backed up with the use of an unsupervised network model. Overall, the paper describes how an abstract feature-based model can reconcile a range of results in the face recognition literature and, in turn, lessen currently perceived differences between the representation of faces and other objects. PMID:18484826

  3. An ERP investigation of the co-development of hemispheric lateralization of face and word recognition.

    PubMed

    Dundas, Eva M; Plaut, David C; Behrmann, Marlene

    2014-08-01

    The adult human brain would appear to have specialized and independent neural systems for the visual processing of words and faces. Extensive evidence has demonstrated greater selectivity for written words in the left over right hemisphere, and, conversely, greater selectivity for faces in the right over left hemisphere. This study examines the emergence of these complementary neural profiles, as well as the possible relationship between them. Using behavioral and neurophysiological measures, in adults, we observed the standard finding of greater accuracy and a larger N170 ERP component in the left over right hemisphere for words, and conversely, greater accuracy and a larger N170 in the right over the left hemisphere for faces. We also found that although children aged 7-12 years revealed the adult hemispheric pattern for words, they showed neither a behavioral nor a neural hemispheric superiority for faces. Of particular interest, the magnitude of their N170 for faces in the right hemisphere was related to that of the N170 for words in their left hemisphere. These findings suggest that the hemispheric organization of face recognition and of word recognition does not develop independently, and that word lateralization may precede and drive later face lateralization. A theoretical account for the findings, in which competition for visual representations unfolds over the course of development, is discussed.

  4. Speechreading and the Bruce-Young model of face recognition: early findings and recent developments.

    PubMed

    Campbell, Ruth

    2011-11-01

    In the context of face processing, the skill of processing speech from faces (speechreading) occupies a unique cognitive and neuropsychological niche. Neuropsychological dissociations in two cases (Campbell et al., 1986) suggested a very clear pattern: speechreading, but not face recognition, can be impaired by left-hemisphere damage, while face-recognition impairment consequent to right-hemisphere damage leaves speechreading unaffected. However, this story soon proved too simple, while neuroimaging techniques started to reveal further more detailed patterns. These patterns, moreover, were readily accommodated within the Bruce and Young (1986) model. Speechreading requires structural encoding of faces as faces, but further analysis of visible speech is supported by a network comprising several lateral temporal regions and inferior frontal regions. Posterior superior temporal regions play a significant role in speechreading natural speech, including audiovisual binding in hearing people. In deaf people, similar regions and circuits are implicated. While these detailed developments were not predicted by Bruce and Young, nevertheless, their model has stood the test of time, affording a structural framework for exploring speechreading in terms of face processing.

  5. A prescreener for 3D face recognition using radial symmetry and the Hausdorff fraction.

    SciTech Connect

    Koudelka, Melissa L.; Koch, Mark William; Russ, Trina Denise

    2005-04-01

    Face recognition systems require the ability to efficiently scan an existing database of faces to locate a match for a newly acquired face. The large number of faces in real world databases makes computationally intensive algorithms impractical for scanning entire databases. We propose the use of more efficient algorithms to 'prescreen' face databases, determining a limited set of likely matches that can be processed further to identify a match. We use both radial symmetry and shape to extract five features of interest on 3D range images of faces. These facial features determine a very small subset of discriminating points which serve as input to a prescreening algorithm based on a Hausdorff fraction. We show how to compute the Hausdorff fraction in linear O(n) time using a range image representation. Our feature extraction and prescreening algorithms are verified using the FRGC v1.0 3D face scan data. Results show 97% of the extracted facial features are within 10 mm or less of manually marked ground truth, and the prescreener has a rank 6 recognition rate of 100%.
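
    The linear-time idea can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the function name, grid resolution, and distance threshold are assumptions, and the point-to-surface distance is approximated by the z-difference at the matched grid cell. Because the template is stored as a range image on a regular grid, each probe point costs a single array lookup, giving O(n) overall.

```python
import numpy as np

def hausdorff_fraction(probe_pts, template_range, grid_res=1.0, tau=2.0):
    """Fraction of probe points lying within tau of the template surface.

    template_range: 2D array of z-values on a regular (x, y) grid, so each
    probe point needs only one lookup -- O(n) in the number of probe points.
    NaN marks grid cells with no template data.
    """
    h, w = template_range.shape
    # Map probe (x, y) coordinates to grid indices.
    ix = np.round(probe_pts[:, 0] / grid_res).astype(int)
    iy = np.round(probe_pts[:, 1] / grid_res).astype(int)
    inside = (ix >= 0) & (ix < w) & (iy >= 0) & (iy < h)
    z_template = np.full(len(probe_pts), np.nan)
    z_template[inside] = template_range[iy[inside], ix[inside]]
    # Distance approximated by the z-difference at the matched grid cell.
    # Comparisons with NaN are False, so out-of-grid points never count.
    close = np.abs(probe_pts[:, 2] - z_template) <= tau
    return np.count_nonzero(close) / len(probe_pts)
```

    A prescreener would compute this fraction between the query's discriminating points and every gallery face, keeping only the highest-scoring candidates for full matching.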

  6. Interhemispheric cooperation for face recognition but not for affective facial expressions.

    PubMed

    Schweinberger, Stefan R; Baird, Lyndsay M; Blümler, Margarethe; Kaufmann, Jürgen M; Mohr, Bettina

    2003-01-01

    Interhemispheric cooperation can be indicated by enhanced performance when stimuli are presented to both visual fields relative to one visual field alone. This "bilateral gain" is seen for words but not pseudowords in lexical decision tasks, and has been attributed to the operation of interhemispheric cell assemblies that exist only for meaningful words with acquired cortical representations. Recently, a bilateral gain has been reported for famous but not unfamiliar faces in a face recognition task [Neuropsychologia 40 (2002) 1841]. In Experiment 1 of the present paper, participants performed familiarity decisions for faces that were presented to the left (LVF), the right (RVF), or to both visual fields (BVF). An advantage for BVF relative to both LVF and RVF stimuli was seen in reaction times (RTs) to famous faces, but this bilateral advantage was absent for unfamiliar faces. In Experiment 2, participants classified the expression (happy or neutral) of unfamiliar faces. No bilateral advantage was seen for expressions, although a right hemisphere superiority was seen in terms of higher accuracy for LVF and BVF trials relative to the RVF. Recognition of famous faces (but not of facial expressions) requires access to acquired memory representations that may be instantiated via cortical cell assemblies, and it is suggested that interhemispheric cooperation depends on these acquired cortical representations.

  7. An ERP investigation of the co-development of hemispheric lateralization of face and word recognition

    PubMed Central

    Dundas, Eva M.; Plaut, David C.; Behrmann, Marlene

    2014-01-01

    The adult human brain would appear to have specialized and independent neural systems for the visual processing of words and faces. Extensive evidence has demonstrated greater selectivity for written words in the left over right hemisphere, and, conversely, greater selectivity for faces in the right over left hemisphere. This study examines the emergence of these complementary neural profiles, as well as the possible relationship between them. Using behavioral and neurophysiological measures, in adults, we observed the standard finding of greater accuracy and a larger N170 ERP component in the left over right hemisphere for words, and conversely, greater accuracy and a larger N170 in the right over the left hemisphere for faces. We also found that, although children aged 7-12 years revealed the adult hemispheric pattern for words, they showed neither a behavioral nor a neural hemispheric superiority for faces. Of particular interest, the magnitude of their N170 for faces in the right hemisphere was related to that of the N170 for words in their left hemisphere. These findings suggest that the hemispheric organization of face recognition and of word recognition do not develop independently, and that word lateralization may precede and drive later face lateralization. A theoretical account for the findings, in which competition for visual representations unfolds over the course of development, is discussed. PMID:24933662

  8. Speechreading and the Bruce-Young model of face recognition: early findings and recent developments.

    PubMed

    Campbell, Ruth

    2011-11-01

    In the context of face processing, the skill of processing speech from faces (speechreading) occupies a unique cognitive and neuropsychological niche. Neuropsychological dissociations in two cases (Campbell et al., 1986) suggested a very clear pattern: speechreading, but not face recognition, can be impaired by left-hemisphere damage, while face-recognition impairment consequent to right-hemisphere damage leaves speechreading unaffected. However, this story soon proved too simple, while neuroimaging techniques started to reveal further more detailed patterns. These patterns, moreover, were readily accommodated within the Bruce and Young (1986) model. Speechreading requires structural encoding of faces as faces, but further analysis of visible speech is supported by a network comprising several lateral temporal regions and inferior frontal regions. Posterior superior temporal regions play a significant role in speechreading natural speech, including audiovisual binding in hearing people. In deaf people, similar regions and circuits are implicated. While these detailed developments were not predicted by Bruce and Young, nevertheless, their model has stood the test of time, affording a structural framework for exploring speechreading in terms of face processing. PMID:21988379

  9. Accurate palm vein recognition based on wavelet scattering and spectral regression kernel discriminant analysis

    NASA Astrophysics Data System (ADS)

    Elnasir, Selma; Shamsuddin, Siti Mariyam; Farokhi, Sajad

    2015-01-01

    Palm vein recognition (PVR) is a promising new biometric that has been applied successfully as a method of access control by many organizations, and which has even further potential in the field of forensics. The palm vein pattern has highly discriminative features that are difficult to forge because of its subcutaneous position in the palm. Despite considerable progress, a few practical issues remain, and providing accurate palm vein readings is still an open problem in biometrics. We propose a robust and more accurate PVR method based on the combination of wavelet scattering (WS) with spectral regression kernel discriminant analysis (SRKDA). As the dimension of WS-generated features is quite large, SRKDA is required to reduce the extracted features to enhance the discrimination. The results, based on two public databases (the PolyU Hyperspectral Palmprint database and the PolyU Multispectral Palmprint database), show the high performance of the proposed scheme in comparison with state-of-the-art methods. The proposed approach scored a 99.44% identification rate and a 99.90% verification rate [equal error rate (EER)=0.1%] for the hyperspectral database and a 99.97% identification rate and a 99.98% verification rate (EER=0.019%) for the multispectral database.
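
    The core trick behind spectral regression discriminant analysis is replacing the eigen-decomposition of conventional discriminant analysis with an ordinary regression step. The sketch below is a deliberately simplified linear (not kernel) version under that assumption, with all names hypothetical: class-indicator responses stand in for the graph embedding, and a ridge regression learns the discriminative projection applied to the high-dimensional features.

```python
import numpy as np

def spectral_regression_projection(X, y, reg=1e-2):
    """Linear stand-in for SRKDA: regress class-indicator responses onto X.

    X: (n_samples, n_features) feature matrix (e.g. wavelet-scattering
    coefficients flattened per image); y: integer class labels.
    Returns a (n_features, n_classes) projection learned by ridge
    regression -- the 'regression instead of eigen-decomposition' idea.
    """
    classes = np.unique(y)
    # Centered class-indicator responses play the role of the embedding
    # that spectral regression would otherwise get from an eigenproblem.
    Y = (y[:, None] == classes[None, :]).astype(float)
    Y -= Y.mean(axis=0)
    Xc = X - X.mean(axis=0)
    A = Xc.T @ Xc + reg * np.eye(X.shape[1])
    return np.linalg.solve(A, Xc.T @ Y)

def nearest_class_mean(X_train, y_train, W, x_query):
    """Classify a projected query by its nearest projected class mean."""
    Z = X_train @ W
    zq = x_query @ W
    classes = np.unique(y_train)
    means = np.stack([Z[y_train == c].mean(axis=0) for c in classes])
    return classes[np.argmin(np.linalg.norm(means - zq, axis=1))]
```

    The kernel variant in the paper applies the same regression idea in a kernel-induced feature space; this sketch only conveys the structure of the pipeline (feature extraction, discriminative reduction, nearest-mean matching).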

  10. Participant sexual orientation matters: new evidence on the gender bias in face recognition.

    PubMed

    Steffens, Melanie C; Landmann, Sören; Mecklenbräuker, Silvia

    2013-01-01

    Research participants' sexual orientation is not consistently taken into account in experimental psychological research. We argue that it should be in any research related to participant or target gender. Corroborating this argument, an example study is presented on the gender bias in face recognition, the finding that women correctly recognize more female than male faces. In contrast, findings with male participants have been inconclusive. An online experiment (N = 1,147) was carried out, on purpose over-sampling lesbian and gay participants. Findings demonstrate that the pro-female gender bias in face recognition is modified by male participants' sexual orientation. Heterosexual women and lesbians as well as heterosexual men showed a pro-female gender bias in face recognition, whereas gay men showed a pro-male gender bias, consistent with the explanation that differences in face expertise develop congruent with interests. These results contribute to the growing evidence that participant sexual orientation can be used to distinguish between alternative theoretical explanations of given gender-correlated patterns of findings.

  11. Recognition of novel faces after single exposure is enhanced during pregnancy.

    PubMed

    Anderson, Marla V; Rutherford, M D

    2011-01-01

    Protective mechanisms in pregnancy include Nausea and Vomiting in Pregnancy (NVP) (Fessler, 2002; Flaxman and Sherman, 2000), increased sensitivity to health cues (Jones et al., 2005), and increased vigilance to out-group members (Navarrete, Fessler, and Eng, 2007). While common perception suggests that pregnancy results in decreased cognitive function, an adaptationist perspective might predict that some aspects of cognition would be enhanced during pregnancy if they help to protect the reproductive investment. We propose that a reallocation of cognitive resources from nonessential to critical areas engenders the cognitive decline observed in some studies. Here, we used a recognition task disguised as a health rating to determine whether pregnancy facilitates face recognition. We found that pregnant women were significantly better at recognizing faces and that this effect was particularly pronounced for own-race male faces. In human evolutionary history, and today, males present a significant threat to females. Thus, enhanced recognition of faces, and especially male faces, during pregnancy may serve a protective function.

  12. The time course of individual face recognition: A pattern analysis of ERP signals.

    PubMed

    Nemrodov, Dan; Niemeier, Matthias; Mok, Jenkin Ngo Yin; Nestor, Adrian

    2016-05-15

    An extensive body of work documents the time course of neural face processing in the human visual cortex. However, the majority of this work has focused on specific temporal landmarks, such as N170 and N250 components, derived through univariate analyses of EEG data. Here, we take on a broader evaluation of ERP signals related to individual face recognition as we attempt to move beyond the leading theoretical and methodological framework through the application of pattern analysis to ERP data. Specifically, we investigate the spatiotemporal profile of identity recognition across variation in emotional expression. To this end, we apply pattern classification to ERP signals both in time, for any single electrode, and in space, across multiple electrodes. Our results confirm the significance of traditional ERP components in face processing. At the same time though, they support the idea that the temporal profile of face recognition is incompletely described by such components. First, we show that signals associated with different facial identities can be discriminated from each other outside the scope of these components, as early as 70 ms following stimulus presentation. Next, electrodes associated with traditional ERP components as well as, critically, those not associated with such components are shown to contribute information to stimulus discriminability. Last, the levels of ERP-based pattern discrimination are found to correlate with recognition accuracy across subjects, confirming the relevance of these methods for bridging brain and behavior data. Altogether, the current results shed new light on the fine-grained time course of neural face processing and showcase the value of novel methods for pattern analysis for investigating fundamental aspects of visual recognition. PMID:26973169
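
    Time-resolved pattern classification of this kind trains and tests a classifier at every time sample, producing a decoding-accuracy time course. The toy sketch below uses a nearest-class-centroid rule rather than the authors' classifier, and all names are hypothetical; it only illustrates the per-timepoint decoding structure.

```python
import numpy as np

def timepoint_decoding(epochs, labels, train_idx, test_idx):
    """Per-timepoint pattern classification of ERP epochs.

    epochs: (n_trials, n_channels, n_times); labels: (n_trials,) identities.
    For each time sample, a nearest-class-centroid classifier is trained on
    the multi-channel voltage pattern and evaluated on held-out trials.
    """
    n_times = epochs.shape[2]
    classes = np.unique(labels)
    acc = np.empty(n_times)
    for t in range(n_times):
        Xtr, Xte = epochs[train_idx, :, t], epochs[test_idx, :, t]
        centroids = np.stack([Xtr[labels[train_idx] == c].mean(axis=0)
                              for c in classes])
        d = np.linalg.norm(Xte[:, None, :] - centroids[None, :, :], axis=2)
        pred = classes[np.argmin(d, axis=1)]
        acc[t] = np.mean(pred == labels[test_idx])
    return acc
```

    On synthetic data where the class difference only appears late in the epoch, the accuracy curve sits at chance early on and rises once the signal emerges, which is the pattern the onset-latency analyses in such studies look for.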

  13. A 2D range Hausdorff approach for 3D face recognition.

    SciTech Connect

    Koch, Mark William; Russ, Trina Denise; Little, Charles Quentin

    2005-04-01

    This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N^2) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.
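
    A minimal sketch of the range-image formulation, under stated assumptions: both faces are already aligned and resampled onto the same grid (the paper's initial-alignment requirement), per-pixel z-differences stand in for true surface distances, and the function name and quantile parameter are invented. Taking a quantile instead of the maximum gives the outlier robustness of the partial Hausdorff distance, and comparing same-index pixels is what brings the cost down to linear in image size.

```python
import numpy as np

def partial_directed_hausdorff(probe, template, frac=0.9):
    """Directed partial Hausdorff distance between two 2D range images.

    probe, template: z-values on the same regular grid (NaN = no data).
    Aligned grids let each probe pixel be compared with the template pixel
    at the same (row, col), so the cost is linear in image size rather
    than quadratic in point count. Using the frac-quantile of per-pixel
    distances (instead of the max) is robust to outliers and occlusion.
    """
    valid = ~np.isnan(probe) & ~np.isnan(template)
    d = np.abs(probe[valid] - template[valid])
    k = int(np.ceil(frac * d.size)) - 1
    return np.partition(d, k)[k]
```

    With frac=1.0 this reduces to the classic directed Hausdorff distance, which a single outlying pixel can dominate; frac below 1 discards the worst residuals.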

  14. A Cognitively-Motivated Framework for Partial Face Recognition in Unconstrained Scenarios

    PubMed Central

    Monteiro, João C.; Cardoso, Jaime S.

    2015-01-01

    Humans perform and rely on face recognition routinely and effortlessly throughout their daily lives. Multiple works in recent years have sought to replicate this process in a robust and automatic way. However, it is known that the performance of face recognition algorithms is severely compromised in non-ideal image acquisition scenarios. In an attempt to deal with conditions such as occlusion and heterogeneous illumination, we propose a new approach motivated by the global precedent hypothesis of the human brain's cognitive mechanisms of perception. An automatic modeling of SIFT keypoint descriptors using a Gaussian mixture model (GMM)-based universal background model method is proposed. A decision is then made in an innovative hierarchical sense, with holistic information gaining precedence over a more detailed local analysis. The algorithm was tested on the ORL, AR, and Extended Yale B face databases and presented state-of-the-art performance for a variety of experimental setups. PMID:25602266
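
    At the heart of a GMM universal-background-model approach is scoring a set of local descriptors under a pre-trained mixture: a query is evaluated against each enrolled identity's model and against the UBM, and the likelihoods decide. The sketch below only shows that scoring step, assuming fixed diagonal-covariance model parameters; the SIFT front end, MAP adaptation, and the hierarchical holistic-then-local decision described in the abstract are omitted, and all names are hypothetical.

```python
import numpy as np

def gmm_loglik(descriptors, weights, means, variances):
    """Average log-likelihood of local descriptors under a diagonal GMM.

    descriptors: (n, d) local features (e.g. SIFT keypoint descriptors);
    weights: (k,), means: (k, d), variances: (k, d) of a pre-trained model
    (the universal background model, or a client model derived from it).
    """
    diff = descriptors[:, None, :] - means[None, :, :]           # (n, k, d)
    log_comp = (np.log(weights)[None, :]
                - 0.5 * np.sum(np.log(2 * np.pi * variances), axis=1)[None, :]
                - 0.5 * np.sum(diff ** 2 / variances[None, :, :], axis=2))
    m = log_comp.max(axis=1, keepdims=True)                      # logsumexp
    return float(np.mean(m[:, 0] + np.log(np.sum(np.exp(log_comp - m), axis=1))))
```

    A verification decision would then compare the client-model score against the UBM score (a likelihood ratio) rather than using either score alone.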

  15. Early Maturity of Face Recognition: No Childhood Development of Holistic Processing, Novel Face Encoding, or Face-Space

    ERIC Educational Resources Information Center

    Crookes, Kate; McKone, Elinor

    2009-01-01

    Historically, it was believed the perceptual mechanisms involved in individuating faces developed only very slowly over the course of childhood, and that adult levels of expertise were not reached until well into adolescence. Over the last 10 years, there has been some erosion of this view by demonstrations that all adult-like behavioural…

  16. Own- and Other-Race Face Identity Recognition in Children: The Effects of Pose and Feature Composition

    ERIC Educational Resources Information Center

    Anzures, Gizelle; Kelly, David J.; Pascalis, Olivier; Quinn, Paul C.; Slater, Alan M.; de Viviés, Xavier; Lee, Kang

    2014-01-01

    We used a matching-to-sample task and manipulated facial pose and feature composition to examine the other-race effect (ORE) in face identity recognition between 5 and 10 years of age. Overall, the present findings provide a genuine measure of own- and other-race face identity recognition in children that is independent of photographic and image…

  17. Face Memory and Object Recognition in Children with High-Functioning Autism or Asperger Syndrome and in Their Parents

    ERIC Educational Resources Information Center

    Kuusikko-Gauffin, Sanna; Jansson-Verkasalo, Eira; Carter, Alice; Pollock-Wurman, Rachel; Jussila, Katja; Mattila, Marja-Leena; Rahko, Jukka; Ebeling, Hanna; Pauls, David; Moilanen, Irma

    2011-01-01

    Children with Autism Spectrum Disorders (ASDs) have been reported to have impairments in face recognition and face memory, but intact object recognition and object memory. Potential abnormalities in these fields at the family level of high-functioning children with ASD remain understudied despite the ever-mounting evidence that ASDs are genetic and…

  18. ERP Correlates of Target-Distracter Differentiation in Repeated Runs of a Continuous Recognition Task with Emotional and Neutral Faces

    ERIC Educational Resources Information Center

    Treese, Anne-Cecile; Johansson, Mikael; Lindgren, Magnus

    2010-01-01

    The emotional salience of faces has previously been shown to induce memory distortions in recognition memory tasks. This event-related potential (ERP) study used repeated runs of a continuous recognition task with emotional and neutral faces to investigate emotion-induced memory distortions. In the second and third runs, participants made more…

  19. Emotion Recognition in Faces and the Use of Visual Context in Young People with High-Functioning Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Wright, Barry; Clarke, Natalie; Jordan, Jo; Young, Andrew W.; Clarke, Paula; Miles, Jeremy; Nation, Kate; Clarke, Leesa; Williams, Christine

    2008-01-01

    We compared young people with high-functioning autism spectrum disorders (ASDs) with age, sex and IQ matched controls on emotion recognition of faces and pictorial context. Each participant completed two tests of emotion recognition. The first used Ekman series faces. The second used facial expressions in visual context. A control task involved…

  20. Recognizing the same face in different contexts: Testing within-person face recognition in typical development and in autism

    PubMed Central

    Neil, Louise; Cappagli, Giulia; Karaminis, Themelis; Jenkins, Rob; Pellicano, Elizabeth

    2016-01-01

    Unfamiliar face recognition follows a particularly protracted developmental trajectory and is more likely to be atypical in children with autism than those without autism. There is a paucity of research, however, examining the ability to recognize the same face across multiple naturally varying images. Here, we investigated within-person face recognition in children with and without autism. In Experiment 1, typically developing 6- and 7-year-olds, 8- and 9-year-olds, 10- and 11-year-olds, 12- to 14-year-olds, and adults were given 40 grayscale photographs of two distinct male identities (20 of each face taken at different ages, from different angles, and in different lighting conditions) and were asked to sort them by identity. Children mistook images of the same person as images of different people, subdividing each individual into many perceived identities. Younger children divided images into more perceived identities than adults and also made more misidentification errors (placing two different identities together in the same group) than older children and adults. In Experiment 2, we used the same procedure with 32 cognitively able children with autism. Autistic children reported a similar number of identities and made similar numbers of misidentification errors to a group of typical children of similar age and ability. Fine-grained analysis using matrices revealed marginal group differences in overall performance. We suggest that the immature performance in typical and autistic children could arise from problems extracting the perceptual commonalities from different images of the same person and building stable representations of facial identity. PMID:26615971

  1. Face recognition deficits in autism spectrum disorders are both domain specific and process specific.

    PubMed

    Weigelt, Sarah; Koldewyn, Kami; Kanwisher, Nancy

    2013-01-01

    Although many studies have reported face identity recognition deficits in autism spectrum disorders (ASD), two fundamental questions remain: 1) Is this deficit "process specific" for face memory in particular, or does it extend to perceptual discrimination of faces as well? And 2) Is the deficit "domain specific" for faces, or is it found more generally for other social or even nonsocial stimuli? The answers to these questions are important both for understanding the nature of autism and its developmental etiology, and for understanding the functional architecture of face processing in the typical brain. Here we show that children with ASD are impaired (compared to age and IQ-matched typical children) in face memory, but not face perception, demonstrating process specificity. Further, we find no deficit for either memory or perception of places or cars, indicating domain specificity. Importantly, we further showed deficits in both the perception and memory of bodies, suggesting that the relevant domain of deficit may be social rather than specifically facial. These results provide a more precise characterization of the cognitive phenotype of autism and further indicate a functional dissociation between face memory and face perception.

  2. Dissociable roles of internal feelings and face recognition ability in facial expression decoding.

    PubMed

    Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia

    2016-05-15

    The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding.

  3. Multimodal biometrics approach using face and ear recognition to overcome adverse effects of pose changes

    NASA Astrophysics Data System (ADS)

    Wang, Yu; He, Dejian; Yu, Chongchong; Jiang, Tongqiang; Liu, Zaiwen

    2012-10-01

    A personal identification method is proposed which uses face and ear together to overcome mass information loss resulting from pose changes. Several aspects are mainly considered: First, ears are at both sides of the face. Their physiological position is approximately orthogonal and their information is complementary to each other when the head pose changes. Therefore, fusing the face and ear is reasonable. Second, the texture feature is extracted using a uniform local binary pattern (ULBP) descriptor which is more compact. Third, Haar wavelet transform, block-based, and multiscale ideas are taken into account to further strengthen the extracted texture information. Finally, texture features of face and ear are fused using serial strategy, parallel strategy, and kernel canonical correlation analysis to further increase the recognition rate. Experimental results show that it is both fast and robust to use ULBP to extract texture features. Haar wavelet transform, block-based, and multiscale methods can effectively enhance texture information of the face or ear ULBP descriptor. Multimodal biometrics fusion about face and ear is feasible and effective. The recognition rates of the proposed approach outperform remarkably those of the classic principal component analysis (PCA), kernel PCA, or Gabor texture feature extraction method especially when sharp pose change happens.

  4. Facial deblur inference using subspace analysis for recognition of blurred faces.

    PubMed

    Nishiyama, Masashi; Hadid, Abdenour; Takeshima, Hidenori; Shotton, Jamie; Kozakaya, Tatsuo; Yamaguchi, Osamu

    2011-04-01

    This paper proposes a novel method for recognizing faces degraded by blur using deblurring of facial images. The main issue is how to infer a Point Spread Function (PSF) representing the process of blur on faces. Inferring a PSF from a single facial image is an ill-posed problem. Our method uses learned prior information derived from a training set of blurred faces to make the problem more tractable. We construct a feature space such that blurred faces degraded by the same PSF are similar to one another. We learn statistical models that represent prior knowledge of predefined PSF sets in this feature space. A query image of unknown blur is compared with each model and the closest one is selected for PSF inference. The query image is deblurred using the PSF corresponding to that model and is thus ready for recognition. Experiments on a large face database (FERET) artificially degraded by focus or motion blur show that our method substantially improves the recognition performance compared to existing methods. We also demonstrate improved performance on real blurred images on the FRGC 1.0 face database. Furthermore, we show and explain how combining the proposed facial deblur inference with the local phase quantization (LPQ) method can further enhance the performance.
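
    The model-selection step described (compare a query of unknown blur with each PSF set's learned model and pick the closest) can be caricatured with a nearest-model rule. Everything here is a deliberately crude stand-in: a single mean vector replaces the paper's learned statistical models, and the names are hypothetical.

```python
import numpy as np

def infer_psf_class(query_feat, class_means):
    """Pick the candidate PSF whose model best fits the query.

    class_means: dict mapping a PSF label to the mean feature vector of
    training faces blurred with that PSF. The query feature is compared
    with each model and the closest selected; the corresponding PSF would
    then be used to deblur the face before recognition.
    """
    labels = list(class_means)
    d = [np.linalg.norm(query_feat - class_means[k]) for k in labels]
    return labels[int(np.argmin(d))]
```

    The paper's contribution lies in constructing a feature space where images degraded by the same PSF cluster together, which is what makes such a nearest-model decision meaningful.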

  5. Recognition memory for faces and scenes in amnesia: dissociable roles of medial temporal lobe structures.

    PubMed

    Taylor, Karen J; Henson, Richard N A; Graham, Kim S

    2007-06-18

    The relative contributions of the hippocampus and the perirhinal cortex to recognition memory are currently the subject of intense debate. Whereas some authors propose that both structures play a similar role in recognition memory, others suggest that the hippocampus might mediate recollective and/or associative aspects of recognition memory, whereas the perirhinal cortex may mediate item memory. Here we investigate an alternative functional demarcation between these structures, following reports of stimulus-specific perceptual deficits in amnesics with medial temporal lobe (MTL) lesions. Using a novel recognition memory test for faces and scenes, participants with broad damage to MTL structures, which included the hippocampus and the perirhinal cortex, were impaired on both face and scene memory. By contrast, participants with damage limited to the hippocampus showed deficits only in memory for scenes. These findings imply that although both the hippocampus and surrounding cortex contribute to recognition memory, their respective roles can be distinguished according to the type of material to be remembered. This interaction between lesion site and stimulus category may explain some of the inconsistencies present in the literature.

  6. Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression

    PubMed Central

    Gao, Guangwei; Yang, Jian; Jing, Xiaoyuan; Huang, Pu; Hua, Juliang; Yue, Dong

    2016-01-01

    In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms. PMID:27525734
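    The nuclear norm the abstract builds on penalizes the sum of singular values of the reconstruction error image, which is small when the error is low-rank (as with a contiguous occlusion). Solvers for such regressions typically rely on singular value thresholding (SVT) as the proximal step. A minimal sketch of these two ingredients (the full regression and the multi-scale patch ensemble are not shown):

```python
import numpy as np

def nuclear_norm(E):
    """Sum of singular values; small when the error image E is low-rank."""
    return np.linalg.svd(E, compute_uv=False).sum()

def svt(E, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_*,
    the basic building block of nuclear norm-based solvers."""
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

    The multi-scale scheme then runs such a regression on patches at several sizes and learns weights to combine the per-scale outputs, rather than committing to one patch size.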

  8. Optimized face recognition algorithm using radial basis function neural networks and its practical applications.

    PubMed

    Yoo, Sung-Hoon; Oh, Sung-Kwun; Pedrycz, Witold

    2015-09-01

    In this study, we propose a hybrid method of face recognition by using face region information extracted from the detected face region. In the preprocessing part, we develop a hybrid approach based on the Active Shape Model (ASM) and the Principal Component Analysis (PCA) algorithm. At this step, we use a CCD (Charge Coupled Device) camera to acquire a facial image by using AdaBoost and then Histogram Equalization (HE) is employed to improve the quality of the image. ASM extracts the face contour and image shape to produce a personal profile. Then we use a PCA method to reduce dimensionality of face images. In the recognition part, we consider the improved Radial Basis Function Neural Networks (RBF NNs) to identify a unique pattern associated with each person. The proposed RBF NN architecture consists of three functional modules realizing the condition phase, the conclusion phase, and the inference phase completed with the help of fuzzy rules coming in the standard 'if-then' format. In the formation of the condition part of the fuzzy rules, the input space is partitioned with the use of Fuzzy C-Means (FCM) clustering. In the conclusion part of the fuzzy rules, the connections (weights) of the RBF NNs are represented by four kinds of polynomials such as constant, linear, quadratic, and reduced quadratic. The values of the coefficients are determined by running a gradient descent method. The output of the RBF NNs model is obtained by running a fuzzy inference method. The essential design parameters of the network (including learning rate, momentum coefficient and fuzzification coefficient used by the FCM) are optimized by means of Differential Evolution (DE). The proposed P-RBF NNs (Polynomial based RBF NNs) are applied to facial recognition and its performance is quantified from the viewpoint of the output performance and recognition rate. PMID:26163042
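    The Fuzzy C-Means step that partitions the input space for the condition part of the rules can be sketched as follows. The farthest-first center initialization is an illustrative choice, not the paper's; the membership and center updates are the standard FCM iterations with fuzzification coefficient m:

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100):
    """Fuzzy C-Means: returns cluster centers and a membership matrix U
    (n x c, rows sum to one). Centers start farthest-first for determinism."""
    centers = [X[0]]
    for _ in range(c - 1):  # farthest-first: pick the point farthest from chosen centers
        d = np.min([np.linalg.norm(X - ctr, axis=1) for ctr in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        # Membership update: u_ik proportional to d_ik^(-2/(m-1)).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
        # Center update: fuzzy-weighted means of the data.
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
    return centers, U
```

    In the paper's architecture, one fuzzy rule is attached to each FCM partition, and the rule conclusions (polynomial weights) and the fuzzification coefficient itself are then tuned, with Differential Evolution optimizing the design parameters.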

  10. Age-related differences in brain electrical activity during extended continuous face recognition in younger children, older children and adults.

    PubMed

    Van Strien, Jan W; Glimmerveen, Johanna C; Franken, Ingmar H A; Martens, Vanessa E G; de Bruin, Eveline A

    2011-09-01

    To examine the development of recognition memory in primary-school children, 36 healthy younger children (8-9 years old) and 36 healthy older children (11-12 years old) participated in an ERP study with an extended continuous face recognition task (Study 1). Each face of a series of 30 faces was shown randomly six times interspersed with distracter faces. The children were required to make old vs. new decisions. Older children responded faster than younger children, but younger children exhibited a steeper decrease in latencies across the five repetitions. Older children exhibited better accuracy for new faces, but there were no age differences in recognition accuracy for repeated faces. For the N2, N400 and late positive complex (LPC), we analyzed the old/new effects (repetition 1 vs. new presentation) and the extended repetition effects (repetitions 1 through 5). Compared to older children, younger children exhibited larger frontocentral N2 and N400 old/new effects. For extended face repetitions, negativity of the N2 and N400 decreased in a linear fashion in both age groups. For the LPC, an ERP component thought to reflect recollection, no significant old/new or extended repetition effects were found. Employing the same face recognition paradigm in 20 adults (Study 2), we found a significant N400 old/new effect at lateral frontal sites and a significant LPC repetition effect at parietal sites, with LPC amplitudes increasing linearly with the number of repetitions. This study clearly demonstrates differential developmental courses for the N400 and LPC pertaining to recognition memory for faces. It is concluded that face recognition in children is mediated by early and probably more automatic than conscious recognition processes. In adults, the LPC extended repetition effect indicates that adult face recognition memory is related to a conscious and graded recollection process rather than to an automatic recognition process.
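    The old/new effects analyzed in this study are, at their core, differences in mean ERP amplitude between old (repeated) and new trials within a component's time window. A toy sketch of that computation; the 300-500 ms window is an illustrative N400-range choice, not the study's exact analysis parameters:

```python
import numpy as np

def mean_amplitude(epochs, times, window):
    """Mean ERP amplitude: average over trials, then over a time window.
    epochs: (n_trials, n_samples) in microvolts; times: (n_samples,) in ms."""
    lo, hi = window
    mask = (times >= lo) & (times <= hi)
    return epochs.mean(axis=0)[mask].mean()

def old_new_effect(old_epochs, new_epochs, times, window=(300, 500)):
    """Old/new effect: mean amplitude for old trials minus new trials."""
    return (mean_amplitude(old_epochs, times, window)
            - mean_amplitude(new_epochs, times, window))
```

    Extended repetition effects generalize this by computing the windowed mean per repetition (1 through 5) and testing for a linear trend across repetitions.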

  11. Right perceptual bias and self-face recognition in individuals with congenital prosopagnosia.

    PubMed

    Malaspina, Manuela; Albonico, Andrea; Daini, Roberta

    2016-01-01

    The existence of a tendency in normal individuals to base judgments more on the right half of a facial stimulus, which falls in the observer's left visual field (left perceptual bias, LPB), has been demonstrated. However, less is known about whether this phenomenon exists in people with a face recognition impairment from birth, namely congenital prosopagnosics. In the current study, we investigated the presence of the LPB under face impairment conditions using chimeric stimuli and the most familiar face of all: the self-face. For this purpose we tested 10 participants with congenital prosopagnosia and 21 healthy controls on a face matching task with facial stimuli involving a spatial manipulation of the left and right hemi-faces of self-photos and photos of others. Even though the congenital prosopagnosics' performance was significantly lower than that of controls, both groups showed a consistent self-face advantage. Moreover, congenital prosopagnosics performed best when the right side of their face was presented, that is, a right perceptual bias, suggesting a differential strategy for self-recognition in these subjects. A possible explanation for this result is discussed.

  12. A family at risk: congenital prosopagnosia, poor face recognition and visuoperceptual deficits within one family.

    PubMed

    Johnen, Andreas; Schmukle, Stefan C; Hüttenbrink, Judith; Kischka, Claudia; Kennerknecht, Ingo; Dobel, Christian

    2014-05-01

    Congenital prosopagnosia (CP) describes a severe face processing impairment despite intact early vision and in the absence of overt brain damage. CP is assumed to be present from birth and often transmitted within families. Previous studies reported conflicting findings regarding associated deficits in nonface visuoperceptual tasks. However, diagnostic criteria for CP significantly differed between studies, impeding conclusions on the heterogeneity of the impairment. Following current suggestions for clinical diagnoses of CP, we administered standardized tests for face processing, a self-report questionnaire and general visual processing tests to an extended family (N=28), in which many members reported difficulties with face recognition. This allowed us to assess the degree of heterogeneity of the deficit within a large sample of suspected CPs of similar genetic and environmental background. (a) We found evidence for a severe face processing deficit but intact nonface visuoperceptual skills in three family members - a father and his two sons - who fulfilled conservative criteria for a CP diagnosis on standardized tests and a self-report questionnaire, thus corroborating findings of familial transmissions of CP. (b) Face processing performance of the remaining family members was also significantly below the mean of the general population, suggesting that face processing impairments are transmitted as a continuous trait rather than in a dichotomous all-or-nothing fashion. (c) Self-rating scores of face recognition showed acceptable correlations with standardized tests, suggesting this method as a viable screening procedure for CP diagnoses. (d) Finally, some family members revealed severe impairments in general visual processing and nonface visual memory tasks either in conjunction with face perception deficits or as an isolated impairment. This finding may indicate an elevated risk for more general visuoperceptual deficits in families with prosopagnosic members.

  13. Face recognition across non-uniform motion blur, illumination, and pose.

    PubMed

    Punnappurath, Abhijith; Rajagopalan, Ambasamudram Narayanan; Taheri, Sima; Chellappa, Rama; Seetharaman, Guna

    2015-07-01

    Existing methods for performing face recognition in the presence of blur are based on the convolution model and cannot handle non-uniform blurring situations that frequently arise from tilts and rotations in hand-held cameras. In this paper, we propose a methodology for face recognition in the presence of space-varying motion blur comprising arbitrarily shaped kernels. We model the blurred face as a convex combination of geometrically transformed instances of the focused gallery face, and show that the set of all images obtained by non-uniformly blurring a given image forms a convex set. We first propose a non-uniform blur-robust algorithm by making use of the assumption of a sparse camera trajectory in the camera motion space to build an energy function with an l1-norm constraint on the camera motion. The framework is then extended to handle illumination variations by exploiting the fact that the set of all images obtained from a face image by non-uniform blurring and changing the illumination forms a bi-convex set. Finally, we propose an elegant extension to also account for variations in pose. PMID:25775493
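    The convexity claim above can be illustrated with a toy model in which the geometric transforms are integer translations, a stand-in for the full pose-induced warps in the paper's model. A blurred image is a convex combination of transformed gallery instances, and a blend of two such images is again one:

```python
import numpy as np

def shift(img, dy, dx):
    """Integer translation (circular): a toy stand-in for the geometric
    transforms induced by camera poses in the space-varying blur model."""
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def synthesize_blur(gallery, poses, weights):
    """Blurred image as a convex combination of transformed gallery instances."""
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and abs(w.sum() - 1.0) < 1e-9, "weights must be convex"
    return sum(wk * shift(gallery, dy, dx) for wk, (dy, dx) in zip(w, poses))
```

    Averaging two blurred instances simply averages their weight vectors, which stay non-negative and sum to one, so the result is still in the set; this closure is what lets the paper search over blurs with convex optimization.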

  14. No differences in emotion recognition strategies in children with autism spectrum disorder: evidence from hybrid faces.

    PubMed

    Evers, Kris; Kerkhof, Inneke; Steyaert, Jean; Noens, Ilse; Wagemans, Johan

    2014-01-01

    Emotion recognition problems are frequently reported in individuals with an autism spectrum disorder (ASD). However, this research area is characterized by inconsistent findings, with atypical emotion processing strategies possibly contributing to existing contradictions. In addition, an attenuated saliency of the eyes region is often demonstrated in ASD during face identity processing. We wanted to compare reliance on mouth versus eyes information in children with and without ASD, using hybrid facial expressions. A group of six-to-eight-year-old boys with ASD and an age- and intelligence-matched typically developing (TD) group without intellectual disability performed an emotion labelling task with hybrid facial expressions. Five static expressions were used: one neutral expression and four emotional expressions, namely, anger, fear, happiness, and sadness. Hybrid faces were created, consisting of an emotional face half (upper or lower face region) with the other face half showing a neutral expression. Results showed no emotion recognition problem in ASD. Moreover, we provided evidence for the existence of top- and bottom-emotions in children: correct identification of expressions mainly depends on information in the eyes (so-called top-emotions: happiness) or in the mouth region (so-called bottom-emotions: sadness, anger, and fear). No stronger reliance on mouth information was found in children with ASD. PMID:24527213
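    The hybrid-face construction, one emotional face half (upper or lower) combined with a neutral half of the same identity, can be sketched directly on image arrays; the function name and the horizontal midline split are illustrative assumptions:

```python
import numpy as np

def hybrid_face(emotional, neutral, emotional_half="top"):
    """Hybrid expression: one face half (upper or lower) from the emotional
    image, the other half from the neutral image of the same identity."""
    assert emotional.shape == neutral.shape, "images must be aligned and same size"
    mid = emotional.shape[0] // 2
    out = neutral.copy()
    if emotional_half == "top":
        out[:mid] = emotional[:mid]   # emotional eyes region, neutral mouth
    else:
        out[mid:] = emotional[mid:]   # neutral eyes region, emotional mouth
    return out
```

    Comparing labelling accuracy for top-half versus bottom-half hybrids of the same emotion is what reveals whether recognition of that emotion depends on the eyes or the mouth region.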

  15. Amygdala Volume Predicts Inter-Individual Differences in Fearful Face Recognition

    PubMed Central

    Zhao, Ke; Yan, Wen-Jing; Chen, Yu-Hsin; Zuo, Xi-Nian; Fu, Xiaolan

    2013-01-01

    The present study investigates the relationship between inter-individual differences in fearful face recognition and amygdala volume. Thirty normal adults were recruited and each completed two identical facial expression recognition tests offline and two magnetic resonance imaging (MRI) scans. Linear regression indicated that the left amygdala volume negatively correlated with the accuracy of recognizing fearful facial expressions and positively correlated with the probability of misrecognizing fear as surprise. Further exploratory analyses revealed that this relationship did not exist for any other subcortical or cortical regions. Nor did such a relationship exist between the left amygdala volume and performance recognizing the other five facial expressions. These mind-brain associations highlight the importance of the amygdala in recognizing fearful faces and provide insights regarding inter-individual differences in sensitivity toward fear-relevant stimuli. PMID:24009767
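    The reported volume-accuracy association is a simple linear one. For reference, a minimal sketch of the correlation statistic behind such a result (a negative r corresponds to the finding that larger left amygdala volume went with lower accuracy); this is the standard formula, not the study's analysis code:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two variables."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return (xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym))
```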

  16. Membership-degree preserving discriminant analysis with applications to face recognition.

    PubMed

    Yang, Zhangjing; Liu, Chuancai; Huang, Pu; Qian, Jianjun

    2013-01-01

    In pattern recognition, feature extraction techniques have been widely employed to reduce the dimensionality of high-dimensional data. In this paper, we propose a novel feature extraction algorithm called membership-degree preserving discriminant analysis (MPDA), based on the Fisher criterion and fuzzy set theory, for face recognition. In the proposed algorithm, the membership degree of each sample to particular classes is first calculated by the fuzzy k-nearest neighbor (FKNN) algorithm to characterize the similarity between each sample and the class centers, and then the membership degree is incorporated into the definitions of the between-class scatter and the within-class scatter. The feature extraction criterion of maximizing the ratio of the between-class scatter to the within-class scatter is then applied. Experimental results on the ORL, Yale, and FERET face databases demonstrate the effectiveness of the proposed algorithm.
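    The FKNN membership computation that MPDA feeds into its scatter matrices can be sketched with the classic Keller et al. weighting, in which a sample keeps a 0.51 "core" membership in its own class and the remaining 0.49 is spread according to its k nearest neighbors' labels; this is the standard scheme, not necessarily the paper's exact variant:

```python
import numpy as np

def fknn_memberships(X, y, k=3):
    """Fuzzy k-NN membership degrees (Keller et al. weighting).
    Returns U of shape (n_samples, n_classes); each row sums to one."""
    classes = np.unique(y)
    U = np.zeros((len(X), len(classes)))
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                       # exclude the sample itself
        nbrs = y[np.argsort(d)[:k]]         # labels of the k nearest neighbors
        for j, c in enumerate(classes):
            frac = np.mean(nbrs == c)
            U[i, j] = 0.51 + 0.49 * frac if y[i] == c else 0.49 * frac
    return U
```

    These soft memberships then replace the hard 0/1 class indicators when accumulating the between-class and within-class scatter, so atypical samples near class boundaries contribute less to their nominal class.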

  17. The neural basis of self-face recognition after self-concept threat and comparison with important others.

    PubMed

    Guan, Lili; Qi, Mingming; Zhang, Qinglin; Yang, Juan

    2014-01-01

    The implicit positive association (IPA) theory attributed self-face advantage to the IPA with self-concept. Previous behavioral study has found that self-concept threat (SCT) could eliminate the self-advantage in face recognition over familiar-face, without taking levels of facial familiarity into account. The current event-related potential study aimed to investigate whether SCT could eliminate the self-face advantage over stranger-face. Fifteen participants completed a "self-friend" comparison task in which participants identified the face orientation of self-face and friend-face after SCT and non-self-