Science.gov

Sample records for 2dpca-based face recognition

  1. 2DPCA-based row-kNN distance computation for face recognition

    NASA Astrophysics Data System (ADS)

    Al-Arashi, Waled Hussein; Suandi, Shahrel Azmin

    2012-04-01

    Since two-dimensional principal component analysis (2DPCA) was introduced to face recognition, many 2D-based methods have been developed. However, less attention has been paid to classification methods based on the 2D image matrix. Considering that the feature extracted by 2DPCA is a matrix instead of a single vector as in PCA, a new distance measure is proposed which considers the rows of the feature matrix. Unlike previous methods, which depend on the columns or on the whole feature matrix, the proposed method is combined with the k-nearest neighbour classifier instead of the 1-nearest neighbour. Moreover, the proposed method alleviates the main drawback of 2DPCA-based algorithms compared to PCA-based algorithms, namely the increased number of coefficients. Experimental results on well-known face databases show that, as the number of training images per class increases, the accuracy of the proposed method also increases until it surpasses all compared methods in terms of accuracy and storage requirements.
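
    The row-wise distance idea can be illustrated with a small sketch. The following Python code is a minimal, hypothetical implementation assuming the standard 2DPCA projection (eigenvectors of the image covariance matrix) and a sum-of-row-distances measure fed to a k-nearest-neighbour vote; it is not the authors' exact formulation, and the random arrays merely stand in for face images.

```python
import numpy as np

def twodpca_projection(images, d):
    """Standard 2DPCA projection matrix: images has shape (M, m, n); returns W of shape (n, d)."""
    mean_img = images.mean(axis=0)
    centered = images - mean_img
    # Image covariance (scatter) matrix G of shape (n, n)
    G = np.einsum('kij,kil->jl', centered, centered) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:d]]

def row_distance(F1, F2):
    """Sum of Euclidean distances between corresponding rows of two feature matrices."""
    return np.linalg.norm(F1 - F2, axis=1).sum()

def knn_classify(train_feats, train_labels, test_feat, k=3):
    """k-nearest-neighbour vote using the row-wise distance."""
    dists = np.array([row_distance(test_feat, F) for F in train_feats])
    nearest = np.argsort(dists)[:k]
    votes = np.array(train_labels)[nearest]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]

# Toy usage with random arrays standing in for face images
rng = np.random.default_rng(0)
train_imgs = rng.random((20, 32, 32))
train_labels = [i % 4 for i in range(20)]
W = twodpca_projection(train_imgs, d=5)
train_feats = [img @ W for img in train_imgs]
test_feat = rng.random((32, 32)) @ W
print(knn_classify(train_feats, train_labels, test_feat, k=3))
```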

  2. Genetic specificity of face recognition

    PubMed Central

    Shakeshaft, Nicholas G.; Plomin, Robert

    2015-01-01

    Specific cognitive abilities in diverse domains are typically found to be highly heritable and substantially correlated with general cognitive ability (g), both phenotypically and genetically. Recent twin studies have found the ability to memorize and recognize faces to be an exception, being similarly heritable but phenotypically substantially uncorrelated both with g and with general object recognition. However, the genetic relationships between face recognition and other abilities (the extent to which they share a common genetic etiology) cannot be determined from phenotypic associations. In this, to our knowledge, first study of the genetic associations between face recognition and other domains, 2,000 18- and 19-year-old United Kingdom twins completed tests assessing their face recognition, object recognition, and general cognitive abilities. Results confirmed the substantial heritability of face recognition (61%), and multivariate genetic analyses found that most of this genetic influence is unique and not shared with other cognitive abilities. PMID:26417086

  3. A face recognition embedded system

    NASA Astrophysics Data System (ADS)

    Pun, Kwok Ho; Moon, Yiu Sang; Tsang, Chi Chiu; Chow, Chun Tak; Chan, Siu Man

    2005-03-01

    This paper presents an experimental study of the implementation of a face recognition system on embedded systems. To investigate the feasibility and practicality of real-time face recognition on such systems, a door access control system based on face recognition is built. Because of the limited computational power of embedded devices, a semi-automatic scheme for face detection and eye location is proposed to handle these computationally demanding tasks. It is found that achieving real-time performance requires optimization of the core face recognition module. As a result, extensive profiling is done to pinpoint the execution hotspots in the system, and optimizations are carried out. After careful precision analysis, all slow floating-point calculations are replaced with fixed-point versions. Experimental results show that real-time performance can be achieved without significant loss in recognition accuracy.
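
    The abstract mentions replacing slow floating-point calculations with fixed-point versions. As a rough illustration of that idea (not the authors' implementation), the sketch below converts a dot product, a typical inner loop of subspace-based recognizers, to Q15 fixed-point arithmetic; the format choice and helper names are assumptions.

```python
import numpy as np

Q = 15              # number of fractional bits (Q15 format)
SCALE = 1 << Q

def to_fixed(x):
    """Quantize floating-point values in [-1, 1) to integer Q15 representation."""
    return np.round(np.asarray(x) * SCALE).astype(np.int64)

def fixed_dot(a_q, b_q):
    """Dot product of two Q15 vectors; the raw product has 2*Q fractional bits,
    so shift right by Q to return to Q15."""
    return int(np.sum(a_q * b_q) >> Q)

def to_float(x_q):
    return x_q / SCALE

a = np.array([0.25, -0.5, 0.125])
b = np.array([0.5, 0.25, -0.75])
approx = to_float(fixed_dot(to_fixed(a), to_fixed(b)))
exact = float(np.dot(a, b))
print(approx, exact)   # the two values agree up to quantization error
```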

  4. Accuracy enhanced thermal face recognition

    NASA Astrophysics Data System (ADS)

    Lin, Chun-Fu; Lin, Sheng-Fuu

    2013-11-01

    Human face recognition has been researched extensively for the last three decades. Face recognition with thermal images has gradually attracted significant attention because environmental illumination does not affect recognition performance. However, the recognition performance of traditional thermal face recognizers is still insufficient for practical applications. This study presents a novel thermal face recognizer that employs not only thermal features but also critical facial geometric features, which are not influenced by hair style, to improve recognition performance. A three-layer back-propagation feed-forward neural network is applied as the classifier. Traditional thermal face recognizers only use indirect information about the topography of blood vessels, such as the thermogram, as features. To overcome this limitation, the proposed thermal face recognizer uses not only the indirect information but also direct information about the topography of blood vessels, which is unique to every human. Moreover, the recognition performance of the proposed thermal features does not decrease even if the hair over the frontal bone varies, the eyes blink, or the nose breathes. Experimental results show that the proposed features are significantly more effective than traditional thermal features and that the recognition performance of the thermal face recognizer is improved.
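
    The classifier named in the abstract is a three-layer back-propagation feed-forward network. A minimal sketch of such a classifier, using scikit-learn's MLPClassifier as a stand-in and random vectors in place of the paper's thermal and geometric features (which are not specified here), might look like this:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical feature vectors: thermal statistics concatenated with
# geometric measurements (the paper's exact features are not given here).
n_subjects, samples_per_subject, n_features = 10, 8, 24
X = rng.random((n_subjects * samples_per_subject, n_features))
y = np.repeat(np.arange(n_subjects), samples_per_subject)

# One hidden layer gives the input/hidden/output "three-layer" topology.
clf = MLPClassifier(hidden_layer_sizes=(32,), solver='adam',
                    max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:3]))
```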

  5. Sampling design for face recognition

    NASA Astrophysics Data System (ADS)

    Yan, Yanjun; Osadciw, Lisa A.

    2006-04-01

    A face recognition system consists of two integrated parts: the face recognition algorithm itself, and the classifier and features the algorithm derives from a data set. The face recognition algorithm plays a central role, but this paper does not aim to evaluate the algorithm; instead, it derives the best features for the algorithm from a specific database through sampling design of the training set, which directs how the sample should be collected and dictates the sample space. Sampling design can help exploit the full potential of the face recognition algorithm without an overhaul. Conventional statistical analyses usually assume some distribution in order to draw inferences, but design-based inference assumes neither a distribution of the data nor independence between the sample observations. The simulations illustrate that the systematic sampling scheme performs better than the simple random sampling scheme, and that systematic sampling is comparable in recognition performance to using all available training images. Meanwhile, the sampling schemes save system resources and alleviate the overfitting problem. However, post-stratification by sex is not shown to significantly improve recognition performance.
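
    The two sampling schemes compared in the abstract can be sketched as follows; this is a generic illustration of simple random versus systematic sampling of training-image indices, not the paper's experimental protocol.

```python
import numpy as np

def simple_random_sample(n_items, n_samples, rng):
    """Draw a simple random sample of indices without replacement."""
    return rng.choice(n_items, size=n_samples, replace=False)

def systematic_sample(n_items, n_samples, rng):
    """Draw a systematic sample: a random start, then every k-th item."""
    k = n_items // n_samples
    start = rng.integers(k)
    return np.arange(start, start + k * n_samples, k)

rng = np.random.default_rng(0)
n_images = 100    # e.g. images of one subject, ordered by capture time
print(sorted(simple_random_sample(n_images, 10, rng)))
print(systematic_sample(n_images, 10, rng).tolist())
```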

  6. Face recognition for uncontrolled environments

    NASA Astrophysics Data System (ADS)

    Podilchuk, Christine; Hulbert, William; Flachsbart, Ralph; Barinov, Lev

    2010-04-01

    A new face recognition algorithm has been proposed which is robust to variations in pose, expression, illumination, and occlusions such as sunglasses. The algorithm is motivated by the Edit Distance used to determine the similarity between strings of one-dimensional data such as DNA and text. The key to this approach is how to extend the concept of an Edit Distance on one-dimensional data to two-dimensional image data. The algorithm is based on mapping one image into another and using the characteristics of the mapping to determine a two-dimensional Pictorial Edit Distance, or P-Edit Distance. We show how the properties of the mapping are analogous to the insertion, deletion, and substitution errors defined in an Edit Distance. This algorithm is particularly well suited for face recognition in uncontrolled environments such as stand-off and other surveillance applications. We describe an entire system designed for face recognition at a distance, including face detection, pose estimation, multi-sample fusion of video frames, and identification. Here we describe how the algorithm is used for face recognition at a distance, present some initial results, and describe future research directions.
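
    For reference, the classic one-dimensional Edit Distance that motivates the P-Edit Distance is the standard dynamic-programming recurrence sketched below; the two-dimensional pictorial extension described in the abstract is not reproduced here.

```python
def edit_distance(s, t):
    """Classic Levenshtein edit distance: the minimum number of insertions,
    deletions, and substitutions needed to turn s into t."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                      # delete all of s[:i]
    for j in range(n + 1):
        d[0][j] = j                      # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

print(edit_distance("GATTACA", "GCATGCA"))  # small DNA-style example
```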

  7. Covert face recognition without prosopagnosia.

    PubMed

    Ellis, H D; Young, A W; Koenken, G

    1993-01-01

    An experiment is reported where subjects were presented with familiar or unfamiliar faces for supraliminal durations or for durations individually assessed as being below the threshold for recognition. Their electrodermal responses to each stimulus were measured and the results showed higher peak amplitude skin conductance responses for familiar than for unfamiliar faces, regardless of whether they had been displayed supraliminally or subliminally. A parallel is drawn between elevated skin conductance responses to subliminal stimuli and findings of covert recognition of familiar faces in prosopagnosic patients, some of whom show increased electrodermal activity (EDA) to previously familiar faces. The supraliminal presentation data also served to replicate similar work by Tranel et al (1985). The results are considered alongside other data indicating the relation between non-conscious, "automatic" aspects of normal visual information processing and abilities which can be found to be preserved without awareness after brain injury. PMID:24487927

  8. Bayesian Face Recognition and Perceptual Narrowing in Face-Space

    ERIC Educational Resources Information Center

    Balas, Benjamin

    2012-01-01

    During the first year of life, infants' face recognition abilities are subject to "perceptual narrowing", the end result of which is that observers lose the ability to distinguish previously discriminable faces (e.g. other-race faces) from one another. Perceptual narrowing has been reported for faces of different species and different races, in…

  9. Neural microgenesis of personally familiar face recognition

    PubMed Central

    Ramon, Meike; Vizioli, Luca; Liu-Shuang, Joan; Rossion, Bruno

    2015-01-01

    Despite a wealth of information provided by neuroimaging research, the neural basis of familiar face recognition in humans remains largely unknown. Here, we isolated the discriminative neural responses to unfamiliar and familiar faces by slowly increasing visual information (i.e., high-spatial frequencies) to progressively reveal faces of unfamiliar or personally familiar individuals. Activation in ventral occipitotemporal face-preferential regions increased with visual information, independently of long-term face familiarity. In contrast, medial temporal lobe structures (perirhinal cortex, amygdala, hippocampus) and anterior inferior temporal cortex responded abruptly when sufficient information for familiar face recognition was accumulated. These observations suggest that following detailed analysis of individual faces in core posterior areas of the face-processing network, familiar face recognition emerges categorically in medial temporal and anterior regions of the extended cortical face network. PMID:26283361

  10. Face Recognition Increases during Saccade Preparation

    PubMed Central

    Lin, Hai; Rizak, Joshua D.; Ma, Yuan-ye; Yang, Shang-chuan; Chen, Lin; Hu, Xin-tian

    2014-01-01

    Face perception is integral to the human perception system as it underlies social interactions. Saccadic eye movements are frequently made to bring interesting visual information, such as faces, onto the fovea for detailed processing. Just before eye movement onset, the processing of some basic features of an object, such as its orientation, improves at the saccade landing point. Interestingly, there is also evidence indicating that faces are processed in early visual processing stages similarly to basic features. However, it is not known whether this early enhancement of processing includes face recognition. In this study, three experiments were performed to map the timing of face presentation to the beginning of the eye movement in order to evaluate pre-saccadic face recognition. Faces were found to be processed similarly to simple objects immediately prior to saccadic movements. Starting ∼120 ms before a saccade to a target face, face recognition gradually improved and the critical spacing of crowding decreased as saccade onset approached, independent of whether or not the face was surrounded by other faces. These results suggest that an upcoming saccade prepares the visual system for new information about faces at the saccade landing site and may reduce the background in a crowd to target the intended face. This indicates an important role of pre-saccadic eye movement signals in human face recognition. PMID:24671174

  11. Partial face recognition: alignment-free approach.

    PubMed

    Liao, Shengcai; Jain, Anil K; Li, Stan Z

    2013-05-01

    Numerous methods have been developed for holistic face recognition with impressive performance. However, few studies have tackled how to recognize an arbitrary patch of a face image. Partial faces frequently appear in unconstrained scenarios, with images captured by surveillance cameras or handheld devices (e.g., mobile phones) in particular. In this paper, we propose a general partial face recognition approach that does not require face alignment by eye coordinates or any other fiducial points. We develop an alignment-free face representation method based on Multi-Keypoint Descriptors (MKD), where the descriptor size of a face is determined by the actual content of the image. In this way, any probe face image, holistic or partial, can be sparsely represented by a large dictionary of gallery descriptors. A new keypoint descriptor called Gabor Ternary Pattern (GTP) is also developed for robust and discriminative face recognition. Experimental results are reported on four public domain face databases (FRGCv2.0, AR, LFW, and PubFig) under both the open-set identification and verification scenarios. Comparisons with two leading commercial face recognition SDKs (PittPatt and FaceVACS) and two baseline algorithms (PCA+LDA and LBP) show that the proposed method, overall, is superior in recognizing both holistic and partial faces without requiring alignment. PMID:23520259
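
    The alignment-free idea, representing a face by however many keypoint descriptors its content yields and matching a holistic or partial probe against gallery descriptors, can be sketched with off-the-shelf components. The code below uses OpenCV's ORB descriptors and brute-force ratio-test matching as stand-ins for the paper's Gabor Ternary Pattern descriptors and sparse representation; the synthetic images are placeholders.

```python
import cv2
import numpy as np

def describe(image):
    """Detect keypoints and compute ORB descriptors; the descriptor count
    depends on image content, so a partial face simply yields fewer of them."""
    orb = cv2.ORB_create(nfeatures=500)
    _, descriptors = orb.detectAndCompute(image, None)
    return descriptors

def match_score(probe_desc, gallery_desc, ratio=0.75):
    """Count ratio-test matches between probe descriptors and one gallery image."""
    if probe_desc is None or gallery_desc is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(probe_desc, gallery_desc, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)

# Synthetic placeholder images; real use would load grayscale face images.
rng = np.random.default_rng(0)
probe = (rng.random((128, 96)) * 255).astype(np.uint8)       # partial face patch
gallery = {"subject_a": (rng.random((128, 128)) * 255).astype(np.uint8),
           "subject_b": (rng.random((128, 128)) * 255).astype(np.uint8)}

probe_desc = describe(probe)
scores = {name: match_score(probe_desc, describe(img)) for name, img in gallery.items()}
print(max(scores, key=scores.get))
```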

  12. Face photo-sketch synthesis and recognition.

    PubMed

    Wang, Xiaogang; Tang, Xiaoou

    2009-11-01

    In this paper, we propose a novel face photo-sketch synthesis and recognition method using a multiscale Markov Random Fields (MRF) model. Our system has three components: 1) given a face photo, synthesizing a sketch drawing; 2) given a face sketch drawing, synthesizing a photo; and 3) searching for face photos in the database based on a query sketch drawn by an artist. It has useful applications for both digital entertainment and law enforcement. We assume that faces to be studied are in a frontal pose, with normal lighting and neutral expression, and have no occlusions. To synthesize sketch/photo images, the face region is divided into overlapping patches for learning. The size of the patches decides the scale of local face structures to be learned. From a training set which contains photo-sketch pairs, the joint photo-sketch model is learned at multiple scales using a multiscale MRF model. By transforming a face photo to a sketch (or transforming a sketch to a photo), the difference between photos and sketches is significantly reduced, thus allowing effective matching between the two in face sketch recognition. After the photo-sketch transformation, in principle, most of the proposed face photo recognition approaches can be applied to face sketch recognition in a straightforward way. Extensive experiments are conducted on a face sketch database including 606 faces, which can be downloaded from our Web site (http://mmlab.ie.cuhk.edu.hk/facesketch.html). PMID:19762924

  13. Pose estimation and frontal face detection for face recognition

    NASA Astrophysics Data System (ADS)

    Lim, Eng Thiam; Wang, Jiangang; Xie, Wei; Ronda, Venkarteswarlu

    2005-05-01

    This paper proposes a pose estimation and frontal face detection algorithm for face recognition. Considering its application in a real-world environment, the algorithm has to be robust yet computationally efficient. The main contribution of this paper is efficient face localization, scale, and pose estimation using color models. Simulation results showed very low computational load compared to other face detection algorithms. The second contribution is the introduction of a low-dimensional statistical face geometric model. Compared to other statistical face models, the proposed method models the face geometry efficiently. The algorithm is demonstrated on a real-time system. The simulation results indicate that the proposed algorithm is computationally efficient.

  14. Extraversion predicts individual differences in face recognition.

    PubMed

    Li, Jingguang; Tian, Moqian; Fang, Huizhen; Xu, Miao; Li, He; Liu, Jia

    2010-07-01

    In daily life, one of the most common social tasks we perform is to recognize faces. However, the relation between face recognition ability and social activities is largely unknown. Here we ask whether individuals with better social skills are also better at recognizing faces. We found that extraverts who have better social skills correctly recognized more faces than introverts. However, this advantage was absent when extraverts were asked to recognize non-social stimuli (e.g., flowers). In particular, the underlying facet that makes extraverts better face recognizers is the gregariousness facet that measures the degree of inter-personal interaction. In addition, the link between extraversion and face recognition ability was independent of general cognitive abilities. These findings provide the first evidence that links face recognition ability to our daily activity in social communication, supporting the hypothesis that extraverts are better at decoding social information than introverts. PMID:20798810

  15. Face recognition motivated by human approach

    NASA Astrophysics Data System (ADS)

    Kamgar-Parsi, Behrooz; Lawson, Wallace Edgar; Kamgar-Parsi, Behzad

    2010-04-01

    We report the development of a face recognition system which operates in the same way as humans in that it is capable of recognizing a number of people, while rejecting everybody else as strangers. While humans do it routinely, a particularly challenging aspect of the problem of open-world face recognition has been the question of rejecting previously unseen faces as unfamiliar. Our approach can handle previously unseen faces; it is based on identifying and enclosing the region(s) in the human face space which belong to the target person(s).

  16. Face recognition system and method using face pattern words and face pattern bytes

    SciTech Connect

    Zheng, Yufeng

    2014-12-23

    The present invention provides a novel system and method for identifying individuals and for face recognition utilizing facial features for face identification. The system and method of the invention comprise creating facial features or face patterns called face pattern words and face pattern bytes for face identification. The invention also provides for pattern recognitions for identification other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals by utilizing computer software implemented by instructions on a computer or computer system and a computer readable medium containing instructions on a computer system for face recognition and identification.

  17. Recognition of Unfamiliar Talking Faces at Birth

    ERIC Educational Resources Information Center

    Coulon, Marion; Guellai, Bahia; Streri, Arlette

    2011-01-01

    Sai (2005) investigated the role of speech in newborns' recognition of their mothers' faces. Her results revealed that, when presented with both their mother's face and that of a stranger, newborns preferred looking at their mother only if she had previously talked to them. The present study attempted to extend these findings to any other faces.…

  18. Contextual Modulation of Biases in Face Recognition

    PubMed Central

    Felisberti, Fatima Maria; Pavey, Louisa

    2010-01-01

    Background The ability to recognize the faces of potential cooperators and cheaters is fundamental to social exchanges, given that cooperation for mutual benefit is expected. Studies addressing biases in face recognition have so far proved inconclusive, with reports of biases towards faces of cheaters, biases towards faces of cooperators, or no biases at all. This study attempts to uncover possible causes underlying such discrepancies. Methodology and Findings Four experiments were designed to investigate biases in face recognition during social exchanges when behavioral descriptors (prosocial, antisocial or neutral) embedded in different scenarios were tagged to faces during memorization. Face recognition, measured as accuracy and response latency, was tested with modified yes-no, forced-choice and recall tasks (N = 174). An enhanced recognition of faces tagged with prosocial descriptors was observed when the encoding scenario involved financial transactions and the rules of the social contract were not explicit (experiments 1 and 2). Such bias was eliminated or attenuated by making participants explicitly aware of “cooperative”, “cheating” and “neutral/indifferent” behaviors via a pre-test questionnaire and then adding such tags to behavioral descriptors (experiment 3). Further, in a social judgment scenario with descriptors of salient moral behaviors, recognition of antisocial and prosocial faces was similar, but significantly better than neutral faces (experiment 4). Conclusion The results highlight the relevance of descriptors and scenarios of social exchange in face recognition, when the frequency of prosocial and antisocial individuals in a group is similar. Recognition biases towards prosocial faces emerged when descriptors did not state the rules of a social contract or the moral status of a behavior, and they point to the existence of broad and flexible cognitive abilities finely tuned to minor changes in social context. PMID:20886086

  19. Real-time, face recognition technology

    SciTech Connect

    Brady, S.

    1995-11-01

    The Institute for Scientific Computing Research (ISCR) at Lawrence Livermore National Laboratory recently developed the real-time, face recognition technology KEN. KEN uses novel imaging devices such as silicon retinas developed at Caltech or off-the-shelf CCD cameras to acquire images of a face and to compare them to a database of known faces in a robust fashion. The KEN-Online project makes that recognition technology accessible through the World Wide Web (WWW), an internet service that has recently seen explosive growth. A WWW client can submit face images, add them to the database of known faces and submit other pictures that the system tries to recognize. KEN-Online serves to evaluate the recognition technology and grow a large face database. KEN-Online includes the use of public domain tools such as mSQL for its name-database and perl scripts to assist the uploading of images.

  20. Face Recognition in Humans and Machines

    NASA Astrophysics Data System (ADS)

    O'Toole, Alice; Tistarelli, Massimo

    The study of human face recognition by psychologists and neuroscientists has run parallel to the development of automatic face recognition technologies by computer scientists and engineers. In both cases, there are analogous steps of data acquisition, image processing, and the formation of representations that can support the complex and diverse tasks we accomplish with faces. These processes can be understood and compared in the context of their neural and computational implementations. In this chapter, we present the essential elements of face recognition by humans and machines, taking a perspective that spans psychological, neural, and computational approaches. From the human side, we overview the methods and techniques used in the neurobiology of face recognition, the underlying neural architecture of the system, the role of visual attention, and the nature of the representations that emerge. From the computational side, we discuss face recognition technologies and the strategies they use to overcome challenges to robust operation over viewing parameters. Finally, we conclude the chapter with a look at some recent studies that compare human and machine performance at face recognition.

  1. Gabor wavelet associative memory for face recognition.

    PubMed

    Zhang, Haihong; Zhang, Bailing; Huang, Weimin; Tian, Qi

    2005-01-01

    This letter describes a high-performance face recognition system that combines two recently proposed neural network models, namely the Gabor wavelet network (GWN) and the kernel associative memory (KAM), into a unified structure called the Gabor wavelet associative memory (GWAM). GWAM has superior representation capability inherited from GWN and consequently demonstrates much better recognition performance than KAM. Extensive experiments have been conducted to evaluate a GWAM-based recognition scheme using three popular face databases, i.e., the FERET database, the Olivetti-Oracle Research Lab (ORL) database, and the AR face database. The experimental results consistently show our scheme's superiority and demonstrate its very high performance, comparing favorably to some recent face recognition methods: it achieves 99.3% and 100% accuracy on the first two databases, respectively, and exhibits very robust performance on the last database against varying illumination conditions. PMID:15732406

  2. A novel thermal face recognition approach using face pattern words

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng

    2010-04-01

    A reliable thermal face recognition system can enhance national security applications such as prevention against terrorism, surveillance, monitoring and tracking, especially at nighttime. The system can be applied at airports, customs, or high-alert facilities (e.g., nuclear power plants) 24 hours a day. In this paper, we propose a novel face recognition approach utilizing thermal (long wave infrared) face images that can automatically identify a subject at both daytime and nighttime. With a properly acquired thermal image (as a query image) in the monitoring zone, the following processes are employed: normalization and denoising, face detection, face alignment, face masking, Gabor wavelet transform, face pattern words (FPWs) creation, and face identification by similarity measure (Hamming distance). If eyeglasses are present on a subject's face, an eyeglasses mask is automatically extracted from the query face image and then applied to all comparison FPWs (no further transforms). A high identification rate (97.44% with Top-1 match) has been achieved on our preliminary face dataset (of 39 subjects) with the proposed approach, regardless of operating time and glasses-wearing condition.
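
    Identification by Hamming distance over binary face pattern words, with an eyeglasses mask excluding unreliable bits, can be sketched generically as below; the code lengths and mask are hypothetical, and the bit strings stand in for the paper's Gabor-derived FPWs.

```python
import numpy as np

def masked_hamming(code_a, code_b, mask):
    """Fractional Hamming distance between two binary codes, counting only
    bits where the mask is True (e.g. excluding an eyeglasses region)."""
    valid = mask.astype(bool)
    if valid.sum() == 0:
        return 1.0
    return np.count_nonzero(code_a[valid] != code_b[valid]) / valid.sum()

rng = np.random.default_rng(0)
query_code = rng.integers(0, 2, size=2048, dtype=np.uint8)       # hypothetical FPW bits
gallery_codes = rng.integers(0, 2, size=(39, 2048), dtype=np.uint8)
mask = np.ones(2048, dtype=bool)
mask[:256] = False          # pretend the first bits fall under an eyeglasses mask

distances = [masked_hamming(query_code, g, mask) for g in gallery_codes]
print("Top-1 match: subject", int(np.argmin(distances)))
```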

  3. Face Recognition: Canonical Mechanisms at Multiple Timescales.

    PubMed

    Giese, Martin A

    2016-07-11

    Adaptation is ubiquitous in the nervous system, and many possible computational roles have been discussed. A new functional imaging study suggests that, in face recognition, the learning of 'norm faces' and adaptation resulting in perceptual after-effects depend on the same mechanism. PMID:27404241

  4. Face-space: A unifying concept in face recognition research.

    PubMed

    Valentine, Tim; Lewis, Michael B; Hills, Peter J

    2016-10-01

    The concept of a multidimensional psychological space, in which faces can be represented according to their perceived properties, is fundamental to the modern theorist in face processing. Yet the idea was not clearly expressed until 1991. The background that led to the development of face-space is explained, and its continuing influence on theories of face processing is discussed. Research that has explored the properties of the face-space and sought to understand caricature, including facial adaptation paradigms, is reviewed. Face-space as a theoretical framework for understanding the effect of ethnicity and the development of face recognition is evaluated. Finally, two applications of face-space in the forensic setting are discussed. From initially being presented as a model to explain distinctiveness, inversion, and the effect of ethnicity, face-space has become a central pillar in many aspects of face processing. It is currently being developed to help us understand adaptation effects with faces. While being in principle a simple concept, face-space has shaped, and continues to shape, our understanding of face perception. PMID:25427883

  5. Robust kernel collaborative representation for face recognition

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong

    2015-05-01

    One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to reflect the varieties of high-dimensional face images caused by illumination, facial expression, and posture. When the test sample is significantly different from the training samples of the same subject, the recognition performance is sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We consider that the virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. In order to further improve robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training samples can be used in our method. We use noised face images to obtain virtual face samples; the noise can be approximately viewed as a reflection of variations in illumination, facial expression, and posture. Our work offers a simple and feasible way to obtain virtual face samples, by adding Gaussian noise (or other types of noise) to the original training samples to obtain possible variations of those samples. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and kernel CRC.
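
    The virtual-sample idea, adding Gaussian noise to the original training images to mimic unseen variation, is simple to sketch; the noise level, array shapes, and helper name below are assumptions, and the kernel collaborative representation step itself is not shown.

```python
import numpy as np

def make_virtual_samples(X, n_virtual=1, noise_std=0.05, seed=0):
    """Generate virtual training samples by adding Gaussian noise to each
    original sample (a rough stand-in for unseen illumination, expression,
    and posture variation, as the abstract suggests)."""
    rng = np.random.default_rng(seed)
    virtual = [X]
    for _ in range(n_virtual):
        virtual.append(X + rng.normal(0.0, noise_std, size=X.shape))
    return np.vstack(virtual)

# Toy usage: 5 subjects, 3 vectorized face images each.
rng = np.random.default_rng(1)
X_train = rng.random((15, 1024))
y_train = np.repeat(np.arange(5), 3)
X_aug = make_virtual_samples(X_train, n_virtual=2)
y_aug = np.tile(y_train, 3)          # labels repeat for each noisy copy
print(X_train.shape, "->", X_aug.shape)
```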

  6. Video face recognition against a watch list

    NASA Astrophysics Data System (ADS)

    Abbas, Jehanzeb; Dagli, Charlie K.; Huang, Thomas S.

    2007-10-01

    Due to the large recent increase in video surveillance data, gathered in an effort to maintain high security at public places, we need more robust systems to analyze this data and make tasks like face recognition a realistic possibility in challenging environments. In this paper we explore a watch-list scenario where we use an appearance-based model to classify query faces from low-resolution videos as either watch-list or non-watch-list faces. We then use our simple yet powerful face recognition system to recognize the faces classified as watch-list faces, where the watch-list includes those people we are interested in recognizing. Our system uses simple feature machine algorithms from our previous work to match video faces against still images. To test our approach, we match video faces against a large database of still images obtained, in previous work in the field, from Yahoo News over a period of time. We do this matching in an efficient manner to arrive at a faster, nearly real-time system. This system can be incorporated into a larger surveillance system equipped with advanced algorithms for anomalous event detection and activity recognition. This is a step towards more secure and robust surveillance systems and efficient video data analysis.

  7. FaceID: A face detection and recognition system

    SciTech Connect

    Shah, M.B.; Rao, N.S.V.; Olman, V.; Uberbacher, E.C.; Mann, R.C.

    1996-12-31

    A face detection system that automatically locates faces in gray-level images is described. Also described is a system which matches a given face image with faces in a database. Face detection in an image is performed by template matching using templates derived from a selected set of normalized faces. Instead of using the original gray-level images, vertical gradient images were calculated and used to make the system more robust against variations in lighting conditions and skin color. Faces of different sizes are detected by processing the image at several scales. Further, a coarse-to-fine strategy is used to speed up the processing, and a combination of whole-face and face-component templates is used to ensure low false detection rates. The input to the face recognition system is a normalized vertical gradient image of a face, which is compared against a database using a set of pretrained feedforward neural networks with a winner-take-all fuser. The training is performed using an adaptation of the backpropagation algorithm. This system has been developed and tested using images from the FERET database and a set of images obtained from Rowley et al. and Sung and Poggio.
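
    The vertical gradient preprocessing mentioned in the abstract can be sketched as a simple row-difference operation; the function below is a generic illustration rather than the system's exact filter.

```python
import numpy as np

def vertical_gradient(image):
    """Vertical gradient of a grayscale image (difference along rows),
    used to reduce sensitivity to lighting conditions and skin tone
    before template matching."""
    img = image.astype(np.float64)
    grad = np.zeros_like(img)
    grad[1:, :] = img[1:, :] - img[:-1, :]
    return grad

rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)  # stand-in image
print(vertical_gradient(face).shape)
```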

  8. Pseudo-Gabor wavelet for face recognition

    NASA Astrophysics Data System (ADS)

    Xie, Xudong; Liu, Wentao; Lam, Kin-Man

    2013-04-01

    An efficient face-recognition algorithm is proposed, which not only possesses the advantages of linear subspace analysis approaches, such as low computational complexity, but also offers the high recognition performance of wavelet-based algorithms. Based on the linearity of the Gabor-wavelet transformation and some basic assumptions on face images, we can extract pseudo-Gabor features from face images without performing any complex Gabor-wavelet transformations. The computational complexity can therefore be reduced while a high recognition performance is still maintained by using the principal component analysis (PCA) method. The proposed algorithm is evaluated on the Yale database, the Caltech database, the ORL database, the AR database, and the Facial Recognition Technology database, and is compared with several different face recognition methods such as PCA, Gabor wavelets plus PCA, kernel PCA, locality preserving projection, and dual-tree complex wavelet transformation plus PCA. Experiments show that consistent and promising results are obtained.

  9. Gender recognition based on face geometric features

    NASA Astrophysics Data System (ADS)

    Xiao, Jie; Guo, Zhaoli; Cai, Chao

    2013-10-01

    Automatic gender recognition based on face images plays an important role in computer vision and machine vision. In this paper, a novel and simple gender recognition method based on face geometric features is proposed. The method consists of three steps. First, a pre-processing step provides standard face images for feature extraction. Second, an Active Shape Model (ASM) is used to extract geometric features from frontal face images. Third, an Adaboost classifier is chosen to separate the two classes (male and female). We tested the method on 2570 pictures (1420 males and 1150 females) downloaded from the internet, and encouraging results were obtained. The comparison between the proposed geometric-feature-based method and a full-facial-image-based method demonstrates its superiority.

  10. Face Recognition by Independent Component Analysis

    PubMed Central

    Bartlett, Marian Stewart; Movellan, Javier R.; Sejnowski, Terrence J.

    2010-01-01

    A number of current face recognition algorithms use face representations found by unsupervised statistical methods. Typically these methods find a set of basis images and represent faces as a linear combination of those images. Principal component analysis (PCA) is a popular example of such methods. The basis images found by PCA depend only on pairwise relationships between pixels in the image database. In a task such as face recognition, in which important information may be contained in the high-order relationships among pixels, it seems reasonable to expect that better basis images may be found by methods sensitive to these high-order statistics. Independent component analysis (ICA), a generalization of PCA, is one such method. We used a version of ICA derived from the principle of optimal information transfer through sigmoidal neurons. ICA was performed on face images in the FERET database under two different architectures, one which treated the images as random variables and the pixels as outcomes, and a second which treated the pixels as random variables and the images as outcomes. The first architecture found spatially local basis images for the faces. The second architecture produced a factorial face code. Both ICA representations were superior to representations based on PCA for recognizing faces across days and changes in expression. A classifier that combined the two ICA representations gave the best performance. PMID:18244540
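
    A hedged sketch of the second architecture (images as observations, pixels as variables) using scikit-learn's FastICA is shown below; note the original work used an InfoMax-style ICA rather than FastICA, and the random matrix stands in for vectorized FERET images.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
# Stand-in data: 40 vectorized face images of 32x32 = 1024 pixels each.
X = rng.random((40, 1024))

# Architecture II style: images are the observations and pixels are the
# random variables, yielding a factorial code (one coefficient vector per face).
ica = FastICA(n_components=20, max_iter=1000, random_state=0)
codes = ica.fit_transform(X)        # shape (40, 20): ICA coefficients per face
basis_images = ica.mixing_.T        # rows can be reshaped to 32x32 basis images

# Architecture I style would instead apply ICA to X.T (images as variables,
# pixels as observations), producing spatially local basis images.
print(codes.shape, basis_images.shape)
```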

  11. Recognition Memory for Male and Female Faces.

    ERIC Educational Resources Information Center

    Yarmey, A. Daniel

    Sex differences in memory for human faces is reviewed. It is found that research evidence to date is not conclusive, but where differences exist they favor female superiority over males in facial memory. In particular, evidence is cited to suggest that females are reliably superior to males in their recognition memory for other females. This is…

  12. Face recognition using transform domain texture features

    NASA Astrophysics Data System (ADS)

    Rangaswamy, Y.; S K, Ramya; Raja, K. B.; K. R., Venugopal; Patnaik, L. M.

    2013-12-01

    Face recognition is an efficient biometric approach to identifying a person. In this paper, we propose Face Recognition using Transform Domain Texture Features (FRTDTF). The face images are preprocessed and two sets of texture features are extracted. For the first feature set, the Discrete Wavelet Transform (DWT) is applied to the face image and only the high-frequency sub-band coefficients are considered, to extract edge information efficiently. The Dual-Tree Complex Wavelet Transform (DTCWT) is applied to the high-frequency DWT sub-bands to derive low- and high-frequency DTCWT coefficients. The texture features of the DTCWT coefficients are computed using the Overlapping Local Binary Pattern (OLBP) to generate feature set 1. For the second feature set, the DTCWT is applied to the preprocessed face image and all frequency sub-band coefficients are considered, to extract significant information and edge information from the face image. The texture features of the DTCWT matrix are computed using OLBP to generate feature set 2. The final feature set is the concatenation of feature sets 1 and 2. The Euclidean distance (ED) is used to compare test-image features with the features of face images in the database. It is observed that the performance parameter values are better for the proposed algorithm than for existing algorithms.
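
    A greatly simplified stand-in for this pipeline, one DWT level and a standard (non-overlapping) LBP histogram per high-frequency sub-band, followed by Euclidean-distance matching, is sketched below; it assumes the pywt and scikit-image packages and does not reproduce the paper's DTCWT or OLBP details.

```python
import numpy as np
import pywt
from skimage.feature import local_binary_pattern

def dwt_lbp_features(face, P=8, R=1):
    """Simplified texture feature: single-level DWT, then a uniform-LBP
    histogram over each high-frequency sub-band (a rough stand-in for the
    paper's DTCWT + overlapping LBP pipeline)."""
    _, (cH, cV, cD) = pywt.dwt2(face.astype(np.float64), 'haar')
    feats = []
    for band in (cH, cV, cD):
        lbp = local_binary_pattern(band, P, R, method='uniform')
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)

rng = np.random.default_rng(0)
probe = rng.integers(0, 256, size=(64, 64))
gallery = [rng.integers(0, 256, size=(64, 64)) for _ in range(5)]
f_probe = dwt_lbp_features(probe)
dists = [np.linalg.norm(f_probe - dwt_lbp_features(g)) for g in gallery]
print("Closest gallery image:", int(np.argmin(dists)))
```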

  13. Face recognition with L1-norm subspaces

    NASA Astrophysics Data System (ADS)

    Maritato, Federica; Liu, Ying; Colonnese, Stefania; Pados, Dimitris A.

    2016-05-01

    We consider the problem of representing individual faces by maximum L1-norm projection subspaces calculated from available face-image ensembles. In contrast to conventional L2-norm subspaces, L1-norm subspaces are seen to offer significant robustness to image variations, disturbances, and rank selection. Face recognition becomes then the problem of associating a new unknown face image to the "closest," in some sense, L1 subspace in the database. In this work, we also introduce the concept of adaptively allocating the available number of principal components to different face image classes, subject to a given total number/budget of principal components. Experimental studies included in this paper illustrate and support the theoretical developments.

  14. Biometric watermarking based on face recognition

    NASA Astrophysics Data System (ADS)

    Satonaka, Takami

    2002-04-01

    We describe a biometric watermarking procedure based on object recognition for accurate facial signature authentication. An adaptive metric learning algorithm incorporating watermark and facial signatures is introduced to separate an arbitrary pattern of unknown intruder classes from that of known true-user ones. The verification rule for multiple signatures is formulated to map a facial signature pattern in overlapping classes to a separable, disjoint one. The watermark signature, which is uniquely assigned to each face image, reduces the uncertainty of modeling missing facial signature patterns of the unknown intruder classes. The proposed adaptive metric learning algorithm improves the recognition error rate from 2.4% to 0.07% on the ORL database, which is better than previously reported numbers using the Karhunen-Loeve transform, convolutional networks, and the hidden Markov model. The face recognition facilitates generation and distribution of the watermark key. The watermarking approach focuses on using salient facial features to make watermark signatures robust to various attacks and transformations. A coarse-to-fine approach is presented to integrate pyramidal face detection, geometry analysis, and face segmentation for watermarking. We conclude with an assessment of the strengths and weaknesses of the chosen approach as well as possible improvements of the biometric watermarking system.

  15. Double linear regression classification for face recognition

    NASA Astrophysics Data System (ADS)

    Feng, Qingxiang; Zhu, Qi; Tang, Lin-Lin; Pan, Jeng-Shyang

    2015-02-01

    A new classifier designed on the basis of the linear regression classification (LRC) classifier and the simple-fast representation-based classifier (SFR), named the double linear regression classification (DLRC) classifier, is proposed for image recognition in this paper. The traditional LRC classifier only uses the distance between test image vectors and the predicted image vectors of each class subspace for classification, and the SFR classifier uses the test image vectors and the nearest image vectors of the class subspace to classify the test sample. The DLRC classifier, however, computes the predicted image vectors of each class subspace and uses all the predicted vectors to construct a novel robust global space. Then, DLRC utilizes this global space to obtain new predicted vectors for each class for classification. A large number of experiments on the AR face database, the JAFFE face database, the Yale face database, the Extended YaleB face database, and the PIE face database are used to evaluate the performance of the proposed classifier. The experimental results show that the proposed classifier achieves a better recognition rate than the LRC classifier, the SFR classifier, and several other classifiers.
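
    As background for the proposed DLRC, the baseline LRC decision rule, representing the test vector in each class's training subspace by least squares and picking the class with the smallest reconstruction residual, can be sketched as follows; the DLRC's global-space construction is not shown.

```python
import numpy as np

def lrc_classify(class_matrices, y_test):
    """Baseline linear regression classification (LRC): represent the test
    vector as a linear combination of each class's training vectors and
    assign the class with the smallest reconstruction residual."""
    residuals = []
    for X_c in class_matrices:              # X_c: (n_pixels, n_samples_in_class)
        beta, *_ = np.linalg.lstsq(X_c, y_test, rcond=None)
        y_hat = X_c @ beta                   # predicted image vector for this class
        residuals.append(np.linalg.norm(y_test - y_hat))
    return int(np.argmin(residuals))

rng = np.random.default_rng(0)
n_pixels, n_per_class, n_classes = 400, 5, 3
class_matrices = [rng.random((n_pixels, n_per_class)) for _ in range(n_classes)]
# Build a test vector lying in class 1's subspace.
y_test = class_matrices[1] @ rng.random(n_per_class)
print(lrc_classify(class_matrices, y_test))   # expected: 1
```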

  16. Gender-Based Prototype Formation in Face Recognition

    ERIC Educational Resources Information Center

    Baudouin, Jean-Yves; Brochard, Renaud

    2011-01-01

    The role of gender categories in prototype formation during face recognition was investigated in 2 experiments. The participants were asked to learn individual faces and then to recognize them. During recognition, individual faces were mixed with faces, which were blended faces of same or different genders. The results of the 2 experiments showed…

  17. Super-resolution benefit for face recognition

    NASA Astrophysics Data System (ADS)

    Hu, Shuowen; Maschal, Robert; Young, S. Susan; Hong, Tsai Hong; Phillips, Jonathon P.

    2011-06-01

    Vast amounts of video footage are being continuously acquired by surveillance systems on private premises, commercial properties, government compounds, and military installations. Facial recognition systems have the potential to identify suspicious individuals on law enforcement watchlists, but accuracy is severely hampered by the low resolution of typical surveillance footage and the far distance of suspects from the cameras. To improve accuracy, super-resolution can enhance suspect details by utilizing a sequence of low resolution frames from the surveillance footage to reconstruct a higher resolution image for input into the facial recognition system. This work measures the improvement of face recognition with super-resolution in a realistic surveillance scenario. Low resolution and super-resolved query sets are generated using a video database at different eye-to-eye distances corresponding to different distances of subjects from the camera. Performance of a face recognition algorithm using the super-resolved and baseline query sets was calculated by matching against galleries consisting of frontal mug shots. The results show that super-resolution improves performance significantly at the examined mid and close ranges.

  18. Face recognition: a model specific ability.

    PubMed

    Wilmer, Jeremy B; Germine, Laura T; Nakayama, Ken

    2014-01-01

    In our everyday lives, we view it as a matter of course that different people are good at different things. It can be surprising, in this context, to learn that most of what is known about cognitive ability variation across individuals concerns the broadest of all cognitive abilities; an ability referred to as general intelligence, general mental ability, or just g. In contrast, our knowledge of specific abilities, those that correlate little with g, is severely constrained. Here, we draw upon our experience investigating an exceptionally specific ability, face recognition, to make the case that many specific abilities could easily have been missed. In making this case, we derive key insights from earlier false starts in the measurement of face recognition's variation across individuals, and we highlight the convergence of factors that enabled the recent discovery that this variation is specific. We propose that the case of face recognition ability illustrates a set of tools and perspectives that could accelerate fruitful work on specific cognitive abilities. By revealing relatively independent dimensions of human ability, such work would enhance our capacity to understand the uniqueness of individual minds. PMID:25346673

  19. Block error correction codes for face recognition

    NASA Astrophysics Data System (ADS)

    Hussein, Wafaa R.; Sellahewa, Harin; Jassim, Sabah A.

    2011-06-01

    Face recognition is one of the most desirable biometric-based authentication schemes for controlling access to sensitive information/locations and as a proof of identity to claim entitlement to services. The aim of this paper is to develop block-based mechanisms to reduce recognition errors that result from varying illumination conditions, with emphasis on using error correction codes. We investigate the modelling of error patterns in different parts/blocks of face images as a result of differences in illumination conditions, and we use appropriate error correction codes to deal with the corresponding distortion. We test the performance of our proposed schemes using the Extended Yale-B Face Database, which consists of face images belonging to 5 illumination subsets depending on the direction of the light source from the camera. In our experiments each image is divided into three horizontal regions as follows: region 1, three rows above the eyebrows, the eyebrows, and the eyes; region 2, the nose region; and region 3, the mouth and chin region. By estimating statistical parameters for errors in each region, we select suitable BCH error correction codes that yield improved recognition accuracy for that particular region in comparison to applying error correction codes to the entire image. The Discrete Wavelet Transform (DWT) to a depth of 3 is used for face feature extraction, followed by global/local binarization of coefficients in each subband. We demonstrate that the use of BCH improves separation of the distribution of Hamming distances of client-client samples from the distribution of Hamming distances of imposter-client samples.
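
    The region-wise treatment of the binarized features can be illustrated with a per-region Hamming distance; the sketch below omits the BCH encoding/decoding step and uses random bit maps with hypothetical region boundaries in place of the binarized DWT coefficients.

```python
import numpy as np

def region_hamming(code_a, code_b, row_bounds):
    """Hamming distance per horizontal region of a binarized feature map.
    row_bounds gives the row index where each region ends."""
    dists, start = [], 0
    for end in row_bounds:
        a, b = code_a[start:end], code_b[start:end]
        dists.append(int(np.count_nonzero(a != b)))
        start = end
    return dists

rng = np.random.default_rng(0)
# Stand-in for binarized wavelet coefficients of two face images (48 rows).
probe = rng.integers(0, 2, size=(48, 32), dtype=np.uint8)
gallery = rng.integers(0, 2, size=(48, 32), dtype=np.uint8)
# Region 1: brow/eye rows; region 2: nose rows; region 3: mouth/chin rows.
print(region_hamming(probe, gallery, row_bounds=(16, 32, 48)))
```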

  20. Face and body recognition show similar improvement during childhood.

    PubMed

    Bank, Samantha; Rhodes, Gillian; Read, Ainsley; Jeffery, Linda

    2015-09-01

    Adults are proficient in extracting identity cues from faces. This proficiency develops slowly during childhood, with performance not reaching adult levels until adolescence. Bodies are similar to faces in that they convey identity cues and rely on specialized perceptual mechanisms. However, it is currently unclear whether body recognition mirrors the slow development of face recognition during childhood. Recent evidence suggests that body recognition develops faster than face recognition. Here we measured body and face recognition in 6- and 10-year-old children and adults to determine whether these two skills show different amounts of improvement during childhood. We found no evidence that they do. Face and body recognition showed similar improvement with age, and children, like adults, were better at recognizing faces than bodies. These results suggest that the mechanisms of face and body memory mature at a similar rate or that improvement of more general cognitive and perceptual skills underlies improvement of both face and body recognition. PMID:25909913

  1. Face recognition: a model specific ability

    PubMed Central

    Wilmer, Jeremy B.; Germine, Laura T.; Nakayama, Ken

    2014-01-01

    In our everyday lives, we view it as a matter of course that different people are good at different things. It can be surprising, in this context, to learn that most of what is known about cognitive ability variation across individuals concerns the broadest of all cognitive abilities; an ability referred to as general intelligence, general mental ability, or just g. In contrast, our knowledge of specific abilities, those that correlate little with g, is severely constrained. Here, we draw upon our experience investigating an exceptionally specific ability, face recognition, to make the case that many specific abilities could easily have been missed. In making this case, we derive key insights from earlier false starts in the measurement of face recognition’s variation across individuals, and we highlight the convergence of factors that enabled the recent discovery that this variation is specific. We propose that the case of face recognition ability illustrates a set of tools and perspectives that could accelerate fruitful work on specific cognitive abilities. By revealing relatively independent dimensions of human ability, such work would enhance our capacity to understand the uniqueness of individual minds. PMID:25346673

  2. The Significance of Hair for Face Recognition

    PubMed Central

    Toseeb, Umar; Keeble, David R. T.; Bryant, Eleanor J.

    2012-01-01

    Hair is a feature of the head that frequently changes in different situations. For this reason much research in the area of face perception has employed stimuli without hair. To investigate the effect of the presence of hair we used faces with and without hair in a recognition task. Participants took part in trials in which the state of the hair either remained consistent (Same) or switched between learning and test (Switch). It was found that in the Same trials performance did not differ for stimuli presented with and without hair. This implies that there is sufficient information in the internal features of the face for optimal performance in this task. It was also found that performance in the Switch trials was substantially lower than in the Same trials. This drop in accuracy when the stimuli were switched suggests that faces are represented in a holistic manner and that manipulation of the hair causes disruption to this, with implications for the interpretation of some previous studies. PMID:22461902

  3. Individual differences in holistic processing predict face recognition ability.

    PubMed

    Wang, Ruosi; Li, Jingguang; Fang, Huizhen; Tian, Moqian; Liu, Jia

    2012-02-01

    Why do some people recognize faces easily and others frequently make mistakes in recognizing faces? Classic behavioral work has shown that faces are processed in a distinctive holistic manner that is unlike the processing of objects. In the study reported here, we investigated whether individual differences in holistic face processing have a significant influence on face recognition. We found that the magnitude of face-specific recognition accuracy correlated with the extent to which participants processed faces holistically, as indexed by the composite-face effect and the whole-part effect. This association is due to face-specific processing in particular, not to a more general aspect of cognitive processing, such as general intelligence or global attention. This finding provides constraints on computational models of face recognition and may elucidate mechanisms underlying cognitive disorders, such as prosopagnosia and autism, that are associated with deficits in face recognition. PMID:22222218

  4. Nonparametric discriminant analysis for face recognition.

    PubMed

    Li, Zhifeng; Lin, Dahua; Tang, Xiaoou

    2009-04-01

    In this paper, we develop a new framework for face recognition based on nonparametric discriminant analysis (NDA) and multi-classifier integration. Traditional LDA-based methods suffer from a fundamental limitation originating from the parametric nature of scatter matrices, which are based on the Gaussian distribution assumption. The performance of these methods notably degrades when the actual distribution is non-Gaussian. To address this problem, we propose a new formulation of scatter matrices to extend the two-class nonparametric discriminant analysis to multi-class cases. Then, we develop two improved multi-class NDA-based algorithms (NSA and NFA), each having two complementary variants based on the principal space and the null space of the intra-class scatter matrix, respectively. Compared to NSA, NFA is more effective in its use of classification boundary information. In order to exploit the complementary nature of the two kinds of NFA (PNFA and NNFA), we finally develop a dual NFA-based multi-classifier fusion framework, employing an overcomplete Gabor representation to boost recognition performance. We show the improvements of the developed algorithms over traditional subspace methods through comparative experiments on two challenging face databases, the Purdue AR database and the XM2VTS database. PMID:19229090

  5. Towards Robust Face Recognition from Video

    SciTech Connect

    Price, JR

    2001-10-18

    A novel, template-based method for face recognition is presented. The goals of the proposed method are to integrate multiple observations for improved robustness and to provide auxiliary confidence data for subsequent use in an automated video surveillance system. The proposed framework consists of a parallel system of classifiers, referred to as observers, where each observer is trained on one face region. The observer outputs are combined to yield the final recognition result. Three of the four confounding factors--expression, illumination, and decoration--are specifically addressed in this paper. The extension of the proposed approach to address the fourth confounding factor--pose--is straightforward and well supported in previous work. A further contribution of the proposed approach is the computation of a revealing confidence measure. This confidence measure will aid the subsequent application of the proposed method to video surveillance scenarios. Results are reported for a database comprising 676 images of 160 subjects under a variety of challenging circumstances. These results indicate significant performance improvements over previous methods and demonstrate the usefulness of the confidence data.

  6. Random-Profiles-Based 3D Face Recognition System

    PubMed Central

    Joongrock, Kim; Sunjin, Yu; Sangyoun, Lee

    2014-01-01

    In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of the acquired 3D face data, and it requires more computational power and storage capacity than 2D face recognition systems. In this paper, we present a nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50k 3D points and that the method achieves a reliable recognition rate under pose variation. PMID:24691101

  7. A review of recent advances in 3D face recognition

    NASA Astrophysics Data System (ADS)

    Luo, Jing; Geng, Shuze; Xiao, Zhaoxia; Xiu, Chunbo

    2015-03-01

    Face recognition based on machine vision has achieved great advances and been widely used in various fields. However, challenges remain, such as facial pose, variations in illumination, and facial expression. This paper therefore reviews recent advances in 3D face recognition. 3D face recognition approaches are categorized into four groups: minutiae approaches, space transform approaches, geometric feature approaches, and model-based approaches. Several typical approaches are compared in detail, including their feature extraction, recognition algorithms, and performance. Finally, the paper summarizes the challenges remaining in 3D face recognition and future trends. It aims to help researchers working on face recognition.

  8. [Face recognition in patients with autism spectrum disorders].

    PubMed

    Kita, Yosuke; Inagaki, Masumi

    2012-07-01

    The present study aimed to review previous research on face recognition in patients with autism spectrum disorders (ASD). Face recognition is a key question in the ASD research field because it can provide clues for elucidating the neural substrates responsible for the social impairment of these patients. Historically, behavioral studies have reported low performance and/or unique strategies of face recognition among ASD patients. However, the performance and strategies of ASD patients can be comparable to those of control groups, depending on the experimental situation or developmental stage, suggesting that face recognition in ASD patients is not entirely impaired. Recent brain function studies, including event-related potential and functional magnetic resonance imaging studies, have investigated the cognitive process of face recognition in ASD patients and revealed impaired function in the neural network comprising the fusiform gyrus and amygdala. This impaired function is potentially involved in the diminished preference for faces and in the atypical development of face recognition, eliciting the unstable behavioral characteristics seen in these patients. Face recognition in ASD patients has also been examined from other perspectives, namely self-face recognition and facial emotion recognition. While the former topic is intimately linked to basic social abilities such as self-other discrimination, the latter is closely associated with mentalizing. Further research on face recognition in ASD patients should investigate the connection between behavioral and neurological specifics in these patients, by considering developmental changes and the spectrum nature of the clinical condition. PMID:22764354

  9. Direct Gaze Modulates Face Recognition in Young Infants

    ERIC Educational Resources Information Center

    Farroni, Teresa; Massaccesi, Stefano; Menon, Enrica; Johnson, Mark H.

    2007-01-01

    From birth, infants prefer to look at faces that engage them in direct eye contact. In adults, direct gaze is known to modulate the processing of faces, including the recognition of individuals. In the present study, we investigate whether direction of gaze has any effect on face recognition in four-month-old infants. Four-month-old infants were shown…

  10. Familiar Person Recognition: Is Autonoetic Consciousness More Likely to Accompany Face Recognition Than Voice Recognition?

    NASA Astrophysics Data System (ADS)

    Barsics, Catherine; Brédart, Serge

    2010-11-01

    Autonoetic consciousness is a fundamental property of human memory, enabling us to experience mental time travel, to recollect past events with a feeling of self-involvement, and to project ourselves into the future. Autonoetic consciousness is a characteristic of episodic memory. By contrast, awareness of the past associated with a mere feeling of familiarity or knowing relies on noetic consciousness, depending on semantic memory integrity. The present research aimed to evaluate whether conscious recollection of episodic memories is more likely to occur following the recognition of a familiar face than following the recognition of a familiar voice. Recall of semantic information (biographical information) was also assessed. Previous studies that investigated the recall of biographical information following person recognition used faces and voices of famous people as stimuli. In this study, the participants were presented with personally familiar people's voices and faces, thus avoiding the presence of identity cues in the spoken extracts and allowing stricter control of exposure frequency for both types of stimuli (voices and faces). In the present study, the rate of retrieved episodic memories, associated with autonoetic awareness, was significantly higher for familiar faces than for familiar voices, even though the level of overall recognition was similar for both stimulus domains. The same pattern was observed for semantic information retrieval. These results and their implications for current Interactive Activation and Competition person recognition models are discussed.

  11. Image preprocessing study on KPCA-based face recognition

    NASA Astrophysics Data System (ADS)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural, and convenient advantages, has attracted more and more attention. This paper studies a face recognition system comprising face detection, feature extraction, and recognition, focusing on the theory and key technology of the various preprocessing methods in the face detection stage and on how different preprocessing methods affect the recognition results when the KPCA method is used. We choose the YCbCr color space for skin segmentation and integral projection for face location. We preprocess face images with morphological opening and closing (erosion and dilation) and with illumination compensation, and then apply a face recognition method based on kernel principal component analysis; experiments were carried out on a typical face database, with the algorithms implemented on the MATLAB platform. Experimental results show that, under certain conditions, the kernel extension of the PCA algorithm makes the extracted features represent the original image information better, because a nonlinear feature extraction method is used, and can thus achieve a higher recognition rate. In the image preprocessing stage, we found that different operations on the images can yield different results, and therefore different recognition rates in the recognition stage. At the same time, in the kernel principal component analysis, the degree of the polynomial kernel can affect the recognition result.
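
    A minimal sketch of the KPCA feature-extraction step with a polynomial kernel, assuming the preprocessing described above (skin segmentation, face location, morphology, illumination compensation) has already produced aligned grey-level face crops. The nearest-neighbour classifier, component count, and polynomial degree are illustrative choices, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KNeighborsClassifier

def train_kpca_recognizer(train_imgs, train_labels, degree=2, n_components=50):
    """train_imgs: (n_samples, h, w) array of preprocessed face images."""
    X = train_imgs.reshape(len(train_imgs), -1).astype(float) / 255.0
    kpca = KernelPCA(n_components=n_components, kernel="poly", degree=degree)
    features = kpca.fit_transform(X)               # nonlinear feature extraction
    clf = KNeighborsClassifier(n_neighbors=1).fit(features, train_labels)
    return kpca, clf

# Recognition: project a probe image into the kernel feature space, then classify.
# probe_feat = kpca.transform(probe_img.reshape(1, -1) / 255.0)
# identity = clf.predict(probe_feat)
```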

  12. Face age and sex modulate the other-race effect in face recognition.

    PubMed

    Wallis, Jennifer; Lipp, Ottmar V; Vanman, Eric J

    2012-11-01

    Faces convey a variety of socially relevant cues that have been shown to affect recognition, such as age, sex, and race, but few studies have examined the interactive effect of these cues. White participants of two distinct age groups were presented with faces that differed in race, age, and sex in a face recognition paradigm. Replicating the other-race effect, young participants recognized young own-race faces better than young other-race faces. However, recognition performance did not differ across old faces of different races (Experiments 1, 2A). In addition, participants showed an other-age effect, recognizing White young faces better than White old faces. Sex affected recognition performance only when age was not varied (Experiment 2B). Overall, older participants showed a similar recognition pattern (Experiment 3) as young participants, displaying an other-race effect for young, but not old, faces. However, they recognized young and old White faces on a similar level. These findings indicate that face cues interact to affect recognition performance such that age and sex information reliably modulate the effect of race cues. These results extend accounts of face recognition that explain recognition biases (such as the other-race effect) as a function of dichotomous ingroup/outgroup categorization, in that outgroup characteristics are not simply additive but interactively determine recognition performance. PMID:22933042

  13. Newborns' Face Recognition: Role of Inner and Outer Facial Features

    ERIC Educational Resources Information Center

    Turati, Chiara; Macchi Cassia, Viola; Simion, Francesca; Leo, Irene

    2006-01-01

    Existing data indicate that newborns are able to recognize individual faces, but little is known about what perceptual cues drive this ability. The current study showed that either the inner or outer features of the face can act as sufficient cues for newborns' face recognition (Experiment 1), but the outer part of the face enjoys an advantage…

  14. Neural Substrates for Episodic Encoding and Recognition of Unfamiliar Faces

    ERIC Educational Resources Information Center

    Hofer, Alex; Siedentopf, Christian M.; Ischebeck, Anja; Rettenbacher, Maria A.; Verius, Michael; Golaszewski, Stefan M.; Felber, Stephan; Fleischhacker, W. Wolfgang

    2007-01-01

    Functional MRI was used to investigate brain activation in healthy volunteers during encoding of unfamiliar faces as well as during correct recognition of newly learned faces (CR) compared to correct identification of distractor faces (CF), missed alarms (not recognizing previously presented faces, MA), and false alarms (incorrectly recognizing…

  15. Graph optimized Laplacian eigenmaps for face recognition

    NASA Astrophysics Data System (ADS)

    Dornaika, F.; Assoum, A.; Ruichek, Y.

    2015-01-01

    In recent years, a variety of nonlinear dimensionality reduction (NLDR) techniques have been proposed in the literature. They aim to address the limitations of traditional techniques such as PCA and classical scaling. Most of these techniques assume that the data of interest lie on an embedded nonlinear manifold within the higher-dimensional space. They provide a mapping from the high-dimensional space to the low-dimensional embedding and may be viewed, in the context of machine learning, as a preliminary feature extraction step after which pattern recognition algorithms are applied. Laplacian Eigenmaps (LE) is a nonlinear graph-based dimensionality reduction method that has been successfully applied to many practical problems, including face recognition. However, the construction of the LE graph suffers, as in other graph-based DR techniques, from the following issues: (1) the neighborhood graph is defined artificially in advance and thus does not necessarily benefit the desired DR task; (2) the graph is built using the nearest-neighbor criterion, which tends to work poorly due to the high dimensionality of the original space; and (3) its computation depends on two parameters whose values are generally difficult to assign, the neighborhood size and the heat kernel parameter. To address these problems, for the particular case of the LPP method (a linear version of LE), L. Zhang et al. [1] developed a DR algorithm whose idea is to integrate graph construction with the specific DR process into a unified framework. This algorithm results in an optimized graph rather than a predefined one.
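
    For reference, a compact sketch of standard Laplacian Eigenmaps, making explicit the two parameters the abstract calls hard to tune: the neighbourhood size k and the heat-kernel width t. The graph-optimized variant discussed above learns the graph jointly with the embedding instead of fixing it in advance.

```python
import numpy as np
from scipy.linalg import eigh

def laplacian_eigenmaps(X, k=5, t=1.0, n_components=2):
    """X: (n_samples, n_features). Returns the LE embedding."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(d2[i])[1:k + 1]                    # k nearest neighbours of sample i
        W[i, nn] = np.exp(-d2[i, nn] / t)                  # heat-kernel edge weights
    W = np.maximum(W, W.T)                                 # symmetrize the neighborhood graph
    D = np.diag(W.sum(1))
    L = D - W
    # smallest non-trivial solutions of the generalized problem L y = lambda D y
    vals, vecs = eigh(L, D)
    return vecs[:, 1:n_components + 1]
```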

  16. Isolating the Special Component of Face Recognition: Peripheral Identification and a Mooney Face

    ERIC Educational Resources Information Center

    McKone, Elinor

    2004-01-01

    A previous finding argues that, for faces, configural (holistic) processing can operate even in the complete absence of part-based contributions to recognition. Here, this result is confirmed using 2 methods. In both, recognition of inverted faces (parts only) was removed altogether (chance identification of faces in the periphery; no perception…

  17. Familiar Face Recognition in Children with Autism: The Differential Use of Inner and Outer Face Parts

    ERIC Educational Resources Information Center

    Wilson, Rebecca; Pascalis, Olivier; Blades, Mark

    2007-01-01

    We investigated whether children with autistic spectrum disorders (ASD) have a deficit in recognising familiar faces. Children with ASD were given a forced choice familiar face recognition task with three conditions: full faces, inner face parts and outer face parts. Control groups were children with developmental delay (DD) and typically…

  18. Children's Recognition of Unfamiliar Faces: Developments and Determinants.

    ERIC Educational Resources Information Center

    Soppe, H. J. G.

    1986-01-01

    Eight- to 12-year-old primary school children and 13-year-old secondary school children were given a live and photographed face recognition task and several other figural tasks. While scores on most tasks increased with age, face recognition scores were affected by age, decreasing at age 12 (puberty onset). (Author/BB)

  19. Transfer between Pose and Illumination Training in Face Recognition

    ERIC Educational Resources Information Center

    Liu, Chang Hong; Bhuiyan, Md. Al-Amin; Ward, James; Sui, Jie

    2009-01-01

    The relationship between pose and illumination learning in face recognition was examined in a yes-no recognition paradigm. The authors assessed whether pose training can transfer to a new illumination or vice versa. Results show that an extensive level of pose training through a face-name association task was able to generalize to a new…

  20. Recognition of Moving and Static Faces by Young Infants

    ERIC Educational Resources Information Center

    Otsuka, Yumiko; Konishi, Yukuo; Kanazawa, So; Yamaguchi, Masami K.; Abdi, Herve; O'Toole, Alice J.

    2009-01-01

    This study compared 3- to 4-month-olds' recognition of previously unfamiliar faces learned in a moving or a static condition. Infants in the moving condition showed successful recognition with only 30 s familiarization, even when different images of a face were used in the familiarization and test phase (Experiment 1). In contrast, infants in the…

  1. When the face fits: recognition of celebrities from matching and mismatching faces and voices.

    PubMed

    Stevenage, Sarah V; Neil, Greg J; Hamlin, Iain

    2014-01-01

    The results of two experiments are presented in which participants engaged in a face-recognition or a voice-recognition task. The stimuli were face-voice pairs in which the face and voice were co-presented and were either "matched" (same person), "related" (two highly associated people), or "mismatched" (two unrelated people). Analysis in both experiments confirmed that accuracy and confidence in face recognition was consistently high regardless of the identity of the accompanying voice. However accuracy of voice recognition was increasingly affected as the relationship between voice and accompanying face declined. Moreover, when considering self-reported confidence in voice recognition, confidence remained high for correct responses despite the proportion of these responses declining across conditions. These results converged with existing evidence indicating the vulnerability of voice recognition as a relatively weak signaller of identity, and results are discussed in the context of a person-recognition framework. PMID:23531227

  2. Face averages enhance user recognition for smartphone security.

    PubMed

    Robertson, David J; Kramer, Robin S S; Burton, A Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average'--a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings. PMID:25807251

  3. Face Averages Enhance User Recognition for Smartphone Security

    PubMed Central

    Robertson, David J.; Kramer, Robin S. S.; Burton, A. Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual’s ‘face-average’ – a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user’s face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings. PMID:25807251

  4. Face Recognition Using ALLE and SIFT for Human Robot Interaction

    NASA Astrophysics Data System (ADS)

    Pan, Yaozhang; Ge, Shuzhi Sam; He, Hongsheng

    Face recognition is a very important aspect in developing human-robot interaction (HRI) for social robots. In this paper, an efficient face recognition algorithm is introduced for building an intelligent robot vision system to recognize human faces. Dimension reduction algorithms, locally linear embedding (LLE) and adaptive locally linear embedding (ALLE), are combined with the feature extraction algorithm scale-invariant feature transform (SIFT) to form new methods, called LLE-SIFT and ALLE-SIFT, for finding compact and distinctive descriptors for face images. The new feature descriptors are demonstrated to perform better in face recognition applications than standard SIFT descriptors, which shows that the proposed method is promising for developing a robot vision system for face recognition.

  5. Developmental Commonalities between Object and Face Recognition in Adolescence

    PubMed Central

    Jüttner, Martin; Wakui, Elley; Petters, Dean; Davidoff, Jules

    2016-01-01

    In the visual perception literature, the recognition of faces has often been contrasted with that of non-face objects, in terms of differences with regard to the role of parts, part relations and holistic processing. However, recent evidence from developmental studies has begun to blur this sharp distinction. We review evidence for a protracted development of object recognition that is reminiscent of the well-documented slow maturation observed for faces. The prolonged development manifests itself in a retarded processing of metric part relations as opposed to that of individual parts and offers surprising parallels to developmental accounts of face recognition, even though the interpretation of the data is less clear with regard to holistic processing. We conclude that such results might indicate functional commonalities between the mechanisms underlying the recognition of faces and non-face objects, which are modulated by different task requirements in the two stimulus domains. PMID:27014176

  6. Human face recognition by Euclidean distance and neural network

    NASA Astrophysics Data System (ADS)

    Pornpanomchai, Chomtip; Inkuna, Chittrapol

    2010-02-01

    The aim of this work is to improve human face recognition so that it can be applied to more precise and effective recognition of human faces, offering agencies an alternative for their access control systems. To accomplish this, a technique based on calculating distances between facial features, together with face recognition through a neural network, is used. The system uses an image processing pipeline consisting of three major processes: 1) preprocessing, or preparation of the images; 2) feature extraction from images of the eyes, ears, nose and mouth, used to calculate the Euclidean distances between these organs; and 3) face recognition using a neural network. Based on experimental results from a total of 200 images of 100 human faces, the system correctly recognizes 96% of the faces with an average processing time of 3.304 s per image.
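
    A hedged sketch of the two key steps described here: turning located facial landmarks (eyes, ears, nose, mouth) into a vector of pairwise Euclidean distances, then classifying that vector with a small neural network. Landmark detection is assumed to be done elsewhere, and the network size is illustrative.

```python
import numpy as np
from itertools import combinations
from sklearn.neural_network import MLPClassifier

def distance_features(landmarks):
    """landmarks: (n_points, 2) array of (x, y) positions of facial organs.

    Returns all pairwise Euclidean distances as one feature vector."""
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                     for i, j in combinations(range(len(landmarks)), 2)])

# train_landmarks: list of (n_points, 2) arrays, train_labels: identity labels
# X = np.vstack([distance_features(lm) for lm in train_landmarks])
# clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, train_labels)
# identity = clf.predict(distance_features(probe_landmarks).reshape(1, -1))
```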

  7. Pose-Invariant Face Recognition via RGB-D Images

    PubMed Central

    Sang, Gaoli; Li, Jing; Zhao, Qijun

    2016-01-01

    Three-dimensional (3D) face models can intrinsically handle large pose face recognition problem. In this paper, we propose a novel pose-invariant face recognition method via RGB-D images. By employing depth, our method is able to handle self-occlusion and deformation, both of which are challenging problems in two-dimensional (2D) face recognition. Texture images in the gallery can be rendered to the same view as the probe via depth. Meanwhile, depth is also used for similarity measure via frontalization and symmetric filling. Finally, both texture and depth contribute to the final identity estimation. Experiments on Bosphorus, CurtinFaces, Eurecom, and Kiwi databases demonstrate that the additional depth information has improved the performance of face recognition with large pose variations and under even more challenging conditions. PMID:26819581

  8. The activation of visual face memory and explicit face recognition are delayed in developmental prosopagnosia.

    PubMed

    Parketny, Joanna; Towler, John; Eimer, Martin

    2015-08-01

    Individuals with developmental prosopagnosia (DP) are strongly impaired in recognizing faces, but the causes of this deficit are not well understood. We employed event-related brain potentials (ERPs) to study the time-course of neural processes involved in the recognition of previously unfamiliar faces in DPs and in age-matched control participants with normal face recognition abilities. Faces of different individuals were presented sequentially in one of three possible views, and participants had to detect a specific Target Face ("Joe"). EEG was recorded during task performance to Target Faces, Nontarget Faces, or the participants' Own Face (which had to be ignored). The N250 component was measured as a marker of the match between a seen face and a stored representation in visual face memory. The subsequent P600f was measured as an index of attentional processes associated with the conscious awareness and recognition of a particular face. Target Faces elicited reliable N250 and P600f in the DP group, but both of these components emerged later in DPs than in control participants. This shows that the activation of visual face memory for previously unknown learned faces and the subsequent attentional processing and conscious recognition of these faces are delayed in DP. N250 and P600f components to Own Faces did not differ between the two groups, indicating that the processing of long-term familiar faces is less affected in DP. However, P600f components to Own Faces were absent in two participants with DP who failed to recognize their Own Face during the experiment. These results provide new evidence that face recognition deficits in DP may be linked to a delayed activation of visual face memory and explicit identity recognition mechanisms. PMID:26169316

  9. Impaired processing of self-face recognition in anorexia nervosa.

    PubMed

    Hirot, France; Lesage, Marine; Pedron, Lya; Meyer, Isabelle; Thomas, Pierre; Cottencin, Olivier; Guardia, Dewi

    2016-03-01

    Body image disturbances and massive weight loss are major clinical symptoms of anorexia nervosa (AN). The aim of the present study was to examine the influence of body changes and eating attitudes on self-face recognition ability in AN. Twenty-seven subjects suffering from AN and 27 control participants performed a self-face recognition task (SFRT). During the task, digital morphs between their own face and a gender-matched unfamiliar face were presented in a random sequence. Participants' self-face recognition failures, cognitive flexibility, body concern and eating habits were assessed with the Self-Face Recognition Questionnaire (SFRQ), Trail Making Test (TMT), Body Shape Questionnaire (BSQ) and Eating Disorder Inventory-2 (EDI-2), respectively. Subjects suffering from AN exhibited significantly greater difficulties than control participants in identifying their own face (p = 0.028). No significant difference was observed between the two groups for TMT (all p > 0.1, non-significant). Regarding predictors of self-face recognition skills, there was a negative correlation between SFRT and body mass index (p = 0.01) and a positive correlation between SFRQ and EDI-2 (p < 0.001) or BSQ (p < 0.001). Among factors involved, nutritional status and intensity of eating disorders could play a part in impaired self-face recognition. PMID:26420298

  10. The effect of distraction on face and voice recognition.

    PubMed

    Stevenage, Sarah V; Neil, Greg J; Barlow, Jess; Dyson, Amy; Eaton-Brown, Catherine; Parsons, Beth

    2013-03-01

    The results of two experiments are presented which explore the effect of distractor items on face and voice recognition. Following from the suggestion that voice processing is relatively weak compared to face processing, it was anticipated that voice recognition would be more affected by the presentation of distractor items between study and test compared to face recognition. Using a sequential matching task with a fixed interval between study and test that either incorporated distractor items or did not, the results supported our prediction. Face recognition remained strong irrespective of the number of distractor items between study and test. In contrast, voice recognition was significantly impaired by the presence of distractor items regardless of their number (Experiment 1). This pattern remained whether distractor items were highly similar to the targets or not (Experiment 2). These results offer support for the proposal that voice processing is a relatively vulnerable method of identification. PMID:22926436

  11. Face Recognition Using Local Quantized Patterns and Gabor Filters

    NASA Astrophysics Data System (ADS)

    Khryashchev, V.; Priorov, A.; Stepanova, O.; Nikitin, A.

    2015-05-01

    The problem of face recognition in natural or artificial environments has received a great deal of researchers' attention over the last few years. Many methods for accurate face recognition have been proposed. Nevertheless, these methods often fail to recognize a person accurately in difficult scenarios, e.g. low resolution, low contrast, pose variations, etc. We therefore propose an approach for accurate and robust face recognition using local quantized patterns and Gabor filters. Estimation of the eye centers is used as a preprocessing stage. The evaluation of our algorithm on different samples from the standardized FERET database shows that our method is invariant to general variations of lighting, expression, occlusion and aging. The proposed approach yields an increase of about 20% in correct recognition accuracy compared with the known face recognition algorithms from the OpenCV library. The additional use of Gabor filters can significantly improve robustness to changes in lighting conditions.
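
    A small sketch of a Gabor filter bank of the kind combined here with local quantized patterns; the orientations, wavelengths, and other parameter values are illustrative, not the authors' settings.

```python
import cv2
import numpy as np

def gabor_responses(gray_face, ksize=21):
    """gray_face: 2D uint8/float array. Returns stacked filter responses."""
    responses = []
    for theta in np.arange(0, np.pi, np.pi / 8):        # 8 orientations
        for lambd in (4, 8, 16):                        # 3 wavelengths
            kernel = cv2.getGaborKernel((ksize, ksize), sigma=4.0,
                                        theta=theta, lambd=lambd,
                                        gamma=0.5, psi=0)
            responses.append(cv2.filter2D(gray_face, cv2.CV_32F, kernel))
    # 24 response maps describing the texture of the face at multiple
    # orientations and scales
    return np.stack(responses, axis=-1)
```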

  12. Recognition of Faces of Ingroup and Outgroup Children and Adults

    ERIC Educational Resources Information Center

    Corenblum, B.; Meissner, Christian A.

    2006-01-01

    People are often more accurate in recognizing faces of ingroup members than in recognizing faces of outgroup members. Although own-group biases in face recognition are well established among adults, less attention has been given to such biases among children. This is surprising considering how often children give testimony in criminal and civil…

  13. The use of 3D information in face recognition.

    PubMed

    Liu, Chang Hong; Ward, James

    2006-03-01

    Effects of shading in face recognition have often alluded to 3D shape processing. However, research to date has failed to demonstrate any use of important 3D information. Stereopsis adds no advantage in face encoding [Liu, C. H., Ward, J., & Young, A. W. (in press). Transfer between 2D and 3D representations of faces. Visual Cognition], and perspective transformation impairs rather than assists recognition performance [Liu, C. H. (2003). Is face recognition in pictures affected by the center of projection? In IEEE international workshop on analysis and modeling of faces and gestures (pp. 53-59). Nice, France: IEEE Computer Society]. Although evidence tends to rule out involvement of 3D information in face processing, it remains possible that the usefulness of this information depends on certain combinations of cues. We tested this hypothesis in a recognition task, where face stimuli with several levels of perspective transformation were either presented in stereo or without stereo. We found that even at a moderate level of perspective transformation where training and test faces were separated by just 30 cm, the stereo condition produced better performance. This provides the first evidence that stereo information can facilitate face recognition. We conclude that 3D information plays a role in face processing but only when certain types of 3D cues are properly combined. PMID:16298412

  14. Two dimensional LDA using volume measure in face recognition

    NASA Astrophysics Data System (ADS)

    Meng, Jicheng; Feng, Li; Zheng, Xiaolong

    2007-11-01

    The classification criterion for two-dimensional LDA (2DLDA)-based face recognition methods has received little attention, with most work focusing on 2DLDA-based feature extraction. The typical classification measure used in 2DLDA-based face recognition is the sum of the Euclidean distances between the feature vectors of two feature matrices, called the traditional distance measure (TDM). However, this classification criterion does not accord with high-dimensional geometry theory. We therefore apply the volume measure (VM), which is based on high-dimensional geometry, to 2DLDA-based face recognition in this paper. To test its performance, experiments were performed on the YALE face database. The experimental results show that the volume measure is more effective than the TDM in 2DLDA-based face recognition.
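
    An illustrative comparison of the two matrix-to-matrix measures contrasted above: the traditional distance measure, and a volume-style measure computed here as the Gram determinant of the difference matrix. The precise volume definition used in the paper may differ; this is only one natural reading of a "volume spanned by the difference vectors".

```python
import numpy as np

def tdm(A, B):
    """A, B: (h, d) feature matrices; columns are the 2DLDA feature vectors.

    Traditional distance measure: sum of column-wise Euclidean distances."""
    return np.linalg.norm(A - B, axis=0).sum()

def volume_measure(A, B):
    """Volume spanned by the d difference vectors (columns of A - B)."""
    D = A - B                                        # (h, d) difference matrix
    gram = D.T @ D                                   # d x d Gram matrix
    return np.sqrt(max(np.linalg.det(gram), 0.0))
```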

  15. Multi-feature fusion for thermal face recognition

    NASA Astrophysics Data System (ADS)

    Bi, Yin; Lv, Mingsong; Wei, Yangjie; Guan, Nan; Yi, Wang

    2016-07-01

    Human face recognition has been researched for the last three decades. Face recognition with thermal images now attracts significant attention since it can be used in low- or non-illuminated environments. However, thermal face recognition performance is still insufficient for practical applications. One main reason is that most existing works leverage only a single feature to characterize a face in a thermal image. To solve this problem, we propose multi-feature fusion, a technique that combines multiple features in thermal face characterization and recognition. In this work, we designed a systematic way to combine four features: the local binary pattern, the Gabor jet descriptor, the Weber local descriptor and a down-sampling feature. Experimental results show that our approach outperforms methods that leverage only a single feature and is robust to noise, occlusion, expression, low resolution and different l1-minimization methods.

  16. Optical Correlator for Face Recognition Using Collinear Holographic System

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Kodate, Kashiko

    2006-08-01

    We have constructed an optical correlator for fast face recognition. Recognition rate can be markedly improved, if reference images are optically recorded and can be accessed directly without converting them to digital signals. In addition, a large capacity of optical storage allows us to increase the size of the reference database. We propose a new optical correlator that integrates the optical correlation technology used in our face recognition system and collinear holography. From preliminary correlation experiments using the collinear optical set-up, we achieved excellent performance of high correlation peaks and low error rates. We expect an optical correlation of 10 μs/frame, i.e., 100,000 face/s when applied to face recognition. This system can also be applied to various image searches.

  17. Color constancy in 3D-2D face recognition

    NASA Astrophysics Data System (ADS)

    Meyer, Manuel; Riess, Christian; Angelopoulou, Elli; Evangelopoulos, Georgios; Kakadiaris, Ioannis A.

    2013-05-01

    Face is one of the most popular biometric modalities. However, up to now, color is rarely actively used in face recognition. Yet, it is well-known that when a person recognizes a face, color cues can become as important as shape, especially when combined with the ability of people to identify the color of objects independent of illuminant color variations. In this paper, we examine the feasibility and effect of explicitly embedding illuminant color information in face recognition systems. We empirically examine the theoretical maximum gain of including known illuminant color to a 3D-2D face recognition system. We also investigate the impact of using computational color constancy methods for estimating the illuminant color, which is then incorporated into the face recognition framework. Our experiments show that under close-to-ideal illumination estimates, one can improve face recognition rates by 16%. When the illuminant color is algorithmically estimated, the improvement is approximately 5%. These results suggest that color constancy has a positive impact on face recognition, but the accuracy of the illuminant color estimate has a considerable effect on its benefits.

  18. 3D face recognition based on matching of facial surfaces

    NASA Astrophysics Data System (ADS)

    Echeagaray-Patrón, Beatriz A.; Kober, Vitaly

    2015-09-01

    Face recognition is an important task in pattern recognition and computer vision. In this work a method for 3D face recognition in the presence of facial expression and poses variations is proposed. The method uses 3D shape data without color or texture information. A new matching algorithm based on conformal mapping of original facial surfaces onto a Riemannian manifold followed by comparison of conformal and isometric invariants computed in the manifold is suggested. Experimental results are presented using common 3D face databases that contain significant amount of expression and pose variations.

  19. Face recognition algorithms surpass humans matching faces over changes in illumination.

    PubMed

    O'Toole, Alice J; Jonathon Phillips, P; Jiang, Fang; Ayyad, Janet; Penard, Nils; Abdi, Hervé

    2007-09-01

    There has been significant progress in improving the performance of computer-based face recognition algorithms over the last decade. Although algorithms have been tested and compared extensively with each other, there has been remarkably little work comparing the accuracy of computer-based face recognition systems with humans. We compared seven state-of-the-art face recognition algorithms with humans on a face-matching task. Humans and algorithms determined whether pairs of face images, taken under different illumination conditions, were pictures of the same person or of different people. Three algorithms surpassed human performance matching face pairs prescreened to be "difficult" and six algorithms surpassed humans on "easy" face pairs. Although illumination variation continues to challenge face recognition algorithms, current algorithms compete favorably with humans. The superior performance of the best algorithms over humans, in light of the absolute performance levels of the algorithms, underscores the need to compare algorithms with the best current control--humans. PMID:17627050

  20. Integration of faces and voices, but not faces and names, in person recognition.

    PubMed

    O'Mahony, Christiane; Newell, Fiona N

    2012-02-01

    Recent studies on cross-modal recognition suggest that face and voice information are linked for the purpose of person identification. We tested whether congruent associations between familiarized faces and voices facilitated subsequent person recognition relative to incongruent associations. Furthermore, we investigated whether congruent face and name associations would similarly benefit person identification relative to incongruent face and name associations. Participants were familiarized with a set of talking video-images of actors, their names, and their voices. They were then tested on their recognition of either the face, voice, or name of each actor from bimodal stimuli which were either congruent or novel (incongruent) associations between the familiarized face and voice or face and name. We found that response times to familiarity decisions based on congruent face and voice stimuli were facilitated relative to incongruent associations. In contrast, we failed to find a benefit for congruent face and name pairs. Our findings suggest that faces and voices, but not faces and names, are integrated in memory for the purpose of person recognition. These findings have important implications for current models of face perception and support growing evidence for multisensory effects in face perception areas of the brain for the purpose of person recognition. PMID:22229775

  1. The Impact of Early Bilingualism on Face Recognition Processes.

    PubMed

    Kandel, Sonia; Burfin, Sabine; Méary, David; Ruiz-Tada, Elisa; Costa, Albert; Pascalis, Olivier

    2016-01-01

    Early linguistic experience has an impact on the way we decode audiovisual speech in face-to-face communication. The present study examined whether differences in visual speech decoding could be linked to a broader difference in face processing. To identify a phoneme we have to do an analysis of the speaker's face to focus on the relevant cues for speech decoding (e.g., locating the mouth with respect to the eyes). Face recognition processes were investigated through two classic effects in face recognition studies: the Other-Race Effect (ORE) and the Inversion Effect. Bilingual and monolingual participants did a face recognition task with Caucasian faces (own race), Chinese faces (other race), and cars that were presented in an Upright or Inverted position. The results revealed that monolinguals exhibited the classic ORE. Bilinguals did not. Overall, bilinguals were slower than monolinguals. These results suggest that bilinguals' face processing abilities differ from monolinguals'. Early exposure to more than one language may lead to a perceptual organization that goes beyond language processing and could extend to face analysis. We hypothesize that these differences could be due to the fact that bilinguals focus on different parts of the face than monolinguals, making them more efficient in other race face processing but slower. However, more studies using eye-tracking techniques are necessary to confirm this explanation. PMID:27486422

  2. The Impact of Early Bilingualism on Face Recognition Processes

    PubMed Central

    Kandel, Sonia; Burfin, Sabine; Méary, David; Ruiz-Tada, Elisa; Costa, Albert; Pascalis, Olivier

    2016-01-01

    Early linguistic experience has an impact on the way we decode audiovisual speech in face-to-face communication. The present study examined whether differences in visual speech decoding could be linked to a broader difference in face processing. To identify a phoneme we have to do an analysis of the speaker’s face to focus on the relevant cues for speech decoding (e.g., locating the mouth with respect to the eyes). Face recognition processes were investigated through two classic effects in face recognition studies: the Other-Race Effect (ORE) and the Inversion Effect. Bilingual and monolingual participants did a face recognition task with Caucasian faces (own race), Chinese faces (other race), and cars that were presented in an Upright or Inverted position. The results revealed that monolinguals exhibited the classic ORE. Bilinguals did not. Overall, bilinguals were slower than monolinguals. These results suggest that bilinguals’ face processing abilities differ from monolinguals’. Early exposure to more than one language may lead to a perceptual organization that goes beyond language processing and could extend to face analysis. We hypothesize that these differences could be due to the fact that bilinguals focus on different parts of the face than monolinguals, making them more efficient in other race face processing but slower. However, more studies using eye-tracking techniques are necessary to confirm this explanation. PMID:27486422

  3. Effective face recognition using bag of features with additive kernels

    NASA Astrophysics Data System (ADS)

    Yang, Shicai; Bebis, George; Chu, Yongjie; Zhao, Lindu

    2016-01-01

    In past decades, many techniques have been used to improve face recognition performance. The most common and well-studied way is to use the whole face image to build a subspace based on dimensionality reduction. Differing from the methods above, we consider face recognition as an image classification problem: the face images of the same person are considered to fall into the same category, and both each category and each face image can be represented by a simple pyramid histogram. Spatially dense scale-invariant feature transform (SIFT) features and the bag-of-features method are used to build the category and face representations. To make the method more efficient, a linear support vector machine solver, Pegasos, is used for classification in the kernel space with additive kernels instead of nonlinear SVMs. Our experimental results demonstrate that the proposed method can achieve very high recognition accuracy on the ORL, YALE, and FERET databases.
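
    A hedged sketch of the classification stage only: bag-of-features histograms passed through an explicit additive-kernel (chi-squared approximation) feature map and then a fast linear SVM. Dense-SIFT extraction and visual-word quantization are assumed to have produced the histograms already, and scikit-learn's LinearSVC is used here as a stand-in for a Pegasos-style linear solver.

```python
from sklearn.kernel_approximation import AdditiveChi2Sampler
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

def train_bof_classifier(hists, labels):
    """hists: (n_images, n_bins) non-negative bag-of-features histograms."""
    model = make_pipeline(
        AdditiveChi2Sampler(sample_steps=2),   # explicit additive-kernel map
        LinearSVC(C=1.0))                      # fast linear SVM on the mapped features
    return model.fit(hists, labels)

# identity = train_bof_classifier(train_hists, train_labels).predict(probe_hists)
```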

  4. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition

    PubMed Central

    Wang, Rong

    2015-01-01

    In real-world applications, face images vary with illumination, facial expression, and pose. More training samples can reveal more of the possible appearances of a face. Although minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from the problem of a limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples. We generate mirror faces from the original training samples and combine the two kinds of samples into a new training set. The face recognition experiments show that our method achieves high classification accuracy. PMID:26576452
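
    A minimal sketch of the two ideas in this abstract, assuming face images are already cropped and aligned: mirror each training face horizontally to create virtual samples, then apply minimum-squared-error (least-squares) classification against one-hot class targets.

```python
import numpy as np

def train_msec_with_mirrors(train_imgs, train_labels, n_classes):
    """train_imgs: (n, h, w) array; train_labels: integer identities."""
    mirrored = train_imgs[:, :, ::-1]                      # horizontally flipped (mirror) faces
    X = np.concatenate([train_imgs, mirrored]).reshape(2 * len(train_imgs), -1)
    y = np.concatenate([train_labels, train_labels])
    T = np.eye(n_classes)[y]                               # one-hot class targets
    X = np.hstack([X, np.ones((len(X), 1))])               # append a bias term
    W, *_ = np.linalg.lstsq(X, T, rcond=None)              # minimum squared error solution
    return W

def classify(W, img):
    x = np.append(img.reshape(-1), 1.0)
    return int(np.argmax(x @ W))                           # class with largest response
```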

  5. Feature based sliding window technique for face recognition

    NASA Astrophysics Data System (ADS)

    Javed, Muhammad Younus; Mohsin, Syed Maajid; Anjum, Muhammad Almas

    2010-02-01

    Human beings are commonly identified by biometric schemes, which are concerned with identifying individuals by their unique physical characteristics. Passwords and personal identification numbers have been used to identify people for years. The disadvantages of these schemes are that someone else may use them or that they can easily be forgotten. In view of these problems, biometric approaches such as face recognition, fingerprint, iris/retina and voice recognition have been developed, which provide a far better solution for identifying individuals. A number of methods have been developed for face recognition. This paper illustrates the use of Gabor filters for extracting facial features within a sliding window frame. Classification is done by assigning to the unknown image the class label whose stored database image shares the most similar features. The proposed system gives a recognition rate of 96%, which is better than many similar techniques used for face recognition.

  6. FaceIt: face recognition from static and live video for law enforcement

    NASA Astrophysics Data System (ADS)

    Atick, Joseph J.; Griffin, Paul M.; Redlich, A. N.

    1997-01-01

    Recent advances in image and pattern recognition technology- -especially face recognition--are leading to the development of a new generation of information systems of great value to the law enforcement community. With these systems it is now possible to pool and manage vast amounts of biometric intelligence such as face and finger print records and conduct computerized searches on them. We review one of the enabling technologies underlying these systems: the FaceIt face recognition engine; and discuss three applications that illustrate its benefits as a problem-solving technology and an efficient and cost effective investigative tool.

  7. Face engagement during infancy predicts later face recognition ability in younger siblings of children with autism.

    PubMed

    de Klerk, Carina C J M; Gliga, Teodora; Charman, Tony; Johnson, Mark H

    2014-07-01

    Face recognition difficulties are frequently documented in children with autism spectrum disorders (ASD). It has been hypothesized that these difficulties result from a reduced interest in faces early in life, leading to decreased cortical specialization and atypical development of the neural circuitry for face processing. However, a recent study by our lab demonstrated that infants at increased familial risk for ASD, irrespective of their diagnostic status at 3 years, exhibit a clear orienting response to faces. The present study was conducted as a follow-up on the same cohort to investigate how measures of early engagement with faces relate to face-processing abilities later in life. We also investigated whether face recognition difficulties are specifically related to an ASD diagnosis, or whether they are present at a higher rate in all those at familial risk. At 3 years we found a reduced ability to recognize unfamiliar faces in the high-risk group that was not specific to those children who received an ASD diagnosis, consistent with face recognition difficulties being an endophenotype of the disorder. Furthermore, we found that longer looking at faces at 7 months was associated with poorer performance on the face recognition task at 3 years in the high-risk group. These findings suggest that longer looking at faces in infants at risk for ASD might reflect early face-processing difficulties and predicts difficulties with recognizing faces later in life. PMID:24314028

  8. Iterative closest normal point for 3D face recognition.

    PubMed

    Mohammadzade, Hoda; Hatzinakos, Dimitrios

    2013-02-01

    The common approach for 3D face recognition is to register a probe face to each of the gallery faces and then calculate the sum of the distances between their points. This approach is computationally expensive and sensitive to facial expression variation. In this paper, we introduce the iterative closest normal point method for finding the corresponding points between a generic reference face and every input face. The proposed correspondence finding method samples a set of points for each face, denoted as the closest normal points. These points are effectively aligned across all faces, enabling effective application of discriminant analysis methods for 3D face recognition. As a result, the expression variation problem is addressed by minimizing the within-class variability of the face samples while maximizing the between-class variability. As an important conclusion, we show that the surface normal vectors of the face at the sampled points contain more discriminatory information than the coordinates of the points. We have performed comprehensive experiments on the Face Recognition Grand Challenge database, which is presently the largest available 3D face database. We have achieved verification rates of 99.6 and 99.2 percent at a false acceptance rate of 0.1 percent for the all versus all and ROC III experiments, respectively, which, to the best of our knowledge, have seven and four times less error rates, respectively, compared to the best existing methods on this database. PMID:22585097

  9. Culture moderates the relationship between interdependence and face recognition

    PubMed Central

    Ng, Andy H.; Steele, Jennifer R.; Sasaki, Joni Y.; Sakamoto, Yumiko; Williams, Amanda

    2015-01-01

    Recent theory suggests that face recognition accuracy is affected by people’s motivations, with people being particularly motivated to remember ingroup versus outgroup faces. In the current research we suggest that those higher in interdependence should have a greater motivation to remember ingroup faces, but this should depend on how ingroups are defined. To examine this possibility, we used a joint individual difference and cultural approach to test (a) whether individual differences in interdependence would predict face recognition accuracy, and (b) whether this effect would be moderated by culture. In Study 1 European Canadians higher in interdependence demonstrated greater recognition for same-race (White), but not cross-race (East Asian) faces. In Study 2 we found that culture moderated this effect. Interdependence again predicted greater recognition for same-race (White), but not cross-race (East Asian) faces among European Canadians; however, interdependence predicted worse recognition for both same-race (East Asian) and cross-race (White) faces among first-generation East Asians. The results provide insight into the role of motivation in face perception as well as cultural differences in the conception of ingroups. PMID:26579011

  10. Face recognition in simulated prosthetic vision: face detection-based image processing strategies

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Wu, Xiaobei; Lu, Yanyu; Wu, Hao; Kan, Han; Chai, Xinyu

    2014-08-01

    Objective. Given the limited visual percepts elicited by current prosthetic devices, it is essential to optimize image content in order to assist implant wearers to achieve better performance of visual tasks. This study focuses on recognition of familiar faces using simulated prosthetic vision. Approach. Combined with region-of-interest (ROI) magnification, three face extraction strategies based on a face detection technique were used: the Viola-Jones face region, the statistical face region (SFR) and the matting face region. Main results. These strategies significantly enhanced recognition performance compared to directly lowering resolution (DLR) with Gaussian dots. The inclusion of certain external features, such as hairstyle, was beneficial for face recognition. Given the high recognition accuracy achieved and applicable processing speed, SFR-ROI was the preferred strategy. DLR processing resulted in significant face gender recognition differences (i.e. females were more easily recognized than males), but these differences were not apparent with other strategies. Significance. Face detection-based image processing strategies improved visual perception by highlighting useful information. Their use is advisable for face recognition when using low-resolution prosthetic vision. These results provide information for the continued design of image processing modules for use in visual prosthetics, thus maximizing the benefits for future prosthesis wearers.
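
    A hedged sketch of a face-detection-based strategy of the kind evaluated here: detect the face with a Viola-Jones cascade, treat the detected region as the magnified ROI, and render it on a coarse phosphene-like grid. The grid size and fallback behaviour are illustrative, not the study's parameters, and the SFR/matting variants are not reproduced.

```python
import cv2
import numpy as np

def face_roi_phosphenes(gray_frame, grid=(32, 32)):
    """gray_frame: 2D uint8 image. Returns a low-resolution ROI rendering."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray_frame, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        roi = gray_frame                                    # fall back to the whole frame
    else:
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest detected face
        roi = gray_frame[y:y + h, x:x + w]                  # ROI magnification: face fills the view
    # coarse grid standing in for the limited number of phosphenes
    return cv2.resize(roi, grid, interpolation=cv2.INTER_AREA)
```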

  11. What Types of Visual Recognition Tasks Are Mediated by the Neural Subsystem that Subserves Face Recognition?

    ERIC Educational Resources Information Center

    Brooks, Brian E.; Cooper, Eric E.

    2006-01-01

    Three divided visual field experiments tested current hypotheses about the types of visual shape representation tasks that recruit the cognitive and neural mechanisms underlying face recognition. Experiment 1 found a right hemisphere advantage for subordinate but not basic-level face recognition. Experiment 2 found a right hemisphere advantage for…

  12. Eye movements during emotion recognition in faces.

    PubMed

    Schurgin, M W; Nelson, J; Iida, S; Ohira, H; Chiao, J Y; Franconeri, S L

    2014-01-01

    When distinguishing whether a face displays a certain emotion, some regions of the face may contain more useful information than others. Here we ask whether people differentially attend to distinct regions of a face when judging different emotions. Experiment 1 measured eye movements while participants discriminated between emotional (joy, anger, fear, sadness, shame, and disgust) and neutral facial expressions. Participant eye movements primarily fell in five distinct regions (eyes, upper nose, lower nose, upper lip, nasion). Distinct fixation patterns emerged for each emotion, such as a focus on the lips for joyful faces and a focus on the eyes for sad faces. These patterns were strongest for emotional faces but were still present when viewers sought evidence of emotion within neutral faces, indicating a goal-driven influence on eye-gaze patterns. Experiment 2 verified that these fixation patterns tended to reflect attention to the most diagnostic regions of the face for each emotion. Eye movements appear to follow both stimulus-driven and goal-driven perceptual strategies when decoding emotional information from a face. PMID:25406159

  13. SIFT fusion of kernel eigenfaces for face recognition

    NASA Astrophysics Data System (ADS)

    Kisku, Dakshina R.; Tistarelli, Massimo; Gupta, Phalguni; Sing, Jamuna K.

    2015-10-01

    In this paper, we investigate an application that integrates a holistic appearance-based method and a feature-based method for face recognition. The automatic face recognition system characterizes face images with multiscale kernel PCA (principal component analysis) approximations and reduces the number of invariant SIFT (scale-invariant feature transform) keypoints extracted from the projected face feature space. To achieve higher variance across inter-class face images, we compute principal components in a higher-dimensional feature space to project a face image onto approximated kernel eigenfaces. As long as the feature spaces retain their distinctive characteristics, a reduced number of SIFT keypoints is detected for a number of principal components, and the keypoints are then fused using a user-dependent weighting scheme to form a feature vector. The proposed method is tested on the ORL face database, and the efficacy of the system is demonstrated by the test results obtained with the proposed algorithm.

  14. Improving cross-modal face recognition using polarimetric imaging.

    PubMed

    Short, Nathaniel; Hu, Shuowen; Gurram, Prudhvi; Gurton, Kristan; Chan, Alex

    2015-03-15

    We investigate the performance of polarimetric imaging in the long-wave infrared (LWIR) spectrum for cross-modal face recognition. For this work, polarimetric imagery is generated as stacks of three components: the conventional thermal intensity image (referred to as S0), and the two Stokes images, S1 and S2, which contain combinations of different polarizations. The proposed face recognition algorithm extracts and combines local gradient magnitude and orientation information from S0, S1, and S2 to generate a robust feature set that is well-suited for cross-modal face recognition. Initial results show that polarimetric LWIR-to-visible face recognition achieves an 18% increase in Rank-1 identification rate compared to conventional LWIR-to-visible face recognition. We conclude that a substantial improvement in automatic face recognition performance can be achieved by exploiting the polarization-state of radiance, as compared to using conventional thermal imagery. PMID:25768137
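
    An illustrative sketch of the feature-extraction idea: compute local gradient magnitude and orientation statistics (here via HOG) on each of the S0, S1 and S2 polarimetric components and concatenate them into one descriptor. The authors' exact descriptor and any fusion weighting are not reproduced.

```python
import numpy as np
from skimage.feature import hog

def polarimetric_descriptor(s0, s1, s2):
    """s0, s1, s2: equally sized 2D arrays (Stokes components of one face)."""
    feats = [hog(img, orientations=8, pixels_per_cell=(16, 16),
                 cells_per_block=(2, 2)) for img in (s0, s1, s2)]
    # one combined gradient-based feature vector for cross-modal matching
    return np.concatenate(feats)
```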

  15. Fast face recognition by using an inverted index

    NASA Astrophysics Data System (ADS)

    Herrmann, Christian; Beyerer, Jürgen

    2015-02-01

    This contribution addresses the task of searching for faces in large video datasets. Despite vast progress in the field, face recognition remains a challenge for uncontrolled large-scale applications like searching for persons in surveillance footage or internet videos. While current productive systems focus on the best-shot approach, where only one representative frame from a given face track is selected, thus sacrificing recognition performance, systems achieving state-of-the-art recognition performance, like the recently published DeepFace, ignore recognition speed, which makes them impractical for large-scale applications. We suggest a set of measures to address the problem. First, taking the feature location into account allows the extracted features to be collected into corresponding sets. Second, the inverted index approach, which became popular in the area of image retrieval, is applied to these feature sets. A face track is thus described by a set of indexed local visual words, which enables a fast search. In this way, all information from a face track is used, which allows better recognition performance than best-shot approaches, while the inverted index permits consistently high recognition speeds. Evaluation on a dataset of several thousand videos shows the validity of the proposed approach.
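
    As a rough illustration of the inverted-index idea, assuming local face features have already been quantized into visual-word ids (the quantizer, track ids and scoring below are hypothetical), a toy index over face tracks might look like this in Python:

      from collections import defaultdict

      class FaceTrackIndex:
          # toy inverted index: visual word id -> set of face-track ids containing it
          def __init__(self):
              self.postings = defaultdict(set)

          def add_track(self, track_id, words):
              for w in set(words):
                  self.postings[w].add(track_id)

          def query(self, words, top_k=5):
              # score tracks by how many visual words they share with the probe track
              scores = defaultdict(int)
              for w in set(words):
                  for track_id in self.postings.get(w, ()):
                      scores[track_id] += 1
              return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

      index = FaceTrackIndex()
      index.add_track("video1_track3", [12, 87, 87, 203])
      index.add_track("video2_track1", [12, 45, 301])
      print(index.query([12, 87, 999]))  # video1_track3 ranks first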

  16. Robust textural features for real time face recognition

    NASA Astrophysics Data System (ADS)

    Cui, Chen; Asari, Vijayan K.; Braun, Andrew D.

    2015-03-01

    Automatic face recognition in real life environment is challenged by various issues such as the object motion, lighting conditions, poses and expressions. In this paper, we present the development of a system based on a refined Enhanced Local Binary Pattern (ELBP) feature set and a Support Vector Machine (SVM) classifier to perform face recognition in a real life environment. Instead of counting the number of 1's in ELBP, we use the 8-bit code of the thresholded data as per the ELBP rule, and then binarize the image with a predefined threshold value, removing the small connections on the binarized image. The proposed system is currently trained with several people's face images obtained from video sequences captured by a surveillance camera. One test set contains the disjoint images of the trained people's faces to test the accuracy and the second test set contains the images of non-trained people's faces to test the percentage of the false positives. The recognition rate among 570 images of 9 trained faces is around 94%, and the false positive rate with 2600 images of 34 non-trained faces is around 1%. Research work is progressing for the recognition of partially occluded faces as well. An appropriate weighting strategy will be applied to the different parts of the face area to achieve a better performance.
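
    The refined ELBP rule and the removal of small connections are specific to the paper, but a hedged NumPy sketch of the underlying steps (an 8-bit local binary code per pixel followed by a fixed-threshold binarization) is shown below; the neighbour ordering and the threshold value are assumptions for illustration only.

      import numpy as np

      def lbp_codes(img):
          # standard 8-neighbour LBP: threshold each neighbour against the centre pixel
          # and pack the results into an 8-bit code (the paper's refined ELBP rule differs)
          img = img.astype(float)
          c = img[1:-1, 1:-1]
          offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
          code = np.zeros(c.shape, dtype=np.uint8)
          for bit, (dy, dx) in enumerate(offsets):
              neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
              code |= (neigh >= c).astype(np.uint8) << bit
          return code

      def binarize(code_img, threshold=128):
          # binarize the LBP code image with a predefined threshold
          return (code_img >= threshold).astype(np.uint8)

      face = np.random.default_rng(1).integers(0, 256, size=(32, 32))
      print(binarize(lbp_codes(face)).mean())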

  17. Newborns' Face Recognition over Changes in Viewpoint

    ERIC Educational Resources Information Center

    Turati, Chiara; Bulf, Hermann; Simion, Francesca

    2008-01-01

    The study investigated the origins of the ability to recognize faces despite rotations in depth. Four experiments are reported that tested, using the habituation technique, whether 1-to-3-day-old infants are able to recognize the invariant aspects of a face over changes in viewpoint. Newborns failed to recognize facial perceptual invariances…

  18. Error Rates in Users of Automatic Face Recognition Software

    PubMed Central

    White, David; Dunn, James D.; Schmid, Alexandra C.; Kemp, Richard I.

    2015-01-01

    In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated ‘candidate lists’ selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared performance of student participants to trained passport officers–who use the system in their daily work–and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced “facial examiners” outperformed these groups by 20 percentage points. We conclude that human performance curtails accuracy of face recognition systems–potentially reducing benchmark estimates by 50% in operational settings. Mere practise does not attenuate these limits, but superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems. PMID:26465631

  19. Developmental Changes in Face Recognition during Childhood: Evidence from Upright and Inverted Faces

    ERIC Educational Resources Information Center

    de Heering, Adelaide; Rossion, Bruno; Maurer, Daphne

    2012-01-01

    Adults are experts at recognizing faces but there is controversy about how this ability develops with age. We assessed 6- to 12-year-olds and adults using a digitized version of the Benton Face Recognition Test, a sensitive tool for assessing face perception abilities. Children's response times for correct responses did not decrease between ages 6…

  20. Face Engagement during Infancy Predicts Later Face Recognition Ability in Younger Siblings of Children with Autism

    ERIC Educational Resources Information Center

    de Klerk, Carina C. J. M.; Gliga, Teodora; Charman, Tony; Johnson, Mark H.

    2014-01-01

    Face recognition difficulties are frequently documented in children with autism spectrum disorders (ASD). It has been hypothesized that these difficulties result from a reduced interest in faces early in life, leading to decreased cortical specialization and atypical development of the neural circuitry for face processing. However, a recent study…

  1. Cross-modal face recognition using multi-matcher face scores

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2015-05-01

    The performance of face recognition can be improved using information fusion of multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, a probe face is a thermal image (especially at nighttime), while only visible face images are available in the gallery database. Matching a thermal probe face onto the visible gallery faces requires cross-modal matching approaches. A few such studies were implemented in facial feature space with medium recognition performance. In this paper, we propose a cross-modal recognition approach, where multimodal faces are cross-matched in feature space and the recognition performance is enhanced with stereo fusion at image, feature and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal images) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed with three cross-matched face scores from the aforementioned three algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logical regression [BLR]) is trained and then tested with the score vectors using 10-fold cross-validation. The proposed approach was validated with a multispectral stereo face dataset from 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84%, FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces onto the fused visible faces by using three face scores and the BLR classifier.
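
    A minimal sketch of the final classification stage, assuming synthetic three-dimensional score vectors and scikit-learn (LogisticRegression standing in for the BLR classifier), is given below; the score distributions are invented for illustration only.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      # one synthetic score per matching algorithm; genuine pairs score higher than impostors
      genuine = rng.normal(loc=0.7, scale=0.1, size=(500, 3))
      impostor = rng.normal(loc=0.4, scale=0.1, size=(500, 3))
      X = np.vstack([genuine, impostor])
      y = np.concatenate([np.ones(500), np.zeros(500)])

      clf = LogisticRegression()                    # stand-in for the BLR classifier
      scores = cross_val_score(clf, X, y, cv=10)    # 10-fold cross-validation
      print(f"mean accuracy: {scores.mean():.3f}")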

  2. Face-Recognition Memory: Implications for Children's Eyewitness Testimony.

    ERIC Educational Resources Information Center

    Chance, June E.; Goldstein, Alvin G.

    1984-01-01

    Reviews studies of face-recognition memory and considers implications for assessing the dependability of children's performances as eyewitnesses. Considers personal factors (age, intellectual differences, and gender) and situational factors (familiarity of face, retention interval, and others). Also identifies developmental questions for future…

  3. Supervised Filter Learning for Representation Based Face Recognition

    PubMed Central

    Bi, Chao; Zhang, Lei; Qi, Miao; Zheng, Caixia; Yi, Yugen; Wang, Jianzhong; Zhang, Baoxue

    2016-01-01

    Representation based classification methods, such as Sparse Representation Classification (SRC) and Linear Regression Classification (LRC) have been developed for face recognition problem successfully. However, most of these methods use the original face images without any preprocessing for recognition. Thus, their performances may be affected by some problematic factors (such as illumination and expression variances) in the face images. In order to overcome this limitation, a novel supervised filter learning algorithm is proposed for representation based face recognition in this paper. The underlying idea of our algorithm is to learn a filter so that the within-class representation residuals of the faces' Local Binary Pattern (LBP) features are minimized and the between-class representation residuals of the faces' LBP features are maximized. Therefore, the LBP features of filtered face images are more discriminative for representation based classifiers. Furthermore, we also extend our algorithm for heterogeneous face recognition problem. Extensive experiments are carried out on five databases and the experimental results verify the efficacy of the proposed algorithm. PMID:27416030

  4. Development of Face Recognition in Infant Chimpanzees (Pan Troglodytes)

    ERIC Educational Resources Information Center

    Myowa-Yamakoshi, M.; Yamaguchi, M.K.; Tomonaga, M.; Tanaka, M.; Matsuzawa, T.

    2005-01-01

    In this paper, we assessed the developmental changes in face recognition by three infant chimpanzees aged 1-18 weeks, using preferential-looking procedures that measured the infants' eye- and head-tracking of moving stimuli. In Experiment 1, we prepared photographs of the mother of each infant and an ''average'' chimpanzee face using…

  5. Supervised Filter Learning for Representation Based Face Recognition.

    PubMed

    Bi, Chao; Zhang, Lei; Qi, Miao; Zheng, Caixia; Yi, Yugen; Wang, Jianzhong; Zhang, Baoxue

    2016-01-01

    Representation based classification methods, such as Sparse Representation Classification (SRC) and Linear Regression Classification (LRC) have been developed for face recognition problem successfully. However, most of these methods use the original face images without any preprocessing for recognition. Thus, their performances may be affected by some problematic factors (such as illumination and expression variances) in the face images. In order to overcome this limitation, a novel supervised filter learning algorithm is proposed for representation based face recognition in this paper. The underlying idea of our algorithm is to learn a filter so that the within-class representation residuals of the faces' Local Binary Pattern (LBP) features are minimized and the between-class representation residuals of the faces' LBP features are maximized. Therefore, the LBP features of filtered face images are more discriminative for representation based classifiers. Furthermore, we also extend our algorithm for heterogeneous face recognition problem. Extensive experiments are carried out on five databases and the experimental results verify the efficacy of the proposed algorithm. PMID:27416030

  6. The Development of Spatial Frequency Biases in Face Recognition

    ERIC Educational Resources Information Center

    Leonard, Hayley C.; Karmiloff-Smith, Annette; Johnson, Mark H.

    2010-01-01

    Previous research has suggested that a mid-band of spatial frequencies is critical to face recognition in adults, but few studies have explored the development of this bias in children. We present a paradigm adapted from the adult literature to test spatial frequency biases throughout development. Faces were presented on a screen with particular…

  7. Fusion of visible and infrared imagery for face recognition

    NASA Astrophysics Data System (ADS)

    Chen, Xuerong; Jing, Zhongliang; Sun, Shaoyuan; Xiao, Gang

    2004-12-01

    In recent years face recognition has received substantial attention, but it remains very challenging in real applications. Despite the variety of approaches and tools studied, face recognition is not accurate or robust enough to be used in uncontrolled environments. Infrared (IR) imagery of human faces offers a promising alternative to visible imagery; however, IR has its own limitations. In this paper, a scheme to fuse information from the two modalities is proposed. The scheme is based on eigenfaces and a probabilistic neural network (PNN), using the fuzzy integral to fuse the objective evidence supplied by each modality. Recognition rate is used to evaluate the fusion scheme. Experimental results show that the scheme improves recognition performance substantially.

  8. Recognition memory in developmental prosopagnosia: electrophysiological evidence for abnormal routes to face recognition

    PubMed Central

    Burns, Edwin J.; Tree, Jeremy J.; Weidemann, Christoph T.

    2014-01-01

    Dual process models of recognition memory propose two distinct routes for recognizing a face: recollection and familiarity. Recollection is characterized by the remembering of some contextual detail from a previous encounter with a face whereas familiarity is the feeling of finding a face familiar without any contextual details. The Remember/Know (R/K) paradigm is thought to index the relative contributions of recollection and familiarity to recognition performance. Despite researchers measuring face recognition deficits in developmental prosopagnosia (DP) through a variety of methods, none have considered the distinct contributions of recollection and familiarity to recognition performance. The present study examined recognition memory for faces in eight individuals with DP and a group of controls using an R/K paradigm while recording electroencephalogram (EEG) data at the scalp. Those with DP were found to produce fewer correct “remember” responses and more false alarms than controls. EEG results showed that posterior “remember” old/new effects were delayed and restricted to the right posterior (RP) area in those with DP in comparison to the controls. A posterior “know” old/new effect commonly associated with familiarity for faces was only present in the controls whereas individuals with DP exhibited a frontal “know” old/new effect commonly associated with words, objects and pictures. These results suggest that individuals with DP do not utilize normal face-specific routes when making face recognition judgments but instead process faces using a pathway more commonly associated with objects. PMID:25177283

  9. The own-age face recognition bias is task dependent.

    PubMed

    Proietti, Valentina; Macchi Cassia, Viola; Mondloch, Catherine J

    2015-08-01

    The own-age bias (OAB) in face recognition (more accurate recognition of own-age than other-age faces) is robust among young adults but not older adults. We investigated the OAB under two different task conditions. In Experiment 1 young and older adults (who reported more recent experience with own than other-age faces) completed a match-to-sample task with young and older adult faces; only young adults showed an OAB. In Experiment 2 young and older adults completed an identity detection task in which we manipulated the identity strength of target and distracter identities by morphing each face with an average face in 20% steps. Accuracy increased with identity strength and facial age influenced older adults' (but not younger adults') strategy, but there was no evidence of an OAB. Collectively, these results suggest that the OAB depends on task demands and may be absent when searching for one identity. PMID:25491773

  10. Sparse representation based face recognition using weighted regions

    NASA Astrophysics Data System (ADS)

    Bilgazyev, Emil; Yeniaras, E.; Uyanik, I.; Unan, Mahmut; Leiss, E. L.

    2013-12-01

    Face recognition is a challenging research topic, especially when the training (gallery) and recognition (probe) images are acquired using different cameras under varying conditions. Even a small noise or occlusion in the images can compromise the accuracy of recognition. Lately, sparse encoding based classification algorithms gave promising results for such uncontrollable scenarios. In this paper, we introduce a novel methodology by modeling the sparse encoding with weighted patches to increase the robustness of face recognition even further. In the training phase, we define a mask (i.e., weight matrix) using a sparse representation selecting the facial regions, and in the recognition phase, we perform comparison on selected facial regions. The algorithm was evaluated both quantitatively and qualitatively using two comprehensive surveillance facial image databases, i.e., SCface and MFPV, with the results clearly superior to common state-of-the-art methodologies in different scenarios.

  11. Individual differences in cortical face selectivity predict behavioral performance in face recognition.

    PubMed

    Huang, Lijie; Song, Yiying; Li, Jingguang; Zhen, Zonglei; Yang, Zetian; Liu, Jia

    2014-01-01

    In functional magnetic resonance imaging studies, object selectivity is defined as a higher neural response to an object category than other object categories. Importantly, object selectivity is widely considered as a neural signature of a functionally-specialized area in processing its preferred object category in the human brain. However, the behavioral significance of the object selectivity remains unclear. In the present study, we used the individual differences approach to correlate participants' face selectivity in the face-selective regions with their behavioral performance in face recognition measured outside the scanner in a large sample of healthy adults. Face selectivity was defined as the z score of activation with the contrast of faces vs. non-face objects, and the face recognition ability was indexed as the normalized residual of the accuracy in recognizing previously-learned faces after regressing out that for non-face objects in an old/new memory task. We found that the participants with higher face selectivity in the fusiform face area (FFA) and the occipital face area (OFA), but not in the posterior part of the superior temporal sulcus (pSTS), possessed higher face recognition ability. Importantly, the association of face selectivity in the FFA and face recognition ability cannot be accounted for by FFA response to objects or behavioral performance in object recognition, suggesting that the association is domain-specific. Finally, the association is reliable, confirmed by the replication from another independent participant group. In sum, our finding provides empirical evidence on the validity of using object selectivity as a neural signature in defining object-selective regions in the human brain. PMID:25071513
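
    The face recognition ability index described above can be illustrated with a small NumPy sketch: regress face accuracy on object accuracy across participants and keep the standardized residual. The data below are synthetic, and the authors' exact normalization may differ.

      import numpy as np

      rng = np.random.default_rng(2)
      object_acc = rng.uniform(0.6, 0.95, size=200)             # accuracy for non-face objects
      face_acc = 0.5 * object_acc + rng.normal(0, 0.05, 200)    # accuracy for faces

      # regress face accuracy on object accuracy, then standardize the residual
      slope, intercept = np.polyfit(object_acc, face_acc, deg=1)
      residual = face_acc - (slope * object_acc + intercept)
      face_ability = (residual - residual.mean()) / residual.std()
      print(face_ability[:5])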

  12. [Neural basis of self-face recognition: social aspects].

    PubMed

    Sugiura, Motoaki

    2012-07-01

    Considering the importance of the face in social survival and evidence from evolutionary psychology of visual self-recognition, it is reasonable that we expect neural mechanisms for higher social-cognitive processes to underlie self-face recognition. A decade of neuroimaging studies so far has, however, not provided an encouraging finding in this respect. Self-face specific activation has typically been reported in the areas for sensory-motor integration in the right lateral cortices. This observation appears to reflect the physical nature of the self-face, whose representation is developed via the detection of contingency between one's own action and sensory feedback. We have recently revealed that the medial prefrontal cortex, implicated in socially nuanced self-referential process, is activated during self-face recognition under a rich social context where multiple other faces are available for reference. The posterior cingulate cortex has also exhibited this activation modulation, and in a separate experiment showed a response to an attractively manipulated self-face, suggesting its relevance to positive self-value. Furthermore, the regions in the right lateral cortices typically showing self-face-specific activation have responded also to the face of one's close friend under the rich social context. This observation is potentially explained by the fact that the contingency detection for physical self-recognition also plays a role in physical social interaction, which characterizes the representation of personally familiar people. These findings demonstrate that neuroscientific exploration reveals multiple facets of the relationship between self-face recognition and social-cognitive process, and that technically the manipulation of social context is key to its success. PMID:22764347

  13. Robust Point Set Matching for Partial Face Recognition.

    PubMed

    Weng, Renliang; Lu, Jiwen; Tan, Yap-Peng

    2016-03-01

    Over the past three decades, a number of face recognition methods have been proposed in computer vision, and most of them use holistic face images for person identification. In many real-world scenarios, especially some unconstrained environments, human faces might be occluded by other objects, and it is difficult to obtain fully holistic face images for recognition. To address this, we propose a new partial face recognition approach to recognize persons of interest from their partial faces. Given a gallery image and a probe face patch, we first detect keypoints and extract their local textural features. Then, we propose a robust point set matching method to discriminatively match these two extracted local feature sets, where both the textural information and geometrical information of local features are explicitly used for matching simultaneously. Finally, the similarity of the two faces is obtained from the distance between the two aligned feature sets. Experimental results on four public face data sets show the effectiveness of the proposed approach. PMID:26761775

  14. Recognition of face and non-face stimuli in autistic spectrum disorder.

    PubMed

    Arkush, Leo; Smith-Collins, Adam P R; Fiorentini, Chiara; Skuse, David H

    2013-12-01

    The ability to remember faces is critical for the development of social competence. From childhood to adulthood, we acquire a high level of expertise in the recognition of facial images, and neural processes become dedicated to sustaining competence. Many people with autism spectrum disorder (ASD) have poor face recognition memory; changes in hairstyle or other non-facial features in an otherwise familiar person affect their recollection skills. The observation implies that they may not use the configuration of the inner face to achieve memory competence, but bolster performance in other ways. We aimed to test this hypothesis by comparing the performance of a group of high-functioning unmedicated adolescents with ASD and a matched control group on a "surprise" face recognition memory task. We compared their memory for unfamiliar faces with their memory for images of houses. To evaluate the role that is played by peripheral cues in assisting recognition memory, we cropped both sets of pictures, retaining only the most salient central features. ASD adolescents had poorer recognition memory for faces than typical controls, but their recognition memory for houses was unimpaired. Cropping images of faces did not disproportionately influence their recall accuracy, relative to controls. House recognition skills (cropped and uncropped) were similar in both groups. In the ASD group only, performance on both sets of task was closely correlated, implying that memory for faces and other complex pictorial stimuli is achieved by domain-general (non-dedicated) cognitive mechanisms. Adolescents with ASD apparently do not use domain-specialized processing of inner facial cues to support face recognition memory. PMID:23894016

  15. A Benchmark and Comparative Study of Video-Based Face Recognition on COX Face Database.

    PubMed

    Huang, Zhiwu; Shan, Shiguang; Wang, Ruiping; Zhang, Haihong; Lao, Shihong; Kuerban, Alifu; Chen, Xilin

    2015-12-01

    Face recognition with still face images has been widely studied, while the research on video-based face recognition is relatively inadequate, especially in terms of benchmark datasets and comparisons. Real-world video-based face recognition applications require techniques for three distinct scenarios: 1) Video-to-Still (V2S); 2) Still-to-Video (S2V); and 3) Video-to-Video (V2V), respectively, taking video or still image as query or target. To the best of our knowledge, few datasets and evaluation protocols have been benchmarked for all the three scenarios. In order to facilitate the study of this specific topic, this paper contributes a benchmarking and comparative study based on a newly collected still/video face database, named COX Face DB. Specifically, we make three contributions. First, we collect and release a large-scale still/video face database to simulate video surveillance with three different video-based face recognition scenarios (i.e., V2S, S2V, and V2V). Second, for benchmarking the three scenarios designed on our database, we review and experimentally compare a number of existing set-based methods. Third, we further propose a novel Point-to-Set Correlation Learning (PSCL) method, and experimentally show that it can be used as a promising baseline method for V2S/S2V face recognition on COX Face DB. Extensive experimental results clearly demonstrate that video-based face recognition needs more effort, and our COX Face DB is a good benchmark database for evaluation. PMID:26513790

  16. Framework for performance evaluation of face recognition algorithms

    NASA Astrophysics Data System (ADS)

    Black, John A., Jr.; Gargesha, Madhusudhana; Kahol, Kanav; Kuchi, Prem; Panchanathan, Sethuraman

    2002-07-01

    Face detection and recognition is becoming increasingly important in the contexts of surveillance, credit card fraud detection, assistive devices for the visually impaired, etc. A number of face recognition algorithms have been proposed in the literature. The availability of a comprehensive face database is crucial to test the performance of these face recognition algorithms. However, while existing publicly-available face databases contain face images with a wide variety of pose angles, illumination angles, gestures, face occlusions, and illuminant colors, these images have not been adequately annotated, thus limiting their usefulness for evaluating the relative performance of face detection algorithms. For example, many of the images in existing databases are not annotated with the exact pose angles at which they were taken. In order to compare the performance of various face recognition algorithms presented in the literature there is a need for a comprehensive, systematically annotated database populated with face images that have been captured (1) at a variety of pose angles (to permit testing of pose invariance), (2) with a wide variety of illumination angles (to permit testing of illumination invariance), and (3) under a variety of commonly encountered illumination color temperatures (to permit testing of illumination color invariance). In this paper, we present a methodology for creating such an annotated database that employs a novel set of apparatus for the rapid capture of face images from a wide variety of pose angles and illumination angles. Four different types of illumination are used, including daylight, skylight, incandescent and fluorescent. The entire set of images, as well as the annotations and the experimental results, is being placed in the public domain, and made available for download over the World Wide Web.

  17. Real-time optoelectronic morphological processor for human face recognition

    NASA Astrophysics Data System (ADS)

    Liu, Haisong; Wu, Minxian; Jin, Guofan; Cheng, Gang; He, Qingsheng

    1998-01-01

    Many commercial and law enforcement applications of face recognition need to be high-speed and real-time, such as passing through customs quickly while ensuring security. However, face recognition by using computers only is time-consuming due to the intensive calculation task. Recently optical implementations of real-time face recognition have attracted much attention. In this paper, a real-time optoelectronic morphological processor for face recognition is presented. It is based on original-complementary composite encoding hit-or-miss transformation, which combines the foreground and background of an image into a whole. One liquid-crystal display panel is used as two real-time SLMs for both stored images and face images to be recognized, which are of 256 × 256 pixels. A speed of 40 frames/s and four-channel recognition ability have been achieved. The experimental results have shown that the processor has an accuracy over 90% and error tolerance to rotation up to 8 deg, to noise disturbance up to 25%, and to image loss up to 40%.
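
    The processor implements an original-complementary composite encoding of the hit-or-miss transformation optically; as a purely digital, hedged illustration of the plain binary hit-or-miss operation it builds on, one can use SciPy (the image and template shapes below are arbitrary toy patterns, not the stored face templates).

      import numpy as np
      from scipy import ndimage

      image = np.zeros((10, 10), dtype=int)
      image[3:6, 3:6] = 1                       # a 3x3 foreground blob

      structure1 = np.ones((3, 3), dtype=int)   # pattern that must fit the foreground
      structure2 = np.zeros((5, 5), dtype=int)  # ring that must fit the background
      structure2[0, :] = structure2[-1, :] = 1
      structure2[:, 0] = structure2[:, -1] = 1

      hits = ndimage.binary_hit_or_miss(image, structure1=structure1, structure2=structure2)
      print(np.argwhere(hits))                  # location(s) where both patterns match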

  18. Preadolescents' recognition of faces of unfamiliar peers: the effect of attractiveness of faces.

    PubMed

    Mallet, Pascal; Lallemand, Noëlle

    2003-12-01

    The authors examined preadolescents' ability to recognize faces of unfamiliar peers according to their attractiveness. They hypothesized that highly attractive faces would be less accurately recognized than moderately attractive faces because the former are more typical. In Experiment 1, 106 participants (M age = 10 years) were asked to recognize faces of unknown peers who varied in gender and attractiveness (high- vs. medium-attractiveness). Results showed that attractiveness enhanced the accuracy of recognition for boys' faces and impaired recognition of girls' faces. The same interaction was found in Experiment 2, in which 92 participants (M age = 12 years) were tested for their recognition of another set of faces of unfamiliar peers. The authors conducted Experiment 3 to examine whether the reason for that interaction is that high- and medium-attractive girls' faces differ more in typicality than do boys' faces. The effect size of attractiveness on typicality was similar for boys' and girls' faces. The overall results are discussed with reference to the development of face encoding and biological gender differences with respect to the typicality of faces during preadolescence. PMID:14719778

  19. Face recognition using facial expression: a novel approach

    NASA Astrophysics Data System (ADS)

    Singh, Deepak Kumar; Gupta, Priya; Tiwary, U. S.

    2008-04-01

    Facial expressions are undoubtedly the most effective nonverbal communication. The face has always been the equation of a person's identity. The face draws the demarcation line between identity and extinction. Each line on the face adds an attribute to the identity. These lines become prominent when we experience an emotion and these lines do not change completely with age. In this paper we have proposed a new technique for face recognition which focuses on the facial expressions of the subject to identify his face. This is a grey area on which not much light has been thrown earlier. According to earlier research, it is difficult to alter the natural expression. So our technique will be beneficial for identifying occluded or intentionally disguised faces. The test results of the experiments conducted prove that this technique will give a new direction in the field of face recognition. This technique will provide a strong base to the area of face recognition and will be used as the core method for critical defense security related issues.

  20. Face Recognition by Metropolitan Police Super-Recognisers

    PubMed Central

    Robertson, David J.; Noyes, Eilidh; Dowsett, Andrew J.; Jenkins, Rob; Burton, A. Mike

    2016-01-01

    Face recognition is used to prove identity across a wide variety of settings. Despite this, research consistently shows that people are typically rather poor at matching faces to photos. Some professional groups, such as police and passport officers, have been shown to perform just as poorly as the general public on standard tests of face recognition. However, face recognition skills are subject to wide individual variation, with some people showing exceptional ability—a group that has come to be known as ‘super-recognisers’. The Metropolitan Police Force (London) recruits ‘super-recognisers’ from within its ranks, for deployment on various identification tasks. Here we test four working super-recognisers from within this police force, and ask whether they are really able to perform at levels above control groups. We consistently find that the police ‘super-recognisers’ perform at well above normal levels on tests of unfamiliar and familiar face matching, with degraded as well as high quality images. Recruiting employees with high levels of skill in these areas, and allocating them to relevant tasks, is an efficient way to overcome some of the known difficulties associated with unfamiliar face recognition. PMID:26918457

  1. Face Recognition by Metropolitan Police Super-Recognisers.

    PubMed

    Robertson, David J; Noyes, Eilidh; Dowsett, Andrew J; Jenkins, Rob; Burton, A Mike

    2016-01-01

    Face recognition is used to prove identity across a wide variety of settings. Despite this, research consistently shows that people are typically rather poor at matching faces to photos. Some professional groups, such as police and passport officers, have been shown to perform just as poorly as the general public on standard tests of face recognition. However, face recognition skills are subject to wide individual variation, with some people showing exceptional ability-a group that has come to be known as 'super-recognisers'. The Metropolitan Police Force (London) recruits 'super-recognisers' from within its ranks, for deployment on various identification tasks. Here we test four working super-recognisers from within this police force, and ask whether they are really able to perform at levels above control groups. We consistently find that the police 'super-recognisers' perform at well above normal levels on tests of unfamiliar and familiar face matching, with degraded as well as high quality images. Recruiting employees with high levels of skill in these areas, and allocating them to relevant tasks, is an efficient way to overcome some of the known difficulties associated with unfamiliar face recognition. PMID:26918457

  2. Familiarity is not notoriety: phenomenological accounts of face recognition

    PubMed Central

    Liccione, Davide; Moruzzi, Sara; Rossi, Federica; Manganaro, Alessia; Porta, Marco; Nugrahaningsih, Nahumi; Caserio, Valentina; Allegri, Nicola

    2014-01-01

    From a phenomenological perspective, faces are perceived differently from objects as their perception always involves the possibility of a relational engagement (Bredlau, 2011). This is especially true for familiar faces, i.e., faces of people with a history of real relational engagements. Similarly, the valence of emotional expressions assumes a key role, as it defines the sense and direction of this engagement. Following these premises, the aim of the present study is to demonstrate that face recognition is facilitated by at least two variables, familiarity and emotional expression, and that perception of familiar faces is not influenced by orientation. In order to verify this hypothesis, we implemented a 3 × 3 × 2 factorial design, showing 17 healthy subjects three types of faces (unfamiliar, personally familiar, famous) characterized by three different emotional expressions (happy, angry/sad, neutral) and in two different orientations (upright vs. inverted). We showed every subject a total of 180 faces with the instructions to give a familiarity judgment. Reaction times (RTs) were recorded and we found that the recognition of a face is facilitated by personal familiarity and emotional expression, and that this process is otherwise independent of a cognitive elaboration of stimuli and remains stable despite orientation. These results highlight the need to make a distinction between famous and personally familiar faces when studying face perception and to consider its historical aspects from a phenomenological point of view. PMID:25225476

  3. Eye contrast polarity is critical for face recognition by infants.

    PubMed

    Otsuka, Yumiko; Motoyoshi, Isamu; Hill, Harold C; Kobayashi, Megumi; Kanazawa, So; Yamaguchi, Masami K

    2013-07-01

    Just as faces share the same basic arrangement of features, with two eyes above a nose above a mouth, human eyes all share the same basic contrast polarity relations, with a sclera lighter than an iris and a pupil, and this is unique among primates. The current study examined whether this bright-dark relationship of sclera to iris plays a critical role in face recognition from early in development. Specifically, we tested face discrimination in 7- and 8-month-old infants while independently manipulating the contrast polarity of the eye region and of the rest of the face. This gave four face contrast polarity conditions: fully positive condition, fully negative condition, positive face with negated eyes ("negative eyes") condition, and negated face with positive eyes ("positive eyes") condition. In a familiarization and novelty preference procedure, we found that 7- and 8-month-olds could discriminate between faces only when the contrast polarity of the eyes was preserved (positive) and that this did not depend on the contrast polarity of the rest of the face. This demonstrates the critical role of eye contrast polarity for face recognition in 7- and 8-month-olds and is consistent with previous findings for adults. PMID:23499321

  4. Embedded wavelet-based face recognition under variable position

    NASA Astrophysics Data System (ADS)

    Cotret, Pascal; Chevobbe, Stéphane; Darouich, Mehdi

    2015-02-01

    For several years, face recognition has been a hot topic in the image processing field: this technique is applied in several domains such as CCTV, electronic device unlocking and so on. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of subject position robustness and performance on various systems. The use of the wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale face database B*), that subject position in a 3D space can vary up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on the approximation coefficients of the image wavelet transform: results are still satisfying after 3 levels of decomposition. Furthermore, the face database size can be divided by a factor of 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed; that is the reason why compression techniques such as the wavelet transform are interesting. Furthermore, it leads to a low-complexity face detection stage compliant with the limited computation resources available on such systems. The approach described in this work is tested on three platforms, from a standard x86-based computer to nanocomputers such as the Raspberry Pi and SECO boards. For K = 3 and a database with 40 faces, the mean execution time for one frame is 0.64 ms on an x86-based computer, 9 ms on a SECO board and 26 ms on a Raspberry Pi (Model B).
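
    A hedged sketch of why the storage drops by 2^(2K): keeping only the approximation band halves each image dimension at every level. The Haar-like 2x2 averaging below is a simplification of the wavelet actually used in the paper.

      import numpy as np

      def haar_approximation(img, levels=3):
          # keep only the approximation band: each level averages 2x2 blocks,
          # halving both image dimensions
          a = img.astype(float)
          for _ in range(levels):
              h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2
              a = a[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
          return a

      face = np.random.default_rng(0).random((128, 128))
      approx = haar_approximation(face, levels=3)
      print(face.size / approx.size)   # 64.0, i.e. 2**(2*K) with K = 3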

  5. The hows and whys of face memory: level of construal influences the recognition of human faces

    PubMed Central

    Wyer, Natalie A.; Hollins, Timothy J.; Pahl, Sabine; Roper, Jean

    2015-01-01

    Three experiments investigated the influence of level of construal (i.e., the interpretation of actions in terms of their meaning or their details) on different stages of face memory. We employed a standard multiple-face recognition paradigm, with half of the faces inverted at test. Construal level was manipulated prior to recognition (Experiment 1), during study (Experiment 2) or both (Experiment 3). The results support a general advantage for high-level construal over low-level construal at both study and at test, and suggest that matching processing style between study and recognition has no advantage. These experiments provide additional evidence in support of a link between semantic processing (i.e., construal) and visual (i.e., face) processing. We conclude with a discussion of implications for current theories relating to both construal and face processing. PMID:26500586

  6. Cellular Phone Face Recognition System Based on Optical Phase Correlation

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Sayuri; Ohta, Maiko; Kodate, Kashiko

    We propose a high-security facial recognition system using a cellular phone on the mobile network. The system is composed of a face recognition engine based on optical phase correlation, which uses phase information with emphasis on the Fourier domain, a control server, and the cellular phone, with a compact camera for taking pictures, as a portable terminal. Compared with various correlation methods, our face recognition engine achieved the most accurate result, an EER of less than 1%. By using the JAVA interface on this system, we implemented a stable system for taking pictures, providing functions to prevent spoofing while transferring images. The recognition system was tested on 300 female students and the results proved this system effective.
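
    The optical engine itself cannot be reproduced in software, but the phase-only correlation it relies on can be sketched with NumPy FFTs as below; the peak-height match score and the toy images are illustrative assumptions.

      import numpy as np

      def phase_correlation_peak(img_a, img_b):
          # phase-only correlation: keep only the Fourier phase of the cross spectrum
          # and use the height of the correlation peak as a match score
          fa = np.fft.fft2(img_a.astype(float))
          fb = np.fft.fft2(img_b.astype(float))
          cross = fa * np.conj(fb)
          cross /= np.abs(cross) + 1e-12
          return np.fft.ifft2(cross).real.max()

      rng = np.random.default_rng(3)
      face = rng.random((64, 64))
      shifted = np.roll(face, (2, 5), axis=(0, 1))   # same "face", slightly shifted
      other = rng.random((64, 64))                   # a different "face"
      print(phase_correlation_peak(face, shifted))   # close to 1.0
      print(phase_correlation_peak(face, other))     # much smaller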

  7. The advantages of stereo vision in a face recognition system

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2014-06-01

    Humans can recognize a face with binocular vision, while computers typically use a single face image. It is known that the performance of face recognition (by a computer) can be improved using the score fusion of multimodal images and multiple algorithms. A question is: Can we apply stereo vision to a face recognition system? We know that human binocular vision has many advantages such as stereopsis (3D vision), binocular summation, and singleness of vision including fusion of binocular images (cyclopean image). For face recognition, a 3D face or 3D facial features are typically computed from a pair of stereo images. In human visual processes, the binocular summation and singleness of vision are similar as image fusion processes. In this paper, we propose an advanced face recognition system with stereo imaging capability, which is comprised of two 2-in-1 multispectral (visible and thermal) cameras and three recognition algorithms (circular Gaussian filter, face pattern byte, and linear discriminant analysis [LDA]). Specifically, we present and compare stereo fusion at three levels (images, features, and scores) by using stereo images (from left camera and right camera). Image fusion is achieved with three methods (Laplacian pyramid, wavelet transform, average); feature fusion is done with three logical operations (AND, OR, XOR); and score fusion is implemented with four classifiers (LDA, k-nearest neighbor, support vector machine, binomial logical regression). The system performance is measured by probability of correct classification (PCC) rate (reported as accuracy rate in this paper) and false accept rate (FAR). The proposed approaches were validated with a multispectral stereo face dataset from 105 subjects. Experimental results show that any type of stereo fusion can improve the PCC, meanwhile reduce the FAR. It seems that stereo image/feature fusion is superior to stereo score fusion in terms of recognition performance. Further score fusion after image

  8. 3D face recognition based on a modified ICP method

    NASA Astrophysics Data System (ADS)

    Zhao, Kankan; Xi, Jiangtao; Yu, Yanguang; Chicharo, Joe F.

    2011-11-01

    3D face recognition has gained much attention recently and is widely used in security, identification, and access control systems. The core technique in 3D face recognition is to find the corresponding points in different 3D face images. The classic partial Iterative Closest Point (ICP) method iteratively aligns the two point sets by repeatedly taking the closest points as the corresponding points in each iteration. After several iterations, the corresponding points can be obtained accurately. However, if two 3D face images of the same person differ in scale, the classic partial ICP does not work. In this paper we propose a modified partial Iterative Closest Point (ICP) method in which the scaling effect is considered to achieve 3D face recognition. We design a 3x3 diagonal matrix as the scale matrix in each iteration of the classic partial ICP. The probing face image, which is multiplied by the scale matrix, keeps a similar scale to the reference face image. Therefore, we can accurately determine the corresponding points even when the scales of the probing image and the reference image are different. The 3D face images in our experiments are acquired by a 3D data acquisition system based on Digital Fringe Projection Profilometry (DFPP). The 3D database consists of 30 groups of images; each group contains three images of the same scale, taken of the same person from different views, and the scale may differ between groups. The experimental results show that our proposed method can achieve 3D face recognition, especially when the scales of the probing image and the reference image are different.
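
    A hedged sketch of one such scale-aware ICP iteration (nearest-neighbour correspondences followed by a per-axis diagonal scale and translation fit) is shown below; rotation estimation, which a full ICP would include, is omitted, and the point clouds are synthetic.

      import numpy as np
      from scipy.spatial import cKDTree

      def scaled_icp_step(probe, reference):
          # one simplified iteration: nearest-neighbour correspondences, then a
          # least-squares per-axis scale (the 3x3 diagonal scale matrix) and translation
          _, idx = cKDTree(reference).query(probe)
          matched = reference[idx]
          scale, trans = np.empty(3), np.empty(3)
          for k in range(3):
              x, y = probe[:, k], matched[:, k]
              xc, yc = x - x.mean(), y - y.mean()
              scale[k] = (xc * yc).sum() / (xc * xc).sum()   # least-squares slope
              trans[k] = y.mean() - scale[k] * x.mean()
          return probe * scale + trans, np.diag(scale)

      rng = np.random.default_rng(4)
      reference = rng.random((500, 3))                          # "gallery" 3D face points
      probe = 0.9 * reference + np.array([0.05, -0.03, 0.02])   # same face, rescaled and shifted
      aligned, scale_matrix = scaled_icp_step(probe, reference)
      print(np.round(np.diag(scale_matrix), 3))  # per-axis scale estimated in this iteration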

  9. Image quality-based adaptive illumination normalisation for face recognition

    NASA Astrophysics Data System (ADS)

    Sellahewa, Harin; Jassim, Sabah A.

    2009-05-01

    Automatic face recognition is a challenging task due to intra-class variations. Changes in lighting conditions during enrolment and identification stages contribute significantly to these intra-class variations. A common approach to address the effects of such varying conditions is to pre-process the biometric samples in order to normalise intra-class variations. Histogram equalisation is a widely used illumination normalisation technique in face recognition. However, a recent study has shown that applying histogram equalisation on well-lit face images could lead to a decrease in recognition accuracy. This paper presents a dynamic approach to illumination normalisation, based on face image quality. The quality of a given face image is measured in terms of its luminance distortion by comparing this image against a known reference face image. Histogram equalisation is applied to a probe image if its luminance distortion is higher than a predefined threshold. We tested the proposed adaptive illumination normalisation method on the widely used Extended Yale Face Database B. Identification results demonstrate that our adaptive normalisation produces better identification accuracy compared to the conventional approach where every image is normalised, irrespective of the lighting conditions under which it was acquired.
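
    A minimal sketch of the adaptive rule, assuming the luminance term of the universal image quality index as the distortion measure (the paper's exact measure and threshold may differ), could look like this:

      import numpy as np

      def luminance_quality(img, ref):
          # luminance term of the universal image quality index:
          # 1.0 means the mean luminances match; lower values mean more distortion
          mu_x, mu_y = img.mean(), ref.mean()
          return 2 * mu_x * mu_y / (mu_x ** 2 + mu_y ** 2 + 1e-12)

      def equalize(img):
          # plain histogram equalisation for an 8-bit grayscale image
          hist = np.bincount(img.ravel(), minlength=256)
          cdf = hist.cumsum() / img.size
          return (cdf[img] * 255).astype(np.uint8)

      def adaptive_normalise(probe, reference, threshold=0.98):
          # equalise the probe only when its luminance distortion is high,
          # i.e. when the quality score falls below the threshold
          if luminance_quality(probe, reference) < threshold:
              return equalize(probe)
          return probe

      rng = np.random.default_rng(5)
      reference = rng.integers(80, 180, size=(64, 64)).astype(np.uint8)   # well-lit face
      dark_probe = (reference * 0.3).astype(np.uint8)                     # under-exposed probe
      print(adaptive_normalise(dark_probe, reference).mean())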

  10. A Markov Random Field Groupwise Registration Framework for Face Recognition

    PubMed Central

    Liao, Shu; Shen, Dinggang; Chung, Albert C.S.

    2014-01-01

    In this paper, we propose a new framework for tackling the face recognition problem. The face recognition problem is formulated as a groupwise deformable image registration and feature matching problem. The main contributions of the proposed method lie in the following aspects: (1) Each pixel in a facial image is represented by an anatomical signature obtained from its corresponding most salient scale local region determined by the survival exponential entropy (SEE) information theoretic measure. (2) Based on the anatomical signature calculated from each pixel, a novel Markov random field based groupwise registration framework is proposed to formulate the face recognition problem as a feature guided deformable image registration problem. The similarity between different facial images is measured on the nonlinear Riemannian manifold based on the deformable transformations. (3) The proposed method does not suffer from the generalizability problem which commonly exists in learning-based algorithms. The proposed method has been extensively evaluated on four publicly available databases: FERET, CAS-PEAL-R1, FRGC ver 2.0, and LFW. It is also compared with several state-of-the-art face recognition approaches, and experimental results demonstrate that the proposed method consistently achieves the highest recognition rates among all the methods under comparison. PMID:25506109

  11. Face recognition by using optical correlator with wavelet preprocessing

    NASA Astrophysics Data System (ADS)

    Strzelecki, Jacek; Chalasinska-Macukow, Katarzyna

    2004-08-01

    The method of face recognition by using an optical correlator with wavelet preprocessing is presented. The wavelet transform is used to improve the performance of the standard Vander Lugt correlator with a phase only filter (POF). The influence of various wavelet transforms of images of human faces on the recognition results has been analyzed. The quality of the face recognition process was tested according to two criteria: the peak to correlation energy ratio (PCE), and the discrimination capability (DC). Additionally, proper localization of the correlation peak has been controlled. During the preprocessing step a set of three wavelets -- Mexican hat, Haar, and Gabor wavelets, with various scales was used. In addition, Gabor wavelets were tested for various orientation angles. During the recognition procedure the input scene and POF are transformed by the same wavelet. We show the results of the computer simulation for a variety of images of human faces: original images without any distortions, noisy images, and images with non-uniform light illumination. A comparison of results of recognition obtained with and without wavelet preprocessing is given.

  12. FACELOCK-Lock Control Security System Using Face Recognition-

    NASA Astrophysics Data System (ADS)

    Hirayama, Takatsugu; Iwai, Yoshio; Yachida, Masahiko

    A security system using biometric person authentication technologies is suited to various high-security situations. The technology based on face recognition has advantages such as lower user resistance and lower stress. However, facial appearances change according to facial pose, expression, lighting, and age. We have developed the FACELOCK security system based on our face recognition methods. Our methods are robust to various facial appearances except facial pose. Our system consists of clients and a server. The client communicates with the server through our protocol over a LAN. Users of our system do not need to be careful about their facial appearance.

  13. Maximal likelihood correspondence estimation for face recognition across pose.

    PubMed

    Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang

    2014-10-01

    Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods have been proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems in previous image matching-based correspondence learning methods: 1) failure to fully exploit face-specific structure information in correspondence estimation and 2) failure to learn personalized correspondence for each probe image. To this end, we first build a model, termed as morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on the maximal likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using the linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of the novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in a complex wild environment, i.e., the Labeled Faces in the Wild database. PMID:25163062

  14. The Role of Higher Level Adaptive Coding Mechanisms in the Development of Face Recognition

    ERIC Educational Resources Information Center

    Pimperton, Hannah; Pellicano, Elizabeth; Jeffery, Linda; Rhodes, Gillian

    2009-01-01

    Developmental improvements in face identity recognition ability are widely documented, but the source of children's immaturity in face recognition remains unclear. Differences in the way in which children and adults visually represent faces might underlie immaturities in face recognition. Recent evidence of a face identity aftereffect (FIAE),…

  15. Two dimensional discriminant neighborhood preserving embedding in face recognition

    NASA Astrophysics Data System (ADS)

    Pang, Meng; Jiang, Jifeng; Lin, Chuang; Wang, Binghui

    2015-03-01

    One of the key issues of face recognition is to extract the features of face images. In this paper, we propose a novel method, named two-dimensional discriminant neighborhood preserving embedding (2DDNPE), for image feature extraction and face recognition. 2DDNPE benefits from four techniques, i.e., neighborhood preserving embedding (NPE), locality preserving projection (LPP), image-based projection and the Fisher criterion. Firstly, NPE and LPP are two popular manifold learning techniques which can optimally preserve the local geometry structures of the original samples from different angles. Secondly, image-based projection enables us to directly extract the optimal projection vectors from two-dimensional image matrices rather than vectors, which avoids the small sample size problem as well as preserves useful structural information embedded in the original images. Finally, the Fisher criterion applied in 2DDNPE can boost face recognition rates by minimizing the within-class distance, while maximizing the between-class distance. To evaluate the performance of 2DDNPE, several experiments are conducted on the ORL and Yale face datasets. The results corroborate that 2DDNPE outperforms existing 1D feature extraction methods, such as NPE, LPP, LDA and PCA, across all experiments with respect to recognition rate and training time. 2DDNPE also delivers consistently promising results compared with other competing 2D methods such as 2DNPP, 2DLPP, 2DLDA and 2DPCA.
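
    As a hedged illustration of the image-based projection idea that 2DDNPE (and 2DPCA) build on, the NumPy sketch below computes the image covariance matrix from training image matrices and projects an image onto its leading eigenvectors; it is not the 2DDNPE algorithm itself, and the toy data and component count are assumptions.

      import numpy as np

      def train_2dpca(images, n_components=5):
          # image-based projection in the 2DPCA style: eigenvectors of
          # G = mean over samples of (A - mean)^T (A - mean)
          images = np.asarray(images, dtype=float)
          mean = images.mean(axis=0)
          G = np.zeros((images.shape[2], images.shape[2]))
          for A in images:
              d = A - mean
              G += d.T @ d
          G /= len(images)
          eigvals, eigvecs = np.linalg.eigh(G)              # eigenvalues in ascending order
          return mean, eigvecs[:, ::-1][:, :n_components]   # keep the leading components

      def project(image, mean, components):
          # an h x w image matrix becomes an h x n_components feature matrix
          return (image - mean) @ components

      train = np.random.default_rng(6).random((40, 32, 32))   # 40 toy 32x32 "face" images
      mean, comps = train_2dpca(train, n_components=5)
      print(project(train[0], mean, comps).shape)             # (32, 5)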

  16. Efficient Detection of Occlusion prior to Robust Face Recognition

    PubMed Central

    Dugelay, Jean-Luc

    2014-01-01

    While there has been an enormous amount of research on face recognition under pose/illumination/expression changes and image degradations, problems caused by occlusions have attracted relatively little attention. Facial occlusions, due, for example, to sunglasses, hat/cap, scarf, and beard, can significantly deteriorate the performance of face recognition systems in uncontrolled environments such as video surveillance. The goal of this paper is to explore face recognition in the presence of partial occlusions, with emphasis on real-world scenarios (e.g., sunglasses and scarf). In this paper, we propose an efficient approach which consists of first analysing the presence of potential occlusion on a face and then conducting face recognition on the nonoccluded facial regions based on selective local Gabor binary patterns. Experiments demonstrate that the proposed method outperforms state-of-the-art works including KLD-LGBPHS, S-LNMF, OA-LBP, and RSC. Furthermore, evaluations of the proposed approach under illumination and extreme facial expression changes also give significant results. PMID:24526902

  17. Can Massive but Passive Exposure to Faces Contribute to Face Recognition Abilities?

    ERIC Educational Resources Information Center

    Yovel, Galit; Halsband, Keren; Pelleg, Michel; Farkash, Naomi; Gal, Bracha; Goshen-Gottstein, Yonatan

    2012-01-01

    Recent studies have suggested that individuation of other-race faces is more crucial for enhancing recognition performance than exposure that involves categorization of these faces to an identity-irrelevant criterion. These findings were primarily based on laboratory training protocols that dissociated exposure and individuation by using…

  18. Generating virtual training samples for sparse representation of face images and face recognition

    NASA Astrophysics Data System (ADS)

    Du, Yong; Wang, Yu

    2016-03-01

    There are many challenges in face recognition. In real-world scenes, images of the same face vary with changing illumination, different expressions and poses, various ornaments, or even altered mental status. Limited available training samples cannot sufficiently convey these possible changes in the training phase, and this has become one of the restrictions on improving face recognition accuracy. In this article, we treat the multiplication of two images of a face as a virtual face image to expand the training set and devise a representation-based method to perform face recognition. The generated virtual samples reflect possible appearance and pose variations of the face. By multiplying a training sample with another sample from the same subject, we can strengthen the facial contour feature and greatly suppress the noise. Thus, more essential facial information is retained. Also, uncertainty in the training data is reduced as the number of training samples increases, which is beneficial for the training phase. The devised representation-based classifier uses both the original and the newly generated samples to perform the classification. In the classification phase, we first determine the K nearest training samples for the current test sample by calculating the Euclidean distances between the test sample and the training samples. Then, a linear combination of these selected training samples is used to represent the test sample, and the representation result is used to classify the test sample. The experimental results show that the proposed method outperforms some state-of-the-art face recognition methods.
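
    As a rough illustration of the two stages described above, the sketch below (assuming NumPy, pixel-wise multiplication for the virtual samples, at least two images per subject, and illustrative function names) first augments the training set with products of same-subject images and then classifies a test image via a least-squares combination of its K nearest training samples; it is not the authors' exact formulation.

      import numpy as np

      def add_product_virtual_samples(train, labels):
          # Virtual samples: pixel-wise products of pairs of images from the same
          # subject, rescaled back to [0, 1]; assumes >= 2 images per subject.
          virtual, vlabels = [], []
          for c in np.unique(labels):
              idx = np.flatnonzero(labels == c)
              for a in range(len(idx)):
                  for b in range(a + 1, len(idx)):
                      v = train[idx[a]] * train[idx[b]]
                      virtual.append(v / (v.max() + 1e-8))
                      vlabels.append(c)
          return (np.concatenate([train, np.array(virtual)]),
                  np.concatenate([labels, np.array(vlabels)]))

      def representation_classify(test, train, labels, K=5):
          # K nearest training samples (Euclidean distance), least-squares linear
          # combination, and the class with the smallest residual wins.
          X, y = train.reshape(len(train), -1), test.reshape(-1)
          near = np.argsort(np.linalg.norm(X - y, axis=1))[:K]
          A, near_labels = X[near].T, labels[near]
          coef, *_ = np.linalg.lstsq(A, y, rcond=None)
          residuals = {c: np.linalg.norm(y - A[:, near_labels == c] @ coef[near_labels == c])
                       for c in np.unique(near_labels)}
          return min(residuals, key=residuals.get)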

  19. Face recognition with histograms of fractional differential gradients

    NASA Astrophysics Data System (ADS)

    Yu, Lei; Ma, Yan; Cao, Qi

    2014-05-01

    It has been shown that fractional differentiation can enhance edge information and nonlinearly preserve textural detail in an image. This paper investigates its ability for face recognition and presents a local descriptor called histograms of fractional differential gradients (HFDG) to extract facial visual features. HFDG encodes a face image into gradient patterns using multi-orientation fractional differential masks, from which histograms of gradient directions are computed as the face representation. Experimental results on the Yale, face recognition technology (FERET), Carnegie Mellon University pose, illumination, and expression (CMU PIE), and A. Martinez and R. Benavente (AR) databases validate the feasibility of the proposed method and show that HFDG outperforms local binary patterns (LBP), histograms of oriented gradients (HOG), enhanced local directional patterns (ELDP), and Gabor feature-based methods.
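
    For orientation, the following is a generic histogram-of-gradient-directions sketch (NumPy): it illustrates how per-region direction histograms form a face descriptor, but it substitutes plain first-order differences for the multi-orientation fractional differential masks of HFDG, which are not reproduced here; the grid size and bin count are assumptions.

      import numpy as np

      def gradient_orientation_histograms(image, grid=(4, 4), bins=8):
          # Magnitude-weighted histograms of gradient direction per subregion,
          # concatenated into one descriptor vector.
          gy, gx = np.gradient(image.astype(float))
          mag, ang = np.hypot(gx, gy), np.mod(np.arctan2(gy, gx), 2 * np.pi)
          h, w = image.shape
          feats = []
          for i in range(grid[0]):
              for j in range(grid[1]):
                  rows = slice(i * h // grid[0], (i + 1) * h // grid[0])
                  cols = slice(j * w // grid[1], (j + 1) * w // grid[1])
                  hist, _ = np.histogram(ang[rows, cols], bins=bins,
                                         range=(0, 2 * np.pi), weights=mag[rows, cols])
                  feats.append(hist / (hist.sum() + 1e-10))
          return np.concatenate(feats)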

  20. Holistic face processing can inhibit recognition of forensic facial composites.

    PubMed

    McIntyre, Alex H; Hancock, Peter J B; Frowd, Charlie D; Langton, Stephen R H

    2016-04-01

    Facial composite systems help eyewitnesses to show the appearance of criminals. However, likenesses created by unfamiliar witnesses will not be completely accurate, and people familiar with the target can find them difficult to identify. Faces are processed holistically; we explore whether this impairs identification of inaccurate composite images and whether recognition can be improved. In Experiment 1 (n = 64) an imaging technique was used to make composites of celebrity faces more accurate and identification was contrasted with the original composite images. Corrected composites were better recognized, confirming that errors in production of the likenesses impair identification. The influence of holistic face processing was explored by misaligning the top and bottom parts of the composites (cf. Young, Hellawell, & Hay, 1987). Misalignment impaired recognition of corrected composites but identification of the original, inaccurate composites significantly improved. This effect was replicated with facial composites of noncelebrities in Experiment 2 (n = 57). We conclude that, like real faces, facial composites are processed holistically: recognition is impaired because unlike real faces, composites contain inaccuracies and holistic face processing makes it difficult to perceive identifiable features. This effect was consistent across composites of celebrities and composites of people who are personally familiar. Our findings suggest that identification of forensic facial composites can be enhanced by presenting composites in a misaligned format. PMID:26436334

  1. Efficient live face detection to counter spoof attack in face recognition systems

    NASA Astrophysics Data System (ADS)

    Biswas, Bikram Kumar; Alam, Mohammad S.

    2015-03-01

    Face recognition is a critical tool used in almost all major biometrics-based security systems. However, recognition, authentication and liveness detection of the face of an actual user are a major challenge because an impostor, or a non-live face of the actual user, can be used to spoof the security system. In this research, a robust technique is proposed which detects the liveness of faces in order to counter spoof attacks. The proposed technique uses a three-dimensional (3D) fast Fourier transform to compare the spectral energies of a live face and a fake face in a mathematically selective manner. The mathematical model involves evaluating the energies of selected high-frequency bands of the average power spectra of both live and non-live faces. It also carries out proper recognition and authentication of the face of the actual user using the fringe-adjusted joint transform correlation technique, which has been found to yield the highest correlation output for a match. Experimental tests show that the proposed technique yields excellent results for identifying live faces.
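
    The spectral-energy idea can be sketched as follows (NumPy; the cutoff value, the spherical band shape, and the synthetic frames are assumptions, and the fringe-adjusted joint transform correlation step is not shown): the fraction of 3D power-spectrum energy in high-frequency bands serves as a simple liveness score.

      import numpy as np

      def high_band_energy(frames, cutoff=0.25):
          # Fraction of the 3D power spectrum of a short face-video clip that
          # lies in high-frequency bands (normalized frequency > cutoff).
          volume = np.stack(frames).astype(float)            # (t, h, w)
          power = np.abs(np.fft.fftshift(np.fft.fftn(volume))) ** 2
          t, h, w = power.shape
          zt, zy, zx = np.meshgrid(np.linspace(-1, 1, t),
                                   np.linspace(-1, 1, h),
                                   np.linspace(-1, 1, w), indexing="ij")
          radius = np.sqrt(zt ** 2 + zy ** 2 + zx ** 2)
          return power[radius > cutoff].sum() / power.sum()

      frames = [np.random.default_rng(i).random((64, 64)) for i in range(16)]
      print(high_band_energy(frames))                        # higher values suggest a live face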

  2. Orienting to face expression during encoding improves men's recognition of own gender faces.

    PubMed

    Fulton, Erika K; Bulluck, Megan; Hertzog, Christopher

    2015-10-01

    It is unclear why women have superior episodic memory of faces, but the benefit may be partially the result of women engaging in superior processing of facial expressions. Therefore, we hypothesized that orienting instructions to attend to facial expression at encoding would significantly improve men's memory of faces and possibly reduce gender differences. We directed 203 college students (122 women) to study 120 faces under instructions to orient to either the person's gender or their emotional expression. They later took a recognition test of these faces by either judging whether they had previously studied the same person or that person with the exact same expression; the latter test evaluated recollection of specific facial details. Orienting to facial expressions during encoding significantly improved men's recognition of own-gender faces and eliminated the advantage that women had for male faces under gender orienting instructions. Although gender differences in spontaneous strategy use when orienting to faces cannot fully account for gender differences in face recognition, orienting men to facial expression during encoding is one way to significantly improve their episodic memory for male faces. PMID:26295282

  3. Suitable models for face geometry normalization in facial expression recognition

    NASA Astrophysics Data System (ADS)

    Sadeghi, Hamid; Raie, Abolghasem A.

    2015-01-01

    Recently, facial expression recognition has attracted much attention in machine vision research because of its various applications. Accordingly, many facial expression recognition systems have been proposed. However, the majority of existing systems suffer from a critical problem: geometric variability. It directly affects the performance of geometric feature-based facial expression recognition approaches. Furthermore, it is a crucial challenge in appearance feature-based techniques. This variability appears in both neutral faces and facial expressions. Appropriate face geometry normalization can improve the accuracy of any facial expression recognition system. Therefore, this paper proposes different geometric models or shapes for normalization. Face geometry normalization removes the geometric variability of facial images, and consequently appearance feature extraction methods can accurately represent facial images. Thus, some expression-based geometric models are proposed for facial image normalization. Next, local binary patterns and local phase quantization are used for appearance feature extraction. A combination of an effective geometric normalization with accurate appearance representations results in more than a 4% accuracy improvement compared to several state-of-the-art facial expression recognition methods. Moreover, utilizing facial expression models with larger mouth and eye regions gives higher accuracy due to the importance of these regions in facial expression.

  4. Face Recognition with Multi-Resolution Spectral Feature Images

    PubMed Central

    Sun, Zhan-Li; Lam, Kin-Man; Dong, Zhao-Yang; Wang, Han; Gao, Qing-Wei; Zheng, Chun-Hou

    2013-01-01

    The one-sample-per-person problem has become an active research topic for face recognition in recent years because of its challenges and significance for real-world applications. However, achieving relatively higher recognition accuracy is still a difficult problem due to, usually, too few training samples being available and variations of illumination and expression. To alleviate the negative effects caused by these unfavorable factors, in this paper we propose a more accurate spectral feature image-based 2DLDA (two-dimensional linear discriminant analysis) ensemble algorithm for face recognition, with one sample image per person. In our algorithm, multi-resolution spectral feature images are constructed to represent the face images; this can greatly enlarge the training set. The proposed method is inspired by our finding that, among these spectral feature images, features extracted from some orientations and scales using 2DLDA are not sensitive to variations of illumination and expression. In order to maintain the positive characteristics of these filters and to make correct category assignments, the strategy of classifier committee learning (CCL) is designed to combine the results obtained from different spectral feature images. Using the above strategies, the negative effects caused by those unfavorable factors can be alleviated efficiently in face recognition. Experimental results on the standard databases demonstrate the feasibility and efficiency of the proposed method. PMID:23418451

  5. Impact of Intention on the ERP Correlates of Face Recognition

    ERIC Educational Resources Information Center

    Guillaume, Fabrice; Tiberghien, Guy

    2013-01-01

    The present study investigated the impact of study-test similarity on face recognition by manipulating, in the same experiment, the expression change (same vs. different) and the task-processing context (inclusion vs. exclusion instructions) as within-subject variables. Consistent with the dual-process framework, the present results showed that…

  6. Simulationist Models of Face-Based Emotion Recognition

    ERIC Educational Resources Information Center

    Goldman, Alvin I.; Sripada, Chandra Sekhar

    2005-01-01

    Recent studies of emotion mindreading reveal that for three emotions, fear, disgust, and anger, deficits in face-based recognition are paired with deficits in the production of the same emotion. What type of mindreading process would explain this pattern of paired deficits? The simulation approach and the theorizing approach are examined to…

  7. Emotional Recognition in Autism Spectrum Conditions from Voices and Faces

    ERIC Educational Resources Information Center

    Stewart, Mary E.; McAdam, Clair; Ota, Mitsuhiko; Peppe, Sue; Cleland, Joanne

    2013-01-01

    The present study reports on a new vocal emotion recognition task and assesses whether people with autism spectrum conditions (ASC) perform differently from typically developed individuals on tests of emotional identification from both the face and the voice. The new test of vocal emotion contained trials in which the vocal emotion of the sentence…

  8. An Inner Face Advantage in Children's Recognition of Familiar Peers

    ERIC Educational Resources Information Center

    Ge, Liezhong; Anzures, Gizelle; Wang, Zhe; Kelly, David J.; Pascalis, Olivier; Quinn, Paul C.; Slater, Alan M.; Yang, Zhiliang; Lee, Kang

    2008-01-01

    Children's recognition of familiar own-age peers was investigated. Chinese children (4-, 8-, and 14-year-olds) were asked to identify their classmates from photographs showing the entire face, the internal facial features only, the external facial features only, or the eyes, nose, or mouth only. Participants from all age groups were familiar with…

  9. Effect of severe image compression on face recognition algorithms

    NASA Astrophysics Data System (ADS)

    Zhao, Peilong; Dong, Jiwen; Li, Hengjian

    2015-10-01

    In today's information age, people depend more and more on computers to obtain and make use of information, and there is a large gap between digitized multimedia information, with its large data volumes, and the storage resources and network bandwidth that current hardware technology can provide. For example, there is a large image storage and transmission problem. Image compression becomes useful when images need to be transmitted across networks in a less costly way, reducing data volume and thereby transmission time. This paper discusses the effect of image compression on a face recognition system. For compression, we adopted the JPEG, JPEG 2000, and JPEG XR coding standards. The face recognition algorithm studied is SIFT-based. Experimental results show that the system still maintains a high recognition rate under high compression ratios, and that the JPEG XR standard is superior to the other two in terms of performance and complexity.
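
    A small OpenCV sketch of this kind of experiment is given below (assuming opencv-python >= 4.4 for SIFT and a hypothetical image file name; it only covers the baseline JPEG codec for simplicity). It counts how many SIFT matches to the uncompressed image survive at each quality setting.

      import cv2

      img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)     # hypothetical test image
      sift = cv2.SIFT_create()                               # needs OpenCV >= 4.4
      kp_ref, des_ref = sift.detectAndCompute(img, None)
      matcher = cv2.BFMatcher(cv2.NORM_L2)

      for quality in (90, 50, 10):
          ok, buf = cv2.imencode(".jpg", img, [cv2.IMWRITE_JPEG_QUALITY, quality])
          decoded = cv2.imdecode(buf, cv2.IMREAD_GRAYSCALE)
          kp, des = sift.detectAndCompute(decoded, None)
          pairs = matcher.knnMatch(des_ref, des, k=2)
          good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
          print(f"quality={quality:3d}  bytes per pixel={len(buf) / img.size:.3f}  "
                f"surviving matches={len(good)}")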

  10. Quaternion-based discriminant analysis method for color face recognition.

    PubMed

    Xu, Yong

    2012-01-01

    Pattern recognition techniques have been used to automatically recognize objects and personal identities, predict the function of a protein or the category of a cancer, identify lesions, perform product inspection, and so on. In this paper we propose a novel quaternion-based discriminant method. This method represents and classifies color images in a simple and mathematically tractable way. The proposed method is suitable for a large variety of real-world applications, such as color face recognition and classification of ground targets shown in multispectrum remote images. This method first uses a quaternion number to denote each pixel in the color image and exploits a quaternion vector to represent the color image. It then uses the linear discriminant analysis algorithm to transform the quaternion vector into a lower-dimensional quaternion vector and classifies it in this space. The experimental results show that the proposed method can obtain a very high accuracy for color face recognition. PMID:22937054

  11. Fixation Patterns During Recognition of Personally Familiar and Unfamiliar Faces

    PubMed Central

    van Belle, Goedele; Ramon, Meike; Lefèvre, Philippe; Rossion, Bruno

    2010-01-01

    Previous studies recording eye gaze during face perception have rendered somewhat inconclusive findings with respect to fixation differences between familiar and unfamiliar faces. This can be attributed to a number of factors that differ across studies: the type and extent of familiarity with the faces presented, the definition of areas of interest subject to analyses, as well as a lack of consideration for the time course of scan patterns. Here we sought to address these issues by recording fixations in a recognition task with personally familiar and unfamiliar faces. After a first common fixation on a central superior location of the face in between features, suggesting initial holistic encoding, and a subsequent left eye bias, local features were focused and explored more for familiar than unfamiliar faces. Although the number of fixations did not differ for un-/familiar faces, the locations of fixations began to differ before familiarity decisions were provided. This suggests that in the context of familiarity decisions without time constraints, differences in processing familiar and unfamiliar faces arise relatively early – immediately upon initiation of the first fixation to identity-specific information – and that the local features of familiar faces are processed more than those of unfamiliar faces. PMID:21607074

  12. Face recognition using local gradient binary count pattern

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaochao; Lin, Yaping; Ou, Bo; Yang, Junfeng; Wu, Zhelun

    2015-11-01

    A local feature descriptor, the local gradient binary count pattern (LGBCP), is proposed for face recognition. Unlike some current methods that extract features directly from a face image in the spatial domain, LGBCP encodes the local gradient information of the face's texture in an effective way and provides a more discriminative code than other methods. We compute the gradient information of a face image through convolutions with compass masks. The gradient information is encoded using the local binary count operator. We divide a face into several subregions and extract the distribution of the LGBCP codes from each subregion. Then all the histograms are concatenated into a vector, which is used for face description. For recognition, the chi-square statistic is used to measure the similarity of different feature vectors. Besides directly calculating the similarity of two feature vectors, we provide a weighted matching scheme in which different weights are assigned to different subregions. The nearest-neighborhood classifier is exploited for classification. Experiments are conducted on the FERET, CAS-PEAL, and AR face databases. LGBCP achieves 96.15% on the Fb set of FERET. For CAS-PEAL, LGBCP gets 96.97%, 98.91%, and 90.89% on the aging, distance, and expression sets, respectively.
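
    To illustrate the regional-histogram description and the chi-square matching used above, here is a generic NumPy sketch (the actual LGBCP coding step, i.e. compass-mask gradients followed by the local binary count operator, is not reproduced; the grid size and bin count are assumptions).

      import numpy as np

      def regional_histograms(code_map, grid=(4, 4), bins=59):
          # Split a per-pixel code map into subregions and concatenate the
          # normalized per-region histograms (the usual LBP-family scheme).
          h, w = code_map.shape
          hists = []
          for i in range(grid[0]):
              for j in range(grid[1]):
                  block = code_map[i * h // grid[0]:(i + 1) * h // grid[0],
                                   j * w // grid[1]:(j + 1) * w // grid[1]]
                  hist, _ = np.histogram(block, bins=bins, range=(0, bins))
                  hists.append(hist / max(hist.sum(), 1))
          return np.concatenate(hists)

      def chi_square(h1, h2, eps=1e-10):
          # Chi-square statistic between two histogram vectors; smaller means a
          # better match, so a nearest-neighbor rule can use it directly.
          return np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))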

  13. Objective 3D face recognition: Evolution, approaches and challenges.

    PubMed

    Smeets, Dirk; Claes, Peter; Vandermeulen, Dirk; Clement, John Gerald

    2010-09-10

    Face recognition is a natural human ability and a widely accepted identification and authentication method. In modern legal settings, a lot of credence is placed on identifications made by eyewitnesses. However, these are based on human perception, which is often flawed and can lead to situations where identity is disputed. Therefore, there is a clear need to secure identifications in an objective way based on anthropometric measures. Anthropometry has existed for many years and has evolved with each advent of new technology and computing power. As a result, face recognition methodology has shifted from a purely 2D image-based approach to the use of 3D facial shape. However, one of the main challenges still remaining is the non-rigid structure of the face, which can change permanently over varying time-scales and briefly with facial expressions. The majority of face recognition methods have been developed by scientists with a very technical background, such as biometry, pattern recognition and computer vision. This article strives to bridge the gap between these communities and the forensic science end-users. A concise review of face recognition using 3D shape is given. Methods using 3D shape applied to data embodying facial expressions are tabulated for reference. From this list a categorization of different strategies to deal with expressions is presented. The underlying concepts and practical issues relating to the application of each strategy are given, without going into technical details. The discussion clearly articulates the justification to establish archival, reference databases to compare and evaluate different strategies. PMID:20395086

  14. Anti Theft Mechanism Through Face recognition Using FPGA

    NASA Astrophysics Data System (ADS)

    Sundari, Y. B. T.; Laxminarayana, G.; Laxmi, G. Vijaya

    2012-11-01

    The use of a vehicle is a must for everyone, and at the same time protection from theft is also very important. Prevention of vehicle theft can be done remotely by an authorized person. The location of the car can be found using GPS and GSM controlled by an FPGA. In this paper, face recognition is used to identify persons, and comparison is done with the preloaded faces for authorization. The vehicle will start only when the authorized person's face is identified. In the event of a theft attempt or an unauthorized person's attempt to drive the vehicle, an MMS/SMS will be sent to the owner along with the location. Then the authorized person can alert security personnel to track and catch the vehicle. For face recognition, a Principal Component Analysis (PCA) algorithm is developed using MATLAB. The control technique for GPS and GSM is developed using VHDL on a Spartan-3E FPGA. The MMS sending method is written in VB6.0. The proposed application can be implemented, with some modifications, in systems wherever face recognition or detection is needed, such as airports, international borders, banking applications, etc.

  15. Face recognition with the Karhunen-Loeve transform

    NASA Astrophysics Data System (ADS)

    Suarez, Pedro F.

    1991-12-01

    The major goal of this research was to investigate machine recognition of faces. The approach taken to achieve this goal was to investigate the use of the Karhunen-Loève Transform (KLT) by implementing flexible and practical code. The KLT utilizes the eigenvectors of the covariance matrix as a basis set. Faces were projected onto the eigenvectors, called eigenfaces, and the resulting projection coefficients were used as features. Face recognition accuracies for the KLT coefficients were superior to Fourier-based techniques. Additionally, this thesis demonstrated the image compression and reconstruction capabilities of the KLT. This thesis also developed the use of the KLT as a facial feature detector. The ability to differentiate between facial features provides a computer communications interface for non-vocal people with cerebral palsy. Lastly, this thesis developed a KLT-based axis system for laser scanner data of human heads. The scanner data axis system provides the anthropometric community with a more precise method of fitting custom helmets.
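
    A compact NumPy sketch of the KLT/eigenface pipeline described above (synthetic data; the small-sample eigendecomposition trick is a standard shortcut, not necessarily the one used in the thesis):

      import numpy as np

      def eigenfaces(train, k):
          # Top-k eigenvectors of the sample covariance of mean-centred faces,
          # computed via the (n x n) Gram matrix to keep the problem small.
          X = train.reshape(len(train), -1).astype(float)
          mean = X.mean(axis=0)
          A = X - mean
          vals, vecs = np.linalg.eigh(A @ A.T)
          order = np.argsort(vals)[::-1][:k]
          U = A.T @ vecs[:, order]
          U /= np.linalg.norm(U, axis=0)          # orthonormal eigenfaces, shape (pixels, k)
          return mean, U

      rng = np.random.default_rng(1)
      faces = rng.random((10, 64, 64))            # synthetic stand-in for training faces
      mean, U = eigenfaces(faces, k=5)
      coeffs = (faces[0].reshape(-1) - mean) @ U  # projection coefficients used as features
      recon = mean + U @ coeffs                   # reconstruction illustrates the compression property
      print(coeffs.shape, recon.shape)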

  16. Functional aspects of recollective experience in face recognition.

    PubMed

    Parkin, A J; Gardiner, J M; Rosser, R

    1995-12-01

    This article describes two experiments on awareness in recognition memory for novel faces. Two kinds of awareness, recollective experience and feelings of familiarity in the absence of recollective experience, were measured by "remember" and "know" responses. Experiment 1 showed that "remember" but not "know" responses were reduced by divided attention at study. Experiment 2 showed that massed versus spaced repetition of faces in the study list had opposite effects on "remember" and "know" responses. Massed repetition increased "know" responses and reduced "remember" responses. Spaced repetition increased "remember" responses and reduced "know" responses. The results of both experiments replicate previous findings from the verbal domain in the domain of face recognition, and hence they increase the ecological validity of this experimental approach to memory and awareness and the generality of its database. These findings are discussed from a rehearsal perspective on factors influencing the two states of awareness and in relation to the alternative "process dissociation" procedure. PMID:8750414

  17. Multi-stream face recognition for crime-fighting

    NASA Astrophysics Data System (ADS)

    Jassim, Sabah A.; Sellahewa, Harin

    2007-04-01

    Automatic face recognition (AFR) is a challenging task that is increasingly becoming the preferred biometric trait for identification and has the potential of becoming an essential tool in the fight against crime and terrorism. Closed-circuit television (CCTV) cameras have increasingly been used over the last few years for surveillance in public places such as airports, train stations and shopping centers. They are used to detect and prevent crime, shoplifting, public disorder and terrorism. The work of law-enforcement and intelligence agencies is becoming more reliant on the use of databases of biometric data for large sections of the population. The face is one of the most natural biometric traits that can be used for identification and surveillance. However, variations in lighting conditions, facial expressions, face size and pose are a great obstacle to AFR. This paper is concerned with using wavelet-based face recognition schemes in the presence of variations of expression and illumination. In particular, we investigate the use of a combination of wavelet frequency channels for multi-stream face recognition, using various wavelet subbands as different face signal streams. The proposed schemes extend our recently developed face verification scheme for implementation on mobile devices. We present experimental results on the performance of our proposed schemes for a number of face databases, including a new AV database recorded on a PDA. By analyzing the various experimental data, we demonstrate that the multi-stream approach is more robust against variations in illumination and facial expressions than the previous single-stream approach.

  18. Are Face and Object Recognition Independent? A Neurocomputational Modeling Exploration.

    PubMed

    Wang, Panqu; Gauthier, Isabel; Cottrell, Garrison

    2016-04-01

    Are face and object recognition abilities independent? Although it is commonly believed that they are, Gauthier et al. [Gauthier, I., McGugin, R. W., Richler, J. J., Herzmann, G., Speegle, M., & VanGulick, A. E. Experience moderates overlap between object and face recognition, suggesting a common ability. Journal of Vision, 14, 7, 2014] recently showed that these abilities become more correlated as experience with nonface categories increases. They argued that there is a single underlying visual ability, v, that is expressed in performance with both face and nonface categories as experience grows. Using the Cambridge Face Memory Test and the Vanderbilt Expertise Test, they showed that the shared variance between Cambridge Face Memory Test and Vanderbilt Expertise Test performance increases monotonically as experience increases. Here, we address why a shared resource across different visual domains does not lead to competition and an inverse correlation in abilities. We explain this conundrum using our neurocomputational model of face and object processing ["The Model", TM, Cottrell, G. W., & Hsiao, J. H. Neurocomputational models of face processing. In A. J. Calder, G. Rhodes, M. Johnson, & J. Haxby (Eds.), The Oxford handbook of face perception. Oxford, UK: Oxford University Press, 2011]. We model the domain-general ability v as the available computational resources (number of hidden units) in the mapping from input to label, and experience as the frequency of individual exemplars in an object category appearing during network training. Our results show that, as in the behavioral data, the correlation between subordinate-level face and object recognition accuracy increases as experience grows. We suggest that different domains do not compete for resources because the relevant features are shared between faces and objects. The essential power of experience is to generate a "spreading transform" for faces (separating them in representational space) that

  19. Neural Mechanism for Mirrored Self-face Recognition

    PubMed Central

    Sugiura, Motoaki; Miyauchi, Carlos Makoto; Kotozaki, Yuka; Akimoto, Yoritaka; Nozawa, Takayuki; Yomogida, Yukihito; Hanawa, Sugiko; Yamamoto, Yuki; Sakuma, Atsushi; Nakagawa, Seishu; Kawashima, Ryuta

    2015-01-01

    Self-face recognition in the mirror is considered to involve multiple processes that integrate 2 perceptual cues: temporal contingency of the visual feedback on one's action (contingency cue) and matching with self-face representation in long-term memory (figurative cue). The aim of this study was to examine the neural bases of these processes by manipulating 2 perceptual cues using a “virtual mirror” system. This system allowed online dynamic presentations of real-time and delayed self- or other facial actions. Perception-level processes were identified as responses to only a single perceptual cue. The effect of the contingency cue was identified in the cuneus. The regions sensitive to the figurative cue were subdivided by the response to a static self-face, which was identified in the right temporal, parietal, and frontal regions, but not in the bilateral occipitoparietal regions. Semantic- or integration-level processes, including amodal self-representation and belief validation, which allow modality-independent self-recognition and the resolution of potential conflicts between perceptual cues, respectively, were identified in distinct regions in the right frontal and insular cortices. The results are supportive of the multicomponent notion of self-recognition and suggest a critical role for contingency detection in the co-emergence of self-recognition and empathy in infants. PMID:24770712

  20. Face recognition using spatially constrained earth mover's distance.

    PubMed

    Xu, Dong; Yan, Shuicheng; Luo, Jiebo

    2008-11-01

    Face recognition is a challenging problem, especially when the face images are not strictly aligned (e.g., images can be captured from different viewpoints or the faces may not be accurately cropped by a human or automatic algorithm). In this correspondence, we investigate face recognition under the scenarios with potential spatial misalignments. First, we formulate an asymmetric similarity measure based on Spatially constrained Earth Mover's Distance (SEMD), for which the source image is partitioned into nonoverlapping local patches while the destination image is represented as a set of overlapping local patches at different positions. Assuming that faces are already roughly aligned according to the positions of their eyes, one patch in the source image can be matched only to one of its neighboring patches in the destination image under the spatial constraint of reasonably small misalignments. Because the similarity measure as defined by SEMD is asymmetric, we propose two schemes to combine the two similarity measures computed in both directions. Moreover, we adopt a distance-as-feature approach by treating the distances to the reference images as features in a Kernel Discriminant Analysis (KDA) framework. Experiments on three benchmark face databases, namely the CMU PIE, FERET, and FRGC databases, demonstrate the effectiveness of the proposed SEMD. PMID:18854252
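
    The spatially constrained matching can be pictured with the following simplified NumPy sketch (patch size, search radius and the plain best-match rule are assumptions; the actual earth mover's flow optimization and the KDA stage are not reproduced): each non-overlapping patch of the source image is matched to the closest patch of the destination image within a small displacement window, and the per-patch distances are summed into an asymmetric image distance.

      import numpy as np

      def spatially_constrained_distance(src, dst, patch=8, radius=2):
          # Asymmetric patch-matching distance under a small-misalignment constraint.
          h, w = src.shape
          total = 0.0
          for i in range(0, h - patch + 1, patch):
              for j in range(0, w - patch + 1, patch):
                  p = src[i:i + patch, j:j + patch]
                  best = np.inf
                  for di in range(-radius, radius + 1):
                      for dj in range(-radius, radius + 1):
                          ii, jj = i + di, j + dj
                          if 0 <= ii <= h - patch and 0 <= jj <= w - patch:
                              best = min(best, np.linalg.norm(p - dst[ii:ii + patch, jj:jj + patch]))
                  total += best
          return total

      rng = np.random.default_rng(3)
      a, b = rng.random((32, 32)), rng.random((32, 32))
      print(spatially_constrained_distance(a, b))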

  1. The neural plasticity of other-race face recognition.

    PubMed

    Tanaka, James W; Pierce, Lara J

    2009-03-01

    Although it is well established that people are better at recognizing own-race faces than at recognizing other-race faces, the neural mechanisms mediating this advantage are not well understood. In this study, Caucasian participants were trained to differentiate African American (or Hispanic) faces at the individual level (e.g., Joe, Bob) and to categorize Hispanic (or African American) faces at the basic level of race (e.g., Hispanic, African American). Behaviorally, subordinate-level individuation training led to improved performance on a posttraining recognition test, relative to basic-level training. As measured by event-related potentials, subordinate- and basic-level training had relatively little effect on the face N170 component. However, as compared with basic-level training, subordinate-level training elicited an increased response in the posterior expert N250 component. These results demonstrate that learning to discriminate other-race faces at the subordinate level of the individual leads to improved recognition and enhanced activation of the expert N250 component. PMID:19246333

  2. Spatial location in brief, free-viewing face encoding modulates contextual face recognition

    PubMed Central

    Felisberti, Fatima M.; McDermott, Mark R.

    2013-01-01

    The effect of the spatial location of faces in the visual field during brief, free-viewing encoding in subsequent face recognition is not known. This study addressed this question by tagging three groups of faces with cheating, cooperating or neutral behaviours and presenting them for encoding in two visual hemifields (upper vs. lower or left vs. right). Participants then had to indicate if a centrally presented face had been seen before or not. Head and eye movements were free in all phases. Findings showed that the overall recognition of cooperators was significantly better than cheaters, and it was better for faces encoded in the upper hemifield than in the lower hemifield, both in terms of a higher d′ and faster reaction time (RT). The d′ for any given behaviour in the left and right hemifields was similar. The RT in the left hemifield did not vary with tagged behaviour, whereas the RT in the right hemifield was longer for cheaters than for cooperators. The results showed that memory biases in contextual face recognition were modulated by the spatial location of briefly encoded faces and are discussed in terms of scanning reading habits, top-left bias in lighting preference and peripersonal space. PMID:24349694

  3. A wavelet-based approach to face verification/recognition

    NASA Astrophysics Data System (ADS)

    Jassim, Sabah; Sellahewa, Harin

    2005-10-01

    Face verification/recognition is a tough challenge in comparison to identification based on other biometrics such as iris or fingerprints. Yet, due to its unobtrusive nature, the face is naturally suitable for security-related applications. The face verification process relies on feature extraction from face images. Current schemes are either geometric-based or template-based. In the latter, the face image is statistically analysed to obtain a set of feature vectors that best describe it. The performance of a face verification system is affected by image variations due to illumination, pose, occlusion, expressions and scale. This paper extends our recent work on face verification for constrained platforms, where the feature vector of a face image consists of the coefficients in the wavelet-transformed LL subbands at depth 3 or more. It was demonstrated that the wavelet-only feature vector scheme has comparable performance to sophisticated state-of-the-art schemes when tested on two benchmark databases (ORL and BANCA). The significance of those results stems from the fact that the size of the k-th LL subband is 1/4^k of the original image size. Here, we investigate the use of wavelet coefficients in various subbands at level 3 or 4 using various wavelet filters. We compare the performance of the wavelet-based scheme for different filters at different subbands with a number of state-of-the-art face verification/recognition schemes on two benchmark databases, namely ORL and the control section of BANCA. We demonstrate that our schemes have comparable performance to (or outperform) the best-performing other schemes.
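
    The LL-subband feature itself is easy to reproduce; a minimal sketch using PyWavelets follows (the wavelet choice and image size are assumptions, and the comparison framework of the paper is not shown).

      import numpy as np
      import pywt                                   # PyWavelets, assumed installed

      def ll_subband_feature(image, level=3, wavelet="haar"):
          # Feature vector = coefficients of the LL (approximation) subband at the
          # given depth, roughly 1/4**level of the original image size.
          coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
          return coeffs[0].reshape(-1)

      rng = np.random.default_rng(2)
      face = rng.random((128, 128))
      feat = ll_subband_feature(face, level=3)
      print(face.size, feat.size)                   # 16384 pixels -> 256 coefficients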

  4. Improving representation-based classification for robust face recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Hongzhi; Zhang, Zheng; Li, Zhengming; Chen, Yan; Shi, Jian

    2014-06-01

    The sparse representation classification (SRC) method proposed by Wright et al. is considered a breakthrough in face recognition because of its good performance. Nevertheless, it still cannot perfectly address the face recognition problem. The main reason for this is that variations of pose, facial expression, and illumination in the facial image can be rather severe, and the number of available facial images is smaller than the dimensionality of the facial image, so a certain linear combination of all the training samples is not able to fully represent the test sample. In this study, we propose a novel framework to improve representation-based classification (RBC). The framework first runs the sparse representation algorithm and determines the unavoidable deviation between the test sample and the optimal linear combination of all the training samples used to represent it. It then exploits the deviation and all the training samples to resolve the linear combination coefficients. Finally, the classification rule, the training samples, and the renewed linear combination coefficients are used to classify the test sample. Generally, the proposed framework can work for most RBC methods. From the viewpoint of regression analysis, the proposed framework has solid theoretical soundness. Because it can, to an extent, identify the bias effect of the RBC method, it enables RBC to obtain more robust face recognition results. Experimental results on a variety of face databases demonstrate that the proposed framework can improve collaborative representation classification, SRC, and the nearest neighbor classifier.

  5. A Highly Accurate Face Recognition System Using Filtering Correlation

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Sayuri; Kodate, Kashiko

    2007-09-01

    The authors previously constructed a highly accurate fast face recognition optical correlator (FARCO) [E. Watanabe and K. Kodate: Opt. Rev. 12 (2005) 460], and subsequently developed an improved, super high-speed FARCO (S-FARCO), which is able to process several hundred thousand frames per second. The principal advantage of our new system is its wide applicability to any correlation scheme. Three different configurations were proposed, each depending on correlation speed. This paper describes and evaluates a software correlation filter. The face recognition function proved highly accurate even though a low-resolution facial image size (64 × 64 pixels) was used. An operation speed of less than 10 ms was achieved using a personal computer with a 3 GHz central processing unit (CPU) and 2 GB of memory. When we applied the software correlation filter to a high-security cellular phone face recognition system, experiments on 30 female students over a period of three months yielded low error rates: a 0% false acceptance rate and a 2% false rejection rate. Therefore, the filtering correlation works effectively when applied to low-resolution images such as web-based images or faces captured by a monitoring camera.
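
    As a rough software analogue of the correlation idea (not the authors' filtering-correlation design), a frequency-domain matched-filter score can be computed as below; the normalization is an assumption chosen so that an identical pair scores 1.0.

      import numpy as np

      def correlation_peak(probe, reference):
          # Height of the circular cross-correlation peak, normalized by the
          # image norms; genuine matches give sharper, higher peaks.
          corr = np.fft.ifft2(np.fft.fft2(probe) * np.conj(np.fft.fft2(reference)))
          return np.abs(corr).max() / (np.linalg.norm(probe) * np.linalg.norm(reference) + 1e-12)

      rng = np.random.default_rng(4)
      face = rng.random((64, 64))
      print(correlation_peak(face, face))                    # 1.0 for an identical pair
      print(correlation_peak(face, rng.random((64, 64))))    # lower for a non-matching pair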

  6. Near-infrared face recognition utilizing open CV software

    NASA Astrophysics Data System (ADS)

    Sellami, Louiza; Ngo, Hau; Fowler, Chris J.; Kearney, Liam M.

    2014-06-01

    Commercially available hardware, freely available algorithms, and authors' developed software are synergized successfully to detect and recognize subjects in an environment without visible light. This project integrates three major components: an illumination device operating in near infrared (NIR) spectrum, a NIR capable camera and a software algorithm capable of performing image manipulation, facial detection and recognition. Focusing our efforts in the near infrared spectrum allows the low budget system to operate covertly while still allowing for accurate face recognition. In doing so a valuable function has been developed which presents potential benefits in future civilian and military security and surveillance operations.
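
    A generic OpenCV sketch of the detection front end of such a pipeline is shown below (the file names are hypothetical and this is not the authors' software; the bundled Haar cascade works on single-channel NIR frames just as on visible-light grayscale images).

      import cv2

      cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
      frame = cv2.imread("nir_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical NIR frame
      faces = cascade.detectMultiScale(frame, scaleFactor=1.1, minNeighbors=5)
      for (x, y, w, h) in faces:
          crop = cv2.equalizeHist(frame[y:y + h, x:x + w])        # normalize NIR contrast
          # ... pass `crop` to the recognizer of choice
      print(f"{len(faces)} face(s) detected")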

  7. Impact of intention on the ERP correlates of face recognition.

    PubMed

    Guillaume, Fabrice; Tiberghien, Guy

    2013-02-01

    The present study investigated the impact of study-test similarity on face recognition by manipulating, in the same experiment, the expression change (same vs. different) and the task-processing context (inclusion vs. exclusion instructions) as within-subject variables. Consistent with the dual-process framework, the present results showed that participants performed better on the inclusion task than on the exclusion task, with no response bias. A mid-frontal FN400 old/new effect and a parietal old/new effect were found in both tasks. However, modulations of the ERP old/new effects generated by the expression change on recognized faces differed across tasks. The modulations of the ERP old/new effects were proportional to the degree of matching between the study face and the recognition face in the inclusion task, but not in the exclusion task. The observed modulation of the FN400 old/new effect by the task instructions when familiarity and conceptual priming were kept constant indicates that these early ERP correlates of recognition depend on voluntary task-related control. The present results question the idea that FN400 reflects implicit memory processes such as conceptual priming and show that the extent to which the FN400 discriminates between conditions depends on the retrieval orientation at test. They are discussed in relation to recent controversies about the ERP correlates of familiarity in face recognition. This study suggests that while both conceptual and perceptual information can contribute to the familiarity signal reflected by the FN400 effect, their relative contributions vary with the task demands. PMID:23174431

  8. Deep learning and face recognition: the state of the art

    NASA Astrophysics Data System (ADS)

    Balaban, Stephen

    2015-05-01

    Deep Neural Networks (DNNs) have established themselves as a dominant technique in machine learning. DNNs have been top performers on a wide variety of tasks including image classification, speech recognition, and face recognition.1-3 Convolutional neural networks (CNNs) have been used in nearly all of the top performing methods on the Labeled Faces in the Wild (LFW) dataset.3-6 In this talk and accompanying paper, I attempt to provide a review and summary of the deep learning techniques used in the state-of-the-art. In addition, I highlight the need for both larger and more challenging public datasets to benchmark these systems. Despite the ability of DNNs and autoencoders to perform unsupervised feature learning, modern facial recognition pipelines still require domain specific engineering in the form of re-alignment. For example, in Facebook's recent DeepFace paper, a 3D "frontalization" step lies at the beginning of the pipeline. This step creates a 3D face model for the incoming image and then uses a series of affine transformations of the fiducial points to "frontalize" the image. This step enables the DeepFace system to use a neural network architecture with locally connected layers without weight sharing as opposed to standard convolutional layers.6 Deep learning techniques combined with large datasets have allowed research groups to surpass human level performance on the LFW dataset.3, 5 The high accuracy (99.63% for FaceNet at the time of publishing) and utilization of outside data (hundreds of millions of images in the case of Google's FaceNet) suggest that current face verification benchmarks such as LFW may not be challenging enough, nor provide enough data, for current techniques.3, 5 There exist a variety of organizations with mobile photo sharing applications that would be capable of releasing a very large scale and highly diverse dataset of facial images captured on mobile devices. Such an "ImageNet for Face Recognition" would likely receive a warm

  9. Tolerance for distorted faces: challenges to a configural processing account of familiar face recognition.

    PubMed

    Sandford, Adam; Burton, A Mike

    2014-09-01

    Face recognition is widely held to rely on 'configural processing', an analysis of spatial relations between facial features. We present three experiments in which viewers were shown distorted faces, and asked to resize these to their correct shape. Based on configural theories appealing to metric distances between features, we reason that this should be an easier task for familiar than unfamiliar faces (whose subtle arrangements of features are unknown). In fact, participants were inaccurate at this task, making between 8% and 13% errors across experiments. Importantly, we observed no advantage for familiar faces: in one experiment participants were more accurate with unfamiliars, and in two experiments there was no difference. These findings were not due to general task difficulty - participants were able to resize blocks of colour to target shapes (squares) more accurately. We also found an advantage of familiarity for resizing other stimuli (brand logos). If configural processing does underlie face recognition, these results place constraints on the definition of 'configural'. Alternatively, familiar face recognition might rely on more complex criteria - based on tolerance to within-person variation rather than highly specific measurement. PMID:24853629

  10. Neural correlates of impaired emotional face recognition in cerebellar lesions.

    PubMed

    Adamaszek, Michael; Kirkby, Kenneth C; D'Agata, Fedrico; Olbrich, Sebastian; Langner, Sönke; Steele, Christopher; Sehm, Bernhard; Busse, Stefan; Kessler, Christof; Hamm, Alfons

    2015-07-10

    Clinical and neuroimaging data indicate a cerebellar contribution to emotional processing, which may account for affective-behavioral disturbances in patients with cerebellar lesions. We studied the neurophysiology of cerebellar involvement in recognition of emotional facial expression. Participants comprised eight patients with discrete ischemic cerebellar lesions and eight control patients without any cerebrovascular stroke. Event-related potentials (ERP) were used to measure responses to faces from the Karolinska Directed Emotional Faces Database (KDEF), interspersed in a stream of images with salient contents. Images of faces augmented N170 in both groups, but increased late positive potential (LPP) only in control patients without brain lesions. Dipole analysis revealed altered activation patterns for negative emotions in patients with cerebellar lesions, including activation of the left inferior prefrontal area to images of faces showing fear, contralateral to controls. Correlation analysis indicated that lesions of cerebellar area Crus I contribute to ERP deviations. Overall, our results implicate the cerebellum in integrating emotional information at different higher order stages, suggesting distinct cerebellar contributions to the proposed large-scale cerebral network of emotional face recognition. PMID:25912431

  11. Recognition of faces of ingroup and outgroup children and adults.

    PubMed

    Corenblum, B; Meissner, Christian A

    2006-03-01

    People are often more accurate in recognizing faces of ingroup members than in recognizing faces of outgroup members. Although own-group biases in face recognition are well established among adults, less attention has been given to such biases among children. This is surprising considering how often children give testimony in criminal and civil cases. In the current two studies, Euro-Canadian children attending public school and young adults enrolled in university-level classes were asked whether previously presented photographs of Euro-American and African American adults (Study 1) or photographs of Native Canadian, Euro-Canadian, and African American children (Study 2) were new or old. In both studies, own-group biases were found on measures of discrimination accuracy and response bias as well as on estimates of reaction time, confidence, and confidence-accuracy relations. Results of both studies were consistent with predictions derived from multidimensional face space theory of face recognition. Implications of the current studies for the validity of children's eyewitness testimony are also discussed. PMID:16243349

  12. Sparse Feature Extraction for Pose-Tolerant Face Recognition.

    PubMed

    Abiantun, Ramzi; Prabhu, Utsav; Savvides, Marios

    2014-10-01

    Automatic face recognition performance has been steadily improving over years of research, however it remains significantly affected by a number of factors such as illumination, pose, expression, resolution and other factors that can impact matching scores. The focus of this paper is the pose problem which remains largely overlooked in most real-world applications. Specifically, we focus on one-to-one matching scenarios where a query face image of a random pose is matched against a set of gallery images. We propose a method that relies on two fundamental components: (a) A 3D modeling step to geometrically correct the viewpoint of the face. For this purpose, we extend a recent technique for efficient synthesis of 3D face models called 3D Generic Elastic Model. (b) A sparse feature extraction step using subspace modeling and ℓ1-minimization to induce pose-tolerance in coefficient space. This in return enables the synthesis of an equivalent frontal-looking face, which can be used towards recognition. We show significant performance improvements in verification rates compared to commercial matchers, and also demonstrate the resilience of the proposed method with respect to degrading input quality. We find that the proposed technique is able to match non-frontal images to other non-frontal images of varying angles. PMID:26352635

  13. Still-to-video face recognition in unconstrained environments

    NASA Astrophysics Data System (ADS)

    Wang, Haoyu; Liu, Changsong; Ding, Xiaoqing

    2015-02-01

    Face images from video sequences captured in unconstrained environments usually contain several kinds of variations, e.g. pose, facial expression, illumination, image resolution and occlusion. Motion blur and compression artifacts also deteriorate recognition performance. Besides, in various practical systems such as law enforcement, video surveillance and e-passport identification, only a single still image per person is enrolled as the gallery set. Many existing methods may fail to work due to variations in face appearances and the limit of available gallery samples. In this paper, we propose a novel approach for still-to-video face recognition in unconstrained environments. By assuming that faces from still images and video frames share the same identity space, a regularized least squares regression method is utilized to tackle the multi-modality problem. Regularization terms based on heuristic assumptions are enrolled to avoid overfitting. In order to deal with the single image per person problem, we exploit face variations learned from training sets to synthesize virtual samples for gallery samples. We adopt a learning algorithm combining both affine/convex hull-based approach and regularizations to match image sets. Experimental results on a real-world dataset consisting of unconstrained video sequences demonstrate that our method outperforms the state-of-the-art methods impressively.

  14. Friends with Faces: How Social Networks Can Enhance Face Recognition and Vice Versa

    NASA Astrophysics Data System (ADS)

    Mavridis, Nikolaos; Kazmi, Wajahat; Toulis, Panos

    The "friendship" relation, a social relation among individuals, is one of the primary relations modeled in some of the world's largest online social networking sites, such as "FaceBook." On the other hand, the "co-occurrence" relation, a relation among faces appearing in pictures, is one that is easily detectable using modern face detection techniques. These two relations, though appearing in different realms (social vs. visual sensory), have a strong correlation: faces that co-occur in photos often belong to individuals who are friends. Using real-world data gathered from "Facebook" as part of the "FaceBots" project, the world's first physical face-recognizing and conversing robot that can utilize and publish information on "Facebook" was established. We present here methods as well as results for utilizing this correlation in both directions: algorithms for utilizing knowledge of the social context for faster and better face recognition, as well as algorithms for estimating the friendship network of a number of individuals given photos containing their faces. The results are quite encouraging. In the primary example, a doubling of the recognition accuracy as well as a sixfold improvement in speed is demonstrated. Various improvements, interesting statistics, as well as an empirical investigation leading to predictions of scalability to much bigger data sets are discussed.

  15. Determination of candidate subjects for better recognition of faces

    NASA Astrophysics Data System (ADS)

    Wang, Xuansheng; Chen, Zhen; Teng, Zhongming

    2016-05-01

    In order to improve the accuracy of face recognition and to solve the problem of various poses, we present an improved collaborative representation classification (CRC) algorithm using original training samples and the corresponding mirror images. First, the mirror images are generated from the original training samples. Second, both original training samples and their mirror images are simultaneously used to represent the test sample via improved collaborative representation. Then, some classes which are "close" to the test sample are coarsely selected as candidate classes. At last, the candidate classes are used to represent the test sample again, and then the class most similar to the test sample can be determined finely. The experimental results show our proposed algorithm has more robustness than the original CRC algorithm and can effectively improve the accuracy of face recognition.
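
    The core of the approach can be sketched in a few lines of NumPy (the regularization weight is an assumption and the coarse candidate-selection stage of the paper is omitted): the training set is extended with horizontally mirrored copies, a ridge-regularized collaborative representation is solved, and the class with the smallest reconstruction residual is chosen.

      import numpy as np

      def crc_with_mirrors(test, train, labels, lam=0.01):
          # Collaborative representation over original images plus their mirrors.
          mirrored = train[:, :, ::-1]                               # left-right flipped copies
          X = np.concatenate([train, mirrored]).reshape(2 * len(train), -1).T
          y = test.reshape(-1)
          ext_labels = np.concatenate([labels, labels])
          alpha = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
          residuals = {c: np.linalg.norm(y - X[:, ext_labels == c] @ alpha[ext_labels == c])
                       for c in np.unique(ext_labels)}
          return min(residuals, key=residuals.get)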

  16. An integrated modeling approach to age invariant face recognition

    NASA Astrophysics Data System (ADS)

    Alvi, Fahad Bashir; Pears, Russel

    2015-03-01

    This research study proposes a novel method for face recognition based on anthropometric features that makes use of an integrated approach comprising global and personalized models. The system is aimed at situations where lighting, illumination, and pose variations cause problems in face recognition. A personalized model covers the individual aging patterns while a global model captures general aging patterns in the database. We introduced a de-aging factor that de-ages each individual in the database test and training sets. We used the k nearest neighbor approach for building the personalized and global models, and regression analysis was applied to build the models. During the test phase, we resort to voting on different features. We used the FG-NET database to check the results of our technique and achieved a 65 percent rank-1 identification rate.

  17. Design and implementation of face recognition system based on Windows

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Liu, Ting; Li, Ailan

    2015-07-01

    In view of the fact that the basic Windows password login lacks both security and convenience, we introduce a biometric technology, face recognition, into the computer login system. Not only can it encrypt the computer system, it can also identify administrators at different levels according to their access level. With the enhanced system security, the login process is neither cumbersome nor vulnerable to password theft.

  18. Effects of Lateral Reversal on Recognition Memory for Photographs of Faces.

    ERIC Educational Resources Information Center

    McKelvie, Stuart J.

    1983-01-01

    Examined recognition memory for photographs of faces in four experiments using students and adults. Results supported a feature (rather than Gestalt) model of facial recognition in which the two sides of the face are different in its memory representation. (JAC)

  19. Familiar and unfamiliar face recognition in crested macaques (Macaca nigra)

    PubMed Central

    Micheletta, Jérôme; Whitehouse, Jamie; Parr, Lisa A.; Marshman, Paul; Engelhardt, Antje; Waller, Bridget M.

    2015-01-01

    Many species use facial features to identify conspecifics, which is necessary to navigate a complex social environment. The fundamental mechanisms underlying face processing are starting to be well understood in a variety of primate species. However, most studies focus on a limited subset of species tested with unfamiliar faces. As well as limiting our understanding of how widely distributed across species these skills are, this also limits our understanding of how primates process faces of individuals they know, and whether social factors (e.g. dominance and social bonds) influence how readily they recognize others. In this study, socially housed crested macaques voluntarily participated in a series of computerized matching-to-sample tasks investigating their ability to discriminate (i) unfamiliar individuals and (ii) members of their own social group. The macaques performed above chance on all tasks. Familiar faces were not easier to discriminate than unfamiliar faces. However, the subjects were better at discriminating higher ranking familiar individuals, but not unfamiliar ones. This suggests that our subjects applied their knowledge of their dominance hierarchies to the pictorial representation of their group mates. Faces of high-ranking individuals garner more social attention, and therefore might be more deeply encoded than other individuals. Our results extend the study of face recognition to a novel species, and consequently provide valuable data for future comparative studies. PMID:26064665

  20. Emotion recognition: the role of featural and configural face information.

    PubMed

    Bombari, Dario; Schmid, Petra C; Schmid Mast, Marianne; Birri, Sandra; Mast, Fred W; Lobmaier, Janek S

    2013-01-01

    Several studies investigated the role of featural and configural information when processing facial identity. A lot less is known about their contribution to emotion recognition. In this study, we addressed this issue by inducing either a featural or a configural processing strategy (Experiment 1) and by investigating the attentional strategies in response to emotional expressions (Experiment 2). In Experiment 1, participants identified emotional expressions in faces that were presented in three different versions (intact, blurred, and scrambled) and in two orientations (upright and inverted). Blurred faces contain mainly configural information, and scrambled faces contain mainly featural information. Inversion is known to selectively hinder configural processing. Analyses of the discriminability measure (A') and response times (RTs) revealed that configural processing plays a more prominent role in expression recognition than featural processing, but their relative contribution varies depending on the emotion. In Experiment 2, we qualified these differences between emotions by investigating the relative importance of specific features by means of eye movements. Participants had to match intact expressions with the emotional cues that preceded the stimulus. The analysis of eye movements confirmed that the recognition of different emotions relies on different types of information. While the mouth is important for the detection of happiness and fear, the eyes are more relevant for anger, fear, and sadness. PMID:23679155

  1. Combination of direct matching and collaborative representation for face recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Chongyang

    2013-06-01

    It has been proved that representation-based classification (RBC) can achieve high accuracy in face recognition. However, conventional RBC has a very high computational cost. Collaborative representation, proposed in [1], not only has the advantages of RBC but is also computationally very efficient. In this paper, a combination of direct matching of images and collaborative representation is proposed for face recognition. Experimental results show that the proposed method consistently classifies more accurately than collaborative representation alone. The underlying reason is that direct matching of images and collaborative representation use different ways to calculate the dissimilarity between the test sample and the training samples. As a result, the score obtained using direct matching of images is very complementary to the score obtained using collaborative representation. Indeed, the analysis shows that the matching scores generated from direct matching of images and collaborative representation always have a low correlation. This allows the proposed method to exploit more information for face recognition and to produce a better result.
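
    A rough illustration of fusing the two dissimilarity scores follows; the min-max normalization, the fusion weight, and the nearest-neighbour form of "direct matching" are assumptions, and the paper's exact fusion rule may differ.

      import numpy as np

      def fused_classification(train, labels, y, lam=0.01, w=0.5):
          """Fuse direct matching and collaborative representation dissimilarities.

          train  : (d, n) matrix of vectorized, unit-norm training faces (one per column)
          labels : (n,) array of class labels
          y      : (d,) vectorized, unit-norm test face
          w      : weight of the collaborative-representation score in the fusion
          """
          # Collaborative representation: ridge solution over all training samples.
          alpha = np.linalg.solve(train.T @ train + lam * np.eye(train.shape[1]),
                                  train.T @ y)
          classes = np.unique(labels)
          cr, dm = {}, {}
          for c in classes:
              idx = labels == c
              # CR dissimilarity: residual of reconstructing y from class c's portion.
              cr[c] = np.linalg.norm(y - train[:, idx] @ alpha[idx])
              # Direct-matching dissimilarity: nearest Euclidean distance within class c.
              dm[c] = np.min(np.linalg.norm(train[:, idx] - y[:, None], axis=0))

          def minmax(s):  # bring the two score types onto a comparable scale
              lo, hi = min(s.values()), max(s.values())
              return {c: (v - lo) / (hi - lo + 1e-12) for c, v in s.items()}

          cr, dm = minmax(cr), minmax(dm)
          fused = {c: w * cr[c] + (1 - w) * dm[c] for c in classes}
          return min(fused, key=fused.get)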

  2. Using Regression to Measure Holistic Face Processing Reveals a Strong Link with Face Recognition Ability

    ERIC Educational Resources Information Center

    DeGutis, Joseph; Wilmer, Jeremy; Mercado, Rogelio J.; Cohan, Sarah

    2013-01-01

    Although holistic processing is thought to underlie normal face recognition ability, widely discrepant reports have recently emerged about this link in an individual differences context. Progress in this domain may have been impeded by the widespread use of subtraction scores, which lack validity due to their contamination with control condition…

  3. Comparison of computer-based and optical face recognition paradigms

    NASA Astrophysics Data System (ADS)

    Alorf, Abdulaziz A.

    The main objectives of this thesis are to validate an improved principal components analysis (IPCA) algorithm on images; designing and simulating a digital model for image compression, face recognition and image detection by using a principal components analysis (PCA) algorithm and the IPCA algorithm; designing and simulating an optical model for face recognition and object detection by using the joint transform correlator (JTC); establishing detection and recognition thresholds for each model; comparing the performance of the PCA algorithm with the performance of the IPCA algorithm in compression, recognition, and detection; and comparing the performance of the digital model with the performance of the optical model in recognition and detection. The MATLAB(c) software was used for simulating the models. PCA is a technique used for identifying patterns in data and representing the data in order to highlight any similarities or differences. The identification of patterns in data of high dimensions (more than three dimensions) is difficult because graphical representation of the data is impossible. Therefore, PCA is a powerful method for analyzing data. IPCA is another statistical tool for identifying patterns in data; it uses information theory to improve on PCA. The joint transform correlator (JTC) is an optical correlator used for synthesizing a frequency plane filter for coherent optical systems. The IPCA algorithm, in general, behaves better than the PCA algorithm in most applications. It is better than the PCA algorithm in image compression because it obtains higher compression, more accurate reconstruction, and faster processing speed with acceptable errors; in addition, it is better than the PCA algorithm in real-time image detection due to the fact that it achieves the smallest error rate as well as remarkable speed. On the other hand, the PCA algorithm performs better than the IPCA algorithm in face recognition because it offers

  4. A motivational determinant of facial emotion recognition: regulatory focus affects recognition of emotions in faces.

    PubMed

    Sassenrath, Claudia; Sassenberg, Kai; Ray, Devin G; Scheiter, Katharina; Jarodzka, Halszka

    2014-01-01

    Two studies examined an unexplored motivational determinant of facial emotion recognition: observer regulatory focus. It was predicted that a promotion focus would enhance facial emotion recognition relative to a prevention focus because the attentional strategies associated with promotion focus enhance performance on well-learned or innate tasks - such as facial emotion recognition. In Study 1, a promotion or a prevention focus was experimentally induced and better facial emotion recognition was observed in a promotion focus compared to a prevention focus. In Study 2, individual differences in chronic regulatory focus were assessed and attention allocation was measured using eye tracking during the facial emotion recognition task. Results indicated that the positive relation between a promotion focus and facial emotion recognition is mediated by shorter fixation duration on the face, which reflects a pattern of attention allocation matched to the eager strategy in a promotion focus (i.e., striving to make hits). A prevention focus had an impact neither on perceptual processing nor on facial emotion recognition. Taken together, these findings demonstrate important mechanisms and consequences of observer motivational orientation for facial emotion recognition. PMID:25380247

  5. The Effect of Inversion on Face Recognition in Adults with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Hedley, Darren; Brewer, Neil; Young, Robyn

    2015-01-01

    Face identity recognition has widely been shown to be impaired in individuals with autism spectrum disorders (ASD). In this study we examined the influence of inversion on face recognition in 26 adults with ASD and 33 age and IQ matched controls. Participants completed a recognition test comprising upright and inverted faces. Participants with ASD…

  6. The Role of Active Exploration of 3D Face Stimuli on Recognition Memory of Facial Information

    ERIC Educational Resources Information Center

    Liu, Chang Hong; Ward, James; Markall, Helena

    2007-01-01

    Research on face recognition has mainly relied on methods in which observers are relatively passive viewers of face stimuli. This study investigated whether active exploration of three-dimensional (3D) face stimuli could facilitate recognition memory. A standard recognition task and a sequential matching task were employed in a yoked design.…

  7. Presentation attack detection for face recognition using light field camera.

    PubMed

    Raghavendra, R; Raja, Kiran B; Busch, Christoph

    2015-03-01

    The vulnerability of face recognition systems is a growing concern that has drawn interest from both the academic and research communities. Despite the availability of a broad range of face presentation attack detection (PAD) (or countermeasure or antispoofing) schemes, there exists no superior PAD technique, due to the evolution of sophisticated presentation attacks (or spoof attacks). In this paper, we present a new perspective for face presentation attack detection by introducing the light field camera (LFC). Since an LFC can record the direction of each incoming ray in addition to its intensity, it exhibits the unique characteristic of rendering multiple depth (or focus) images in a single capture. Thus, we present a novel approach that involves exploring the variation of the focus between multiple depth (or focus) images rendered by the LFC, which in turn can be used to reveal presentation attacks. To this end, we first collect a new face artefact database using the LFC that comprises 80 subjects. Face artefacts are generated by simulating two widely used attacks, photo print and electronic screen attacks. Extensive experiments carried out on the light field face artefact database have revealed the outstanding performance of the proposed PAD scheme when benchmarked against various well-established state-of-the-art schemes. PMID:25622320
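
    One simple way to realize the idea of measuring focus variation across the rendered depth images is sketched below; the variance-of-Laplacian sharpness measure and the decision threshold are assumptions, not the features used in the paper.

      import numpy as np
      from scipy.ndimage import laplace

      def focus_variation_score(depth_stack):
          """Liveness cue from a light-field camera's stack of refocused images.

          depth_stack : iterable of 2-D grayscale images rendered at different focus depths.
          A real (3-D) face changes sharpness noticeably from one focus plane to the next,
          while a flat photo print or screen stays roughly uniformly in or out of focus.
          """
          sharpness = np.array([np.var(laplace(img.astype(float))) for img in depth_stack])
          # Spread of the sharpness measure across depths, relative to its mean level.
          return sharpness.std() / (sharpness.mean() + 1e-12)

      def is_presentation_attack(depth_stack, threshold=0.05):
          # Hypothetical threshold; in practice it would be tuned on an artefact database.
          return focus_variation_score(depth_stack) < threshold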

  8. Effects of compression and individual variability on face recognition performance

    NASA Astrophysics Data System (ADS)

    McGarry, Delia P.; Arndt, Craig M.; McCabe, Steven A.; D'Amato, Donald P.

    2004-08-01

    The Enhanced Border Security and Visa Entry Reform Act of 2002 requires that the Visa Waiver Program be available only to countries that have a program to issue to their nationals machine-readable passports incorporating biometric identifiers complying with applicable standards established by the International Civil Aviation Organization (ICAO). In June 2002, the New Technologies Working Group of ICAO unanimously endorsed the use of face recognition (FR) as the globally interoperable biometric for machine-assisted identity confirmation with machine-readable travel documents (MRTDs), although Member States may elect to use fingerprint and/or iris recognition as additional biometric technologies. The means and formats are still being developed through which biometric information might be stored in the constrained space of integrated circuit chips embedded within travel documents. Such information will be stored in an open, yet unalterable and very compact format, probably as digitally signed and efficiently compressed images. The objective of this research is to characterize the many factors that affect FR system performance with respect to the legislated mandates concerning FR. A photograph acquisition environment and a commercial face recognition system have been installed at Mitretek, and over 1,400 images have been collected of volunteers. The image database and FR system are being used to analyze the effects of lossy image compression, individual differences, such as eyeglasses and facial hair, and the acquisition environment on FR system performance. Images are compressed by varying ratios using JPEG2000 to determine the trade-off points between recognition accuracy and compression ratio. The various acquisition factors that contribute to differences in FR system performance among individuals are also being measured. The results of this study will be used to refine and test efficient face image interchange standards that ensure highly accurate recognition, both

  9. Markov Network-Based Unified Classifier for Face Recognition.

    PubMed

    Hwang, Wonjun; Kim, Junmo

    2015-11-01

    In this paper, we propose a novel unifying framework using a Markov network to learn the relationships among multiple classifiers. In face recognition, we assume that we have several complementary classifiers available, and assign observation nodes to the features of a query image and hidden nodes to those of gallery images. Under the Markov assumption, we connect each hidden node to its corresponding observation node and the hidden nodes of neighboring classifiers. For each observation-hidden node pair, we collect the set of gallery candidates most similar to the observation instance, and capture the relationship between the hidden nodes in terms of a similarity matrix among the retrieved gallery images. Posterior probabilities in the hidden nodes are computed using the belief propagation algorithm, and we use marginal probability as the new similarity value of the classifier. The novelty of our proposed framework lies in the method that considers classifier dependence using the results of each neighboring classifier. We present the extensive evaluation results for two different protocols, known and unknown image variation tests, using four publicly available databases: 1) the Face Recognition Grand Challenge ver. 2.0; 2) XM2VTS; 3) BANCA; and 4) Multi-PIE. The result shows that our framework consistently yields improved recognition rates in various situations. PMID:26219095

  10. Thermal-to-visible face recognition using multiple kernel learning

    NASA Astrophysics Data System (ADS)

    Hu, Shuowen; Gurram, Prudhvi; Kwon, Heesung; Chan, Alex L.

    2014-06-01

    Recognizing faces acquired in the thermal spectrum from a gallery of visible face images is a desired capability for the military and homeland security, especially for nighttime surveillance and intelligence gathering. However, thermal-to-visible face recognition is a highly challenging problem, due to the large modality gap between thermal and visible imaging. In this paper, we propose a thermal-to-visible face recognition approach based on multiple kernel learning (MKL) with support vector machines (SVMs). We first subdivide the face into non-overlapping spatial regions or blocks using a method based on coalitional game theory. For comparison purposes, we also investigate uniform spatial subdivisions. Following this subdivision, histogram of oriented gradients (HOG) features are extracted from each block and utilized to compute a kernel for each region. We apply sparse multiple kernel learning (SMKL), which is an MKL-based approach that learns a set of sparse kernel weights, as well as the decision function of a one-vs-all SVM classifier for each of the subjects in the gallery. We also apply equal kernel weights (non-sparse) and obtain one-vs-all SVM models for the same subjects in the gallery. Only visible images of each subject are used for MKL training, while thermal images are used as probe images during testing. With the subdivision generated by game theory, we achieved a Rank-1 identification rate of 50.7% for SMKL and 93.6% for equal kernel weighting using a multimodal dataset of 65 subjects. With uniform subdivisions, we achieved a Rank-1 identification rate of 88.3% for SMKL, but 92.7% for equal kernel weighting.
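
    The equal-kernel-weight variant with uniform subdivision can be sketched as follows; the grid size, the RBF kernel and its gamma, and the use of a single multiclass SVM (rather than explicit one-vs-all models per gallery subject) are simplifying assumptions, and face images are assumed large enough for each block to yield a HOG vector.

      import numpy as np
      from skimage.feature import hog
      from sklearn.metrics.pairwise import rbf_kernel
      from sklearn.svm import SVC

      def block_hog_features(image, grid=(4, 4)):
          """One HOG vector per block of a uniform grid over a grayscale face image
          (a uniform subdivision, not the game-theoretic one from the paper)."""
          h, w = image.shape
          bh, bw = h // grid[0], w // grid[1]
          blocks = []
          for i in range(grid[0]):
              for j in range(grid[1]):
                  patch = image[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
                  blocks.append(hog(patch, orientations=9, pixels_per_cell=(8, 8),
                                    cells_per_block=(2, 2)))
          return blocks

      def combined_kernel(feats_a, feats_b, gamma=0.1):
          """Equal-weight combination: average one RBF kernel per spatial block."""
          kernels = [rbf_kernel(np.vstack([f[b] for f in feats_a]),
                                np.vstack([f[b] for f in feats_b]), gamma=gamma)
                     for b in range(len(feats_a[0]))]
          return np.mean(kernels, axis=0)

      def recognize(gallery_imgs, gallery_labels, probe_imgs):
          gallery = [block_hog_features(im) for im in gallery_imgs]
          probes = [block_hog_features(im) for im in probe_imgs]
          # Multiclass SVM on the precomputed combined kernel; the paper instead trains
          # one one-vs-all SVM per gallery subject.
          clf = SVC(kernel="precomputed").fit(combined_kernel(gallery, gallery),
                                              gallery_labels)
          return clf.predict(combined_kernel(probes, gallery))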

  11. Log-Gabor Weber descriptor for face recognition

    NASA Astrophysics Data System (ADS)

    Li, Jing; Sang, Nong; Gao, Changxin

    2015-09-01

    The Log-Gabor transform, which is suitable for analyzing gradually changing data such as iris and face images, has been widely used in image processing, pattern recognition, and computer vision. In most cases, only the magnitude or phase information of the Log-Gabor transform is considered. However, the complementary effect obtained by combining magnitude and phase information simultaneously for image-feature extraction has not been systematically explored in existing works. We propose a local image descriptor for face recognition, called the Log-Gabor Weber descriptor (LGWD). The novelty of our LGWD is twofold: (1) to fully utilize the information from the magnitude and phase features of the multiscale, multi-orientation Log-Gabor transform, we apply the Weber local binary pattern operator to each transform response. (2) The encoded Log-Gabor magnitude and phase information are fused at the feature level by utilizing a kernel canonical correlation analysis strategy, considering that feature-level information fusion is effective when the modalities are correlated. Experimental results on the AR, Extended Yale B, and UMIST face databases, compared with those available from recent experiments reported in the literature, show that our descriptor yields a better performance than state-of-the-art methods.

  12. Driver face recognition as a security and safety feature

    NASA Astrophysics Data System (ADS)

    Vetter, Volker; Giefing, Gerd-Juergen; Mai, Rudolf; Weisser, Hubert

    1995-09-01

    We present a driver face recognition system for comfortable access control and individual settings of automobiles. The primary goals are the prevention of car thefts and heavy accidents caused by unauthorized use (joy-riders), as well as the increase of safety through optimal settings, e.g. of the mirrors and the seat position. The person sitting on the driver's seat is observed automatically by a small video camera in the dashboard. All he has to do is to behave cooperatively, i.e. to look into the camera. A classification system validates his access. Only after a positive identification, the car can be used and the driver-specific environment (e.g. seat position, mirrors, etc.) may be set up to ensure the driver's comfort and safety. The driver identification system has been integrated in a Volkswagen research car. Recognition results are presented.

  13. Facial emotion recognition deficits: The new face of schizophrenia.

    PubMed

    Behere, Rishikesh V

    2015-01-01

    Schizophrenia has classically been described as having positive, negative, and cognitive symptom dimensions. Emerging evidence strongly supports a fourth dimension of social cognitive symptoms, with facial emotion recognition deficits (FERD) representing a new face in our understanding of this complex disorder. FERD have been described as one of the important deficits in schizophrenia and could be trait markers for the disorder. FERD are associated with socio-occupational dysfunction and hence are of important clinical relevance. This review discusses FERD in schizophrenia, challenges in its assessment in our cultural context, and its implications for understanding neurobiological mechanisms and for clinical applications. PMID:26600574

  14. Facial emotion recognition deficits: The new face of schizophrenia

    PubMed Central

    Behere, Rishikesh V.

    2015-01-01

    Schizophrenia has classically been described as having positive, negative, and cognitive symptom dimensions. Emerging evidence strongly supports a fourth dimension of social cognitive symptoms, with facial emotion recognition deficits (FERD) representing a new face in our understanding of this complex disorder. FERD have been described as one of the important deficits in schizophrenia and could be trait markers for the disorder. FERD are associated with socio-occupational dysfunction and hence are of important clinical relevance. This review discusses FERD in schizophrenia, challenges in its assessment in our cultural context, and its implications for understanding neurobiological mechanisms and for clinical applications. PMID:26600574

  15. Face recognition: Eigenface, elastic matching, and neural nets

    SciTech Connect

    Zhang, J.; Yan, Y.; Lades, M.

    1997-09-01

    This paper is a comparative study of three recently proposed algorithms for face recognition: eigenface, autoassociation and classification neural nets, and elastic matching. After these algorithms were analyzed under a common statistical decision framework, they were evaluated experimentally on four individual databases, each with a moderate subject size, and a combined database with more than a hundred different subjects. Analysis and experimental results indicate that the eigenface algorithm, which is essentially a minimum distance classifier, works well when lighting variation is small. Its performance deteriorates significantly as lighting variation increases. The elastic matching algorithm, on the other hand, is insensitive to lighting, face position, and expression variations and therefore is more versatile. The performance of the autoassociation and classification nets is upper bounded by that of the eigenface but is more difficult to implement in practice.
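
    The eigenface pipeline characterized above as "essentially a minimum distance classifier" can be sketched as follows; the number of retained components is an assumption.

      import numpy as np

      def train_eigenfaces(train, n_components=50):
          """train : (d, n) matrix with one vectorized gallery face per column."""
          mean = train.mean(axis=1, keepdims=True)
          centered = train - mean
          # Left singular vectors of the centered data matrix are the eigenfaces.
          U, _, _ = np.linalg.svd(centered, full_matrices=False)
          eigenfaces = U[:, :n_components]
          gallery_coords = eigenfaces.T @ centered
          return mean, eigenfaces, gallery_coords

      def recognize(face, mean, eigenfaces, gallery_coords, labels):
          """Minimum-distance classification in the eigenface subspace."""
          coords = eigenfaces.T @ (face[:, None] - mean)
          distances = np.linalg.norm(gallery_coords - coords, axis=0)
          return labels[int(np.argmin(distances))]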

  16. Effects of distance on face recognition: implications for eyewitness identification.

    PubMed

    Lampinen, James Michael; Erickson, William Blake; Moore, Kara N; Hittson, Aaron

    2014-12-01

    Eyewitnesses sometimes view faces from a distance, but little research has examined the accuracy of witnesses as a function of distance. The purpose of the present project is to examine the relationship between identification accuracy and distance under carefully controlled conditions. This is one of the first studies to examine the ability to recognize faces of strangers at a distance under free-field conditions. Participants viewed eight live human targets, displayed at one of six outdoor distances that varied between 5 and 40 yards. Participants were shown 16 photographs, 8 of the previously viewed targets and 8 of nonviewed foils that matched a verbal description of the target counterpart. Participants rated their confidence of having seen or not having seen each individual on an 8-point scale. Long distances were associated with poor recognition memory and response bias shifts. PMID:24820456

  17. Quality labeled faces in the wild (QLFW): a database for studying face recognition in real-world environments

    NASA Astrophysics Data System (ADS)

    Karam, Lina J.; Zhu, Tong

    2015-03-01

    The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.

  18. A Comparative Study of 2D PCA Face Recognition Method with Other Statistically Based Face Recognition Methods

    NASA Astrophysics Data System (ADS)

    Senthilkumar, R.; Gnanamurthy, R. K.

    2015-07-01

    In this paper, two-dimensional principal component analysis (2D PCA) is compared with other algorithms like 1D PCA, Fisher discriminant analysis (FDA), independent component analysis (ICA) and Kernel PCA (KPCA) which are used for image representation and face recognition. As opposed to PCA, 2D PCA is based on 2D image matrices rather than 1D vectors, so the image matrix does not need to be transformed into a vector prior to feature extraction. Instead, an image covariance matrix is constructed directly from the original image matrices and its eigenvectors are derived for image feature extraction. To test 2D PCA and evaluate its performance, a series of experiments are performed on three face image databases: the ORL, Senthil, and Yale face databases. The recognition rate across all trials was higher using 2D PCA than PCA, FDA, ICA, and KPCA. The experimental results also indicated that the extraction of image features is computationally more efficient using 2D PCA than PCA.
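
    A compact sketch of the 2D PCA feature extraction summarized above, in which the image covariance matrix is built directly from the 2D image matrices; the number of projection vectors and the column-wise matrix distance used for matching are assumptions.

      import numpy as np

      def fit_2dpca(images, n_components=10):
          """images : (n, h, w) stack of training face images.
          Returns the leading eigenvectors of the (w x w) image covariance matrix."""
          mean = images.mean(axis=0)
          centered = images - mean
          # G = (1/n) * sum_i (A_i - mean)^T (A_i - mean), built directly from 2-D matrices.
          G = np.einsum('nhw,nhv->wv', centered, centered) / images.shape[0]
          eigvals, eigvecs = np.linalg.eigh(G)
          order = np.argsort(eigvals)[::-1]
          return eigvecs[:, order[:n_components]]          # (w, n_components) projection axes

      def project_2dpca(image, axes):
          """The 2D PCA feature is an (h x n_components) matrix, not a single vector."""
          return image @ axes

      def nearest_neighbour(feature, gallery_features, gallery_labels):
          # Matrix-to-matrix distance: sum of column-wise Euclidean distances.
          dists = [np.linalg.norm(feature - g, axis=0).sum() for g in gallery_features]
          return gallery_labels[int(np.argmin(dists))]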

  19. Learning the spherical harmonic features for 3-D face recognition.

    PubMed

    Liu, Peijiang; Wang, Yunhong; Huang, Di; Zhang, Zhaoxiang; Chen, Liming

    2013-03-01

    In this paper, a competitive method for 3-D face recognition (FR) using spherical harmonic features (SHF) is proposed. With this solution, 3-D face models are characterized by the energies contained in spherical harmonics with different frequencies, thereby enabling the capture of both gross shape and fine surface details of a 3-D facial surface. This is in clear contrast to most 3-D FR techniques which are either holistic or feature based, using local features extracted from distinctive points. First, 3-D face models are represented in a canonical representation, namely, spherical depth map, by which SHF can be calculated. Then, considering the predictive contribution of each SHF feature, especially in the presence of facial expression and occlusion, feature selection methods are used to improve the predictive performance and provide faster and more cost-effective predictors. Experiments have been carried out on three public 3-D face datasets, SHREC2007, FRGC v2.0, and Bosphorus, with increasing difficulties in terms of facial expression, pose, and occlusion, and which demonstrate the effectiveness of the proposed method. PMID:23060332

  20. 3D Multi-Spectrum Sensor System with Face Recognition

    PubMed Central

    Kim, Joongrock; Yu, Sunjin; Kim, Ig-Jae; Lee, Sangyoun

    2013-01-01

    This paper presents a novel three-dimensional (3D) multi-spectrum sensor system, which combines a 3D depth sensor and multiple optical sensors for different wavelengths. Various image sensors, such as visible, infrared (IR) and 3D sensors, have been introduced into the commercial market. Since each sensor has its own advantages under various environmental conditions, the performance of an application depends highly on selecting the correct sensor or combination of sensors. In this paper, a sensor system, which we will refer to as a 3D multi-spectrum sensor system, which comprises three types of sensors, visible, thermal-IR and time-of-flight (ToF), is proposed. Since the proposed system integrates information from each sensor into one calibrated framework, the optimal sensor combination for an application can be easily selected, taking into account all combinations of sensors information. To demonstrate the effectiveness of the proposed system, a face recognition system with light and pose variation is designed. With the proposed sensor system, the optimal sensor combination, which provides new effectively fused features for a face recognition system, is obtained. PMID:24072025

  1. Simultaneous Versus Sequential Presentation in Testing Recognition Memory for Faces.

    PubMed

    Finley, Jason R; Roediger, Henry L; Hughes, Andrea D; Wahlheim, Christopher N; Jacoby, Larry L

    2015-01-01

    Three experiments examined the issue of whether faces could be better recognized in a simultaneous test format (2-alternative forced choice [2AFC]) or a sequential test format (yes-no). All experiments showed that when target faces were present in the test, the simultaneous procedure led to superior performance (area under the ROC curve), whether lures were high or low in similarity to the targets. However, when a target-absent condition was used in which no lures resembled the targets but the lures were similar to each other, the simultaneous procedure yielded higher false alarm rates (Experiments 2 and 3) and worse overall performance (Experiment 3). This pattern persisted even when we excluded responses that participants opted to withhold rather than volunteer. We conclude that for the basic recognition procedures used in these experiments, simultaneous presentation of alternatives (2AFC) generally leads to better discriminability than does sequential presentation (yes-no) when a target is among the alternatives. However, our results also show that the opposite can occur when there is no target among the alternatives. An important future step is to see whether these patterns extend to more realistic eyewitness lineup procedures. The pictures used in the experiment are available online at http://www.press.uillinois.edu/journals/ajp/media/testing_recognition/. PMID:26255438

  2. Face recognition using 4-PSK joint transform correlation

    NASA Astrophysics Data System (ADS)

    Moniruzzaman, Md.; Alam, Mohammad S.

    2016-04-01

    This paper presents an efficient phase-encoded and 4-phase shift keying (PSK)-based fringe-adjusted joint transform correlation (FJTC) technique for face recognition applications. The proposed technique uses phase encoding and a 4-channel phase shifting method on the reference image, which can be pre-calculated without affecting the system processing speed. The 4-channel PSK step eliminates the unwanted zero-order term and the autocorrelation among multiple similar input-scene objects, while yielding an enhanced cross-correlation output. For each channel, discrete wavelet decomposition preprocessing has been used to accommodate the impact of various 3D facial expressions, effects of noise, and illumination variations. The performance of the proposed technique has been tested using various image datasets, such as Yale and Extended Yale B, under different environments such as illumination variation and 3D changes in facial expressions. The test results show that the proposed technique yields significantly better performance when compared to existing JTC-based face recognition techniques.

  3. 3D multi-spectrum sensor system with face recognition.

    PubMed

    Kim, Joongrock; Yu, Sunjin; Kim, Ig-Jae; Lee, Sangyoun

    2013-01-01

    This paper presents a novel three-dimensional (3D) multi-spectrum sensor system, which combines a 3D depth sensor and multiple optical sensors for different wavelengths. Various image sensors, such as visible, infrared (IR) and 3D sensors, have been introduced into the commercial market. Since each sensor has its own advantages under various environmental conditions, the performance of an application depends highly on selecting the correct sensor or combination of sensors. In this paper, a sensor system, which we will refer to as a 3D multi-spectrum sensor system, which comprises three types of sensors, visible, thermal-IR and time-of-flight (ToF), is proposed. Since the proposed system integrates information from each sensor into one calibrated framework, the optimal sensor combination for an application can be easily selected, taking into account all combinations of sensors information. To demonstrate the effectiveness of the proposed system, a face recognition system with light and pose variation is designed. With the proposed sensor system, the optimal sensor combination, which provides new effectively fused features for a face recognition system, is obtained. PMID:24072025

  4. A blur-robust descriptor with applications to face recognition.

    PubMed

    Gopalan, Raghuraman; Taheri, Sima; Turaga, Pavan; Chellappa, Rama

    2012-06-01

    Understanding the effect of blur is an important problem in unconstrained visual analysis. We address this problem in the context of image-based recognition by a fusion of image-formation models and differential geometric tools. First, we discuss the space spanned by blurred versions of an image and then, under certain assumptions, provide a differential geometric analysis of that space. More specifically, we create a subspace resulting from convolution of an image with a complete set of orthonormal basis functions of a prespecified maximum size (that can represent an arbitrary blur kernel within that size), and show that the corresponding subspaces created from a clean image and its blurred versions are equal under the ideal case of zero noise and some assumptions on the properties of blur kernels. We then study the practical utility of this subspace representation for the problem of direct recognition of blurred faces by viewing the subspaces as points on the Grassmann manifold and present methods to perform recognition for cases where the blur is both homogenous and spatially varying. We empirically analyze the effect of noise, as well as the presence of other facial variations between the gallery and probe images, and provide comparisons with existing approaches on standard data sets. PMID:22231594

  5. Infrared face recognition based on binary particle swarm optimization and SVM-wrapper model

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Liu, Guodong

    2015-10-01

    Infrared facial imaging, being light-independent and not vulnerable to variations in facial skin, expression, and posture, can avoid or limit the drawbacks of face recognition in visible light. Robust feature selection and representation is a key issue for infrared face recognition research. This paper proposes a novel infrared face recognition method based on the local binary pattern (LBP). LBP can improve the robustness of infrared face recognition under different environmental conditions. How to make full use of the discriminant ability of LBP patterns is an important problem. A search algorithm combining binary particle swarm optimization with an SVM wrapper is used to find the most discriminative subset of LBP features. Experimental results show that the proposed method outperforms traditional LBP-based infrared face recognition methods and can significantly improve infrared face recognition performance.
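
    A condensed sketch of the wrapper idea: block-wise LBP histograms as features and a binary particle swarm that searches for the subset maximizing cross-validated SVM accuracy. The PSO constants, the linear SVM, the grid size, and the 3-fold cross-validation are assumptions, not the authors' settings.

      import numpy as np
      from skimage.feature import local_binary_pattern
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      def lbp_features(image, P=8, R=1, grid=(4, 4)):
          """Concatenated uniform-LBP histograms over a grid of blocks of an infrared face."""
          codes = local_binary_pattern(image, P, R, method="uniform")
          h, w = codes.shape
          bh, bw = h // grid[0], w // grid[1]
          feats = []
          for i in range(grid[0]):
              for j in range(grid[1]):
                  block = codes[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
                  hist, _ = np.histogram(block, bins=P + 2, range=(0, P + 2), density=True)
                  feats.append(hist)
          return np.concatenate(feats)

      def bpso_select(X, y, n_particles=10, n_iter=20, seed=0):
          """Binary PSO wrapper: search for the feature subset that maximizes the
          cross-validated accuracy of an SVM trained on the selected LBP features."""
          rng = np.random.default_rng(seed)
          d = X.shape[1]
          pos = rng.integers(0, 2, size=(n_particles, d))
          vel = rng.normal(size=(n_particles, d))

          def fitness(mask):
              if mask.sum() == 0:
                  return 0.0
              return cross_val_score(SVC(kernel="linear"),
                                     X[:, mask.astype(bool)], y, cv=3).mean()

          pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
          gbest = pbest[np.argmax(pbest_fit)].copy()
          for _ in range(n_iter):
              r1, r2 = rng.random((n_particles, d)), rng.random((n_particles, d))
              vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
              pos = (rng.random((n_particles, d)) < 1.0 / (1.0 + np.exp(-vel))).astype(int)
              fit = np.array([fitness(p) for p in pos])
              improved = fit > pbest_fit
              pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
              gbest = pbest[np.argmax(pbest_fit)].copy()
          return gbest.astype(bool)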

  6. Examination of Consumption of Processing Performance in Face Recognition on Working Memory

    NASA Astrophysics Data System (ADS)

    Yonemura, Keiichi; Sugiura, Akihiko

    In this study, we examined the consumption of processing resources by face recognition in working memory. Selective interference was induced using a dual-task method with matching-to-sample, and we assessed the processing delay and the percentage of correct responses. As recognition categories, we used simple figures, language, objects, scenes, and faces (chosen with everyday life and vascular dementia in mind). The experimental results showed that, among these categories, face recognition consumes the most processing resources in working memory. Using the relationship between the processing resources consumed by face recognition and by the other recognition categories in working memory, we hope to support the assessment of vascular dementia, in which frontal lobe dysfunction is observed.

  7. Ambient temperature normalization for infrared face recognition based on the second-order polynomial model

    NASA Astrophysics Data System (ADS)

    Wang, Zhengzi

    2015-08-01

    The influence of ambient temperature is a major challenge for robust infrared face recognition. This paper proposes a new ambient temperature normalization algorithm to improve the performance of infrared face recognition under variable ambient temperatures. Based on statistical regression theory, a second-order polynomial model is learned to describe the ambient temperature's impact on the infrared face image. Then, the infrared image is normalized to a reference ambient temperature using the second-order polynomial model. Finally, this normalization method is applied to infrared face recognition to verify its efficiency. The experiments demonstrate that the proposed temperature normalization method is feasible and can significantly improve the robustness of infrared face recognition.
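
    A minimal sketch of such a normalization, assuming a per-pixel quadratic model and an additive correction toward a reference temperature; the reference temperature and the use of np.polyfit are illustrative choices rather than the paper's exact formulation.

      import numpy as np

      def fit_temperature_model(images, temps):
          """Fit a per-pixel second-order polynomial mapping ambient temperature to intensity.

          images : (n, h, w) infrared face images captured at different ambient temperatures
          temps  : (n,) ambient temperature for each capture
          Returns coefficient maps of shape (3, h, w) for the quadratic, linear, and constant terms.
          """
          n, h, w = images.shape
          coeffs = np.polyfit(temps, images.reshape(n, -1), deg=2)   # fits every pixel column
          return coeffs.reshape(3, h, w)

      def normalize_to_reference(image, temp, coeffs, temp_ref=25.0):
          """Shift an image captured at `temp` to the reference ambient temperature by
          removing the modeled temperature effect (an additive correction is assumed)."""
          a, b, c = coeffs
          predicted_now = a * temp ** 2 + b * temp + c
          predicted_ref = a * temp_ref ** 2 + b * temp_ref + c
          return image - predicted_now + predicted_ref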

  8. Low Resolution Face Recognition Across Variations in Pose and Illumination.

    PubMed

    Mudunuri, Sivaram Prasad; Biswas, Soma

    2016-05-01

    We propose a completely automatic approach for recognizing low resolution face images captured in uncontrolled environment. The approach uses multidimensional scaling to learn a common transformation matrix for the entire face which simultaneously transforms the facial features of the low resolution and the high resolution training images such that the distance between them approximates the distance had both the images been captured under the same controlled imaging conditions. Stereo matching cost is used to obtain the similarity of two images in the transformed space. Though this gives very good recognition performance, the time taken for computing the stereo matching cost is significant. To overcome this limitation, we propose a reference-based approach in which each face image is represented by its stereo matching cost from a few reference images. Experimental evaluation on the real world challenging databases and comparison with the state-of-the-art super-resolution, classifier based and cross modal synthesis techniques show the effectiveness of the proposed algorithm. PMID:27046843

  9. The impact of specular highlights on 3D-2D face recognition

    NASA Astrophysics Data System (ADS)

    Christlein, Vincent; Riess, Christian; Angelopoulou, Elli; Evangelopoulos, Georgios; Kakadiaris, Ioannis

    2013-05-01

    One of the most popular forms of biometrics is face recognition. Face recognition techniques typically assume that a face exhibits Lambertian reflectance. However, a face often exhibits prominent specularities, especially in outdoor environments. These specular highlights can compromise identity authentication. In this work, we analyze the impact of such highlights on a 3D-2D face recognition system. First, we investigate three different specularity removal methods as preprocessing steps for face recognition. Then, we explicitly model facial specularities within the face recognition system using the Cook-Torrance reflectance model. In our experiments, specularity removal increases the recognition rate on an outdoor face database by about 5% at a false alarm rate of 10^-3. The integration of the Cook-Torrance model further improves these results, increasing the verification rate by 19% at a FAR of 10^-3.

  10. The Effects of Inversion and Familiarity on Face versus Body Cues to Person Recognition

    ERIC Educational Resources Information Center

    Robbins, Rachel A.; Coltheart, Max

    2012-01-01

    Extensive research has focused on face recognition, and much is known about this topic. However, much of this work seems to be based on an assumption that faces are the most important aspect of person recognition. Here we test this assumption in two experiments. We show that when viewers are forced to choose, they "do" use the face more than the…

  11. Influence of Emotional Facial Expressions on 3-5-Year-Olds' Face Recognition

    ERIC Educational Resources Information Center

    Freitag, Claudia; Schwarzer, Gudrun

    2011-01-01

    Three experiments examined 3- and 5-year-olds' recognition of faces in constant and varied emotional expressions. Children were asked to identify repeatedly presented target faces, distinguishing them from distractor faces, during an immediate recognition test and during delayed assessments after 10 min and one week. Emotional facial expression…

  12. Formal Implementation of a Performance Evaluation Model for the Face Recognition System

    PubMed Central

    Shin, Yong-Nyuo; Kim, Jason; Lee, Yong-Jun; Shin, Woochang; Choi, Jin-Young

    2008-01-01

    Due to its usability, practical applications, and lack of intrusiveness, face recognition technology, based on information derived from individuals' facial features, has been attracting considerable attention recently. Reported recognition rates of commercialized face recognition systems cannot be accepted as official recognition rates, as they are based on assumptions that are beneficial to the specific system and face database. Therefore, performance evaluation methods and tools are necessary to objectively measure the accuracy and performance of any face recognition system. In this paper, we propose and formalize a performance evaluation model for the biometric recognition system, implementing an evaluation tool for face recognition systems based on the proposed model. Furthermore, we performed objective evaluations by providing guidelines for the design and implementation of a performance evaluation system and by formalizing the performance test process. PMID:18317524

  13. A framework for the recognition of 3D faces and expressions

    NASA Astrophysics Data System (ADS)

    Li, Chao; Barreto, Armando

    2006-04-01

    Face recognition technology has been a focus both in academia and industry for the last couple of years because of its wide potential applications and its importance to meet the security needs of today's world. Most of the systems developed are based on 2D face recognition technology, which uses pictures for data processing. With the development of 3D imaging technology, 3D face recognition emerges as an alternative to overcome the difficulties inherent with 2D face recognition, i.e. sensitivity to illumination conditions and orientation positioning of the subject. But 3D face recognition still needs to tackle the problem of deformation of facial geometry that results from the expression changes of a subject. To deal with this issue, a 3D face recognition framework is proposed in this paper. It is composed of three subsystems: an expression recognition system, a system for the identification of faces with expression, and neutral face recognition system. A system for the recognition of faces with one type of expression (happiness) and neutral faces was implemented and tested on a database of 30 subjects. The results proved the feasibility of this framework.

  14. Postencoding cognitive processes in the cross-race effect: Categorization and individuation during face recognition.

    PubMed

    Ho, Michael R; Pezdek, Kathy

    2016-06-01

    The cross-race effect (CRE) describes the finding that same-race faces are recognized more accurately than cross-race faces. According to social-cognitive theories of the CRE, processes of categorization and individuation at encoding account for differential recognition of same- and cross-race faces. Recent face memory research has suggested that similar but distinct categorization and individuation processes also occur postencoding, at recognition. Using a divided-attention paradigm, in Experiments 1A and 1B we tested and confirmed the hypothesis that distinct postencoding categorization and individuation processes occur during the recognition of same- and cross-race faces. Specifically, postencoding configural divided-attention tasks impaired recognition accuracy more for same-race than for cross-race faces; on the other hand, for White (but not Black) participants, postencoding featural divided-attention tasks impaired recognition accuracy more for cross-race than for same-race faces. A social categorization paradigm used in Experiments 2A and 2B tested the hypothesis that the postencoding in-group or out-group social orientation to faces affects categorization and individuation processes during the recognition of same-race and cross-race faces. Postencoding out-group orientation to faces resulted in categorization for White but not for Black participants. This was evidenced by White participants' impaired recognition accuracy for same-race but not for cross-race out-group faces. Postencoding in-group orientation to faces had no effect on recognition accuracy for either same-race or cross-race faces. The results of Experiments 2A and 2B suggest that this social orientation facilitates White but not Black participants' individuation and categorization processes at recognition. Models of recognition memory for same-race and cross-race faces need to account for processing differences that occur at both encoding and recognition. PMID:26391033

  15. Face recognition using multiple maximum scatter difference discrimination dictionary learning

    NASA Astrophysics Data System (ADS)

    Zhu, Yanyong; Dong, Jiwen; Li, Hengjian

    2015-10-01

    A novel face recognition algorithm based on multiple maximum scatter difference discrimination dictionary learning is proposed. The dictionary used for sparse coding plays a key role in sparse representation classification. In this paper, a multiple maximum scatter difference discrimination criterion is used for dictionary learning. During dictionary learning, the multiple maximum scatter difference criterion computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The proposed algorithm is theoretically elegant and easy to compute. Extensive experimental studies conducted on the AR database and the Extended Yale B database, in comparison with existing basic sparse representation and other classification methods, show that its performance is slightly better than that of the original sparse representation methods, with lower complexity.

  16. Local ICA for the Most Wanted face recognition

    NASA Astrophysics Data System (ADS)

    Guan, Xin; Szu, Harold H.; Markowitz, Zvi

    2000-04-01

    Facial disguises of FBI Most Wanted criminals are inevitable and anticipated in our design of automatic/aided target recognition (ATR) imaging systems. For example, a man's facial hair may hide his mouth and chin but not necessarily the nose and eyes, while sunglasses will cover the eyes but not the nose, mouth, and chin. This fact motivates us to build sets of independent component analysis bases separately for each facial region across the entire alleged criminal group. Then, given an alleged criminal face, collective votes ('yes, no, abstain') are obtained from all facial regions and tallied for a potential alarm. Moreover, an innocent outsider should fall below the alarm threshold and be allowed to pass the checkpoint. A PD versus FAR curve, i.e., an ROC curve, is obtained in this way.

  17. Face learning and the emergence of view-independent face recognition: an event-related brain potential study.

    PubMed

    Zimmermann, Friederike G S; Eimer, Martin

    2013-06-01

    Recognizing unfamiliar faces is more difficult than familiar face recognition, and this has been attributed to qualitative differences in the processing of familiar and unfamiliar faces. Familiar faces are assumed to be represented by view-independent codes, whereas unfamiliar face recognition depends mainly on view-dependent low-level pictorial representations. We employed an electrophysiological marker of visual face recognition processes in order to track the emergence of view-independence during the learning of previously unfamiliar faces. Two face images showing either the same or two different individuals in the same or two different views were presented in rapid succession, and participants had to perform an identity-matching task. On trials where both faces showed the same view, repeating the face of the same individual triggered an N250r component at occipito-temporal electrodes, reflecting the rapid activation of visual face memory. A reliable N250r component was also observed on view-change trials. Crucially, this view-independence emerged as a result of face learning. In the first half of the experiment, N250r components were present only on view-repetition trials but were absent on view-change trials, demonstrating that matching unfamiliar faces was initially based on strictly view-dependent codes. In the second half, the N250r was triggered not only on view-repetition trials but also on view-change trials, indicating that face recognition had now become more view-independent. This transition may be due to the acquisition of abstract structural codes of individual faces during face learning, but could also reflect the formation of associative links between sets of view-specific pictorial representations of individual faces. PMID:23583970

  18. A Reciprocal Model of Face Recognition and Autistic Traits: Evidence from an Individual Differences Perspective

    PubMed Central

    Halliday, Drew W. R.; MacDonald, Stuart W. S.; Sherf, Suzanne K.; Tanaka, James W.

    2014-01-01

    Although not a core symptom of the disorder, individuals with autism often exhibit selective impairments in their face processing abilities. Importantly, the reciprocal connection between autistic traits and face perception has rarely been examined within the typically developing population. In this study, university participants from the social sciences, physical sciences, and humanities completed a battery of measures that assessed face, object and emotion recognition abilities, general perceptual-cognitive style, and sub-clinical autistic traits (the Autism Quotient (AQ)). We employed separate hierarchical multiple regression analyses to evaluate which factors could predict face recognition scores and AQ scores. Gender, object recognition performance, and AQ scores predicted face recognition behaviour. Specifically, males, individuals with more autistic traits, and those with lower object recognition scores performed more poorly on the face recognition test. Conversely, university major, gender and face recognition performance reliably predicted AQ scores. Science majors, males, and individuals with poor face recognition skills showed more autistic-like traits. These results suggest that the broader autism phenotype is associated with lower face recognition abilities, even among typically developing individuals. PMID:24853862

  19. A reciprocal model of face recognition and autistic traits: evidence from an individual differences perspective.

    PubMed

    Halliday, Drew W R; MacDonald, Stuart W S; Scherf, K Suzanne; Sherf, Suzanne K; Tanaka, James W

    2014-01-01

    Although not a core symptom of the disorder, individuals with autism often exhibit selective impairments in their face processing abilities. Importantly, the reciprocal connection between autistic traits and face perception has rarely been examined within the typically developing population. In this study, university participants from the social sciences, physical sciences, and humanities completed a battery of measures that assessed face, object and emotion recognition abilities, general perceptual-cognitive style, and sub-clinical autistic traits (the Autism Quotient (AQ)). We employed separate hierarchical multiple regression analyses to evaluate which factors could predict face recognition scores and AQ scores. Gender, object recognition performance, and AQ scores predicted face recognition behaviour. Specifically, males, individuals with more autistic traits, and those with lower object recognition scores performed more poorly on the face recognition test. Conversely, university major, gender and face recognition performance reliably predicted AQ scores. Science majors, males, and individuals with poor face recognition skills showed more autistic-like traits. These results suggest that the broader autism phenotype is associated with lower face recognition abilities, even among typically developing individuals. PMID:24853862

  20. Face recognition: database acquisition, hybrid algorithms, and human studies

    NASA Astrophysics Data System (ADS)

    Gutta, Srinivas; Huang, Jeffrey R.; Singh, Dig; Wechsler, Harry

    1997-02-01

    One of the most important technologies absent in traditional and emerging frontiers of computing is the management of visual information. Faces are accessible `windows' into the mechanisms that govern our emotional and social lives. The corresponding face recognition tasks considered herein include: (1) surveillance, (2) CBIR, and (3) CBIR subject to correct ID (`match') displaying specific facial landmarks, such as wearing glasses. We developed robust matching (`classification') and retrieval schemes based on hybrid classifiers and showed their feasibility using the FERET database. The hybrid classifier architecture consists of an ensemble of connectionist networks -- radial basis functions -- and decision trees. The specific characteristics of our hybrid architecture include (a) query by consensus, as provided by ensembles of networks, for coping with the inherent variability of the image formation and data acquisition process, and (b) flexible and adaptive thresholds as opposed to ad hoc and hard thresholds. Experimental results proving the feasibility of our approach yield (i) 96% accuracy, using cross validation (CV), for surveillance on a database consisting of 904 images, (ii) 97% accuracy for CBIR tasks on a database of 1084 images, and (iii) 93% accuracy, using CV, for CBIR subject to correct ID match tasks on a database of 200 images.

  1. Pose-robust recognition of low-resolution face images.

    PubMed

    Biswas, Soma; Aggarwal, Gaurav; Flynn, Patrick J; Bowyer, Kevin W

    2013-12-01

    Face images captured by surveillance cameras usually have poor resolution in addition to uncontrolled poses and illumination conditions, all of which adversely affect the performance of face matching algorithms. In this paper, we develop a completely automatic, novel approach for matching surveillance quality facial images to high-resolution images in frontal pose, which are often available during enrollment. The proposed approach uses multidimensional scaling to simultaneously transform the features from the poor quality probe images and the high-quality gallery images in such a manner that the distances between them approximate the distances had the probe images been captured in the same conditions as the gallery images. Tensor analysis is used for facial landmark localization in the low-resolution uncontrolled probe images for computing the features. Thorough evaluation on the Multi-PIE dataset and comparisons with state-of-the-art super-resolution and classifier-based approaches are performed to illustrate the usefulness of the proposed approach. Experiments on surveillance imagery further signify the applicability of the framework. We also show the usefulness of the proposed approach for the application of tracking and recognition in surveillance videos. PMID:24136439

  2. Structural attributes of the temporal lobe predict face recognition ability in youth.

    PubMed

    Li, Jun; Dong, Minghao; Ren, Aifeng; Ren, Junchan; Zhang, Jinsong; Huang, Liyu

    2016-04-01

    The face recognition ability varies across individuals. However, it remains elusive how brain anatomical structure is related to the face recognition ability in healthy subjects. In this study, we adopted voxel-based morphometry analysis and a machine learning approach to investigate the neural basis of individual face recognition ability using anatomical magnetic resonance imaging. We demonstrated that the gray matter volume (GMV) of the right ventral anterior temporal lobe (vATL), an area sensitive to face identity, is significantly and positively correlated with the subject's face recognition ability as measured by the Cambridge face memory test (CFMT) score. Furthermore, the predictive model established by balanced cross-validation combined with linear regression revealed that the right vATL GMV can predict subjects' face recognition ability. However, the subjects' Cambridge face memory test scores cannot be predicted by the GMV of the core brain regions of the face processing network, including the right occipital face area (OFA) and the right fusiform face area (FFA). Our results suggest that the right vATL may play an important role in face recognition and might provide insight into the neural mechanisms underlying face recognition deficits in patients with pathophysiological conditions such as prosopagnosia. PMID:26802942

  3. A robust face recognition algorithm under varying illumination using adaptive retina modeling

    NASA Astrophysics Data System (ADS)

    Cheong, Yuen Kiat; Yap, Vooi Voon; Nisar, Humaira

    2013-10-01

    Variation in illumination has a drastic effect on the appearance of a face image. This may hinder the automatic face recognition process. This paper presents a novel approach for face recognition under varying lighting conditions. The proposed algorithm uses adaptive retina modeling based illumination normalization. In the proposed approach, retina modeling is employed along with histogram remapping following a normal distribution. Retina modeling is an approach that combines two adaptive nonlinear equations and a difference of Gaussians filter. Two databases, the Extended Yale B database and the CMU PIE database, are used to verify the proposed algorithm. For face recognition, the Gabor Kernel Fisher Analysis method is used. Experimental results show that the recognition rate for face images with different illumination conditions is improved by the proposed approach. The average recognition rate for the Extended Yale B database is 99.16%, whereas the recognition rate for the CMU PIE database is 99.64%.
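
    Two ingredients named above, the difference-of-Gaussians filter and histogram remapping to a normal distribution, can be sketched as follows, assuming NumPy and SciPy. The adaptive nonlinear retina equations themselves are omitted; the sigma values and the synthetic image are illustrative only.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from scipy.stats import norm, rankdata

    def dog_normalize(img, sigma_inner=1.0, sigma_outer=2.0):
        # Difference of Gaussians removes slowly varying illumination.
        dog = gaussian_filter(img, sigma_inner) - gaussian_filter(img, sigma_outer)
        # Remap intensities so their histogram follows a standard normal distribution.
        ranks = rankdata(dog.ravel()) / (dog.size + 1)
        return norm.ppf(ranks).reshape(img.shape)

    face = np.random.default_rng(0).random((64, 64))   # placeholder face image
    normalized = dog_normalize(face)
    print(round(normalized.mean(), 3), round(normalized.std(), 3))
    ```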

  4. Orientation and Affective Expression Effects on Face Recognition in Williams Syndrome and Autism

    ERIC Educational Resources Information Center

    Rose, Fredric E.; Lincoln, Alan J.; Lai, Zona; Ene, Michaela; Searcy, Yvonne M.; Bellugi, Ursula

    2007-01-01

    We sought to clarify the nature of the face processing strength commonly observed in individuals with Williams syndrome (WS) by comparing the face recognition ability of persons with WS to that of persons with autism and to healthy controls under three conditions: Upright faces with neutral expressions, upright faces with varying affective…

  5. The Cambridge Face Memory Test for Children (CFMT-C): a new tool for measuring face recognition skills in childhood.

    PubMed

    Croydon, Abigail; Pimperton, Hannah; Ewing, Louise; Duchaine, Brad C; Pellicano, Elizabeth

    2014-09-01

    Face recognition ability follows a lengthy developmental course, not reaching maturity until well into adulthood. Valid and reliable assessments of face recognition memory ability are necessary to examine patterns of ability and disability in face processing, yet there is a dearth of such assessments for children. We modified a well-known test of face memory in adults, the Cambridge Face Memory Test (Duchaine & Nakayama, 2006, Neuropsychologia, 44, 576-585), to make it developmentally appropriate for children. To establish its utility, we administered either the upright or inverted versions of the computerised Cambridge Face Memory Test - Children (CFMT-C) to 401 children aged between 5 and 12 years. Our results show that the CFMT-C is sufficiently sensitive to demonstrate age-related gains in the recognition of unfamiliar upright and inverted faces, does not suffer from ceiling or floor effects, generates robust inversion effects, and is capable of detecting difficulties in face memory in children diagnosed with autism. Together, these findings indicate that the CFMT-C constitutes a new valid assessment tool for children's face recognition skills. PMID:25054837

  6. Face shape and face identity processing in behavioral variant fronto-temporal dementia: A specific deficit for familiarity and name recognition of famous faces.

    PubMed

    De Winter, François-Laurent; Timmers, Dorien; de Gelder, Beatrice; Van Orshoven, Marc; Vieren, Marleen; Bouckaert, Miriam; Cypers, Gert; Caekebeke, Jo; Van de Vliet, Laura; Goffin, Karolien; Van Laere, Koen; Sunaert, Stefan; Vandenberghe, Rik; Vandenbulcke, Mathieu; Van den Stock, Jan

    2016-01-01

    Deficits in face processing have been described in the behavioral variant of fronto-temporal dementia (bvFTD), primarily regarding the recognition of facial expressions. Less is known about face shape and face identity processing. Here we used a hierarchical strategy targeting face shape and face identity recognition in bvFTD and matched healthy controls. Participants performed 3 psychophysical experiments targeting face shape detection (Experiment 1), unfamiliar face identity matching (Experiment 2), familiarity categorization and famous face-name matching (Experiment 3). The results revealed group differences only in Experiment 3, with a deficit in the bvFTD group for both familiarity categorization and famous face-name matching. Voxel-based morphometry regression analyses in the bvFTD group revealed an association between grey matter volume of the left ventral anterior temporal lobe and familiarity recognition, while face-name matching correlated with grey matter volume of the bilateral ventral anterior temporal lobes. Subsequently, we quantified familiarity-specific and name-specific recognition deficits as the number of celebrities for which only the name or only the familiarity, respectively, was accurately recognized. Both indices were associated with grey matter volume of the bilateral anterior temporal cortices. These findings extend previous results by documenting the involvement of the left anterior temporal lobe (ATL) in familiarity detection and the right ATL in name recognition deficits in fronto-temporal lobar degeneration. PMID:27298765

  7. Internal versus external features in triggering the brain waveforms for conjunction and feature faces in recognition.

    PubMed

    Nie, Aiqing; Jiang, Jingguo; Fu, Qiao

    2014-08-20

    Previous research has found that conjunction faces (whose internal features, e.g. eyes, nose, and mouth, and external features, e.g. hairstyle and ears, are from separate studied faces) and feature faces (partial features of these are studied) can produce higher false alarms than both old and new faces (i.e. those that are exactly the same as the studied faces and those that have not been previously presented) in recognition. The event-related potentials (ERPs) that relate to conjunction and feature faces at recognition, however, have not been described as yet; in addition, the contributions of different facial features toward ERPs have not been differentiated. To address these issues, the present study compared the ERPs elicited by old faces, conjunction faces (the internal and the external features were from two studied faces), old internal feature faces (whose internal features were studied), and old external feature faces (whose external features were studied) with those of new faces separately. The results showed that old faces not only elicited an early familiarity-related FN400, but also a more anterior distributed late old/new effect that reflected recollection. Conjunction faces evoked late brain waveforms similar to those of old internal feature faces, but not to those of old external feature faces. These results suggest that, at recognition, old faces hold higher familiarity than compound faces in the profiles of ERPs and that internal facial features are more crucial than external ones in triggering the brain waveforms that are characterized as reflecting the result of familiarity. PMID:25003951

  8. The "parts and wholes" of face recognition: A review of the literature.

    PubMed

    Tanaka, James W; Simonyi, Diana

    2016-10-01

    It has been claimed that faces are recognized as a "whole" rather than by the recognition of individual parts. In a paper published in the Quarterly Journal of Experimental Psychology in 1993, Martha Farah and I attempted to operationalize the holistic claim using the part/whole task. In this task, participants studied a face and then their memory for a face part was tested in isolation and in the whole face. Consistent with the holistic view, recognition of the part was superior when tested in the whole-face condition compared to when it was tested in isolation. The "whole face" or holistic advantage was not found for faces that were inverted, or scrambled, nor for non-face objects, suggesting that holistic encoding was specific to normal, intact faces. In this paper, we reflect on the part/whole paradigm and how it has contributed to our understanding of what it means to recognize a face as a "whole" stimulus. We describe the value of the part/whole task for developing theories of holistic and non-holistic recognition of faces and objects. We discuss the research that has probed the neural substrates of holistic processing in healthy adults and people with prosopagnosia and autism. Finally, we examine how experience shapes holistic face recognition in children and recognition of own- and other-race faces in adults. The goal of this article is to summarize the research on the part/whole task and speculate on how it has informed our understanding of holistic face processing. PMID:26886495

  9. Development of Face Recognition in 5- to 15-Year-Olds

    ERIC Educational Resources Information Center

    Kinnunen, Suna; Korkman, Marit; Laasonen, Marja; Lahti-Nuuttila, Pekka

    2013-01-01

    This study focuses on the development of face recognition in typically developing preschool- and school-aged children (aged 5 to 15 years old, "n" = 611, 336 girls). Social predictors include sex differences and own-sex bias. At younger ages, the development of face recognition was rapid and became more gradual as the age increased up…

  10. Tracking and recognition face in videos with incremental local sparse representation model

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang

    2013-10-01

    This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First, a robust face tracking algorithm is proposed by employing local sparse appearance and a covariance pooling method. In the following face recognition stage, with the employment of a novel template update strategy that incorporates incremental subspace learning, our recognition algorithm adapts the template to appearance changes and reduces the influence of occlusion and illumination variation. This leads to robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition in real-world noisy videos on the YouTube database, which includes 47 celebrities. Our proposed method achieves a high face recognition rate of 95% across all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. On the challenging dataset in which faces undergo occlusion and illumination variation, and in tracking and recognition experiments under significant pose variation on the University of California, San Diego (Honda/UCSD) database, our proposed method also consistently demonstrates a high recognition rate.

  11. Face recognition and emotional valence: processing without awareness by neurologically intact participants does not simulate covert recognition in prosopagnosia.

    PubMed

    Stone, A; Valentine, T; Davis, R

    2001-06-01

    Covert face recognition in neurologically intact participants was investigated with the use of very brief stimulus presentation to prevent awareness of the stimulus. In Experiment 1, skin conductance response (SCR) to photographs of celebrity and unfamiliar faces was recorded; the faces were displayed for 220 msec and for 17 msec in a within-participants design. SCR to faces presented for 220 msec was larger and more likely to occur with familiar faces than with unfamiliar faces. Face familiarity did not affect the SCR to faces presented for 17 msec. SCR was larger for faces of good than for faces of evil celebrities presented for 17 msec, but valence did not affect SCR to faces displayed for 220 msec. In Experiment 2, associative priming was found in a face familiarity decision task when the prime face was displayed for 220 msec, but no facilitation occurred when primes were presented for 17 msec. In Experiment 3, participants were able to differentiate evil and good faces presented without awareness in a two-alternative forced-choice decision. The results provide no evidence of familiarity detection outside awareness in normal participants and suggest that, contrary to previous research, very brief presentation to neurologically intact participants is not a useful model for the types of covert recognition found in prosopagnosia. However, a response based on affective valence appears to be available from brief presentation. PMID:12467113

  12. Expression-invariant face recognition using three-dimensional weighted walkthrough and centroid distance

    NASA Astrophysics Data System (ADS)

    Liang, Yan; Zhang, Yun

    2015-09-01

    Three-dimensional (3-D) face recognition provides a potential to handle challenges caused by illumination and pose variations. However, extreme expression variations still complicate the task of recognition. An accurate and robust method for expression-invariant 3-D face recognition is proposed. A 3-D face is partitioned into a set of isogeodesic stripes and the spatial relationships of the stripes are described by 3-D weighted walkthrough and the centroid distance. Moreover, the method of the similarity measure is given. Experiments are performed on the CASIA dataset and the FRGC v2.0 dataset. The results show that our method has advantages for recognition performance despite large expression variations.

  13. Image Description with Local Patterns: An Application to Face Recognition

    NASA Astrophysics Data System (ADS)

    Zhou, Wei; Ahrary, Alireza; Kamata, Sei-Ichiro

    In this paper, we propose a novel approach for representing the local features of a digital image using 1D Local Patterns by Multi-Scans (1DLPMS). We also consider extensions and simplifications of the proposed approach for facial image analysis. The proposed approach consists of three steps. In the first step, the gray values of pixels in the image are represented as a vector giving the local neighborhood intensity distributions of the pixels. Then, multiple scans are applied to capture different spatial information in the image, with the advantage of less computation than other traditional approaches such as Local Binary Patterns (LBP). The second step encodes the local features based on different encoding rules using 1D local patterns. This transformation is expected to be less sensitive to illumination variations while preserving the appearance of images embedded in the original gray scale. In the final step, Grouped 1D Local Patterns by Multi-Scans (G1DLPMS) is applied to make the proposed approach computationally simpler and easy to extend. Next, we further formulate a boosted algorithm to extract the most discriminant local features. The evaluated results demonstrate that the proposed approach outperforms conventional approaches in terms of accuracy in applications of face recognition, gender estimation, and facial expression recognition.
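
    The multi-scan encoding can be illustrated with a small sketch in the spirit of the description above: each pixel is compared with its k predecessors along a scan order, yielding a k-bit code, and the per-scan code histograms are concatenated. The scan orders, the value of k, and the data are illustrative; the paper's exact encoding rules may differ.

    ```python
    import numpy as np

    def one_d_pattern(seq, k=3):
        """Histogram of k-bit codes comparing each element with its k predecessors."""
        codes = np.zeros(len(seq) - k, dtype=int)
        for b in range(k):
            codes = (codes << 1) | (seq[k:] >= seq[b:len(seq) - k + b]).astype(int)
        return np.bincount(codes, minlength=2 ** k)

    def descriptor(img, k=3):
        scans = [img.ravel(),        # row-major scan
                 img.T.ravel(),      # column-major scan
                 np.diagonal(img)]   # main-diagonal scan
        return np.concatenate([one_d_pattern(s, k) for s in scans])

    img = np.random.default_rng(0).integers(0, 256, size=(32, 32))
    print(descriptor(img).shape)     # 3 scans x 8-bin histograms -> (24,)
    ```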

  14. Facial expression influences recognition memory for faces: robust enhancement effect of fearful expression.

    PubMed

    Wang, Bo

    2013-04-01

    Memory for faces is important for social interactions. However, it is unclear whether negative or positive expression affects recollection and familiarity for faces and whether the effect can be modulated by retention interval. Two experiments examined the effect of emotional expression on recognition for faces at two delay conditions. In Experiment 1 participants viewed neutral, positive, and negative (including fearful, sad, and angry) faces and made a gender discrimination for each face. In Experiment 2 they viewed and made gender discriminations for neutral, positive, and fearful faces. Following the incidental learning they were randomly assigned to the immediate and 24-hour (24-h) delay conditions. Findings from the two experiments are as follows: (1) In the immediate and 24-h delay conditions overall recognition and recollection for negative faces (fearful faces in Experiment 2) were better than for neutral faces and positive faces. (2) In the immediate and 24-h delay conditions recollection and familiarity for positive faces were equivalent to those for neutral faces. (3) The enhancement effect of fearful expression on recognition and recollection was not due to greater discriminability between the old and new faces in the fearful category. The results indicate that recognition and recollection for faces and the enhancement effect of fearful expression are robust within 24 hours. PMID:23016604

  15. Kruskal-Wallis-Based Computationally Efficient Feature Selection for Face Recognition

    PubMed Central

    Hussain, Ayyaz; Basit, Abdul

    2014-01-01

    Face recognition has attained increasing importance in today's technological world, and so have face recognition applications. Most of the existing work uses frontal face images for classification; however, these techniques fail when applied to real-world face images. The proposed technique effectively extracts the prominent facial features. Because many of the extracted features are redundant and do not contribute to representing the face, a computationally efficient algorithm is used to select the more discriminative face features. The selected features are then passed to the classification step, in which different classifiers are ensembled to enhance the recognition accuracy, as a single classifier is unable to achieve high accuracy. Experiments are performed on standard face database images and the results are compared with existing techniques. PMID:24967437

  16. Understanding gender bias in face recognition: effects of divided attention at encoding.

    PubMed

    Palmer, Matthew A; Brewer, Neil; Horry, Ruth

    2013-03-01

    Prior research has demonstrated a female own-gender bias in face recognition, with females better at recognizing female faces than male faces. We explored the basis for this effect by examining the effect of divided attention during encoding on females' and males' recognition of female and male faces. For female participants, divided attention impaired recognition performance for female faces to a greater extent than male faces in a face recognition paradigm (Study 1; N=113) and an eyewitness identification paradigm (Study 2; N=502). Analysis of remember-know judgments (Study 2) indicated that divided attention at encoding selectively reduced female participants' recollection of female faces at test. For male participants, divided attention selectively reduced recognition performance (and recollection) for male stimuli in Study 2, but had similar effects on recognition of male and female faces in Study 1. Overall, the results suggest that attention at encoding contributes to the female own-gender bias by facilitating the later recollection of female faces. PMID:23422290

  17. Laterality effects in normal subjects' recognition of familiar faces, voices and names. Perceptual and representational components.

    PubMed

    Gainotti, Guido

    2013-06-01

    A growing body of evidence suggests that a different hemispheric specialization may exist for different modalities of person identification, with a prevalent right lateralization of the sensory-motor systems allowing face and voice recognition and a prevalent left lateralization of the name recognition system. However, the data supporting this claim come much more from disorders of familiar-people recognition observed in patients with focal brain lesions than from experimental studies conducted in normal subjects. These latter data are sparse and in part controversial, but are important from the theoretical point of view, because it is not clear whether hemispheric asymmetries in the recognition of faces, voices and names are limited to their perceptual processing, or also extend to the domain of their cortical representations. The present review tries to clarify these issues, taking into account investigations that have evaluated, in normal subjects, laterality effects in the recognition of familiar names, faces and voices by means of behavioural, neurophysiological and neuroimaging techniques. Results of this survey indicate that: (a) recognition of familiar faces and voices shows a prevalent right lateralization, whereas recognition of familiar names is lateralized to the left hemisphere; (b) the right hemisphere prevalence is greater in tasks involving familiar than unfamiliar faces and voices, and the left hemisphere superiority is greater in the recognition of familiar than unfamiliar names. Taken together, these data suggest that hemispheric asymmetries in the recognition of faces, voices and names are not limited to their perceptual processing, but also extend to the domain of their cortical representations. PMID:23542500

  18. Novel image fusion scheme based on maximum ratio combining for robust multispectral face recognition

    NASA Astrophysics Data System (ADS)

    Omri, Faten; Foufou, Sebti

    2015-04-01

    Recently, research in multispectral face recognition has focused on developing efficient frameworks for improving face recognition performance at close-up distances. However, few studies have investigated multispectral face images captured at long distance. In fact, great challenges still exist in recognizing human faces in images captured at long distance, as the image quality might be affected and some important features masked. Therefore, multispectral face recognition tools and algorithms should evolve from close-up distances to long distances. To address these issues, we present in this paper a novel image fusion scheme based on the Maximum Ratio Combining algorithm to improve multispectral face recognition at long distance. The proposed method is compared with a similar super-resolution method based on the Maximum Likelihood algorithm. Simulation results show the efficiency of the proposed approach in terms of average variance of detection error.
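
    A hedged sketch of maximum-ratio-combining style fusion: each spectral band is weighted in proportion to a rough signal-to-noise estimate before summation. The SNR estimate (band mean over noise variance) and the synthetic bands are placeholders; the paper's exact weighting may differ.

    ```python
    import numpy as np

    def mrc_fuse(bands, noise_sigmas):
        """Weighted sum of equally sized bands, weights proportional to estimated SNR."""
        weights = np.array([b.mean() / (s ** 2) for b, s in zip(bands, noise_sigmas)])
        weights /= weights.sum()
        return sum(w * b for w, b in zip(weights, bands))

    rng = np.random.default_rng(0)
    clean = rng.random((64, 64))
    sigmas = (0.05, 0.2, 0.5)
    bands = [clean + rng.normal(scale=s, size=clean.shape) for s in sigmas]
    fused = mrc_fuse(bands, noise_sigmas=sigmas)
    print("mean absolute error of fused band:", round(float(np.abs(fused - clean).mean()), 4))
    ```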

  19. Eye tracking reveals a crucial role for facial motion in recognition of faces by infants.

    PubMed

    Xiao, Naiqi G; Quinn, Paul C; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-06-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces, and then their face recognition was tested with static face images. Eye-tracking methodology was used to record eye movements during the familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better their face recognition was, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development. PMID:26010387

  20. Face Recognition Is Affected by Similarity in Spatial Frequency Range to a Greater Degree Than Within-Category Object Recognition

    ERIC Educational Resources Information Center

    Collin, Charles A.; Liu, Chang Hong; Troje, Nikolaus F.; McMullen, Patricia A.; Chaudhuri, Avi

    2004-01-01

    Previous studies have suggested that face identification is more sensitive to variations in spatial frequency content than object recognition, but none have compared how sensitive the 2 processes are to variations in spatial frequency overlap (SFO). The authors tested face and object matching accuracy under varying SFO conditions. Their results…

  1. Experience moderates overlap between object and face recognition, suggesting a common ability

    PubMed Central

    Gauthier, Isabel; McGugin, Rankin W.; Richler, Jennifer J.; Herzmann, Grit; Speegle, Magen; Van Gulick, Ana E.

    2014-01-01

    Some research finds that face recognition is largely independent from the recognition of other objects; a specialized and innate ability to recognize faces could therefore have little or nothing to do with our ability to recognize objects. We propose a new framework in which recognition performance for any category is the product of domain-general ability and category-specific experience. In Experiment 1, we show that the overlap between face and object recognition depends on experience with objects. In 256 subjects we measured face recognition, object recognition for eight categories, and self-reported experience with these categories. Experience predicted neither face recognition nor object recognition but moderated their relationship: Face recognition performance is increasingly similar to object recognition performance with increasing object experience. If a subject has a lot of experience with objects and is found to perform poorly, they also prove to have a low ability with faces. In a follow-up survey, we explored the dimensions of experience with objects that may have contributed to self-reported experience in Experiment 1. Different dimensions of experience appear to be more salient for different categories, with general self-reports of expertise reflecting judgments of verbal knowledge about a category more than judgments of visual performance. The complexity of experience and current limitations in its measurement support the importance of aggregating across multiple categories. Our findings imply that both face and object recognition are supported by a common, domain-general ability expressed through experience with a category and best measured when accounting for experience. PMID:24993021

  2. Component Structure of Individual Differences in True and False Recognition of Faces

    ERIC Educational Resources Information Center

    Bartlett, James C.; Shastri, Kalyan K.; Abdi, Herve; Neville-Smith, Marsha

    2009-01-01

    Principal-component analyses of 4 face-recognition studies uncovered 2 independent components. The first component was strongly related to false-alarm errors with new faces as well as to facial "conjunctions" that recombine features of previously studied faces. The second component was strongly related to hits as well as to the conjunction/new…

  3. Fusion of Visible and Thermal Descriptors Using Genetic Algorithms for Face Recognition Systems

    PubMed Central

    Hermosilla, Gabriel; Gallardo, Francisco; Farias, Gonzalo; San Martin, Cesar

    2015-01-01

    The aim of this article is to present a new face recognition system based on the fusion of visible and thermal features obtained from the most current local matching descriptors by maximizing face recognition rates through the use of genetic algorithms. The article considers a comparison of the performance of the proposed fusion methodology against five current face recognition methods and classic fusion techniques used commonly in the literature. These were selected by considering their performance in face recognition. The five local matching methods and the proposed fusion methodology are evaluated using the standard visible/thermal database, the Equinox database, along with a new database, the PUCV-VTF, designed for visible-thermal studies in face recognition and described for the first time in this work. The latter is created considering visible and thermal image sensors with different real-world conditions, such as variations in illumination, facial expression, pose, occlusion, etc. The main conclusions of this article are that two variants of the proposed fusion methodology surpass current face recognition methods and the classic fusion techniques reported in the literature, attaining recognition rates of over 97% and 99% for the Equinox and PUCV-VTF databases, respectively. The fusion methodology is very robust to illumination and expression changes, as it combines thermal and visible information efficiently by using genetic algorithms, thus allowing it to choose optimal face areas where one spectrum is more representative than the other. PMID:26213932
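
    A toy sketch of the genetic-algorithm fusion idea: per-region weights for visible and thermal matching scores are evolved so that rank-1 accuracy on a validation set is maximised. The score matrices, population size, mutation rate, and region count are all invented for illustration and do not come from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_probe = n_gallery = 40
    n_regions = 4

    def fake_scores(strength):
        """Synthetic probe-by-gallery similarity matrix with a boosted true-match diagonal."""
        s = rng.random((n_probe, n_gallery))
        s[np.arange(n_probe), np.arange(n_gallery)] += strength
        return s

    vis = [fake_scores(st) for st in (0.6, 0.1, 0.4, 0.2)]
    thr = [fake_scores(st) for st in (0.1, 0.5, 0.2, 0.6)]

    def fitness(w):
        w = np.abs(w)
        fused = sum(w[i] * vis[i] + w[n_regions + i] * thr[i] for i in range(n_regions))
        return np.mean(fused.argmax(axis=1) == np.arange(n_probe))   # rank-1 accuracy

    pop = rng.random((30, 2 * n_regions))
    for _ in range(40):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-10:]]                # keep the 10 fittest
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = parents[rng.integers(10, size=2)]
            cut = rng.integers(1, 2 * n_regions)               # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child += rng.normal(scale=0.05, size=child.shape)  # mutation
            children.append(child)
        pop = np.vstack([parents, np.array(children)])

    best = pop[np.argmax([fitness(ind) for ind in pop])]
    print("rank-1 accuracy with evolved weights:", fitness(best))
    ```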

  4. Fusion of Visible and Thermal Descriptors Using Genetic Algorithms for Face Recognition Systems.

    PubMed

    Hermosilla, Gabriel; Gallardo, Francisco; Farias, Gonzalo; San Martin, Cesar

    2015-01-01

    The aim of this article is to present a new face recognition system based on the fusion of visible and thermal features obtained from the most current local matching descriptors by maximizing face recognition rates through the use of genetic algorithms. The article considers a comparison of the performance of the proposed fusion methodology against five current face recognition methods and classic fusion techniques used commonly in the literature. These were selected by considering their performance in face recognition. The five local matching methods and the proposed fusion methodology are evaluated using the standard visible/thermal database, the Equinox database, along with a new database, the PUCV-VTF, designed for visible-thermal studies in face recognition and described for the first time in this work. The latter is created considering visible and thermal image sensors with different real-world conditions, such as variations in illumination, facial expression, pose, occlusion, etc. The main conclusions of this article are that two variants of the proposed fusion methodology surpass current face recognition methods and the classic fusion techniques reported in the literature, attaining recognition rates of over 97% and 99% for the Equinox and PUCV-VTF databases, respectively. The fusion methodology is very robust to illumination and expression changes, as it combines thermal and visible information efficiently by using genetic algorithms, thus allowing it to choose optimal face areas where one spectrum is more representative than the other. PMID:26213932

  5. Fearful contextual expression impairs the encoding and recognition of target faces: an ERP study.

    PubMed

    Lin, Huiyan; Schulz, Claudia; Straube, Thomas

    2015-01-01

    Previous event-related potential (ERP) studies have shown that the N170 to faces is modulated by the emotion of the face and its context. However, it is unclear how the encoding of emotional target faces as reflected in the N170 is modulated by the preceding contextual facial expression when temporal onset and identity of target faces are unpredictable. In addition, no study as yet has investigated whether contextual facial expression modulates later recognition of target faces. To address these issues, participants in the present study were asked to identify target faces (fearful or neutral) that were presented after a sequence of fearful or neutral contextual faces. The number of sequential contextual faces was random and contextual and target faces were of different identities so that temporal onset and identity of target faces were unpredictable. Electroencephalography (EEG) data was recorded during the encoding phase. Subsequently, participants had to perform an unexpected old/new recognition task in which target face identities were presented in either the encoded or the non-encoded expression. ERP data showed a reduced N170 to target faces in fearful as compared to neutral context regardless of target facial expression. In the later recognition phase, recognition rates were reduced for target faces in the encoded expression when they had been encountered in fearful as compared to neutral context. The present findings suggest that fearful compared to neutral contextual faces reduce the allocation of attentional resources towards target faces, which results in limited encoding and recognition of target faces. PMID:26388751

  6. Single-Sample Face Recognition Based on Intra-Class Differences in a Variation Model

    PubMed Central

    Cai, Jun; Chen, Jing; Liang, Xing

    2015-01-01

    In this paper, a novel random facial variation modeling system for sparse representation face recognition is presented. Although Sparse Representation-Based Classification (SRC) has recently represented a breakthrough in the field of face recognition due to its good performance and robustness, there is the critical problem that SRC needs sufficiently large training samples to achieve good performance. To address this issue, we tackle the single-sample face recognition problem with intra-class differences of variation in a facial image model based on random projection and sparse representation. We present a facial variation modeling system composed only of various facial variations, and we further propose a novel facial random noise dictionary learning method that is invariant to different faces. The experimental results on the AR, Yale B, Extended Yale B, MIT and FEI databases validate that our method leads to substantial improvements, particularly in single-sample face recognition problems. PMID:25580904
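
    A minimal sketch in the spirit of sparse-representation classification with a shared variation dictionary: the probe is coded over [gallery | variation] atoms and assigned to the class whose single gallery atom, together with the shared variation component, best reconstructs it. Lasso stands in for the paper's sparse solver, the variation atoms are random rather than learned, and all data are synthetic.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    d, n_classes, n_var = 100, 10, 15
    gallery = rng.normal(size=(d, n_classes))         # one sample per class (columns)
    variation = 0.5 * rng.normal(size=(d, n_var))     # shared intra-class variation atoms
    D = np.hstack([gallery, variation])
    D /= np.linalg.norm(D, axis=0)                    # unit-norm dictionary atoms

    # Probe: the class-3 face distorted by one variation atom plus noise.
    probe = gallery[:, 3] + variation[:, 5] + 0.05 * rng.normal(size=d)

    code = Lasso(alpha=0.01, max_iter=5000).fit(D, probe).coef_
    residuals = []
    for c in range(n_classes):
        coef_c = np.zeros_like(code)
        coef_c[c] = code[c]                       # this class's gallery coefficient
        coef_c[n_classes:] = code[n_classes:]     # shared variation coefficients
        residuals.append(np.linalg.norm(probe - D @ coef_c))
    print("predicted class:", int(np.argmin(residuals)))
    ```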

  7. Recognition of personally familiar faces and functional connectivity in Alzheimer's disease.

    PubMed

    Kurth, Sophie; Moyse, Evelyne; Bahri, Mohamed A; Salmon, Eric; Bastin, Christine

    2015-06-01

    Studies have reported that patients in the severe stages of Alzheimer's disease (AD) experience difficulties recognizing their own faces in recent photographs. Two case reports of late-stage AD showed that this loss of self-face recognition was temporally graded: photographs from the remote past were recognized more easily than more recent photographs. Little is known about the neural correlates of own face recognition abilities in AD patients, while neuroimaging studies in healthy adults have related these abilities to a bilateral fronto-parieto-occipital network. In this study, two behavioral experiments (experiments 1 and 2) and one functional magnetic resonance imaging (fMRI) experiment (second part of experiment 2) were conducted to compare mild AD patients (experiment 1) and moderate AD patients (experiment 2) with healthy older participants in a recognition task involving self and familiar faces from different decades of the participants' life. In moderate AD patients, variable performance allowed us to examine correlations between scores and resting-state fMRI in order to link behavioral data to cerebral activity. At the behavioral level, the results revealed that, in mild AD, self and familiar face recognition was preserved. Moreover, mild AD patients and healthy older participants showed an inverse temporal gradient, with faster recognition of self and familiar recent photographs than self and familiar remote photographs. However, in moderate AD, both self and familiar face recognition were affected. fMRI results showed that the higher the connectivity between the dorsomedial prefrontal cortex (dMPFC) and the right superior frontal gyrus (rSFG), the lower the self and familiar face recognition scores in moderate AD patients. Given that previous studies have related the superior frontal region to control processes rather than face recognition processes, these results might reflect less segregation and more interference between brain networks in AD. In

  8. Visual scanning behavior is related to recognition performance for own- and other-age faces

    PubMed Central

    Proietti, Valentina; Macchi Cassia, Viola; dell’Amore, Francesca; Conte, Stefania; Bricolo, Emanuela

    2015-01-01

    It is well-established that our recognition ability is enhanced for faces belonging to familiar categories, such as own-race faces and own-age faces. Recent evidence suggests that, for race, the recognition bias is also accompanied by different visual scanning strategies for own- compared to other-race faces. Here, we tested the hypothesis that these differences in visual scanning patterns extend also to the comparison between own- and other-age faces and contribute to the own-age recognition advantage. Participants (young adults with limited experience with infants) were tested in an old/new recognition memory task where they encoded and subsequently recognized a series of adult and infant faces while their eye movements were recorded. Consistent with findings on the other-race bias, we found evidence of an own-age bias in recognition which was accompanied by differential scanning patterns, and consequently differential encoding strategies, for own- compared to other-age faces. Gaze patterns for own-age faces involved a more dynamic sampling of the internal features and longer viewing time on the eye region compared to the other regions of the face. This latter strategy was extensively employed during learning (vs. recognition) and was positively correlated with discriminability. These results suggest that deeply encoding the eye region is functional for recognition and that the own-age bias is evident not only in differential recognition performance, but also in the employment of different sampling strategies found to be effective for accurate recognition. PMID:26579056

  9. Solving the Border Control Problem: Evidence of Enhanced Face Matching in Individuals with Extraordinary Face Recognition Skills.

    PubMed

    Bobak, Anna Katarzyna; Dowsett, Andrew James; Bate, Sarah

    2016-01-01

    Photographic identity documents (IDs) are commonly used despite clear evidence that unfamiliar face matching is a difficult and error-prone task. The current study set out to examine the performance of seven individuals with extraordinary face recognition memory, so called "super recognisers" (SRs), on two face matching tasks resembling border control identity checks. In Experiment 1, the SRs as a group outperformed control participants on the "Glasgow Face Matching Test", and some case-by-case comparisons also reached significance. In Experiment 2, a perceptually difficult face matching task was used: the "Models Face Matching Test". Once again, SRs outperformed controls both on group and mostly in case-by-case analyses. These findings suggest that SRs are considerably better at face matching than typical perceivers, and would make proficient personnel for border control agencies. PMID:26829321

  10. Solving the Border Control Problem: Evidence of Enhanced Face Matching in Individuals with Extraordinary Face Recognition Skills

    PubMed Central

    Bobak, Anna Katarzyna; Dowsett, Andrew James; Bate, Sarah

    2016-01-01

    Photographic identity documents (IDs) are commonly used despite clear evidence that unfamiliar face matching is a difficult and error-prone task. The current study set out to examine the performance of seven individuals with extraordinary face recognition memory, so called “super recognisers” (SRs), on two face matching tasks resembling border control identity checks. In Experiment 1, the SRs as a group outperformed control participants on the “Glasgow Face Matching Test”, and some case-by-case comparisons also reached significance. In Experiment 2, a perceptually difficult face matching task was used: the “Models Face Matching Test”. Once again, SRs outperformed controls both on group and mostly in case-by-case analyses. These findings suggest that SRs are considerably better at face matching than typical perceivers, and would make proficient personnel for border control agencies. PMID:26829321

  11. Emotional recognition from face, voice, and music in dementia of the Alzheimer type.

    PubMed

    Drapeau, Joanie; Gosselin, Nathalie; Gagnon, Lise; Peretz, Isabelle; Lorrain, Dominique

    2009-07-01

    Persons with dementia of the Alzheimer type (DAT) are impaired in recognizing emotions from face and voice. Yet clinical practitioners use these mediums to communicate with DAT patients. Music is also used in clinical practice, but little is known about emotional processing from music in DAT. This study aims to assess emotional recognition in mild DAT. Seven patients with DAT and 16 healthy elderly adults were given three tasks of emotional recognition for face, prosody, and music. DAT participants were only impaired in the emotional recognition from the face. These preliminary results suggest that dynamic auditory emotions are preserved in DAT. PMID:19673804

  12. Locality Constrained Joint Dynamic Sparse Representation for Local Matching Based Face Recognition

    PubMed Central

    Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun

    2014-01-01

    Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will decrease the performances of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards the local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC. PMID:25419662

  13. Description and recognition of faces from 3D data

    NASA Astrophysics Data System (ADS)

    Coombes, Anne M.; Richards, Robin; Linney, Alfred D.; Bruce, Vicki; Fright, Rick

    1992-12-01

    A method based on differential geometry is presented for mathematically describing the shape of the facial surface. Three-dimensional data for the face are collected by optical surface scanning. The method allows the segmentation of the face into regions of a particular `surface type,' according to the surface curvature. Eight different surface types are produced, all of which have perceptually meaningful interpretations. The correspondence of the surface type regions to the facial features is easily visualized, allowing a qualitative assessment of the face. A quantitative description of the face in terms of the surface type regions can be produced, and the variation of the description between faces is demonstrated. A set of optical surface scans can be registered together and averaged to produce an average male and average female face. Thus an assessment of how individuals vary from the average can be made, as well as a general statement about the differences between male and female faces. This method will enable an investigation of how reliably faces can be individuated by their surface shape which, if feasible, may be the basis of an automatic system for recognizing faces. It also has applications in physical anthropology, for classification of the face; in facial reconstructive surgery, to quantify the changes in a face altered by reconstructive surgery and growth; and in visual perception, to assess the recognizability of faces. Examples of some of these applications are presented.

  14. Improving Preschoolers' Recognition Memory for Faces with Orienting Information.

    ERIC Educational Resources Information Center

    Montepare, Joann M.

    To determine whether preschool children's memory for unfamiliar faces could be facilitated by giving them orienting information about faces, 4- and 5-year-old subjects were told that they were going to play a guessing game in which they would be looking at faces and guessing which ones they had seen before. In study 1, 6 boys and 6 girls within…

  15. Separability oriented fusion of LBP and CS-LDP for infrared face recognition

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Liu, Guodong

    2015-10-01

    Due to the low resolution of infrared face images, local texture features are favored for infrared face feature extraction. To extract rich facial texture features, infrared face recognition based on the local binary pattern (LBP) and the center-symmetric local derivative pattern (CS-LDP) is proposed. Firstly, LBP is utilized to extract first-order texture from the original infrared face image; secondly, second-order features are extracted by CS-LDP. Finally, an adaptive weighted fusion algorithm based on a separability discriminant criterion is proposed to obtain the final recognition features. Experimental results on our infrared face database demonstrate that separability-oriented fusion of LBP and CS-LDP contributes complementary discriminant ability, which can improve the performance of infrared face recognition.
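
    The separability-driven weighting can be sketched, under loose assumptions, as a Fisher-style ratio of between-class to within-class scatter computed per descriptor, with the two histograms weighted accordingly before concatenation. The criterion, the feature dimensions, and the synthetic data are illustrative and may differ from the paper's definition.

    ```python
    import numpy as np

    def separability(features, labels):
        """Between-class over within-class scatter, summed over feature dimensions."""
        overall = features.mean(axis=0)
        between = within = 0.0
        for c in np.unique(labels):
            grp = features[labels == c]
            between += len(grp) * np.sum((grp.mean(axis=0) - overall) ** 2)
            within += np.sum((grp - grp.mean(axis=0)) ** 2)
        return between / (within + 1e-12)

    rng = np.random.default_rng(0)
    labels = np.repeat(np.arange(5), 20)
    lbp_feats = rng.normal(size=(100, 59)) + 0.8 * labels[:, None]    # more discriminative
    csldp_feats = rng.normal(size=(100, 59)) + 0.2 * labels[:, None]  # less discriminative

    w = np.array([separability(lbp_feats, labels), separability(csldp_feats, labels)])
    w /= w.sum()
    fused = np.hstack([w[0] * lbp_feats, w[1] * csldp_feats])
    print("fusion weights (LBP, CS-LDP):", w.round(3))
    ```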

  16. Coupled bias-variance tradeoff for cross-pose face recognition.

    PubMed

    Li, Annan; Shan, Shiguang; Gao, Wen

    2012-01-01

    Subspace-based face representation can be viewed as a regression problem. From this viewpoint, we first revisit the problem of recognizing faces across pose differences, which is a bottleneck in face recognition. We then propose a new approach for cross-pose face recognition using a regressor with a coupled bias-variance tradeoff. We found that striking a coupled balance between bias and variance in regression for different poses can improve the regressor-based cross-pose face representation, i.e., the regressor can be more stable against a pose difference. Based on this idea, ridge regression and lasso regression are explored. Experimental results on the CMU PIE, FERET, and Multi-PIE face databases show that the proposed bias-variance tradeoff can achieve considerable reinforcement in recognition performance. PMID:21724510

  17. Impairments in Monkey and Human Face Recognition in 2-Year-Old Toddlers with Autism Spectrum Disorder and Developmental Delay

    ERIC Educational Resources Information Center

    Chawarska, Katarzyna; Volkmar, Fred

    2007-01-01

    Face recognition impairments are well documented in older children with Autism Spectrum Disorders (ASD); however, the developmental course of the deficit is not clear. This study investigates the progressive specialization of face recognition skills in children with and without ASD. Experiment 1 examines human and monkey face recognition in…

  18. Effects of exposure to facial expression variation in face learning and recognition.

    PubMed

    Liu, Chang Hong; Chen, Wenfeng; Ward, James

    2015-11-01

    Facial expression is a major source of image variation in face images. Linking numerous expressions to the same face can be a huge challenge for face learning and recognition. It remains largely unknown what level of exposure to this image variation is critical for expression-invariant face recognition. We examined this issue in a recognition memory task, where the number of facial expressions of each face being exposed during a training session was manipulated. Faces were either trained with multiple expressions or a single expression, and they were later tested in either the same or different expressions. We found that recognition performance after learning three emotional expressions had no improvement over learning a single emotional expression (Experiments 1 and 2). However, learning three emotional expressions improved recognition compared to learning a single neutral expression (Experiment 3). These findings reveal both the limitation and the benefit of multiple exposures to variations of emotional expression in achieving expression-invariant face recognition. The transfer of expression training to a new type of expression is likely to depend on a relatively extensive level of training and a certain degree of variation across the types of expressions. PMID:25398479

  19. KD-tree based clustering algorithm for fast face recognition on large-scale data

    NASA Astrophysics Data System (ADS)

    Wang, Yuanyuan; Lin, Yaping; Yang, Junfeng

    2015-07-01

    This paper proposes an acceleration method for large-scale face recognition systems. When dealing with a large-scale database, face recognition is time-consuming. In order to tackle this problem, we employ the k-means clustering algorithm to cluster the face data. Specifically, the data in each cluster are stored in the form of a kd-tree, and face feature matching is conducted with kd-tree based nearest neighborhood search. Experiments on the CAS-PEAL and a self-collected database show the effectiveness of our proposed method.
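
    A sketch of the coarse-to-fine search described above, assuming scikit-learn: k-means partitions the gallery features, each cluster keeps its own kd-tree, and a probe is matched only inside its nearest cluster. The feature vectors are synthetic stand-ins for real face features.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neighbors import KDTree

    rng = np.random.default_rng(0)
    gallery = rng.normal(size=(5000, 64))      # 5000 gallery face feature vectors
    ids = np.arange(len(gallery))

    kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(gallery)
    trees = {c: (KDTree(gallery[kmeans.labels_ == c]), ids[kmeans.labels_ == c])
             for c in range(kmeans.n_clusters)}

    def match(probe):
        """Find the nearest gallery face, searching only the probe's cluster."""
        c = int(kmeans.predict(probe.reshape(1, -1))[0])
        tree, cluster_ids = trees[c]
        dist, idx = tree.query(probe.reshape(1, -1), k=1)
        return int(cluster_ids[idx[0][0]]), float(dist[0][0])

    print(match(gallery[123] + 0.01 * rng.normal(size=64)))
    ```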

  20. Partial least squares regression on DCT domain for infrared face recognition

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua

    2014-09-01

    Compact and discriminative feature extraction is a challenging task for infrared face recognition. In this paper, we propose an infrared face recognition method using Partial Least Squares (PLS) regression on Discrete Cosine Transform (DCT) coefficients. With its strong ability for data de-correlation and energy compaction, the DCT is used to obtain compact features from infrared faces. To dig out discriminative information in the DCT coefficients, a class-specific one-to-rest Partial Least Squares (PLS) classifier is learned for accurate classification. The infrared data were collected with an infrared camera, Thermo Vision A40, supplied by FLIR Systems Inc. The experimental results show that the recognition rate of the proposed algorithm can reach 95.8%, outperforming state-of-the-art infrared face recognition methods based on Linear Discriminant Analysis (LDA) and DCT.
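
    A simplified sketch of the pipeline, assuming SciPy and scikit-learn: a low-frequency block of 2-D DCT coefficients is fed to partial least squares regression against one-hot class labels, with an argmax prediction standing in for the paper's one-to-rest PLS classifiers. The images, block size, and number of components are illustrative.

    ```python
    import numpy as np
    from scipy.fft import dctn
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    n_classes, per_class, size = 5, 8, 32

    def dct_features(img, keep=8):
        """Keep the top-left (low-frequency) block of the 2-D DCT as the feature vector."""
        return dctn(img, norm="ortho")[:keep, :keep].ravel()

    prototypes = rng.random((n_classes, size, size))
    X, y = [], []
    for c, proto in enumerate(prototypes):
        for _ in range(per_class):
            X.append(dct_features(proto + 0.1 * rng.normal(size=proto.shape)))
            y.append(c)
    X, y = np.array(X), np.array(y)
    Y = np.eye(n_classes)[y]                   # one-hot regression targets

    pls = PLSRegression(n_components=5).fit(X, Y)
    pred = pls.predict(X).argmax(axis=1)
    print("training accuracy:", (pred == y).mean())
    ```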

  1. Color face recognition based on steerable pyramid transform and extreme learning machines.

    PubMed

    Uçar, Ayşegül

    2014-01-01

    This paper presents a novel color face recognition algorithm that fuses color and local information. The proposed algorithm fuses multiple features derived from different color spaces. Multi-orientation and multi-scale information relating to the color face features is extracted by applying the Steerable Pyramid Transform (SPT) to local face regions. In this paper, three new hybrid color spaces, YSCr, ZnSCr, and BnSCr, are first constructed using the Cb and Cr component images of the YCbCr color space, the S component of the HSV color space, and the Zn and Bn components of the normalized XYZ color space. Secondly, the color component face images are partitioned into local patches. Thirdly, SPT is applied to the local face regions and some statistical features are extracted. Fourthly, all features are fused according to a decision fusion framework and combinations of Extreme Learning Machine classifiers are applied to achieve fast and accurate color face recognition. The experiments show that the proposed Local Color Steerable Pyramid Transform (LCSPT) face recognition algorithm substantially improves face recognition performance by using the new color spaces compared to conventional and some hybrid ones. Furthermore, it achieves faster recognition compared with state-of-the-art studies. PMID:24558319

  2. Image-Invariant Responses in Face-Selective Regions Do Not Explain the Perceptual Advantage for Familiar Face Recognition

    PubMed Central

    Davies-Thompson, Jodie; Newling, Katherine

    2013-01-01

    The ability to recognize familiar faces across different viewing conditions contrasts with the inherent difficulty in the perception of unfamiliar faces across similar image manipulations. It is widely believed that this difference in perception and recognition is based on the neural representation for familiar faces being less sensitive to changes in the image than it is for unfamiliar faces. Here, we used a functional magnetic resonance adaptation paradigm to investigate image invariance in face-selective regions of the human brain. We found clear evidence for a degree of image-invariant adaptation to facial identity in face-selective regions, such as the fusiform face area. However, contrary to the predictions of models of face processing, comparable levels of image invariance were evident for both familiar and unfamiliar faces. This suggests that the marked differences in the perception of familiar and unfamiliar faces may not depend on differences in the way multiple images are represented in core face-selective regions of the human brain. PMID:22345357

  3. Confidence-Accuracy Calibration in Absolute and Relative Face Recognition Judgments

    ERIC Educational Resources Information Center

    Weber, Nathan; Brewer, Neil

    2004-01-01

    Confidence-accuracy (CA) calibration was examined for absolute and relative face recognition judgments as well as for recognition judgments from groups of stimuli presented simultaneously or sequentially (i.e., simultaneous or sequential mini-lineups). When the effect of difficulty was controlled, absolute and relative judgments produced…

  4. Face Pose Recognition Based on Monocular Digital Imagery and Stereo-Based Estimation of its Precision

    NASA Astrophysics Data System (ADS)

    Gorbatsevich, V.; Vizilter, Yu.; Knyaz, V.; Zheltov, S.

    2014-06-01

    A technique for automated face detection and pose estimation using a single image is developed. The algorithm includes: face detection, facial feature localization, face/background segmentation, face pose estimation, and image transformation to a frontal view. Automatic face/background segmentation is performed by an original graph-cut technique based on the detected feature points. The precision of face orientation estimation based on monocular digital imagery is addressed. The approach for precision estimation is based on a comparison of synthesized facial 2D images and a scanned face 3D model. Software for modelling and measurement is developed, and a special system for non-contact measurements is created. The required set of 3D real face models and colour facial textures is obtained using this system. The precision estimation results demonstrate that the precision of face pose estimation is sufficient for subsequent successful face recognition.

  5. Principal patterns of fractional-order differential gradients for face recognition

    NASA Astrophysics Data System (ADS)

    Yu, Lei; Cao, Qi; Zhao, Anping

    2015-01-01

    We investigate the ability of fractional-order differentiation (FD) to represent facial texture and present a local descriptor, called the principal patterns of fractional-order differential gradients (PPFDGs), for face recognition. In PPFDG, multiple FD gradient patterns of a face image are obtained using multi-orientation FD masks. As a result, each pixel of the face image can be represented as a high-dimensional gradient vector. Then, by applying principal component analysis to the gradient vectors over the centered neighborhood of each pixel, we capture the principal gradient patterns and meanwhile compute the corresponding orientation patterns, from which oriented gradient magnitudes are computed. Histogram features are finally extracted from these oriented gradient magnitude patterns as the face representation using local binary patterns. Experimental results on the FERET (Face Recognition Technology), AR (A.M. Martinez and R. Benavente), Extended Yale B, and Labeled Faces in the Wild face datasets validate the effectiveness of the proposed method.
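
    One ingredient named above, a fractional-order differential mask, can be sketched from the Grünwald-Letnikov coefficients and applied along rows and columns to obtain gradient-like responses. The order, mask length, and test image are illustrative, and this sketch omits the multi-orientation masks and the later PCA and LBP stages.

    ```python
    import numpy as np
    from scipy.ndimage import convolve1d

    def gl_mask(v=0.5, length=5):
        """Grünwald-Letnikov coefficients (-1)^k * C(v, k) for a fractional derivative of order v."""
        coeffs = [1.0]
        for k in range(1, length):
            coeffs.append(coeffs[-1] * (k - 1 - v) / k)
        return np.array(coeffs)

    img = np.random.default_rng(0).random((64, 64))
    mask = gl_mask()
    gx = convolve1d(img, mask, axis=1)      # horizontal fractional-order gradient
    gy = convolve1d(img, mask, axis=0)      # vertical fractional-order gradient
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)
    print(magnitude.shape, orientation.shape)
    ```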

  6. Maximized Posteriori Attributes Selection from Facial Salient Landmarks for Face Recognition

    NASA Astrophysics Data System (ADS)

    Gupta, Phalguni; Kisku, Dakshina Ranjan; Sing, Jamuna Kanta; Tistarelli, Massimo

    This paper presents a robust and dynamic face recognition technique based on the extraction and matching of probabilistic graphs drawn on SIFT features from independent face areas. The face matching strategy is based on matching individual salient facial graphs characterized by SIFT features connected to facial landmarks such as the eyes and the mouth. In order to reduce face matching errors, Dempster-Shafer decision theory is applied to fuse the individual matching scores obtained from each pair of salient facial features. The proposed algorithm is evaluated on the ORL and IITK face databases. The experimental results demonstrate the effectiveness and potential of the proposed face recognition technique, also in the case of partially occluded faces.
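
    The fusion step can be pictured with a minimal Dempster-Shafer combination over the frame {match, non-match}. The conversion of graph-matching scores into belief masses below is an assumption made for illustration; it is not the mapping used in the paper.

      # Sketch: fuse two matching scores with Dempster's rule of combination
      # over the frame {match, nonmatch}. The score-to-mass mapping is an
      # illustrative assumption, not the paper's calibrated conversion.
      def score_to_mass(score, discount=0.2):
          """Map a similarity score in [0, 1] to masses (match, nonmatch, uncertain)."""
          return {"match": (1 - discount) * score,
                  "nonmatch": (1 - discount) * (1 - score),
                  "theta": discount}

      def dempster(m1, m2):
          """Combine two mass functions on {match, nonmatch} with Dempster's rule."""
          conflict = m1["match"] * m2["nonmatch"] + m1["nonmatch"] * m2["match"]
          norm = 1.0 - conflict
          match = (m1["match"] * m2["match"] + m1["match"] * m2["theta"]
                   + m1["theta"] * m2["match"]) / norm
          nonmatch = (m1["nonmatch"] * m2["nonmatch"] + m1["nonmatch"] * m2["theta"]
                      + m1["theta"] * m2["nonmatch"]) / norm
          theta = m1["theta"] * m2["theta"] / norm
          return {"match": match, "nonmatch": nonmatch, "theta": theta}

      eye_score, mouth_score = 0.8, 0.6          # per-landmark graph matching scores
      fused = dempster(score_to_mass(eye_score), score_to_mass(mouth_score))
      print(fused)                               # fused belief in 'match' exceeds either input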

  7. Super-resolution method for face recognition using nonlinear mappings on coherent features.

    PubMed

    Huang, Hua; He, Huiting

    2011-01-01

    The low resolution (LR) of face images significantly decreases the performance of face recognition. To address this problem, we present a super-resolution method that uses nonlinear mappings to infer coherent features that favor higher recognition rates with nearest neighbor (NN) classifiers for a single LR face image. Canonical correlation analysis is applied to establish the coherent subspaces between the principal component analysis (PCA) based features of high-resolution (HR) and LR face images. Then, a nonlinear mapping between HR/LR features can be built by radial basis functions (RBFs) with lower regression errors in the coherent feature space than in the PCA feature space. Thus, we can compute super-resolved coherent features corresponding to an input LR image efficiently and accurately from the trained RBF model. Face identity can then be obtained by feeding these super-resolved features to a simple NN classifier. Extensive experiments on the FERET (Facial Recognition Technology), UMIST (University of Manchester Institute of Science and Technology), and ORL (Olivetti Research Laboratory) databases show that the proposed method outperforms state-of-the-art face recognition algorithms for single LR images in terms of both recognition rate and robustness to facial variations of pose and expression. PMID:21062679
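
    The pipeline can be sketched with scikit-learn on synthetic data, assuming a kernel ridge regressor with an RBF kernel as a stand-in for the RBF mapping; the dimensions and regularization values are illustrative, not those of the paper.

      # Sketch: PCA features for HR and LR images, CCA coherent subspaces, an RBF
      # regression from LR to HR coherent features, and 1-NN recognition.
      # Data are random stand-ins; all dimensions are illustrative.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cross_decomposition import CCA
      from sklearn.kernel_ridge import KernelRidge
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(0)
      n, hr_dim, lr_dim = 200, 1024, 64                  # e.g. 32x32 HR faces, 8x8 LR faces
      hr = rng.normal(size=(n, hr_dim))
      lr = hr[:, :lr_dim] + 0.1 * rng.normal(size=(n, lr_dim))   # toy "degradation"
      labels = rng.integers(0, 20, size=n)

      pca_hr, pca_lr = PCA(n_components=30).fit(hr), PCA(n_components=30).fit(lr)
      hr_p, lr_p = pca_hr.transform(hr), pca_lr.transform(lr)

      cca = CCA(n_components=15).fit(lr_p, hr_p)         # LR side is X, HR side is Y
      lr_c, hr_c = cca.transform(lr_p, hr_p)             # coherent (canonical) features

      rbf = KernelRidge(kernel="rbf", alpha=1e-2, gamma=1e-2).fit(lr_c, hr_c)
      knn = KNeighborsClassifier(n_neighbors=1).fit(hr_c, labels)

      probe_c = cca.transform(pca_lr.transform(lr[:1]))  # coherent feature of one LR probe
      print(knn.predict(rbf.predict(probe_c)))           # super-resolve, then 1-NN identity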

  8. Capturing specific abilities as a window into human individuality: The example of face recognition

    PubMed Central

    Wilmer, Jeremy B.; Germine, Laura; Chabris, Christopher F.; Chatterjee, Garga; Gerbasi, Margaret; Nakayama, Ken

    2013-01-01

    Proper characterization of each individual's unique pattern of strengths and weaknesses requires good measures of diverse abilities. Here, we advocate combining our growing understanding of neural and cognitive mechanisms with modern psychometric methods in a renewed effort to capture human individuality through a consideration of specific abilities. We articulate five criteria for the isolation and measurement of specific abilities, then apply these criteria to face recognition. We cleanly dissociate face recognition from more general visual and verbal recognition. This dissociation stretches across ability as well as disability, suggesting that specific developmental face recognition deficits are a special case of a broader specificity that spans the entire spectrum of human face recognition performance. Item-by-item results from 1,471 web-tested participants, included as supplementary information, fuel item analyses, validation, norming, and item response theory (IRT) analyses of our three tests: (a) the widely used Cambridge Face Memory Test (CFMT); (b) an Abstract Art Memory Test (AAMT), and (c) a Verbal Paired-Associates Memory Test (VPMT). The availability of this data set provides a solid foundation for interpreting future scores on these tests. We argue that the allied fields of experimental psychology, cognitive neuroscience, and vision science could fuel the discovery of additional specific abilities to add to face recognition, thereby providing new perspectives on human individuality. PMID:23428079

  9. Face recognition using artificial neural network group-based adaptive tolerance (GAT) trees.

    PubMed

    Zhang, M; Fulcher, J

    1996-01-01

    Recent artificial neural network research has focused on simple models, but such models have not been very successful in describing complex systems (such as face recognition). This paper introduces the artificial neural network group-based adaptive tolerance (GAT) tree model for translation-invariant face recognition, suitable for use in an airport security system. GAT trees use a two-stage divide-and-conquer tree-type approach. The first stage determines general properties of the input, such as whether the facial image contains glasses or a beard. The second stage identifies the individual. Face perception classification, detection of front faces with glasses and/or beards, and face recognition results using GAT trees under laboratory conditions are presented. We conclude that the neural network group-based model offers significant improvement over conventional neural network trees for this task. PMID:18263454

  10. Recognition Memory Measures Yield Disproportionate Effects of Aging on Learning Face-Name Associations

    PubMed Central

    James, Lori E.; Fogler, Kethera A.; Tauber, Sarah K.

    2008-01-01

    No previous research has tested whether the specific age-related deficit in learning face-name associations that has been identified using recall tasks also occurs for recognition memory measures. Young and older participants saw pictures of unfamiliar people with a name and an occupation for each person, and were tested on a matching (in Experiment 1) or multiple-choice (in Experiment 2) recognition memory test. For both recognition measures, the pattern of effects was the same as that obtained using a recall measure: more face-occupation associations were remembered than face-name associations, young adults remembered more associated information than older adults overall, and older adults had disproportionately poorer memory for face-name associations. Findings implicate age-related difficulty in forming and retrieving the association between the face and the name as the primary cause of obtained deficits in previous name learning studies. PMID:18808254

  11. Correlations between psychometric schizotypy, scan path length, fixations on the eyes and face recognition.

    PubMed

    Hills, Peter J; Eaton, Elizabeth; Pake, J Michael

    2016-01-01

    Psychometric schizotypy in the general population correlates negatively with face recognition accuracy, potentially due to deficits in inhibition, social withdrawal, or eye-movement abnormalities. We report an eye-tracking face recognition study in which participants were required to match one of two faces (target and distractor) to a cue face presented immediately before. All faces could be presented with or without paraphernalia (e.g., hats, glasses, facial hair). Results showed that paraphernalia distracted participants, and that the most distracting condition was when the cue and the distractor face had paraphernalia but the target face did not, while there was no correlation between distractibility and participants' scores on the Schizotypal Personality Questionnaire (SPQ). Schizotypy was negatively correlated with proportion of time fixating on the eyes and positively correlated with not fixating on a feature. It was negatively correlated with scan path length and this variable correlated with face recognition accuracy. These results are interpreted as schizotypal traits being associated with a restricted scan path leading to face recognition deficits. PMID:25835241

  12. Using Computerized Games to Teach Face Recognition Skills to Children with Autism Spectrum Disorder: The "Let's Face It!" Program

    ERIC Educational Resources Information Center

    Tanaka, James W.; Wolf, Julie M.; Klaiman, Cheryl; Koenig, Kathleen; Cockburn, Jeffrey; Herlihy, Lauren; Brown, Carla; Stahl, Sherin; Kaiser, Martha D.; Schultz, Robert T.

    2010-01-01

    Background: An emerging body of evidence indicates that relative to typically developing children, children with autism are selectively impaired in their ability to recognize facial identity. A critical question is whether face recognition skills can be enhanced through a direct training intervention. Methods: In a randomized clinical trial,…

  13. Effect of Partial Occlusion on Newborns' Face Preference and Recognition

    ERIC Educational Resources Information Center

    Gava, Lucia; Valenza, Eloisa; Turati, Chiara; de Schonen, Scania

    2008-01-01

    Many studies have shown that newborns prefer (e.g. Goren, Sarty & Wu, 1975; Valenza, Simion, Macchi Cassia & Umilta, 1996) and recognize (e.g. Bushnell, Say & Mullin, 1989; Pascalis & de Schonen, 1994) faces. However, it is not known whether, at birth, faces are still preferred and recognized when some of their parts are not visible because…

  14. Atypical Development of Face and Greeble Recognition in Autism

    ERIC Educational Resources Information Center

    Scherf, K. Suzanne; Behrmann, Marlene; Minshew, Nancy; Luna, Beatriz

    2008-01-01

    Background: Impaired face processing is a widely documented deficit in autism. Although the origin of this deficit is unclear, several groups have suggested that a lack of perceptual expertise is contributory. We investigated whether individuals with autism develop expertise in visuoperceptual processing of faces and whether any deficiency in such…

  15. A new face of sleep: The impact of post-learning sleep on recognition memory for face-name associations.

    PubMed

    Maurer, Leonie; Zitting, Kirsi-Marja; Elliott, Kieran; Czeisler, Charles A; Ronda, Joseph M; Duffy, Jeanne F

    2015-12-01

    Sleep has been demonstrated to improve consolidation of many types of new memories. However, few prior studies have examined how sleep impacts learning of face-name associations. The recognition of a new face along with the associated name is an important human cognitive skill. Here we investigated whether post-presentation sleep impacts recognition memory of new face-name associations in healthy adults. Fourteen participants were tested twice. Each time, they were presented 20 photos of faces with a corresponding name. Twelve hours later, they were shown each face twice, once with the correct and once with an incorrect name, and asked if each face-name combination was correct and to rate their confidence. In one condition the 12-h interval between presentation and recall included an 8-h nighttime sleep opportunity ("Sleep"), while in the other condition they remained awake ("Wake"). There were more correct and highly confident correct responses when the interval between presentation and recall included a sleep opportunity, although improvement between the "Wake" and "Sleep" conditions was not related to duration of sleep or any sleep stage. These data suggest that a nighttime sleep opportunity improves the ability to correctly recognize face-name associations. Further studies investigating the mechanism of this improvement are important, as this finding has implications for individuals with sleep disturbances and/or memory impairments. PMID:26549626

  16. Face Processing and Facial Emotion Recognition in Adults with Down Syndrome

    ERIC Educational Resources Information Center

    Barisnikov, Koviljka; Hippolyte, Loyse; Van der Linden, Martial

    2008-01-01

    Face processing and facial expression recognition was investigated in 17 adults with Down syndrome, and results were compared with those of a child control group matched for receptive vocabulary. On the tasks involving faces without emotional content, the adults with Down syndrome performed significantly worse than did the controls. However, their…

  17. Face Recognition in Children with a Pervasive Developmental Disorder Not Otherwise Specified.

    ERIC Educational Resources Information Center

    Serra, M.; Althaus, M.; de Sonneville, L. M. J.; Stant, A. D.; Jackson, A. E.; Minderaa, R. B.

    2003-01-01

    A study investigated the accuracy and speed of face recognition in 26 children (ages 7-10) with Pervasive Developmental Disorder Not Otherwise Specified. Subjects needed an amount of time to recognize the faces that almost equaled the time they needed to recognize abstract patterns that were difficult to distinguish. (Contains references.)…

  18. The Simon Then Garfunkel Effect: Semantic Priming, Sensitivity, and the Modularity of Face Recognition.

    ERIC Educational Resources Information Center

    Rhodes, Gillian; Tremewan, Tanya

    1993-01-01

    In 5 experiments involving 306 adults, the mechanisms underlying semantic priming in the domain of face recognition, particularly famous faces, and the plausibility of modularity were assessed. Results suggest that sensitivity changes that occur when direct associative connections within the module can be ruled out pose a problem for modularity.…

  19. Brief Report: Face-Specific Recognition Deficits in Young Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Bradshaw, Jessica; Shic, Frederick; Chawarska, Katarzyna

    2011-01-01

    This study used eyetracking to investigate the ability of young children with autism spectrum disorders (ASD) to recognize social (faces) and nonsocial (simple objects and complex block patterns) stimuli using the visual paired comparison (VPC) paradigm. Typically developing (TD) children showed evidence for recognition of faces and simple…

  20. Deficits in Other-Race Face Recognition: No Evidence for Encoding-Based Effects

    PubMed Central

    Papesh, Megan H.; Goldinger, Stephen D.

    2010-01-01

    The other-race effect (ORE) in face recognition is typically observed in tasks which require long-term memory. Several studies, however, have found the effect early in face encoding (Lindsay, Jack, & Christian, 1991; Walker & Hewstone, 2006). In 6 experiments, with over 300 participants, we found no evidence that the recognition deficit associated with the ORE reflects deficits in immediate encoding. In Experiment 1, with a study-to-test retention interval of 4 min, participants were better able to recognise White faces, relative to Asian faces. Experiment 1 also validated the use of computer-generated faces in subsequent experiments. In Experiments 2 through 4, performance was virtually identical to Asian and White faces in match-to-sample, immediate recognition. In Experiment 5, decreasing target-foil similarity and disrupting the retention interval with trivia questions elicited a re-emergence of the ORE. Experiments 6A and 6B replicated this effect, and showed that memory for Asian faces was particularly susceptible to distraction; White faces were recognised equally well, regardless of trivia questions during the retention interval. The recognition deficit in the ORE apparently emerges from retention or retrieval deficits, not differences in immediate perceptual processing. PMID:20025384

  1. Brief Report: Developing Spatial Frequency Biases for Face Recognition in Autism and Williams Syndrome

    ERIC Educational Resources Information Center

    Leonard, Hayley C.; Annaz, Dagmara; Karmiloff-Smith, Annette; Johnson, Mark H.

    2011-01-01

    The current study investigated whether contrasting face recognition abilities in autism and Williams syndrome could be explained by different spatial frequency biases over developmental time. Typically-developing children and groups with Williams syndrome and autism were asked to recognise faces in which low, middle and high spatial frequency…

  2. An Own-Race Advantage for Components as Well as Configurations in Face Recognition

    ERIC Educational Resources Information Center

    Hayward, William G.; Rhodes, Gillian; Schwaninger, Adrian

    2008-01-01

    The own-race advantage in face recognition has been hypothesized as being due to a superiority in the processing of configural information for own-race faces. Here we examined the contributions of both configural and component processing to the own-race advantage. We recruited 48 Caucasian participants in Australia and 48 Chinese participants in…

  3. Deficits in other-race face recognition: no evidence for encoding-based effects.

    PubMed

    Papesh, Megan H; Goldinger, Stephen D

    2009-12-01

    The other-race effect (ORE) in face recognition is typically observed in tasks which require long-term memory. Several studies, however, have found the effect early in face encoding (Lindsay, Jack, & Christian, 1991; Walker & Hewstone, 2006). In 6 experiments, with over 300 participants, we found no evidence that the recognition deficit associated with the ORE reflects deficits in immediate encoding. In Experiment 1, with a study-to-test retention interval of 4 min, participants were better able to recognise White faces, relative to Asian faces. Experiment 1 also validated the use of computer-generated faces in subsequent experiments. In Experiments 2 through 4, performance was virtually identical to Asian and White faces in match-to-sample, immediate recognition. In Experiment 5, decreasing target-foil similarity and disrupting the retention interval with trivia questions elicited a re-emergence of the ORE. Experiments 6A and 6B replicated this effect, and showed that memory for Asian faces was particularly susceptible to distraction; White faces were recognised equally well, regardless of trivia questions during the retention interval. The recognition deficit in the ORE apparently emerges from retention or retrieval deficits, not differences in immediate perceptual processing. PMID:20025384

  4. Verbal Overshadowing and Face Recognition in Young and Old Adults

    ERIC Educational Resources Information Center

    Kinlen, Thomas J.; Adams-Price, Carolyn E.; Henley, Tracy B.

    2007-01-01

    Verbal overshadowing has been found to disrupt recognition accuracy when hard-to-describe stimuli are used. The current study replicates previous research on verbal overshadowing with younger people and extends this research into an older population to examine the possible link between verbal expertise and verbal overshadowing. It was hypothesized…

  5. Determining optimally orthogonal discriminant vectors in DCT domain for multiscale-based face recognition

    NASA Astrophysics Data System (ADS)

    Niu, Yanmin; Wang, Xuchu

    2011-02-01

    This paper presents a new face recognition method that extracts multiple discriminant features based on a multiscale image enhancement technique and kernel-based orthogonal feature extraction, with several interesting characteristics. First, it can extract more discriminative multiscale face features than traditional pixel-based or Gabor-based features. Second, it can effectively deal with the small sample size problem as well as the feature correlation problem by using eigenvalue decomposition on scatter matrices. Finally, the extractor handles nonlinearity efficiently by using the kernel trick. Multiple recognition experiments on an open face dataset, with comparison to several related methods, show the effectiveness and superiority of the proposed method.

  6. The design and implementation of effective face detection and recognition system

    NASA Astrophysics Data System (ADS)

    Sun, Yigui

    2011-06-01

    In the paper, a face detection and recognition system (FDRS) based on video sequences and still images is proposed. It uses the AdaBoost algorithm to detect human faces in an image or frame and adopts the Discrete Cosine Transform (DCT) for feature extraction and recognition of face images. The related technologies are first outlined. Then, the system requirements and UML use case diagram are described. In addition, the paper mainly introduces the design solution and key procedures. The FDRS source code is implemented in VC++ with the Standard Template Library (STL) and the Intel Open Source Computer Vision Library (OpenCV).
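
    A minimal OpenCV sketch of the two core steps (Viola-Jones AdaBoost cascade detection followed by DCT feature extraction on the detected face) is shown below. The cascade file, crop size, and number of retained coefficients are illustrative choices, and the original system is written in C++ rather than Python.

      # Sketch: AdaBoost (Haar cascade) face detection + 2-D DCT features.
      # Cascade path, crop size and coefficient count are illustrative choices.
      import cv2
      import numpy as np

      cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

      def dct_features(gray_face, size=64, keep=8):
          """Resize a face crop, apply a 2-D DCT and keep the top-left keep x keep block."""
          face = cv2.resize(gray_face, (size, size)).astype(np.float32)
          coeffs = cv2.dct(face)
          return coeffs[:keep, :keep].flatten()

      # Stand-in frame (replace with a real image, e.g. gray = cv2.imread(path, 0)).
      gray = (np.random.rand(480, 640) * 255).astype(np.uint8)
      for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
          feat = dct_features(gray[y:y + h, x:x + w])
          print("face at", (x, y, w, h), "feature length", feat.size)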

  7. Blurred face recognition by fusing blur-invariant texture and structure features

    NASA Astrophysics Data System (ADS)

    Zhu, Mengyu; Cao, Zhiguo; Xiao, Yang; Xie, Xiaokang

    2015-10-01

    Blurred face recognition remains a challenging task, but one with wide applications. Image blur can largely affect recognition performance. The local phase quantization (LPQ) descriptor was proposed to extract blur-invariant texture information; it has been used for blurred face recognition and achieved good performance. However, LPQ considers only phase-based blur-invariant texture information, which is not sufficient. In addition, LPQ is extracted holistically, which cannot fully exploit its discriminative power on local spatial properties. In this paper, we propose a novel method for blurred face recognition. Texture and structure blur-invariant features are extracted and fused to generate a more complete description of the blurred image. For the texture blur-invariant feature, LPQ is extracted in a densely sampled way and the vector of locally aggregated descriptors (VLAD) is employed to enhance its performance. For the structure blur-invariant feature, the histogram of oriented gradients (HOG) is used. To further enhance its blur invariance, we improve HOG by eliminating weak gradient magnitudes, which are more sensitive to image blur than strong gradients. The improved HOG is then fused with the original HOG by canonical correlation analysis (CCA). Finally, we fuse the texture and structure features by CCA to form the final blur-invariant representation of the face image. Experiments are performed on three face datasets. The results demonstrate that our improvements and the proposed fusion achieve good performance in blurred face recognition.
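
    The "improved HOG" idea can be sketched as follows: gradient orientation histograms are computed per cell after zeroing magnitudes below a threshold, since weak gradients are more sensitive to blur. The cell size, bin count, and threshold quantile are illustrative; the LPQ/VLAD extraction and the CCA fusion stages are omitted.

      # Sketch: HOG-like descriptor with weak gradient magnitudes eliminated.
      # Cell size, bin count and the weak-gradient quantile are illustrative.
      import numpy as np

      def blur_robust_hog(image, cell=8, bins=9, weak_quantile=0.3):
          gy, gx = np.gradient(image.astype(float))
          mag = np.hypot(gx, gy)
          ang = np.mod(np.arctan2(gy, gx), np.pi)           # unsigned orientation
          mag[mag < np.quantile(mag, weak_quantile)] = 0.0   # eliminate weak gradients
          h, w = image.shape
          feats = []
          for i in range(0, h - cell + 1, cell):
              for j in range(0, w - cell + 1, cell):
                  m = mag[i:i + cell, j:j + cell].ravel()
                  a = ang[i:i + cell, j:j + cell].ravel()
                  hist, _ = np.histogram(a, bins=bins, range=(0, np.pi), weights=m)
                  feats.append(hist / (np.linalg.norm(hist) + 1e-8))
          return np.concatenate(feats)

      print(blur_robust_hog(np.random.rand(64, 64)).shape)   # (64 cells * 9 bins,)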

  8. Catechol-O-methyltransferase val158met Polymorphism Interacts with Sex to Affect Face Recognition Ability

    PubMed Central

    Lamb, Yvette N.; McKay, Nicole S.; Singh, Shrimal S.; Waldie, Karen E.; Kirk, Ian J.

    2016-01-01

    The catechol-O-methyltransferase (COMT) val158met polymorphism affects the breakdown of synaptic dopamine. Consequently, this polymorphism has been associated with a variety of neurophysiological and behavioral outcomes. Some of the effects have been found to be sex-specific and it appears estrogen may act to down-regulate the activity of the COMT enzyme. The dopaminergic system has been implicated in face recognition, a form of cognition for which a female advantage has typically been reported. This study aimed to investigate potential joint effects of sex and COMT genotype on face recognition. A sample of 142 university students was genotyped and assessed using the Faces I subtest of the Wechsler Memory Scale – Third Edition (WMS-III). A significant two-way interaction between sex and COMT genotype on face recognition performance was found. Of the male participants, COMT val homozygotes and heterozygotes had significantly lower scores than met homozygotes. Scores did not differ between genotypes for female participants. While male val homozygotes had significantly lower scores than female val homozygotes, no sex differences were observed in the heterozygotes and met homozygotes. This study contributes to the accumulating literature documenting sex-specific effects of the COMT polymorphism by demonstrating a COMT-sex interaction for face recognition, and is consistent with a role for dopamine in face recognition. PMID:27445927

  9. Oxytocin increases bias, but not accuracy, in face recognition line-ups.

    PubMed

    Bate, Sarah; Bennetts, Rachel; Parris, Benjamin A; Bindemann, Markus; Udale, Robert; Bussunt, Amanda

    2015-07-01

    Previous work indicates that intranasal inhalation of oxytocin improves face recognition skills, raising the possibility that it may be used in security settings. However, it is unclear whether oxytocin directly acts upon the core face-processing system itself or indirectly improves face recognition via affective or social salience mechanisms. In a double-blind procedure, 60 participants received either an oxytocin or placebo nasal spray before completing the One-in-Ten task, a standardized test of unfamiliar face recognition containing target-present and target-absent line-ups. Participants in the oxytocin condition outperformed those in the placebo condition on target-present trials, yet were more likely to make false-positive errors on target-absent trials. Signal detection analyses indicated that oxytocin induced a more liberal response bias, rather than increasing accuracy per se. These findings support a social salience account of the effects of oxytocin on face recognition and indicate that oxytocin may impede face recognition in certain scenarios. PMID:25433464
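
    For readers unfamiliar with the signal detection terms, the sketch below computes sensitivity (d') and criterion (c) from hit and false-alarm rates. The rates are made-up numbers for illustration; a more negative c corresponds to a more liberal response bias.

      # Sketch: sensitivity (d') and criterion (c) from hit and false-alarm rates.
      # The rates below are illustrative, not the study's data.
      from scipy.stats import norm

      def dprime_and_criterion(hit_rate, fa_rate):
          z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
          return z_h - z_f, -0.5 * (z_h + z_f)

      print(dprime_and_criterion(0.80, 0.30))   # placebo-like rates
      print(dprime_and_criterion(0.85, 0.45))   # oxytocin-like: similar d', more liberal (negative) c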

  10. Self-Face Recognition in Schizophrenia: An Eye-Tracking Study.

    PubMed

    Bortolon, Catherine; Capdevielle, Delphine; Salesse, Robin N; Raffard, Stéphane

    2016-01-01

    Self-face recognition has been shown to be impaired in schizophrenia (SZ), according to studies using behavioral tasks involving cognitive demands. Here, we employed an eye-tracking methodology, which is a relevant tool for understanding self-face recognition deficits in SZ because it provides a natural, continuous and online record of face processing. Moreover, it allows collecting the most relevant and informative features each individual looks at during self-face recognition. These advantages are especially relevant considering the fundamental role played by patterns of visual exploration in face processing. Thus, this paper aims to investigate self-face recognition deficits in SZ using eye-tracking methodology. Visual scan paths were monitored in 20 patients with SZ and 20 healthy controls. Self, famous, and unknown faces were morphed in steps of 20%. Location, number, and duration of fixations on relevant areas were recorded with an eye-tracking system. Participants performed a passive exploration task (no specific instruction was provided), followed by an active decision making task (individuals were explicitly requested to recognize the different faces). Results showed that patients with SZ had fewer and longer fixations compared to controls. Nevertheless, both groups focused their attention on relevant facial features in a similar way. No significant difference was found between groups when participants were requested to recognize the faces (active task). In conclusion, using an eye-tracking methodology and two tasks with low levels of cognitive demands, our results suggest that patients with SZ are able to: (1) explore faces and focus on relevant features of the face in a similar way as controls; and (2) recognize their own face. PMID:26903833

  11. The Own-Age Bias in Face Recognition: A Meta-Analytic and Theoretical Review

    ERIC Educational Resources Information Center

    Rhodes, Matthew G.; Anastasi, Jeffrey S.

    2012-01-01

    A large number of studies have examined the finding that recognition memory for faces of one's own age group is often superior to memory for faces of another age group. We examined this "own-age bias" (OAB) in the meta-analyses reported. These data showed that hits were reliably greater for same-age relative to other-age faces (g = 0.23) and that…

  12. Self-Face Recognition in Schizophrenia: An Eye-Tracking Study

    PubMed Central

    Bortolon, Catherine; Capdevielle, Delphine; Salesse, Robin N.; Raffard, Stéphane

    2016-01-01

    Self-face recognition has been shown to be impaired in schizophrenia (SZ), according to studies using behavioral tasks involving cognitive demands. Here, we employed an eye-tracking methodology, which is a relevant tool for understanding self-face recognition deficits in SZ because it provides a natural, continuous and online record of face processing. Moreover, it allows collecting the most relevant and informative features each individual looks at during self-face recognition. These advantages are especially relevant considering the fundamental role played by patterns of visual exploration in face processing. Thus, this paper aims to investigate self-face recognition deficits in SZ using eye-tracking methodology. Visual scan paths were monitored in 20 patients with SZ and 20 healthy controls. Self, famous, and unknown faces were morphed in steps of 20%. Location, number, and duration of fixations on relevant areas were recorded with an eye-tracking system. Participants performed a passive exploration task (no specific instruction was provided), followed by an active decision making task (individuals were explicitly requested to recognize the different faces). Results showed that patients with SZ had fewer and longer fixations compared to controls. Nevertheless, both groups focused their attention on relevant facial features in a similar way. No significant difference was found between groups when participants were requested to recognize the faces (active task). In conclusion, using an eye-tracking methodology and two tasks with low levels of cognitive demands, our results suggest that patients with SZ are able to: (1) explore faces and focus on relevant features of the face in a similar way as controls; and (2) recognize their own face. PMID:26903833

  13. Infrared face recognition based on LBP histogram and KW feature selection

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua

    2014-07-01

    The conventional local binary pattern (LBP) histogram feature still has room for performance improvement. This paper focuses on the dimension reduction of LBP micro-patterns and proposes an improved infrared face recognition method based on the LBP histogram representation. To extract robust local features from infrared face images, LBP is chosen to capture the composition of micro-patterns in sub-blocks. Based on statistical test theory, a Kruskal-Wallis (KW) feature selection method is proposed to select the LBP patterns that are suitable for infrared face recognition. The experimental results show that the combination of LBP and KW feature selection improves the performance of infrared face recognition; the proposed method outperforms traditional methods based on the LBP histogram, discrete cosine transform (DCT), or principal component analysis (PCA).
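
    A sketch of the feature pipeline, assuming scikit-image's LBP implementation and random stand-in data: block-wise LBP histograms are computed, and a Kruskal-Wallis test per histogram bin keeps the bins that best separate the classes. Block size, LBP parameters, and the number of selected bins are illustrative.

      # Sketch: block-wise LBP histograms + Kruskal-Wallis feature selection.
      # Data are random stand-ins for infrared face images; parameters are illustrative.
      import numpy as np
      from skimage.feature import local_binary_pattern
      from scipy.stats import kruskal

      def lbp_block_histogram(image, block=16, P=8, R=1):
          lbp = local_binary_pattern(image, P, R, method="uniform")
          n_bins = P + 2
          feats = []
          for i in range(0, image.shape[0], block):
              for j in range(0, image.shape[1], block):
                  h, _ = np.histogram(lbp[i:i + block, j:j + block],
                                      bins=n_bins, range=(0, n_bins))
                  feats.append(h / h.sum())
          return np.concatenate(feats)

      rng = np.random.default_rng(0)
      X = np.array([lbp_block_histogram(rng.random((64, 64))) for _ in range(30)])
      y = np.repeat(np.arange(3), 10)            # 3 subjects, 10 images each

      # Kruskal-Wallis statistic per feature dimension; keep the most discriminative bins.
      stats = np.array([kruskal(*[X[y == c, d] for c in np.unique(y)]).statistic
                        for d in range(X.shape[1])])
      selected = np.argsort(stats)[::-1][:100]
      print("top selected feature dimensions:", selected[:10])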

  14. Maximum margin sparse representation discriminative mapping with application to face recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Qiang; Cai, Yunze; Xu, Xiaoming

    2013-02-01

    Sparse subspace learning has drawn increasing attention recently. We propose a novel sparse subspace learning algorithm called maximum margin sparse representation discriminative mapping (MSRDM), which adds discriminative information to sparse neighborhood preservation. Based on a combination of the maximum margin discriminant criterion and sparse representation, MSRDM can preserve both local geometric structure and classification information. MSRDM naturally avoids the small sample size problem in face recognition, and its computation is efficient. To improve face recognition performance, we propose to integrate Gabor-like complex wavelet and natural image features as complex vectors that serve as the input features of MSRDM. Experimental results on the ORL, UMIST, Yale, and PIE face databases demonstrate the effectiveness of the proposed face recognition method.

  15. Learning deformation model for expression-robust 3D face recognition

    NASA Astrophysics Data System (ADS)

    Guo, Zhe; Liu, Shu; Wang, Yi; Lei, Tao

    2015-12-01

    Expression change is the major cause of local plastic deformation of the facial surface. Under large expression changes, intra-class differences can exceed inter-class differences, making it difficult to recognize the same individual across facial expressions. In this paper, an expression-robust 3D face recognition method is proposed by learning an expression deformation model. The expressions of the individuals in the training set are modeled by principal component analysis, and the main components are retained to construct the facial deformation model. For a test 3D face, the shape difference between the test face and a neutral face in the training set is used to reconstruct the expression change with the constructed deformation model. The reconstruction residual error is used for face recognition. The average recognition rate on GavabDB and a self-built database reaches 85.1% and 83%, respectively, which shows strong robustness to expression changes.
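
    The deformation-model idea can be sketched on synthetic shape vectors: a PCA basis of expression-induced shape differences is learned, and a probe is scored against each gallery identity by the residual left after reconstructing the probe-to-neutral difference in that basis. All dimensions and data below are illustrative.

      # Sketch: PCA expression deformation model + reconstruction-residual matching.
      # Shape vectors are random stand-ins; dimensions are illustrative.
      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(1)
      dim, n_train = 300, 80                     # flattened 3D shape vectors

      # Training: differences between expressive scans and the same person's neutral scan.
      expression_diffs = rng.normal(size=(n_train, dim))
      deformation_model = PCA(n_components=10).fit(expression_diffs)

      def residual(probe, neutral):
          """Reconstruction error of (probe - neutral) in the expression subspace."""
          d = (probe - neutral).reshape(1, -1)
          recon = deformation_model.inverse_transform(deformation_model.transform(d))
          return float(np.linalg.norm(d - recon))

      gallery_neutrals = rng.normal(size=(5, dim))            # one neutral scan per identity
      probe = gallery_neutrals[2] + deformation_model.inverse_transform(
          rng.normal(size=(1, 10)))[0]                        # identity 2 with an expression
      scores = [residual(probe, g) for g in gallery_neutrals]
      print("recognised identity:", int(np.argmin(scores)))   # smallest residual wins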

  16. On the particular vulnerability of face recognition to aging: a review of three hypotheses

    PubMed Central

    Boutet, Isabelle; Taler, Vanessa; Collin, Charles A.

    2015-01-01

    Age-related face recognition deficits are characterized by high false alarms to unfamiliar faces, are not as pronounced for other complex stimuli, and are only partially related to general age-related impairments in cognition. This paper reviews some of the underlying processes likely to be implicated in these deficits by focusing on areas where contradictions abound, as a means to highlight avenues for future research. Research pertaining to the three following hypotheses is presented: (i) perceptual deterioration, (ii) encoding of configural information, and (iii) difficulties in recollecting contextual information. The evidence surveyed provides support for the idea that all three factors are likely to contribute, under certain conditions, to the deficits in face recognition seen in older adults. We discuss how these different factors might interact in the context of a generic framework of the different stages implicated in face recognition. Several suggestions for future investigations are outlined. PMID:26347670

  17. Fashioning the Face: Sensorimotor Simulation Contributes to Facial Expression Recognition.

    PubMed

    Wood, Adrienne; Rychlowska, Magdalena; Korb, Sebastian; Niedenthal, Paula

    2016-03-01

    When we observe a facial expression of emotion, we often mimic it. This automatic mimicry reflects underlying sensorimotor simulation that supports accurate emotion recognition. Why this is so is becoming more obvious: emotions are patterns of expressive, behavioral, physiological, and subjective feeling responses. Activation of one component can therefore automatically activate other components. When people simulate a perceived facial expression, they partially activate the corresponding emotional state in themselves, which provides a basis for inferring the underlying emotion of the expresser. We integrate recent evidence in favor of a role for sensorimotor simulation in emotion recognition. We then connect this account to a domain-general understanding of how sensory information from multiple modalities is integrated to generate perceptual predictions in the brain. PMID:26876363

  18. 3D Face Recognition Based on Multiple Keypoint Descriptors and Sparse Representation

    PubMed Central

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm. PMID:24940876
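
    The classification stage can be illustrated with a minimal sparse representation-based classification (SRC) sketch on synthetic descriptor vectors, using orthogonal matching pursuit as the sparse solver; meshSIFT extraction and the multitask extension are omitted, and all sizes are illustrative.

      # Sketch: sparse representation-based classification (SRC) with class residuals.
      # Gallery descriptors are random stand-ins; OMP is used as the sparse solver.
      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      rng = np.random.default_rng(0)
      n_classes, per_class, dim = 5, 20, 128
      D = rng.normal(size=(dim, n_classes * per_class))       # gallery dictionary, one atom per descriptor
      D /= np.linalg.norm(D, axis=0)
      atom_class = np.repeat(np.arange(n_classes), per_class)

      def src_classify(probe, n_nonzero=10):
          omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                          fit_intercept=False).fit(D, probe)
          x = omp.coef_
          residuals = []
          for c in range(n_classes):
              xc = np.where(atom_class == c, x, 0.0)           # keep only class-c coefficients
              residuals.append(np.linalg.norm(probe - D @ xc))
          return int(np.argmin(residuals))

      # A probe built from class-3 atoms plus noise should be assigned to class 3.
      probe = D[:, atom_class == 3] @ rng.normal(size=per_class) * 0.3 + 0.01 * rng.normal(size=dim)
      print(src_classify(probe))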

  19. Recognition by association: Within- and cross-modality associative priming with faces and voices.

    PubMed

    Stevenage, Sarah V; Hale, Sarah; Morgan, Yasmin; Neil, Greg J

    2014-02-01

    Recent literature has raised the suggestion that voice recognition runs in parallel to face recognition. As a result, a prediction can be made that voices should prime faces and faces should prime voices. A traditional associative priming paradigm was used in two studies to explore within-modality priming and cross-modality priming. In the within-modality condition where both prime and target were faces, analysis indicated the expected associative priming effect: The familiarity decision to the second target celebrity was made more quickly if preceded by a semantically related prime celebrity, than if preceded by an unrelated prime celebrity. In the cross-modality condition, where a voice prime preceded a face target, analysis indicated no associative priming when a 3-s stimulus onset asynchrony (SOA) was used. However, when a relatively longer SOA was used, providing time for robust recognition of the prime, significant cross-modality priming emerged. These data are explored within the context of a unified account of face and voice recognition, which recognizes weaker voice processing than face processing. PMID:24387093

  20. 3D face recognition based on multiple keypoint descriptors and sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm. PMID:24940876

  1. The fusiform face area is not sufficient for face recognition: evidence from a patient with dense prosopagnosia and no occipital face area.

    PubMed

    Steeves, Jennifer K E; Culham, Jody C; Duchaine, Bradley C; Pratesi, Cristiana Cavina; Valyear, Kenneth F; Schindler, Igor; Humphrey, G Keith; Milner, A David; Goodale, Melvyn A

    2006-01-01

    We tested functional activation for faces in patient D.F., who, following acquired brain damage, has a profound deficit in object recognition based on form (visual form agnosia) and also a prosopagnosia that has remained undocumented until now. Functional imaging demonstrated that, like our control observers, D.F. shows significantly more activation when passively viewing face images compared to scene images in an area consistent with the fusiform face area (FFA) (p < 0.01). Control observers also show occipital face area (OFA) activation, whereas D.F.'s lesions appear to overlap the OFA bilaterally. We asked, given that D.F. shows FFA activation for faces, to what extent is she able to recognize faces? D.F. demonstrated a severe impairment in higher-level face processing: she could not recognize face identity, gender, or emotional expression. In contrast, she performed relatively normally on many face categorization tasks. D.F. can differentiate faces from non-faces given sufficient texture information and processing time, and she can do this independently of color and illumination information. D.F. can use configural information for categorizing faces when they are presented in an upright but not a sideways orientation, and, given that she also cannot discriminate half-faces, she may rely on a spatially symmetric feature arrangement. Faces appear to be a unique category, which she can classify even when she has no advance knowledge that she will be shown face images. Together, these imaging and behavioral data support the importance of the integrity of a complex network of regions for face identification, including more than just the FFA, in particular the OFA, a region believed to be associated with low-level processing. PMID:16125741

  2. Face and Emotion Recognition in MCDD versus PDD-NOS

    ERIC Educational Resources Information Center

    Herba, Catherine M.; de Bruin, Esther; Althaus, Monika; Verheij, Fop; Ferdinand, Robert F.

    2008-01-01

    Previous studies indicate that Multiple Complex Developmental Disorder (MCDD) children differ from PDD-NOS and autistic children on a symptom level and on psychophysiological functioning. Children with MCDD (n = 21) and PDD-NOS (n = 62) were compared on two facets of social-cognitive functioning: identification of neutral faces and facial…

  3. Facilitating recognition of crowded faces with presaccadic attention

    PubMed Central

    Wolfe, Benjamin A.; Whitney, David

    2014-01-01

    In daily life, we make several saccades per second to objects we cannot normally recognize in the periphery due to visual crowding. While we are aware of the presence of these objects, we cannot identify them and may, at best, only know that an object is present at a particular location. The process of planning a saccade involves a presaccadic attentional component known to be critical for saccadic accuracy, but whether this or other presaccadic processes facilitate object identification as opposed to object detection—especially with high level natural objects like faces—is less clear. In the following experiments, we show that presaccadic information about a crowded face reduces the deleterious effect of crowding, facilitating discrimination of two emotional faces, even when the target face is never foveated. While accurate identification of crowded objects is possible in the absence of a saccade, accurate identification of a crowded object is considerably facilitated by presaccadic attention. Our results provide converging evidence for a selective increase in available information about high level objects, such as faces, at a presaccadic stage. PMID:24592233

  4. Face recognition using fuzzy integral and wavelet decomposition method.

    PubMed

    Kwak, Keun-Chang; Pedrycz, Witold

    2004-08-01

    In this paper, we develop a method for recognizing face images by combining wavelet decomposition, the Fisherface method, and the fuzzy integral. The proposed approach comprises four main stages. The first stage uses wavelet decomposition to help extract intrinsic features of face images. As a result of this decomposition, we obtain four subimages (namely the approximation and the horizontal, vertical, and diagonal detail images). The second stage concerns the application of the Fisherface method to these four decompositions. The choice of the Fisherface method in this setting is motivated by its insensitivity to large variations in light direction, face pose, and facial expression. The last two stages are concerned with the aggregation of the individual classifiers by means of the fuzzy integral. Both the Sugeno and Choquet types of fuzzy integral are considered as the aggregation method. In the experiments we use n-fold cross-validation to assure high consistency of the produced classification outcomes. The experimental results obtained for the Chungbuk National University (CNU) and Yale University face databases reveal that the approach presented in this paper yields better classification performance in comparison to the results obtained by other classifiers. PMID:15462434
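
    The structure of the approach can be sketched as below: one wavelet decomposition into four subimages, one LDA (Fisherface-style) classifier per subimage, and a Sugeno fuzzy integral over the four per-class confidences. The fuzzy densities and the simple capped-additive fuzzy measure are illustrative simplifications of the measures used in the paper, and the data are random stand-ins.

      # Sketch: wavelet subimages -> one LDA per subband -> Sugeno fuzzy integral fusion.
      # Densities and the capped-additive fuzzy measure are illustrative simplifications.
      import numpy as np
      import pywt
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def subband_features(image):
          approx, (horiz, vert, diag) = pywt.dwt2(image, "haar")
          return [band.ravel() for band in (approx, horiz, vert, diag)]

      def sugeno_integral(scores, densities):
          order = np.argsort(scores)[::-1]                      # sort sources by confidence
          h = np.asarray(scores)[order]
          g = np.minimum(np.cumsum(np.asarray(densities)[order]), 1.0)
          return float(np.max(np.minimum(h, g)))

      rng = np.random.default_rng(0)
      images = rng.random((40, 32, 32))
      labels = np.repeat(np.arange(4), 10)

      # Train one LDA ("Fisherface"-style classifier) per wavelet subband.
      bands = [np.array([subband_features(im)[b] for im in images]) for b in range(4)]
      classifiers = [LinearDiscriminantAnalysis().fit(bands[b], labels) for b in range(4)]

      probe = images[0]
      densities = [0.4, 0.2, 0.2, 0.2]                          # illustrative per-subband weights
      per_class = []
      for c in range(4):
          conf = [clf.predict_proba(subband_features(probe)[b].reshape(1, -1))[0, c]
                  for b, clf in enumerate(classifiers)]
          per_class.append(sugeno_integral(conf, densities))
      print("predicted class:", int(np.argmax(per_class)))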

  5. Self-Face and Self-Body Recognition in Autism

    ERIC Educational Resources Information Center

    Gessaroli, Erica; Andreini, Veronica; Pellegri, Elena; Frassinetti, Francesca

    2013-01-01

    The advantage in responding to self vs. others' body and face-parts (the so called self-advantage) is considered to reflect the implicit access to the bodily self representation and has been studied in healthy and brain-damaged adults in previous studies. If the distinction of the self from others is a key aspect of social behaviour and is a…

  6. Semantic and visual determinants of face recognition in a prosopagnosic patient.

    PubMed

    Dixon, M J; Bub, D N; Arguin, M

    1998-05-01

    Prosopagnosia is the neuropathological inability to recognize familiar people by their faces. It can occur in isolation or can coincide with recognition deficits for other nonface objects. Often, patients whose prosopagnosia is accompanied by object recognition difficulties have more trouble identifying certain categories of objects relative to others. In previous research, we demonstrated that objects that shared multiple visual features and were semantically close posed severe recognition difficulties for a patient with temporal lobe damage. We now demonstrate that this patient's face recognition is constrained by these same parameters. The prosopagnosic patient ELM had difficulties pairing faces to names when the faces shared visual features and the names were semantically related (e.g., Tonya Harding, Nancy Kerrigan, and Josee Chouinard, three ice skaters). He made tenfold fewer errors when the exact same faces were associated with semantically unrelated people (e.g., singer Celine Dion, actress Betty Grable, and First Lady Hillary Clinton). We conclude that prosopagnosia and co-occurring category-specific recognition problems both stem from difficulties disambiguating the stored representations of objects that share multiple visual features and refer to semantically close identities or concepts. PMID:9869710

  7. 3D fast wavelet network model-assisted 3D face recognition

    NASA Astrophysics Data System (ADS)

    Said, Salwa; Jemai, Olfa; Zaied, Mourad; Ben Amar, Chokri

    2015-12-01

    In recent years, 3D shape has emerged in face recognition owing to its robustness to pose and illumination changes. These attractive benefits do not by themselves guarantee a satisfactory recognition rate; other challenges, such as facial expressions and the computing time of matching algorithms, remain to be addressed. In this context, we propose a 3D face recognition approach using 3D wavelet networks. Our approach contains two stages: a learning stage and a recognition stage. For training, we propose a novel algorithm based on the 3D fast wavelet transform. From the 3D coordinates of the face (x, y, z), we perform voxelization to obtain a 3D volume, which is decomposed by the 3D fast wavelet transform and then modeled with a wavelet network; the associated weights are taken as the feature vector representing each training face. In the recognition stage, a face of unknown identity is projected onto all the training wavelet networks to obtain a new feature vector after each projection. A similarity score is computed between the stored and the obtained feature vectors. To show the efficiency of our approach, experiments were performed on the full FRGC v.2 benchmark.
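
    The first two steps (voxelization and the 3D fast wavelet transform) can be sketched with NumPy and PyWavelets; the grid size and wavelet are illustrative, and the wavelet-network modelling and projection-based matching stages are not shown.

      # Sketch: voxelize a 3D face point cloud, then apply a 3-D DWT (PyWavelets).
      # The point cloud is a random stand-in; grid size and wavelet are illustrative.
      import numpy as np
      import pywt

      def voxelize(points, grid=32):
          """Count points per cell of a grid x grid x grid volume spanning the cloud."""
          mins, maxs = points.min(axis=0), points.max(axis=0)
          volume, _ = np.histogramdd(points, bins=grid, range=list(zip(mins, maxs)))
          return volume

      rng = np.random.default_rng(0)
      cloud = rng.normal(size=(5000, 3))              # stand-in for (x, y, z) face vertices
      volume = voxelize(cloud)

      coeffs = pywt.dwtn(volume, "haar")              # 3-D DWT: 8 subbands ('aaa' ... 'ddd')
      print(sorted(coeffs.keys()), coeffs["aaa"].shape)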

  8. When less is more: Impact of face processing ability on recognition of visually degraded faces.

    PubMed

    Royer, Jessica; Blais, Caroline; Gosselin, Frédéric; Duncan, Justin; Fiset, Daniel

    2015-10-01

    It is generally thought that faces are perceived as indissociable wholes. As a result, many assume that hiding large portions of the face by the addition of noise or by masking limits or qualitatively alters natural "expert" face processing by forcing observers to use atypical processing mechanisms. We addressed this question by measuring face processing abilities with whole faces and with Bubbles (Gosselin & Schyns, 2001), an extreme masking method thought by some to bias the observers toward the use of atypical processing mechanisms by limiting the use of whole-face strategies. We obtained a strong and negative correlation between individual face processing ability and the number of bubbles (r = -.79), and this correlation remained strong even after controlling for general visual/cognitive processing ability (rpartial = -.72). In other words, the better someone is at processing faces, the fewer facial parts they need to accurately carry out this task. Thus, contrary to what many researchers assume, face processing mechanisms appear to be quite insensitive to the visual impoverishment of the face stimulus. PMID:26168140

  9. Face Recognition System for Set-Top Box-Based Intelligent TV

    PubMed Central

    Lee, Won Oh; Kim, Yeong Gon; Hong, Hyung Gil; Park, Kang Ryoung

    2014-01-01

    Despite the prevalence of smart TVs, many consumers continue to use conventional TVs with supplementary set-top boxes (STBs) because of the high cost of smart TVs. However, because the processing power of an STB is quite low, the smart TV functionalities that can be implemented in an STB are very limited. Because of this, negligible research has been conducted regarding face recognition for conventional TVs with supplementary STBs, even though many such studies have been conducted with smart TVs. In terms of camera sensors, previous face recognition systems have used high-resolution cameras, cameras with high magnification zoom lenses, or camera systems with panning and tilting devices that can be used for face recognition from various positions. However, these cameras and devices cannot be used in intelligent TV environments because of limitations related to size and cost, and only small, low-cost web-cameras can be used. The resulting face recognition performance is degraded because of the limited resolution and quality levels of the images. Therefore, we propose a new face recognition system for intelligent TVs in order to overcome the limitations associated with low-resource set-top boxes and low-cost web-cameras. We implement the face recognition system using a software algorithm that does not require special devices or cameras. Our research has the following four novelties: first, the candidate regions of a viewer's face are detected in an image captured by a camera connected to the STB via low-complexity background subtraction and face color filtering; second, the detected candidate face regions are transmitted to a server that has high processing power in order to detect face regions accurately; third, in-plane rotations of the face regions are compensated based on similarities between the left and right half sub-regions of the face regions; fourth, various poses of the viewer's face region are identified using five templates obtained during the initial user
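
    The STB-side candidate detection can be pictured with the OpenCV sketch below: lightweight background subtraction followed by a skin-colour filter in YCrCb proposes candidate regions that would then be sent to the server. The Cr/Cb skin range is a commonly used illustrative choice, not the paper's calibrated values.

      # Sketch: background subtraction + skin-colour filtering to propose face regions.
      # The skin-colour thresholds and minimum area are illustrative choices.
      import cv2
      import numpy as np

      subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)

      def candidate_face_regions(frame_bgr, min_area=1500):
          motion = subtractor.apply(frame_bgr)                          # foreground mask
          ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
          skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))      # rough skin range
          mask = cv2.bitwise_and(motion, skin)
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
          return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]

      # Stand-in frame (replace with frames grabbed from the STB camera).
      frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
      print(candidate_face_regions(frame))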

  10. Toward fast feature adaptation and localization for real-time face recognition systems

    NASA Astrophysics Data System (ADS)

    Zuo, Fei; de With, Peter H.

    2003-06-01

    In a home environment, video surveillance employing face detection and recognition is attractive for new applications. Facial feature (e.g. eyes and mouth) localization in the face is an essential task for face recognition because it constitutes an indispensable step for face geometry normalization. This paper presents a new and efficient feature localization approach for real-time personal surveillance applications with low-quality images. The proposed approach consists of three major steps: (1) self-adaptive iris tracing, which is preceded by a trace-point selection process with multiple initializations to overcome the local convergence problem, (2) eye structure verification using an eye template with limited deformation freedom, and (3) eye-pair selection based on a combination of metrics. We have tested our facial feature localization method on about 100 randomly selected face images from the AR database and 30 face images downloaded from the Internet. The results show that our approach achieves a correct detection rate of 96%. Since our eye-selection technique does not involve time-consuming deformation processes, it yields relatively fast processing. The proposed algorithm has been successfully applied to a real-time home video surveillance system and proven to be an effective and computationally efficient face normalization method preceding face recognition.

  11. Super resolution based face recognition: do we need training image set?

    NASA Astrophysics Data System (ADS)

    Al-Hassan, Nadia; Sellahewa, Harin; Jassim, Sabah A.

    2013-05-01

    This paper is concerned with face recognition under uncontrolled conditions, e.g., surveillance-at-a-distance scenarios and post-riot forensics, where captured face images are severely degraded/blurred and of low resolution. This is a tough challenge due to many factors, including the capture conditions. We present the results of our investigations into recently developed Compressive Sensing (CS) theory to develop scalable face recognition schemes using a variety of overcomplete dictionaries that construct super-resolved face images from any input low-resolution degraded face image. We demonstrate that deterministic as well as non-deterministic dictionaries, which use no face image information but satisfy some form of the Restricted Isometry Property required for CS, can achieve face recognition accuracy levels as good as, if not better than, those achieved by dictionaries proposed in the literature that are learned from face image databases using elaborate procedures. We also elaborate on how this approach can help in fighting crime and terrorism.
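
    The compressive-sensing idea can be illustrated with a toy recovery experiment: a random Gaussian dictionary (which satisfies a Restricted Isometry Property with high probability) is used with orthogonal matching pursuit to recover the sparse code of a high-resolution patch from low-resolution measurements, after which the super-resolved patch is synthesised. The patch sizes, sparsity level, and the stand-in degradation operator are illustrative, not those of the paper.

      # Sketch: CS-style super-resolution of one patch with a random Gaussian dictionary
      # and orthogonal matching pursuit. All sizes and operators are illustrative.
      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      rng = np.random.default_rng(0)
      hr_dim, lr_dim, n_atoms, sparsity = 256, 64, 256, 8     # 16x16 HR patch, 8x8 LR measurement

      D_hr = rng.normal(size=(hr_dim, n_atoms)) / np.sqrt(hr_dim)   # non-deterministic (Gaussian) dictionary
      Phi = rng.normal(size=(lr_dim, hr_dim)) / np.sqrt(lr_dim)     # stand-in degradation/measurement operator
      D_lr = Phi @ D_hr                                             # dictionary as seen at low resolution

      # Simulate an HR patch that is sparse over the dictionary, observe it at low resolution.
      x_true = np.zeros(n_atoms)
      x_true[rng.choice(n_atoms, sparsity, replace=False)] = rng.normal(size=sparsity)
      hr_patch, lr_patch = D_hr @ x_true, Phi @ D_hr @ x_true

      # Sparse recovery from LR measurements, then synthesis of the super-resolved patch.
      omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity, fit_intercept=False).fit(D_lr, lr_patch)
      hr_recovered = D_hr @ omp.coef_
      print("relative reconstruction error:",
            np.linalg.norm(hr_recovered - hr_patch) / np.linalg.norm(hr_patch))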

  12. The cross-race effect in face recognition memory by bicultural individuals.

    PubMed

    Marsh, Benjamin U; Pezdek, Kathy; Ozery, Daphna Hausman

    2016-09-01

    Social-cognitive models of the cross-race effect (CRE) generally specify that cross-race faces are automatically categorized as an out-group, and that different encoding processes are then applied to same-race and cross-race faces, resulting in better recognition memory for same-race faces. We examined whether cultural priming moderates the cognitive categorization of cross-race faces. In Experiment 1, monoracial Latino-Americans, considered to have a bicultural self, were primed to focus on either a Latino or American cultural self and then viewed Latino and White faces. Latino-Americans primed as Latino exhibited higher recognition accuracy (A') for Latino than White faces; those primed as American exhibited higher recognition accuracy for White than Latino faces. In Experiment 2, as predicted, prime condition did not moderate the CRE in European-Americans. These results suggest that for monoracial biculturals, priming either of their cultural identities influences the encoding processes applied to same- and cross-race faces, thereby moderating the CRE. PMID:27219532

  13. Age-Related Differences in Brain Electrical Activity during Extended Continuous Face Recognition in Younger Children, Older Children and Adults

    ERIC Educational Resources Information Center

    Van Strien, Jan W.; Glimmerveen, Johanna C.; Franken, Ingmar H. A.; Martens, Vanessa E. G.; de Bruin, Eveline A.

    2011-01-01

    To examine the development of recognition memory in primary-school children, 36 healthy younger children (8-9 years old) and 36 healthy older children (11-12 years old) participated in an ERP study with an extended continuous face recognition task (Study 1). Each face of a series of 30 faces was shown randomly six times interspersed with…

  14. Adaptive weighted local textural features for illumination, expression, and occlusion invariant face recognition

    NASA Astrophysics Data System (ADS)

    Cui, Chen; Asari, Vijayan K.

    2014-03-01

    Biometric features such as fingerprints, iris patterns, and face features help to identify people and restrict access to secure areas by performing advanced pattern analysis and matching. Face recognition is one of the most promising biometric methodologies for human identification in a non-cooperative security environment. However, the recognition results obtained by face recognition systems are affected by several variations that may happen to the patterns in an unrestricted environment. As a result, several algorithms have been developed for extracting different facial features for face recognition. Due to the various possible challenges of data captured at different lighting conditions, viewing angles, facial expressions, and partial occlusions in natural environmental conditions, automatic facial recognition still remains a difficult issue to be resolved. In this paper, we propose a novel approach to tackling some of these issues by analyzing the local textural descriptions for facial feature representation. The textural information is extracted by an enhanced local binary pattern (ELBP) description of all the local regions of the face. The relationship of each pixel with respect to its neighborhood is extracted and employed to calculate the new representation. ELBP extracts a much more robust textural feature vector from an original gray-level image under different lighting conditions. The dimensionality of the texture image is reduced by principal component analysis performed on each local face region. Each low-dimensional vector representing a local region is then weighted based on the significance of the sub-region. The weight of each sub-region is determined by employing the local variance estimate of the respective region, which represents the significance of the region. The final facial textural feature vector is obtained by concatenating the reduced-dimensional weighted vectors of all the modules (sub-regions) of the face image.
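
    A minimal sketch of the region-wise pipeline follows, with scikit-image's plain LBP standing in for the enhanced LBP (ELBP); the block grid, the PCA dimension, and the variance-based weighting rule are illustrative assumptions rather than the authors' exact settings, and at least as many training faces as PCA components are assumed.

```python
# Hedged sketch: regional texture features, per-region PCA, and variance-based weighting.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA

def regional_weighted_features(faces, grid=(4, 4), n_components=8):
    """faces: (n_images, H, W) aligned gray-level face images; returns one feature row per face."""
    n, H, W = faces.shape
    codes = np.stack([local_binary_pattern(f, P=8, R=1, method="uniform") for f in faces])
    bh, bw = H // grid[0], W // grid[1]
    region_vectors, weights = [], []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = codes[:, i*bh:(i+1)*bh, j*bw:(j+1)*bw].reshape(n, -1)
            region_vectors.append(PCA(n_components=n_components).fit_transform(block))
            # Region weight: mean local variance of its texture codes (a significance proxy).
            weights.append(block.var(axis=1).mean())
    weights = np.asarray(weights) / np.sum(weights)
    # Concatenate the weighted low-dimensional region vectors into one feature vector per face.
    return np.hstack([w * v for w, v in zip(weights, region_vectors)])
```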

  15. Face recognition across makeup and plastic surgery from real-world images

    NASA Astrophysics Data System (ADS)

    Moeini, Ali; Faez, Karim; Moeini, Hossein

    2015-09-01

    A feature extraction study is proposed to handle the problem of facial appearance changes, including facial makeup and plastic surgery, in face recognition. To make a face recognition method robust to facial appearance changes, features are extracted from the facial depth, on which facial makeup and plastic surgery have no effect, and are added to the facial texture features. Accordingly, a three-dimensional (3-D) face is reconstructed from only a single two-dimensional (2-D) frontal image in real-world scenarios, and the facial depth is extracted from the reconstructed model. Afterward, the dual-tree complex wavelet transform (DT-CWT) is applied to both the texture and the reconstructed depth images to extract the feature vectors. Finally, the final feature vectors are generated by combining the 2-D and 3-D feature vectors and are classified by a support vector machine. Promising results have been achieved for makeup-invariant face recognition on two available image databases (YouTube makeup and virtual makeup) and for plastic surgery-invariant face recognition on a plastic surgery face database, in comparison with several state-of-the-art feature extraction methods. Several real-world scenarios are also planned to evaluate the performance of the proposed method on a combination of these three databases with 1102 subjects.
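
    For concreteness, a hedged sketch of the final fusion-and-classification step, assuming texture and depth feature vectors have already been extracted for each training image; the 3-D reconstruction and DT-CWT stages are not shown, and the SVM kernel and parameters are illustrative assumptions.

```python
# Hedged sketch: concatenate 2-D texture and 3-D depth features, then classify with an SVM.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_fused_classifier(texture_feats, depth_feats, labels):
    """texture_feats: (n, d1), depth_feats: (n, d2), labels: (n,) subject identities."""
    X = np.hstack([texture_feats, depth_feats])            # simple feature-level fusion
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    return clf.fit(X, labels)

def identify(clf, texture_feat, depth_feat):
    return clf.predict(np.hstack([texture_feat, depth_feat]).reshape(1, -1))[0]
```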

  16. Local binary pattern based face recognition by estimation of facial distinctive information distribution

    NASA Astrophysics Data System (ADS)

    da, Bangyou; Sang, Nong

    2009-11-01

    We present a novel approach for face recognition by combining a local binary pattern (LBP)-based face descriptor and the distinctive information of faces. Several psychophysical studies have shown that the eyes or mouth can be an important cue in human face perception, whereas the nose plays an insignificant role. This means that there exists a distinctive information distribution across faces. First, we give a quantitative estimation of the density for each pixel in a frontal face image by combining the Parzen-window approach and the scale-invariant feature transform detector, which is taken as the measure of the distinctive information of the faces. Second, we integrate the density function over the subwindow regions of the face to obtain the weight set used in the LBP-based face descriptor to produce weighted chi-square statistics. As an elementary application of the estimation of the distinctive information of faces, the proposed method is tested on the FERET FA/FB image sets and yields a recognition rate of 98.2% compared to the 97.3% produced by the method adopted by Ahonen, Hadid, and Pietikainen.
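
    The matching step can be sketched directly. Below, regional LBP histograms are compared with a weighted chi-square statistic; the 7x7 grid follows common LBP-face practice, and the per-region weights (which the paper derives from the density estimate) are simply supplied by the caller, so this is an illustrative sketch rather than the authors' implementation.

```python
# Hedged sketch: regional LBP histograms compared with a weighted chi-square statistic.
import numpy as np
from skimage.feature import local_binary_pattern

def regional_lbp_histograms(face, grid=(7, 7), P=8, R=1):
    """face: (H, W) gray-level image; returns an array of shape (n_regions, n_bins)."""
    codes = local_binary_pattern(face, P, R, method="uniform")
    n_bins = P + 2
    H, W = face.shape
    bh, bw = H // grid[0], W // grid[1]
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = codes[i*bh:(i+1)*bh, j*bw:(j+1)*bw]
            h, _ = np.histogram(block, bins=n_bins, range=(0, n_bins), density=True)
            hists.append(h)
    return np.asarray(hists)

def weighted_chi_square(h1, h2, region_weights, eps=1e-10):
    """Smaller value = more similar faces; region_weights encode per-region distinctiveness."""
    chi2 = ((h1 - h2) ** 2 / (h1 + h2 + eps)).sum(axis=1)   # per-region chi-square distance
    return float(np.dot(region_weights, chi2))
```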

  17. Effects of surface materials on polarimetric-thermal measurements: applications to face recognition.

    PubMed

    Short, Nathaniel J; Yuffa, Alex J; Videen, Gorden; Hu, Shuowen

    2016-07-01

    Materials, such as cosmetics, applied to the face can severely inhibit biometric face-recognition systems operating in the visible spectrum. These products are typically made up of materials having different spectral properties and color pigmentation that distorts the perceived shape of the face. The surface of the face emits thermal radiation, due to the living tissue beneath the surface of the skin. The emissivity of skin is approximately 0.99; in comparison, oil- and plastic-based materials, commonly found in cosmetics and face paints, have an emissivity range of 0.9-0.95 in the long-wavelength infrared part of the spectrum. Due to these properties, all three are good thermal emitters and have little impact on the heat transferred from the face. Polarimetric-thermal imaging provides additional details of the face and is also dependent upon the thermal radiation from the face. In this paper, we provide a theoretical analysis on the thermal conductivity of various materials commonly applied to the face using a metallic sphere. Additionally, we observe the impact of environmental conditions on the strength of the polarimetric signature and the ability to recover geometric details. Finally, we show how these materials degrade the performance of traditional face-recognition methods and provide an approach to mitigating this effect using polarimetric-thermal imaging. PMID:27409214

  18. Gradient feature matching for in-plane rotation invariant face sketch recognition

    NASA Astrophysics Data System (ADS)

    Alex, Ann Theja; Asari, Vijayan K.; Mathew, Alex

    2013-03-01

    Automatic recognition of face sketches is a challenging and interesting problem. An artist-drawn sketch is compared against a mugshot database to identify criminals. Manually comparing images is a very cumbersome task, which necessitates a pattern recognition system to perform the comparisons. Existing methods fall into two main categories: those that allow recognition across modalities, and those that require a sketch/photo synthesis step and then compare within a single modality. The methods that require synthesis demand a lot of computing power since they involve high time and space complexity. Our method allows recognition across modalities. It uses the edge features of a face sketch and a face photo to create a feature string called an 'edge-string', which is a polar-coordinate representation of the edge image. To generate a polar-coordinate representation, we need a reference point and a reference line. Using the center point of the edge image as the reference point and a horizontal line as the reference line is the simplest solution, but it cannot handle in-plane rotations. For this reason, we propose an approach for finding the reference line and the centroid point. The edge-strings of the face photo and face sketch are then compared using the Smith-Waterman algorithm for local string alignment. The face photo that gives the highest similarity score is the one that matches the test face sketch input. The results on the CUHK (Chinese University of Hong Kong) student dataset show the effectiveness of the proposed approach for face sketch recognition.
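
    The string-alignment step is easy to illustrate. Below is a small, self-contained Smith-Waterman scorer of the kind applied to edge-strings; the scoring parameters and the example symbol sequences are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: Smith-Waterman local alignment used as a similarity score between edge-strings.
import numpy as np

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Return the best local-alignment score between sequences a and b."""
    H = np.zeros((len(a) + 1, len(b) + 1))
    best = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1, j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i, j] = max(0.0, diag, H[i - 1, j] + gap, H[i, j - 1] + gap)
            best = max(best, H[i, j])
    return best

# The gallery photo whose edge-string attains the highest score against the query
# sketch's edge-string would be returned as the match.
print(smith_waterman("ACACACTA", "AGCACACA"))
```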

  19. Correlation based efficient face recognition and color change detection

    NASA Astrophysics Data System (ADS)

    Elbouz, M.; Alfalou, A.; Brosseau, C.; Alam, M. S.; Qasmi, S.

    2013-01-01

    Identifying the human face via correlation is a topic attracting widespread interest. At the heart of this technique lies the comparison of an unknown target image to a known reference database of images. However, the color information in the target image remains notoriously difficult to interpret. In this paper, we report a new technique which: (i) is robust against illumination change, (ii) offers discrimination ability to detect color change between faces having similar shape, and (iii) is specifically designed to detect red colored stains (i.e. facial bleeding). We adopt the Vanderlugt correlator (VLC) architecture with a segmented phase filter and we decompose the color target image using normalized red, green, and blue (RGB), and hue, saturation, and value (HSV) scales. We propose a new strategy to effectively utilize color information in signatures for further increasing the discrimination ability. The proposed algorithm has been found to be very efficient for discriminating face subjects with different skin colors, and those having color stains in different areas of the facial image.

  20. Image Generation Using Bidirectional Integral Features for Face Recognition with a Single Sample per Person

    PubMed Central

    Lee, Yonggeol; Lee, Minsik; Choi, Sang-Il

    2015-01-01

    In face recognition, most appearance-based methods require several images of each person to construct the feature space for recognition. However, in the real world it is difficult to collect multiple images per person, and in many cases there is only a single sample per person (SSPP). In this paper, we propose a method to generate new images with various illuminations from a single image taken under frontal illumination. Motivated by the integral image, which was developed for face detection, we extract the bidirectional integral feature (BIF) to obtain the characteristics of the illumination condition at the time of the picture being taken. The experimental results for various face databases show that the proposed method results in improved recognition performance under illumination variation. PMID:26414018
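
    A hedged sketch of integral-image-style features follows: the standard integral image, an O(1) block sum, and a simple 'bidirectional' variant built from separate row-wise and column-wise cumulative sums. Treating the two directional accumulations as the illumination descriptor is an illustrative assumption, not the paper's exact BIF definition.

```python
# Hedged sketch: integral image, O(1) block sums, and a simple bidirectional variant.
import numpy as np

def integral_image(img):
    """Cumulative sums over rows then columns: ii[r, c] = sum of img[:r+1, :c+1]."""
    return img.astype(float).cumsum(axis=0).cumsum(axis=1)

def block_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] recovered from the integral image in constant time."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def bidirectional_integral_feature(img):
    """Row-wise and column-wise accumulations concatenated (illustrative stand-in for BIF)."""
    img = img.astype(float)
    return np.concatenate([img.cumsum(axis=1).ravel(), img.cumsum(axis=0).ravel()])
```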

  1. Illumination-invariant face recognition with a contrast sensitive silicon retina

    SciTech Connect

    Buhmann, J.M.; Lades, M.; Eeckman, F.

    1993-11-29

    Changes in lighting conditions strongly affect the performance and reliability of computer vision systems. We report face recognition results under drastically changing lighting conditions for a computer vision system which concurrently uses a contrast-sensitive silicon retina and a conventional, gain-controlled CCD camera. For both input devices the face recognition system employs an elastic matching algorithm with wavelet-based features to classify unknown faces. To assess the effect of analog on-chip preprocessing by the silicon retina, the CCD images have been digitally preprocessed with a bandpass filter to adjust the power spectrum. The silicon retina, with its ability to adjust sensitivity, increases the recognition rate by up to 50 percent. These comparative experiments demonstrate that preprocessing with an analog VLSI silicon retina generates image data enriched with object-constant features.
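
    The digital band-pass preprocessing applied to the CCD images can be approximated with a difference of Gaussians; the two standard deviations below are illustrative assumptions, not the values used to match the retina chip's power spectrum.

```python
# Hedged sketch: difference-of-Gaussians band-pass preprocessing of a CCD image.
import numpy as np
from scipy.ndimage import gaussian_filter

def bandpass(img, sigma_low=1.0, sigma_high=4.0):
    """Keep mid frequencies by subtracting a heavily blurred image from a lightly blurred one."""
    img = img.astype(float)
    return gaussian_filter(img, sigma_low) - gaussian_filter(img, sigma_high)
```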

  2. Stereotype Priming in Face Recognition: Interactions between Semantic and Visual Information in Face Encoding

    ERIC Educational Resources Information Center

    Hills, Peter J.; Lewis, Michael B.; Honey, R. C.

    2008-01-01

    The accuracy with which previously unfamiliar faces are recognised is increased by the presentation of a stereotype-congruent occupation label [Klatzky, R. L., Martin, G. L., & Kane, R. A. (1982a). "Semantic interpretation effects on memory for faces." "Memory & Cognition," 10, 195-206; Klatzky, R. L., Martin, G. L., & Kane, R. A. (1982b).…

  3. Cross-age effect in recognition performance and memory monitoring for faces.

    PubMed

    Bryce, Margaret S; Dodson, Chad S

    2013-03-01

    The cross-age effect refers to the finding of better memory for own- than other-age faces. We examined 3 issues about this effect: (1) Does it extend to the ability to monitor the likely accuracy of memory judgments for young and old faces? (2) Does it apply to source information that is associated with young and old faces? And (3) what is a likely mechanism underlying the cross-age effect? In Experiment 1, young and older adults viewed young and old faces appearing in different contexts. Young adults exhibited a cross-age effect in their recognition of faces and in their memory-monitoring performance for these faces. Older adults, by contrast, showed no age-of-face effects. Experiment 2 examined whether young adults' cross-age effect depends on or is independent of encoding a mixture of young and old faces. Young adults encoded either a mixture of young and old faces, a set of all young faces, or a set of all old faces. In the mixed-list condition we replicated our finding of young adults' superior memory for own-age faces; in the pure-list conditions, however, there were absolutely no differences in performance between young and old faces. The fact that the pure-list design abolishes the cross-age effect supports social-cognitive theories of this phenomenon. PMID:23066807

  4. Face recognition via edge-based Gabor feature representation for plastic surgery-altered images

    NASA Astrophysics Data System (ADS)

    Chude-Olisah, Chollette C.; Sulong, Ghazali; Chude-Okonkwo, Uche A. K.; Hashim, Siti Z. M.

    2014-12-01

    Plastic surgery procedures on the face introduce skin texture variations between images of the same person (intra-subject), thereby making the task of face recognition more difficult than in the normal scenario. Usually, in contemporary face recognition systems, the original gray-level face image is used as input to the Gabor descriptor, which translates to encoding some texture properties of the face image. The texture-encoding process significantly degrades the performance of such systems in the case of plastic surgery due to the presence of surgically induced intra-subject variations. Based on the proposition that the shape of significant facial components such as eyes, nose, eyebrow, and mouth remains unchanged after plastic surgery, this paper employs an edge-based Gabor feature representation approach for the recognition of surgically altered face images. We use the edge information, which is dependent on the shapes of the significant facial components, to address the plastic surgery-induced texture variation problems. To ensure that the significant facial components represent useful edge information with little or no false edges, a simple illumination normalization technique is proposed for preprocessing. A Gabor wavelet is applied to the edge image to accentuate the uniqueness of the significant facial components for discriminating among different subjects. The performance of the proposed method is evaluated on the Georgia Tech (GT) and the Labeled Faces in the Wild (LFW) databases with illumination and expression problems, and the plastic surgery database with texture changes. Results show that the proposed edge-based Gabor feature representation approach is robust against plastic surgery-induced face variations amidst expression and illumination problems and outperforms the existing plastic surgery face recognition methods reported in the literature.

  5. An in-depth cognitive examination of individuals with superior face recognition skills.

    PubMed

    Bobak, Anna K; Bennetts, Rachel J; Parris, Benjamin A; Jansari, Ashok; Bate, Sarah

    2016-09-01

    Previous work has reported the existence of "super-recognisers" (SRs), or individuals with extraordinary face recognition skills. However, the precise underpinnings of this ability have not yet been investigated. In this paper we examine (a) the face-specificity of super recognition, (b) perception of facial identity in SRs, (c) whether SRs present with enhancements in holistic processing and (d) the consistency of these findings across different SRs. A detailed neuropsychological investigation into six SRs indicated domain-specificity in three participants, with some evidence of enhanced generalised visuo-cognitive or socio-emotional processes in the remaining individuals. While superior face-processing skills were restricted to face memory in three of the SRs, enhancements to facial identity perception were observed in the others. Notably, five of the six participants showed at least some evidence of enhanced holistic processing. These findings indicate cognitive heterogeneity in the presentation of superior face recognition, and have implications for our theoretical understanding of the typical face-processing system and the identification of superior face-processing skills in applied settings. PMID:27344238

  6. Development of holistic vs. featural processing in face recognition

    PubMed Central

    Nakabayashi, Kazuyo; Liu, Chang Hong

    2014-01-01

    According to a classic view developed by Carey and Diamond (1977), young children process faces in a piecemeal fashion before adult-like holistic processing starts to emerge at the age of around 10 years. This is known as the encoding switch hypothesis. Since then, a growing body of studies have challenged the theory. This article will provide a critical appraisal of this literature, followed by an analysis of some more recent developments. We will conclude, quite contrary to the classical view, that holistic processing is not only present in early child development, but could even precede the development of part-based processing. PMID:25368565

  7. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.

    PubMed

    Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina

    2015-07-01

    It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Secondly, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line

  8. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    NASA Astrophysics Data System (ADS)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems in both static and real-time settings, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses an approach which is a wavelet decomposition based principal component analysis for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. The term face recognition here stands for identifying a person from facial images; the approach resembles factor analysis in that it extracts the principal components of an image. Principal component analysis suffers from some drawbacks, mainly poor discriminatory power and the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the face jointly in the spatial and frequency domains. From the experimental results, it is envisaged that this face recognition method achieves a significant percentage improvement in recognition rate as well as better computational efficiency.
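
    A minimal sketch of the wavelet-then-PCA pipeline follows, assuming a single-level Haar DWT whose approximation sub-band feeds PCA and a nearest-neighbour classifier; the wavelet family, decomposition level, number of components, and classifier are illustrative assumptions.

```python
# Hedged sketch: single-level 2-D DWT -> PCA -> 1-NN matching.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def wavelet_features(faces):
    """faces: (n_images, H, W) gray-level array; keep only the low-frequency approximation band."""
    return np.stack([pywt.dwt2(f, "haar")[0].ravel() for f in faces])

def train(train_faces, train_labels, n_components=40):
    """Assumes at least n_components training images; returns the fitted PCA and classifier."""
    X = wavelet_features(train_faces)
    pca = PCA(n_components=n_components).fit(X)
    clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X), train_labels)
    return pca, clf

def predict(pca, clf, faces):
    return clf.predict(pca.transform(wavelet_features(faces)))
```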

  9. Effects of facial emotion recognition remediation on visual scanning of novel face stimuli.

    PubMed

    Marsh, Pamela J; Luckett, Gemma; Russell, Tamara; Coltheart, Max; Green, Melissa J

    2012-11-01

    Previous research shows that emotion recognition in schizophrenia can be improved with targeted remediation that draws attention to important facial features (eyes, nose, mouth). Moreover, the effects of training have been shown to last for up to one month after training. The aim of this study was to investigate whether improved emotion recognition of novel faces is associated with concomitant changes in visual scanning of these same novel facial expressions. Thirty-nine participants with schizophrenia received emotion recognition training using Ekman's Micro-Expression Training Tool (METT), with emotion recognition and visual scanpath (VSP) recordings to face stimuli collected simultaneously. Baseline ratings of interpersonal and cognitive functioning were also collected from all participants. Post-METT training, participants showed changes in foveal attention to the features of facial expressions of emotion not used in METT training, which were generally consistent with the information about important features from the METT. In particular, there were changes in how participants looked at the features of facial expressions of emotion surprise, disgust, fear, happiness, and neutral, demonstrating that improved emotion recognition is paralleled by changes in the way participants with schizophrenia viewed novel facial expressions of emotion. However, there were overall decreases in foveal attention to sad and neutral faces that indicate more intensive instruction might be needed for these faces during training. Most importantly, the evidence shows that participant gender may affect training outcomes. PMID:22959743

  10. Toward a unified model of face and object recognition in the human visual system

    PubMed Central

    Wallis, Guy

    2013-01-01

    Our understanding of the mechanisms and neural substrates underlying visual recognition has made considerable progress over the past 30 years. During this period, accumulating evidence has led many scientists to conclude that objects and faces are recognised in fundamentally distinct ways, and in fundamentally distinct cortical areas. In the psychological literature, in particular, this dissociation has led to a palpable disconnect between theories of how we process and represent the two classes of object. This paper follows a trend in part of the recognition literature to try to reconcile what we know about these two forms of recognition by considering the effects of learning. Taking a widely accepted, self-organizing model of object recognition, this paper explains how such a system is affected by repeated exposure to specific stimulus classes. In so doing, it explains how many aspects of recognition generally regarded as unusual to faces (holistic processing, configural processing, sensitivity to inversion, the other-race effect, the prototype effect, etc.) are emergent properties of category-specific learning within such a system. Overall, the paper describes how a single model of recognition learning can and does produce the seemingly very different types of representation associated with faces and objects. PMID:23966963

  11. Oxytocin eliminates the own-race bias in face recognition memory.

    PubMed

    Blandón-Gitlin, Iris; Pezdek, Kathy; Saldivar, Sesar; Steelman, Erin

    2014-09-11

    The neuropeptide Oxytocin influences a number of social behaviors, including the processing of faces. We examined whether Oxytocin facilitates the processing of out-group faces and reduces the own-race bias (ORB). The ORB is a robust phenomenon characterized by poor recognition memory of other-race faces compared to same-race faces. In Experiment 1, participants received intranasal solutions of Oxytocin or placebo prior to viewing White and Black faces. On a subsequent recognition test, whereas in the placebo condition same-race faces were better recognized than other-race faces, in the Oxytocin condition Black and White faces were equally well recognized, effectively eliminating the ORB. In Experiment 2, Oxytocin was administered after the study phase. The ORB resulted, but Oxytocin did not significantly reduce the effect. This study is the first to show that Oxytocin can enhance face memory of out-group members and underscores the importance of social encoding mechanisms underlying the own-race bias. This article is part of a Special Issue entitled Oxytocin and Social Behav. PMID:23872107

  12. Recognition memory for distractor faces depends on attentional load at exposure.

    PubMed

    Jenkins, Rob; Lavie, Nilli; Driver, Jon

    2005-04-01

    Incidental recognition memory for faces previously exposed as task-irrelevant distractors was assessed as a function of the attentional load of an unrelated task performed on superimposed letter strings at exposure. In Experiment 1, subjects were told to ignore the faces and either to judge the color of the letters (low load) or to search for an angular target letter among other angular letters (high load). A surprise recognition memory test revealed that despite the irrelevance of all faces at exposure, those exposed under low-load conditions were later recognized, but those exposed under high-load conditions were not. Experiment 2 found a similar pattern when both the high- and low-load tasks required shape judgments for the letters but made differing attentional demands. Finally, Experiment 3 showed that high load in a nonface task can significantly reduce even immediate recognition of a fixated face from the preceding trial. These results demonstrate that load in a nonface domain (e.g., letter shape) can reduce face recognition, in accord with Lavie's load theory. In addition to their theoretical impact, these results may have practical implications for eyewitness testimony. PMID:16082812

  13. Is that me or my twin? Lack of self-face recognition advantage in identical twins.

    PubMed

    Martini, Matteo; Bufalari, Ilaria; Stazi, Maria Antonietta; Aglioti, Salvatore Maria

    2015-01-01

    Despite the increasing interest in twin studies and the stunning amount of research on face recognition, the ability of adult identical twins to discriminate their own faces from those of their co-twins has been scarcely investigated. One's own face is the most distinctive feature of the bodily self, and people typically show a clear advantage in recognizing their own face even more than other very familiar identities. Given the very high level of resemblance of their faces, monozygotic twins represent a unique model for exploring self-face processing. Herein we examined the ability of monozygotic twins to distinguish their own face from the face of their co-twin and of a highly familiar individual. Results show that twins equally recognize their own face and their twin's face. This lack of self-face advantage was negatively predicted by how much they felt physically similar to their co-twin and by their anxious or avoidant attachment style. We speculate that in monozygotic twins, the visual representation of the self-face overlaps with that of the co-twin. Thus, to distinguish the self from the co-twin, monozygotic twins have to rely much more than control participants on the multisensory integration processes upon which the sense of bodily self is based. Moreover, in keeping with the notion that attachment style influences perception of self and significant others, we propose that the observed self/co-twin confusion may depend upon insecure attachment. PMID:25853249

  14. Is That Me or My Twin? Lack of Self-Face Recognition Advantage in Identical Twins

    PubMed Central

    Martini, Matteo; Bufalari, Ilaria; Stazi, Maria Antonietta; Aglioti, Salvatore Maria

    2015-01-01

    Despite the increasing interest in twin studies and the stunning amount of research on face recognition, the ability of adult identical twins to discriminate their own faces from those of their co-twins has been scarcely investigated. One’s own face is the most distinctive feature of the bodily self, and people typically show a clear advantage in recognizing their own face even more than other very familiar identities. Given the very high level of resemblance of their faces, monozygotic twins represent a unique model for exploring self-face processing. Herein we examined the ability of monozygotic twins to distinguish their own face from the face of their co-twin and of a highly familiar individual. Results show that twins equally recognize their own face and their twin’s face. This lack of self-face advantage was negatively predicted by how much they felt physically similar to their co-twin and by their anxious or avoidant attachment style. We speculate that in monozygotic twins, the visual representation of the self-face overlaps with that of the co-twin. Thus, to distinguish the self from the co-twin, monozygotic twins have to rely much more than control participants on the multisensory integration processes upon which the sense of bodily self is based. Moreover, in keeping with the notion that attachment style influences perception of self and significant others, we propose that the observed self/co-twin confusion may depend upon insecure attachment. PMID:25853249

  15. A Kernel Gabor-Based Weighted Region Covariance Matrix for Face Recognition

    PubMed Central

    Qin, Huafeng; Qin, Lan; Xue, Lian; Li, Yantao

    2012-01-01

    This paper proposes a novel image region descriptor for face recognition, named the kernel Gabor-based weighted region covariance matrix (KGWRCM). As different facial parts differ in how effective they are for characterizing and recognizing faces, we construct a weighting matrix by computing the similarity of each pixel within a face sample to emphasize informative features. We then incorporate the weighting matrices into a region covariance matrix, named the weighted region covariance matrix (WRCM), to obtain the discriminative features of faces for recognition. Finally, to further preserve discriminative features in a higher-dimensional space, we develop the kernel Gabor-based weighted region covariance matrix (KGWRCM). Experimental results show that the KGWRCM outperforms other algorithms, including the kernel Gabor-based region covariance matrix (KGCRM). PMID:22969351
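
    A hedged sketch of a weighted region covariance descriptor in the spirit of WRCM follows: each pixel contributes a small feature vector and the region is summarised by the weighted covariance of those vectors. The per-pixel features (position, intensity, gradients) and the uniform default weights are illustrative assumptions; the paper instead derives the weights from pixel-wise similarity and works on Gabor responses, with a kernel extension on top.

```python
# Hedged sketch: weighted covariance of per-pixel feature vectors over a face region.
import numpy as np

def weighted_region_covariance(region, weights=None):
    """region: (H, W) gray-level block; weights: optional (H, W) non-negative per-pixel weights."""
    region = region.astype(float)
    H, W = region.shape
    gy, gx = np.gradient(region)                      # vertical and horizontal gradients
    ys, xs = np.mgrid[0:H, 0:W]
    F = np.stack([xs, ys, region, gx, gy], axis=-1).reshape(-1, 5)   # per-pixel feature vectors
    w = np.ones(F.shape[0]) if weights is None else weights.astype(float).ravel()
    w = w / w.sum()
    mu = w @ F                                        # weighted mean feature vector
    diff = F - mu
    return (diff * w[:, None]).T @ diff               # (5, 5) weighted covariance descriptor
```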

  16. Differential outcomes training improves face recognition memory in children and in adults with Down syndrome.

    PubMed

    Esteban, Laura; Plaza, Victoria; López-Crespo, Ginesa; Vivas, Ana B; Estévez, Angeles F

    2014-06-01

    Previous studies have demonstrated that the differential outcomes procedure (DOP), which involves pairing a unique reward with a specific stimulus, enhances discriminative learning and memory performance in several populations. The present study aimed to further investigate whether this procedure would improve face recognition memory in 5- and 7-year-old children (Experiment 1) and adults with Down syndrome (Experiment 2). In a delayed matching-to-sample task, participants had to select the previously shown face (sample stimulus) among six alternative faces (comparison stimuli) after four different delays (1, 5, 10, or 15 s). Participants were tested in two conditions: differential outcomes, where each sample stimulus was paired with a specific outcome; and non-differential outcomes, where reinforcers were administered randomly. The results showed significantly better face recognition in the differential outcomes condition relative to the non-differential condition in both experiments. Implications for memory training programs and future research are discussed. PMID:24713518

  17. A kernel Gabor-based weighted region covariance matrix for face recognition.

    PubMed

    Qin, Huafeng; Qin, Lan; Xue, Lian; Li, Yantao

    2012-01-01

    This paper proposes a novel image region descriptor for face recognition, named the kernel Gabor-based weighted region covariance matrix (KGWRCM). As different facial parts differ in how effective they are for characterizing and recognizing faces, we construct a weighting matrix by computing the similarity of each pixel within a face sample to emphasize informative features. We then incorporate the weighting matrices into a region covariance matrix, named the weighted region covariance matrix (WRCM), to obtain the discriminative features of faces for recognition. Finally, to further preserve discriminative features in a higher-dimensional space, we develop the kernel Gabor-based weighted region covariance matrix (KGWRCM). Experimental results show that the KGWRCM outperforms other algorithms, including the kernel Gabor-based region covariance matrix (KGCRM). PMID:22969351

  18. Uncorrelated regularized local Fisher discriminant analysis for face recognition

    NASA Astrophysics Data System (ADS)

    Wang, Zhan; Ruan, Qiuqi; An, Gaoyun

    2014-07-01

    A local Fisher discriminant analysis can work well for a multimodal problem. However, it often suffers from the undersampled problem, which makes the local within-class scatter matrix singular. We develop a supervised discriminant analysis technique called uncorrelated regularized local Fisher discriminant analysis for image feature extraction. In this technique, the local within-class scatter matrix is approximated by a full-rank matrix that not only solves the undersampled problem but also eliminates the poor impact of small and zero eigenvalues. Statistically uncorrelated features are obtained to remove redundancy. A trace ratio criterion and the corresponding iterative algorithm are employed to globally solve the objective function. Experimental results on four famous face databases indicate that our proposed method is effective and outperforms the conventional dimensionality reduction methods.

  19. Contribution of Bodily and Gravitational Orientation Cues to Face and Letter Recognition.

    PubMed

    Barnett-Cowan, Michael; Snow, Jacqueline C; Culham, Jody C

    2015-01-01

    Sensory information provided by the vestibular system is crucial in cognitive processes such as the ability to recognize objects. The orientation at which objects are most easily recognized--the perceptual upright (PU)--is influenced by body orientation with respect to gravity as detected from the somatosensory and vestibular systems. To date, the influence of these sensory cues on the PU has been measured using a letter recognition task. Here we assessed whether gravitational influences on letter recognition also extend to human face recognition. 13 right-handed observers were positioned in four body orientations (upright, left-side-down, right-side-down, supine) and visually discriminated ambiguous characters ('p'-from-'d'; 'i'-from-'!') and ambiguous faces used in popular visual illusions ('young woman'-from-'old woman'; 'grinning man'-from-'frowning man') in a forced-choice paradigm. The two transition points (e.g., 'p-to-d' and 'd-to-p'; 'young woman-to-old woman' and 'old woman-to-young woman') were fit with a sigmoidal psychometric function and the average of these transitions was taken as the PU for each stimulus category. The results show that both faces and letters are more influenced by body orientation than gravity. However, faces are more optimally recognized when closer in alignment with body orientation than letters--which are more influenced by gravity. Our results indicate that the brain does not utilize a common representation of upright that governs recognition of all object categories. Distinct areas of ventro-temporal cortex that represent faces and letters may weight bodily and gravitational cues differently--possibly to facilitate the specific demands of face and letter recognition. PMID:26595950

  20. Sparsity preserving discriminative learning with applications to face recognition

    NASA Astrophysics Data System (ADS)

    Ren, Yingchun; Wang, Zhicheng; Chen, Yufei; Shan, Xiaoying; Zhao, Weidong

    2016-01-01

    The extraction of effective features is extremely important for understanding the intrinsic structure hidden in high-dimensional data. In recent years, sparse representation models have been widely used in feature extraction. A supervised learning method, called sparsity preserving discriminative learning (SPDL), is proposed. SPDL, which attempts to preserve the sparse representation structure of the data and simultaneously maximize the between-class separability, can be regarded as a combiner of manifold learning and sparse representation. More specifically, SPDL first creates a concatenated dictionary by class-wise principal component analysis decompositions and learns the sparse representation structure of each sample under the constructed dictionary using the least squares method. Second, a local between-class separability function is defined to characterize the scatter of the samples in the different submanifolds. Then, SPDL integrates the learned sparse representation information with the local between-class relationship to construct a discriminant function. Finally, the proposed method is transformed into a generalized eigenvalue problem. Extensive experimental results on several popular face databases demonstrate the effectiveness of the proposed approach.
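
    A hedged, partial sketch of the first SPDL stage only: a dictionary is built by concatenating class-wise PCA bases, and each sample is coded against it by least squares. The number of atoms per class is an illustrative assumption, multiple images per class are assumed, and the local between-class separability term and the final generalized eigenvalue problem are not shown.

```python
# Hedged partial sketch: class-wise PCA dictionary plus least-squares coding (first SPDL stage).
import numpy as np
from sklearn.decomposition import PCA

def classwise_pca_dictionary(X, y, atoms_per_class=5):
    """X: (n_samples, n_features), y: class labels; returns D of shape (n_features, n_atoms)."""
    bases = []
    for c in np.unique(y):
        Xc = X[y == c]
        n_atoms = min(atoms_per_class, max(1, len(Xc) - 1))
        bases.append(PCA(n_components=n_atoms).fit(Xc).components_.T)
    return np.hstack(bases)

def least_squares_codes(X, D):
    """Solve min_s ||x - D s||^2 for every row x of X; returns codes of shape (n_samples, n_atoms)."""
    return np.linalg.lstsq(D, X.T, rcond=None)[0].T
```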

  1. 3D face recognition under expressions, occlusions, and pose variations.

    PubMed

    Drira, Hassen; Ben Amor, Boulbaba; Srivastava, Anuj; Daoudi, Mohamed; Slama, Rim

    2013-09-01

    We propose a novel geometric framework for analyzing 3D faces, with the specific goals of comparing, matching, and averaging their shapes. Here we represent facial surfaces by radial curves emanating from the nose tips and use elastic shape analysis of these curves to develop a Riemannian framework for analyzing shapes of full facial surfaces. This representation, along with the elastic Riemannian metric, seems natural for measuring facial deformations and is robust to challenges such as large facial expressions (especially those with open mouths), large pose variations, missing parts, and partial occlusions due to glasses, hair, and so on. This framework is shown to be promising from both--empirical and theoretical--perspectives. In terms of the empirical evaluation, our results match or improve upon the state-of-the-art methods on three prominent databases: FRGCv2, GavabDB, and Bosphorus, each posing a different type of challenge. From a theoretical perspective, this framework allows for formal statistical inferences, such as the estimation of missing facial parts using PCA on tangent spaces and computing average shapes. PMID:23868784

  2. Emotional Faces in Context: Age Differences in Recognition Accuracy and Scanning Patterns

    PubMed Central

    Noh, Soo Rim; Isaacowitz, Derek M.

    2014-01-01

    While age-related declines in facial expression recognition are well documented, previous research relied mostly on isolated faces devoid of context. We investigated the effects of context on age differences in recognition of facial emotions and in visual scanning patterns of emotional faces. While their eye movements were monitored, younger and older participants viewed facial expressions (i.e., anger, disgust) in contexts that were emotionally congruent, incongruent, or neutral to the facial expression to be identified. Both age groups had highest recognition rates of facial expressions in the congruent context, followed by the neutral context, and recognition rates in the incongruent context were worst. These context effects were more pronounced for older adults. Compared to younger adults, older adults exhibited a greater benefit from congruent contextual information, regardless of facial expression. Context also influenced the pattern of visual scanning characteristics of emotional faces in a similar manner across age groups. In addition, older adults initially attended more to context overall. Our data highlight the importance of considering the role of context in understanding emotion recognition in adulthood. PMID:23163713

  3. Emotional face recognition deficit in amnestic patients with mild cognitive impairment: behavioral and electrophysiological evidence

    PubMed Central

    Yang, Linlin; Zhao, Xiaochuan; Wang, Lan; Yu, Lulu; Song, Mei; Wang, Xueyi

    2015-01-01

    Amnestic mild cognitive impairment (MCI) has been conceptualized as a transitional stage between healthy aging and Alzheimer’s disease. Thus, understanding emotional face recognition deficits in patients with amnestic MCI could be useful in determining the progression of amnestic MCI. The purpose of this study was to investigate the features of emotional face processing in amnestic MCI by using event-related potentials (ERPs). Patients with amnestic MCI and healthy controls performed a face recognition task, giving old/new responses to previously studied and novel faces with different emotional messages as the stimulus material. Using the learning-recognition paradigm, the experiments were divided into two steps, i.e., a learning phase and a test phase. ERPs were analyzed on electroencephalographic recordings. The behavioral data indicated high emotion classification accuracy for patients with amnestic MCI and for healthy controls. The mean percentage of correct classifications was 81.19% for patients with amnestic MCI and 96.46% for controls. Our ERP data suggest that patients with amnestic MCI were still able to undertake personalized processing of negative faces, but not of neutral or positive faces, in the early frontal processing stage. In the early time window, no differences in the frontal old/new effect were found between patients with amnestic MCI and normal controls. However, in the late time window, the three types of stimuli did not elicit any old/new parietal effects in patients with amnestic MCI, suggesting that their recollection was impaired. This impairment may be closely associated with amnestic MCI disease. We conclude from our data that face recognition processing and emotional memory are impaired in patients with amnestic MCI. Such damage mainly occurred in the early coding stages. In addition, we found that patients with amnestic MCI had difficulty in the post-processing of positive and neutral facial emotions. PMID:26347065

  4. A Smile Enhances 3-Month-Olds' Recognition of an Individual Face

    ERIC Educational Resources Information Center

    Turati, Chiara; Montirosso, Rosario; Brenna, Viola; Ferrara, Veronica; Borgatti, Renato

    2011-01-01

    Recent studies demonstrated that in adults and children recognition of face identity and facial expression mutually interact (Bate, Haslam, & Hodgson, 2009; Spangler, Schwarzer, Korell, & Maier-Karius, 2010). Here, using a familiarization paradigm, we explored the relation between these processes in early infancy, investigating whether 3-month-old…

  5. Cultural In-Group Advantage: Emotion Recognition in African American and European American Faces and Voices

    ERIC Educational Resources Information Center

    Wickline, Virginia B.; Bailey, Wendy; Nowicki, Stephen

    2009-01-01

    The authors explored whether there were in-group advantages in emotion recognition of faces and voices by culture or geographic region. Participants were 72 African American students (33 men, 39 women), 102 European American students (30 men, 72 women), 30 African international students (16 men, 14 women), and 30 European international students…

  6. A Normed Study of Face Recognition in Autism and Related Disorders.

    ERIC Educational Resources Information Center

    Klin, Ami; Sparrow, Sara S.; de Bildt, Annelies; Cicchetti, Domenic V.; Cohen, Donald J.; Volkmar, Fred R.

    1999-01-01

    This study used a well-normed task of face recognition with 102 young children with autism, pervasive developmental disorder (PDD) not otherwise specified, and non-PDD disorders (mental retardation and language disorders) matched for chronological age and either verbal or nonverbal mental age. Autistic subjects exhibited pronounced deficits in…

  7. Kernel-based discriminant image filter learning: application in face recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Lingchen; Wei, Sui; Qu, Lei

    2014-11-01

    The extraction of discriminative and robust features is a crucial issue in pattern recognition and classification. In this paper, we propose a kernel-based discriminant image filter learning method (KDIFL) for local feature enhancement and demonstrate its superiority in the application of face recognition. Instead of designing the image filter in a handcrafted or analytical way, we propose to learn the image filter so that after filtering the within-class difference is attenuated and the between-class difference is amplified, thus facilitating the subsequent recognition. During filter learning, the kernel trick is employed to cope with the nonlinear feature space problem caused by expression, pose, illumination, and so on. We show that the proposed filter is general and can be concatenated with classic feature descriptors (e.g. LBP) to further increase the discriminability of the extracted features. Our extensive experiments on the Yale, ORL and AR face databases validate the effectiveness and robustness of the proposed method.

  8. Study on local Gabor binary patterns for face representation and recognition

    NASA Astrophysics Data System (ADS)

    Ge, Wei; Han, Chunling; Quan, Wei

    2015-12-01

    More recently, Local Binary Patterns (LBP) have received much attention in face representation and recognition. The original LBP operator describes spatial structure information, essentially the varying edge and angle features of local facial regions, which are important factors for classifying different faces. However, the scale and orientation of these edge features carry additional detail that could be used to classify different persons more efficiently, and the original LBP operator cannot extract this information. In this paper, building on the original LBP-based facial representation and recognition, histogram sequences of local Gabor binary patterns are used to represent the facial image. The Principal Component Analysis (PCA) method is used to classify the histogram sequences, which have been converted to vectors. Recognition experiments show that the method used in this paper improves the recognition rate by nearly 6% over the classification performance of the original LBP operator.
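
    A minimal sketch of a local Gabor binary pattern representation of the kind discussed above: Gabor magnitude responses are encoded with LBP, histogrammed, and concatenated, and PCA over the resulting vectors of a training set would then give the final low-dimensional representation. The filter-bank frequencies and orientations and the histogram settings are illustrative assumptions.

```python
# Hedged sketch: Gabor magnitude responses -> LBP codes -> concatenated histograms.
import numpy as np
from skimage.filters import gabor
from skimage.feature import local_binary_pattern

def lgbp_feature(face, frequencies=(0.1, 0.2), n_orientations=4):
    """face: (H, W) gray-level image; returns one concatenated histogram vector."""
    face = face.astype(float)
    hists = []
    for f in frequencies:
        for k in range(n_orientations):
            real, imag = gabor(face, frequency=f, theta=k * np.pi / n_orientations)
            mag = np.hypot(real, imag)                         # Gabor magnitude response
            codes = local_binary_pattern(mag, P=8, R=1, method="uniform")
            h, _ = np.histogram(codes, bins=10, range=(0, 10), density=True)
            hists.append(h)
    return np.concatenate(hists)
```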

  9. Always on My Mind? Recognition of Attractive Faces May Not Depend on Attention

    PubMed Central

    Silva, André; Macedo, António F.; Albuquerque, Pedro B.; Arantes, Joana

    2016-01-01

    Little research has examined what happens to attention and memory as a whole when humans see someone attractive. Hence, we investigated whether attractive stimuli gather more attention and are better remembered than unattractive stimuli. Participants took part in an attention task – in which matrices containing attractive and unattractive male naturalistic photographs were presented to 54 females, and measures of eye-gaze location and fixation duration using an eye-tracker were taken – followed by a recognition task. Eye-gaze was higher for the attractive stimuli compared to unattractive stimuli. Also, attractive photographs produced more hits and false recognitions than unattractive photographs which may indicate that regardless of attention allocation, attractive photographs produce more correct but also more false recognitions. We present an evolutionary explanation for this, as attending to more attractive faces but not always remembering them accurately and differentially compared with unseen attractive faces, may help females secure mates with higher reproductive value. PMID:26858683

  10. PEM-PCA: A Parallel Expectation-Maximization PCA Face Recognition Architecture

    PubMed Central

    Rujirakul, Kanokmon; Arnonkijpanich, Banchar

    2014-01-01

    Principal component analysis (PCA) has traditionally been used as one of the feature extraction techniques in face recognition systems, yielding high accuracy while requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, thereby reducing the complexity of these stages. To improve the computational time, a novel parallel architecture was employed to exploit the parallelization of matrix computation during the feature extraction and classification stages, including parallel preprocessing and their combinations, in a so-called Parallel Expectation-Maximization PCA architecture. Compared to traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems with speed-ups of over nine and three times relative to PCA and parallel PCA, respectively. PMID:24955405
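
    The EM estimation of principal components can be sketched compactly; the following is a minimal version in the style of Roweis' EM algorithm for PCA, which avoids forming the covariance matrix and its eigendecomposition. The parallel preprocessing and classification stages are not shown, and the component count and iteration budget are illustrative assumptions.

```python
# Hedged sketch: expectation-maximization iterations for the principal subspace (no covariance
# matrix or eigendecomposition is formed explicitly).
import numpy as np

def em_pca(Y, n_components, n_iter=50, seed=0):
    """Y: centred data matrix of shape (n_features, n_samples); returns an orthonormal basis."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((Y.shape[0], n_components))
    for _ in range(n_iter):
        X = np.linalg.solve(W.T @ W, W.T @ Y)       # E-step: latent coordinates of all samples
        W = Y @ X.T @ np.linalg.inv(X @ X.T)        # M-step: re-estimate the basis
    return np.linalg.qr(W)[0]                        # orthonormalise; spans the leading subspace
```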

  11. Multi-Directional Multi-Level Dual-Cross Patterns for Robust Face Recognition.

    PubMed

    Ding, Changxing; Choi, Jonghyun; Tao, Dacheng; Davis, Larry S

    2016-03-01

    To perform unconstrained face recognition robust to variations in illumination, pose and expression, this paper presents a new scheme to extract "Multi-Directional Multi-Level Dual-Cross Patterns" (MDML-DCPs) from face images. Specifically, the MDML-DCPs scheme exploits the first derivative of Gaussian operator to reduce the impact of differences in illumination and then computes the DCP feature at both the holistic and component levels. DCP is a novel face image descriptor inspired by the unique textural structure of human faces. It is computationally efficient and only doubles the cost of computing local binary patterns, yet is extremely robust to pose and expression variations. MDML-DCPs comprehensively yet efficiently encodes the invariant characteristics of a face image from multiple levels into patterns that are highly discriminative of inter-personal differences but robust to intra-personal variations. Experimental results on the FERET, CAS-PEAL-R1, FRGC 2.0, and LFW databases indicate that DCP outperforms the state-of-the-art local descriptors (e.g., LBP, LTP, LPQ, POEM, tLBP, and LGXP) for both face identification and face verification tasks. More impressively, the best performance is achieved on the challenging LFW and FRGC 2.0 databases by deploying MDML-DCPs in a simple recognition scheme. PMID:27046495

  12. Activation reduction in anterior temporal cortices during repeated recognition of faces of personal acquaintances.

    PubMed

    Sugiura, M; Kawashima, R; Nakamura, K; Sato, N; Nakamura, A; Kato, T; Hatano, K; Schormann, T; Zilles, K; Sato, K; Ito, K; Fukuda, H

    2001-05-01

    Repeated recognition of the face of a familiar individual is known to show semantic repetition priming effect. In this study, normal subjects were repeatedly presented faces of their colleagues, and the effect of repetition on the regional cerebral blood flow change was measured using positron emission tomography. They repeated a set of three tasks: the familiar-face detection (F) task, the facial direction discrimination (D) task, and the perceptual control (C) task. During five repetitions of the F task, familiar faces were presented six times from different views in a pseudorandom order. Activation reduction through the repetition of the F tasks was observed in the bilateral anterior (anterolateral to the polar region) temporal cortices which are suggested to be involved in the access to the long-term memory concerning people. The bilateral amygdala, the hypothalamus, and the medial frontal cortices, were constantly activated during the F tasks, and considered to be associated with the behavioral significance of the presented familiar faces. Constant activation was also observed in the bilateral occipitotemporal regions and fusiform gyri and the right medial temporal regions during perception of the faces, and in the left medial temporal regions during the facial familiarity detection task, which are consistent with the results of previous functional brain imaging studies. The results have provided further information about the functional segregation of the anterior temporal regions in face recognition and long-term memory. PMID:11304083

  13. An ERP investigation of the co-development of hemispheric lateralization of face and word recognition

    PubMed Central

    Dundas, Eva M.; Plaut, David C.; Behrmann, Marlene

    2014-01-01

    The adult human brain would appear to have specialized and independent neural systems for the visual processing of words and faces. Extensive evidence has demonstrated greater selectivity for written words in the left over right hemisphere, and, conversely, greater selectivity for faces in the right over left hemisphere. This study examines the emergence of these complementary neural profiles, as well as the possible relationship between them. Using behavioral and neurophysiological measures, in adults, we observed the standard finding of greater accuracy and a larger N170 ERP component in the left over right hemisphere for words, and conversely, greater accuracy and a larger N170 in the right over the left hemisphere for faces. We also found that, although children aged 7-12 years revealed the adult hemispheric pattern for words, they showed neither a behavioral nor a neural hemispheric superiority for faces. Of particular interest, the magnitude of their N170 for faces in the right hemisphere was related to that of the N170 for words in their left hemisphere. These findings suggest that the hemispheric organization of face recognition and of word recognition do not develop independently, and that word lateralization may precede and drive later face lateralization. A theoretical account for the findings, in which competition for visual representations unfolds over the course of development, is discussed. PMID:24933662

  14. Recognition memory for emotional and neutral faces: an event-related potential study.

    PubMed

    Johansson, Mikael; Mecklinger, Axel; Treese, Anne-Cécile

    2004-12-01

    This study examined emotional influences on the hypothesized event-related potential (ERP) correlates of familiarity and recollection (Experiment 1) and the states of awareness (Experiment 2) accompanying recognition memory for faces differing in facial affect. Participants made gender judgments to positive, negative, and neutral faces at study and were in the test phase instructed to discriminate between studied and nonstudied faces. Whereas old-new discrimination was unaffected by facial expression, negative faces were recollected to a greater extent than both positive and neutral faces as reflected in the parietal ERP old-new effect and in the proportion of remember judgments. Moreover, emotion-specific modulations were observed in frontally recorded ERPs elicited by correctly rejected new faces that concurred with a more liberal response criterion for emotional as compared to neutral faces. Taken together, the results are consistent with the view that processes promoting recollection are facilitated for negative events and that emotion may affect recognition performance by influencing criterion setting mediated by the prefrontal cortex. PMID:15701233

  15. Gaze direction influences awareness in recognition memory for faces after intentional learning.

    PubMed

    Daury, Noémy

    2009-08-01

    Previous research has shown that direct gaze elicits more hits than deviated gaze in face recognition tasks. The aim of the present study was to evaluate whether the state of awareness that accompanied recognition was different for faces with eye gaze directed toward the observer as compared with faces looking elsewhere. This state of awareness was assessed using the "Remember-Know-Guess" paradigm. Three different experiments were conducted including, respectively, 24 (12 women, 12 men), 24 (12 women, 12 men), and 28 (15 women, 13 men) volunteer participants ages 18 to 31 (M1 = 20.8, SD1 = 2.8; M2 = 20.7, SD2 = 2.4; M3 = 21.5, SD3 = 3.6). Experiments comprised two incidental learning experiments using, respectively, frontal views and profile views of faces at encoding, and one intentional learning experiment using profile views of faces at encoding. Surprisingly, the effect of direct gaze observed in previous studies was not replicated. The rates of Hits were not significantly higher for faces showing direct gaze than for faces with deviated gaze across the three experiments. However, in the intentional learning experiment, rates of Remember responses were significantly higher in the direct gaze than in the deviated gaze condition. PMID:19831103

  16. A prescreener for 3D face recognition using radial symmetry and the Hausdorff fraction.

    SciTech Connect

    Koudelka, Melissa L.; Koch, Mark William; Russ, Trina Denise

    2005-04-01

    Face recognition systems require the ability to efficiently scan an existing database of faces to locate a match for a newly acquired face. The large number of faces in real world databases makes computationally intensive algorithms impractical for scanning entire databases. We propose the use of more efficient algorithms to 'prescreen' face databases, determining a limited set of likely matches that can be processed further to identify a match. We use both radial symmetry and shape to extract five features of interest on 3D range images of faces. These facial features determine a very small subset of discriminating points which serve as input to a prescreening algorithm based on a Hausdorff fraction. We show how to compute the Hausdorff fraction in linear O(n) time using a range image representation. Our feature extraction and prescreening algorithms are verified using the FRGC v1.0 3D face scan data. Results show 97% of the extracted facial features are within 10 mm or less of manually marked ground truth, and the prescreener has a rank 6 recognition rate of 100%.
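
    The record above gives no implementation details; the fragment below is a minimal sketch of how a Hausdorff-fraction prescreening score can be computed in linear time from sparse feature points by precomputing a distance transform over the gallery points. The function name, grid size, and the 10-unit tolerance are illustrative assumptions, not values taken from the paper.

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def hausdorff_fraction(probe_pts, gallery_pts, grid_shape, tol=10.0):
        """Fraction of probe points within `tol` (grid units) of any gallery
        point; an O(n) distance-transform precomputation makes each probe
        lookup O(1)."""
        # Rasterize gallery points onto a binary grid (True = background).
        mask = np.ones(grid_shape, dtype=bool)
        gy, gx = np.asarray(gallery_pts).T.astype(int)
        mask[gy, gx] = False
        # Distance from every grid cell to the nearest gallery point.
        dist = distance_transform_edt(mask)
        py, px = np.asarray(probe_pts).T.astype(int)
        return np.mean(dist[py, px] <= tol)

    # Toy example: five probe features scored against five gallery features.
    rng = np.random.default_rng(0)
    gallery = rng.integers(0, 100, size=(5, 2))
    probe = np.clip(gallery + rng.integers(-3, 4, size=(5, 2)), 0, 99)
    print(hausdorff_fraction(probe, gallery, grid_shape=(100, 100), tol=10.0))
    ```

    In a prescreening pass this fraction would be computed against every gallery subject and only the top-ranked candidates forwarded to the full matcher.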

  17. The time course of individual face recognition: A pattern analysis of ERP signals.

    PubMed

    Nemrodov, Dan; Niemeier, Matthias; Mok, Jenkin Ngo Yin; Nestor, Adrian

    2016-05-15

    An extensive body of work documents the time course of neural face processing in the human visual cortex. However, the majority of this work has focused on specific temporal landmarks, such as N170 and N250 components, derived through univariate analyses of EEG data. Here, we take on a broader evaluation of ERP signals related to individual face recognition as we attempt to move beyond the leading theoretical and methodological framework through the application of pattern analysis to ERP data. Specifically, we investigate the spatiotemporal profile of identity recognition across variation in emotional expression. To this end, we apply pattern classification to ERP signals both in time, for any single electrode, and in space, across multiple electrodes. Our results confirm the significance of traditional ERP components in face processing. At the same time though, they support the idea that the temporal profile of face recognition is incompletely described by such components. First, we show that signals associated with different facial identities can be discriminated from each other outside the scope of these components, as early as 70 ms following stimulus presentation. Next, electrodes associated with traditional ERP components as well as, critically, those not associated with such components are shown to contribute information to stimulus discriminability. And last, the levels of ERP-based pattern discrimination are found to correlate with recognition accuracy across subjects, confirming the relevance of these methods for bridging brain and behavior data. Altogether, the current results shed new light on the fine-grained time course of neural face processing and showcase the value of novel pattern analysis methods for investigating fundamental aspects of visual recognition. PMID:26973169
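
    As a rough illustration of the time-resolved pattern-analysis idea described above (not the authors' pipeline), the sketch below trains a linear classifier at each sampled time point of a hypothetical ERP array of shape (trials, electrodes, times) and records cross-validated discrimination between two facial identities. Array names, dimensions, and the classifier choice are assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_trials, n_electrodes, n_times = 200, 64, 300   # hypothetical dimensions
    X = rng.standard_normal((n_trials, n_electrodes, n_times))
    y = rng.integers(0, 2, size=n_trials)            # two facial identities

    # Time-resolved decoding: one classifier per time point across electrodes.
    scores = [
        cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
        for t in range(0, n_times, 10)               # subsample time for speed
    ]
    print(max(scores))   # peak decoding accuracy over the sampled time points
    ```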

  18. Color correction using color-flow eigenspace model in color face recognition

    NASA Astrophysics Data System (ADS)

    Choi, JaeYoung; Ro, Yong Man

    2009-02-01

    We propose a new color correction approach which, as opposed to existing methods, takes advantage of a given pair of color face images (probe and gallery) in the color face recognition (FR) framework. In the proposed color correction method, the color-flow vector and color-flow eigenspace model are developed to generate color-corrected probe images. The main contribution of this paper is threefold: 1) the proposed method can reliably compensate for the non-linear photic variations imposed on probe face images compared to traditional color correction techniques; 2) to the best of our knowledge, for the first time, we conduct extensive experimental studies to compare the effectiveness of various color correction methods in dealing with photometric distortions in probe images; 3) the proposed method can significantly enhance the recognition performance degraded by probe face images with severe illumination variations. Two standard face databases, CMU PIE and XM2VTSDB, were used to demonstrate the effectiveness of the proposed color correction method. The usefulness of the proposed method in color FR is shown in terms of both absolute and comparative recognition performance against four traditional color correction solutions: White balance, Gray-world, Retinex, and Color-by-correlation.

  19. Recognition of novel faces after single exposure is enhanced during pregnancy.

    PubMed

    Anderson, Marla V; Rutherford, M D

    2011-01-01

    Protective mechanisms in pregnancy include Nausea and Vomiting in Pregnancy (NVP) (Fessler, 2002; Flaxman and Sherman, 2000), increased sensitivity to health cues (Jones et al., 2005), and increased vigilance to out-group members (Navarette, Fessler, and Eng, 2007). While common perception suggests that pregnancy results in decreased cognitive function, an adaptationist perspective might predict that some aspects of cognition would be enhanced during pregnancy if they help to protect the reproductive investment. We propose that a reallocation of cognitive resources from nonessential to critical areas engenders the cognitive decline observed in some studies. Here, we used a recognition task disguised as a health rating to determine whether pregnancy facilitates face recognition. We found that pregnant women were significantly better at recognizing faces and that this effect was particularly pronounced for own-race male faces. In human evolutionary history, and today, males present a significant threat to females. Thus, enhanced recognition of faces, and especially male faces, during pregnancy may serve a protective function. PMID:22947954

  20. Participant sexual orientation matters: new evidence on the gender bias in face recognition.

    PubMed

    Steffens, Melanie C; Landmann, Sören; Mecklenbräuker, Silvia

    2013-01-01

    Research participants' sexual orientation is not consistently taken into account in experimental psychological research. We argue that it should be in any research related to participant or target gender. Corroborating this argument, an example study is presented on the gender bias in face recognition, the finding that women correctly recognize more female than male faces. In contrast, findings with male participants have been inconclusive. An online experiment (N = 1,147) was carried out, on purpose over-sampling lesbian and gay participants. Findings demonstrate that the pro-female gender bias in face recognition is modified by male participants' sexual orientation. Heterosexual women and lesbians as well as heterosexual men showed a pro-female gender bias in face recognition, whereas gay men showed a pro-male gender bias, consistent with the explanation that differences in face expertise develop congruent with interests. These results contribute to the growing evidence that participant sexual orientation can be used to distinguish between alternative theoretical explanations of given gender-correlated patterns of findings. PMID:23681015

  1. Near-infrared and visible light image fusion algorithm for face recognition

    NASA Astrophysics Data System (ADS)

    Ma, Zhongli; Wen, Jie; Liu, Quanyong; Tuo, Guanjun

    2015-05-01

    In order to improve face recognition accuracy, we present a simple near-infrared (NIR) and visible light (VL) image fusion algorithm based on two-dimensional linear discriminant analysis (2DLDA). We first use two 2DLDA schemes to extract two classes of discriminant features from the NIR and VL images separately. The two classes of features for each image modality are then fused using the matching-score fusion method. Finally, a simple NIR and VL image fusion approach combines the scores of the NIR and VL images to obtain the classification result. The experimental results show that the proposed NIR and VL image fusion approach can effectively improve the accuracy of face recognition.
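
    The abstract only summarizes the pipeline; below is a minimal sketch of one common column-direction 2DLDA formulation followed by a matching score on the projected feature matrices. It is an illustration under standard textbook definitions, not the authors' exact scheme, and the NIR/VL fusion is reduced to a commented weighted score sum.

    ```python
    import numpy as np

    def two_dlda(images, labels, k=5):
        """Column-direction 2DLDA; `images` is an array of shape (n, h, w)."""
        mean_all = images.mean(axis=0)
        w = images.shape[2]
        Sb = np.zeros((w, w))        # between-class image scatter
        Sw = np.zeros((w, w))        # within-class image scatter
        for c in np.unique(labels):
            Xc = images[labels == c]
            mean_c = Xc.mean(axis=0)
            d = mean_c - mean_all
            Sb += len(Xc) * d.T @ d
            for x in Xc:
                e = x - mean_c
                Sw += e.T @ e
        # Generalized eigenproblem Sb v = lambda Sw v; keep the top-k eigenvectors.
        eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(w), Sb))
        order = np.argsort(eigvals.real)[::-1][:k]
        return eigvecs.real[:, order]                 # projection matrix W (w, k)

    def match_score(feat_a, feat_b):
        """Similarity between two projected feature matrices (larger = closer)."""
        return -np.linalg.norm(feat_a - feat_b)

    # Toy usage: 10 synthetic 32x24 images from 2 classes.
    rng = np.random.default_rng(0)
    images = rng.random((10, 32, 24))
    labels = np.repeat([0, 1], 5)
    W = two_dlda(images, labels, k=3)
    print((images[0] @ W).shape)                      # projected feature matrix (32, 3)

    # Hypothetical fusion of NIR and VL scores for one probe/gallery pair:
    # fused = 0.5 * match_score(nir_probe @ W_nir, nir_gallery @ W_nir) \
    #       + 0.5 * match_score(vl_probe @ W_vl, vl_gallery @ W_vl)
    ```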

  2. Illumination compensation and normalization for robust face recognition using discrete cosine transform in logarithm domain.

    PubMed

    Chen, Weilong; Er, Meng Joo; Wu, Shiqian

    2006-04-01

    This paper presents a novel illumination normalization approach for face recognition under varying lighting conditions. In the proposed approach, a discrete cosine transform (DCT) is employed to compensate for illumination variations in the logarithm domain. Since illumination variations mainly lie in the low-frequency band, an appropriate number of DCT coefficients are truncated to minimize variations under different lighting conditions. Experimental results on the Yale B database and CMU PIE database show that the proposed approach improves the performance significantly for the face images with large illumination variations. Moreover, the advantage of our approach is that it does not require any modeling steps and can be easily implemented in a real-time face recognition system. PMID:16602604
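
    The normalization described above is simple enough to sketch directly: take the logarithm of the image, apply a 2-D DCT, discard a small set of low-frequency coefficients (which carry most of the illumination variation), and invert. The number of discarded coefficients below is an assumed value, not the one tuned in the paper, and the DC term is kept to preserve overall brightness.

    ```python
    import numpy as np
    from scipy.fftpack import dct, idct

    def dct2(a):  return dct(dct(a, axis=0, norm='ortho'), axis=1, norm='ortho')
    def idct2(a): return idct(idct(a, axis=0, norm='ortho'), axis=1, norm='ortho')

    def normalize_illumination(img, n_coeffs=20):
        """Discard low-frequency DCT coefficients in the logarithm domain."""
        log_img = np.log1p(img.astype(np.float64))
        C = dct2(log_img)
        mean_dc = C[0, 0]                  # remember overall brightness
        # Zero a small low-frequency triangle (where illumination mostly lives).
        for u in range(n_coeffs):
            for v in range(n_coeffs - u):
                C[u, v] = 0.0
        C[0, 0] = mean_dc                  # restore the DC term
        return np.expm1(idct2(C))

    # Example on a synthetic face-sized image with a horizontal lighting gradient.
    img = np.tile(np.linspace(50, 200, 128), (128, 1))
    out = normalize_illumination(img)
    print(img.std(), out.std())            # intensity variation is strongly reduced
    ```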

  3. A 2D range Hausdorff approach for 3D face recognition.

    SciTech Connect

    Koch, Mark William; Russ, Trina Denise; Little, Charles Quentin

    2005-04-01

    This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N²) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.

  4. A Cognitively-Motivated Framework for Partial Face Recognition in Unconstrained Scenarios

    PubMed Central

    Monteiro, João C.; Cardoso, Jaime S.

    2015-01-01

    Humans perform and rely on face recognition routinely and effortlessly throughout their daily lives. Multiple works in recent years have sought to replicate this process in a robust and automatic way. However, it is known that the performance of face recognition algorithms is severely compromised in non-ideal image acquisition scenarios. In an attempt to deal with conditions such as occlusion and heterogeneous illumination, we propose a new approach motivated by the global precedent hypothesis of the human brain's cognitive mechanisms of perception. An automatic modeling of SIFT keypoint descriptors using a Gaussian mixture model (GMM)-based universal background model method is proposed. A decision is then made in an innovative hierarchical sense, with holistic information gaining precedence over a more detailed local analysis. The algorithm was tested on the ORL, AR, and Extended Yale B face databases and presented state-of-the-art performance for a variety of experimental setups. PMID:25602266
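
    As a rough sketch of the GMM-based universal background modeling of local descriptors mentioned above (not the authors' full hierarchical decision scheme), the code below fits a background GMM on descriptors pooled over many images and scores a probe against a subject model by a log-likelihood ratio. In practice the descriptors would come from a SIFT detector; random vectors stand in here, and the component count and simplified adaptation step are assumptions.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # In practice the 128-d local descriptors would come from a keypoint detector
    # such as cv2.SIFT_create().detectAndCompute(); random vectors stand in here.
    rng = np.random.default_rng(0)
    ubm_desc   = rng.standard_normal((2000, 128))        # pooled over many faces
    subj_desc  = rng.standard_normal((200, 128)) + 0.3   # one enrolled subject
    probe_desc = rng.standard_normal((50, 128)) + 0.3    # probe image descriptors

    # Universal background model, and a subject model initialized from it
    # (a crude stand-in for MAP adaptation).
    ubm  = GaussianMixture(n_components=16, covariance_type='diag',
                           random_state=0).fit(ubm_desc)
    subj = GaussianMixture(n_components=16, covariance_type='diag',
                           means_init=ubm.means_, random_state=0).fit(subj_desc)

    # Verification score: mean log-likelihood ratio over the probe descriptors.
    score = subj.score(probe_desc) - ubm.score(probe_desc)
    print(score)   # larger values favor the claimed identity
    ```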

  5. Face Memory and Object Recognition in Children with High-Functioning Autism or Asperger Syndrome and in Their Parents

    ERIC Educational Resources Information Center

    Kuusikko-Gauffin, Sanna; Jansson-Verkasalo, Eira; Carter, Alice; Pollock-Wurman, Rachel; Jussila, Katja; Mattila, Marja-Leena; Rahko, Jukka; Ebeling, Hanna; Pauls, David; Moilanen, Irma

    2011-01-01

    Children with Autism Spectrum Disorders (ASDs) have been reported to have impairments in face recognition and face memory, but intact object recognition and object memory. Potential abnormalities in these areas at the family level of high-functioning children with ASD remain understudied despite the ever-mounting evidence that ASDs are genetic and…

  6. Emotion Recognition in Faces and the Use of Visual Context in Young People with High-Functioning Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Wright, Barry; Clarke, Natalie; Jordan, Jo; Young, Andrew W.; Clarke, Paula; Miles, Jeremy; Nation, Kate; Clarke, Leesa; Williams, Christine

    2008-01-01

    We compared young people with high-functioning autism spectrum disorders (ASDs) with age, sex and IQ matched controls on emotion recognition of faces and pictorial context. Each participant completed two tests of emotion recognition. The first used Ekman series faces. The second used facial expressions in visual context. A control task involved…

  7. ERP Correlates of Target-Distracter Differentiation in Repeated Runs of a Continuous Recognition Task with Emotional and Neutral Faces

    ERIC Educational Resources Information Center

    Treese, Anne-Cecile; Johansson, Mikael; Lindgren, Magnus

    2010-01-01

    The emotional salience of faces has previously been shown to induce memory distortions in recognition memory tasks. This event-related potential (ERP) study used repeated runs of a continuous recognition task with emotional and neutral faces to investigate emotion-induced memory distortions. In the second and third runs, participants made more…

  8. Own- and Other-Race Face Identity Recognition in Children: The Effects of Pose and Feature Composition

    ERIC Educational Resources Information Center

    Anzures, Gizelle; Kelly, David J.; Pascalis, Olivier; Quinn, Paul C.; Slater, Alan M.; de Viviés, Xavier; Lee, Kang

    2014-01-01

    We used a matching-to-sample task and manipulated facial pose and feature composition to examine the other-race effect (ORE) in face identity recognition between 5 and 10 years of age. Overall, the present findings provide a genuine measure of own- and other-race face identity recognition in children that is independent of photographic and image…

  9. A Spatial Frequency Account of the Detriment that Local Processing of Navon Letters Has on Face Recognition

    ERIC Educational Resources Information Center

    Hills, Peter J.; Lewis, Michael B.

    2009-01-01

    Five minutes of processing the local features of a Navon letter causes a detriment in subsequent face-recognition performance (Macrae & Lewis, 2002). We hypothesize a perceptual after effect explanation of this effect in which face recognition is less accurate after adapting to high-spatial frequencies at high contrasts. Five experiments were…

  10. Recognizing the same face in different contexts: Testing within-person face recognition in typical development and in autism.

    PubMed

    Neil, Louise; Cappagli, Giulia; Karaminis, Themelis; Jenkins, Rob; Pellicano, Elizabeth

    2016-03-01

    Unfamiliar face recognition follows a particularly protracted developmental trajectory and is more likely to be atypical in children with autism than those without autism. There is a paucity of research, however, examining the ability to recognize the same face across multiple naturally varying images. Here, we investigated within-person face recognition in children with and without autism. In Experiment 1, typically developing 6- and 7-year-olds, 8- and 9-year-olds, 10- and 11-year-olds, 12- to 14-year-olds, and adults were given 40 grayscale photographs of two distinct male identities (20 of each face taken at different ages, from different angles, and in different lighting conditions) and were asked to sort them by identity. Children mistook images of the same person as images of different people, subdividing each individual into many perceived identities. Younger children divided images into more perceived identities than adults and also made more misidentification errors (placing two different identities together in the same group) than older children and adults. In Experiment 2, we used the same procedure with 32 cognitively able children with autism. Autistic children reported a similar number of identities and made similar numbers of misidentification errors to a group of typical children of similar age and ability. Fine-grained analysis using matrices revealed marginal group differences in overall performance. We suggest that the immature performance in typical and autistic children could arise from problems extracting the perceptual commonalities from different images of the same person and building stable representations of facial identity. PMID:26615971

  11. Recognizing the same face in different contexts: Testing within-person face recognition in typical development and in autism

    PubMed Central

    Neil, Louise; Cappagli, Giulia; Karaminis, Themelis; Jenkins, Rob; Pellicano, Elizabeth

    2016-01-01

    Unfamiliar face recognition follows a particularly protracted developmental trajectory and is more likely to be atypical in children with autism than those without autism. There is a paucity of research, however, examining the ability to recognize the same face across multiple naturally varying images. Here, we investigated within-person face recognition in children with and without autism. In Experiment 1, typically developing 6- and 7-year-olds, 8- and 9-year-olds, 10- and 11-year-olds, 12- to 14-year-olds, and adults were given 40 grayscale photographs of two distinct male identities (20 of each face taken at different ages, from different angles, and in different lighting conditions) and were asked to sort them by identity. Children mistook images of the same person as images of different people, subdividing each individual into many perceived identities. Younger children divided images into more perceived identities than adults and also made more misidentification errors (placing two different identities together in the same group) than older children and adults. In Experiment 2, we used the same procedure with 32 cognitively able children with autism. Autistic children reported a similar number of identities and made similar numbers of misidentification errors to a group of typical children of similar age and ability. Fine-grained analysis using matrices revealed marginal group differences in overall performance. We suggest that the immature performance in typical and autistic children could arise from problems extracting the perceptual commonalities from different images of the same person and building stable representations of facial identity. PMID:26615971

  12. Dissociable roles of internal feelings and face recognition ability in facial expression decoding.

    PubMed

    Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia

    2016-05-15

    The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding. PMID:26908317

  13. Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression

    PubMed Central

    Gao, Guangwei; Yang, Jian; Jing, Xiaoyuan; Huang, Pu; Hua, Juliang; Yue, Dong

    2016-01-01

    In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms. PMID:27525734
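
    A compact way to illustrate the patch-wise regression idea described above (heavily simplified relative to the paper) is, for each class and patch, to fit least-squares coefficients on the class's training patches and to score the class by the nuclear norm of the reconstruction residual, summed over several patch scales. The least-squares fit stands in for the paper's nuclear-norm-regularized regression, and the patch sizes and toy data are assumptions.

    ```python
    import numpy as np

    def class_residual_score(query_patch, class_patches):
        """Nuclear norm of the residual after regressing the query patch on a
        class's training patches (least squares stands in for the paper's
        nuclear-norm-regularized matrix regression)."""
        A = np.stack([p.ravel() for p in class_patches], axis=1)   # pixels x n
        coef, *_ = np.linalg.lstsq(A, query_patch.ravel(), rcond=None)
        residual = query_patch - (A @ coef).reshape(query_patch.shape)
        return np.linalg.norm(residual, ord='nuc')

    def multi_scale_score(query_img, class_imgs, patch_sizes=(16, 32, 64)):
        """Sum patch-wise residual scores over several (assumed) patch scales."""
        total = 0.0
        h, w = query_img.shape
        for s in patch_sizes:
            for i in range(0, h - s + 1, s):
                for j in range(0, w - s + 1, s):
                    q = query_img[i:i + s, j:j + s]
                    c = [im[i:i + s, j:j + s] for im in class_imgs]
                    total += class_residual_score(q, c)
        return total       # the class with the smallest total score is chosen

    # Toy usage: two classes of 64x64 "faces", three training images each.
    rng = np.random.default_rng(0)
    classes = [rng.standard_normal((3, 64, 64)) for _ in range(2)]
    probe = classes[0][0] + 0.1 * rng.standard_normal((64, 64))
    print([multi_scale_score(probe, c) for c in classes])
    ```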

  14. Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression.

    PubMed

    Gao, Guangwei; Yang, Jian; Jing, Xiaoyuan; Huang, Pu; Hua, Juliang; Yue, Dong

    2016-01-01

    In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms. PMID:27525734

  15. Optimized face recognition algorithm using radial basis function neural networks and its practical applications.

    PubMed

    Yoo, Sung-Hoon; Oh, Sung-Kwun; Pedrycz, Witold

    2015-09-01

    In this study, we propose a hybrid method of face recognition using information extracted from the detected face region. In the preprocessing part, we develop a hybrid approach based on the Active Shape Model (ASM) and the Principal Component Analysis (PCA) algorithm. At this step, we use a CCD (Charge Coupled Device) camera to acquire a facial image, the face region is detected by using AdaBoost, and then Histogram Equalization (HE) is employed to improve the quality of the image. ASM extracts the face contour and image shape to produce a personal profile. Then we use a PCA method to reduce the dimensionality of face images. In the recognition part, we consider the improved Radial Basis Function Neural Networks (RBF NNs) to identify a unique pattern associated with each person. The proposed RBF NN architecture consists of three functional modules realizing the condition phase, the conclusion phase, and the inference phase completed with the help of fuzzy rules coming in the standard 'if-then' format. In the formation of the condition part of the fuzzy rules, the input space is partitioned with the use of Fuzzy C-Means (FCM) clustering. In the conclusion part of the fuzzy rules, the connections (weights) of the RBF NNs are represented by four kinds of polynomials such as constant, linear, quadratic, and reduced quadratic. The values of the coefficients are determined by running a gradient descent method. The output of the RBF NNs model is obtained by running a fuzzy inference method. The essential design parameters of the network (including learning rate, momentum coefficient and fuzzification coefficient used by the FCM) are optimized by means of Differential Evolution (DE). The proposed P-RBF NNs (Polynomial based RBF NNs) are applied to facial recognition and their performance is quantified from the viewpoint of the output performance and recognition rate. PMID:26163042

  16. Multi-stream face recognition on dedicated mobile devices for crime-fighting

    NASA Astrophysics Data System (ADS)

    Jassim, Sabah A.; Sellahewa, Harin

    2006-09-01

    Automatic face recognition is a useful tool in the fight against crime and terrorism. Technological advances in mobile communication systems and multi-application mobile devices enable the creation of hybrid platforms for active and passive surveillance. A dedicated mobile device that incorporates audio-visual sensors would not only complement existing networks of fixed surveillance devices (e.g. CCTV) but could also provide wide geographical coverage in almost any situation and anywhere. Such a device can hold a small portion of a law-enforcing agency biometric database that consists of audio and/or visual data of a number of suspects/wanted or missing persons who are expected to be in a local geographical area. This will assist law-enforcing officers on the ground in identifying persons whose biometric templates are downloaded onto their devices. Biometric data on the device can be regularly updated, which will reduce the number of faces an officer has to remember. Such a dedicated device would act as an active/passive mobile surveillance unit that incorporates automatic identification. This paper is concerned with the feasibility of using wavelet-based face recognition schemes on such devices. The proposed schemes extend our recently developed face verification scheme for implementation on a currently available PDA. In particular we will investigate the use of a combination of wavelet frequency channels for multi-stream face recognition. We shall present experimental results on the performance of our proposed schemes for a number of publicly available face databases including a new AV database of videos recorded on a PDA.

  17. A family at risk: congenital prosopagnosia, poor face recognition and visuoperceptual deficits within one family.

    PubMed

    Johnen, Andreas; Schmukle, Stefan C; Hüttenbrink, Judith; Kischka, Claudia; Kennerknecht, Ingo; Dobel, Christian

    2014-05-01

    Congenital prosopagnosia (CP) describes a severe face processing impairment despite intact early vision and in the absence of overt brain damage. CP is assumed to be present from birth and often transmitted within families. Previous studies reported conflicting findings regarding associated deficits in nonface visuoperceptual tasks. However, diagnostic criteria for CP significantly differed between studies, impeding conclusions on the heterogeneity of the impairment. Following current suggestions for clinical diagnoses of CP, we administered standardized tests for face processing, a self-report questionnaire and general visual processing tests to an extended family (N=28), in which many members reported difficulties with face recognition. This allowed us to assess the degree of heterogeneity of the deficit within a large sample of suspected CPs of similar genetic and environmental background. (a) We found evidence for a severe face processing deficit but intact nonface visuoperceptual skills in three family members - a father and his two sons - who fulfilled conservative criteria for a CP diagnosis on standardized tests and a self-report questionnaire, thus corroborating findings of familial transmissions of CP. (b) Face processing performance of the remaining family members was also significantly below the mean of the general population, suggesting that face processing impairments are transmitted as a continuous trait rather than in a dichotomous all-or-nothing fashion. (c) Self-rating scores of face recognition showed acceptable correlations with standardized tests, suggesting this method as a viable screening procedure for CP diagnoses. (d) Finally, some family members revealed severe impairments in general visual processing and nonface visual memory tasks either in conjunction with face perception deficits or as an isolated impairment. This finding may indicate an elevated risk for more general visuoperceptual deficits in families with prosopagnosic members

  18. An efficient multimodal 2D-3D hybrid approach to automatic face recognition.

    PubMed

    Mian, Ajmal S; Bennamoun, Mohammed; Owens, Robyn

    2007-11-01

    We present a fully automatic face recognition algorithm and demonstrate its performance on the FRGC v2.0 data. Our algorithm is multimodal (2D and 3D) and performs hybrid (feature-based and holistic) matching in order to achieve efficiency and robustness to facial expressions. The pose of a 3D face along with its texture is automatically corrected using a novel approach based on a single automatically detected point and the Hotelling transform. A novel 3D Spherical Face Representation (SFR) is used in conjunction with the SIFT descriptor to form a rejection classifier which quickly eliminates a large number of candidate faces at an early stage for efficient recognition in case of large galleries. The remaining faces are then verified using a novel region-based matching approach which is robust to facial expressions. This approach automatically segments the eyes-forehead and the nose regions, which are relatively less sensitive to expressions, and matches them separately using a modified ICP algorithm. The results of all the matching engines are fused at the metric level to achieve higher accuracy. We use the FRGC benchmark to compare our results to other algorithms which used the same database. Our multimodal hybrid algorithm performed better than others by achieving 99.74% and 98.31% verification rates at 0.001 FAR and identification rates of 99.02% and 95.37% for probes with neutral and non-neutral expression respectively. PMID:17848775

  19. Weighted joint sparse representation-based classification method for robust alignment-free face recognition

    NASA Astrophysics Data System (ADS)

    Sun, Bo; Xu, Feng; Zhou, Guoyan; He, Jun; Ge, Fengxiang

    2015-01-01

    This work proposes a weighted joint sparse representation (WJSR)-based classification method for robust alignment-free face recognition, in which an image is represented by a set of scale-invariant feature transform descriptors. The proposed method considers the correlation and the reliability of the query descriptors. The reliability is measured by the similarity information between the query descriptors and the atoms in the dictionary, which is incorporated into the l0∖l2-norm minimization to seek the optimal WJSR. Compared with the related state-of-art methods, the performance is advanced, as verified by the experiments on the benchmark face databases.

  20. 3D face recognition based on the hierarchical score-level fusion classifiers

    NASA Astrophysics Data System (ADS)

    Mráček, Štěpán.; Váša, Jan; Lankašová, Karolína; Drahanský, Martin; Doležel, Michal

    2014-05-01

    This paper describes a 3D face recognition algorithm that is based on hierarchical score-level fusion classifiers. In a simple (unimodal) biometric pipeline, the feature vector is extracted from the input data and subsequently compared with the template stored in the database. In our approach, we utilize several feature extraction algorithms. We use 6 different image representations of the input 3D face data. Moreover, we apply Gabor and Gauss-Laguerre filter banks to the input image data, yielding 12 resulting feature vectors. Each representation is compared with its corresponding counterpart from the biometric database. We also add recognition based on iso-geodesic curves. The final score-level fusion is performed on 13 comparison scores using a Support Vector Machine (SVM) classifier.
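
    A minimal sketch of SVM-based score-level fusion in the spirit of the description above: each probe/gallery comparison is represented by a vector of per-representation comparison scores (13 in the paper; synthetic values below), and an SVM is trained to separate genuine from impostor comparisons. All data here are synthetic stand-ins.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_pairs, n_scores = 1000, 13          # 13 comparison scores per face pair
    genuine = rng.standard_normal((n_pairs // 2, n_scores)) + 1.0
    impostor = rng.standard_normal((n_pairs // 2, n_scores))
    X = np.vstack([genuine, impostor])
    y = np.r_[np.ones(n_pairs // 2), np.zeros(n_pairs // 2)]

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    fusion = SVC(kernel='rbf', probability=True).fit(X_tr, y_tr)
    print(fusion.score(X_te, y_te))       # fused accept/reject accuracy
    ```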

  1. Amygdala Volume Predicts Inter-Individual Differences in Fearful Face Recognition

    PubMed Central

    Zhao, Ke; Yan, Wen-Jing; Chen, Yu-Hsin; Zuo, Xi-Nian; Fu, Xiaolan

    2013-01-01

    The present study investigates the relationship between inter-individual differences in fearful face recognition and amygdala volume. Thirty normal adults were recruited and each completed two identical facial expression recognition tests offline and two magnetic resonance imaging (MRI) scans. Linear regression indicated that the left amygdala volume negatively correlated with the accuracy of recognizing fearful facial expressions and positively correlated with the probability of misrecognizing fear as surprise. Further exploratory analyses revealed that this relationship did not exist for any other subcortical or cortical regions. Nor did such a relationship exist between the left amygdala volume and performance recognizing the other five facial expressions. These mind-brain associations highlight the importance of the amygdala in recognizing fearful faces and provide insights regarding inter-individual differences in sensitivity toward fear-relevant stimuli. PMID:24009767

  2. A high precision feature based on LBP and Gabor theory for face recognition.

    PubMed

    Xia, Wei; Yin, Shouyi; Ouyang, Peng

    2013-01-01

    How to describe an image accurately with the most useful information and, at the same time, the least useless information is a basic problem in the recognition field. In this paper, a novel and high-precision feature called BG2D2LRP is proposed, accompanied by a corresponding face recognition system. The feature contains both static texture differences and dynamic contour trends. It is based on Gabor and LBP theory, operated by various kinds of transformations such as block, second derivative, direct orientation, layer and finally fusion in a particular way. Seven well-known face databases such as FRGC, AR and FERET are used to evaluate the accuracy and robustness of the proposed feature. A maximum improvement of 29.41% is achieved compared with other methods. In addition, the ROC curves are satisfactory. These experimental results strongly demonstrate the feasibility and superiority of the new feature and method. PMID:23552103
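
    The record above combines LBP- and Gabor-type information; the snippet below shows only the generic building blocks (a uniform LBP histogram and mean Gabor magnitude responses via scikit-image), not the paper's BG2D2LRP construction with second derivatives and layered fusion. Filter frequency, orientations, and LBP radius are assumed values.

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern
    from skimage.filters import gabor

    def lbp_gabor_features(gray, P=8, R=1.0):
        """Concatenate a uniform-LBP histogram with mean Gabor magnitudes."""
        lbp = local_binary_pattern(gray, P, R, method='uniform')
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        gabor_means = []
        for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):   # 4 orientations
            real, imag = gabor(gray, frequency=0.2, theta=theta)
            gabor_means.append(np.hypot(real, imag).mean())
        return np.concatenate([hist, gabor_means])

    # Toy usage on a synthetic 64x64 grayscale image.
    rng = np.random.default_rng(0)
    img = (rng.random((64, 64)) * 255).astype(np.uint8)
    print(lbp_gabor_features(img).shape)   # (10 + 4,) = (14,)
    ```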

  3. Amygdala volume predicts inter-individual differences in fearful face recognition.

    PubMed

    Zhao, Ke; Yan, Wen-Jing; Chen, Yu-Hsin; Zuo, Xi-Nian; Fu, Xiaolan

    2013-01-01

    The present study investigates the relationship between inter-individual differences in fearful face recognition and amygdala volume. Thirty normal adults were recruited and each completed two identical facial expression recognition tests offline and two magnetic resonance imaging (MRI) scans. Linear regression indicated that the left amygdala volume negatively correlated with the accuracy of recognizing fearful facial expressions and positively correlated with the probability of misrecognizing fear as surprise. Further exploratory analyses revealed that this relationship did not exist for any other subcortical or cortical regions. Nor did such a relationship exist between the left amygdala volume and performance recognizing the other five facial expressions. These mind-brain associations highlight the importance of the amygdala in recognizing fearful faces and provide insights regarding inter-individual differences in sensitivity toward fear-relevant stimuli. PMID:24009767

  4. Face recognition using 3D facial shape and color map information: comparison and combination

    NASA Astrophysics Data System (ADS)

    Godil, Afzal; Ressler, Sandy; Grother, Patrick

    2004-08-01

    In this paper, we investigate the use of 3D surface geometry for face recognition and compare it to one based on color map information. The 3D surface and color map data are from the CAESAR anthropometric database. We find that the recognition performance is not very different between 3D surface and color map information using a principal component analysis algorithm. We also discuss the different techniques for the combination of the 3D surface and color map information for multi-modal recognition by using different fusion approaches and show that there is significant improvement in results. The effectiveness of various techniques is compared and evaluated on a dataset with 200 subjects in two different positions.

  5. The Effect of Inversion on 3- to 5-Year-Olds' Recognition of Face and Nonface Visual Objects

    ERIC Educational Resources Information Center

    Picozzi, Marta; Cassia, Viola Macchi; Turati, Chiara; Vescovo, Elena

    2009-01-01

    This study compared the effect of stimulus inversion on 3- to 5-year-olds' recognition of faces and two nonface object categories matched with faces for a number of attributes: shoes (Experiment 1) and frontal images of cars (Experiments 2 and 3). The inversion effect was present for faces but not shoes at 3 years of age (Experiment 1). Analogous…

  6. Band-Reweighed Gabor Kernel Embedding for Face Image Representation and Recognition.

    PubMed

    Ren, Chuan-Xian; Dai, Dao-Qing; Li, Xiao-Xin; Lai, Zhao-Rong

    2014-02-01

    Face recognition with illumination or pose variation is a challenging problem in image processing and pattern recognition. A novel algorithm using band-reweighed Gabor kernel embedding to deal with the problem is proposed in this paper. For a given image, it is first transformed by a group of Gabor filters, which output Gabor features using different orientation and scale parameters. Fisher scoring function is used to measure the importance of features in each band, and then, the features with the largest scores are preserved for saving memory requirements. The reduced bands are combined by a vector, which is determined by a weighted kernel discriminant criterion and solved by a constrained quadratic programming method, and then, the weighted sum of these nonlinear bands is defined as the similarity between two images. Compared with existing concatenation-based Gabor feature representation and the uniformly weighted similarity calculation approaches, our method provides a new way to use Gabor features for face recognition and presents a reasonable interpretation for highlighting discriminant orientations and scales. The minimum Mahalanobis distance considering the spatial correlations within the data is exploited for feature matching, and the graphical lasso is used therein for directly estimating the sparse inverse covariance matrix. Experiments using benchmark databases show that our new algorithm improves the recognition results and obtains competitive performance. PMID:26270914

  7. An rTMS study into self-face recognition using video-morphing technique

    PubMed Central

    Heinisch, Christine; Dinse, Hubert R.; Tegenthoff, Martin; Juckel, Georg

    2011-01-01

    Self-face recognition is a sign of higher order self-awareness. Research into the neuronal network argues that the visual pathway of recognizing one’s own face differs from recognizing others. The present study aimed at investigating the cortical network of self-other discrimination by producing virtual lesions over the temporo-parietal junction and the prefrontal cortex using low-frequency repetitive transcranial magnetic stimulation (rTMS) in a sham-controlled design. Frontal and parietal areas were stimulated separately in consecutive sessions one week apart in 10 healthy subjects. We designed a video-task comprising morphings of famous, unfamiliar and the subjects’ own faces that transformed into each other over a time period of six seconds. Reaction time (RT) was measured by pushing a mouse-button once a change of identity was recognized. rTMS over the right temporo-parietal junction led to a decrease in RT when a subject’s own face emerged from a familiar face; a similar effect was observed after rTMS over right-prefrontal and left-parietal cortices, when the subjects’ ratings of own likeability were taken into account. The transition from an unfamiliar face to one’s own face indicated a left frontal lateralization. PMID:20587597

  8. Recognition based on two separated singular value decomposition-enriched faces

    NASA Astrophysics Data System (ADS)

    Wang, Jing-Wein; Le, Ngoc Tuyen; Lee, Jiann-Shu; Wang, Chou-Chen

    2014-11-01

    In previous studies on human face recognition, illumination pretreatment has been considered to be among the most crucial steps. We propose an illumination compensation algorithm called two separated singular value decomposition (TSVD). TSVD consists of two parts, namely the division of high- and low-level images and singular value decomposition, which are implemented according to self-adapted illumination compensation to resolve the problems associated with strong variation of light and to improve face recognition performance. The mean color values of the three color channels R, G, and B are used as the thresholds, and two subimages of two light levels are then obtained by division at the maximal and minimal means, which are incorporated with light templates at various horizontal levels. The dynamic compensation coefficient is proportionately adjusted to reconstruct the subimages. Finally, the two subimages are integrated to achieve illumination compensation. In addition, we combined TSVD and the projection color space (PCS) to design a new method for converting the color space, called the two-level PCS. Experimental results demonstrated the efficiency of our proposed method. The proposed method not only makes the skin color of facial images appear softer but also substantially improves the accuracy of face recognition, even in facial images that were taken under conditions of lateral light or exhibit variations in posture.
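
    The TSVD pipeline above is specific to the authors; the fragment below only illustrates the generic SVD step it builds on: decompose an intensity channel, rescale the leading singular value (which carries most of the smooth, global illumination energy), and reconstruct. The scaling rule and gain value are assumptions for demonstration, not the paper's compensation coefficients.

    ```python
    import numpy as np

    def svd_illumination_adjust(channel, gain=0.8):
        """Rescale the leading singular value of an intensity channel.

        The largest singular value captures most of the smooth, global
        illumination component, so damping it flattens strong lighting."""
        U, s, Vt = np.linalg.svd(channel.astype(np.float64), full_matrices=False)
        s[0] *= gain
        out = U @ np.diag(s) @ Vt
        return np.clip(out, 0, 255)

    # Example: a synthetic channel with a strong lateral lighting gradient.
    img = np.tile(np.linspace(30, 220, 128), (128, 1))
    adjusted = svd_illumination_adjust(img)
    print(img.mean(), adjusted.mean())   # overall brightness is reduced/evened
    ```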

  9. Camera characterization for face recognition under active near-infrared illumination

    NASA Astrophysics Data System (ADS)

    Gernoth, Thorsten; Grigat, Rolf-Rainer

    2010-01-01

    Active near-infrared illumination may be used in a face recognition system to achieve invariance to changes of the visible illumination. Another benefit of active near-infrared illumination is the bright pupil effect, which can be used to assist eye detection. However, long-term exposure to near-infrared radiation is hazardous to the eyes. The level of illumination is therefore limited by potentially harmful effects to the eyes. Image sensors for face recognition under active near-infrared illumination therefore have to be carefully selected to provide optimal image quality in the desired field of application. A model of the active illumination source is introduced. Safety issues with regard to near-infrared illumination are addressed using this model and a radiometric analysis. From the illumination model, requirements on suitable imaging sensors are formulated. Standard image quality metrics are used to assess the imaging device performance under application-typical conditions. The characterization of image quality is based on measurements of the Opto-Electronic Conversion Function, Modulation Transfer Function and noise. A methodology to select an image sensor for the desired field of application is given. Two cameras with low-cost image sensors are characterized using the key parameters that influence the image quality for face recognition.

  10. Robust Face Recognition via Minimum Error Entropy-Based Atomic Representation.

    PubMed

    Wang, Yulong; Tang, Yuan Yan; Li, Luoqing

    2015-12-01

    Representation-based classifiers (RCs) have attracted considerable attention in face recognition in recent years. However, most existing RCs use the mean square error (MSE) criterion as the cost function, which relies on the Gaussianity assumption of the error distribution and is sensitive to non-Gaussian noise. This may severely degrade the performance of MSE-based RCs in recognizing facial images with random occlusion and corruption. In this paper, we present a minimum error entropy-based atomic representation (MEEAR) framework for face recognition. Unlike existing MSE-based RCs, our framework is based on the minimum error entropy criterion, which is not dependent on the error distribution and shown to be more robust to noise. In particular, MEEAR can produce discriminative representation vector by minimizing the atomic norm regularized Renyi's entropy of the reconstruction error. The optimality conditions are provided for general atomic representation model. As a general framework, MEEAR can also be used as a platform to develop new classifiers. Two effective MEE-based RCs are proposed by defining appropriate atomic sets. The experimental results on popular face databases show that MEEAR can improve both the recognition accuracy and the reconstructed results compared with the state-of-the-art MSE-based RCs. PMID:26513784

  11. Emotional face recognition in adolescent suicide attempters and adolescents engaging in non-suicidal self-injury.

    PubMed

    Seymour, Karen E; Jones, Richard N; Cushman, Grace K; Galvan, Thania; Puzia, Megan E; Kim, Kerri L; Spirito, Anthony; Dickstein, Daniel P

    2016-03-01

    Little is known about the bio-behavioral mechanisms underlying and differentiating suicide attempts from non-suicidal self-injury (NSSI) in adolescents. Adolescents who attempt suicide or engage in NSSI often report significant interpersonal and social difficulties. Emotional face recognition ability is a fundamental skill required for successful social interactions, and deficits in this ability may provide insight into the unique brain-behavior interactions underlying suicide attempts versus NSSI in adolescents. Therefore, we examined emotional face recognition ability among three mutually exclusive groups: (1) inpatient adolescents who attempted suicide (SA, n = 30); (2) inpatient adolescents engaged in NSSI (NSSI, n = 30); and (3) typically developing controls (TDC, n = 30) without psychiatric illness. Participants included adolescents aged 13-17 years, matched on age, gender and full-scale IQ. Emotional face recognition was evaluated using the diagnostic assessment of nonverbal accuracy (DANVA-2). Compared to TDC youth, adolescents with NSSI made more errors on child fearful and adult sad face recognition while controlling for psychopathology and medication status (ps < 0.05). No differences were found on emotional face recognition between NSSI and SA groups. Secondary analyses showed that compared to inpatients without major depression, those with major depression made fewer errors on adult sad face recognition even when controlling for group status (p < 0.05). Further, compared to inpatients without generalized anxiety, those with generalized anxiety made fewer recognition errors on adult happy faces even when controlling for group status (p < 0.05). Adolescent inpatients engaged in NSSI showed greater deficits in emotional face recognition than TDC, but not inpatient adolescents who attempted suicide. Further results suggest the importance of psychopathology in emotional face recognition. Replication of these preliminary results and examination

  12. Gabor-based kernel PCA with fractional power polynomial models for face recognition.

    PubMed

    Liu, Chengjun

    2004-05-01

    This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power
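
    A compact numerical sketch of kernel PCA with a fractional power polynomial kernel, keeping only the eigenvectors associated with positive eigenvalues as the record describes. The features here are random non-negative stand-ins for Gabor magnitude features, and the 0.8 exponent and component count are assumed values.

    ```python
    import numpy as np

    def fractional_poly_kpca(X, degree=0.8, n_components=10):
        """Kernel PCA with k(x, y) = (x . y)^degree, positive eigenpairs only."""
        K = (X @ X.T) ** degree              # X is non-negative, so no sign issues
        n = K.shape[0]
        ones = np.full((n, n), 1.0 / n)
        Kc = K - ones @ K - K @ ones + ones @ K @ ones   # center in feature space
        eigvals, eigvecs = np.linalg.eigh(Kc)
        order = np.argsort(eigvals)[::-1]
        keep = [i for i in order if eigvals[i] > 1e-10][:n_components]
        alphas = eigvecs[:, keep] / np.sqrt(eigvals[keep])
        return Kc @ alphas                   # projected training features

    # Toy usage: 50 samples of 100-dimensional non-negative (Gabor-like) features.
    rng = np.random.default_rng(0)
    X = rng.random((50, 100))
    print(fractional_poly_kpca(X).shape)     # (50, 10)
    ```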

  13. Appearance characterization of linear Lambertian objects, generalized photometric stereo, and illumination-invariant face recognition.

    PubMed

    Zhou, Shaohua Kevin; Aggarwal, Gaurav; Chellappa, Rama; Jacobs, David W

    2007-02-01

    Traditional photometric stereo algorithms employ a Lambertian reflectance model with a varying albedo field and involve the appearance of only one object. In this paper, we generalize photometric stereo algorithms to handle all appearances of all objects in a class, in particular the human face class, by making use of the linear Lambertian property. A linear Lambertian object is one which is linearly spanned by a set of basis objects and has a Lambertian surface. The linear property leads to a rank constraint and, consequently, a factorization of an observation matrix that consists of exemplar images of different objects (e.g., faces of different subjects) under different, unknown illuminations. Integrability and symmetry constraints are used to fully recover the subspace bases using a novel linearized algorithm that takes the varying albedo field into account. The effectiveness of the linear Lambertian property is further investigated by using it for the problem of illumination-invariant face recognition using just one image. Attached shadows are incorporated in the model by a careful treatment of the inherent nonlinearity in Lambert's law. This enables us to extend our algorithm to perform face recognition in the presence of multiple illumination sources. Experimental results using standard data sets are presented. PMID:17170477
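
    To make the rank constraint mentioned above concrete, the toy example below renders a Lambertian surface (ignoring attached shadows) under several light directions and shows that the matrix of vectorized images has rank at most 3, which is the property such factorizations exploit. Surface normals, albedo, and lights are synthetic; this is only an illustrative sketch, not the paper's generalized algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_pixels, n_lights = 500, 8
    normals = rng.standard_normal((n_pixels, 3))
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)   # unit surface normals
    albedo = rng.random(n_pixels)                                # per-pixel albedo
    lights = rng.standard_normal((3, n_lights))                  # light directions

    # Lambert's law without attached shadows: intensity = albedo * (n . s).
    images = albedo[:, None] * (normals @ lights)                # pixels x lights

    # The observation matrix is (at most) rank 3, which photometric-stereo style
    # factorizations exploit to recover scaled normals up to a linear ambiguity.
    print(np.linalg.matrix_rank(images))                         # -> 3
    U, s, Vt = np.linalg.svd(images, full_matrices=False)
    scaled_normals = U[:, :3] * s[:3]     # albedo-scaled normals, up to a 3x3 ambiguity
    ```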

  14. Single trial EEG classification applied to a face recognition experiment using different feature extraction methods.

    PubMed

    Li, Yudu; Ma, Sen; Hu, Zhongze; Chen, Jiansheng; Su, Guangda; Dou, Weibei

    2015-08-01

    Research on brain machine interface (BMI) has been developed very fast in recent years. Numerous feature extraction methods have successfully been applied to electroencephalogram (EEG) classification in various experiments. However, little effort has been spent on EEG based BMI systems regarding familiarity of human faces cognition. In this work, we have implemented and compared the classification performances of four common feature extraction methods, namely, common spatial pattern, principal component analysis, wavelet transform and interval features. High resolution EEG signals were collected from fifteen healthy subjects stimulated by equal number of familiar and novel faces. Principal component analysis outperforms other methods with average classification accuracy reaching 94.2% leading to possible real life applications. Our findings thereby may contribute to the BMI systems for face recognition. PMID:26737964
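
    A minimal scikit-learn sketch of the best-performing pipeline reported above (PCA features followed by a simple classifier) on synthetic stand-ins for the familiar/novel-face EEG epochs; the epoch dimensions, component count, and downstream classifier are assumptions rather than the authors' settings.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_channels, n_samples = 300, 64, 256     # assumed epoch dimensions
    X = rng.standard_normal((n_trials, n_channels * n_samples))  # flattened epochs
    y = rng.integers(0, 2, size=n_trials)              # familiar vs. novel face

    clf = make_pipeline(PCA(n_components=40), LinearSVC(max_iter=5000))
    print(cross_val_score(clf, X, y, cv=5).mean())     # chance level on random data
    ```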

  15. An Evolutionary Feature-Based Visual Attention Model Applied to Face Recognition

    NASA Astrophysics Data System (ADS)

    Vázquez, Roberto A.; Sossa, Humberto; Garro, Beatriz A.

    Visual attention is a powerful mechanism that enables perception to focus on a small subset of the information picked up by our eyes. It is directly related to the accuracy of an object categorization task. In this paper we adopt these biological hypotheses and propose an evolutionary visual attention model applied to the face recognition problem. The model is composed of three levels: the attentive level, which determines where to look by means of a retinal ganglion network simulated using a network of bi-stable neurons and controlled by an evolutionary process; the preprocessing level, which analyzes and processes the information from the retinal ganglion network; and the associative level, which uses a neural network to associate the visual stimuli with the face of a particular person. To test the accuracy of the model, a benchmark of face images is used.

  16. Integration of multispectral face recognition and multi-PTZ camera automated surveillance for security applications

    NASA Astrophysics Data System (ADS)

    Chen, Chung-Hao; Yao, Yi; Chang, Hong; Koschan, Andreas; Abidi, Mongi

    2013-06-01

    Due to increasing security concerns, a complete security system should consist of two major components, a computer-based face-recognition system and a real-time automated video surveillance system. A computer-based face-recognition system can be used in gate access control for identity authentication. In recent studies, multispectral imaging and fusion of multispectral narrow-band images in the visible spectrum have been employed and proven to enhance the recognition performance over conventional broad-band images, especially when the illumination changes. Thus, we present an automated method that specifies the optimal spectral ranges under the given illumination. Experimental results verify the consistent performance of our algorithm via the observation that an identical set of spectral band images is selected under all tested conditions. Our discovery can be practically used for a new customized sensor design associated with given illuminations for an improved face recognition performance over conventional broad-band images. In addition, once a person is authorized to enter a restricted area, we still need to continuously monitor his/her activities for the sake of security. Because pan-tilt-zoom (PTZ) cameras are capable of covering a panoramic area and maintaining high resolution imagery for real-time behavior understanding, research in automated surveillance systems with multiple PTZ cameras has become increasingly important. Most existing algorithms require the prior knowledge of intrinsic parameters of the PTZ camera to infer the relative positioning and orientation among multiple PTZ cameras. To overcome this limitation, we propose a novel mapping algorithm that derives the relative positioning and orientation between two PTZ cameras based on a unified polynomial model. This reduces the dependence on the knowledge of intrinsic parameters of the PTZ camera and relative positions. Experimental results demonstrate that our proposed algorithm presents substantially

  17. Recognition of faces and names: multimodal physiological correlates of memory and executive function.

    PubMed

    Mitchell, Meghan B; Shirk, Steven D; McLaren, Donald G; Dodd, Jessica S; Ezzati, Ali; Ally, Brandon A; Atri, Alireza

    2016-06-01

    We sought to characterize electrophysiological, eye-tracking and behavioral correlates of face-name recognition memory in healthy younger adults using high-density electroencephalography (EEG), infrared eye-tracking (ET), and neuropsychological measures. Twenty-one participants first studied 40 face-name (FN) pairs; 20 were presented four times (4R) and 20 were shown once (1R). Recognition memory was assessed by asking participants to make old/new judgments for 80 FN pairs, of which half were previously studied items and half were novel FN pairs (N). Simultaneous EEG and ET recordings were collected during recognition trials. Event-related potentials (ERPs) for correctly identified FN pairs were compared across the three item types, revealing classic ERP old/new effects including 1) relative positivity (1R > N) bi-frontally from 300 to 500 ms, reflecting enhanced familiarity, 2) relative positivity (4R > 1R and 4R > N) in parietal areas from 500 to 800 ms, reflecting enhanced recollection, and 3) late frontal effects (1R > N) from 1000 to 1800 ms in right frontal areas, reflecting post-retrieval monitoring. ET analysis also revealed significant differences in eye movements across conditions. Exploration of cross-modality relationships suggested associations between memory and executive function measures and the three ERP effects. Executive function measures were associated with several indicators of saccadic eye movements and fixations, which were also associated with all three ERP effects. This novel characterization of face-name recognition memory performance using simultaneous EEG and ET reproduced classic ERP and ET effects, supports the construct validity of the multimodal FN paradigm, and holds promise as an integrative tool to probe brain networks supporting memory and executive functioning. PMID:26116280

  18. 3D face recognition using simulated annealing and the surface interpenetration measure.

    PubMed

    Queirolo, Chauã C; Silva, Luciano; Bellon, Olga R P; Segundo, Maurício Pamplona

    2010-02-01

    This paper presents a novel automatic framework to perform 3D face recognition. The proposed method uses a Simulated Annealing-based approach (SA) for range image registration with the Surface Interpenetration Measure (SIM) as the similarity measure in order to match two face images. The authentication score is obtained by combining the SIM values corresponding to the matching of four different face regions: circular and elliptical areas around the nose, the forehead, and the entire face region. Then, a modified SA approach is proposed that takes advantage of invariant face regions to better handle facial expressions. Comprehensive experiments were performed on the FRGC v2 database, the largest available database of 3D face images, composed of 4,007 images with different facial expressions. The experiments simulated both verification and identification systems and the results were compared to those reported in state-of-the-art works. By using all of the images in the database, a verification rate of 96.5 percent was achieved at a False Acceptance Rate (FAR) of 0.1 percent. In the identification scenario, a rank-one accuracy of 98.4 percent was achieved. To the best of our knowledge, this is the highest rank-one score ever achieved for the FRGC v2 database when compared to results published in the literature. PMID:20075453
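
    A toy sketch of simulated-annealing-based registration in the spirit of this framework, using SciPy's dual_annealing on synthetic 2-D point sets; the RMS distance over known point correspondences stands in for the Surface Interpenetration Measure, and the data, bounds and rigid 2-D motion model are assumptions.

        import numpy as np
        from scipy.optimize import dual_annealing

        rng = np.random.default_rng(2)
        model = rng.uniform(0, 1, size=(200, 2))                    # "gallery" range points
        theta_true, t_true = 0.2, np.array([0.1, -0.05])
        R_true = np.array([[np.cos(theta_true), -np.sin(theta_true)],
                           [np.sin(theta_true),  np.cos(theta_true)]])
        scene = model @ R_true.T + t_true                           # "probe" points, rigidly displaced

        def cost(params):
            # goodness of fit to minimize; plain RMS distance stands in for the SIM
            th, tx, ty = params
            R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
            aligned = (scene - [tx, ty]) @ R
            return np.sqrt(((aligned - model) ** 2).sum(axis=1)).mean()

        res = dual_annealing(cost, bounds=[(-np.pi, np.pi), (-1, 1), (-1, 1)], seed=0)
        print("recovered (theta, tx, ty):", np.round(res.x, 3), "residual:", round(res.fun, 4))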

  19. Eye region-based fusion technique of thermal and dark visual images for human face recognition

    NASA Astrophysics Data System (ADS)

    Bhowmik, Mrinal Kanti; Bhattacharjee, Debotosh; Basu, Dipak Kumar; Nasipuri, Mita

    2012-07-01

    We present an approach for human face recognition using an eye-region extraction/replacement method under low-illumination and varying-expression conditions. For the experiments, two different sets of face images, namely visual and corresponding thermal, are taken from the Imaging, Robotics, and Intelligent Systems (IRIS) thermal/visual face data. A decomposition and reconstruction technique based on Daubechies wavelet coefficients (db4) is used to generate the fused image by replacing the eye region in the visual image with the same region from the corresponding thermal image. After that, independent component analysis over the natural logarithm domain (Log-ICA) is used for feature extraction/dimensionality reduction, and finally a classifier is used to classify the fused face images. Two different image sets, i.e., training and test sets, are prepared from the IRIS thermal/visual face database to measure the accuracy of the proposed system. Experimental results show the proposed method is more efficient than other image fusion techniques that have used region extraction techniques for dark faces.
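
    The sketch below shows one plausible reading of the db4 decomposition/reconstruction fusion described above, using PyWavelets: wavelet coefficients of the visual image that fall inside the eye-region box are swapped with the corresponding thermal coefficients before reconstruction. The eye box, the decomposition depth and the coefficient-indexing scheme are assumptions, and the images are assumed co-registered and of equal size.

        import numpy as np
        import pywt

        def fuse_eye_region(visual, thermal, eye_box, wavelet="db4", level=2):
            cv = pywt.wavedec2(visual, wavelet, level=level)
            ct = pywt.wavedec2(thermal, wavelet, level=level)
            r0, r1, c0, c1 = eye_box                       # eye bounding box in pixel coordinates

            def swap(bv, bt, s):                           # copy thermal coefficients into the visual band
                out = bv.copy()
                out[r0 // s:r1 // s, c0 // s:c1 // s] = bt[r0 // s:r1 // s, c0 // s:c1 // s]
                return out

            fused = [swap(cv[0], ct[0], 2 ** level)]       # approximation band (coarsest scale)
            for k, (dv, dt) in enumerate(zip(cv[1:], ct[1:])):
                s = 2 ** (level - k)                       # detail bands, coarse to fine
                fused.append(tuple(swap(bv, bt, s) for bv, bt in zip(dv, dt)))
            return pywt.waverec2(fused, wavelet)

        visual = np.random.rand(128, 128)                  # stand-ins for registered visual/thermal faces
        thermal = np.random.rand(128, 128)
        fused = fuse_eye_region(visual, thermal, eye_box=(40, 64, 24, 104))
        print(fused.shape)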

  20. Infrared face recognition based on intensity of local micropattern-weighted local binary pattern

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Liu, Guodong

    2011-07-01

    The traditional local binary pattern (LBP) histogram representation extracts local micropatterns and assigns the same weight to all of them. To account for the different contributions of local micropatterns to face recognition, this paper proposes a weighted LBP histogram based on Weber's law. First, inspired by the psychological Weber's law, the intensity of a local micropattern is defined as the ratio between two terms: the relative intensity differences of a central pixel against its neighbors, and the intensity of the central pixel. Second, regarding the intensity of a local micropattern as its weight, the weighted LBP histogram is constructed with the defined weight. Finally, to make full use of spatial location information and reduce the complexity of recognition, partitioning and locality preserving projection are applied to obtain the final features. The proposed method is tested on our infrared face databases and yields recognition rates of 99.2% for the same-session situation and 96.4% for the elapsed-time situation, compared to the 97.6% and 92.1% produced by the method based on the traditional LBP.
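
    A minimal sketch of a Weber's-law-weighted LBP histogram along the lines described above: each 3x3 micropattern contributes a weight equal to the absolute sum of its neighbor-center differences divided by the center intensity, instead of a unit count. The exact weight definition, the neighborhood and the normalization are assumptions rather than the paper's formulation.

        import numpy as np

        def weber_weighted_lbp_hist(img, eps=1e-6):
            img = img.astype(np.float64)
            c = img[1:-1, 1:-1]                                       # central pixels
            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                       (1, 1), (1, 0), (1, -1), (0, -1)]
            code = np.zeros_like(c, dtype=np.int64)
            diff_sum = np.zeros_like(c)
            for bit, (dr, dc) in enumerate(offsets):
                n = img[1 + dr:img.shape[0] - 1 + dr, 1 + dc:img.shape[1] - 1 + dc]
                code |= (n >= c).astype(np.int64) << bit              # standard 8-bit LBP code
                diff_sum += n - c
            weight = np.abs(diff_sum) / (c + eps)                     # Weber-law "intensity" of the micropattern
            hist = np.bincount(code.ravel(), weights=weight.ravel(), minlength=256)
            return hist / (hist.sum() + eps)

        h = weber_weighted_lbp_hist(np.random.randint(0, 256, (64, 64)))
        print(h.shape)                                                # (256,) weighted histogram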

  1. How a hat may affect 3-month-olds' recognition of a face: an eye-tracking study.

    PubMed

    Bulf, Hermann; Valenza, Eloisa; Turati, Chiara

    2013-01-01

    Recent studies have shown that infants' face recognition rests on a robust face representation that is resilient to a variety of facial transformations such as rotations in depth, motion, occlusion or deprivation of inner/outer features. Here, we investigated whether 3-month-old infants' ability to represent the invariant aspects of a face is affected by the presence of an external add-on element, i.e. a hat. Using a visual habituation task, three experiments were carried out in which face recognition was investigated by manipulating the presence/absence of a hat during face encoding (i.e. habituation phase) and face recognition (i.e. test phase). An eye-tracker system was used to record the time infants spent looking at face-relevant information compared to the hat. The results showed that infants' face recognition was not affected by the presence of the external element when the type of the hat did not vary between the habituation and test phases, and when both the novel and the familiar face wore the same hat during the test phase (Experiment 1). Infants' ability to recognize the invariant aspects of a face was preserved also when the hat was absent in the habituation phase and the same hat was shown only during the test phase (Experiment 2). Conversely, when the novel face identity competed with a novel hat, the hat triggered the infants' attention, interfering with the recognition process and preventing the infants' preference for the novel face during the test phase (Experiment 3). Findings from the current study shed light on how faces and objects are processed when they are simultaneously presented in the same visual scene, contributing to an understanding of how infants respond to the multiple and composite information available in their surrounding environment. PMID:24349378

  2. Recognition disorders for famous faces and voices: a review of the literature and normative data of a new test battery.

    PubMed

    Quaranta, Davide; Piccininni, Chiara; Carlesimo, Giovanni Augusto; Luzzi, Simona; Marra, Camillo; Papagno, Costanza; Trojano, Luigi; Gainotti, Guido

    2016-03-01

    Several anatomo-clinical investigations have shown that familiar face recognition disorders not due to high level perceptual defects are often observed in patients with lesions of the right anterior temporal lobe (ATL). The meaning of these findings is, however, controversial, because some authors claim that these patients show pure instances of modality-specific 'associative prosopagnosia', whereas other authors maintain that in these patients voice recognition is also impaired and that these patients have a 'multimodal person recognition disorder'. To solve the problem of the nature of famous faces recognition disorders in patients affected by right ATL lesions, it is therefore very important to verify with formal tests if these patients are or are not able to recognize others by voice, but a direct comparison between the two modalities is hindered by the fact that voice recognition is more difficult than face recognition. To circumvent this difficulty, we constructed a test battery in which subjects were requested to recognize the same persons (well-known at the national level) through their faces and voices, evaluating familiarity and identification processes. The present paper describes the 'Famous People Recognition Battery' and reports the normative data necessary to clarify the nature of person recognition disorders observed in patients affected by right ATL lesions. PMID:26700802

  3. Face recognition by exploring information jointly in space, scale and orientation.

    PubMed

    Lei, Zhen; Liao, Shengcai; Pietikäinen, Matti; Li, Stan Z

    2011-01-01

    Information jointly contained in the image space, scale and orientation domains can provide rich and important clues not available in any one of these domains individually. Position, spatial frequency and orientation selectivity properties are believed to play an important role in visual perception. This paper proposes a novel face representation and recognition approach that explores information jointly in the image space, scale and orientation domains. Specifically, the face image is first decomposed into different scale and orientation responses by convolution with multiscale and multiorientation Gabor filters. Second, local binary pattern analysis is used to describe the neighboring relationship not only in image space, but also across the different scale and orientation responses. This way, information from different domains is explored to give a good face representation for recognition. Discriminant classification is then performed based upon weighted histogram intersection or conditional mutual information with linear discriminant analysis techniques. Extensive experimental results on the FERET, AR, and FRGC ver 2.0 databases show the significant advantages of the proposed method over existing ones. PMID:20643604
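
    A compact sketch of the space/scale/orientation idea: filter the image with a small Gabor bank, apply uniform LBP to each response magnitude, and concatenate block-wise histograms. The bank parameters, the LBP settings and the 4x4 grid are assumptions, and the descriptor is far simpler than the paper's joint representation.

        import numpy as np
        from scipy.signal import fftconvolve
        from skimage.filters import gabor_kernel
        from skimage.feature import local_binary_pattern

        def space_scale_orientation_descriptor(img, frequencies=(0.1, 0.2, 0.3), n_orient=4, grid=4):
            feats = []
            for f in frequencies:                                     # scales
                for k in range(n_orient):                             # orientations
                    kern = gabor_kernel(f, theta=k * np.pi / n_orient)
                    mag = np.abs(fftconvolve(img, kern, mode="same"))
                    mag8 = (mag / (mag.max() + 1e-12) * 255).astype(np.uint8)
                    codes = local_binary_pattern(mag8, 8, 1, method="uniform")
                    h, w = codes.shape
                    for i in range(grid):                             # block histograms keep spatial information
                        for j in range(grid):
                            block = codes[i * h // grid:(i + 1) * h // grid,
                                          j * w // grid:(j + 1) * w // grid]
                            hist, _ = np.histogram(block, bins=10, range=(0, 10), density=True)
                            feats.append(hist)
            return np.concatenate(feats)

        d = space_scale_orientation_descriptor(np.random.rand(64, 64))
        print(d.size)                                                 # 3 scales x 4 orientations x 16 blocks x 10 bins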

  4. Intraclass retrieval of nonrigid 3D objects: application to face recognition.

    PubMed

    Passalis, Georgios; Kakadiaris, Ioannis A; Theoharis, Theoharis

    2007-02-01

    As the size of the available collections of 3D objects grows, database transactions become essential for their management with the key operation being retrieval (query). Large collections are also precategorized into classes so that a single class contains objects of the same type (e.g., human faces, cars, four-legged animals). It is shown that general object retrieval methods are inadequate for intraclass retrieval tasks. We advocate that such intraclass problems require a specialized method that can exploit the basic class characteristics in order to achieve higher accuracy. A novel 3D object retrieval method is presented which uses a parameterized annotated model of the shape of the class objects, incorporating its main characteristics. The annotated subdivision-based model is fitted onto objects of the class using a deformable model framework, converted to a geometry image and transformed into the wavelet domain. Object retrieval takes place in the wavelet domain. The method does not require user interaction, achieves high accuracy, is efficient for use with large databases, and is suitable for nonrigid object classes. We apply our method to the face recognition domain, one of the most challenging intraclass retrieval tasks. We used the Face Recognition Grand Challenge v2 database, yielding an average verification rate of 95.2 percent at a 10^-3 false accept rate. The latest results of our work can be found at http://www.cbl.uh.edu/UR8D/. PMID:17170476

  5. Multiple scales combined principal component analysis deep learning network for face recognition

    NASA Astrophysics Data System (ADS)

    Tian, Lei; Fan, Chunxiao; Ming, Yue

    2016-03-01

    It is well known that higher level features can represent the abstract semantics of the original data. We propose a multiple-scales combined deep learning network to learn a set of high-level feature representations through each stage of a convolutional neural network for face recognition, named the multiscaled principal component analysis (PCA) Network (MS-PCANet). There are two main differences between our model and the traditional deep learning network. On the one hand, we obtain the prefixed filter kernels by learning the principal components of image patches using PCA, nonlinearly process the convolutional results using simple binary hashing, and pool them using the spatial pyramid pooling method. On the other hand, in our model, the output features of several stages are fed to the classifier. The purpose of combining feature representations from multiple stages is to provide multiscaled features to the classifier, since the features in later stages are more global and invariant than those in early stages. Therefore, our MS-PCANet feature compactly encodes both holistic abstract information and local specific information. Extensive experimental results show that our MS-PCANet model can efficiently extract high-level feature representations and outperform state-of-the-art face/expression recognition methods on multiple-modality benchmark face-related datasets.
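
    The sketch below shows only the first PCANet-style building blocks this abstract mentions: convolution kernels learned as principal components of mean-removed image patches, and binary hashing of the filter responses. The patch size, stride, filter count and the omission of block histograms, spatial pyramid pooling and multi-stage concatenation are simplifications and assumptions.

        import numpy as np
        from scipy.signal import convolve2d

        def learn_pca_filters(images, patch=7, n_filters=8, stride=2):
            """Filter bank = leading principal components of mean-removed patches."""
            patches = []
            for im in images:
                H, W = im.shape
                for i in range(0, H - patch + 1, stride):
                    for j in range(0, W - patch + 1, stride):
                        p = im[i:i + patch, j:j + patch].ravel()
                        patches.append(p - p.mean())                  # remove the patch mean
            X = np.stack(patches)
            _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
            return Vt[:n_filters].reshape(n_filters, patch, patch)

        def binary_hash(response_maps):
            """Threshold each response at zero and pack the bits into one integer map."""
            bits = (np.stack(response_maps) > 0).astype(np.int64)
            weights = (2 ** np.arange(bits.shape[0]))[:, None, None]
            return (bits * weights).sum(axis=0)

        rng = np.random.default_rng(0)
        train = [rng.random((32, 32)) for _ in range(10)]
        filters = learn_pca_filters(train)
        responses = [convolve2d(train[0] - train[0].mean(), f, mode="same") for f in filters]
        print(filters.shape, binary_hash(responses).max())            # codes range over 0 .. 2**8 - 1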

  6. Attentional biases and recognition accuracy: What happens when multiple own- and other-race faces are encountered simultaneously?

    PubMed

    Semplonius, Thalia; Mondloch, Catherine J

    2015-01-01

    Adults recognize own-race faces more accurately than other-race faces. We investigated three characteristics of laboratory investigations hypothesized to minimize the magnitude of the own-race recognition advantage (ORA): lack of competition for attention and instructions that emphasize individuating faces during the study phase, and a lack of uncertainty during the test phase. Across two experiments, participants studied faces individually, in arrays comprising multiple faces and household objects, or in naturalistic scenes (presented on an eye-tracker); they were instructed to remember everything, memorize faces, or form impressions of people. They then completed one of two recognition tasks--an old/new recognition task or a lineup recognition task. Task instructions influenced time spent looking at faces but not the allocation of attention to own- versus other-race faces. The magnitude of the ORA was independent of both task instructions and test protocol, with some modulation by how faces were presented in the study phase. We discuss these results in light of current theories of the ORA. PMID:26489216

  7. Non-intrusive gesture recognition system combining with face detection based on Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Jin, Jing; Wang, Yuanqing; Xu, Liujing; Cao, Liqun; Han, Lei; Zhou, Biye; Li, Minggao

    2014-11-01

    A non-intrusive gesture recognition human-machine interaction system is proposed in this paper. In order to solve the hand-positioning problem, which is a difficulty in current algorithms, face detection is used as a pre-processing step to narrow the search area and find the user's hand quickly and accurately. A Hidden Markov Model (HMM) is used for gesture recognition. A certain number of basic gesture units are trained as HMM models. At the same time, an improved 8-direction feature vector is proposed and used to quantify characteristics in order to improve the detection accuracy. The proposed system can be applied in interaction equipment without special training for users, such as household interactive television
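
    As an illustration of the direction-based observation symbols such discrete-HMM gesture recognizers commonly use, the snippet below quantizes a hand trajectory into the classical 8-direction codebook; the paper's improved 8-direction feature is not given here, so this plain version and the synthetic circular trajectory are assumptions.

        import numpy as np

        def directions_8(trajectory):
            """Map successive hand positions to symbols 0..7 (45-degree direction bins)."""
            traj = np.asarray(trajectory, dtype=float)
            d = np.diff(traj, axis=0)                    # motion vectors between frames
            ang = np.arctan2(d[:, 1], d[:, 0])           # angle in radians
            return (np.round(ang / (np.pi / 4)) % 8).astype(int)

        t = np.linspace(0, 2 * np.pi, 20)                # a roughly circular gesture
        symbols = directions_8(np.stack([np.cos(t), np.sin(t)], axis=1))
        print(symbols)                                   # discrete observation sequence for HMM training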

  8. Gestalt interest points for image description in weight-invariant face recognition

    NASA Astrophysics Data System (ADS)

    Hörhan, Markus; Eidenberger, Horst

    2015-03-01

    In this work, we propose two improvements of the Gestalt Interest Points (GIP) algorithm for the recognition of faces of people who have undergone significant weight change. The basic assumption is that some interest points contribute more to the description of such objects than others. We assume that we can eliminate certain interest points to make the whole method more efficient while retaining our classification results. To find out which Gestalt interest points can be eliminated, we conducted experiments on the contrast and orientation of face features. Furthermore, we investigated the robustness of GIP against image rotation. The experiments show that our method is rotation invariant and - in this practically relevant forensic domain - outperforms state-of-the-art methods such as SIFT, SURF, ORB and FREAK.

  9. Validation of the face-name pairs task in major depression: impaired recall but not recognition.

    PubMed

    Smith, Kimberley J; Mullally, Sinead; McLoughlin, Declan; O'Mara, Shane

    2014-01-01

    Major depression can be associated with neurocognitive deficits which are believed in part to be related to medial temporal lobe pathology. The purpose of this study was to investigate this impairment using a hippocampal-dependent neuropsychological task. The face-name pairs task was used to assess associative memory functioning in 19 patients with major depression. When compared to age-sex-and-education matched controls, patients with depression showed impaired learning, delayed cued-recall, and delayed free-recall. However, they also showed preserved recognition of the verbal and nonverbal components of this task. Results indicate that the face-name pairs task is sensitive to neurocognitive deficits in major depression. PMID:24575068

  10. Paternal kin recognition and infant care in white-faced capuchins (Cebus capucinus).

    PubMed

    Sargeant, Elizabeth J; Wikberg, Eva C; Kawamura, Shoji; Jack, Katharine M; Fedigan, Linda M

    2016-06-01

    Evidence for paternal kin recognition and paternally biased behaviors is mixed among primates. We investigate whether infant handling behaviors exhibit paternal kin biases in wild white-faced capuchins monkeys (Cebus capucinus) by comparing interactions between infants and genetic sires, potential sires, siblings (full sibling, maternal, and paternal half-siblings) and unrelated handlers. We used a linear mixed model approach to analyze data collected on 21 focal infants from six groups in Sector Santa Rosa, Costa Rica. Our analyses suggest that the best predictor of adult and subadult male interactions with an infant is the male's dominance status, not his paternity status. We found that maternal siblings but not paternal siblings handled infants more than did unrelated individuals. We conclude that maternal but not paternal kinship influence patterns of infant handling in white-faced capuchins, regardless of whether or not they can recognize paternal kin. Am. J. Primatol. 78:659-668, 2016. © 2016 Wiley Periodicals, Inc. PMID:26815856

  11. Computer-Assisted Face Processing Instruction Improves Emotion Recognition, Mentalizing, and Social Skills in Students with ASD

    ERIC Educational Resources Information Center

    Rice, Linda Marie; Wall, Carla Anne; Fogel, Adam; Shic, Frederick

    2015-01-01

    This study examined the extent to which a computer-based social skills intervention called "FaceSay"™ was associated with improvements in affect recognition, mentalizing, and social skills of school-aged children with Autism Spectrum Disorder (ASD). "FaceSay"™ offers students simulated practice with eye gaze, joint attention,…

  12. Does Perceived Race Affect Discrimination and Recognition of Ambiguous-Race Faces? A Test of the Sociocognitive Hypothesis

    ERIC Educational Resources Information Center

    Rhodes, Gillian; Lie, Hanne C.; Ewing, Louise; Evangelista, Emma; Tanaka, James W.

    2010-01-01

    Discrimination and recognition are often poorer for other-race than own-race faces. These other-race effects (OREs) have traditionally been attributed to reduced perceptual expertise, resulting from more limited experience, with other-race faces. However, recent findings suggest that sociocognitive factors, such as reduced motivation to…

  13. Recognition of Immaturity and Emotional Expressions in Blended Faces by Children with Autism and Other Developmental Disabilities

    ERIC Educational Resources Information Center

    Gross, Thomas F.

    2008-01-01

    The recognition of facial immaturity and emotional expression by children with autism, language disorders, mental retardation, and non-disabled controls was studied in two experiments. Children identified immaturity and expression in upright and inverted faces. The autism group identified fewer immature faces and expressions than control (Exp. 1 &…

  14. Upright or inverted, entire or exploded: right-hemispheric superiority in face recognition withstands multiple spatial manipulations

    PubMed Central

    Marzoli, Daniele; Tommasi, Luca

    2015-01-01

    Background. The ability to identify faces has been interpreted as a cerebral specialization based on the evolutionary importance of these social stimuli, and a number of studies have shown that this function is mainly lateralized in the right hemisphere. The aim of this study was to assess the right-hemispheric specialization in face recognition in unfamiliar circumstances. Methods. Using a divided visual field paradigm, we investigated hemispheric asymmetries in the matching of two subsequent faces, using two types of transformation hindering identity recognition, namely upside-down rotation and spatial “explosion” (female and male faces were fractured into parts so that their mutual spatial relations were left intact), as well as their combination. Results. We confirmed the right-hemispheric superiority in face processing. Moreover, we found a decrease of the identity recognition for more extreme “levels of explosion” and for faces presented upside-down (either as sample or target stimuli) than for faces presented upright, as well as an advantage in the matching of female compared to male faces. Discussion. We conclude that the right-hemispheric superiority for face processing is not an epiphenomenon of our expertise, because we are not often exposed to inverted and “exploded” faces, but rather a robust hemispheric lateralization. We speculate that these results could be attributable to the prevalence of right-handedness in humans and/or to early biases in social interactions. PMID:26644986

  15. The rehabilitation of face recognition impairments: a critical review and future directions

    PubMed Central

    Bate, Sarah; Bennetts, Rachel J.

    2014-01-01

    While much research has investigated the neural and cognitive characteristics of face recognition impairments (prosopagnosia), much less work has examined their rehabilitation. In this paper, we present a critical analysis of the studies that have attempted to improve face-processing skills in acquired and developmental prosopagnosia, and place them in the context of the wider neurorehabilitation literature. First, we examine whether neuroplasticity within the typical face-processing system varies across the lifespan, in order to examine whether timing of intervention may be crucial. Second, we examine reports of interventions in acquired prosopagnosia, where training in compensatory strategies has had some success. Third, we examine reports of interventions in developmental prosopagnosia, where compensatory training in children and remedial training in adults have both been successful. However, the gains are somewhat limited—compensatory strategies have resulted in labored recognition techniques and limited generalization to untrained faces, and remedial techniques require longer periods of training and result in limited maintenance of gains. Critically, intervention suitability and outcome in both forms of the condition likely depends on a complex interaction of factors, including prosopagnosia severity, the precise functional locus of the impairment, and individual differences such as age. Finally, we discuss future directions in the rehabilitation of prosopagnosia, and the possibility of boosting the effects of cognitive training programmes by simultaneous administration of oxytocin or non-invasive brain stimulation. We conclude that future work using more systematic methods and larger participant groups is clearly required, and in the case of developmental prosopagnosia, there is an urgent need to develop early detection and remediation tools for children, in order to optimize intervention outcome. PMID:25100965

  16. Expression-invariant face recognition using depth and intensity dual-tree complex wavelet transform features

    NASA Astrophysics Data System (ADS)

    Ayatollahi, Fazael; Raie, Abolghasem A.; Hajati, Farshid

    2015-03-01

    A new multimodal expression-invariant face recognition method is proposed that extracts features from rigid and semirigid regions of the face, which are less affected by facial expressions. The dual-tree complex wavelet transform is applied at one decomposition level to extract the desired features from range and intensity images by transforming the regions into eight subimages, consisting of six band-pass subimages representing face details and two low-pass subimages representing face approximations. A support vector machine is used to classify in both feature-fusion and score-fusion modes. To test the algorithm, the BU-3DFE and FRGC v2.0 datasets were selected. The BU-3DFE dataset was tested with low-intensity-versus-high-intensity and high-intensity-versus-low-intensity strategies, using all expressions at different intensity levels in both the training and testing stages. Findings include a best rank-1 identification rate of 99.8% and a verification rate of 100% at a 0.1% false acceptance rate. The FRGC v2.0 dataset was tested with the neutral-versus-non-neutral strategy, which uses images without expression in training and with expression in testing, achieving a best rank-1 identification rate of 93.5% and a verification rate of 97.4% at a 0.1% false acceptance rate.

  17. Aging Face Recognition: A Hierarchical Learning Model Based on Local Patterns Selection.

    PubMed

    Li, Zhifeng; Gong, Dihong; Li, Xuelong; Tao, Dacheng

    2016-05-01

    Aging face recognition refers to matching the same person's faces across different ages, e.g., matching a person's older face to his (or her) younger one, which has many important practical applications, such as finding missing children. The major challenge of this task is that facial appearance is subject to significant change during the aging process. In this paper, we propose to solve the problem with a hierarchical model based on two-level learning. At the first level, effective features are learned from low-level microstructures, based on our new feature descriptor called local pattern selection (LPS). The proposed LPS descriptor greedily selects low-level discriminant patterns in such a way that intra-user dissimilarity is minimized. At the second level, higher-level visual information is further refined based on the output from the first level. To evaluate the performance of our new method, we conduct extensive experiments on the MORPH data set (the largest face aging data set available in the public domain), which show a significant improvement in accuracy over the state-of-the-art methods. PMID:26930681

  18. Recognition and context memory for faces from own and other ethnic groups: a remember-know investigation.

    PubMed

    Horry, Ruth; Wright, Daniel B; Tredoux, Colin G

    2010-03-01

    People are more accurate at recognizing faces from their own ethnic group than at recognizing faces from other ethnic groups. This other-ethnicity effect (OEE) in recognition may be produced by a deficit in recollective memory for other-ethnicity faces. In a single study, White and Black participants saw White and Black faces presented within several different visual contexts. The participants were then given an old/new recognition task. Old responses were followed by remember-know-guess judgments and context judgments. Own-ethnicity faces were recognized more accurately, were given more remember responses, and produced more accurate context judgments than did other-ethnicity faces. These results are discussed in a dual-process framework, and implications for eyewitness memory are considered. PMID:20173186

  19. An Efficient Feature Extraction Method with Pseudo-Zernike Moment in RBF Neural Network-Based Human Face Recognition System

    NASA Astrophysics Data System (ADS)

    Haddadnia, Javad; Ahmadi, Majid; Faez, Karim

    2003-12-01

    This paper introduces a novel method for the recognition of human faces in digital images using a new feature extraction method that combines global and local information in frontal views of facial images. A radial basis function (RBF) neural network with a hybrid learning algorithm (HLA) has been used as the classifier. The proposed feature extraction method includes human face localization derived from shape information. An efficient distance measure, the facial candidate threshold (FCT), is defined to distinguish between face and nonface images. The pseudo-Zernike moment invariant (PZMI), with an efficient method for selecting the moment order, has been used. A newly defined parameter named the axis correction ratio (ACR) of images is introduced for disregarding irrelevant information in face images. In this paper, the effect of these parameters on disregarding irrelevant information and improving the recognition rate is studied. We also evaluate the effect of the PZMI order on the recognition rate of the proposed technique, as well as on the RBF neural network learning speed. Simulation results on the face database of the Olivetti Research Laboratory (ORL) indicate that the proposed method for human face recognition yielded a recognition rate of 99.3%.

  20. [Effects of retrieving context information on accuracy-confidence relationships in recognition memory for faces].

    PubMed

    Ishizaki, Chikage; Naka, Makiko; Aritomi, Miyoko

    2007-04-01

    We investigated how retrieval conditions affect accuracy-confidence (A-C) relationships in recognition memory for faces. Seventy participants took a face-recognition test and rated their confidence in their judgments. Twenty-three participants were assigned to a retrieval condition, where they were encouraged to remember background information (scenery) of each picture just before rating their confidence. Twenty-four participants were assigned to a verbalizing condition, in which they were encouraged to remember and verbally describe the background of each picture before rating. Twenty-three participants were assigned to a control condition. The results showed that for the control condition, an A-C relationship was found for old items but not for new items, replicating the results of Takahashi (1998) and Wagenaar (1988). In contrast, in the retrieval condition, an A-C relationship was found for both old and new items. In the verbalizing condition, an A-C relationship was not found for either old or new items. The results showed that retrieving background information affects A-C relationships, supporting the idea that confidence ratings rely not only on memory traces but also on various kinds of information such as retrieved background scenery. Implications for eyewitness testimony are discussed. PMID:17511249

  1. Applied learning-based color tone mapping for face recognition in video surveillance system

    NASA Astrophysics Data System (ADS)

    Yew, Chuu Tian; Suandi, Shahrel Azmin

    2012-04-01

    In this paper, we present an applied learning-based color tone mapping technique for video surveillance systems. This technique can be applied to both color and grayscale surveillance images. The basic idea is to learn the color or intensity statistics from a training dataset of photorealistic images of the candidates appearing in the surveillance images, and remap the color or intensity of the input image so that its statistics match those of the training dataset. It is well known that differences in commercial surveillance camera models and in the signal processing chipsets used by different manufacturers cause the color and intensity of images to differ from one another, creating additional challenges for face recognition in video surveillance systems. Using multi-class Support Vector Machines as the classifier on a publicly available video surveillance camera database, namely the SCface database, this approach is validated and compared to the results of using a holistic approach on grayscale images. The results show that this technique is suitable for improving the color or intensity quality of video surveillance systems for face recognition.
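
    A sketch of learning-based tone mapping in its simplest form (not the paper's exact remapping): each channel of a surveillance frame is remapped so that its mean and standard deviation match statistics that would normally be learned from the training set of photorealistic candidate images; the reference statistics and the clipping range below are assumptions.

        import numpy as np

        def tone_map_to_reference(img, ref_mean, ref_std, eps=1e-6):
            """Per-channel mean/std transfer toward learned reference statistics."""
            img = img.astype(np.float64)
            out = np.empty_like(img)
            for c in range(img.shape[2]):
                ch = img[..., c]
                out[..., c] = (ch - ch.mean()) / (ch.std() + eps) * ref_std[c] + ref_mean[c]
            return np.clip(out, 0, 255).astype(np.uint8)

        # reference statistics standing in for those learned from the training dataset
        ref_mean, ref_std = np.array([120.0, 110.0, 100.0]), np.array([45.0, 40.0, 42.0])
        frame = np.random.randint(0, 256, (96, 96, 3), dtype=np.uint8)   # stand-in surveillance frame
        print(tone_map_to_reference(frame, ref_mean, ref_std).mean(axis=(0, 1)))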

  2. Validation of a short-term memory test for the recognition of people and faces.

    PubMed

    Leyk, D; Sievert, A; Heiss, A; Gorges, W; Ridder, D; Alexander, T; Wunderlich, M; Ruther, T

    2008-08-01

    Memorising and processing faces is a short-term memory dependent task of utmost importance in the security domain, in which constant and high performance is a must. Especially in access or passport control-related tasks, the timely identification of performance decrements is essential, margins of error are narrow and inadequate performance may have grave consequences. However, conventional short-term memory tests frequently use abstract settings with little relevance to working situations. They may thus be unable to capture task-specific decrements. The aim of the study was to devise and validate a new test, better reflecting job specifics and employing appropriate stimuli. After 1.5 s (short) or 4.5 s (long) presentation, a set of seven portraits of faces had to be memorised for comparison with two control stimuli. Stimulus appearance followed 2 s (first item) and 8 s (second item) after set presentation. Twenty eight subjects (12 male, 16 female) were tested at seven different times of day, 3 h apart. Recognition rates were above 60% even for the least favourable condition. Recognition was significantly better in the 'long' condition (+10%) and for the first item (+18%). Recognition time showed significant differences (10%) between items. Minor effects of learning were found for response latencies only. Based on occupationally relevant metrics, the test displayed internal and external validity, consistency and suitability for further use in test/retest scenarios. In public security, especially where access to restricted areas is monitored, margins of error are narrow and operator performance must remain high and level. Appropriate schedules for personnel, based on valid test results, are required. However, task-specific data and performance tests, permitting the description of task specific decrements, are not available. Commonly used tests may be unsuitable due to undue abstraction and insufficient reference to real-world conditions. Thus, tests are required

  3. Image disparity in cross-spectral face recognition: mitigating camera and atmospheric effects

    NASA Astrophysics Data System (ADS)

    Cao, Zhicheng; Schmid, Natalia A.; Li, Xin

    2016-05-01

    Matching facial images acquired in different electromagnetic spectral bands remains a challenge. An example of this type of comparison is matching active or passive infrared (IR) images against a gallery of visible face images. When combined with cross-distance matching, this problem becomes even more challenging due to the deteriorated quality of the IR data. As an example, we consider a scenario where visible light images are acquired at a short standoff distance while IR images are long-range data. To address the difference in image quality due to atmospheric and camera effects, typical degrading factors observed in long-range data, we propose two approaches that allow us to bring the quality of visible and IR face images into parity. The first approach applies Gaussian-based smoothing functions to images acquired at a short distance (visible light images in the case we analyze). The second approach applies denoising and enhancement to the low-quality IR face images. A quality measure tool called the Adaptive Sharpness Measure, an improvement on the well-known Tenengrad method, is utilized to guide the quality parity process. For the recognition algorithm, a composite operator combining Gabor filters, Local Binary Patterns (LBP), generalized LBP and the Weber Local Descriptor (WLD) is used. The composite operator encodes both the magnitude and phase responses of the Gabor filters. The combination of LBP and WLD utilizes both the orientation and intensity information of edges. Different IR bands, short-wave infrared (SWIR) and near-infrared (NIR), and different long standoff distances are considered. The experimental results show that in all cases the proposed image quality parity technique (both approaches) benefits the final recognition performance.
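
    A rough sketch of the first quality-parity approach: Gaussian-smooth the short-range visible image until a sharpness measure drops to that of the long-range IR image. The plain Tenengrad measure stands in for the paper's Adaptive Sharpness Measure, and the sigma search grid and synthetic images are assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_filter, sobel

        def tenengrad(img):
            """Classical Tenengrad sharpness: mean squared Sobel gradient magnitude."""
            gx, gy = sobel(img, axis=0), sobel(img, axis=1)
            return np.mean(gx ** 2 + gy ** 2)

        def match_sharpness(high_quality, target_sharpness, sigmas=np.linspace(0.1, 5.0, 50)):
            """Smooth with the smallest sigma whose sharpness falls to the target level."""
            blurred = high_quality
            for s in sigmas:
                blurred = gaussian_filter(high_quality, sigma=s)
                if tenengrad(blurred) <= target_sharpness:
                    return blurred, s
            return blurred, sigmas[-1]

        rng = np.random.default_rng(7)
        visible = rng.random((96, 96))                                    # sharp short-range stand-in
        ir_long_range = gaussian_filter(rng.random((96, 96)), sigma=2.5)  # degraded long-range stand-in
        matched, sigma = match_sharpness(visible, tenengrad(ir_long_range))
        print("sigma used for quality parity:", round(float(sigma), 2))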

  4. Template protection and its implementation in 3D face recognition systems

    NASA Astrophysics Data System (ADS)

    Zhou, Xuebing

    2007-04-01

    As biometric recognition systems are widely applied in various application areas, security and privacy risks have recently attracted the attention of the biometric community. Template protection techniques prevent stored reference data from revealing private biometric information and enhance the security of biometric systems against attacks such as identity theft and cross matching. This paper concentrates on a template protection algorithm that merges methods from cryptography, error correction coding and biometrics. The key component of the algorithm is the conversion of biometric templates into binary vectors. It is shown that the binary vectors should be robust, uniformly distributed, statistically independent and collision-free so that authentication performance can be optimized and information leakage can be avoided. Depending on the statistical character of the biometric template, different approaches for transforming biometric templates into compact binary vectors are presented. The proposed methods are integrated into a 3D face recognition system and tested on the 3D facial images of the FRGC database. It is shown that the resulting binary vectors provide an authentication performance similar to that of the original 3D face templates. A high security level is achieved with reasonable false acceptance and false rejection rates, based on an efficient statistical analysis. The algorithm estimates the statistical character of biometric templates from a number of biometric samples in the enrollment database. For the FRGC 3D face database, our tests show only a small difference in robustness and discriminative power between the classification results obtained under the assumption of uniformly distributed templates and those obtained under the assumption of Gaussian-distributed templates.
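
    A toy sketch of one way to obtain roughly uniformly distributed bits from real-valued templates, thresholding each component at its population median, with a simple Hamming-distance check standing in for the full error-correction-based (fuzzy commitment) scheme; the thresholding rule, template dimensionality and decision threshold are assumptions.

        import numpy as np

        def binarize_templates(templates):
            """Thresholding at the population median makes each bit close to uniformly distributed."""
            med = np.median(templates, axis=0)
            return (templates > med).astype(np.uint8), med

        def verify(bits_enrolled, bits_probe, max_hamming):
            """Toy verification: accept if the Hamming distance is small enough. A real scheme would
            bind the bits to an error-correcting codeword instead of storing them directly."""
            return int(np.count_nonzero(bits_enrolled ^ bits_probe)) <= max_hamming

        rng = np.random.default_rng(3)
        gallery = rng.normal(size=(100, 64))                 # stand-in real-valued face templates
        bits, med = binarize_templates(gallery)
        probe = (gallery[0] + rng.normal(scale=0.1, size=64) > med).astype(np.uint8)
        print(verify(bits[0], probe, max_hamming=8))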

  5. A Multi-Modal Face Recognition Method Using Complete Local Derivative Patterns and Depth Maps

    PubMed Central

    Yin, Shouyi; Dai, Xu; Ouyang, Peng; Liu, Leibo; Wei, Shaojun

    2014-01-01

    In this paper, we propose a multi-modal 2D + 3D face recognition method for a smart city application based on a Wireless Sensor Network (WSN) and various kinds of sensors. Depth maps are exploited for the 3D face representation. As for feature extraction, we propose a new feature called Complete Local Derivative Pattern (CLDP). It adopts the idea of layering and has four layers. In the whole system, we apply CLDP separately on Gabor features extracted from a 2D image and depth map. Then, we obtain two features: CLDP-Gabor and CLDP-Depth. The two features weighted by the corresponding coefficients are combined together in the decision level to compute the total classification distance. At last, the probe face is assigned the identity with the smallest classification distance. Extensive experiments are conducted on three different databases. The results demonstrate the robustness and superiority of the new approach. The experimental results also prove that the proposed multi-modal 2D + 3D method is superior to other multi-modal ones and CLDP performs better than other Local Binary Pattern (LBP) based features. PMID:25333290

  7. Sparse and Dense Hybrid Representation via Dictionary Decomposition for Face Recognition.

    PubMed

    Jiang, Xudong; Lai, Jian

    2015-05-01

    Sparse representation provides an effective tool for classification under the conditions that every class has sufficient representative training samples and the training data are uncorrupted. These conditions may not hold true in many practical applications. Face identification is an example where we have a large number of identities but sufficient representative and uncorrupted training images cannot be guaranteed for every identity. A violation of the two conditions leads to poor performance of sparse representation-based classification (SRC). This paper addresses this critical issue by analyzing the merits and limitations of SRC. A sparse- and dense-hybrid representation (SDR) framework is proposed to alleviate the problems of SRC. We further propose a procedure of supervised low-rank (SLR) dictionary decomposition to facilitate the proposed SDR framework. In addition, the problem of corrupted training data is also alleviated by the proposed SLR dictionary decomposition. The application of the proposed SDR-SLR approach to face recognition verifies its effectiveness and advancement of the field. Extensive experiments on benchmark face databases demonstrate that it consistently outperforms state-of-the-art sparse representation-based approaches and that the performance gains are significant in most cases. PMID:26353329
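
    For context, the snippet below sketches the baseline sparse-representation-based classification (SRC) whose limitations this paper addresses: the probe is coded over all training samples with an l1 penalty and assigned to the class whose samples reconstruct it with the smallest residual. The use of scikit-learn's Lasso, the regularization strength and the synthetic gallery are assumptions.

        import numpy as np
        from sklearn.linear_model import Lasso

        def src_classify(A, labels, y, alpha=0.01):
            coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
            coder.fit(A, y)                                     # sparse code of the probe over the dictionary
            x = coder.coef_
            residuals = {}
            for c in np.unique(labels):
                xc = np.where(labels == c, x, 0.0)              # keep only this class's coefficients
                residuals[c] = np.linalg.norm(y - A @ xc)
            return min(residuals, key=residuals.get)

        rng = np.random.default_rng(4)
        labels = np.repeat(np.arange(5), 8)                     # 5 identities, 8 training faces each
        A = rng.normal(size=(120, labels.size))                 # columns = vectorized training faces
        A /= np.linalg.norm(A, axis=0)
        y = A[:, 3] + 0.05 * rng.normal(size=120)               # noisy probe drawn from identity 0
        print(src_classify(A, labels, y))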

  8. Galactose uncovers face recognition and mental images in congenital prosopagnosia: the first case report.

    PubMed

    Esins, Janina; Schultz, Johannes; Bülthoff, Isabelle; Kennerknecht, Ingo

    2014-09-01

    A woman in her early 40s with congenital prosopagnosia and attention deficit hyperactivity disorder observed for the first time sudden and extensive improvement of her face recognition abilities, mental imagery, and sense of navigation after galactose intake. This effect of galactose on prosopagnosia has never been reported before. Even if this effect is restricted to a subform of congenital prosopagnosia, galactose might improve the condition of other prosopagnosics. Congenital prosopagnosia, the inability to recognize other people by their face, has extensive negative impact on everyday life. It has a high prevalence of about 2.5%. Monosaccharides are known to have a positive impact on cognitive performance. Here, we report the case of a prosopagnosic woman for whom the daily intake of 5 g of galactose resulted in a remarkable improvement of her lifelong face blindness, along with improved sense of orientation and more vivid mental imagery. All these improvements vanished after discontinuing galactose intake. The self-reported effects of galactose were wide-ranging and remarkably strong but could not be reproduced for 16 other prosopagnosics tested. Indications about heterogeneity within prosopagnosia have been reported; this could explain the difficulty to find similar effects in other prosopagnosics. Detailed analyses of the effects of galactose in prosopagnosia might give more insight into the effects of galactose on human cognition in general. Galactose is cheap and easy to obtain, therefore, a systematic test of its positive effects on other cases of congenital prosopagnosia may be warranted. PMID:24164936

  9. Fast Face-Recognition Optical Parallel Correlator Using High Accuracy Correlation Filter

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Kodate, Kashiko

    2005-11-01

    We designed and fabricated a fully automatic fast face recognition optical parallel correlator [E. Watanabe and K. Kodate: Appl. Opt. 44 (2005) 5666] based on the VanderLugt principle. The implementation of an as-yet unattained ultra-high-speed system was aided by reconfiguring the system to make it suitable for easier parallel processing, as well as by composing a higher-accuracy correlation filter and a high-speed ferroelectric liquid crystal spatial light modulator (FLC-SLM). In running trial experiments using this system (dubbed FARCO), we succeeded in acquiring remarkably low error rates of 1.3% for the false match rate (FMR) and 2.6% for the false non-match rate (FNMR). Given the results of our experiments, the aim of this paper is to examine methods of designing correlation filters and arranging database image arrays for even faster parallel correlation, underlining the issues of calculation technique, quantization bit rate, pixel size and shift from the optical axis. The correlation filter has demonstrated excellent performance and higher precision than classical correlation and the joint transform correlator (JTC). Moreover, the arrangement of multi-object reference images leads to 10-channel correlation signals as sharply marked as those of a single channel. This experimental result demonstrates great potential for achieving a processing speed of 10,000 faces/s.
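
    A purely digital sketch of the phase-only-filter correlation that underlies VanderLugt-type correlators such as the one described here; the synthetic scene, the small regularization constant and the FFT-based implementation are assumptions standing in for the optical setup.

        import numpy as np

        def pof_correlate(scene, reference):
            """Correlation plane of a scene against a phase-only filter (POF) of the reference."""
            S = np.fft.fft2(scene)
            R = np.fft.fft2(reference, s=scene.shape)
            pof = np.conj(R) / (np.abs(R) + 1e-12)          # keep only the phase of the reference spectrum
            return np.abs(np.fft.ifft2(S * pof))

        rng = np.random.default_rng(5)
        face = rng.random((32, 32))                         # stand-in reference face
        scene = np.zeros((128, 128))
        scene[40:72, 60:92] = face                          # embed the face in a larger scene
        plane = pof_correlate(scene, face)
        print(np.unravel_index(np.argmax(plane), plane.shape))   # peak near the embedded face location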

  10. Efficient face recognition using local derivative pattern and shifted phase-encoded fringe-adjusted joint transform correlation

    NASA Astrophysics Data System (ADS)

    Biswas, Bikram K.; Alam, Mohammad S.; Chowdhury, Suparna

    2016-04-01

    An improved shifted phase-encoded fringe-adjusted joint transform correlation technique is proposed in this paper for face recognition which can accommodate the detrimental effects of noise, illumination, and other 3D distortions such as expression and rotation variations. This technique utilizes a third order local derivative pattern operator (LDP3) followed by a shifted phase-encoded fringe-adjusted joint transform correlation (SPFJTC) operation. The local derivative pattern operator ensures better facial feature extraction in a variable environment while the SPFJTC yields robust correlation output for the desired signals. The performance of the proposed method is determined by using the Yale Face Database, Yale Face Database B, and Georgia Institute of Technology Face Database. This technique has been found to yield better face recognition rate compared to alternate JTC based techniques.

  11. The Facial Expressive Action Stimulus Test. A test battery for the assessment of face memory, face and object perception, configuration processing, and facial expression recognition

    PubMed Central

    de Gelder, Beatrice; Huis in ‘t Veld, Elisabeth M. J.; Van den Stock, Jan

    2015-01-01

    There are many ways to assess face perception skills. In this study, we describe a novel task battery FEAST (Facial Expressive Action Stimulus Test) developed to test recognition of identity and expressions of human faces as well as stimulus control categories. The FEAST consists of a neutral and emotional face memory task, a face and shoe identity matching task, a face and house part-to-whole matching task, and a human and animal facial expression matching task. The identity and part-to-whole matching tasks contain both upright and inverted conditions. The results provide reference data of a healthy sample of controls in two age groups for future users of the FEAST. PMID:26579004

  12. Two-stage sparse representation-based face recognition with reconstructed images

    NASA Astrophysics Data System (ADS)

    Cheng, Guangtao; Song, Zhanjie; Lei, Yang; Han, Xiuning

    2014-09-01

    To address the challenge that both the training and testing images may be contaminated by random pixel corruption, occlusion, and disguise, a robust face recognition algorithm based on two-stage sparse representation is proposed. Specifically, noise in the training images is first eliminated by low-rank matrix recovery. Then, by exploiting the first-stage sparse representation computed by solving a new extended ℓ1-minimization problem, noise in the testing image can be successfully removed. After this elimination, feature extraction techniques that are more discriminative but sensitive to noise can be effectively applied to the reconstructed clean images, and the final classification is accomplished by utilizing the second-stage sparse representation obtained by solving the reduced ℓ1-minimization problem in a low-dimensional feature space. Extensive experiments are conducted on publicly available databases to verify the superiority and robustness of our algorithm.

  13. Optimization of decision making for face recognition based on nonlinear correlation plane

    NASA Astrophysics Data System (ADS)

    Alfalou, A.; Brosseau, C.; Kaddah, W.

    2015-05-01

    We report on a specific procedure in the correlation plane which allows us to make more robust and discriminating decision for face recognition applications. In this scheme, we multiply the correlation plane by a nonlinear function which is chosen to increase the correlation peak, reduce the autocorrelation noise, and increase the inter-correlation noise. In our work we present tests using a VanderLugt correlator (VLC) fabricated with different filters (phase-only filter (POF), composite POF) and we also discuss the efficiency of the protocol using peak-to-correlation-energy measures. Rewardingly, our technical results demonstrate that this method is remarkably efficient to increase both robustness and discrimination performances of a VLC.
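
    The snippet below computes the peak-to-correlation-energy (PCE) measure mentioned above and applies one possible nonlinear weighting of the correlation plane; the power-law weighting is an assumption, since the specific nonlinear function used in the paper is not given here.

        import numpy as np

        def peak_to_correlation_energy(plane):
            """PCE: squared correlation peak divided by the total energy of the plane."""
            energy = np.sum(np.abs(plane) ** 2)
            return np.max(np.abs(plane)) ** 2 / energy

        def nonlinear_weighting(plane, k=3):
            """Raise the normalized plane to a power > 1 to boost the peak over the sidelobes."""
            p = np.abs(plane) / (np.abs(plane).max() + 1e-12)
            return plane * p ** (k - 1)

        rng = np.random.default_rng(6)
        plane = rng.random((64, 64))
        plane[32, 32] = 5.0                                  # synthetic correlation plane with one true peak
        print(peak_to_correlation_energy(plane),
              peak_to_correlation_energy(nonlinear_weighting(plane)))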

  14. Nonlinear Topological Component Analysis: Application to Age-Invariant Face Recognition.

    PubMed

    Bouchaffra, Djamel

    2015-07-01

    We introduce a novel formalism that performs dimensionality reduction and captures topological features (such as the shape of the observed data) to conduct pattern classification. This mission is achieved by: 1) reducing the dimension of the observed variables through a kernelized radial basis function technique and expressing the latent variables probability distribution in terms of the observed variables; 2) disclosing the data manifold as a 3-D polyhedron via the α-shape constructor and extracting topological features; and 3) classifying a data set using a mixture of multinomial distributions. We have applied our methodology to the problem of age-invariant face recognition. Experimental results obtained demonstrate the efficiency of the proposed methodology named nonlinear topological component analysis when compared with some state-of-the-art approaches. PMID:25134092

  15. The neural correlates of memory encoding and recognition for own-race and other-race faces.

    PubMed

    Herzmann, Grit; Willenbockel, Verena; Tanaka, James W; Curran, Tim

    2011-09-01

    People are generally better at recognizing faces from their own race than from a different race, as has been shown in numerous behavioral studies. Here we use event-related potentials (ERPs) to investigate how differences between own-race and other-race faces influence the neural correlates of memory encoding and recognition. ERPs of Asian and Caucasian participants were recorded during the study and test phases of a Remember-Know paradigm with Chinese and Caucasian faces. A behavioral other-race effect was apparent in both groups, neither of which recognized other-race faces as well as own-race faces; however, Caucasian subjects showed stronger behavioral other-race effects. In the study phase, memory encoding was assessed with the ERP difference due to memory (Dm). Other-race effects in memory encoding were only found for Caucasian subjects. For subsequently "recollected" items, Caucasian subjects showed less positive mean amplitudes for own-race than other-race faces indicating that less neural activation was required for successful memory encoding of own-race faces. For the comparison of subsequently "recollected" and "familiar" items, Caucasian subjects showed similar brain activation only for own-race faces suggesting that subsequent familiarity and recollection of own-race faces arose from similar memory encoding processes. Experience with a race also influenced old/new effects, which are ERP correlates of recollection measured during recognition testing. Own-race faces elicited a typical parietal old/new effect, whereas old/new effects for other-race faces were prolonged and dominated by activity in frontal brain regions, suggesting a stronger involvement of post-retrieval monitoring processes. These results indicate that the other-race effect is a memory encoding- and recognition-based phenomenon. PMID:21807008

  16. Parallel effects of processing fluency and positive affect on familiarity-based recognition decisions for faces

    PubMed Central

    Duke, Devin; Fiacconi, Chris M.; Köhler, Stefan

    2014-01-01

    According to attribution models of familiarity assessment, people can use a heuristic in recognition-memory decisions, in which they attribute the subjective ease of processing of a memory probe to a prior encounter with the stimulus in question. Research in social cognition suggests that experienced positive affect may be the proximal cue that signals fluency in various experimental contexts. In the present study, we compared the effects of positive affect and fluency on recognition-memory judgments for faces with neutral emotional expression. We predicted that if positive affect is indeed the critical cue that signals processing fluency at retrieval, then its manipulation should produce effects that closely mirror those produced by manipulations of processing fluency. In two experiments, we employed a masked-priming procedure in combination with a Remember-Know (RK) paradigm that aimed to separate familiarity- from recollection-based memory decisions. In addition, participants performed a prime-discrimination task that allowed us to take inter-individual differences in prime awareness into account. We found highly similar effects of our priming manipulations of processing fluency and of positive affect. In both cases, the critical effect was specific to familiarity-based recognition responses. Moreover, in both experiments it was reflected in a shift toward a more liberal response bias, rather than in changed discrimination. Finally, in both experiments, the effect was found to be related to prime awareness; it was present only in participants who reported a lack of such awareness on the prime-discrimination task. These findings add to a growing body of evidence that points not only to a role of fluency, but also of positive affect in familiarity assessment. As such they are consistent with the idea that fluency itself may be hedonically marked. PMID:24795678

  17. From face to interface recognition: a differential geometric approach to distinguish DNA from RNA binding surfaces

    PubMed Central

    Shazman, Shula; Elber, Gershon; Mandel-Gutfreund, Yael

    2011-01-01

    Protein-nucleic acid interactions play a critical role in all steps of the gene expression pathway. Nucleic acid (NA) binding proteins interact with their partners, DNA or RNA, via distinct regions on their surface that are characterized by an ensemble of chemical, physical and geometrical properties. In this study, we introduce a novel methodology based on differential geometry, commonly used in face recognition, to characterize and predict NA binding surfaces on proteins. Applying the method to experimentally solved three-dimensional structures of proteins, we successfully distinguish double-stranded DNA (dsDNA) from single-stranded RNA (ssRNA) binding proteins with 83% accuracy. We show that the method is insensitive to conformational changes that occur upon binding and can be applied to de novo protein-function prediction. Remarkably, when concentrating on the zinc finger motif, we distinguish successfully between RNA and DNA binding interfaces possessing the same binding motif even within the same protein, as demonstrated for the RNA polymerase transcription factor TFIIIA. In conclusion, we present a novel methodology to characterize protein surfaces that can accurately tell apart dsDNA-binding from ssRNA-binding interfaces. The strength of our method in recognizing fine-tuned differences on NA binding interfaces makes it applicable to many other molecular recognition problems, with potential implications for drug design. PMID:21693557

  18. From face to interface recognition: a differential geometric approach to distinguish DNA from RNA binding surfaces.

    PubMed

    Shazman, Shula; Elber, Gershon; Mandel-Gutfreund, Yael

    2011-09-01

    Protein-nucleic acid interactions play a critical role in all steps of the gene expression pathway. Nucleic acid (NA) binding proteins interact with their partners, DNA or RNA, via distinct regions on their surface that are characterized by an ensemble of chemical, physical and geometrical properties. In this study, we introduce a novel methodology based on differential geometry, commonly used in face recognition, to characterize and predict NA binding surfaces on proteins. Applying the method to experimentally solved three-dimensional structures of proteins, we successfully distinguish double-stranded DNA (dsDNA) from single-stranded RNA (ssRNA) binding proteins with 83% accuracy. We show that the method is insensitive to conformational changes that occur upon binding and can be applied to de novo protein-function prediction. Remarkably, when concentrating on the zinc finger motif, we distinguish successfully between RNA and DNA binding interfaces possessing the same binding motif even within the same protein, as demonstrated for the RNA polymerase transcription factor TFIIIA. In conclusion, we present a novel methodology to characterize protein surfaces that can accurately tell apart dsDNA-binding from ssRNA-binding interfaces. The strength of our method in recognizing fine-tuned differences on NA binding interfaces makes it applicable to many other molecular recognition problems, with potential implications for drug design. PMID:21693557

  19. A Novel Extended Granger Causal Model Approach Demonstrates Brain Hemispheric Differences during Face Recognition Learning

    PubMed Central

    Ge, Tian; Kendrick, Keith M.; Feng, Jianfeng

    2009-01-01

    Two main approaches to exploring causal relationships in biological systems using time-series data are the Dynamic Causal Model (DCM) and the Granger Causal Model (GCM). These have been extensively applied to brain imaging data and are also readily applicable to a wide range of temporal changes involving genes, proteins or metabolic pathways. However, these two approaches have always been considered radically different from each other and have therefore been used independently. Here we present a novel approach that extends the Granger Causal Model and also shares features of the bilinear approximation of the Dynamic Causal Model. We first tested the efficacy of the extended GCM by applying it extensively to toy models in both the time and frequency domains, and then applied it to local field potential recordings collected from in vivo multi-electrode array experiments. We demonstrate face discrimination learning-induced changes in inter- and intra-hemispheric connectivity and in the hemispheric predominance of theta and gamma frequency oscillations in sheep inferotemporal cortex. The results provide the first evidence for connectivity changes between and within the left and right inferotemporal cortices as a result of face recognition learning. PMID:19936225

  20. Virtual images inspired consolidate collaborative representation-based classification method for face recognition

    NASA Astrophysics Data System (ADS)

    Liu, Shigang; Zhang, Xinxin; Peng, Yali; Cao, Han

    2016-07-01

    The collaborative representation-based classification method performs well in the classification of high-dimensional images such as faces. It uses training samples from all classes to represent a test sample and assigns a class label to the test sample using the representation residuals. However, this method still suffers from the problem that a limited number of training samples lowers the classification accuracy when it is applied to image classification. In this paper, we propose a modified collaborative representation-based classification method (MCRC) that exploits novel virtual images and can obtain high classification accuracy. The procedure for producing virtual images is very simple, but using them brings a surprising performance improvement. The virtual images can sufficiently capture the features of the original face images in some cases. Extensive experimental results demonstrate that the proposed method effectively improves the classification accuracy. This is mainly attributed to the integration of collaborative representation with the proposed feature-information dominated virtual images.
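    The classification step described above is the standard collaborative representation scheme: code the probe over all gallery images with ridge regression and pick the class with the smallest regularised residual. The abstract does not spell out how the virtual images are built, so the sketch below (Python with NumPy, illustrative only) pairs a plain CRC classifier with horizontally mirrored gallery copies as a stand-in augmentation; the function and parameter names are ours, not the authors'.

        import numpy as np

        def crc_classify(train, labels, test, lam=0.01):
            """Collaborative representation classification: ridge-regression coding
            over the whole gallery, class assigned by regularised residual.
            train: (d, n) matrix, one L2-normalised face per column."""
            G = train.T @ train + lam * np.eye(train.shape[1])
            alpha = np.linalg.solve(G, train.T @ test)
            best, best_r = None, np.inf
            for c in np.unique(labels):
                idx = labels == c
                resid = np.linalg.norm(test - train[:, idx] @ alpha[idx])
                r = resid / (np.linalg.norm(alpha[idx]) + 1e-12)
                if r < best_r:
                    best, best_r = c, r
            return best

        def add_virtual_images(images, labels):
            """Augment the gallery with mirrored copies; mirroring is only an
            illustrative stand-in for the paper's virtual-image construction.
            images: (n, h, w) array of face images."""
            mirrored = images[:, :, ::-1]
            return np.concatenate([images, mirrored]), np.concatenate([labels, labels])

    A typical use is to augment the gallery with add_virtual_images, flatten and normalise every image into a column of the train matrix, and then call crc_classify on each probe.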

  1. Constructing a safety and security system by medical applications of a fast face recognition optical parallel correlator

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Mami; Ohta, Maiko; Murakami, Yasuo; Kodate, Kashiko

    2006-01-01

    Medical errors and patient safety have always received a great deal of attention, as they can be life-threatening. Hospitals and medical personnel are trying their utmost to avoid these errors. Currently in the medical field, patients' records are identified through PIN numbers and ID cards. However, for patients who cannot speak or move, or who suffer from memory disturbances, alternative methods would be more desirable, and necessary in some cases. The authors previously proposed and fabricated a specially designed correlator called FARCO (Fast Face Recognition Optical Correlator) based on the VanderLugt correlator, which operates at a speed of 1000 faces/s. Combined with high-speed display devices, four-channel processing achieves an operational speed of 4000 faces/s. Running trial experiments on a 1-to-N identification basis using the optical parallel correlator, we obtained low error rates of 1% FMR and 2.3% FNMR. In this paper, we propose a robust face recognition system using the FARCO, focusing on the safety and security of the medical field. We apply our face recognition system to the registration of inpatients, in particular children and infants, before and after medical treatments or operations. The proposed system achieves a higher recognition rate by multiplexing both input and database facial images from moving images. The system was also tested and evaluated for further practical use, with excellent results. Hence, our face recognition system could function effectively as an integral part of a medical system, meeting the essential requirements of safety, security and privacy.

  2. Positive, but Not Negative, Facial Expressions Facilitate 3-Month-Olds' Recognition of an Individual Face

    ERIC Educational Resources Information Center

    Brenna, Viola; Proietti, Valentina; Montirosso, Rosario; Turati, Chiara

    2013-01-01

    The current study examined whether and how the presence of a positive or a negative emotional expression may affect the face recognition process at 3 months of age. Using a familiarization procedure, Experiment 1 demonstrated that positive (i.e., happiness), but not negative (i.e., fear and anger) facial expressions facilitate infants'…

  3. The Cambridge Mindreading (CAM) Face-Voice Battery: Testing Complex Emotion Recognition in Adults with and without Asperger Syndrome

    ERIC Educational Resources Information Center

    Golan, Ofer; Baron-Cohen, Simon; Hill, Jacqueline

    2006-01-01

    Adults with Asperger Syndrome (AS) can recognise simple emotions and pass basic theory of mind tasks, but have difficulties recognising more complex emotions and mental states. This study describes a new battery of tasks, testing recognition of 20 complex emotions and mental states from faces and voices. The battery was given to males and females…

  4. Common Neural Systems Associated with the Recognition of Famous Faces and Names: An Event-Related fMRI Study

    ERIC Educational Resources Information Center

    Nielson, Kristy A.; Seidenberg, Michael; Woodard, John L.; Durgerian, Sally; Zhang, Qi; Gross, William L.; Gander, Amelia; Guidotti, Leslie M.; Antuono, Piero; Rao, Stephen M.

    2010-01-01

    Person recognition can be accomplished through several modalities (face, name, voice). Lesion, neurophysiology and neuroimaging studies have been conducted in an attempt to determine the similarities and differences in the neural networks associated with person identity via different modality inputs. The current study used event-related…

  5. Recognition of Faces and Greebles in 3-Month-Old Infants: Influence of Temperament and Cognitive Abilities

    ERIC Educational Resources Information Center

    Spangler, Sibylle M.; Freitag, Claudia; Schwarzer, Gudrun; Vierhaus, Marc; Teubert, Manuel; Lamm, Bettina; Kolling, Thorsten; Graf, Frauke; Goertz, Claudia; Fassbender, Ina; Lohaus, Arnold; Knopf, Monika; Keller, Heidi

    2011-01-01

    The aim of the present study was to investigate whether temperament and cognitive abilities are related to recognition performance of Caucasian and African faces and of a nonfacial stimulus class, Greebles. Seventy Caucasian infants were tested at 3 months with a habituation/dishabituation paradigm and their temperament and cognitive abilities…

  6. The Effects of Early Experience on Face Recognition: An Event-Related Potential Study of Institutionalized Children in Romania

    ERIC Educational Resources Information Center

    Moulson, Margaret C.; Westerlund, Alissa; Fox, Nathan A.; Zeanah, Charles H.; Nelson, Charles A.

    2009-01-01

    Data are reported from 3 groups of children residing in Bucharest, Romania. Face recognition in currently institutionalized, previously institutionalized, and never-institutionalized children was assessed at 3 time points: preintervention (n = 121), 30 months of age (n = 99), and 42 months of age (n = 77). Children watched photographs of caregiver…

  7. Neural Correlates of Face and Object Recognition in Young Children with Autism Spectrum Disorder, Developmental Delay, and Typical Development.

    ERIC Educational Resources Information Center

    Dawson, Geraldine; Carver, Leslie; Meltzoff, Andrew N.; Panagiotides, Herachles; McPartland, James; Webb, Sara J.

    2002-01-01

    Compared face recognition ability in young children with autism to that of children with typical development and developmental delay. Took electroencephalographic recordings of brain activity while children viewed pictures of their mothers and unfamiliar females, and familiar and unfamiliar toys. Found that autistic children showed no differences…

  8. High-accuracy and robust face recognition system based on optical parallel correlator using a temporal image sequence

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Mami; Ohta, Maiko; Kodate, Kashiko

    2005-09-01

    Face recognition is used in a wide range of security systems, such as monitoring credit card use, searching for individuals with street cameras via the Internet, and maintaining immigration control. There are still many technical subjects under study. For instance, the number of images that can be stored is limited under the current system, and the recognition rate must be improved to account for photos taken at different angles under various conditions. We implemented a fully automatic Fast Face Recognition Optical Correlator (FARCO) system using a 1000 frame/s optical parallel correlator designed and assembled by us. Operational speed for a 1:N identification experiment with 4000 face images, where N refers to the number of images in the database, amounts to less than 1.5 seconds, including pre/post processing. From trial 1:N identification experiments using FARCO, we obtained low error rates of 2.6% False Reject Rate and 1.3% False Accept Rate. By making the most of the high-speed data-processing capability of this system, much greater robustness can be achieved under various recognition conditions when large-category data are registered for a single person. We propose a face recognition algorithm for the FARCO that employs a temporal sequence of moving images. Applying this algorithm to natural postures, the recognition rate was twice as high as that of our conventional system. The system has high potential for future use in a variety of applications, such as searching for criminal suspects with street and airport video cameras, registering babies at hospitals, or handling very large numbers of images in a database.
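    The optical correlator computes image correlations in hardware; the principle can be illustrated in software with an FFT-based correlation peak used as the match score. The sketch below (Python with NumPy) is our own illustrative code, not the FARCO implementation: it scores a probe against a gallery and returns the best-matching label for a 1:N search.

        import numpy as np

        def correlation_score(probe, gallery):
            """FFT-based cross-correlation of two same-size, zero-mean face images;
            the normalised correlation peak serves as the match score. This is only
            a digital analogue of the correlation that FARCO performs optically."""
            a = probe - probe.mean()
            b = gallery - gallery.mean()
            corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
            denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
            return corr.max() / denom

        def identify(probe, gallery_images, gallery_labels):
            """1:N identification: return the label with the highest correlation peak."""
            scores = [correlation_score(probe, g) for g in gallery_images]
            return gallery_labels[int(np.argmax(scores))]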

  9. Good match exploration for thermal infrared face recognition based on YWF-SIFT with multi-scale fusion

    NASA Astrophysics Data System (ADS)

    Bai, Junfeng; Ma, Yong; Li, Jing; Li, Hao; Fang, Yu; Wang, Rui; Wang, Hongyuan

    2014-11-01

    Stable local feature detection is a critical prerequisite for infrared (IR) face recognition. Recently, the Scale Invariant Feature Transform (SIFT) was introduced for feature detection in infrared face frames, achieved by applying a simple and effective averaging window to SIFT, termed the Y-styled Window Filter (YWF). However, thermal IR face frames intrinsically contain few feature points (keypoints), so the performance of the YWF-SIFT method is inevitably limited when it is used for IR face recognition. In this paper, we propose a novel method that combines multi-scale fusion with YWF-SIFT to obtain more good feature matches. The multi-scale fusion is performed on a thermal IR frame and a corresponding auxiliary visual frame captured with an off-the-shelf low-cost visual camera. The fused image is more informative and typically contains many more stable features. In addition, the YWF-SIFT method enables us to establish feature correspondences more accurately. Quantitative experimental results demonstrate that our algorithm increases the number of feature points by approximately 38%. As a result, the performance of YWF-SIFT with multi-scale fusion improves by about 12% in infrared face recognition.
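    The two ingredients, fusing an IR frame with an aligned visible frame and then matching local features, can be sketched with off-the-shelf tools. The code below (Python with OpenCV) uses a naive weighted-average fusion and plain SIFT with Lowe's ratio test; the paper's multi-scale fusion and Y-styled Window Filter are not reproduced, and the weight value is an arbitrary placeholder.

        import cv2

        def fuse_ir_visible(ir, vis, w_ir=0.6):
            """Naive weighted-average fusion of a thermal IR frame and an aligned
            visible frame (both uint8, same size); a stand-in for the paper's
            multi-scale fusion."""
            return cv2.addWeighted(ir, w_ir, vis, 1.0 - w_ir, 0)

        def count_good_matches(img_a, img_b, ratio=0.75):
            """Detect SIFT keypoints in two grayscale images and count the matches
            that pass Lowe's ratio test (a common proxy for 'good' matches)."""
            sift = cv2.SIFT_create()
            ka, da = sift.detectAndCompute(img_a, None)
            kb, db = sift.detectAndCompute(img_b, None)
            if da is None or db is None:
                return 0
            matcher = cv2.BFMatcher(cv2.NORM_L2)
            good = [pair[0] for pair in matcher.knnMatch(da, db, k=2)
                    if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
            return len(good)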

  10. The role of relational binding in item memory: evidence from face recognition in a case of developmental amnesia.

    PubMed

    Olsen, Rosanna K; Lee, Yunjo; Kube, Jana; Rosenbaum, R Shayna; Grady, Cheryl L; Moscovitch, Morris; Ryan, Jennifer D

    2015-04-01

    Current theories state that the hippocampus is responsible for the formation of memory representations regarding relations, whereas extrahippocampal cortical regions support representations for single items. However, findings of impaired item memory in hippocampal amnesics suggest a more nuanced role for the hippocampus in item memory. The hippocampus may be necessary when the item elements need to be bound within and across episodes to form a lasting representation that can be used flexibly. The current investigation was designed to test this hypothesis in face recognition. H.C., an individual who developed with a compromised hippocampal system, and control participants incidentally studied individual faces that either varied in presentation viewpoint across study repetitions or remained in a fixed viewpoint across the study repetitions. Eye movements were recorded during encoding and participants then completed a surprise recognition memory test. H.C. demonstrated altered face viewing during encoding. Although the overall number of fixations made by H.C. was not significantly different from that of controls, the distribution of her viewing was primarily directed to the eye region. Critically, H.C. was significantly impaired in her ability to subsequently recognize faces studied from variable viewpoints, but demonstrated spared performance in recognizing faces she encoded from a fixed viewpoint, implicating a relationship between eye movement behavior and a hippocampal binding function. These findings suggest that a compromised hippocampal system disrupts the ability to bind item features within and across study repetitions, ultimately disrupting recognition when it requires access to flexible relational representations. PMID:25834058

  11. Visual scanning and recognition of Chinese, Caucasian, and racially ambiguous faces: Contributions from bottom-up facial physiognomic information and top-down knowledge of racial categories

    PubMed Central

    Wang, Qiandong; Xiao, Naiqi G.; Quinn, Paul C.; Hu, Chao S.; Qian, Miao; Fu, Genyue; Lee, Kang

    2014-01-01

    Recent studies have shown that participants use different eye movement strategies when scanning own- and other-race faces. However, it is unclear (1) whether this effect is related to face recognition performance, and (2) to what extent this effect is influenced by top-down or bottom-up facial information. In the present study, Chinese participants performed a face recognition task with Chinese faces, Caucasian faces, and racially ambiguous morphed face stimuli. For the racially ambiguous faces, we led participants to believe that they were viewing either own-race Chinese faces or other-race Caucasian faces. Results showed that (1) Chinese participants scanned the nose of the true Chinese faces more than that of the true Caucasian faces, whereas they scanned the eyes of the Caucasian faces more than those of the Chinese faces; (2) they scanned the eyes, nose, and mouth equally for the ambiguous faces in the Chinese condition compared with those in the Caucasian condition; (3) when recognizing the true Chinese target faces, but not the true target Caucasian faces, the greater the fixation proportion on the nose, the faster the participants correctly recognized these faces. The same was true when racially ambiguous face stimuli were thought to be Chinese faces. These results provide the first evidence to show that (1) visual scanning patterns of faces are related to own-race face recognition response time, and (2) it is bottom-up facial physiognomic information of racial categories that mainly contributes to face scanning. However, top-down knowledge of racial categories can influence the relationship between face scanning patterns and recognition response time. PMID:25497461

  12. Efficient algorithm for sparse coding and dictionary learning with applications to face recognition

    NASA Astrophysics Data System (ADS)

    Zhao, Zhong; Feng, Guocan

    2015-03-01

    Sparse representation has been successfully applied to pattern recognition problems in recent years. The most common way for producing sparse coding is to use the l1-norm regularization. However, the l1-norm regularization only favors sparsity and does not consider locality. It may select quite different bases for similar samples to favor sparsity, which is disadvantageous to classification. Besides, solving the l1-minimization problem is time consuming, which limits its applications in large-scale problems. We propose an improved algorithm for sparse coding and dictionary learning. This algorithm takes both sparsity and locality into consideration. It selects part of the dictionary columns that are close to the input sample for coding and imposes locality constraint on these selected dictionary columns to obtain discriminative coding for classification. Because an analytic solution of the coding is derived by only using part of the dictionary columns, the proposed algorithm is much faster than the l1-based algorithms for classification. Besides, we also derive an analytic solution for updating the dictionary in the training process. Experiments conducted on five face databases show that the proposed algorithm has better performance than the competing algorithms in terms of accuracy and efficiency.
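    The coding step described above, selecting only the dictionary atoms closest to the input and solving a small locality-regularised least-squares problem in closed form, can be sketched in a few lines. The code below (Python with NumPy) is a generic illustration of that idea, not the authors' exact formulation; the parameter values and weighting scheme are placeholders.

        import numpy as np

        def local_code(D, y, k=30, lam=1e-3):
            """Code sample y over only its k nearest dictionary atoms, with a
            locality-weighted ridge penalty solved analytically.
            D: (d, n) dictionary with L2-normalised columns; y: (d,) sample."""
            dist = np.linalg.norm(D - y[:, None], axis=0)   # distance to every atom
            idx = np.argsort(dist)[:k]                      # keep the k closest atoms
            Dk = D[:, idx]
            w = dist[idx] / (dist[idx].max() + 1e-12)       # locality weights
            A = Dk.T @ Dk + lam * np.diag(w ** 2)           # closed-form ridge system
            x = np.zeros(D.shape[1])
            x[idx] = np.linalg.solve(A, Dk.T @ y)
            return x

    Because only a k-by-k linear system is solved per sample, the coding cost stays far below that of an l1 solver over the full dictionary, which is the speed advantage the abstract refers to.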

  13. Characterization and recognition of mixed emotional expressions in thermal face image

    NASA Astrophysics Data System (ADS)

    Saha, Priya; Bhattacharjee, Debotosh; De, Barin K.; Nasipuri, Mita

    2016-05-01

    Facial expression analysis in infrared imaging has been introduced to overcome the illumination problem that is an integral constituent of visible imagery. The paper investigates facial skin temperature distribution over mixed thermal facial expressions in a face database we created, in which six expressions are basic and the remaining 12 are mixtures of those basic expressions. Temperature analysis has been performed on three facial regions of interest (ROIs): periorbital, supraorbital and mouth. Temperature variability of the ROIs across expressions has been measured using statistical parameters. The temperature variation measurements in the ROIs of a particular expression form a vector, which is later used in the recognition of mixed facial expressions. Investigations show that facial features in mixed facial expressions can be characterized by positive-emotion-induced and negative-emotion-induced facial features. The supraorbital region is a useful facial region that can differentiate basic expressions from mixed expressions. Analysis and interpretation of mixed expressions have been conducted with the help of box-and-whisker plots. A facial region containing a mixture of two expressions generally induces less temperature change than the corresponding region in a basic expression.

  14. Social trait judgment and affect recognition from static faces and video vignettes in schizophrenia.

    PubMed

    McIntosh, Lindsey G; Park, Sohee

    2014-09-01

    Social impairment is a core feature of schizophrenia, present from the pre-morbid stage and predictive of outcome, but the etiology of this deficit remains poorly understood. Successful and adaptive social interactions depend on one's ability to make rapid and accurate judgments about others in real time. Our surprising ability to form accurate first impressions from brief exposures, known as "thin slices" of behavior has been studied very extensively in healthy participants. We sought to examine affect and social trait judgment from thin slices of static or video stimuli in order to investigate the ability of schizophrenic individuals to form reliable social impressions of others. 21 individuals with schizophrenia (SZ) and 20 matched healthy participants (HC) were asked to identify emotions and social traits for actors in standardized face stimuli as well as brief video clips. Sound was removed from videos to remove all verbal cues. Clinical symptoms in SZ and delusional ideation in both groups were measured. Results showed a general impairment in affect recognition for both types of stimuli in SZ. However, the two groups did not differ in the judgments of trustworthiness, approachability, attractiveness, and intelligence. Interestingly, in SZ, the severity of positive symptoms was correlated with higher ratings of attractiveness, trustworthiness, and approachability. Finally, increased delusional ideation in SZ was associated with a tendency to rate others as more trustworthy, while the opposite was true for HC. These findings suggest that complex social judgments in SZ are affected by symptomatology. PMID:25037526

  15. Social trait judgment and affect recognition from static faces and video vignettes in schizophrenia

    PubMed Central

    McIntosh, Lindsey G.; Park, Sohee

    2014-01-01

    Social impairment is a core feature of schizophrenia, present from the pre-morbid stage and predictive of outcome, but the etiology of this deficit remains poorly understood. Successful and adaptive social interactions depend on one’s ability to make rapid and accurate judgments about others in real time. Our surprising ability to form accurate first impressions from brief exposures, known as “thin slices” of behavior has been studied very extensively in healthy participants. We sought to examine affect and social trait judgment from thin slices of static or video stimuli in order to investigate the ability of schizophrenic individuals to form reliable social impressions of others. 21 individuals with schizophrenia (SZ) and 20 matched healthy participants (HC) were asked to identify emotions and social traits for actors in standardized face stimuli as well as brief video clips. Sound was removed from videos to remove all verbal cues. Clinical symptoms in SZ and delusional ideation in both groups were measured. Results showed a general impairment in affect recognition for both types of stimuli in SZ. However, the two groups did not differ in the judgments of trustworthiness, approachability, attractiveness, and intelligence. Interestingly, in SZ, the severity of positive symptoms was correlated with higher ratings of attractiveness, trustworthiness, and approachability. Finally, increased delusional ideation in SZ was associated with a tendency to rate others as more trustworthy, while the opposite was true for HC. These findings suggest that complex social judgments in SZ are affected by symptomatology. PMID:25037526

  16. Image Quality Assessment for Fake Biometric Detection: Application to Iris, Fingerprint, and Face Recognition.

    PubMed

    Galbally, Javier; Marcel, Sébastien; Fierrez, Julian

    2014-02-01

    Ensuring the actual presence of a real legitimate trait, as opposed to a fake, self-manufactured synthetic or reconstructed sample, is a significant problem in biometric authentication, which requires the development of new and efficient protection measures. In this paper, we present a novel software-based fake detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts. The objective of the proposed system is to enhance the security of biometric recognition frameworks by adding liveness assessment in a fast, user-friendly and non-intrusive manner through the use of image quality assessment. The proposed approach presents a very low degree of complexity, which makes it suitable for real-time applications, using 25 general image quality features extracted from one image (i.e., the same image acquired for authentication purposes) to distinguish between legitimate and impostor samples. The experimental results, obtained on publicly available data sets of fingerprint, iris, and 2D face, show that the proposed method is highly competitive compared with other state-of-the-art approaches and that the analysis of the general image quality of real biometric samples reveals highly valuable information that may be very efficiently used to discriminate them from fake traits. PMID:26270913
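    The pipeline is: extract a vector of general image-quality measures from the single authentication image, then feed it to a binary real-vs-fake classifier. The 25 features themselves are not listed in the abstract, so the sketch below (Python with NumPy, SciPy and scikit-learn) substitutes a handful of common quality measures and a logistic-regression classifier purely as an illustration.

        import numpy as np
        from scipy.ndimage import gaussian_filter, laplace
        from sklearn.linear_model import LogisticRegression

        def quality_features(img):
            """A few generic quality measures (stand-ins for the paper's 25 features).
            img: 2-D float array scaled to [0, 1]."""
            smooth = gaussian_filter(img, sigma=1.0)
            diff = img - smooth
            return np.array([
                laplace(img).var(),      # sharpness: variance of the Laplacian
                img.mean(), img.std(),   # global intensity statistics
                np.abs(diff).mean(),     # mean absolute deviation from smoothed copy
                (diff ** 2).mean(),      # MSE against the smoothed copy
            ])

        def train_liveness_classifier(real_imgs, fake_imgs):
            """Fit a simple real-vs-fake classifier on the quality features."""
            X = np.array([quality_features(i) for i in list(real_imgs) + list(fake_imgs)])
            y = np.array([1] * len(real_imgs) + [0] * len(fake_imgs))
            return LogisticRegression(max_iter=1000).fit(X, y)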

  17. Local gradient Gabor pattern (LGGP) with applications in face recognition, cross-spectral matching, and soft biometrics

    NASA Astrophysics Data System (ADS)

    Chen, Cunjian; Ross, Arun

    2013-05-01

    Researchers in face recognition have been using Gabor filters for image representation due to their robustness to complex variations in expression and illumination. Numerous methods have been proposed to model the output of filter responses by employing either local or global descriptors. In this work, we propose a novel but simple approach for encoding Gradient information on Gabor-transformed images to represent the face, which can be used for identity, gender and ethnicity assessment. Extensive experiments on the standard face benchmark FERET (Visible versus Visible), as well as the heterogeneous face dataset HFB (Near-infrared versus Visible), suggest that the matching performance due to the proposed descriptor is comparable against state-of-the-art descriptor-based approaches in face recognition applications. Furthermore, the same feature set is used in the framework of a Collaborative Representation Classification (CRC) scheme for deducing soft biometric traits such as gender and ethnicity from face images in the AR, Morph and CAS-PEAL databases.
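    The descriptor combines two standard ingredients: a Gabor filter bank applied to the face image, followed by an encoding of gradient information computed on the filter responses. The sketch below (Python with NumPy and scikit-image) histograms the gradient orientations of each Gabor magnitude response; it is a generic illustration of that combination, not the authors' LGGP encoding, and the filter-bank parameters are arbitrary.

        import numpy as np
        from skimage.filters import gabor

        def gabor_gradient_histograms(img, frequencies=(0.1, 0.2), n_theta=4, bins=8):
            """Filter the face with a small Gabor bank, then histogram the gradient
            orientations (weighted by gradient magnitude) of each magnitude response.
            img: 2-D float array."""
            feats = []
            for f in frequencies:
                for t in np.arange(n_theta) * np.pi / n_theta:
                    real, imag = gabor(img, frequency=f, theta=t)
                    mag = np.hypot(real, imag)              # Gabor magnitude response
                    gy, gx = np.gradient(mag)               # gradients of the response
                    ori = np.arctan2(gy, gx)
                    hist, _ = np.histogram(ori, bins=bins, range=(-np.pi, np.pi),
                                           weights=np.hypot(gx, gy))
                    feats.append(hist / (hist.sum() + 1e-12))
            return np.concatenate(feats)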

  18. Visual scanning and recognition of Chinese, Caucasian, and racially ambiguous faces: contributions from bottom-up facial physiognomic information and top-down knowledge of racial categories.

    PubMed

    Wang, Qiandong; Xiao, Naiqi G; Quinn, Paul C; Hu, Chao S; Qian, Miao; Fu, Genyue; Lee, Kang

    2015-02-01

    Recent studies have shown that participants use different eye movement strategies when scanning own- and other-race faces. However, it is unclear (1) whether this effect is related to face recognition performance, and (2) to what extent this effect is influenced by top-down or bottom-up facial information. In the present study, Chinese participants performed a face recognition task with Chinese, Caucasian, and racially ambiguous faces. For the racially ambiguous faces, we led participants to believe that they were viewing either own-race Chinese faces or other-race Caucasian faces. Results showed that (1) Chinese participants scanned the nose of the true Chinese faces more than that of the true Caucasian faces, whereas they scanned the eyes of the Caucasian faces more than those of the Chinese faces; (2) they scanned the eyes, nose, and mouth equally for the ambiguous faces in the Chinese condition compared with those in the Caucasian condition; (3) when recognizing the true Chinese target faces, but not the true target Caucasian faces, the greater the fixation proportion on the nose, the faster the participants correctly recognized these faces. The same was true when racially ambiguous face stimuli were thought to be Chinese faces. These results provide the first evidence to show that (1) visual scanning patterns of faces are related to own-race face recognition response time, and (2) it is bottom-up facial physiognomic information that mainly contributes to face scanning. However, top-down knowledge of racial categories can influence the relationship between face scanning patterns and recognition response time. PMID:25497461

  19. Learning Low-Rank Class-Specific Dictionary and Sparse Intra-Class Variant Dictionary for Face Recognition

    PubMed Central

    Tang, Xin; Feng, Guo-can; Li, Xiao-xin; Cai, Jia-xin

    2015-01-01

    Face recognition is challenging especially when the images from different persons are similar to each other due to variations in illumination, expression, and occlusion. If we have sufficient training images of each person which can span the facial variations of that person under testing conditions, sparse representation based classification (SRC) achieves very promising results. However, in many applications, face recognition often encounters the small sample size problem arising from the small number of available training images for each person. In this paper, we present a novel face recognition framework by utilizing low-rank and sparse error matrix decomposition, and sparse coding techniques (LRSE+SC). Firstly, the low-rank matrix recovery technique is applied to decompose the face images per class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary and it captures the discriminative feature of this individual. The sparse error matrix represents the intra-class variations, such as illumination, expression changes. Secondly, we combine the low-rank part (representative basis) of each person into a supervised dictionary and integrate all the sparse error matrix of each individual into a within-individual variant dictionary which can be applied to represent the possible variations between the testing and training images. Then these two dictionaries are used to code the query image. The within-individual variant dictionary can be shared by all the subjects and only contribute to explain the lighting conditions, expressions, and occlusions of the query image rather than discrimination. At last, a reconstruction-based scheme is adopted for face recognition. Since the within-individual dictionary is introduced, LRSE+SC can handle the problem of the corrupted training data and the situation that not all subjects have enough samples for training. Experimental results show that our method achieves the
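    The first step of the framework, splitting each class's stacked face images into a low-rank part (the class-specific dictionary) plus a sparse error part (intra-class variation), is the classic low-rank and sparse matrix decomposition. A minimal sketch of such a decomposition via singular value thresholding and soft thresholding is shown below (Python with NumPy); it is a generic robust-PCA-style routine with default parameter choices, not the authors' exact LRSE+SC optimisation.

        import numpy as np

        def svt(M, tau):
            """Singular value thresholding."""
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

        def shrink(M, tau):
            """Elementwise soft thresholding."""
            return np.sign(M) * np.maximum(np.abs(M) - tau, 0)

        def lowrank_sparse_split(X, lam=None, mu=None, iters=200):
            """Split a per-class data matrix X (one vectorised face per column) into
            a low-rank part L and a sparse error part S with a basic ALM scheme."""
            m, n = X.shape
            lam = lam or 1.0 / np.sqrt(max(m, n))
            mu = mu or 0.25 * m * n / (np.abs(X).sum() + 1e-12)
            L = np.zeros_like(X); S = np.zeros_like(X); Y = np.zeros_like(X)
            for _ in range(iters):
                L = svt(X - S + Y / mu, 1.0 / mu)           # low-rank update
                S = shrink(X - L + Y / mu, lam / mu)        # sparse-error update
                Y = Y + mu * (X - L - S)                    # dual update
                if np.linalg.norm(X - L - S) <= 1e-7 * np.linalg.norm(X):
                    break
            return L, S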

  20. Reminiscence, forgetting, and hypermnesia using face-name learning: isolating the effects using recall and recognition memory measures.

    PubMed

    Groninger, Lowell D; Murray, Kenneth N

    2004-05-01

    A face-name learning paradigm was used to study phenomena involved in reminiscence, forgetting, and hypermnesia. Individuals introduced themselves on videotape while participants tried to learn their names. The presence of cues during testing increased overall performance but decreased hypermnesia in Experiment 1. Significant recognition memory effects were found for reminiscence and hypermnesia in Experiments 2 and 3. Experiment 3 also showed no interference from activities between testing sessions, but did show facilitating effects from exposure to photographs of target faces and to exposure of target names. The results were interpreted as showing support for reminiscence effects being primarily caused by imagery redintegration and effects consistent with stimulus sampling theories. PMID:15279437

  1. Looking But Not Seeing: Atypical Visual Scanning and Recognition of Faces in 2 and 4-Year-Old Children with Autism Spectrum Disorder

    PubMed Central

    Chawarska, Katarzyna; Shic, Frederick

    2016-01-01

    This study used eye-tracking to examine visual scanning and recognition of faces by 2- and 4-year-old children with autism spectrum disorder (ASD) (N = 44) and typically developing (TD) controls (N = 30). TD toddlers at both age levels scanned and recognized faces similarly. Toddlers with ASD looked increasingly away from faces with age, atypically attended to key features of faces, and were impaired in face recognition. Deficits in recognition were associated with imbalanced attention between key facial features. This study illustrates that face processing in ASD may be affected early and become further compromised with age. We propose that deficits in face processing likely impact the effectiveness of toddlers with ASD as social partners and thus should be targeted for intervention. PMID:19590943

  2. Looking but not seeing: atypical visual scanning and recognition of faces in 2 and 4-year-old children with autism spectrum disorder.

    PubMed

    Chawarska, Katarzyna; Shic, Frederick

    2009-12-01

    This study used eye-tracking to examine visual scanning and recognition of faces by 2- and 4-year-old children with autism spectrum disorder (ASD) (N = 44) and typically developing (TD) controls (N = 30). TD toddlers at both age levels scanned and recognized faces similarly. Toddlers with ASD looked increasingly away from faces with age, atypically attended to key features of faces, and were impaired in face recognition. Deficits in recognition were associated with imbalanced attention between key facial features. This study illustrates that face processing in ASD may be affected early and become further compromised with age. We propose that deficits in face processing likely impact the effectiveness of toddlers with ASD as social partners and thus should be targeted for intervention. PMID:19590943

  3. No Own-Age Advantage in Children’s Recognition of Emotion on Prototypical Faces of Different Ages

    PubMed Central

    Griffiths, Sarah; Penton-Voak, Ian S.; Jarrold, Chris; Munafò, Marcus R.

    2015-01-01

    We test whether there is an own-age advantage in emotion recognition using prototypical younger child, older child and adult faces displaying emotional expressions. Prototypes were created by averaging photographs of individuals from 6 different age and sex categories (male 5–8 years, male 9–12 years, female 5–8 years, female 9–12 years, adult male and adult female), each posing 6 basic emotional expressions. In the study 5–8 year old children (n = 33), 9–13 year old children (n = 70) and adults (n = 92) labelled these expression prototypes in a 6-alternative forced-choice task. There was no evidence that children or adults recognised expressions better on faces from their own age group. Instead, child facial expression prototypes were recognised as accurately as adult expression prototypes by all age groups. This suggests there is no substantial own-age advantage in children’s emotion recognition. PMID:25978656

  4. Looking but Not Seeing: Atypical Visual Scanning and Recognition of Faces in 2 and 4-Year-Old Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Chawarska, Katarzyna; Shic, Frederick

    2009-01-01

    This study used eye-tracking to examine visual scanning and recognition of faces by 2- and 4-year-old children with autism spectrum disorder (ASD) (N = 44) and typically developing (TD) controls (N = 30). TD toddlers at both age levels scanned and recognized faces similarly. Toddlers with ASD looked increasingly away from faces with age,…

  5. A Gabor-block-based kernel discriminative common vector approach using cosine kernels for human face recognition.

    PubMed

    Kar, Arindam; Bhattacharjee, Debotosh; Basu, Dipak Kumar; Nasipuri, Mita; Kundu, Mahantapas

    2012-01-01

    In this paper a nonlinear Gabor Wavelet Transform (GWT) discriminant feature extraction approach for enhanced face recognition is proposed. Firstly, the low-energized blocks from Gabor wavelet transformed images are extracted. Secondly, the nonlinear discriminating features are analyzed and extracted from the selected low-energized blocks by the generalized Kernel Discriminative Common Vector (KDCV) method. The KDCV method is extended to include a cosine kernel function in the discrimination step. The KDCV with the cosine kernel is then applied to the extracted low-energized discriminating feature vectors to obtain the real component of a complex quantity for face recognition. In order to derive positive kernel discriminative vectors, we apply only those kernel discriminative eigenvectors that are associated with nonzero eigenvalues. The feasibility of the low-energized Gabor-block-based generalized KDCV method with the cosine kernel function has been successfully tested for classification using the L1 and L2 distance measures and the cosine similarity measure, on both frontal and pose-angled face recognition. Experimental results on the FRAV2D and FERET databases demonstrate the effectiveness of this new approach. PMID:23365559
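    The only nonstandard ingredient relative to ordinary kernel discriminant methods is the kernel itself. The snippet below (Python with NumPy) computes the Gram matrix of the usual cosine kernel; the exact kernel variant used by the authors may differ, and such a matrix can be plugged into any kernel discriminant routine in place of a linear or RBF kernel.

        import numpy as np

        def cosine_kernel(X, Y=None):
            """Gram matrix of the cosine kernel k(x, y) = x.y / (|x| |y|).
            X: (n, d) feature vectors; Y: (m, d), or None for the symmetric case."""
            Y = X if Y is None else Y
            Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
            Yn = Y / (np.linalg.norm(Y, axis=1, keepdims=True) + 1e-12)
            return Xn @ Yn.T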

  6. Can the usage of human growth hormones affect facial appearance and the accuracy of face recognition systems?

    NASA Astrophysics Data System (ADS)

    Rose, Jake; Martin, Michael; Bourlai, Thirimachos

    2014-06-01

    In law enforcement and security applications, the acquisition of face images is critical in producing key trace evidence for the successful identification of potential threats. The goal of the study is to demonstrate that steroid usage significantly affects human facial appearance and hence the performance of commercial and academic face recognition (FR) algorithms. In this work, we evaluate the performance of state-of-the-art FR algorithms on two unique face image datasets of subjects before (gallery set) and after (probe set) steroid (or human growth hormone) usage. For the purpose of this study, datasets of 73 subjects were created from multiple sources found on the Internet, containing images of men and women before and after steroid usage. Next, we geometrically pre-processed all images of both face datasets. Then, we applied image restoration techniques to the same face datasets, and finally, we applied FR algorithms to match the pre-processed face images of our probe datasets against the face images of the gallery set. Experimental results demonstrate that only a specific set of FR algorithms obtains the most accurate results (in terms of the rank-1 identification rate). This is because several factors influence the efficiency of face matchers, including (i) the time lapse between the before and after face photos, even after pre-processing and restoration, (ii) the usage of different drugs (e.g. Dianabol, Winstrol, and Decabolan), (iii) the usage of different cameras to capture face images, and finally, (iv) the variability of standoff distance, illumination and other noise factors (e.g. motion noise). All of these complicating factors make clear that cross-scenario matching is a very challenging problem and, thus, further investigation is required.

  7. Neural correlates of own- and other-race face recognition in children: A functional near-infrared spectroscopy study

    PubMed Central

    Ding, Xiao Pan; Fu, Genyue; Lee, Kang

    2013-01-01

    The present study used the functional Near-infrared Spectroscopy (fNIRS) methodology to investigate the neural correlates of elementary school children’s own- and other-race face processing. An old-new paradigm was used to assess children’s recognition ability of own- and other-race faces. FNIRS data revealed that other-race faces elicited significantly greater [oxy-Hb] changes than own-race faces in the right middle frontal gyrus and inferior frontal gyrus regions (BA9) and the left cuneus (BA18). With increased age, the [oxy-Hb] activity differences between own- and other-race faces, or the neural other-race effect (NORE), underwent significant changes in these two cortical areas: at younger ages, the neural response to the other-race faces was modestly greater than that to the own-race faces, but with increased age, the neural response to the own-race faces became increasingly greater than that to the other-race faces. Moreover, these areas had strong regional functional connectivity with a swath of the cortical regions in terms of the neural other-race effect that also changed with increased age. We also found significant and positive correlations between the behavioral other-race effect (reaction time) and the neural other-race effect in the right middle frontal gyrus and inferior frontal gyrus regions (BA9). These results taken together suggest that children, like adults, devote different amounts of neural resources to processing own- and other-race faces, but the size and direction of the neural other-race effect and associated functional regional connectivity change with increased age. PMID:23891903

  8. Featuring Old/New Recognition: The Two Faces of the Pseudoword Effect

    ERIC Educational Resources Information Center

    Joordens, Steve; Ozubko, Jason D.; Niewiadomski, Marty W.

    2008-01-01

    In his analysis of the pseudoword effect, [Greene, R.L. (2004). Recognition memory for pseudowords. "Journal of Memory and Language," 50, 259-267.] suggests nonwords can feel more familiar than words in a recognition context if the orthographic features of the nonword match well with the features of the items presented at study. One possible…

  9. Facial expression of fear in the context of human ethology: Recognition advantage in the perception of male faces.

    PubMed

    Trnka, Radek; Tavel, Peter; Hasto, Jozef

    2015-01-01

    Facial expression is one of the core issues in the ethological approach to the study of human behaviour. This study discusses sex-specific aspects of the recognition of the facial expression of fear using results from our previously published experimental study. We conducted an experiment in which 201 participants judged seven different facial expressions: anger, contempt, disgust, fear, happiness, sadness and surprise (Trnka et al. 2007). Participants were able to recognize the facial expression of fear significantly better on a male face than on a female face. Females also recognized fear generally better than males. The present study provides a new interpretation of this sex difference in the recognition of fear. We interpret these results within the paradigm of human ethology, taking into account the adaptive function of the facial expression of fear. We argue that better detection of fear might be crucial for females under a situation of serious danger in groups of early hominids. The crucial role of females in nurturing and protecting offspring was fundamental for the reproductive potential of the group. A clear decoding of this alarm signal might thus have enabled the timely preparation of females for escape or defence to protect their health for successful reproduction. Further, it is likely that males played the role of guardians of social groups and that they were responsible for effective warnings of the group under situations of serious danger. This may explain why the facial expression of fear is better recognizable on the male face than on the female face. PMID:26071575

  10. An own gender bias and the importance of hair in face recognition.

    PubMed

    Wright, Daniel B; Sladden, Benjamin

    2003-09-01

    There is a large literature on the own race bias, the finding that people are better at recognizing faces of people from their own race. Here an own gender bias is shown: Males are better at identifying male faces than female faces and females are better at identifying female faces than male faces. Encoding a person's hair is shown to account for approximately half of the own gender bias when measured using hit and false alarm rates. Remember/know judgements and confidence measures are taken. Encoding a person's hair is critical for having a "remember" recollective experience. Parallels with the own race bias and implications for eyewitness testimony are discussed. PMID:12927345

  11. Eye-tracking the own-race bias in face recognition: revealing the perceptual and socio-cognitive mechanisms.

    PubMed

    Hills, Peter J; Pake, J Michael

    2013-12-01

    Own-race faces are recognised more accurately than other-race faces and may even be viewed differently as measured by an eye-tracker (Goldinger, Papesh, & He, 2009). Alternatively, observer race might direct eye-movements (Blais, Jack, Scheepers, Fiset, & Caldara, 2008). Observer differences in eye-movements are likely to be based on experience of the physiognomic characteristics that are differentially discriminating for Black and White faces. Two experiments are reported that employed standard old/new recognition paradigms in which Black and White observers viewed Black and White faces with their eye-movements recorded. Experiment 1 showed that there were observer race differences in terms of the features scanned but observers employed the same strategy across different types of faces. Experiment 2 demonstrated that other-race faces could be recognised more accurately if participants had their first fixation directed to more diagnostic features using fixation crosses. These results are entirely consistent with those presented by Blais et al. (2008) and with the perceptual interpretation that the own-race bias is due to inappropriate attention allocated to the facial features (Hills & Lewis, 2006, 2011). PMID:24076536

  12. What drives social in-group biases in face recognition memory? ERP evidence from the own-gender bias.

    PubMed

    Wolff, Nicole; Kemter, Kathleen; Schweinberger, Stefan R; Wiese, Holger

    2014-05-01

    It is well established that memory is more accurate for own-relative to other-race faces (own-race bias), which has been suggested to result from larger perceptual expertise for own-race faces. Previous studies also demonstrated better memory for own-relative to other-gender faces, which is less likely to result from differences in perceptual expertise, and rather may be related to social in-group vs out-group categorization. We examined neural correlates of the own-gender bias using event-related potentials (ERP). In a recognition memory experiment, both female and male participants remembered faces of their respective own gender more accurately compared with other-gender faces. ERPs during learning yielded significant differences between the subsequent memory effects (subsequently remembered - subsequently forgotten) for own-gender compared with other-gender faces in the occipito-temporal P2 and the central N200, whereas neither later subsequent memory effects nor ERP old/new effects at test reflected a neural correlate of the own-gender bias. We conclude that the own-gender bias is mainly related to study phase processes, which is in line with sociocognitive accounts. PMID:23474824

  13. Emotion recognition from facial expressions: a normative study of the Ekman 60-Faces Test in the Italian population.

    PubMed

    Dodich, Alessandra; Cerami, Chiara; Canessa, Nicola; Crespi, Chiara; Marcone, Alessandra; Arpone, Marta; Realmuto, Sabrina; Cappa, Stefano F

    2014-07-01

    The Ekman 60-Faces (EK-60F) Test is a well-known neuropsychological tool assessing emotion recognition from facial expressions. It is the most employed task for research purposes in psychiatric and neurological disorders, including neurodegenerative diseases, such as the behavioral variant of Frontotemporal Dementia (bvFTD). Despite its remarkable usefulness in the social cognition research field, to date, there are still no normative data for the Italian population, thus limiting its application in a clinical context. In this study, we report procedures and normative data for the Italian version of the test. A hundred and thirty-two healthy Italian participants aged between 20 and 79 years with at least 5 years of education were recruited on a voluntary basis. They were administered the EK-60F Test from the Ekman and Friesen series of Pictures of Facial Affect after a preliminary semantic recognition test of the six basic emotions (i.e., anger, fear, sadness, happiness, disgust, surprise). Data were analyzed according to the Capitani procedure [1]. The regression analysis revealed significant effects of demographic variables, with younger, more educated, female subjects showing higher scores. Normative data were then applied to a sample of 15 bvFTD patients which showed global impaired performance in the task, consistently with the clinical condition. We provided EK-60F Test normative data for the Italian population allowing the investigation of global emotion recognition ability as well as selective impairment of basic emotions recognition, both for clinical and research purposes. PMID:24442557

  14. A Comparative Study of Human Thermal Face Recognition Based on Haar Wavelet Transform and Local Binary Pattern

    PubMed Central

    Bhattacharjee, Debotosh; Seal, Ayan; Ganguly, Suranjan; Nasipuri, Mita; Basu, Dipak Kumar

    2012-01-01

    Thermal infrared (IR) images capture the temperature distribution over facial muscles and blood vessels, and these temperature patterns can be regarded as texture features of the images. A comparative study of two face recognition methods working in the thermal spectrum is carried out in this paper. In the first approach, the training and test images are processed with the Haar wavelet transform, and the LL band and the average of the LH/HL/HH bands are created for each face image. A total confidence matrix is then formed for each face image by taking a weighted sum of the corresponding pixel values of the LL band and the average band. For LBP feature extraction, each face image in the training and test datasets is divided into 161 sub-images, each of size 8 × 8 pixels. For each such sub-image, LBP features are extracted and concatenated. PCA is performed separately on each feature set for dimensionality reduction. Finally, two different classifiers, namely a multilayer feed-forward neural network and a minimum distance classifier, are used to classify the face images. The experiments have been performed on a database created at our own laboratory and on the Terravic Facial IR Database. PMID:22924035
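    Both feature pipelines are easy to reproduce with standard libraries. The sketch below (Python with NumPy, PyWavelets and scikit-image) computes the Haar LL/detail-average combination and block-wise LBP histograms; the weight and block parameters are placeholders, and the classifiers from the paper are not included.

        import numpy as np
        import pywt
        from skimage.feature import local_binary_pattern

        def haar_confidence(img, w_ll=0.6):
            """Haar-wavelet features as described in the abstract: the LL band and the
            average of the LH/HL/HH bands, combined by a weighted sum (weight value
            here is an arbitrary placeholder)."""
            LL, (LH, HL, HH) = pywt.dwt2(img, 'haar')
            detail_avg = (LH + HL + HH) / 3.0
            return w_ll * LL + (1.0 - w_ll) * detail_avg

        def blockwise_lbp(img, block=8, points=8, radius=1):
            """Concatenate LBP histograms of non-overlapping block x block patches
            (a generic version of the paper's 8 x 8 sub-image scheme)."""
            codes = local_binary_pattern(img, points, radius, method='uniform')
            n_bins = points + 2
            feats = []
            h, w = codes.shape
            for i in range(0, h - block + 1, block):
                for j in range(0, w - block + 1, block):
                    hist, _ = np.histogram(codes[i:i + block, j:j + block],
                                           bins=n_bins, range=(0, n_bins))
                    feats.append(hist)
            return np.concatenate(feats).astype(float)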

  15. Spatio-Temporal Dynamics of Face Recognition in a Flash: It's in the Eyes

    ERIC Educational Resources Information Center

    Vinette, Celine; Gosselin, Frederic; Schyns, Philippe G.

    2004-01-01

    We adapted the "Bubbles" procedure [Vis. Res. 41 (2001) 2261] to examine the effective use of information during the first 282ms of face identification. Ten participants each viewed a total of 5100 faces sub-sampled in space-time. We obtained a clear pattern of effective use of information: the eye on the left side of the image became diagnostic…

  16. Newborns' Face Recognition Is Based on Spatial Frequencies below 0.5 Cycles per Degree

    ERIC Educational Resources Information Center

    de Heering, Adelaide; Turati, Chiara; Rossion, Bruno; Bulf, Hermann; Goffaux, Valerie; Simion, Francesca

    2008-01-01

    A critical question in Cognitive Science concerns how knowledge of specific domains emerges during development. Here we examined how limitations of the visual system during the first days of life may shape subsequent development of face processing abilities. By manipulating the bands of spatial frequencies of face images, we investigated what is…

  17. Face puzzle—two new video-based tasks for measuring explicit and implicit aspects of facial emotion recognition

    PubMed Central

    Kliemann, Dorit; Rosenblau, Gabriela; Bölte, Sven; Heekeren, Hauke R.; Dziobek, Isabel

    2013-01-01

    Recognizing others' emotional states is crucial for effective social interaction. While most facial emotion recognition tasks use explicit prompts that trigger consciously controlled processing, emotional faces are almost exclusively processed implicitly in real life. Recent attempts in social cognition suggest a dual process perspective, whereby explicit and implicit processes largely operate independently. However, due to differences in methodology the direct comparison of implicit and explicit social cognition has remained a challenge. Here, we introduce a new tool to comparably measure implicit and explicit processing aspects comprising basic and complex emotions in facial expressions. We developed two video-based tasks with similar answer formats to assess performance in respective facial emotion recognition processes: Face Puzzle, implicit and explicit. To assess the tasks' sensitivity to atypical social cognition and to infer interrelationship patterns between explicit and implicit processes in typical and atypical development, we included healthy adults (NT, n = 24) and adults with autism spectrum disorder (ASD, n = 24). Item analyses yielded good reliability of the new tasks. Group-specific results indicated sensitivity to subtle social impairments in high-functioning ASD. Correlation analyses with established implicit and explicit socio-cognitive measures were further in favor of the tasks' external validity. Between group comparisons provide first hints of differential relations between implicit and explicit aspects of facial emotion recognition processes in healthy compared to ASD participants. In addition, an increased magnitude of between group differences in the implicit task was found for a speed-accuracy composite measure. The new Face Puzzle tool thus provides two new tasks to separately assess explicit and implicit social functioning, for instance, to measure subtle impairments as well as potential improvements due to social cognitive

  18. Face puzzle-two new video-based tasks for measuring explicit and implicit aspects of facial emotion recognition.

    PubMed

    Kliemann, Dorit; Rosenblau, Gabriela; Bölte, Sven; Heekeren, Hauke R; Dziobek, Isabel

    2013-01-01

    Recognizing others' emotional states is crucial for effective social interaction. While most facial emotion recognition tasks use explicit prompts that trigger consciously controlled processing, emotional faces are almost exclusively processed implicitly in real life. Recent attempts in social cognition suggest a dual process perspective, whereby explicit and implicit processes largely operate independently. However, due to differences in methodology the direct comparison of implicit and explicit social cognition has remained a challenge. Here, we introduce a new tool to comparably measure implicit and explicit processing aspects comprising basic and complex emotions in facial expressions. We developed two video-based tasks with similar answer formats to assess performance in respective facial emotion recognition processes: Face Puzzle, implicit and explicit. To assess the tasks' sensitivity to atypical social cognition and to infer interrelationship patterns between explicit and implicit processes in typical and atypical development, we included healthy adults (NT, n = 24) and adults with autism spectrum disorder (ASD, n = 24). Item analyses yielded good reliability of the new tasks. Group-specific results indicated sensitivity to subtle social impairments in high-functioning ASD. Correlation analyses with established implicit and explicit socio-cognitive measures were further in favor of the tasks' external validity. Between group comparisons provide first hints of differential relations between implicit and explicit aspects of facial emotion recognition processes in healthy compared to ASD participants. In addition, an increased magnitude of between group differences in the implicit task was found for a speed-accuracy composite measure. The new Face Puzzle tool thus provides two new tasks to separately assess explicit and implicit social functioning, for instance, to measure subtle impairments as well as potential improvements due to social cognitive

  19. Emotion recognition of static and dynamic faces in autism spectrum disorder.

    PubMed

    Enticott, Peter G; Kennedy, Hayley A; Johnston, Patrick J; Rinehart, Nicole J; Tonge, Bruce J; Taffe, John R; Fitzgerald, Paul B

    2014-01-01

    There is substantial evidence for facial emotion recognition (FER) deficits in autism spectrum disorder (ASD). The extent of this impairment, however, remains unclear, and there is some suggestion that clinical groups might benefit from the use of dynamic rather than static images. High-functioning individuals with ASD (n = 36) and typically developing controls (n = 36) completed a computerised FER task involving static and dynamic expressions of the six basic emotions. The ASD group showed poorer overall performance in identifying anger and disgust and were disadvantaged by dynamic (relative to static) stimuli when presented with sad expressions. Among both groups, however, dynamic stimuli appeared to improve recognition of anger. This research provides further evidence of specific impairment in the recognition of negative emotions in ASD, but argues against any broad advantages associated with the use of dynamic displays. PMID:24341852

  20. Viewpoint independent representation and recognition of polygonal faces in 3-D

    SciTech Connect

    Bunke, H.; Glauser, T.

    1993-08-01

    The recognition of polygons in 3-D space is an important task in robot vision. Two particular problems are addressed in the paper. First, a new set of local shape descriptors for polygons is proposed that is invariant under affine transformation. Furthermore, the descriptors are complete in the sense that they allow the reconstruction of any polygon in 3-D space from three consecutive vertices. The second problem discussed in this paper is the recognition of 2-D polygonal objects under affine transformation and in the presence of partial occlusion. A recognition procedure based on the matching of edge length ratios is introduced, using a simplified version of the standard dynamic programming procedure commonly employed for string matching. The algorithm is conceptually very simple, easy to implement and has a low computational complexity. It is shown in a set of experiments that the method is reliable and robust.
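
    The flavor of matching edge-length ratios with an edit-distance-style dynamic program can be sketched as below. The cost function and gap penalty here are illustrative assumptions and do not reproduce the paper's exact formulation or its affine-invariant descriptors.

      # Rough sketch: match two polygons by consecutive edge-length ratios.
      import numpy as np

      def edge_length_ratios(vertices):
          """Ratios of consecutive edge lengths for a closed polygon (N x 2 array)."""
          v = np.asarray(vertices, dtype=float)
          edges = np.roll(v, -1, axis=0) - v
          lengths = np.linalg.norm(edges, axis=1)
          return lengths / np.roll(lengths, -1)

      def match_cost(r1, r2, gap=1.0):
          """Edit-distance-style DP over two ratio sequences (lower = better match)."""
          n, m = len(r1), len(r2)
          d = np.zeros((n + 1, m + 1))
          d[:, 0] = np.arange(n + 1) * gap
          d[0, :] = np.arange(m + 1) * gap
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  sub = abs(r1[i - 1] - r2[j - 1])
                  d[i, j] = min(d[i - 1, j] + gap,      # skip an edge (e.g. occlusion)
                                d[i, j - 1] + gap,
                                d[i - 1, j - 1] + sub)  # align two edges
          return d[n, m]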

  1. Successful Face Recognition is Associated with Increased Prefrontal Cortex Activation in Autism Spectrum Disorder

    PubMed Central

    Herrington, John D.; Riley, Meghan E.; Grupe, Daniel W.; Schultz, Robert T.

    2014-01-01

    This study examines whether deficits in visual information processing in ASD can be offset by the recruitment of brain structures involved in selective attention. During functional MRI, 12 children with ASD and 19 control participants completed a selective attention one-back task in which images of faces and houses were superimposed. When attending to faces, the ASD group showed increased activation relative to control participants within multiple prefrontal cortex areas, including dorsolateral prefrontal cortex (DLPFC). DLPFC activation in ASD was associated with increased response times for faces. These data suggest that prefrontal cortex activation may represent a compensatory mechanism for diminished visual information processing abilities. PMID:25234479

  2. Why We Respond Faster to the Self than to Others? An Implicit Positive Association Theory of Self-Advantage during Implicit Face Recognition

    ERIC Educational Resources Information Center

    Ma, Yina; Han, Shihui

    2010-01-01

    Human adults usually respond faster to their own faces rather than to those of others. We tested the hypothesis that an implicit positive association (IPA) with self mediates self-advantage in face recognition through 4 experiments. Using a self-concept threat (SCT) priming that associated the self with negative personal traits and led to a…

  3. Moving human full body and body parts detection, tracking, and applications on human activity estimation, walking pattern and face recognition

    NASA Astrophysics Data System (ADS)

    Chen, Hai-Wen; McGurr, Mike

    2016-05-01

    We have developed a new way for detection and tracking of human full-body and body-parts with color (intensity) patch morphological segmentation and adaptive thresholding for security surveillance cameras. An adaptive threshold scheme has been developed for dealing with body size changes, illumination condition changes, and cross-camera parameter changes. Tests with the PETS 2009 and 2014 datasets show that we can obtain a high probability of detection and a low probability of false alarm for the full body. Test results indicate that our human full-body detection method can considerably outperform the current state-of-the-art methods in both detection performance and computational complexity. Furthermore, in this paper, we have developed several methods using color features for detection and tracking of human body-parts (arms, legs, torso, head, etc.). For example, we have developed a human skin color sub-patch segmentation algorithm by first conducting an RGB to YIQ transformation and then applying a subtractive I/Q image fusion with morphological operations. With this method, we can reliably detect and track human skin color related body-parts such as face, neck, arms, and legs. Reliable body-part (e.g. head) detection allows us to continuously track the individual person even when multiple closely spaced persons are merged. Accordingly, we have developed a new algorithm to split a merged detection blob back into individual detections based on the detected head positions. Detected body-parts also allow us to extract important local constellation features of the body-part positions and angles relative to the full body. These features are useful for human walking gait pattern recognition and human pose (e.g. standing or falling down) estimation for potential abnormal behavior and accidental event detection, as evidenced by our experimental tests. Furthermore, based on the reliable head (face) tracking, we have applied a super-resolution algorithm to enhance
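
    The RGB-to-YIQ conversion and subtractive I/Q fusion mentioned above can be sketched as follows, using the standard NTSC YIQ matrix. The fusion rule (I minus Q), the fixed threshold and the morphological cleanup are assumptions for illustration, not the authors' exact algorithm.

      # Sketch of skin-candidate segmentation via YIQ and subtractive I/Q fusion.
      import numpy as np
      from scipy import ndimage

      YIQ = np.array([[0.299,  0.587,  0.114],
                      [0.596, -0.274, -0.322],
                      [0.211, -0.523,  0.312]])

      def skin_mask(rgb, thresh=0.05):
          """rgb: HxWx3 float image in [0, 1]; returns a boolean skin-candidate mask."""
          yiq = rgb @ YIQ.T
          fused = yiq[..., 1] - yiq[..., 2]           # subtractive I/Q fusion (assumed I - Q)
          mask = fused > thresh                       # fixed threshold, illustrative only
          mask = ndimage.binary_opening(mask, iterations=2)   # remove small speckle
          return ndimage.binary_closing(mask, iterations=2)   # fill small holes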

  4. Beyond the Memory Mechanism: Person-Selective and Nonselective Processes in Recognition of Personally Familiar Faces

    ERIC Educational Resources Information Center

    Sugiura, Motoaki; Mano, Yoko; Sasaki, Akihiro; Sadato, Norihiro

    2011-01-01

    Special processes recruited during the recognition of personally familiar people have been assumed to reflect the rich episodic and semantic information that selectively represents each person. However, the processes may also include person nonselective ones, which may require interpretation in terms beyond the memory mechanism. To examine this…

  5. The Generation and Resemblance Heuristics in Face Recognition: Cooperation and Competition

    ERIC Educational Resources Information Center

    Kleider, Heather M.; Goldinger, Stephen D.

    2006-01-01

    Like all probabilistic decisions, recognition memory judgments are based on inferences about the strength and quality of stimulus familiarity. In recent articles, B. W. A. Whittlesea and J. Leboe (2000; J. Leboe & B. W. A. Whittlesea, 2002) proposed that such memory decisions entail various heuristics, similar to well-known heuristics in overt…

  6. Eye Remember You Two: Gaze Direction Modulates Face Recognition in a Developmental Study

    ERIC Educational Resources Information Center

    Smith, Alastair D.; Hood, Bruce M.; Hector, Karen

    2006-01-01

    The effects of gaze direction on memory for faces were studied in children from three different age groups (6-7, 8-9, and 10-11 years old) using a computerized version of a task devised by Hood, Macrae, Cole-Davies and Dias (2003). Participants were presented with a sequence of faces in an encoding phase, and were then required to judge which…

  7. Face Recognition and Visual Search Strategies in Autism Spectrum Disorders: Amending and Extending a Recent Review by Weigelt et al.

    PubMed Central

    2015-01-01

    The purpose of this review was to build upon a recent review by Weigelt et al. which examined visual search strategies and face identification between individuals with autism spectrum disorders (ASD) and typically developing peers. Seven databases, CINAHL Plus, EMBASE, ERIC, Medline, Proquest, PsychInfo and PubMed were used to locate published scientific studies matching our inclusion criteria. A total of 28 articles not included in Weigelt et al. met criteria for inclusion into this systematic review. Of these 28 studies, 16 were available and met criteria at the time of the previous review, but were mistakenly excluded; and twelve were recently published. Weigelt et al. found quantitative, but not qualitative, differences in face identification in individuals with ASD. In contrast, the current systematic review found both qualitative and quantitative differences in face identification between individuals with and without ASD. There is a large inconsistency in findings across the eye tracking and neurobiological studies reviewed. Recommendations for future research in face recognition in ASD were discussed. PMID:26252877

  8. Emotion recognition through static faces and moving bodies: a comparison between typically developed adults and individuals with high level of autistic traits

    PubMed Central

    Actis-Grosso, Rossana; Bossi, Francesco; Ricciardelli, Paola

    2015-01-01

    We investigated whether the type of stimulus (pictures of static faces vs. body motion) contributes differently to the recognition of emotions. The performance (accuracy and response times) of 25 Low Autistic Traits (LAT group) young adults (21 males) and 20 young adults (16 males) with either High Autistic Traits or with High Functioning Autism Spectrum Disorder (HAT group) was compared in the recognition of four emotions (Happiness, Anger, Fear, and Sadness) either shown in static faces or conveyed by moving body patch-light displays (PLDs). Overall, HAT individuals were as accurate as LAT ones in perceiving emotions both with faces and with PLDs. Moreover, they correctly described non-emotional actions depicted by PLDs, indicating that they perceived the motion conveyed by the PLDs per se. For LAT participants, happiness proved to be the easiest emotion to be recognized: in line with previous studies we found a happy face advantage for faces, which for the first time was also found for bodies (happy body advantage). Furthermore, LAT participants recognized sadness better by static faces and fear by PLDs. This advantage for motion kinematics in the recognition of fear was not present in HAT participants, suggesting that (i) emotion recognition is not generally impaired in HAT individuals, (ii) the cues exploited for emotion recognition by LAT and HAT groups are not always the same. These findings are discussed against the background of emotional processing in typically and atypically developed individuals. PMID:26557101

  9. Emotion recognition through static faces and moving bodies: a comparison between typically developed adults and individuals with high level of autistic traits.

    PubMed

    Actis-Grosso, Rossana; Bossi, Francesco; Ricciardelli, Paola

    2015-01-01

    We investigated whether the type of stimulus (pictures of static faces vs. body motion) contributes differently to the recognition of emotions. The performance (accuracy and response times) of 25 Low Autistic Traits (LAT group) young adults (21 males) and 20 young adults (16 males) with either High Autistic Traits or with High Functioning Autism Spectrum Disorder (HAT group) was compared in the recognition of four emotions (Happiness, Anger, Fear, and Sadness) either shown in static faces or conveyed by moving body patch-light displays (PLDs). Overall, HAT individuals were as accurate as LAT ones in perceiving emotions both with faces and with PLDs. Moreover, they correctly described non-emotional actions depicted by PLDs, indicating that they perceived the motion conveyed by the PLDs per se. For LAT participants, happiness proved to be the easiest emotion to be recognized: in line with previous studies we found a happy face advantage for faces, which for the first time was also found for bodies (happy body advantage). Furthermore, LAT participants recognized sadness better by static faces and fear by PLDs. This advantage for motion kinematics in the recognition of fear was not present in HAT participants, suggesting that (i) emotion recognition is not generally impaired in HAT individuals, (ii) the cues exploited for emotion recognition by LAT and HAT groups are not always the same. These findings are discussed against the background of emotional processing in typically and atypically developed individuals. PMID:26557101

  10. Neural Correlates of the In-Group Memory Advantage on the Encoding and Recognition of Faces

    PubMed Central

    Herzmann, Grit; Curran, Tim

    2013-01-01

    People have a memory advantage for faces that belong to the same group, for example, that attend the same university or have the same personality type. Faces from such in-group members are assumed to receive more attention during memory encoding and are therefore recognized more accurately. Here we use event-related potentials related to memory encoding and retrieval to investigate the neural correlates of the in-group memory advantage. Using the minimal group procedure, subjects were classified based on a bogus personality test as belonging to one of two personality types. While the electroencephalogram was recorded, subjects studied and recognized faces supposedly belonging to the subject’s own and the other personality type. Subjects recognized in-group faces more accurately than out-group faces but the effect size was small. Using the individual behavioral in-group memory advantage in multivariate analyses of covariance, we determined neural correlates of the in-group advantage. During memory encoding (300 to 1000 ms after stimulus onset), subjects with a high in-group memory advantage elicited more positive amplitudes for subsequently remembered in-group than out-group faces, showing that in-group faces received more attention and elicited more neural activity during initial encoding. Early during memory retrieval (300 to 500 ms), frontal brain areas were more activated for remembered in-group faces indicating an early detection of group membership. Surprisingly, the parietal old/new effect (600 to 900 ms) thought to indicate recollection processes differed between in-group and out-group faces independent from the behavioral in-group memory advantage. This finding suggests that group membership affects memory retrieval independent of memory performance. Comparisons with a previous study on the other-race effect, another memory phenomenon influenced by social classification of faces, suggested that the in-group memory advantage is dominated by top

  11. Subjective face recognition difficulties, aberrant sensibility, sleeping disturbances and aberrant eating habits in families with Asperger syndrome

    PubMed Central

    Nieminen-von Wendt, Taina; Paavonen, Juulia E; Ylisaukko-Oja, Tero; Sarenius, Susan; Källman, Tiia; Järvelä, Irma; von Wendt, Lennart

    2005-01-01

    Background The present study was undertaken in order to determine whether a set of clinical features, which are not included in the DSM-IV or ICD-10 for Asperger Syndrome (AS), are associated with AS in particular or whether they are merely a familial trait that is not related to the diagnosis. Methods Ten large families, a total of 138 persons, of whom 58 individuals fulfilled the diagnostic criteria for AS and another 56 did not fulfill these criteria, were studied using a structured interview focusing on the possible presence of face recognition difficulties, aberrant sensibility, aberrant eating habits and sleeping disturbances. Results The prevalence of face recognition difficulties was 46.6% in individuals with AS compared with 10.7% in the control group. The corresponding figures for subjectively reported presence of aberrant sensibilities were 91.4% and 46.6%, for sleeping disturbances 48.3% and 23.2%, and for aberrant eating habits 60.3% and 14.3%, respectively. Conclusion An aberrant processing of sensory information appears to be a common feature in AS. The impact of these and other clinical features that are not incorporated in the ICD-10 and DSM-IV on our understanding of AS may hitherto have been underestimated. These associated clinical traits may well be reflected by the behavioural characteristics of these individuals. PMID:15826308

  12. Comparative performance between human and automated face recognition systems, using CCTV imagery, different compression levels and scene parameters

    NASA Astrophysics Data System (ADS)

    Tsifouti, A.; Triantaphillidou, S.; Larabi, M.-C.; Bilissi, E.; Psarrou, A.

    2015-01-01

    In this investigation we identify relationships between human and automated face recognition systems with respect to compression. Further, we identify the most influential scene parameters on the performance of each recognition system. The work includes testing of the systems with compressed Closed-Circuit Television (CCTV) footage, consisting of quantified scene (footage) parameters. Parameters describe the content of scenes concerning camera to subject distance, facial angle, scene brightness, and spatio-temporal busyness. These parameters have been previously shown to affect the human visibility of useful facial information, but not much work has been carried out to assess the influence they have on automated recognition systems. In this investigation, the methodology previously employed in the human investigation is adopted, to assess performance of three different automated systems: Principal Component Analysis, Linear Discriminant Analysis, and Kernel Fisher Analysis. Results show that the automated systems are more tolerant to compression than humans. In automated systems, mixed brightness scenes were the most affected and low brightness scenes were the least affected by compression. In contrast for humans, low brightness scenes were the most affected and medium brightness scenes the least affected. Findings have the potential to broaden the methods used for testing imaging systems for security applications.
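
    The three automated systems evaluated (Principal Component Analysis, Linear Discriminant Analysis and Kernel Fisher Analysis) can be approximated with off-the-shelf scikit-learn pipelines, as sketched below. This is not the authors' implementation: the component counts and the nearest-neighbour classifier are assumptions, and Kernel Fisher Analysis is approximated here by kernel PCA followed by LDA.

      # Illustrative PCA / LDA / approximate-KFA face recognition pipelines.
      from sklearn.pipeline import make_pipeline
      from sklearn.decomposition import PCA, KernelPCA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.neighbors import KNeighborsClassifier

      eigenfaces  = make_pipeline(PCA(n_components=100), KNeighborsClassifier(1))
      fisherfaces = make_pipeline(PCA(n_components=100),
                                  LinearDiscriminantAnalysis(),
                                  KNeighborsClassifier(1))
      kernel_fisher = make_pipeline(KernelPCA(n_components=100, kernel="rbf"),
                                    LinearDiscriminantAnalysis(),
                                    KNeighborsClassifier(1))

      # Given flattened face images X_* and identity labels y_*:
      # for name, clf in [("PCA", eigenfaces), ("LDA", fisherfaces), ("KFA", kernel_fisher)]:
      #     clf.fit(X_train, y_train); print(name, clf.score(X_test, y_test))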

  13. Optoelectronic face recognition system using diffractive optical elements: design and evaluation of compact parallel joint transform correlator (COPaC)

    NASA Astrophysics Data System (ADS)

    Kodate, Kashiko; Watanabe, Eriko; Inaba, Rieko

    2001-12-01

    Individual identification based on biological characteristics such as fingerprint, iris and countenance is regarded as a highly essential technique in security systems. As a simple and rapid recognition system satisfying the required performance, we have proposed an opto-electronic system which combines a parallel joint transform correlator (PJTC) with a personal computer. In this paper, the PJTC method, using a newly designed multiple diffractive optical element as a Fourier transform lens, was reviewed and proved to be one of the most practical optical computers for face recognition. Furthermore, based on these first trial results, we proposed the design and fabrication of a portable compact PJTC (COPaC), whose size is 23 × 15 × 16.3 cm and whose weight is 4 kg. We obtained high-accuracy performance for one-to-one correlation using 300 frontal facial images in a database and proved its practicability. Additionally, we performed experiments on ID-less discrimination of twins who look alike to the human eye. A successful recognition rate was obtained, indicating the system's excellent performance and feasibility.

  14. Successful face recognition is associated with increased prefrontal cortex activation in autism spectrum disorder.

    PubMed

    Herrington, John D; Riley, Meghan E; Grupe, Daniel W; Schultz, Robert T

    2015-04-01

    This study examines whether deficits in visual information processing in autism-spectrum disorder (ASD) can be offset by the recruitment of brain structures involved in selective attention. During functional MRI, 12 children with ASD and 19 control participants completed a selective attention one-back task in which images of faces and houses were superimposed. When attending to faces, the ASD group showed increased activation relative to control participants within multiple prefrontal cortex areas, including dorsolateral prefrontal cortex (DLPFC). DLPFC activation in ASD was associated with increased response times for faces. These data suggest that prefrontal cortex activation may represent a compensatory mechanism for diminished visual information processing abilities in ASD. PMID:25234479

  15. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks

    PubMed Central

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-01-01

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the “small sample size” (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0–1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for the ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system. PMID:25494350
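
    The idea of selecting base classifiers with a 0–1 knapsack can be sketched generically as below, where each classifier's value is its validation accuracy and its weight is an integer redundancy score against the pool, with a total diversity budget as the capacity. This is a generic sketch of the idea, not the paper's tailored algorithm, and the example numbers are made up.

      # Illustrative 0-1 knapsack selection of base classifiers.
      def knapsack_select(values, weights, capacity):
          """Classic 0-1 knapsack DP; returns indices of the selected classifiers."""
          n = len(values)
          best = [[0.0] * (capacity + 1) for _ in range(n + 1)]
          for i in range(1, n + 1):
              for c in range(capacity + 1):
                  best[i][c] = best[i - 1][c]
                  if weights[i - 1] <= c:
                      cand = best[i - 1][c - weights[i - 1]] + values[i - 1]
                      if cand > best[i][c]:
                          best[i][c] = cand
          chosen, c = [], capacity          # trace back the chosen items
          for i in range(n, 0, -1):
              if best[i][c] != best[i - 1][c]:
                  chosen.append(i - 1)
                  c -= weights[i - 1]
          return sorted(chosen)

      # Example: five classifiers with accuracies (values), redundancy weights,
      # and a diversity budget of 6.
      # knapsack_select([0.81, 0.78, 0.83, 0.80, 0.76], [3, 2, 4, 2, 1], 6)  # -> [0, 3, 4]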

  16. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    PubMed

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-01-01

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for the ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system. PMID:25494350

  17. Recognition of Identity and Expression in Faces by Children with Down Syndrome.

    ERIC Educational Resources Information Center

    Wishart, Jennifer G.; Pitcairn, T. K.

    2000-01-01

    The ability of 16 children (ages 8-14) with Down syndrome, 16 age-matched children with nonspecific developmental delay, and 23 younger controls to recognize facial identity and expression was examined. Children with Down syndrome were equally proficient at recognizing unfamiliar faces when expression was varied but significantly poorer at…

  18. Face Recognition and Description Abilities in People with Mild Intellectual Disabilities

    ERIC Educational Resources Information Center

    Gawrylowicz, Julie; Gabbert, Fiona; Carson, Derek; Lindsay, William R.; Hancock, Peter J. B.

    2013-01-01

    Background: People with intellectual disabilities (ID) are as likely as the general population to find themselves in the situation of having to identify and/or describe a perpetrator's face to the police. However, limited verbal and memory abilities in people with ID might prevent them from engaging in standard police procedures. Method: Two…

  19. Recognition bias for critical faces in social phobia: a replication and extension.

    PubMed

    Coles, Meredith E; Heimberg, Richard G

    2005-01-01

    Studies using linguistic stimuli have provided little support for explicit memory biases among individuals with social phobia (SP). However, using facial stimuli rated on their criticalness, Lundh and Ost (1996) found that individuals with SP recognized more critical than accepting faces, whereas non-anxious controls tended to show the opposite pattern. Since the publication of Lundh and Ost's findings, additional studies using a variety of facial stimuli have produced inconsistent findings (J. Anxiety Disord. 14 (2000) 501; Behav. Res. Ther. 39 (2001) 967). Unfortunately, these inconsistencies are difficult to reconcile given great variation in methods and stimuli. Therefore, we designed a study to replicate and extend the work of Lundh and Ost (Behav. Res. Ther. 34 (1996) 787). Similar to Lundh and Ost, individuals with SP identified a significantly higher proportion of old critical faces as old than did non-anxious controls. Further, extending the work of Lundh and Ost, signal detection analyses revealed group differences on response bias according to face type. Specifically, controls showed a response bias towards indicating that accepting faces were previously seen, whereas individuals with SP did not. Finally, signal detection analyses failed to reveal group differences in the accuracy of memory. PMID:15531356
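
    The signal detection measures referred to above, sensitivity (d') and response bias (criterion c), follow directly from hit and false-alarm rates. The formulas below are the standard ones; the example rates are invented purely for illustration.

      # Standard signal detection measures from hit and false-alarm rates.
      from scipy.stats import norm

      def dprime_and_c(hit_rate, fa_rate):
          z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
          d_prime = z_hit - z_fa              # sensitivity
          criterion = -0.5 * (z_hit + z_fa)   # response bias (negative = liberal)
          return d_prime, criterion

      # e.g. a participant who calls 80% of old faces "old" and 30% of new faces "old":
      # dprime_and_c(0.80, 0.30)  # -> (about 1.37, about -0.16)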

  20. Successful Face Recognition Is Associated with Increased Prefrontal Cortex Activation in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Herrington, John D.; Riley, Meghan E.; Grupe, Daniel W.; Schultz, Robert T.

    2015-01-01

    This study examines whether deficits in visual information processing in autism-spectrum disorder (ASD) can be offset by the recruitment of brain structures involved in selective attention. During functional MRI, 12 children with ASD and 19 control participants completed a selective attention one-back task in which images of faces and houses were…