Science.gov

Sample records for 3d face recognition

  1. Random-Profiles-Based 3D Face Recognition System

    PubMed Central

    Joongrock, Kim; Sunjin, Yu; Sangyoun, Lee

    2014-01-01

In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition depends heavily on the precision of the acquired 3D face data, and it requires more computational power and storage capacity than 2D face recognition. In this paper, we present a nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50,000 3D points and that the method achieves a reliable recognition rate under pose variation. PMID:24691101
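
The core geometric step in such a stereo-plus-line-laser system is triangulation of each profile point from its disparity. A minimal sketch, with illustrative camera numbers that are not from the paper:

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    # Rectified-stereo triangulation: Z = f * B / d. The laser line leaves one
    # bright profile per frame, so its disparity is easy to localise.
    return focal_px * baseline_m / disparity_px

# Assumed values: 800 px focal length, 10 cm baseline, and a profile point
# seen 40 px apart in the two views.
z = stereo_depth(40.0, 800.0, 0.1)
```

Sweeping the laser over the face and triangulating each profile yields the dense point cloud the abstract describes.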

  2. A review of recent advances in 3D face recognition

    NASA Astrophysics Data System (ADS)

    Luo, Jing; Geng, Shuze; Xiao, Zhaoxia; Xiu, Chunbo

    2015-03-01

Face recognition based on machine vision has made great advances and is widely used in various fields. However, face recognition still faces challenges such as facial pose, variations in illumination, and facial expression. This paper therefore reviews recent advances in 3D face recognition. 3D face recognition approaches are categorized into four groups: minutiae approaches, space transform approaches, geometric feature approaches, and model-based approaches. Several typical approaches are compared in detail with respect to feature extraction, the recognition algorithm, and algorithm performance. Finally, the paper summarizes the challenges remaining in 3D face recognition and future trends. It aims to help researchers working on face recognition.

  3. 3D face recognition based on matching of facial surfaces

    NASA Astrophysics Data System (ADS)

    Echeagaray-Patrón, Beatriz A.; Kober, Vitaly

    2015-09-01

Face recognition is an important task in pattern recognition and computer vision. In this work a method for 3D face recognition in the presence of facial expression and pose variations is proposed. The method uses 3D shape data without color or texture information. A new matching algorithm based on conformal mapping of original facial surfaces onto a Riemannian manifold, followed by comparison of conformal and isometric invariants computed in the manifold, is suggested. Experimental results are presented using common 3D face databases that contain a significant amount of expression and pose variation.

  4. The use of 3D information in face recognition.

    PubMed

    Liu, Chang Hong; Ward, James

    2006-03-01

    Effects of shading in face recognition have often alluded to 3D shape processing. However, research to date has failed to demonstrate any use of important 3D information. Stereopsis adds no advantage in face encoding [Liu, C. H., Ward, J., & Young, A. W. (in press). Transfer between 2D and 3D representations of faces. Visual Cognition], and perspective transformation impairs rather than assists recognition performance [Liu, C. H. (2003). Is face recognition in pictures affected by the center of projection? In IEEE international workshop on analysis and modeling of faces and gestures (pp. 53-59). Nice, France: IEEE Computer Society]. Although evidence tends to rule out involvement of 3D information in face processing, it remains possible that the usefulness of this information depends on certain combinations of cues. We tested this hypothesis in a recognition task, where face stimuli with several levels of perspective transformation were either presented in stereo or without stereo. We found that even at a moderate level of perspective transformation where training and test faces were separated by just 30 cm, the stereo condition produced better performance. This provides the first evidence that stereo information can facilitate face recognition. We conclude that 3D information plays a role in face processing but only when certain types of 3D cues are properly combined. PMID:16298412

  5. 3D face recognition based on a modified ICP method

    NASA Astrophysics Data System (ADS)

    Zhao, Kankan; Xi, Jiangtao; Yu, Yanguang; Chicharo, Joe F.

    2011-11-01

3D face recognition has gained much attention recently and is widely used in security, identification, and access control systems. The core problem in 3D face recognition is finding the corresponding points in different 3D face images. The classic partial Iterative Closest Point (ICP) method iteratively aligns two point sets by repeatedly taking the closest points as corresponding points in each iteration. After several iterations, the corresponding points can be obtained accurately. However, if two 3D face images of the same person have different scales, the classic partial ICP does not work. In this paper we propose a modified partial ICP method that accounts for scale to achieve 3D face recognition. We introduce a 3x3 diagonal scale matrix in each iteration of the classic partial ICP. The probe face image, multiplied by the scale matrix, keeps a similar scale to the reference face image, so the corresponding points can be determined accurately even when the scales of the probe and reference images differ. The 3D face images in our experiments were acquired by a 3D data acquisition system based on Digital Fringe Projection Profilometry (DFPP). The 3D database consists of 30 groups of images; each group contains three images of the same scale from the same person at different views, and the scale may differ between groups. The experimental results show that the proposed method achieves 3D face recognition, particularly in the case where the probe and reference images have different scales.
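
The scale-aware ICP iteration described above can be sketched in a few lines. This is a hedged toy: brute-force matching, an assumed per-axis least-squares scale estimator (the abstract does not give its exact form), and no rotation or translation update, which the full method would include:

```python
import numpy as np

def closest_points(probe, ref):
    # Brute-force nearest neighbour: for each probe point, its closest reference point.
    d = np.linalg.norm(probe[:, None, :] - ref[None, :, :], axis=2)
    return ref[np.argmin(d, axis=1)]

def scaled_icp(probe, ref, iters=10):
    # Each iteration re-pairs points, then applies a 3x3 diagonal scale matrix.
    probe = probe.copy()
    for _ in range(iters):
        matched = closest_points(probe, ref)
        # s_j minimises sum_i (s_j * p_ij - m_ij)^2  ->  s_j = <p_j, m_j> / <p_j, p_j>
        s = (probe * matched).sum(axis=0) / (probe * probe).sum(axis=0)
        probe = probe * s  # apply S = diag(s)
    return probe, s

# Toy check: a probe that is the reference shrunk to 90% on a well-separated grid,
# so the initial nearest-neighbour pairing is already correct.
g = np.array([-10.0, 0.0, 10.0])
ref = np.array([[x, y, z] for x in g for y in g for z in g])
aligned, s = scaled_icp(0.9 * ref, ref)
```

After convergence the estimated diagonal scale settles at the identity and the probe coincides with the reference.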

  6. Iterative closest normal point for 3D face recognition.

    PubMed

    Mohammadzade, Hoda; Hatzinakos, Dimitrios

    2013-02-01

The common approach for 3D face recognition is to register a probe face to each of the gallery faces and then calculate the sum of the distances between their points. This approach is computationally expensive and sensitive to facial expression variation. In this paper, we introduce the iterative closest normal point method for finding the corresponding points between a generic reference face and every input face. The proposed correspondence finding method samples a set of points for each face, denoted as the closest normal points. These points are effectively aligned across all faces, enabling effective application of discriminant analysis methods for 3D face recognition. As a result, the expression variation problem is addressed by minimizing the within-class variability of the face samples while maximizing the between-class variability. As an important conclusion, we show that the surface normal vectors of the face at the sampled points contain more discriminatory information than the coordinates of the points. We have performed comprehensive experiments on the Face Recognition Grand Challenge database, which is presently the largest available 3D face database. We have achieved verification rates of 99.6 and 99.2 percent at a false acceptance rate of 0.1 percent for the all versus all and ROC III experiments, respectively, which, to the best of our knowledge, correspond to error rates seven and four times lower, respectively, than those of the best existing methods on this database. PMID:22585097
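
The discriminant-analysis step on aligned normal-vector features can be illustrated with a two-class Fisher discriminant. Everything here is invented for illustration (the feature layout, the identities, the noise level); it only shows the within-class/between-class trade-off the abstract refers to:

```python
import numpy as np

def fisher_direction(X1, X2):
    # Two-class Fisher discriminant: w = Sw^-1 (m1 - m2), with a small ridge
    # for numerical stability. Maximises between-class over within-class scatter.
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = np.cov(X1, rowvar=False) * (len(X1) - 1) + np.cov(X2, rowvar=False) * (len(X2) - 1)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m2)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(1)
# Hypothetical features: normals at 3 corresponding points, stacked into
# 9-vectors, for two identities (50 scans each).
A = rng.normal([1, 0, 0, 0, 1, 0, 0, 0, 1], 0.1, size=(50, 9))
B = rng.normal([0.8, 0.2, 0, 0, 1, 0.2, 0.2, 0, 1], 0.1, size=(50, 9))
w = fisher_direction(A, B)
```

Projecting scans onto `w` separates the two identities even though individual components overlap.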

  7. Color constancy in 3D-2D face recognition

    NASA Astrophysics Data System (ADS)

    Meyer, Manuel; Riess, Christian; Angelopoulou, Elli; Evangelopoulos, Georgios; Kakadiaris, Ioannis A.

    2013-05-01

The face is one of the most popular biometric modalities. However, up to now, color has rarely been actively used in face recognition. Yet, it is well known that when a person recognizes a face, color cues can become as important as shape, especially when combined with the ability of people to identify the color of objects independent of illuminant color variations. In this paper, we examine the feasibility and effect of explicitly embedding illuminant color information in face recognition systems. We empirically examine the theoretical maximum gain of adding known illuminant color to a 3D-2D face recognition system. We also investigate the impact of using computational color constancy methods for estimating the illuminant color, which is then incorporated into the face recognition framework. Our experiments show that under close-to-ideal illumination estimates, one can improve face recognition rates by 16%. When the illuminant color is algorithmically estimated, the improvement is approximately 5%. These results suggest that color constancy has a positive impact on face recognition, but the accuracy of the illuminant color estimate has a considerable effect on its benefits.
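
The simplest of the computational color constancy methods the abstract alludes to is the gray-world estimate. A sketch with an invented toy scene (the paper does not say which estimators it evaluated):

```python
import numpy as np

def gray_world_illuminant(image):
    # Gray-world assumption: the average reflectance of the scene is achromatic,
    # so the mean RGB vector points in the direction of the illuminant colour.
    est = image.reshape(-1, 3).mean(axis=0)
    return est / np.linalg.norm(est)

# Toy scene: random reflectances lit by a reddish illuminant.
rng = np.random.default_rng(0)
reflectance = rng.uniform(0.2, 0.8, size=(32, 32, 3))
light = np.array([0.8, 0.5, 0.33])
image = reflectance * light
est = gray_world_illuminant(image)
```

The estimated chromaticity tracks the true illuminant direction, which can then be fed to the recognition framework as described.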

  8. 3D fast wavelet network model-assisted 3D face recognition

    NASA Astrophysics Data System (ADS)

    Said, Salwa; Jemai, Olfa; Zaied, Mourad; Ben Amar, Chokri

    2015-12-01

In recent years, 3D shape has emerged in face recognition because of its robustness to pose and illumination changes. These benefits do not remove all the obstacles to a satisfactory recognition rate; other challenges, such as facial expressions and the computing time of matching algorithms, remain to be explored. In this context, we propose a 3D face recognition approach using 3D wavelet networks. Our approach contains two stages: a learning stage and a recognition stage. For training, we propose a novel algorithm based on the 3D fast wavelet transform. From the 3D coordinates of the face (x, y, z), we perform voxelization to obtain a 3D volume, which is decomposed by the 3D fast wavelet transform and then modeled with a wavelet network; the associated weights are taken as the feature vector representing each training face. In the recognition stage, a face of unknown identity is projected onto all the training wavelet networks, yielding a new feature vector after each projection. A similarity score is computed between the original and the obtained feature vectors. To show the efficiency of our approach, experiments were performed on the full FRGC v.2 benchmark.
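
One level of the 3D fast wavelet transform applied to the voxelized face can be sketched with separable Haar filters; this is a minimal stand-in, not the paper's wavelet family:

```python
import numpy as np

def haar3d_level(vol):
    # One level of a separable 3D Haar transform: along each axis, replace
    # sample pairs with scaled sums (approximation) and differences (detail),
    # producing 8 subbands. Axis lengths must be even.
    s = 1.0 / np.sqrt(2.0)
    for axis in range(3):
        v = np.moveaxis(vol, axis, 0)
        lo = s * (v[0::2] + v[1::2])
        hi = s * (v[0::2] - v[1::2])
        vol = np.moveaxis(np.concatenate([lo, hi], axis=0), 0, axis)
    return vol

vol = np.random.default_rng(0).normal(size=(8, 8, 8))
coeffs = haar3d_level(vol)
```

Because the transform is orthonormal it preserves energy, so the coefficients are a lossless re-representation of the volume that a wavelet network can then model.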

  9. Learning the spherical harmonic features for 3-D face recognition.

    PubMed

    Liu, Peijiang; Wang, Yunhong; Huang, Di; Zhang, Zhaoxiang; Chen, Liming

    2013-03-01

    In this paper, a competitive method for 3-D face recognition (FR) using spherical harmonic features (SHF) is proposed. With this solution, 3-D face models are characterized by the energies contained in spherical harmonics with different frequencies, thereby enabling the capture of both gross shape and fine surface details of a 3-D facial surface. This is in clear contrast to most 3-D FR techniques which are either holistic or feature based, using local features extracted from distinctive points. First, 3-D face models are represented in a canonical representation, namely, spherical depth map, by which SHF can be calculated. Then, considering the predictive contribution of each SHF feature, especially in the presence of facial expression and occlusion, feature selection methods are used to improve the predictive performance and provide faster and more cost-effective predictors. Experiments have been carried out on three public 3-D face datasets, SHREC2007, FRGC v2.0, and Bosphorus, with increasing difficulties in terms of facial expression, pose, and occlusion, and which demonstrate the effectiveness of the proposed method. PMID:23060332
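
The per-degree energy features described above can be sketched directly: expand a function on the sphere in spherical harmonics and sum |c_lm|^2 over each degree. The quadrature and grid below are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np
from scipy.special import sph_harm

def sh_energies(f, theta, phi, lmax):
    # Energy per spherical-harmonic degree: E_l = sum_m |c_lm|^2, with the
    # coefficients c_lm approximated by simple grid quadrature over the sphere.
    dtheta = theta[0, 1] - theta[0, 0]
    dphi = phi[1, 0] - phi[0, 0]
    dOmega = np.sin(phi) * dtheta * dphi
    energies = np.zeros(lmax + 1)
    for l in range(lmax + 1):
        for m in range(-l, l + 1):
            c = np.sum(f * np.conj(sph_harm(m, l, theta, phi)) * dOmega)
            energies[l] += np.abs(c) ** 2
    return energies

# Grid in scipy's convention: theta is azimuth, phi is the polar angle;
# midpoint sampling in phi avoids the poles.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
p = np.linspace(0, np.pi, 64, endpoint=False) + np.pi / 128
theta, phi = np.meshgrid(t, p)
f = sph_harm(0, 2, theta, phi).real  # a pure degree-2 "depth map"
energies = sh_energies(f, theta, phi, lmax=4)
```

Low degrees capture gross shape and high degrees fine detail, which is why truncating or selecting among these energies acts as the feature selection the abstract describes.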

  10. Objective 3D face recognition: Evolution, approaches and challenges.

    PubMed

    Smeets, Dirk; Claes, Peter; Vandermeulen, Dirk; Clement, John Gerald

    2010-09-10

Face recognition is a natural human ability and a widely accepted identification and authentication method. In modern legal settings, a lot of credence is placed on identifications made by eyewitnesses. However, these are based on human perception, which is often flawed and can lead to situations where identity is disputed. Therefore, there is a clear need to secure identifications in an objective way based on anthropometric measures. Anthropometry has existed for many years and has evolved with each advent of new technology and computing power. As a result, face recognition methodology has shifted from a purely 2D image-based approach to the use of 3D facial shape. However, one of the main challenges still remaining is the non-rigid structure of the face, which can change permanently over varying time-scales and briefly with facial expressions. The majority of face recognition methods have been developed by scientists with a very technical background in fields such as biometry, pattern recognition and computer vision. This article strives to bridge the gap between these communities and the forensic science end-users. A concise review of face recognition using 3D shape is given. Methods using 3D shape applied to data embodying facial expressions are tabulated for reference. From this list a categorization of different strategies to deal with expressions is presented. The underlying concepts and practical issues relating to the application of each strategy are given, without going into technical details. The discussion clearly articulates the justification to establish archival, reference databases to compare and evaluate different strategies. PMID:20395086

  11. 3D Multi-Spectrum Sensor System with Face Recognition

    PubMed Central

    Kim, Joongrock; Yu, Sunjin; Kim, Ig-Jae; Lee, Sangyoun

    2013-01-01

This paper presents a novel three-dimensional (3D) multi-spectrum sensor system, which combines a 3D depth sensor and multiple optical sensors for different wavelengths. Various image sensors, such as visible, infrared (IR) and 3D sensors, have been introduced into the commercial market. Since each sensor has its own advantages under various environmental conditions, the performance of an application depends highly on selecting the correct sensor or combination of sensors. In this paper, a sensor system, which we will refer to as a 3D multi-spectrum sensor system, comprising three types of sensors, visible, thermal-IR and time-of-flight (ToF), is proposed. Since the proposed system integrates information from each sensor into one calibrated framework, the optimal sensor combination for an application can be easily selected, taking all combinations of sensor information into account. To demonstrate the effectiveness of the proposed system, a face recognition system under light and pose variation is designed. With the proposed sensor system, the optimal sensor combination, which provides new, effectively fused features for a face recognition system, is obtained. PMID:24072025

  13. The Role of Active Exploration of 3D Face Stimuli on Recognition Memory of Facial Information

    ERIC Educational Resources Information Center

    Liu, Chang Hong; Ward, James; Markall, Helena

    2007-01-01

    Research on face recognition has mainly relied on methods in which observers are relatively passive viewers of face stimuli. This study investigated whether active exploration of three-dimensional (3D) face stimuli could facilitate recognition memory. A standard recognition task and a sequential matching task were employed in a yoked design.…

  14. A framework for the recognition of 3D faces and expressions

    NASA Astrophysics Data System (ADS)

    Li, Chao; Barreto, Armando

    2006-04-01

Face recognition technology has been a focus in both academia and industry for the last couple of years because of its wide potential applications and its importance in meeting today's security needs. Most of the systems developed are based on 2D face recognition technology, which uses pictures for data processing. With the development of 3D imaging technology, 3D face recognition emerges as an alternative to overcome the difficulties inherent in 2D face recognition, i.e., sensitivity to illumination conditions and to the orientation of the subject. But 3D face recognition still needs to tackle the problem of deformation of facial geometry that results from the expression changes of a subject. To deal with this issue, a 3D face recognition framework is proposed in this paper. It is composed of three subsystems: an expression recognition system, a system for the identification of faces with expression, and a neutral face recognition system. A system for the recognition of faces with one type of expression (happiness) and of neutral faces was implemented and tested on a database of 30 subjects. The results prove the feasibility of this framework.

  15. 3D face recognition under expressions, occlusions, and pose variations.

    PubMed

    Drira, Hassen; Ben Amor, Boulbaba; Srivastava, Anuj; Daoudi, Mohamed; Slama, Rim

    2013-09-01

We propose a novel geometric framework for analyzing 3D faces, with the specific goals of comparing, matching, and averaging their shapes. Here we represent facial surfaces by radial curves emanating from the nose tips and use elastic shape analysis of these curves to develop a Riemannian framework for analyzing shapes of full facial surfaces. This representation, along with the elastic Riemannian metric, seems natural for measuring facial deformations and is robust to challenges such as large facial expressions (especially those with open mouths), large pose variations, missing parts, and partial occlusions due to glasses, hair, and so on. This framework is shown to be promising from both empirical and theoretical perspectives. In terms of the empirical evaluation, our results match or improve upon the state-of-the-art methods on three prominent databases: FRGCv2, GavabDB, and Bosphorus, each posing a different type of challenge. From a theoretical perspective, this framework allows for formal statistical inferences, such as the estimation of missing facial parts using PCA on tangent spaces and computing average shapes. PMID:23868784

  16. Description and recognition of faces from 3D data

    NASA Astrophysics Data System (ADS)

    Coombes, Anne M.; Richards, Robin; Linney, Alfred D.; Bruce, Vicki; Fright, Rick

    1992-12-01

A method based on differential geometry is presented for mathematically describing the shape of the facial surface. Three-dimensional data for the face are collected by optical surface scanning. The method allows the segmentation of the face into regions of a particular `surface type,' according to the surface curvature. Eight different surface types are produced, all of which have perceptually meaningful interpretations. The correspondence of the surface type regions to the facial features is easily visualized, allowing a qualitative assessment of the face. A quantitative description of the face in terms of the surface type regions can be produced, and the variation of the description between faces is demonstrated. A set of optical surface scans can be registered together and averaged to produce an average male and average female face. Thus an assessment of how individuals vary from the average can be made, as well as a general statement about the differences between male and female faces. This method will enable an investigation to be made of how reliably faces can be individuated by their surface shape which, if feasible, may be the basis of an automatic system for recognizing faces. It also has applications in physical anthropology, for classification of the face; in facial reconstructive surgery, to quantify the changes in a face altered by reconstructive surgery and growth; and in visual perception, to assess the recognizability of faces. Examples of some of these applications are presented.
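
The eight-way segmentation by curvature is classically done by the signs of the mean curvature H and Gaussian curvature K (the HK classification commonly attributed to Besl and Jain; sign conventions vary between papers). A sketch:

```python
import numpy as np

def surface_type(H, K):
    # HK sign classification into the eight surface types, using the convention
    # that H < 0 marks convex (peak-like) regions.
    labels = np.empty(H.shape, dtype=object)
    eps = 1e-8
    labels[(H < -eps) & (K > eps)] = "peak"
    labels[(H < -eps) & (np.abs(K) <= eps)] = "ridge"
    labels[(H < -eps) & (K < -eps)] = "saddle ridge"
    labels[(np.abs(H) <= eps) & (np.abs(K) <= eps)] = "flat"
    labels[(np.abs(H) <= eps) & (K < -eps)] = "minimal"
    labels[(H > eps) & (K > eps)] = "pit"
    labels[(H > eps) & (np.abs(K) <= eps)] = "valley"
    labels[(H > eps) & (K < -eps)] = "saddle valley"
    return labels

# Curvature samples: a convex bump (nose-tip-like), a plane, a hyperbolic patch.
lab = surface_type(np.array([-0.5, 0.0, 0.4]), np.array([0.25, 0.0, -0.1]))
```

Applying this pointwise to curvatures estimated from the scan yields the surface-type regions whose layout the abstract compares across faces.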

  17. 3D Face Recognition Based on Multiple Keypoint Descriptors and Sparse Representation

    PubMed Central

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm. PMID:24940876
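
The residual-based identification at the heart of SRC can be sketched as follows. This simplification substitutes plain least squares for the paper's l1-regularised multitask SRC, and random vectors for meshSIFT descriptors; it only shows the decision rule:

```python
import numpy as np

def identify(gallery, probe):
    # Per-identity reconstruction: solve probe ~= D.T @ coef over that identity's
    # descriptors and keep the residual; the smallest residual wins.
    residuals = []
    for D in gallery:  # D: (num_descriptors, descriptor_dim)
        coef, *_ = np.linalg.lstsq(D.T, probe, rcond=None)
        residuals.append(np.linalg.norm(probe - D.T @ coef))
    return int(np.argmin(residuals)), residuals

rng = np.random.default_rng(3)
identity0 = rng.normal(size=(3, 16))  # invented gallery descriptors
identity1 = rng.normal(size=(3, 16))
# A probe descriptor that is (almost) a combination of identity 0's descriptors.
probe = 0.6 * identity0[0] + 0.4 * identity0[2] + 0.01 * rng.normal(size=16)
who, res = identify([identity0, identity1], probe)
```

Because each keypoint descriptor is matched independently against the dictionary, no pre-alignment of the two scans is needed, which is the property the abstract emphasizes.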

  19. The impact of specular highlights on 3D-2D face recognition

    NASA Astrophysics Data System (ADS)

    Christlein, Vincent; Riess, Christian; Angelopoulou, Elli; Evangelopoulos, Georgios; Kakadiaris, Ioannis

    2013-05-01

One of the most popular forms of biometrics is face recognition. Face recognition techniques typically assume that a face exhibits Lambertian reflectance. However, a face often exhibits prominent specularities, especially in outdoor environments. These specular highlights can compromise identity authentication. In this work, we analyze the impact of such highlights on a 3D-2D face recognition system. First, we investigate three different specularity removal methods as preprocessing steps for face recognition. Then, we explicitly model facial specularities within the face recognition system with the Cook-Torrance reflectance model. In our experiments, specularity removal increases the recognition rate on an outdoor face database by about 5% at a false alarm rate of 10^-3. The integration of the Cook-Torrance model further improves these results, increasing the verification rate by 19% at a FAR of 10^-3.
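
The Cook-Torrance specular term combines a microfacet distribution D, a Fresnel term F and geometric attenuation G as D·F·G / (4 (n·l)(n·v)). A sketch with illustrative parameters (not the paper's fitted values), using the Beckmann distribution and Schlick's Fresnel approximation:

```python
import numpy as np

def cook_torrance_specular(n, l, v, roughness=0.3, f0=0.04):
    # Specular term D*F*G / (4 (n.l)(n.v)); all vectors unit length.
    h = l + v
    h = h / np.linalg.norm(h)  # half-vector
    nl, nv, nh, vh = n @ l, n @ v, n @ h, v @ h
    m2 = roughness ** 2
    # Beckmann microfacet distribution, Schlick Fresnel, geometric attenuation.
    D = np.exp((nh ** 2 - 1.0) / (m2 * nh ** 2)) / (np.pi * m2 * nh ** 4)
    F = f0 + (1.0 - f0) * (1.0 - vh) ** 5
    G = min(1.0, 2.0 * nh * nv / vh, 2.0 * nh * nl / vh)
    return D * F * G / (4.0 * nl * nv)

n = np.array([0.0, 0.0, 1.0])  # surface normal
l = np.array([0.0, 0.6, 0.8])  # light direction (unit)
spec_mirror = cook_torrance_specular(n, l, np.array([0.0, -0.6, 0.8]))  # mirror view
spec_off = cook_torrance_specular(n, l, np.array([0.0, 0.6, 0.8]))      # off-peak view
```

The sharp peak around the mirror direction is exactly the highlight behavior that a Lambertian-only model fails to explain.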

  20. A 2D range Hausdorff approach for 3D face recognition.

    SciTech Connect

    Koch, Mark William; Russ, Trina Denise; Little, Charles Quentin

    2005-04-01

This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N^2) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.
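
One standard way to get a linear-time Hausdorff computation on a grid is a distance transform. This 2D binary-grid sketch is a stand-in for the paper's range-image formulation, which also compares depth values:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def directed_hausdorff_grid(mask_a, mask_b):
    # One distance transform of B gives, at every pixel, the distance to B's
    # nearest point; the directed Hausdorff distance from A to B is then a
    # single O(N) lookup-and-max over A's pixels.
    dt_b = distance_transform_edt(~mask_b)
    return dt_b[mask_a].max()

a = np.zeros((16, 16), dtype=bool); a[4, 4] = True; a[8, 8] = True
b = np.zeros((16, 16), dtype=bool); b[4, 6] = True; b[8, 8] = True
d = directed_hausdorff_grid(a, b)
```

The transform is computed once per template, so comparing a probe against it never touches all point pairs.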

  1. Learning deformation model for expression-robust 3D face recognition

    NASA Astrophysics Data System (ADS)

    Guo, Zhe; Liu, Shu; Wang, Yi; Lei, Tao

    2015-12-01

Expression change is the major cause of local plastic deformation of the facial surface. Under large expression changes, intra-class differences can exceed inter-class differences, making it difficult to recognize the same individual across facial expressions. In this paper, an expression-robust 3D face recognition method is proposed by learning an expression deformation model. The expressions of the individuals in the training set are modeled by principal component analysis, and the main components are retained to construct the facial deformation model. For a test 3D face, the shape difference between the test face and a neutral face in the training set is used to reconstruct the expression change with the constructed deformation model, and the reconstruction residual error is used for face recognition. The average recognition rate on GavabDB and a self-built database reaches 85.1% and 83%, respectively, showing strong robustness to expression changes.
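
The learn-then-score loop above reduces to PCA plus a projection residual. A sketch with invented low-dimensional "expression modes" standing in for real scan differences:

```python
import numpy as np

def pca_basis(deformations, k):
    # Top-k principal directions of training deformations (expressive minus
    # neutral shape differences).
    X = deformations - deformations.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k]

def residual(diff, basis, mean):
    # Whatever the learned deformation model cannot reconstruct is the residual
    # used as the (dis)similarity score.
    centred = diff - mean
    return np.linalg.norm(centred - (centred @ basis.T) @ basis)

rng = np.random.default_rng(2)
modes = rng.normal(size=(2, 30))          # two invented "expression modes"
train = rng.normal(size=(40, 2)) @ modes  # training deformations in their span
B = pca_basis(train, k=2)
mu = train.mean(axis=0)
r_in = residual(rng.normal(size=2) @ modes, B, mu)   # explainable deformation
r_off = residual(rng.normal(size=30), B, mu)         # arbitrary difference
```

Shape differences explainable as expression produce small residuals, while identity differences do not, which is what makes the residual usable as a recognition score.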

  2. Face recognition using 3D facial shape and color map information: comparison and combination

    NASA Astrophysics Data System (ADS)

    Godil, Afzal; Ressler, Sandy; Grother, Patrick

    2004-08-01

    In this paper, we investigate the use of 3D surface geometry for face recognition and compare it to one based on color map information. The 3D surface and color map data are from the CAESAR anthropometric database. We find that the recognition performance is not very different between 3D surface and color map information using a principal component analysis algorithm. We also discuss the different techniques for the combination of the 3D surface and color map information for multi-modal recognition by using different fusion approaches and show that there is significant improvement in results. The effectiveness of various techniques is compared and evaluated on a dataset with 200 subjects in two different positions.
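
A common baseline among the fusion approaches such a comparison covers is min-max normalisation followed by the sum rule. The scores below are invented to show why normalisation is needed when matchers operate on different scales:

```python
import numpy as np

def fuse(score_lists, weights=None):
    # Min-max normalise each matcher's scores, then combine with the sum rule.
    normed = np.stack([(np.asarray(s, float) - min(s)) / (max(s) - min(s))
                       for s in score_lists])
    if weights is None:
        weights = np.full(len(score_lists), 1.0 / len(score_lists))
    return weights @ normed

# Hypothetical similarities of one probe to 4 gallery subjects from two
# matchers that score on very different scales.
shape_scores = [0.2, 0.9, 0.4, 0.3]      # 3D surface matcher
color_scores = [10.0, 55.0, 80.0, 20.0]  # colour-map matcher
fused = fuse([shape_scores, color_scores])
best = int(np.argmax(fused))
```

Without normalisation the colour matcher's large numbers would dominate; after it, both modalities contribute comparably to the fused ranking.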

  3. 3D face recognition based on the hierarchical score-level fusion classifiers

    NASA Astrophysics Data System (ADS)

    Mráček, Štěpán.; Váša, Jan; Lankašová, Karolína; Drahanský, Martin; Doležel, Michal

    2014-05-01

This paper describes a 3D face recognition algorithm based on hierarchical score-level fusion classifiers. In a simple (unimodal) biometric pipeline, a feature vector is extracted from the input data and subsequently compared with the template stored in the database. In our approach, we utilize several feature extraction algorithms. We use 6 different image representations of the input 3D face data. Moreover, applying Gabor and Gauss-Laguerre filter banks to the input image data yields 12 feature vectors. Each representation is compared with its counterpart from the biometric database. We also add recognition based on iso-geodesic curves. The final score-level fusion is performed on the 13 comparison scores using a Support Vector Machine (SVM) classifier.

  4. An efficient multimodal 2D-3D hybrid approach to automatic face recognition.

    PubMed

    Mian, Ajmal S; Bennamoun, Mohammed; Owens, Robyn

    2007-11-01

We present a fully automatic face recognition algorithm and demonstrate its performance on the FRGC v2.0 data. Our algorithm is multimodal (2D and 3D) and performs hybrid (feature-based and holistic) matching in order to achieve efficiency and robustness to facial expressions. The pose of a 3D face, along with its texture, is automatically corrected using a novel approach based on a single automatically detected point and the Hotelling transform. A novel 3D Spherical Face Representation (SFR) is used in conjunction with the SIFT descriptor to form a rejection classifier, which quickly eliminates a large number of candidate faces at an early stage for efficient recognition in the case of large galleries. The remaining faces are then verified using a novel region-based matching approach which is robust to facial expressions. This approach automatically segments the eyes-forehead and nose regions, which are relatively less sensitive to expressions, and matches them separately using a modified ICP algorithm. The results of all the matching engines are fused at the metric level to achieve higher accuracy. We use the FRGC benchmark to compare our results to other algorithms which used the same database. Our multimodal hybrid algorithm performed better than others by achieving 99.74% and 98.31% verification rates at 0.001 FAR and identification rates of 99.02% and 95.37% for probes with neutral and non-neutral expressions, respectively. PMID:17848775

  5. A prescreener for 3D face recognition using radial symmetry and the Hausdorff fraction.

    SciTech Connect

    Koudelka, Melissa L.; Koch, Mark William; Russ, Trina Denise

    2005-04-01

    Face recognition systems require the ability to efficiently scan an existing database of faces to locate a match for a newly acquired face. The large number of faces in real-world databases makes computationally intensive algorithms impractical for scanning entire databases. We propose the use of more efficient algorithms to 'prescreen' face databases, determining a limited set of likely matches that can be processed further to identify a match. We use both radial symmetry and shape to extract five features of interest on 3D range images of faces. These facial features determine a very small subset of discriminating points which serve as input to a prescreening algorithm based on a Hausdorff fraction. We show how to compute the Hausdorff fraction in linear O(n) time using a range image representation. Our feature extraction and prescreening algorithms are verified using the FRGC v1.0 3D face scan data. Results show 97% of the extracted facial features are within 10 mm of manually marked ground truth, and the prescreener has a rank-6 recognition rate of 100%.
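
    A minimal sketch of the linear-time Hausdorff fraction on a range-image grid: because probe and template are resampled onto the same pixel lattice, each probe pixel can be compared against the template pixel at the same location without a nearest-neighbour search. The grids and tolerance below are illustrative, not the paper's data:

```python
# Sketch of a Hausdorff fraction computed in O(n) over a range image:
# each probe pixel is compared only with the template pixel at the same
# (row, col), so the cost is linear in the number of pixels.
# None marks missing range data (e.g., self-occlusion).

def hausdorff_fraction(probe, template, tol):
    """Fraction of valid probe range pixels within tol of the template's."""
    matched = total = 0
    for row_p, row_t in zip(probe, template):
        for zp, zt in zip(row_p, row_t):
            if zp is None or zt is None:   # skip missing range data
                continue
            total += 1
            if abs(zp - zt) <= tol:
                matched += 1
    return matched / total if total else 0.0

probe    = [[10.0, 11.0], [12.0, None]]
template = [[10.2, 13.0], [12.1, 9.0]]
print(hausdorff_fraction(probe, template, tol=0.5))  # 2 of 3 pixels match
```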

  6. 3D face recognition using simulated annealing and the surface interpenetration measure.

    PubMed

    Queirolo, Chauã C; Silva, Luciano; Bellon, Olga R P; Segundo, Maurício Pamplona

    2010-02-01

    This paper presents a novel automatic framework to perform 3D face recognition. The proposed method uses a Simulated Annealing-based approach (SA) for range image registration with the Surface Interpenetration Measure (SIM) as the similarity measure in order to match two face images. The authentication score is obtained by combining the SIM values corresponding to the matching of four different face regions: circular and elliptical areas around the nose, the forehead, and the entire face region. A modified SA approach is then proposed that takes advantage of invariant face regions to better handle facial expressions. Comprehensive experiments were performed on the FRGC v2 database, the largest available database of 3D face images, composed of 4,007 images with different facial expressions. The experiments simulated both verification and identification systems, and the results were compared to those reported by state-of-the-art works. By using all of the images in the database, a verification rate of 96.5 percent was achieved at a False Acceptance Rate (FAR) of 0.1 percent. In the identification scenario, a rank-one accuracy of 98.4 percent was achieved. To the best of our knowledge, this is the highest rank-one score ever achieved for the FRGC v2 database when compared to results published in the literature. PMID:20075453

  7. Template protection and its implementation in 3D face recognition systems

    NASA Astrophysics Data System (ADS)

    Zhou, Xuebing

    2007-04-01

    As biometric recognition systems are widely applied in various application areas, security and privacy risks have recently attracted the attention of the biometric community. Template protection techniques prevent stored reference data from revealing private biometric information and enhance the security of biometric systems against attacks such as identity theft and cross matching. This paper concentrates on a template protection algorithm that merges methods from cryptography, error correction coding and biometrics. The key component of the algorithm is to convert biometric templates into binary vectors. It is shown that the binary vectors should be robust, uniformly distributed, statistically independent and collision-free so that authentication performance can be optimized and information leakage can be avoided. Depending on the statistical characteristics of the biometric template, different approaches for transforming biometric templates into compact binary vectors are presented. The proposed methods are integrated into a 3D face recognition system and tested on the 3D facial images of the FRGC database. It is shown that the resulting binary vectors provide an authentication performance that is similar to the original 3D face templates. A high security level is achieved with reasonable false acceptance and false rejection rates of the system, based on an efficient statistical analysis. The algorithm estimates the statistical characteristics of biometric templates from a number of biometric samples in the enrollment database. For the FRGC 3D face database, our tests show only a small difference in robustness and discriminative power between the classification results under the assumption of uniformly distributed templates and those under the assumption of Gaussian distributed templates.
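
    A minimal sketch of the binarization idea, assuming the common approach of thresholding each template component at its population median (which tends to yield uniformly distributed bits) and matching binary vectors with a Hamming distance. The medians and template values are invented:

```python
# Sketch of turning a real-valued biometric template into a binary
# vector by thresholding each component at its population median, then
# matching with a Hamming distance. All values are illustrative.

def binarize(template, medians):
    """1 where the component exceeds its population median, else 0."""
    return [1 if v > m else 0 for v, m in zip(template, medians)]

def hamming(a, b):
    """Number of differing bit positions between two binary vectors."""
    return sum(x != y for x, y in zip(a, b))

medians  = [0.0, 0.5, 1.0, 0.2]            # estimated from enrollment data
enrolled = binarize([0.3, 0.9, 0.4, 0.1], medians)
probe    = binarize([0.2, 0.8, 0.5, 0.4], medians)
print(enrolled, probe, hamming(enrolled, probe))  # [1, 1, 0, 0] [1, 1, 0, 1] 1
```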

  8. Intraclass retrieval of nonrigid 3D objects: application to face recognition.

    PubMed

    Passalis, Georgios; Kakadiaris, Ioannis A; Theoharis, Theoharis

    2007-02-01

    As the size of the available collections of 3D objects grows, database transactions become essential for their management with the key operation being retrieval (query). Large collections are also precategorized into classes so that a single class contains objects of the same type (e.g., human faces, cars, four-legged animals). It is shown that general object retrieval methods are inadequate for intraclass retrieval tasks. We advocate that such intraclass problems require a specialized method that can exploit the basic class characteristics in order to achieve higher accuracy. A novel 3D object retrieval method is presented which uses a parameterized annotated model of the shape of the class objects, incorporating its main characteristics. The annotated subdivision-based model is fitted onto objects of the class using a deformable model framework, converted to a geometry image and transformed into the wavelet domain. Object retrieval takes place in the wavelet domain. The method does not require user interaction, achieves high accuracy, is efficient for use with large databases, and is suitable for nonrigid object classes. We apply our method to the face recognition domain, one of the most challenging intraclass retrieval tasks. We used the Face Recognition Grand Challenge v2 database, yielding an average verification rate of 95.2 percent at 10^-3 false accept rate. The latest results of our work can be found at http://www.cbl.uh.edu/UR8D/. PMID:17170476
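
    The wavelet-domain representation can be illustrated with the simplest case, one level of a 1-D Haar decomposition; a geometry image would be transformed along rows and columns, and the Haar basis here is only an assumption for illustration:

```python
# One level of a 1-D Haar wavelet transform: pairwise averages capture
# the coarse signal, pairwise half-differences capture the detail.
# The input signal is invented for illustration.

def haar_step(signal):
    """Return (averages, details) for one Haar decomposition level."""
    if len(signal) % 2:
        raise ValueError("signal length must be even")
    avgs = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    dets = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avgs, dets

avgs, dets = haar_step([4.0, 2.0, 5.0, 5.0])
print(avgs, dets)   # [3.0, 5.0] [1.0, 0.0]
```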

  9. Viewpoint independent representation and recognition of polygonal faces in 3-D

    SciTech Connect

    Bunke, H.; Glauser, T.

    1993-08-01

    The recognition of polygons in 3-D space is an important task in robot vision. Two particular problems are addressed in the paper. First, a new set of local shape descriptors for polygons is proposed that is invariant under affine transformations. Furthermore, the descriptors are complete in the sense that they allow the reconstruction of any polygon in 3-D space from three consecutive vertices. The second problem discussed in this paper is the recognition of 2-D polygonal objects under affine transformation and in the presence of partial occlusion. A recognition procedure based on the matching of edge length ratios is introduced, using a simplified version of the standard dynamic programming procedure commonly employed for string matching. The algorithm is conceptually very simple, easy to implement, and has a low computational complexity. A set of experiments shows that the method is reliable and robust.
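
    The edge-length-ratio matching can be sketched with a small edit-distance style dynamic program: ratios of consecutive edge lengths are invariant to uniform scaling, and gaps can model partially occluded edges. The substitution and gap costs below are illustrative choices, not the paper's:

```python
# Sketch of matching two polygons by their edge-length-ratio sequences
# with a string-matching style dynamic program. Substitution cost is the
# absolute ratio difference; a fixed gap cost models occluded edges.
# All costs and polygons are invented for illustration.

def edge_ratios(lengths):
    """Ratios of consecutive edge lengths, wrapping around the polygon."""
    return [lengths[(i + 1) % len(lengths)] / lengths[i]
            for i in range(len(lengths))]

def dp_match(a, b, gap=1.0):
    """Edit-distance style alignment cost between two ratio sequences."""
    n, m = len(a), len(b)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * gap
    for j in range(1, m + 1):
        d[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j - 1] + abs(a[i - 1] - b[j - 1]),
                          d[i - 1][j] + gap,      # skip edge in a
                          d[i][j - 1] + gap)      # skip edge in b
    return d[n][m]

square = edge_ratios([1.0, 1.0, 1.0, 1.0])
scaled = edge_ratios([2.0, 2.0, 2.0, 2.0])        # same shape, scaled
print(dp_match(square, scaled))                   # 0.0 -> identical shapes
```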

  10. SNL3dFace

    Energy Science and Technology Software Center (ESTSC)

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters, the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at a 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regularization and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection, this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.

  11. SNL3dFace

    SciTech Connect

    Russ, Trina; Koch, Mark; Koudelka, Melissa; Peters, Ralph; Little, Charles; Boehnen, Chris; Peters, Tanya

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters, the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at a 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regularization and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection, this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.
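
    The ICP alignment step described above can be sketched in its simplest form: pair each input point with its nearest reference point, then shift the input by the mean residual. This translation-only 2-D sketch with made-up points omits the rotation and scaling a full rigid ICP would estimate:

```python
# One simplified ICP iteration in 2-D: nearest-point correspondence,
# then a translation-only update by the mean residual. The software
# aligns XYZ faces with full rigid ICP; this sketch only illustrates
# the inner loop on invented points.

def icp_translation_step(points, reference):
    """Shift points toward reference by the mean nearest-point residual."""
    residuals = []
    for p in points:
        nearest = min(reference,
                      key=lambda r: (r[0] - p[0]) ** 2 + (r[1] - p[1]) ** 2)
        residuals.append((nearest[0] - p[0], nearest[1] - p[1]))
    dx = sum(r[0] for r in residuals) / len(residuals)
    dy = sum(r[1] for r in residuals) / len(residuals)
    return [(p[0] + dx, p[1] + dy) for p in points]

reference = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
points    = [(0.5, 0.5), (1.5, 0.5), (0.5, 1.5)]   # reference shifted by 0.5
print(icp_translation_step(points, reference))     # aligns onto reference
```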

  12. New method of 3-D object recognition

    NASA Astrophysics Data System (ADS)

    He, An-Zhi; Li, Qun Z.; Miao, Peng C.

    1991-12-01

    In this paper, a new method of 3-D object recognition using optical techniques and a computer is presented. We perform 3-D object recognition by using moiré contours to obtain the object's 3-D coordinates, describing the object by its projected drawings in the three coordinate planes, and matching objects by querying a judgment library. The recognition of a simple geometrical entity is simulated by computer and studied experimentally. The recognition of an object composed of a few simple geometrical entities is also discussed.

  13. Thermal infrared exploitation for 3D face reconstruction

    NASA Astrophysics Data System (ADS)

    Abayowa, Bernard O.

    2009-05-01

    Despite the advances in face recognition research, current face recognition systems are still not accurate or robust enough to be deployed in uncontrolled environments. A pose- and illumination-invariant face recognition system is still lacking. This research exploits the relationship between thermal infrared and visible imagery to estimate a 3D face with visible texture from infrared imagery. The relationship between visible and thermal infrared texture is learned using kernel canonical correlation analysis (KCCA), and then a 3D modeler is used to estimate the geometric structure from the predicted visual imagery. This research will find its applications in uncontrolled environments where illumination- and pose-invariant identification or tracking is required at long range, such as urban search and rescue (Amber alert, missing dementia patient) and manhunt scenarios.

  14. 3D face analysis for demographic biometrics

    SciTech Connect

    Tokola, Ryan A; Mikkilineni, Aravind K; Boehnen, Chris Bensing

    2015-01-01

    Despite being increasingly easy to acquire, 3D data is rarely used for face-based biometrics applications beyond identification. Recent work in image-based demographic biometrics has enjoyed much success, but these approaches suffer from the well-known limitations of 2D representations, particularly variations in illumination, texture, and pose, as well as a fundamental inability to describe 3D shape. This paper shows that simple 3D shape features in a face-based coordinate system are capable of representing many biometric attributes without problem-specific models or specialized domain knowledge. The same feature vector achieves impressive results for problems as diverse as age estimation, gender classification, and race classification.

  15. Recognition methods for 3D textured surfaces

    NASA Astrophysics Data System (ADS)

    Cula, Oana G.; Dana, Kristin J.

    2001-06-01

    Texture as a surface representation is the subject of a wide body of computer vision and computer graphics literature. While texture is always associated with a form of repetition in the image, the repeating quantity may vary. The texture may be a color or albedo variation as in a checkerboard, a paisley print or zebra stripes. Very often in real-world scenes, texture is instead due to a surface height variation, e.g. pebbles, gravel, foliage and any rough surface. Such surfaces are referred to here as 3D textured surfaces. Standard texture recognition algorithms are not appropriate for 3D textured surfaces because the appearance of these surfaces changes in a complex manner with viewing direction and illumination direction. Recent methods have been developed for recognition of 3D textured surfaces using a database of surfaces observed under varied imaging parameters. One of these methods is based on 3D textons obtained using K-means clustering of multiscale feature vectors. Another method uses eigen-analysis originally developed for appearance-based object recognition. In this work we develop a hybrid approach that employs both feature grouping and dimensionality reduction. The method is tested using the Columbia-Utrecht texture database and provides excellent recognition rates. The method is compared with existing recognition methods for 3D textured surfaces. A direct comparison is facilitated by empirical recognition rates from the same texture data set. The current method has key advantages over existing methods including requiring less prior information on both the training and novel images.
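
    The texton-construction step, K-means clustering of multiscale feature vectors, can be sketched in one dimension, with the cluster centres playing the role of textons. The scalar features below are invented to form two obvious groups:

```python
import random

# Minimal 1-D K-means sketch of the texton-building step: feature
# vectors are clustered and the cluster centres become the "textons".
# Scalar features here are invented; real 3D textons cluster multiscale
# filter responses.

def kmeans(data, k, iters=20, seed=0):
    rng = random.Random(seed)
    centres = rng.sample(data, k)                # random initial centres
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:                           # assign to nearest centre
            idx = min(range(k), key=lambda i: abs(x - centres[i]))
            clusters[idx].append(x)
        centres = [sum(c) / len(c) if c else centres[i]   # recompute means
                   for i, c in enumerate(clusters)]
    return sorted(centres)

features = [0.1, 0.2, 0.15, 5.0, 5.2, 4.9]       # two obvious groups
print(kmeans(features, k=2))
```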

  16. Fabricating 3D figurines with personalized faces.

    PubMed

    Tena, J Rafael; Mahler, Moshe; Beeler, Thabo; Grosse, Max; Hengchin Yeh; Matthews, Iain

    2013-01-01

    We present a semi-automated system for fabricating figurines with faces that are personalised to the individual likeness of the customer. The efficacy of the system has been demonstrated by commercial deployments at Walt Disney World Resort and Star Wars Celebration VI in Orlando, Florida. Although the system is semi-automated, human intervention is limited to a few simple tasks to maintain the high throughput and consistent quality required for commercial application. In contrast to existing systems that fabricate custom heads that are assembled to pre-fabricated plastic bodies, our system seamlessly integrates 3D facial data with a predefined figurine body into a unique and continuous object that is fabricated as a single piece. The combination of state-of-the-art 3D capture, modelling, and printing at the core of our system provides the flexibility to fabricate figurines whose complexity is limited only by the creativity of the designer. PMID:24808129

  17. 2D face database diversification based on 3D face modeling

    NASA Astrophysics Data System (ADS)

    Wang, Qun; Li, Jiang; Asari, Vijayan K.; Karim, Mohammad A.

    2011-05-01

    Pose and illumination are identified as major problems in 2D face recognition (FR). It has been theoretically proven that the more diversified the instances in the training phase, the more accurate and adaptable the FR system becomes. Based on this common awareness, researchers have developed a large number of photographic face databases to meet the demand for training data. In this paper, we propose a novel scheme for 2D face database diversification based on 3D face modeling and computer graphics techniques, which supplies augmented variances of pose and illumination. Based on the existing samples from identical individuals in the database, a synthesized 3D face model is employed to create composited 2D scenarios with extra lighting and pose variations. The new model is based on a 3D Morphable Model (3DMM) and a genetic optimization algorithm. The experimental results show that the complemented instances clearly increase the diversification of the existing database.

  18. 3D Face Modeling Using the Multi-Deformable Method

    PubMed Central

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-01-01

    In this paper, we focus on the problem of the accuracy performance of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model and texture mapping using seamless cloning, which is a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. By using this texture map, we generate realistic 3D faces for individuals at the end of the paper. PMID:23201976

  19. 3D palmprint data fast acquisition and recognition

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoxu; Huang, Shujun; Gao, Nan; Zhang, Zonghua

    2014-11-01

    This paper presents a fast 3D (three-dimensional) palmprint capturing system and develops an efficient 3D palmprint feature extraction and recognition method. In order to quickly acquire the accurate 3D shape and texture of a palmprint, a DLP projector triggers a CCD camera to realize synchronization. By generating and projecting green fringe pattern images onto the measured palm surface, 3D palmprint data are calculated from the fringe pattern images. The periodic feature vector can be derived from the calculated 3D palmprint data, so undistorted 3D biometrics are obtained. Using the obtained 3D palmprint data, feature matching tests have been carried out using Gabor filters, competition rules, and the mean curvature. Experimental results on capturing 3D palmprints show that the proposed acquisition method can quickly obtain the 3D shape information of a palmprint. Some initial experiments on recognition show the proposed method is efficient when using 3D palmprint data.
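
    Fringe-projection systems commonly recover shape from a wrapped phase map. The sketch below assumes a standard four-step phase-shifting scheme (the paper does not state its phase-shift count, so this specific scheme is an assumption), with synthetic rather than measured intensities:

```python
import math

# Four-step phase-shifting sketch: with fringe intensities
# I_k = A + B*cos(phi + k*pi/2) for k = 0..3, the wrapped phase is
# recovered as atan2(I4 - I2, I1 - I3). Real systems then unwrap the
# phase and convert it to height; intensities here are synthetic.

def wrapped_phase(i1, i2, i3, i4):
    return math.atan2(i4 - i2, i1 - i3)

def fringe(phi, k, a=1.0, b=0.5):
    """Synthetic fringe intensity for phase-shift index k."""
    return a + b * math.cos(phi + k * math.pi / 2)

true_phi = 0.7
images = [fringe(true_phi, k) for k in range(4)]
print(round(wrapped_phase(*images), 3))   # 0.7
```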

  20. Modeling 3D faces from samplings via compressive sensing

    NASA Astrophysics Data System (ADS)

    Sun, Qi; Tang, Yanlong; Hu, Ping

    2013-07-01

    3D data is easier to acquire for family entertainment purposes today because of the mass production, cheapness, and portability of domestic RGBD sensors, e.g., Microsoft Kinect. However, the accuracy of facial modeling is affected by the roughness and instability of the raw input data from such sensors. To overcome this problem, we introduce a compressive sensing (CS) method to build a novel 3D super-resolution scheme that reconstructs high-resolution facial models from rough samples captured by Kinect. Unlike the simple frame-fusion super-resolution method, this approach aims to acquire compressed samples for storage before a high-resolution image is produced. In this scheme, depth frames are first captured, and then each of them is measured into compressed samples using sparse coding. Next, the samples are fused to produce an optimal one, and finally a high-resolution image is recovered from the fused sample. This framework is able to recover the 3D facial model of a given user from compressed samples, which can reduce storage space as well as measurement cost in future devices, e.g., single-pixel depth cameras. Hence, this work can potentially be applied in future applications, such as access control systems using face recognition and smartphones with depth cameras, which need high resolution and short measurement times.

  1. 3-D object recognition using 2-D views.

    PubMed

    Li, Wenjing; Bebis, George; Bourbakis, Nikolaos G

    2008-11-01

    We consider the problem of recognizing 3-D objects from 2-D images using geometric models and assuming different viewing angles and positions. Our goal is to recognize and localize instances of specific objects (i.e., model-based) in a scene. This is in contrast to category-based object recognition methods where the goal is to search for instances of objects that belong to a certain visual category (e.g., faces or cars). The key contribution of our work is improving 3-D object recognition by integrating Algebraic Functions of Views (AFoVs), a powerful framework for predicting the geometric appearance of an object due to viewpoint changes, with indexing and learning. During training, we compute the space of views that groups of object features can produce under the assumption of 3-D linear transformations, by combining a small number of reference views that contain the object features using AFoVs. Unrealistic views (e.g., due to the assumption of 3-D linear transformations) are eliminated by imposing a pair of rigidity constraints based on knowledge of the transformation between the reference views of the object. To represent the space of views that an object can produce compactly while allowing efficient hypothesis generation during recognition, we propose combining indexing with learning in two stages. In the first stage, we sample the space of views of an object sparsely and represent information about the samples using indexing. In the second stage, we build probabilistic models of shape appearance by sampling the space of views of the object densely and learning the manifold formed by the samples. Learning employs the Expectation-Maximization (EM) algorithm and takes place in a "universal," lower-dimensional, space computed through Random Projection (RP). During recognition, we extract groups of point features from the scene and we use indexing to retrieve the most feasible model groups that might have produced them (i.e., hypothesis generation). 
The likelihood

  2. Recognition technology research based on 3D fingerprint

    NASA Astrophysics Data System (ADS)

    Tian, Qianxiao; Huang, Shujun; Zhang, Zonghua

    2014-11-01

    Fingerprints have been widely studied and applied to personal recognition in both forensics and civilian applications. However, fingerprints are currently identified from 2D (two-dimensional) fingerprint images, and the mapping from 3D (three-dimensional) to 2D loses one dimension of information, which leads to low accuracy and even wrong recognition. This paper presents a 3D fingerprint recognition method based on the fringe projection technique. A series of fringe patterns generated by software are projected onto a finger surface through a projecting system. From another viewpoint, the fringe patterns are deformed by the finger surface and captured by a CCD camera. The deformed fringe pattern images give the 3D shape data of the finger and the 3D fingerprint features. By converting the 3D fingerprints to 2D space, traditional 2D fingerprint recognition methods can be applied to 3D fingerprint recognition. Experimental results on measuring and recognizing some 3D fingerprints show the accuracy and availability of the developed 3D fingerprint system.

  3. Personal perceptual and cognitive property for 3D recognition

    NASA Astrophysics Data System (ADS)

    Matozaki, Takeshi; Tanisita, Akihiko

    1996-04-01

    3D closed-circuit TV, which produces stereoscopic vision by having each eye observe different images alternately, has been proposed. However, there are several problems, both physiological and psychological, for 3D image observation in many fields. From this perspective, we are studying personal visual characteristics for 3D recognition in the transition from 2D to 3D. We have separated the mechanism of 3D recognition into several categories and formed some hypotheses about the relevant personal features. These hypotheses relate to an observer's personal features as follows: (1) the angle between the left and right eyes' lines of vision and the adjustment of focus, (2) the angle of vision and the time required for fusion, (3) depth sense based on life experience, (4) 3D viewing experience, and (5) 3D sense based on the observer's age. To test these hypotheses, we have analyzed the personal features of the time interval required for 3D recognition through examinations of examinees, who indicate their response to 3D recognition by pushing a button. Recently, we introduced a method for picking up the reaction of 3D recognition from examinees through their biological information, for example, analysis of finger pulse waves. The analysis of pulse waves also suggests a hypothesis: (1) we observe a chaotic response when the examinee is recognizing a 2D image, and (2) we observe a periodic response when the examinee is recognizing a 3D image. We are making nonlinear forecasts by obtaining the correlation between the forecast and the biological phenomena. Deterministic nonlinear prediction is applied to the data as a promising method of chaotic time-series analysis in order to analyze long-term unpredictability, one of the fundamental characteristics of deterministic chaos.

  4. A 2D range Hausdorff approach to 3D facial recognition.

    SciTech Connect

    Koch, Mark William; Russ, Trina Denise; Little, Charles Quentin

    2004-11-01

    This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N^2) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.

  5. Genetic specificity of face recognition

    PubMed Central

    Shakeshaft, Nicholas G.; Plomin, Robert

    2015-01-01

    Specific cognitive abilities in diverse domains are typically found to be highly heritable and substantially correlated with general cognitive ability (g), both phenotypically and genetically. Recent twin studies have found the ability to memorize and recognize faces to be an exception, being similarly heritable but phenotypically substantially uncorrelated both with g and with general object recognition. However, the genetic relationships between face recognition and other abilities (the extent to which they share a common genetic etiology) cannot be determined from phenotypic associations. In this, to our knowledge, first study of the genetic associations between face recognition and other domains, 2,000 18- and 19-year-old United Kingdom twins completed tests assessing their face recognition, object recognition, and general cognitive abilities. Results confirmed the substantial heritability of face recognition (61%), and multivariate genetic analyses found that most of this genetic influence is unique and not shared with other cognitive abilities. PMID:26417086

  6. 3D ladar ATR based on recognition by parts

    NASA Astrophysics Data System (ADS)

    Sobel, Erik; Douglas, Joel; Ettinger, Gil

    2003-09-01

    LADAR imaging is unique in its potential to accurately measure the 3D surface geometry of targets. We exploit this 3D geometry to perform automatic target recognition on targets in the domain of military and civilian ground vehicles. Here we present a robust model based 3D LADAR ATR system which efficiently searches through target hypothesis space by reasoning hierarchically from vehicle parts up to identification of a whole vehicle with specific pose and articulation state. The LADAR data consists of one or more 3D point clouds generated by laser returns from ground vehicles viewed from multiple sensor locations. The key to this approach is an automated 3D registration process to precisely align and match multiple data views to model based predictions of observed LADAR data. We accomplish this registration using robust 3D surface alignment techniques which we have also used successfully in 3D medical image analysis applications. The registration routine seeks to minimize a robust 3D surface distance metric to recover the best six-degree-of-freedom pose and fit. We process the observed LADAR data by first extracting salient parts, matching these parts to model based predictions and hierarchically constructing and testing increasingly detailed hypotheses about the identity of the observed target. This cycle of prediction, extraction, and matching efficiently partitions the target hypothesis space based on the distinctive anatomy of the target models and achieves effective recognition by progressing logically from a target's constituent parts up to its complete pose and articulation state.

  7. 3D gesture recognition from serial range image

    NASA Astrophysics Data System (ADS)

    Matsui, Yasuyuki; Miyasaka, Takeo; Hirose, Makoto; Araki, Kazuo

    2001-10-01

    In this research, the recognition of gestures in 3D space is examined by using serial range images obtained by a real-time 3D measurement system developed in our laboratory. Using this system, it is possible to obtain time sequences of range, intensity, and color data for a moving object in real time without assigning markers to the targets. First, gestures are tracked in 2D space by calculating 2D flow vectors at each point using an ordinary optical-flow estimation method based on time sequences of the intensity data. Then, the location of each point after 2D movement is detected on the x-y plane using the obtained 2D flow vectors. Depth information for each point after movement is then obtained from the range data, and 3D flow vectors are assigned to each point. Time sequences of the obtained 3D flow vectors allow us to track the 3D movement of the target. Based on the time sequences of the targets' 3D flow vectors, it is possible to classify the movement of the targets using a continuous DP matching technique. This tracking of 3D movement using time sequences of 3D flow vectors may be applicable to a robust gesture recognition system.

  8. A face recognition embedded system

    NASA Astrophysics Data System (ADS)

    Pun, Kwok Ho; Moon, Yiu Sang; Tsang, Chi Chiu; Chow, Chun Tak; Chan, Siu Man

    2005-03-01

    This paper presents an experimental study of the implementation of a face recognition system on embedded systems. To investigate the feasibility and practicality of real-time face recognition on such systems, a door access control system based on face recognition was built. Due to the limited computational power of the embedded device, a semi-automatic scheme for face detection and eye location is proposed to sidestep these computationally hard problems. It was found that achieving real-time performance requires optimization of the core face recognition module. As a result, extensive profiling was done to pinpoint the execution hotspots in the system, and optimizations were carried out. After careful precision analysis, all slow floating-point calculations were replaced with their fixed-point versions. Experimental results show that real-time performance can be achieved without significant loss in recognition accuracy.
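The floating-to-fixed-point substitution mentioned above can be sketched as follows (the Q16.16 format is an assumption for illustration; the paper does not state which format was chosen):

```python
# Minimal Q16.16 fixed-point arithmetic sketch: reals are stored as integers
# scaled by 2^16, so multiplies become integer operations (fast on devices
# without an FPU). Python ints never overflow; a real embedded port would
# use 32/64-bit integers and guard against overflow.

FRAC_BITS = 16
ONE = 1 << FRAC_BITS

def to_fixed(x):
    return int(round(x * ONE))

def fixed_mul(a, b):
    return (a * b) >> FRAC_BITS   # rescale after the integer multiply

def to_float(a):
    return a / ONE

a, b = to_fixed(1.5), to_fixed(2.25)
product = to_float(fixed_mul(a, b))
```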

  9. A general framework for face reconstruction using single still image based on 2D-to-3D transformation kernel.

    PubMed

    Fooprateepsiri, Rerkchai; Kurutach, Werasak

    2014-03-01

    Face authentication is a biometric classification method that verifies the identity of a user based on an image of their face. Accuracy of the authentication is reduced when the pose, illumination or expression of the training face images differs from that of the testing image. The methods in this paper are designed to improve the accuracy of a feature-based face recognition system when the pose of the input images differs from that of the training images. First, an efficient 2D-to-3D integrated face reconstruction approach is introduced to reconstruct a personalized 3D face model from a single frontal face image with neutral expression and normal illumination. Second, realistic virtual faces with different poses are synthesized based on the personalized 3D face to characterize the face subspace. Finally, face recognition is conducted based on these representative virtual faces. Compared with other related works, this framework has the following advantages: (1) only a single frontal face is required for face recognition, which avoids burdensome enrollment work; and (2) the synthesized face samples provide the capability to conduct recognition under difficult conditions such as complex pose, illumination and expression. From the experimental results, we conclude that the proposed method improves the accuracy of face recognition under varying pose, illumination and expression. PMID:24529782

  10. Accuracy enhanced thermal face recognition

    NASA Astrophysics Data System (ADS)

    Lin, Chun-Fu; Lin, Sheng-Fuu

    2013-11-01

    Human face recognition has been researched extensively for the last three decades. Face recognition with thermal images has gradually begun to attract significant attention, since environmental illumination does not affect recognition performance. However, the recognition performance of traditional thermal face recognizers is still insufficient for practical application. This study presents a novel thermal face recognizer employing not only thermal features but also critical facial geometric features that are not influenced by hair style, in order to improve recognition performance. A three-layer back-propagation feed-forward neural network is applied as the classifier. Traditional thermal face recognizers use only indirect information about the topography of blood vessels, such as the thermogram, as features. To overcome this limitation, the proposed thermal face recognizer can use not only this indirect information but also direct information about the topography of blood vessels, which is unique to every human. Moreover, the recognition performance of the proposed thermal features does not decrease even if the hair over the frontal bone varies, the eyes blink or the nose breathes. Experimental results show that the proposed features are significantly more effective than traditional thermal features and that the recognition performance of the thermal face recognizer is improved.

  11. A Modified Exoskeleton for 3D Shape Description and Recognition

    NASA Astrophysics Data System (ADS)

    Lipikorn, Rajalida; Shimizu, Akinobu; Hagihara, Yoshihiro; Kobatake, Hidefumi

    Three-dimensional (3D) shape representation is a powerful tool in object recognition, which is an essential process in image processing and analysis systems. The skeleton is one of the most widely used representations for object recognition; nevertheless, most skeletons obtained by conventional methods are susceptible to rotation and noise disturbances. In this paper, we present a new 3D object representation called the modified exoskeleton (mES), which preserves skeleton properties, including significant characteristics of an object that are meaningful for recognition, and is more stable and less susceptible to rotation and noise than conventional skeletons. A 3D shape recognition methodology that determines the similarity between an observed object and known objects in a database is then introduced. Through a number of experiments on 3D artificial objects and real volumetric lung tumors extracted from CT images, we verify that the proposed methodology based on the mES is a simple yet efficient method that is less sensitive to rotation and noise, and independent of the orientation and size of the objects.

  12. A Primitive-Based 3D Object Recognition System

    NASA Astrophysics Data System (ADS)

    Dhawan, Atam P.

    1988-08-01

    A knowledge-based 3D object recognition system has been developed. The system uses hierarchical structural, geometrical and relational knowledge to match 3D object models to image data through pre-defined primitives. The primitives we have selected, to begin with, are 3D boxes, cylinders, and spheres. These primitives, as viewed from different angles covering the complete 3D rotation range, are stored in a "Primitive-Viewing Knowledge-Base" in the form of hierarchical structural and relational graphs. The knowledge-based system then hypothesizes about the viewing angle and decomposes the segmented image data into valid primitives. A rough 3D structural and relational description is made on the basis of the recognized 3D primitives. This description is then used in detailed high-level frame-based structural and relational matching. The system has several expert and knowledge-based subsystems working in both stand-alone and cooperative modes to provide multi-level processing. This multi-level processing utilizes both bottom-up (data-driven) and top-down (model-driven) approaches in order to acquire sufficient knowledge to accept or reject any hypothesis for matching or recognizing the objects in the given image.

  13. Error analysis for creating 3D face templates based on cylindrical quad-tree structure

    NASA Astrophysics Data System (ADS)

    Gutfeter, Weronika

    2015-09-01

    Development of new biometric algorithms parallels advances in sensing-device technology. Some of the limitations of current face recognition systems may be eliminated by integrating 3D sensors into these systems. Depth-sensing devices can capture the spatial structure of the face in addition to its texture and color. However, this kind of data is usually very voluminous and requires a large amount of computing resources to process (face scans obtained with typical depth cameras contain more than 150,000 points per face). That is why defining efficient data structures for processing spatial images is crucial for the further development of 3D face recognition methods. The concept described in this work fulfills these demands. A modification of the quad-tree structure was chosen because it can be easily transformed into lower-dimensional data structures and maintains spatial relations between data points. Data stored in the tree can be interpreted as a pyramid of features, which allows face images to be analyzed with a coarse-to-fine strategy, often exploited in biometric recognition systems.
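The coarse-to-fine pyramid idea can be sketched with a plain quad-tree over a depth map, where each node stores the mean depth of its region so higher levels give a coarser face description. (Illustrative only; the paper uses a cylindrical variant of the quad-tree.)

```python
# Build a quad-tree over a square depth map. Each node records the mean
# depth of its block; children subdivide the block into four quadrants
# (top-left, top-right, bottom-left, bottom-right).

def build_quadtree(depth, x, y, size):
    block = [depth[j][i] for j in range(y, y + size) for i in range(x, x + size)]
    node = {"mean": sum(block) / len(block), "children": None}
    if size > 1:
        h = size // 2
        node["children"] = [
            build_quadtree(depth, x, y, h), build_quadtree(depth, x + h, y, h),
            build_quadtree(depth, x, y + h, h), build_quadtree(depth, x + h, y + h, h),
        ]
    return node

depth = [[1, 1, 3, 3],
         [1, 1, 3, 3],
         [5, 5, 7, 7],
         [5, 5, 7, 7]]
root = build_quadtree(depth, 0, 0, 4)
```

Matching can then start at the root mean (coarse) and descend only where the comparison is promising (fine).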

  14. Sampling design for face recognition

    NASA Astrophysics Data System (ADS)

    Yan, Yanjun; Osadciw, Lisa A.

    2006-04-01

    A face recognition system consists of two integrated parts: one is the face recognition algorithm; the other is the classifier and the features derived by the algorithm from a data set. The face recognition algorithm plays a central role, but this paper does not aim to evaluate the algorithm; rather, it derives the best features for the algorithm from a specific database through sampling design of the training set, which directs how the sample should be collected and dictates the sample space. Sampling design can help exert the full potential of a face recognition algorithm without overhauling it. Conventional statistical analyses usually assume some distribution to draw inferences, but design-based inference assumes neither a distribution of the data nor independence between the sample observations. The simulations illustrate that the systematic sampling scheme performs better than the simple random sampling scheme, and that systematic sampling is comparable in recognition performance to using all available training images. Meanwhile, the sampling schemes can save system resources and alleviate the overfitting problem. However, post-stratification by sex is not shown to improve recognition performance significantly.
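The two sampling schemes compared above can be sketched as follows (the data are stand-ins, not from the paper): simple random sampling draws k items uniformly, while systematic sampling takes every (n/k)-th item from a random start, spreading the sample evenly over the database.

```python
# Simple random vs. systematic sampling of a training set.
import random

def simple_random_sample(items, k, rng):
    return rng.sample(items, k)

def systematic_sample(items, k, rng):
    step = len(items) // k
    start = rng.randrange(step)            # random starting offset
    return [items[start + i * step] for i in range(k)]

rng = random.Random(0)
images = list(range(100))                  # stand-ins for training images
rand_sample = simple_random_sample(images, 10, rng)
sys_sample = systematic_sample(images, 10, rng)
```

The systematic sample is evenly spaced by construction, which is the property the paper credits for its good coverage of the sample space.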

  15. Conformal geometry and its applications on 3D shape matching, recognition, and stitching.

    PubMed

    Wang, Sen; Wang, Yang; Jin, Miao; Gu, Xianfeng David; Samaras, Dimitris

    2007-07-01

    Three-dimensional shape matching is a fundamental issue in computer vision with many applications such as shape registration, 3D object recognition, and classification. However, shape matching with noise, occlusion, and clutter is a challenging problem. In this paper, we analyze a family of quasi-conformal maps including harmonic maps, conformal maps, and least-squares conformal maps with regard to 3D shape matching. As a result, we propose a novel and computationally efficient shape matching framework by using least-squares conformal maps. According to conformal geometry theory, each 3D surface with disk topology can be mapped to a 2D domain through a global optimization and the resulting map is a diffeomorphism, i.e., one-to-one and onto. This allows us to simplify the 3D shape-matching problem to a 2D image-matching problem, by comparing the resulting 2D parametric maps, which are stable, insensitive to resolution changes, and robust to occlusion and noise. Therefore, highly accurate and efficient 3D shape matching algorithms can be achieved by using the above three parametric maps. Finally, the robustness of least-squares conformal maps is evaluated and analyzed comprehensively in 3D shape matching with occlusion, noise, and resolution variation. In order to further demonstrate the performance of our proposed method, we also conduct a series of experiments on two computer vision applications, i.e., 3D face recognition and 3D nonrigid surface alignment and stitching. PMID:17496378

  16. Automated Recognition of 3D Features in GPIR Images

    NASA Technical Reports Server (NTRS)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature-extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a

  17. 3D measurement of human face by stereophotogrammetry

    NASA Astrophysics Data System (ADS)

    Wagner, Holger; Wiegmann, Axel; Kowarschik, Richard; Zöllner, Friedrich

    2006-01-01

    The following article describes a stereophotogrammetry-based technique for 3D measurement of human faces. The method was developed for function-oriented diagnostics and therapy in dentistry to provide prognoses for jaw growth or surgical procedures. The main aim of our activities was to realize both a rapid measurement and a dense point cloud. The setup consists of two digital cameras in a convergent arrangement and a digital projector. During the measurement, a rapid sequence of about 20 statistically generated patterns is projected onto the face and synchronously captured by the two cameras. Thus, every single pixel of the two cameras is encoded by a characteristic stack of intensity values. A correlation technique is used to find corresponding points in the image sequences. Finally, the 3D reconstruction is done by triangulation. The advantages of this method are the short measurement time (< 1 second) and, in comparison to gray-code and phase-shift techniques, the low quality requirements on the projection unit. At present the achieved accuracy is +/- 0.1 mm (rms), which is sufficient for medical applications. The demonstrated method is not restricted to evaluating the shape of human faces; technical objects can also be measured.
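Once corresponding points are found, triangulation recovers depth. For the simplest rectified two-camera geometry (an idealization of the convergent setup described above, with made-up parameter values), depth follows directly from the disparity:

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d, where
# f is the focal length in pixels, B the baseline between the cameras,
# and d the disparity of a pair of corresponding points.

def depth_from_disparity(f_px, baseline_mm, disparity_px):
    return f_px * baseline_mm / disparity_px

# f = 1000 px, baseline = 120 mm, disparity = 40 px
z = depth_from_disparity(1000.0, 120.0, 40.0)
```

A convergent rig like the one in the paper requires full ray intersection from calibrated camera matrices, but this rectified formula captures the underlying geometry.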

  18. Face recognition for uncontrolled environments

    NASA Astrophysics Data System (ADS)

    Podilchuk, Christine; Hulbert, William; Flachsbart, Ralph; Barinov, Lev

    2010-04-01

    A new face recognition algorithm is proposed which is robust to variations in pose, expression, illumination, and occlusions such as sunglasses. The algorithm is motivated by the Edit Distance used to determine the similarity between strings of one-dimensional data such as DNA and text. The key to this approach is how to extend the concept of an Edit Distance on one-dimensional data to two-dimensional image data. The algorithm is based on mapping one image into another and using the characteristics of the mapping to determine a two-dimensional Pictorial Edit Distance, or P-Edit Distance. We show how the properties of the mapping are similar to the insertion, deletion and substitution errors defined in an Edit Distance. This algorithm is particularly well suited for face recognition in uncontrolled environments such as stand-off and other surveillance applications. We describe an entire system designed for face recognition at a distance, including face detection, pose estimation, multi-sample fusion of video frames, and identification. Here we describe how the algorithm is used for face recognition at a distance, present some initial results, and describe future research directions.
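For reference, the one-dimensional Edit Distance that motivates the P-Edit Distance is the standard dynamic program counting insertions, deletions and substitutions:

```python
# Classic Levenshtein edit distance between two strings.

def edit_distance(s, t):
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                      # i deletions
    for j in range(n + 1):
        d[0][j] = j                      # j insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1   # substitution cost
            d[i][j] = min(d[i - 1][j] + 1,            # delete
                          d[i][j - 1] + 1,            # insert
                          d[i - 1][j - 1] + cost)     # substitute/match
    return d[m][n]
```

The paper's contribution is defining image-domain analogues of these three edit operations so that a similar distance can be computed between two face images.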

  19. Aesthetic preference recognition of 3D shapes using EEG.

    PubMed

    Chew, Lin Hou; Teo, Jason; Mountstephens, James

    2016-04-01

    Recognition and identification of aesthetic preference is indispensable in industrial design. Humans tend to pursue products with aesthetic value and make buying decisions based on their aesthetic preferences. Neuromarketing seeks to understand consumer responses to marketing stimuli by using imaging techniques and recognizing physiological parameters. Numerous studies have been done to understand the relationship between humans, art and aesthetics. In this paper, we present a novel preference-based measurement of user aesthetics using electroencephalogram (EEG) signals for virtual 3D shapes with motion. The 3D shapes are designed to appear like bracelets and are generated using the Gielis superformula. EEG signals were collected using a medical-grade device, the B-Alert X10 from Advanced Brain Monitoring, with a sampling frequency of 256 Hz and a resolution of 16 bits. The signals obtained when viewing the 3D bracelet shapes were decomposed into alpha, beta, theta, gamma and delta rhythms using time-frequency analysis, then classified into two classes, like and dislike, using support vector machine and K-nearest neighbors (KNN) classifiers, respectively. Classification accuracy of up to 80% was obtained using KNN with the alpha, theta and delta rhythms as the features extracted from the frontal channels Fz, F3 and F4. PMID:27066153
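The KNN vote used for the like/dislike decision can be sketched as follows (the feature values are invented; the real features are band powers from channels Fz, F3 and F4):

```python
# Bare-bones k-nearest-neighbors classifier: the label of the majority
# among the k closest training samples (squared Euclidean distance) wins.
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (features, label) pairs; query: feature tuple."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda s: dist(s[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((0.1, 0.2), "like"), ((0.2, 0.1), "like"), ((0.15, 0.15), "like"),
         ((0.9, 0.8), "dislike"), ((0.8, 0.9), "dislike")]
prediction = knn_predict(train, (0.12, 0.18))
```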

  20. Covert face recognition without prosopagnosia.

    PubMed

    Ellis, H D; Young, A W; Koenken, G

    1993-01-01

    An experiment is reported where subjects were presented with familiar or unfamiliar faces for supraliminal durations or for durations individually assessed as being below the threshold for recognition. Their electrodermal responses to each stimulus were measured and the results showed higher peak amplitude skin conductance responses for familiar than for unfamiliar faces, regardless of whether they had been displayed supraliminally or subliminally. A parallel is drawn between elevated skin conductance responses to subliminal stimuli and findings of covert recognition of familiar faces in prosopagnosic patients, some of whom show increased electrodermal activity (EDA) to previously familiar faces. The supraliminal presentation data also served to replicate similar work by Tranel et al (1985). The results are considered alongside other data indicating the relation between non-conscious, "automatic" aspects of normal visual information processing and abilities which can be found to be preserved without awareness after brain injury. PMID:24487927

  1. High-speed 3D face measurement based on color speckle projection

    NASA Astrophysics Data System (ADS)

    Xue, Junpeng; Su, Xianyu; Zhang, Qican

    2015-03-01

    Nowadays, 3D face recognition has become a subject of considerable interest in the security field due to its unique advantages in both domestic and international applications. However, acquiring color-textured 3D face data in a fast and accurate manner is still highly challenging. In this paper, a new approach based on color speckle projection for dynamic acquisition of 3D face data is proposed. First, the projector-camera color crosstalk matrix, which indicates how much each projector channel influences each camera channel, is measured. Second, the reference-speckle-set images are acquired with a CCD, and three gray sets are separated from the color sets using the crosstalk matrix and saved. Finally, the color speckle image modulated by the face is captured and split into three gray channels. We measure the 3D face using multiple sets of speckle-correlation methods with the color speckle image at high speed, similar to a one-shot measurement, which greatly improves measurement accuracy and stability. The suggested approach has been implemented and the results are supported by experiments.
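The crosstalk-correction step can be illustrated as a small linear unmixing problem (the matrix values below are invented): if the camera reading c equals M·g for crosstalk matrix M and true gray channels g, the channels are recovered by solving the 3x3 linear system.

```python
# Recover the three gray channels from a crosstalk-mixed RGB reading by
# solving M x = c with Cramer's rule (fine for a fixed 3x3 system).

def solve3(M, c):
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det(M)
    xs = []
    for col in range(3):
        Mc = [[c[r] if j == col else M[r][j] for j in range(3)] for r in range(3)]
        xs.append(det(Mc) / D)
    return xs

M = [[1.0, 0.1, 0.0],     # hypothetical crosstalk: each projector channel
     [0.1, 1.0, 0.1],     # bleeds slightly into neighboring camera channels
     [0.0, 0.1, 1.0]]
g = [0.5, 0.25, 0.75]                                           # true channels
c = [sum(M[r][j] * g[j] for j in range(3)) for r in range(3)]   # camera reading
recovered = solve3(M, c)
```

In practice this unmixing would be applied per pixel (or vectorized) across the whole speckle image.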

  2. 3-D Object Recognition from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Smith, W.; Walker, A. S.; Zhang, B.

    2011-09-01

    The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs. 
Several case

  3. A Taxonomy of 3D Occluded Objects Recognition Techniques

    NASA Astrophysics Data System (ADS)

    Soleimanizadeh, Shiva; Mohamad, Dzulkifli; Saba, Tanzila; Al-ghamdi, Jarallah Saleh

    2016-03-01

    The overall performance of object recognition techniques under different conditions (e.g., occlusion, viewpoint, and illumination) has improved significantly in recent years. New applications and hardware have shifted towards digital photography and digital media, and growing Internet usage requires object recognition for certain applications, particularly for occluded objects. However, occlusion remains an unhandled issue that complicates the relations between feature points extracted from an image, and research is ongoing to develop efficient techniques and easy-to-use algorithms that help users to search images, which requires overcoming the problems and issues surrounding occlusion. The aim of this research is to review algorithms for recognizing occluded objects and to examine their pros and cons in solving the occlusion problem: the features extracted from an occluded object must distinguish it from other co-existing objects, so we identify the techniques that can differentiate the occluded fragments and sections inside an image.

  4. 3D measurement of human upper body for gesture recognition

    NASA Astrophysics Data System (ADS)

    Wan, Khairunizam; Sawada, Hideyuki

    2007-10-01

    Measurement of human motion is widely required for various applications, and a significant part of this task is to identify motion in the process of human motion recognition. This research has several application areas, such as surveillance, entertainment, medical treatment and traffic, where user interfaces require the recognition of different parts of the human body to identify an action or a motion. The most challenging task in human motion recognition is to achieve the ability and reliability of a motion capture system for tracking and recognizing dynamic movements, because the human body structure has many degrees of freedom. Many attempts at recognizing body actions have been reported so far, in which gestural motions are first measured by sensors and the obtained data are then processed in a computer. This paper introduces the 3D motion analysis of the human upper body using an optical motion capture system for the purpose of gesture recognition. In this study, an image processing technique to track optical markers attached at feature points of the human body is introduced for constructing a human upper body model and estimating its three-dimensional motion.

  5. Pose-Invariant Face Recognition via RGB-D Images

    PubMed Central

    Sang, Gaoli; Li, Jing; Zhao, Qijun

    2016-01-01

    Three-dimensional (3D) face models can intrinsically handle large pose face recognition problem. In this paper, we propose a novel pose-invariant face recognition method via RGB-D images. By employing depth, our method is able to handle self-occlusion and deformation, both of which are challenging problems in two-dimensional (2D) face recognition. Texture images in the gallery can be rendered to the same view as the probe via depth. Meanwhile, depth is also used for similarity measure via frontalization and symmetric filling. Finally, both texture and depth contribute to the final identity estimation. Experiments on Bosphorus, CurtinFaces, Eurecom, and Kiwi databases demonstrate that the additional depth information has improved the performance of face recognition with large pose variations and under even more challenging conditions. PMID:26819581

  6. Expression-invariant face recognition using three-dimensional weighted walkthrough and centroid distance

    NASA Astrophysics Data System (ADS)

    Liang, Yan; Zhang, Yun

    2015-09-01

    Three-dimensional (3-D) face recognition offers the potential to handle challenges caused by illumination and pose variations. However, extreme expression variations still complicate the task of recognition. An accurate and robust method for expression-invariant 3-D face recognition is proposed. A 3-D face is partitioned into a set of isogeodesic stripes, and the spatial relationships of the stripes are described by the 3-D weighted walkthrough and the centroid distance. Moreover, a method for measuring the similarity of these descriptions is given. Experiments are performed on the CASIA dataset and the FRGC v2.0 dataset. The results show that our method achieves good recognition performance despite large expression variations.

  7. Hough-based recognition of complex 3-D road scenes

    NASA Astrophysics Data System (ADS)

    Foresti, Gian L.; Regazzoni, Carlo S.

    1992-02-01

    In this paper, we address the problem of object recognition in a complex 3-D scene by detecting the 2-D object projection on the image plane for autonomous vehicle driving; in particular, the problems of road detection and obstacle avoidance in natural road scenes are investigated. A new implementation of the Hough Transform (HT), called the Labeled Hough Transform (LHT), for extracting and grouping symbolic features is presented here; the novelty of this method, with respect to the traditional approach, is its capability of splitting a maximum in the parameter space into noncontiguous segments while performing voting. Results are presented on a road image containing obstacles, which show the efficiency, good quality, and time performance of the algorithm.
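For context, the classical Hough Transform that the LHT extends accumulates votes over line parameters (rho, theta); the sketch below restricts theta to [0°, 90°) for simplicity:

```python
# Minimal Hough transform for lines: each point votes for every (rho, theta)
# bin consistent with rho = x*cos(theta) + y*sin(theta); the strongest bin
# corresponds to the dominant line.
import math

def hough_lines(points, n_theta=90):
    acc = {}
    for (x, y) in points:
        for t in range(n_theta):
            theta = (math.pi / 2) * t / n_theta   # theta in [0, pi/2)
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    (rho, t), votes = max(acc.items(), key=lambda kv: kv[1])
    return rho, t, votes

# Ten collinear points on the vertical line x = 5 (theta = 0, rho = 5)
points = [(5, y) for y in range(10)]
rho, t, votes = hough_lines(points)
```

The LHT's contribution is to label votes so that one accumulator maximum can be split back into the noncontiguous image segments that produced it, which this plain version cannot do.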

  8. Many-faced cells and many-edged faces in 3D Poisson-Voronoi tessellations

    NASA Astrophysics Data System (ADS)

    Hilhorst, H. J.; Lazar, E. A.

    2014-10-01

    Motivated by recent new Monte Carlo data we investigate a heuristic asymptotic theory that applies to n-faced 3D Poisson-Voronoi cells in the limit of large n. We show how this theory may be extended to n-edged cell faces. It predicts the leading order large-n behavior of the average volume and surface area of the n-faced cell, and of the average area and perimeter of the n-edged face. Such a face is shown to be surrounded by a toroidal region of volume n/λ (with λ the seed density) that is void of seeds. Two neighboring cells sharing an n-edged face are found to have their seeds at a typical distance that scales as n-1/6 and whose probability law we determine. We present a new data set of 4 × 109 Monte Carlo generated 3D Poisson-Voronoi cells, larger than any before. Full compatibility is found between the Monte Carlo data and the theory. Deviations from the asymptotic predictions are explained in terms of subleading corrections whose powers in n we estimate from the data.

  9. Creating 3D realistic head: from two orthogonal photos to multiview face contents

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Lin, Qian; Tang, Feng; Tang, Liang; Lim, Sukhwan; Wang, Shengjin

    2011-03-01

    3D head models have many applications, such as virtual conferencing and 3D web games. Several existing web-based face modeling solutions that can create a 3D face model from one or two user-uploaded face images are limited to generating a 3D model of the face region only, and the accuracy of such reconstruction is very limited for side views as well as hair regions. The goal of our research is to develop a framework for reconstructing a realistic 3D human head from two approximately orthogonal views. Our framework takes two images and goes through segmentation, feature point detection, 3D bald head reconstruction, 3D hair reconstruction and texture mapping to create a 3D head model. The main contribution of the paper is that the processing steps apply to the hair region as well as the face region.

  10. Bayesian Face Recognition and Perceptual Narrowing in Face-Space

    ERIC Educational Resources Information Center

    Balas, Benjamin

    2012-01-01

    During the first year of life, infants' face recognition abilities are subject to "perceptual narrowing", the end result of which is that observers lose the ability to distinguish previously discriminable faces (e.g. other-race faces) from one another. Perceptual narrowing has been reported for faces of different species and different races, in…

  11. Neural microgenesis of personally familiar face recognition

    PubMed Central

    Ramon, Meike; Vizioli, Luca; Liu-Shuang, Joan; Rossion, Bruno

    2015-01-01

    Despite a wealth of information provided by neuroimaging research, the neural basis of familiar face recognition in humans remains largely unknown. Here, we isolated the discriminative neural responses to unfamiliar and familiar faces by slowly increasing visual information (i.e., high-spatial frequencies) to progressively reveal faces of unfamiliar or personally familiar individuals. Activation in ventral occipitotemporal face-preferential regions increased with visual information, independently of long-term face familiarity. In contrast, medial temporal lobe structures (perirhinal cortex, amygdala, hippocampus) and anterior inferior temporal cortex responded abruptly when sufficient information for familiar face recognition was accumulated. These observations suggest that following detailed analysis of individual faces in core posterior areas of the face-processing network, familiar face recognition emerges categorically in medial temporal and anterior regions of the extended cortical face network. PMID:26283361

  12. Transferring of speech movements from video to 3D face space.

    PubMed

    Pei, Yuru; Zha, Hongbin

    2007-01-01

    We present a novel method for transferring speech animation recorded in low quality videos to high resolution 3D face models. The basic idea is to synthesize the animated faces by an interpolation based on a small set of 3D key face shapes which span a 3D face space. The 3D key shapes are extracted by an unsupervised learning process in 2D video space to form a set of 2D visemes which are then mapped to the 3D face space. The learning process consists of two main phases: 1) Isomap-based nonlinear dimensionality reduction to embed the video speech movements into a low-dimensional manifold and 2) K-means clustering in the low-dimensional space to extract 2D key viseme frames. Our main contribution is that we use the Isomap-based learning method to extract intrinsic geometry of the speech video space and thus to make it possible to define the 3D key viseme shapes. To do so, we need only to capture a limited number of 3D key face models by using a general 3D scanner. Moreover, we also develop a skull movement recovery method based on simple anatomical structures to enhance 3D realism in local mouth movements. Experimental results show that our method can achieve realistic 3D animation effects with a small number of 3D key face models. PMID:17093336
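    The two-phase learning process above (Isomap embedding followed by K-means clustering) can be sketched as follows. This is a minimal illustration with synthetic frame data; the frame count, image size, neighborhood size, and number of visemes are all assumptions, not the paper's actual values.

    ```python
    # Sketch of the two-phase viseme extraction: Isomap embeds the video
    # frames into a low-dimensional manifold, then K-means picks cluster
    # centers whose nearest frames serve as 2D key viseme frames.
    import numpy as np
    from sklearn.manifold import Isomap
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    frames = rng.random((200, 40 * 30))  # 200 mouth-region frames, flattened

    # Phase 1: nonlinear dimensionality reduction onto a low-dim manifold.
    embedding = Isomap(n_neighbors=8, n_components=3).fit_transform(frames)

    # Phase 2: cluster in the embedded space.
    kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(embedding)

    # The frame nearest each cluster center is kept as a key viseme frame.
    key_frames = [
        int(np.argmin(np.linalg.norm(embedding - c, axis=1)))
        for c in kmeans.cluster_centers_
    ]
    print(len(key_frames))  # one candidate key frame per cluster
    ```

    In the paper these 2D key visemes are then mapped to scanned 3D key face shapes; here the sketch stops at the 2D selection step.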

  13. 3D facial expression recognition using maximum relevance minimum redundancy geometrical features

    NASA Astrophysics Data System (ADS)

    Rabiu, Habibu; Saripan, M. Iqbal; Mashohor, Syamsiah; Marhaban, Mohd Hamiruce

    2012-12-01

    In recent years, facial expression recognition (FER) has become an attractive research area which, besides the fundamental challenges it poses, finds applications in areas such as human-computer interaction, clinical psychology, lie detection, pain assessment, and neurology. Generally, approaches to FER consist of three main steps: face detection, feature extraction, and expression recognition. The recognition accuracy of FER hinges immensely on the relevance of the selected features in representing the target expressions. In this article, we present a person- and gender-independent 3D facial expression recognition method using maximum-relevance minimum-redundancy (mRMR) geometrical features. The aim is to detect a compact set of features that sufficiently represents the most discriminative differences between the target classes. A multi-class one-against-one SVM classifier was employed to recognize the seven facial expressions: neutral, happy, sad, angry, fear, disgust, and surprise. An average recognition accuracy of 92.2% was recorded. Furthermore, inter-database homogeneity was investigated between two independent databases, BU-3DFE and UPM-3DFE; the results showed a strong homogeneity between the two databases.
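    The mRMR criterion mentioned above can be sketched as a greedy loop that at each step adds the candidate feature with the highest relevance to the class label minus its redundancy with features already chosen. The data here is synthetic and the relevance/redundancy estimators (mutual information, absolute correlation) are common stand-ins, not necessarily the paper's exact choices.

    ```python
    # Illustrative greedy mRMR (maximum relevance, minimum redundancy)
    # feature selection over synthetic geometric features.
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    rng = np.random.default_rng(0)
    X = rng.random((300, 20))            # 300 samples, 20 candidate features
    y = rng.integers(0, 7, size=300)     # 7 expression classes

    # Relevance: mutual information between each feature and the label.
    relevance = mutual_info_classif(X, y, random_state=0)

    def redundancy(i, selected):
        # Mean absolute correlation with already-selected features.
        return np.mean([abs(np.corrcoef(X[:, i], X[:, j])[0, 1])
                        for j in selected])

    # Greedy selection: relevance minus redundancy, 5 features total.
    selected = [int(np.argmax(relevance))]
    while len(selected) < 5:
        scores = [(relevance[i] - redundancy(i, selected), i)
                  for i in range(X.shape[1]) if i not in selected]
        selected.append(max(scores)[1])
    print(selected)
    ```

    The selected indices would then feed the one-against-one SVM stage described in the abstract.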

  14. Face Recognition Increases during Saccade Preparation

    PubMed Central

    Lin, Hai; Rizak, Joshua D.; Ma, Yuan-ye; Yang, Shang-chuan; Chen, Lin; Hu, Xin-tian

    2014-01-01

    Face perception is integral to the human perception system as it underlies social interactions. Saccadic eye movements are frequently made to bring interesting visual information, such as faces, onto the fovea for detailed processing. Just before eye movement onset, the processing of some basic features of an object, such as its orientation, improves at the saccade landing point. Interestingly, there is also evidence indicating that faces are processed in early visual processing stages similarly to basic features. However, it is not known whether this early enhancement of processing includes face recognition. In this study, three experiments were performed that mapped the timing of face presentation to the beginning of the eye movement in order to evaluate pre-saccadic face recognition. Faces were found to be processed similarly to simple objects immediately prior to saccadic movements. Starting ∼120 ms before a saccade to a target face, independent of whether or not the face was surrounded by other faces, face recognition gradually improved and the critical spacing of the crowding decreased as saccade onset approached. These results suggest that an upcoming saccade prepares the visual system for new information about faces at the saccade landing site and may reduce the background in a crowd to target the intended face. This indicates an important role of pre-saccadic eye movement signals in human face recognition. PMID:24671174

  15. Partial face recognition: alignment-free approach.

    PubMed

    Liao, Shengcai; Jain, Anil K; Li, Stan Z

    2013-05-01

    Numerous methods have been developed for holistic face recognition with impressive performance. However, few studies have tackled how to recognize an arbitrary patch of a face image. Partial faces frequently appear in unconstrained scenarios, with images captured by surveillance cameras or handheld devices (e.g., mobile phones) in particular. In this paper, we propose a general partial face recognition approach that does not require face alignment by eye coordinates or any other fiducial points. We develop an alignment-free face representation method based on Multi-Keypoint Descriptors (MKD), where the descriptor size of a face is determined by the actual content of the image. In this way, any probe face image, holistic or partial, can be sparsely represented by a large dictionary of gallery descriptors. A new keypoint descriptor called Gabor Ternary Pattern (GTP) is also developed for robust and discriminative face recognition. Experimental results are reported on four public domain face databases (FRGCv2.0, AR, LFW, and PubFig) under both the open-set identification and verification scenarios. Comparisons with two leading commercial face recognition SDKs (PittPatt and FaceVACS) and two baseline algorithms (PCA+LDA and LBP) show that the proposed method, overall, is superior in recognizing both holistic and partial faces without requiring alignment. PMID:23520259

  16. Face photo-sketch synthesis and recognition.

    PubMed

    Wang, Xiaogang; Tang, Xiaoou

    2009-11-01

    In this paper, we propose a novel face photo-sketch synthesis and recognition method using a multiscale Markov Random Fields (MRF) model. Our system has three components: 1) given a face photo, synthesizing a sketch drawing; 2) given a face sketch drawing, synthesizing a photo; and 3) searching for face photos in the database based on a query sketch drawn by an artist. It has useful applications for both digital entertainment and law enforcement. We assume that faces to be studied are in a frontal pose, with normal lighting and neutral expression, and have no occlusions. To synthesize sketch/photo images, the face region is divided into overlapping patches for learning. The size of the patches decides the scale of local face structures to be learned. From a training set which contains photo-sketch pairs, the joint photo-sketch model is learned at multiple scales using a multiscale MRF model. By transforming a face photo to a sketch (or transforming a sketch to a photo), the difference between photos and sketches is significantly reduced, thus allowing effective matching between the two in face sketch recognition. After the photo-sketch transformation, in principle, most of the proposed face photo recognition approaches can be applied to face sketch recognition in a straightforward way. Extensive experiments are conducted on a face sketch database including 606 faces, which can be downloaded from our Web site (http://mmlab.ie.cuhk.edu.hk/facesketch.html). PMID:19762924

  17. Pose estimation and frontal face detection for face recognition

    NASA Astrophysics Data System (ADS)

    Lim, Eng Thiam; Wang, Jiangang; Xie, Wei; Ronda, Venkarteswarlu

    2005-05-01

    This paper proposes a pose estimation and frontal face detection algorithm for face recognition. Considering its application in a real-world environment, the algorithm has to be robust yet computationally efficient. The main contribution of this paper is efficient face localization, scale, and pose estimation using color models. Simulation results showed a very low computational load compared to other face detection algorithms. The second contribution is the introduction of a low-dimensional statistical face geometrical model. Compared to other statistical face models, the proposed method models the face geometry efficiently. The algorithm is demonstrated on a real-time system. The simulation results indicate that the proposed algorithm is computationally efficient.

  18. The advantages of stereo vision in a face recognition system

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2014-06-01

    Humans can recognize a face with binocular vision, while computers typically use a single face image. It is known that the performance of face recognition (by a computer) can be improved using the score fusion of multimodal images and multiple algorithms. A question is: can we apply stereo vision to a face recognition system? We know that human binocular vision has many advantages, such as stereopsis (3D vision), binocular summation, and singleness of vision including fusion of binocular images (the cyclopean image). For face recognition, a 3D face or 3D facial features are typically computed from a pair of stereo images. In human visual processes, binocular summation and singleness of vision are similar to image fusion processes. In this paper, we propose an advanced face recognition system with stereo imaging capability, which is comprised of two 2-in-1 multispectral (visible and thermal) cameras and three recognition algorithms (circular Gaussian filter, face pattern byte, and linear discriminant analysis [LDA]). Specifically, we present and compare stereo fusion at three levels (images, features, and scores) by using stereo images (from the left and right cameras). Image fusion is achieved with three methods (Laplacian pyramid, wavelet transform, average); feature fusion is done with three logical operations (AND, OR, XOR); and score fusion is implemented with four classifiers (LDA, k-nearest neighbor, support vector machine, binomial logistic regression). The system performance is measured by the probability of correct classification (PCC) rate (reported as accuracy rate in this paper) and the false accept rate (FAR). The proposed approaches were validated with a multispectral stereo face dataset from 105 subjects. Experimental results show that any type of stereo fusion can improve the PCC while reducing the FAR. It seems that stereo image/feature fusion is superior to stereo score fusion in terms of recognition performance. Further score fusion after image
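    The three fusion levels compared above can be sketched with the simplest variant of each (average image fusion, logical-AND feature fusion, mean score fusion). The arrays and scores below are synthetic stand-ins, not data from the paper.

    ```python
    # Minimal sketch of stereo fusion at the image, feature, and score levels.
    import numpy as np

    rng = np.random.default_rng(1)
    left, right = rng.random((64, 64)), rng.random((64, 64))

    # Image-level fusion: pixel-wise average of the stereo pair.
    fused_image = (left + right) / 2.0

    # Feature-level fusion: logical AND of binarized feature maps.
    feat_l, feat_r = left > 0.5, right > 0.5
    fused_features = feat_l & feat_r

    # Score-level fusion: average the match scores from the two views.
    score_l, score_r = 0.82, 0.74
    fused_score = (score_l + score_r) / 2.0
    print(round(fused_score, 2))  # 0.78
    ```

    In the actual system each level would feed a classifier; the paper's finding is that fusing earlier (images or features) tends to outperform fusing the final scores.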

  19. Extraversion predicts individual differences in face recognition.

    PubMed

    Li, Jingguang; Tian, Moqian; Fang, Huizhen; Xu, Miao; Li, He; Liu, Jia

    2010-07-01

    In daily life, one of the most common social tasks we perform is to recognize faces. However, the relation between face recognition ability and social activities is largely unknown. Here we ask whether individuals with better social skills are also better at recognizing faces. We found that extraverts who have better social skills correctly recognized more faces than introverts. However, this advantage was absent when extraverts were asked to recognize non-social stimuli (e.g., flowers). In particular, the underlying facet that makes extraverts better face recognizers is the gregariousness facet that measures the degree of inter-personal interaction. In addition, the link between extraversion and face recognition ability was independent of general cognitive abilities. These findings provide the first evidence that links face recognition ability to our daily activity in social communication, supporting the hypothesis that extraverts are better at decoding social information than introverts. PMID:20798810

  20. Sexual Dimorphism Analysis and Gender Classification in 3D Human Face

    NASA Astrophysics Data System (ADS)

    Hu, Yuan; Lu, Li; Yan, Jingqi; Liu, Zhi; Shi, Pengfei

    In this paper, we present a sexual dimorphism analysis of the 3D human face and perform gender classification based on the result of that analysis. Four types of features are extracted from a 3D human-face image. By using statistical methods, the existence of sexual dimorphism is demonstrated in the 3D human face based on these features. The contributions of each feature to sexual dimorphism are quantified according to a novel criterion. The best gender classification rate is 94%, obtained using SVMs and the Matcher Weighting fusion method. This research adds to the knowledge of sexual dimorphism in 3D faces and affords a foundation that could be used to distinguish between male and female in 3D faces.

  1. Face recognition motivated by human approach

    NASA Astrophysics Data System (ADS)

    Kamgar-Parsi, Behrooz; Lawson, Wallace Edgar; Kamgar-Parsi, Behzad

    2010-04-01

    We report the development of a face recognition system which operates in the same way as humans in that it is capable of recognizing a number of people, while rejecting everybody else as strangers. While humans do it routinely, a particularly challenging aspect of the problem of open-world face recognition has been the question of rejecting previously unseen faces as unfamiliar. Our approach can handle previously unseen faces; it is based on identifying and enclosing the region(s) in the human face space which belong to the target person(s).

  2. Face recognition system and method using face pattern words and face pattern bytes

    SciTech Connect

    Zheng, Yufeng

    2014-12-23

    The present invention provides a novel system and method for identifying individuals and for face recognition utilizing facial features for face identification. The system and method of the invention comprise creating facial features or face patterns called face pattern words and face pattern bytes for face identification. The invention also provides for pattern recognition for identification other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals by utilizing computer software implemented by instructions on a computer or computer system and a computer-readable medium containing instructions on a computer system for face recognition and identification.

  3. Recognition of Unfamiliar Talking Faces at Birth

    ERIC Educational Resources Information Center

    Coulon, Marion; Guellai, Bahia; Streri, Arlette

    2011-01-01

    Sai (2005) investigated the role of speech in newborns' recognition of their mothers' faces. Her results revealed that, when presented with both their mother's face and that of a stranger, newborns preferred looking at their mother only if she had previously talked to them. The present study attempted to extend these findings to any other faces.…

  4. Contextual Modulation of Biases in Face Recognition

    PubMed Central

    Felisberti, Fatima Maria; Pavey, Louisa

    2010-01-01

    Background The ability to recognize the faces of potential cooperators and cheaters is fundamental to social exchanges, given that cooperation for mutual benefit is expected. Studies addressing biases in face recognition have so far proved inconclusive, with reports of biases towards faces of cheaters, biases towards faces of cooperators, or no biases at all. This study attempts to uncover possible causes underlying such discrepancies. Methodology and Findings Four experiments were designed to investigate biases in face recognition during social exchanges when behavioral descriptors (prosocial, antisocial or neutral) embedded in different scenarios were tagged to faces during memorization. Face recognition, measured as accuracy and response latency, was tested with modified yes-no, forced-choice and recall tasks (N = 174). An enhanced recognition of faces tagged with prosocial descriptors was observed when the encoding scenario involved financial transactions and the rules of the social contract were not explicit (experiments 1 and 2). Such bias was eliminated or attenuated by making participants explicitly aware of “cooperative”, “cheating” and “neutral/indifferent” behaviors via a pre-test questionnaire and then adding such tags to behavioral descriptors (experiment 3). Further, in a social judgment scenario with descriptors of salient moral behaviors, recognition of antisocial and prosocial faces was similar, but significantly better than neutral faces (experiment 4). Conclusion The results highlight the relevance of descriptors and scenarios of social exchange in face recognition, when the frequency of prosocial and antisocial individuals in a group is similar. Recognition biases towards prosocial faces emerged when descriptors did not state the rules of a social contract or the moral status of a behavior, and they point to the existence of broad and flexible cognitive abilities finely tuned to minor changes in social context. PMID:20886086

  5. Challenges Facing 3-D Audio Display Design for Multimedia

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    The challenges facing successful multimedia presentation depend largely on the expectations of the designer and end user for a given application. Perceptual limitations in distance, elevation, and azimuth sound source simulation differ significantly between headphone and cross-talk cancellation loudspeaker listening, and therefore must be considered. Simulation of an environmental context is desirable, but the quality depends on processing resources and the lack of interaction with the host acoustical environment. While techniques such as data reduction of head-related transfer functions have been used widely to improve simulation fidelity, another approach involves determining thresholds for environmental acoustic events. Psychoacoustic studies relevant to this approach are reviewed in consideration of multimedia applications.

  6. Real-time, face recognition technology

    SciTech Connect

    Brady, S.

    1995-11-01

    The Institute for Scientific Computing Research (ISCR) at Lawrence Livermore National Laboratory recently developed the real-time, face recognition technology KEN. KEN uses novel imaging devices such as silicon retinas developed at Caltech or off-the-shelf CCD cameras to acquire images of a face and to compare them to a database of known faces in a robust fashion. The KEN-Online project makes that recognition technology accessible through the World Wide Web (WWW), an internet service that has recently seen explosive growth. A WWW client can submit face images, add them to the database of known faces and submit other pictures that the system tries to recognize. KEN-Online serves to evaluate the recognition technology and grow a large face database. KEN-Online includes the use of public domain tools such as mSQL for its name-database and perl scripts to assist the uploading of images.

  7. Head pose estimation from a 2D face image using 3D face morphing with depth parameters.

    PubMed

    Kong, Seong G; Mbouna, Ralph Oyini

    2015-06-01

    This paper presents estimation of head pose angles from a single 2D face image using a 3D face model morphed from a reference face model. A reference model refers to a 3D face of a person of the same ethnicity and gender as the query subject. The proposed scheme minimizes the disparity between the two sets of prominent facial features on the query face image and the corresponding points on the 3D face model to estimate the head pose angles. The 3D face model used is morphed from a reference model to be more specific to the query face in terms of the depth error at the feature points. The morphing process produces a 3D face model more specific to the query image when multiple 2D face images of the query subject are available for training. The proposed morphing process is computationally efficient since the depth of a 3D face model is adjusted by a scalar depth parameter at feature points. Optimal depth parameters are found by minimizing the disparity between the 2D features of the query face image and the corresponding features on the morphed 3D model projected onto 2D space. The proposed head pose estimation technique was evaluated on two benchmarking databases: 1) the USF Human-ID database for depth estimation and 2) the Pointing'04 database for head pose estimation. Experiment results demonstrate that head pose estimation errors in nodding and shaking angles are as low as 7.93° and 4.65° on average for a single 2D input face image. PMID:25706638
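    The key efficiency claim above is that depth is adjusted by a scalar parameter per feature point, so fitting reduces to a small optimization over reprojection disparity. The toy below uses one shared scalar and a simple weak-perspective projection; the projection model, point counts, and the recovered value are all illustrative assumptions.

    ```python
    # Toy depth-parameter fitting: choose a scalar depth parameter that
    # minimizes 2D disparity between projected model features and the
    # query's 2D feature points.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    reference = rng.random((10, 3))   # 3D feature points of a reference model
    true_scale = 1.3                  # unknown depth scaling to recover
    query_2d = reference[:, :2] + 0.05 * true_scale * reference[:, 2:3]

    def project(points, depth_param):
        # Weak-perspective projection with a scalar depth parameter.
        return points[:, :2] + 0.05 * depth_param * points[:, 2:3]

    def disparity(p):
        return np.sum((project(reference, p[0]) - query_2d) ** 2)

    result = minimize(disparity, x0=[1.0])
    print(round(float(result.x[0]), 2))  # recovers ~1.3
    ```

    The paper optimizes one such parameter per feature point against a morphed reference model; this sketch collapses that to a single shared scalar to show the shape of the objective.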

  8. Single view-based 3D face reconstruction robust to self-occlusion

    NASA Astrophysics Data System (ADS)

    Lee, Youn Joo; Lee, Sung Joo; Park, Kang Ryoung; Jo, Jaeik; Kim, Jaihie

    2012-12-01

    State-of-the-art 3D morphable model (3DMM) is used widely for 3D face reconstruction based on a single image. However, this method has a high computational cost, and hence, a simplified 3D morphable model (S3DMM) was proposed as an alternative. Unlike the original 3DMM, S3DMM uses only a sparse 3D facial shape, and therefore, it incurs a lower computational cost. However, this method is vulnerable to self-occlusion due to head rotation. Therefore, we propose a solution to the self-occlusion problem in S3DMM-based 3D face reconstruction. This research is novel compared with previous works, in the following three respects. First, self-occlusion of the input face is detected automatically by estimating the head pose using a cylindrical head model. Second, a 3D model fitting scheme is designed based on selected visible facial feature points, which facilitates 3D face reconstruction without any effect from self-occlusion. Third, the reconstruction performance is enhanced by using the estimated pose as the initial pose parameter during the 3D model fitting process. The experimental results showed that the self-occlusion detection had high accuracy and our proposed method delivered a noticeable improvement in the 3D face reconstruction performance compared with previous methods.

  9. Sparse Feature Extraction for Pose-Tolerant Face Recognition.

    PubMed

    Abiantun, Ramzi; Prabhu, Utsav; Savvides, Marios

    2014-10-01

    Automatic face recognition performance has been steadily improving over years of research; however, it remains significantly affected by a number of factors, such as illumination, pose, expression, and resolution, that can impact matching scores. The focus of this paper is the pose problem, which remains largely overlooked in most real-world applications. Specifically, we focus on one-to-one matching scenarios where a query face image of a random pose is matched against a set of gallery images. We propose a method that relies on two fundamental components: (a) a 3D modeling step to geometrically correct the viewpoint of the face, for which we extend a recent technique for efficient synthesis of 3D face models called the 3D Generic Elastic Model; and (b) a sparse feature extraction step using subspace modeling and ℓ1-minimization to induce pose-tolerance in coefficient space. This in turn enables the synthesis of an equivalent frontal-looking face, which can be used towards recognition. We show significant improvements in verification rates compared to commercial matchers, and also demonstrate the resilience of the proposed method with respect to degrading input quality. We find that the proposed technique is able to match non-frontal images to other non-frontal images of varying angles. PMID:26352635
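    The ℓ1-minimization step above can be illustrated by coding a query feature vector as a sparse combination of gallery atoms. Lasso is used here as a common stand-in for the paper's ℓ1 solver, and the gallery, query, and threshold are synthetic assumptions.

    ```python
    # Sparse coding of a query vector over a gallery dictionary via
    # l1-regularized regression; the sparse support indicates which
    # gallery atoms explain the query.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    gallery = rng.standard_normal((64, 30))          # 30 gallery atoms
    query = gallery[:, 4] * 0.9 + gallery[:, 12] * 0.5

    coder = Lasso(alpha=0.05, fit_intercept=False, max_iter=5000)
    coder.fit(gallery, query)
    support = np.flatnonzero(np.abs(coder.coef_) > 0.1)
    print(support)  # expected to be dominated by atoms 4 and 12
    ```

    In the paper this sparsity is exploited in a learned subspace to tolerate pose-induced missing data; the sketch shows only the coding step.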

  10. Face Recognition in Humans and Machines

    NASA Astrophysics Data System (ADS)

    O'Toole, Alice; Tistarelli, Massimo

    The study of human face recognition by psychologists and neuroscientists has run parallel to the development of automatic face recognition technologies by computer scientists and engineers. In both cases, there are analogous steps of data acquisition, image processing, and the formation of representations that can support the complex and diverse tasks we accomplish with faces. These processes can be understood and compared in the context of their neural and computational implementations. In this chapter, we present the essential elements of face recognition by humans and machines, taking a perspective that spans psychological, neural, and computational approaches. From the human side, we overview the methods and techniques used in the neurobiology of face recognition, the underlying neural architecture of the system, the role of visual attention, and the nature of the representations that emerge. From the computational side, we discuss face recognition technologies and the strategies they use to overcome challenges to robust operation over viewing parameters. Finally, we conclude the chapter with a look at some recent studies that compare human and machine performance at face recognition.

  11. 3D face reconstruction from limited images based on differential evolution

    NASA Astrophysics Data System (ADS)

    Wang, Qun; Li, Jiang; Asari, Vijayan K.; Karim, Mohammad A.

    2011-09-01

    3D face modeling has been one of the greatest challenges for researchers in computer graphics for many years. Various methods have been used to model the shape and texture of faces under varying illumination and pose conditions from a single given image. In this paper, we propose a novel method for 3D face synthesis and reconstruction using a simple and efficient global optimizer. A 3D-2D matching algorithm that integrates the 3D morphable model (3DMM) and the differential evolution (DE) algorithm is addressed. In 3DMM, the process of fitting shape and texture information to 2D images is considered a search for the global minimum in a high-dimensional feature space, in which optimization is apt to converge locally. Unlike the traditional scheme used in 3DMM, DE appears to be robust against stagnation in local minima and sensitivity to initial values in face reconstruction. Benefitting from DE's successful performance, 3D face models can be created from a single 2D image under various illumination and pose contexts. Preliminary results demonstrate that we are able to automatically create a virtual 3D face from a single 2D image with high performance. The validation process shows that there is only an insignificant difference between the input image and the 2D face image projected by the 3D model.
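    The fitting step above searches a high-dimensional coefficient space with many local minima; the toy below replaces the 3DMM cost with a 3-parameter multi-modal "fitting error" to show how differential evolution escapes local convergence. The bounds, target coefficients, and cost function are stand-ins, not the paper's model.

    ```python
    # Differential evolution on a multi-modal surrogate for 3DMM fitting
    # error: many local minima, one global minimum at the target coeffs.
    import numpy as np
    from scipy.optimize import differential_evolution

    target = np.array([0.2, -0.4, 0.7])   # "true" shape coefficients (toy)

    def fitting_error(coeffs):
        # Rastrigin-like cost: quadratic bowl plus oscillatory term.
        d = np.asarray(coeffs) - target
        return float(np.sum(d ** 2 + 0.3 * (1 - np.cos(6 * np.pi * d))))

    result = differential_evolution(
        fitting_error, bounds=[(-1, 1)] * 3, seed=0, tol=1e-8
    )
    print(np.round(result.x, 2))  # close to the target coefficients
    ```

    A gradient-based fitter started far from the target would typically stall in one of the oscillatory local minima, which is the failure mode the paper's DE scheme is meant to avoid.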

  12. Photon-counting passive 3D image sensing and processing for automatic target recognition

    NASA Astrophysics Data System (ADS)

    Yeom, Seokwon; Javidi, Bahram; Watson, Edward

    2008-04-01

    In this paper we overview the nonlinear matched filtering for photon counting recognition with 3D passive sensing. The first and second order statistical properties of the nonlinear matched filtering can improve the recognition performance compared to the linear matched filtering. Automatic target reconstruction and recognition are addressed for partially occluded objects. The recognition performance is shown to be improved significantly in the reconstruction space. The discrimination capability is analyzed in terms of Fisher ratio (FR) and receiver operating characteristic (ROC) curves.

  13. Gabor wavelet associative memory for face recognition.

    PubMed

    Zhang, Haihong; Zhang, Bailing; Huang, Weimin; Tian, Qi

    2005-01-01

    This letter describes a high-performance face recognition system that combines two recently proposed neural network models, namely the Gabor wavelet network (GWN) and kernel associative memory (KAM), into a unified structure called the Gabor wavelet associative memory (GWAM). GWAM has superior representation capability inherited from GWN and consequently demonstrates much better recognition performance than KAM. Extensive experiments have been conducted to evaluate a GWAM-based recognition scheme using three popular face databases, i.e., the FERET database, the Olivetti-Oracle Research Lab (ORL) database, and the AR face database. The experimental results consistently show our scheme's superiority, comparing favorably to some recent face recognition methods: it achieves 99.3% and 100% accuracy, respectively, on the former two databases, and exhibits very robust performance on the last database against varying illumination conditions. PMID:15732406

  14. Combining depth and color data for 3D object recognition

    NASA Astrophysics Data System (ADS)

    Joergensen, Thomas M.; Linneberg, Christian; Andersen, Allan W.

    1997-09-01

    This paper describes the shape recognition system that has been developed within the ESPRIT project 9052 ADAS on automatic disassembly of TV-sets using a robot cell. Depth data from a chirped laser radar are fused with color data from a video camera. The sensor data is pre-processed in several ways and the obtained representation is used to train a RAM neural network (memory based reasoning approach) to detect different components within TV-sets. The shape recognizing architecture has been implemented and tested in a demonstration setup.

  15. A novel thermal face recognition approach using face pattern words

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng

    2010-04-01

    A reliable thermal face recognition system can enhance national security applications such as prevention against terrorism, surveillance, monitoring, and tracking, especially at nighttime. The system can be applied at airports, customs, or high-alert facilities (e.g., nuclear power plants) 24 hours a day. In this paper, we propose a novel face recognition approach utilizing thermal (long-wave infrared) face images that can automatically identify a subject at both daytime and nighttime. With a properly acquired thermal image (as a query image) in the monitoring zone, the following processes are employed: normalization and denoising, face detection, face alignment, face masking, Gabor wavelet transform, face pattern word (FPW) creation, and face identification by similarity measure (Hamming distance). If eyeglasses are present on a subject's face, an eyeglasses mask is automatically extracted from the query face image and then applied to all comparison FPWs (no further transforms). A high identification rate (97.44% with Top-1 match) has been achieved on our preliminary face dataset (of 39 subjects) with the proposed approach, regardless of operating time and glasses-wearing condition.
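    The final matching step above, identification by Hamming distance between binary face codes, can be sketched directly. The "face pattern words" here are random bit vectors standing in for Gabor-derived codes, and the code length and noise level are assumptions.

    ```python
    # Identification by minimum Hamming distance between binary face codes.
    import numpy as np

    rng = np.random.default_rng(0)
    gallery = rng.integers(0, 2, size=(39, 1024), dtype=np.uint8)  # 39 subjects

    # Query: a noisy copy of subject 7's code (~5% of bits flipped).
    query = gallery[7].copy()
    flip = rng.choice(1024, size=51, replace=False)
    query[flip] ^= 1

    # Hamming distance to every gallery code; the minimum is the match.
    hamming = np.count_nonzero(gallery != query, axis=1)
    print(int(np.argmin(hamming)))  # 7: Top-1 match is the true subject
    ```

    The eyeglasses mask in the paper simply excludes masked bit positions from this comparison, which leaves the distance computation otherwise unchanged.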

  16. Viewpoint Invariant Gesture Recognition and 3D Hand Pose Estimation Using RGB-D

    ERIC Educational Resources Information Center

    Doliotis, Paul

    2013-01-01

    The broad application domain of the work presented in this thesis is pattern classification with a focus on gesture recognition and 3D hand pose estimation. One of the main contributions of the proposed thesis is a novel method for 3D hand pose estimation using RGB-D. Hand pose estimation is formulated as a database retrieval problem. The proposed…

  17. Face Recognition: Canonical Mechanisms at Multiple Timescales.

    PubMed

    Giese, Martin A

    2016-07-11

    Adaptation is ubiquitous in the nervous system, and many possible computational roles have been discussed. A new functional imaging study suggests that, in face recognition, the learning of 'norm faces' and adaptation resulting in perceptual after-effects depend on the same mechanism. PMID:27404241

  18. Recognition of Simple 3D Geometrical Objects under Partial Occlusion

    NASA Astrophysics Data System (ADS)

    Barchunova, Alexandra; Sommer, Gerald

    In this paper we present a novel procedure for contour-based recognition of partially occluded three-dimensional objects. In our approach we use images of real and rendered objects whose contours have been deformed by a restricted change of the viewpoint. The preparatory part consists of contour extraction, preprocessing, local structure analysis and feature extraction. The main part deals with an extended construction and functionality of the classifier ensemble Adaptive Occlusion Classifier (AOC). It relies on a hierarchical fragmenting algorithm to perform a local structure analysis which is essential when dealing with occlusions. In the experimental part of this paper we present classification results for five classes of simple geometrical figures: prism, cylinder, half cylinder, a cube, and a bridge. We compare classification results for three classical feature extractors: Fourier descriptors, pseudo Zernike and Zernike moments.

  19. Maximal likelihood correspondence estimation for face recognition across pose.

    PubMed

    Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang

    2014-10-01

    Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods have been proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems of previous image matching-based correspondence learning methods: 1) they fail to fully exploit face-specific structure information in correspondence estimation and 2) they fail to learn personalized correspondence for each probe image. To this end, we first build a model, termed the morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on a maximal likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using a linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of a novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in a complex, unconstrained environment, i.e., the Labeled Faces in the Wild database. PMID:25163062

  20. Efficient live face detection to counter spoof attack in face recognition systems

    NASA Astrophysics Data System (ADS)

    Biswas, Bikram Kumar; Alam, Mohammad S.

    2015-03-01

    Face recognition is a critical tool used in almost all major biometrics-based security systems. However, recognition, authentication, and liveness detection of an actual user's face are major challenges, because an impostor, or a non-live face of the actual user, can be used to spoof the security system. In this research, a robust technique is proposed which detects the liveness of faces in order to counter spoof attacks. The proposed technique uses a three-dimensional (3D) fast Fourier transform to compare the spectral energies of a live face and a fake face in a mathematically selective manner. The mathematical model involves evaluating the energies of selected high-frequency bands of the average power spectra of both live and non-live faces. It also carries out recognition and authentication of the actual user's face using the fringe-adjusted joint transform correlation technique, which has been found to yield the highest correlation output for a match. Experimental tests show that the proposed technique yields excellent results for identifying live faces.
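
    The band-energy comparison described above can be sketched numerically. This is an illustrative reconstruction, not the authors' implementation: the band cutoff, the toy "live" and "fake" signals, and the use of an energy ratio are all assumptions.

```python
import numpy as np

def highband_energy(frames, low_cut=0.5):
    """Fraction of 3D power-spectrum energy above a radial frequency cutoff.

    frames: (T, H, W) array of grayscale frames. The cutoff `low_cut`
    (a fraction of the normalised frequency radius) is an assumption;
    the paper selects its bands empirically.
    """
    spec = np.fft.fftshift(np.fft.fftn(frames))
    power = np.abs(spec) ** 2
    # Normalised frequency radius for every 3D spectrum bin
    grids = np.meshgrid(*[np.linspace(-1, 1, n) for n in frames.shape],
                        indexing="ij")
    radius = np.sqrt(sum(g ** 2 for g in grids))
    band = radius > low_cut
    return power[band].sum() / power.sum()

rng = np.random.default_rng(0)
# Toy stand-ins: a "live" clip with frame-to-frame detail, and a "fake"
# (e.g. printed photo) clip that is temporally static and smoother.
live = rng.normal(size=(8, 32, 32))
fake = np.tile(live.mean(axis=0), (8, 1, 1))
```

A temporally static replay concentrates its energy in the zero temporal-frequency plane, so its high-band energy fraction drops relative to a live sequence.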

  1. Embedded wavelet-based face recognition under variable position

    NASA Astrophysics Data System (ADS)

    Cotret, Pascal; Chevobbe, Stéphane; Darouich, Mehdi

    2015-02-01

    For several years, face recognition has been a hot topic in the image processing field: the technique is applied in several domains such as CCTV and the unlocking of electronic devices. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of subject-position robustness and performance on various systems. The use of the wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale face database B*), that subject position in 3D space can vary by up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on the approximation coefficients of the image's wavelet transform: results are still satisfying after 3 levels of decomposition. Furthermore, the face database size can be divided by a factor of 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed, which is why compression techniques such as the wavelet transform are interesting. Furthermore, it leads to a low-complexity face detection stage compliant with the limited computation resources available on such systems. The approach described in this work is tested on three platforms, from a standard x86-based computer to nanocomputers such as the Raspberry Pi and SECO boards. For K = 3 and a database of 40 faces, the mean execution time for one frame is 0.64 ms on an x86-based computer, 9 ms on a SECO board, and 26 ms on a Raspberry Pi (model B).
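
    The storage saving from keeping only the level-K approximation sub-band can be shown with a minimal sketch. Repeated 2x2 block averaging is used here as a stand-in for the Haar approximation coefficients (equal up to a scale factor); the paper's actual wavelet choice and platform code are not reproduced.

```python
import numpy as np

def haar_approximation(img, levels=3):
    """Keep only the approximation sub-band of a K-level 2D Haar DWT.

    Each level halves both dimensions, so storage shrinks by 2^(2K)
    overall: 64x for K = 3, matching the factor quoted in the abstract.
    """
    a = np.asarray(img, dtype=float)
    for _ in range(levels):
        h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2
        # Average each 2x2 block (Haar approximation up to scaling)
        a = a[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return a

face = np.arange(64 * 64, dtype=float).reshape(64, 64)  # toy 64x64 "face"
coeffs = haar_approximation(face, levels=3)             # 8x8 approximation
```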

  2. Uniform Local Binary Pattern Based Texture-Edge Feature for 3D Human Behavior Recognition

    PubMed Central

    Ming, Yue; Wang, Guangchao; Fan, Chunxiao

    2015-01-01

    With the rapid development of 3D somatosensory technology, human behavior recognition has become an important research field. Human behavior feature analysis has evolved from traditional 2D features to 3D features. In order to improve the performance of human activity recognition, a human behavior recognition method is proposed which is based on hybrid texture-edge local pattern coding for feature extraction and on the integration of RGB and depth video information. The paper mainly focuses on background subtraction for the RGB and depth video sequences of behaviors, extraction and integration of historical images of the behavior outlines, feature extraction, and classification. The new 3D human behavior recognition method achieves rapid and efficient recognition of behavior videos. A large number of experiments show that the proposed method has faster speed and a higher recognition rate. The recognition method is robust to different environmental colors, lighting conditions, and other factors. Meanwhile, the hybrid texture-edge uniform local binary pattern feature can be used in most 3D behavior recognition tasks. PMID:25942404
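
    The "uniform" pattern test at the heart of the local binary pattern descriptor is simple enough to sketch. This is a generic uniform-LBP check, not the authors' hybrid texture-edge code; the 3x3-patch helper and the neighbour ordering are illustrative assumptions.

```python
import numpy as np

def lbp_code(patch):
    """8-neighbour LBP code of a 3x3 patch (clockwise from top-left)."""
    c = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if n >= c else 0 for n in neighbours]
    return sum(b << i for i, b in enumerate(bits)), bits

def is_uniform(bits):
    """A pattern is 'uniform' if the circular bit string has at most two
    0/1 transitions; only these patterns get individual histogram bins,
    which is what keeps the descriptor compact."""
    transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    return transitions <= 2

flat = np.full((3, 3), 7.0)                           # flat region
code_flat, bits_flat = lbp_code(flat)
checker = np.array([[0, 1, 0], [1, 0.5, 1], [0, 1, 0]])  # noisy region
code_chk, bits_chk = lbp_code(checker)
```

Flat regions and clean edges produce uniform codes; high-frequency noise like the checker patch does not, and is lumped into a single bin.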

  3. Uniform Local Binary Pattern Based Texture-Edge Feature for 3D Human Behavior Recognition.

    PubMed

    Ming, Yue; Wang, Guangchao; Fan, Chunxiao

    2015-01-01

    With the rapid development of 3D somatosensory technology, human behavior recognition has become an important research field. Human behavior feature analysis has evolved from traditional 2D features to 3D features. In order to improve the performance of human activity recognition, a human behavior recognition method is proposed which is based on hybrid texture-edge local pattern coding for feature extraction and on the integration of RGB and depth video information. The paper mainly focuses on background subtraction for the RGB and depth video sequences of behaviors, extraction and integration of historical images of the behavior outlines, feature extraction, and classification. The new 3D human behavior recognition method achieves rapid and efficient recognition of behavior videos. A large number of experiments show that the proposed method has faster speed and a higher recognition rate. The recognition method is robust to different environmental colors, lighting conditions, and other factors. Meanwhile, the hybrid texture-edge uniform local binary pattern feature can be used in most 3D behavior recognition tasks. PMID:25942404

  4. Face-space: A unifying concept in face recognition research.

    PubMed

    Valentine, Tim; Lewis, Michael B; Hills, Peter J

    2016-10-01

    The concept of a multidimensional psychological space, in which faces can be represented according to their perceived properties, is fundamental to the modern theorist in face processing. Yet the idea was not clearly expressed until 1991. The background that led to the development of face-space is explained, and its continuing influence on theories of face processing is discussed. Research that has explored the properties of the face-space and sought to understand caricature, including facial adaptation paradigms, is reviewed. Face-space as a theoretical framework for understanding the effect of ethnicity and the development of face recognition is evaluated. Finally, two applications of face-space in the forensic setting are discussed. From initially being presented as a model to explain distinctiveness, inversion, and the effect of ethnicity, face-space has become a central pillar in many aspects of face processing. It is currently being developed to help us understand adaptation effects with faces. While being in principle a simple concept, face-space has shaped, and continues to shape, our understanding of face perception. PMID:25427883

  5. Robust kernel collaborative representation for face recognition

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong

    2015-05-01

    One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient: the training set usually does not include enough samples to capture the varieties of high-dimensional face images caused by illumination, facial expression, and posture. When the test sample is significantly different from the training samples of the same subject, recognition performance is sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We argue that the virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. To further improve robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training sample can be used in our method. We use noised face images to obtain virtual face samples: the noise can be approximately viewed as a reflection of the varieties of illumination, facial expression, and posture. Our work offers a simple and feasible way to obtain virtual face samples, by imposing Gaussian noise (or other types of noise) on the original training samples to obtain possible variations of them. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and kernel CRC.
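
    A minimal, linear-kernel sketch of the idea: augment the gallery with noisy virtual copies, then classify by collaborative representation (l2-regularised coding over all training samples, class-wise residual). The kernel mapping and the paper's joint objective over original and virtual coefficients are omitted; the toy data, noise level, and regularisation weight are illustrative.

```python
import numpy as np

def crc_classify(X, labels, y, lam=0.01):
    """Collaborative representation classification (CRC): code the test
    sample over ALL training samples with l2-regularised least squares,
    then assign the class with the smallest reconstruction residual."""
    A = X.T @ X + lam * np.eye(X.shape[1])
    alpha = np.linalg.solve(A, X.T @ y)
    best, best_r = None, np.inf
    for c in set(labels):
        idx = [i for i, l in enumerate(labels) if l == c]
        r = np.linalg.norm(y - X[:, idx] @ alpha[idx])
        if r < best_r:
            best, best_r = c, r
    return best

rng = np.random.default_rng(1)
# Two classes with two genuine samples each (columns of X).
X = np.array([[1, 1, 0], [1, 0.9, 0.1], [0, 0, 1], [0.1, 0, 1]], float).T
labels = [0, 0, 1, 1]
# Virtual samples: noisy copies standing in for unseen variations
# (the 0.05 noise level is an illustrative choice).
Xv = np.hstack([X, X + 0.05 * rng.normal(size=X.shape)])
labels_v = labels * 2
y = np.array([1.0, 0.95, 0.05])  # test vector resembling class 0
```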

  6. Proposal for the development of 3D Vertically Integrated Pattern Recognition Associative Memory (VIPRAM)

    SciTech Connect

    Deptuch, Gregory; Hoff, Jim; Kwan, Simon; Lipton, Ron; Liu, Ted; Ramberg, Erik; Todri, Aida; Yarema, Ray; Demarteau, Marcel; Drake, Gary; Weerts, Harry; /Argonne /Chicago U. /Padua U. /INFN, Padua

    2010-10-01

    Future particle physics experiments looking for rare processes will have no choice but to address the demanding challenges of fast pattern recognition in triggering as detector hit density becomes significantly higher due to the high luminosity required to produce the rare process. The authors propose to develop a 3D Vertically Integrated Pattern Recognition Associative Memory (VIPRAM) chip for HEP applications, to advance the state-of-the-art for pattern recognition and track reconstruction for fast triggering.

  7. Video face recognition against a watch list

    NASA Astrophysics Data System (ADS)

    Abbas, Jehanzeb; Dagli, Charlie K.; Huang, Thomas S.

    2007-10-01

    Due to the recent large increase in video surveillance data, collected in an effort to maintain high security at public places, we need more robust systems to analyze this data and make tasks like face recognition a realistic possibility in challenging environments. In this paper we explore a watch-list scenario in which we use an appearance-based model to classify query faces from low-resolution videos as either watch-list or non-watch-list faces, where the watch-list comprises the people we are interested in recognizing. We then use our simple yet powerful face recognition system to recognize the faces classified as watch-list faces. Our system uses simple feature machine algorithms from our previous work to match video faces against still images. To test our approach, we match video faces against a large database of still images obtained, in previous work in the field, from Yahoo News over a period of time. We do this matching in an efficient manner to arrive at a faster, nearly real-time system. This system can be incorporated into a larger surveillance system equipped with advanced algorithms for anomalous event detection and activity recognition. This is a step towards more secure and robust surveillance systems and efficient video data analysis.

  8. Recognition Accuracy Using 3D Endoscopic Images for Superficial Gastrointestinal Cancer: A Crossover Study

    PubMed Central

    Kaise, Mitsuru; Kikuchi, Daisuke; Iizuka, Toshiro; Fukuma, Yumiko; Kuribayashi, Yasutaka; Tanaka, Masami; Toba, Takahito; Furuhata, Tsukasa; Yamashita, Satoshi; Matsui, Akira; Mitani, Toshifumi; Hoteya, Shu

    2016-01-01

    Aim. To determine whether 3D endoscopic images improved recognition accuracy for superficial gastrointestinal cancer compared with 2D images. Methods. We created an image catalog using 2D and 3D images of 20 specimens resected by endoscopic submucosal dissection. The twelve participants were allocated into two groups. Group 1 evaluated only 2D images at first, group 2 evaluated 3D images, and, after an interval of 2 weeks, group 1 next evaluated 3D and group 2 evaluated 2D images. The evaluation items were as follows: (1) diagnostic accuracy of the tumor extent and (2) confidence levels in assessing (a) tumor extent, (b) morphology, (c) microsurface structure, and (d) comprehensive recognition. Results. The use of 3D images resulted in an improvement in diagnostic accuracy in both group 1 (2D: 76.9%, 3D: 78.6%) and group 2 (2D: 79.9%, 3D: 83.6%), with no statistically significant difference. The confidence levels were higher for all items ((a) to (d)) when 3D images were used. With respect to experience, the degree of the improvement showed the following trend: novices > trainees > experts. Conclusions. By conversion into 3D images, there was a significant improvement in the diagnostic confidence level for superficial tumors, and the improvement was greater in individuals with lower endoscopic expertise. PMID:27597863

  9. Recognition Accuracy Using 3D Endoscopic Images for Superficial Gastrointestinal Cancer: A Crossover Study.

    PubMed

    Nomura, Kosuke; Kaise, Mitsuru; Kikuchi, Daisuke; Iizuka, Toshiro; Fukuma, Yumiko; Kuribayashi, Yasutaka; Tanaka, Masami; Toba, Takahito; Furuhata, Tsukasa; Yamashita, Satoshi; Matsui, Akira; Mitani, Toshifumi; Hoteya, Shu

    2016-01-01

    Aim. To determine whether 3D endoscopic images improved recognition accuracy for superficial gastrointestinal cancer compared with 2D images. Methods. We created an image catalog using 2D and 3D images of 20 specimens resected by endoscopic submucosal dissection. The twelve participants were allocated into two groups. Group 1 evaluated only 2D images at first, group 2 evaluated 3D images, and, after an interval of 2 weeks, group 1 next evaluated 3D and group 2 evaluated 2D images. The evaluation items were as follows: (1) diagnostic accuracy of the tumor extent and (2) confidence levels in assessing (a) tumor extent, (b) morphology, (c) microsurface structure, and (d) comprehensive recognition. Results. The use of 3D images resulted in an improvement in diagnostic accuracy in both group 1 (2D: 76.9%, 3D: 78.6%) and group 2 (2D: 79.9%, 3D: 83.6%), with no statistically significant difference. The confidence levels were higher for all items ((a) to (d)) when 3D images were used. With respect to experience, the degree of the improvement showed the following trend: novices > trainees > experts. Conclusions. By conversion into 3D images, there was a significant improvement in the diagnostic confidence level for superficial tumors, and the improvement was greater in individuals with lower endoscopic expertise. PMID:27597863

  10. FaceID: A face detection and recognition system

    SciTech Connect

    Shah, M.B.; Rao, N.S.V.; Olman, V.; Uberbacher, E.C.; Mann, R.C.

    1996-12-31

    A face detection system that automatically locates faces in gray-level images is described, along with a system which matches a given face image with faces in a database. Face detection in an image is performed by template matching, using templates derived from a selected set of normalized faces. Instead of the original gray-level images, vertical gradient images are calculated and used to make the system more robust against variations in lighting conditions and skin color. Faces of different sizes are detected by processing the image at several scales. Further, a coarse-to-fine strategy is used to speed up the processing, and a combination of whole-face and face-component templates is used to ensure low false detection rates. The input to the face recognition system is a normalized vertical gradient image of a face, which is compared against a database using a set of pretrained feedforward neural networks with a winner-take-all fuser. Training is performed using an adaptation of the backpropagation algorithm. The system has been developed and tested using images from the FERET database and sets of images obtained from Rowley et al. and from Sung and Poggio.
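
    The illumination-robustness of the vertical-gradient preprocessing can be seen in a small sketch. The correlation score below is a stand-in for the system's multi-scale template matching; the helper names and test images are illustrative.

```python
import numpy as np

def vertical_gradient(img):
    """Row-wise difference; discards slowly varying illumination offsets
    while keeping edge structure, as in the FaceID preprocessing."""
    g = np.zeros_like(img, dtype=float)
    g[1:] = img[1:] - img[:-1]
    return g

def match_score(image, template):
    """Normalised correlation between two gradient images of equal size
    (a stand-in for sliding-window template matching over scales)."""
    a = vertical_gradient(image).ravel()
    b = vertical_gradient(template).ravel()
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(2)
face_img = rng.uniform(0, 255, size=(16, 16))
brighter = face_img + 40.0   # global illumination change
other = rng.uniform(0, 255, size=(16, 16))
```

A uniform brightness shift leaves the gradient image unchanged, so the match score is unaffected, while an unrelated image scores far lower.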

  11. Pseudo-Gabor wavelet for face recognition

    NASA Astrophysics Data System (ADS)

    Xie, Xudong; Liu, Wentao; Lam, Kin-Man

    2013-04-01

    An efficient face-recognition algorithm is proposed, which not only possesses the advantages of linear subspace analysis approaches, such as low computational complexity, but also offers the high recognition performance of wavelet-based algorithms. Based on the linearity of the Gabor-wavelet transformation and some basic assumptions about face images, we can extract pseudo-Gabor features from face images without performing any complex Gabor-wavelet transformations. The computational complexity can therefore be reduced while a high recognition performance is still maintained by using the principal component analysis (PCA) method. The proposed algorithm is evaluated on the Yale database, the Caltech database, the ORL database, the AR database, and the Facial Recognition Technology database, and is compared with several other face recognition methods, such as PCA, Gabor wavelets plus PCA, kernel PCA, locality preserving projection, and dual-tree complex wavelet transformation plus PCA. Experiments show that consistent and promising results are obtained.

  12. Gender recognition based on face geometric features

    NASA Astrophysics Data System (ADS)

    Xiao, Jie; Guo, Zhaoli; Cai, Chao

    2013-10-01

    Automatic gender recognition based on face images plays an important role in computer vision and machine vision. In this paper, a novel and simple gender recognition method based on face geometric features is proposed. The method is divided into three steps. First, a pre-processing step provides standard face images for feature extraction. Second, an Active Shape Model (ASM) is used to extract geometric features from frontal face images. Third, an Adaboost classifier is chosen to separate the two classes (male and female). We tested the method on 2570 pictures (1420 males and 1150 females) downloaded from the internet, and encouraging results were obtained. A comparison of the proposed geometric-feature-based method with a full-facial-image-based method demonstrates its superiority.

  13. Face Recognition by Independent Component Analysis

    PubMed Central

    Bartlett, Marian Stewart; Movellan, Javier R.; Sejnowski, Terrence J.

    2010-01-01

    A number of current face recognition algorithms use face representations found by unsupervised statistical methods. Typically these methods find a set of basis images and represent faces as a linear combination of those images. Principal component analysis (PCA) is a popular example of such methods. The basis images found by PCA depend only on pairwise relationships between pixels in the image database. In a task such as face recognition, in which important information may be contained in the high-order relationships among pixels, it seems reasonable to expect that better basis images may be found by methods sensitive to these high-order statistics. Independent component analysis (ICA), a generalization of PCA, is one such method. We used a version of ICA derived from the principle of optimal information transfer through sigmoidal neurons. ICA was performed on face images in the FERET database under two different architectures, one which treated the images as random variables and the pixels as outcomes, and a second which treated the pixels as random variables and the images as outcomes. The first architecture found spatially local basis images for the faces. The second architecture produced a factorial face code. Both ICA representations were superior to representations based on PCA for recognizing faces across days and changes in expression. A classifier that combined the two ICA representations gave the best performance. PMID:18244540

  14. Reliable Gait Recognition Using 3D Reconstructions and Random Forests - An Anthropometric Approach.

    PubMed

    Sandau, Martin; Heimbürger, Rikke V; Jensen, Karl E; Moeslund, Thomas B; Aanaes, Henrik; Alkjaer, Tine; Simonsen, Erik B

    2016-05-01

    Photogrammetric measurement of bodily dimensions and analysis of gait patterns in CCTV are important tools in forensic investigations, but accurate extraction of the measurements is challenging. This study tested whether manual annotation of the joint centers on 3D reconstructions could provide reliable recognition. Sixteen participants performed normal walking while 3D reconstructions were obtained continually. Segment lengths and kinematics of the extremities were manually extracted by eight expert observers. The results showed that all the participants were recognized, assuming the same expert annotated the data. Recognition based on data annotated by different experts was less reliable, achieving 72.6% correct recognitions, as some parameters were heavily affected by interobserver variability. This study verified that 3D reconstructions are feasible for forensic gait analysis as an improved alternative to conventional CCTV. However, further studies are needed to account for the use of different clothing, field conditions, etc. PMID:27122399

  15. Recognition Memory for Male and Female Faces.

    ERIC Educational Resources Information Center

    Yarmey, A. Daniel

    Sex differences in memory for human faces are reviewed. It is found that the research evidence to date is not conclusive, but where differences exist they favor female superiority over males in facial memory. In particular, evidence is cited to suggest that females are reliably superior to males in their recognition memory for other females. This is…

  16. Deep learning and face recognition: the state of the art

    NASA Astrophysics Data System (ADS)

    Balaban, Stephen

    2015-05-01

    Deep Neural Networks (DNNs) have established themselves as a dominant technique in machine learning. DNNs have been top performers on a wide variety of tasks including image classification, speech recognition, and face recognition [1-3]. Convolutional neural networks (CNNs) have been used in nearly all of the top performing methods on the Labeled Faces in the Wild (LFW) dataset [3-6]. In this talk and accompanying paper, I attempt to provide a review and summary of the deep learning techniques used in the state-of-the-art. In addition, I highlight the need for both larger and more challenging public datasets to benchmark these systems. Despite the ability of DNNs and autoencoders to perform unsupervised feature learning, modern facial recognition pipelines still require domain-specific engineering in the form of re-alignment. For example, in Facebook's recent DeepFace paper, a 3D "frontalization" step lies at the beginning of the pipeline. This step creates a 3D face model for the incoming image and then uses a series of affine transformations of the fiducial points to "frontalize" the image. This step enables the DeepFace system to use a neural network architecture with locally connected layers without weight sharing, as opposed to standard convolutional layers [6]. Deep learning techniques combined with large datasets have allowed research groups to surpass human-level performance on the LFW dataset [3, 5]. The high accuracy (99.63% for FaceNet at the time of publishing) and utilization of outside data (hundreds of millions of images in the case of Google's FaceNet) suggest that current face verification benchmarks such as LFW may not be challenging enough, nor provide enough data, for current techniques [3, 5]. There exist a variety of organizations with mobile photo sharing applications that would be capable of releasing a very large scale and highly diverse dataset of facial images captured on mobile devices. Such an "ImageNet for Face Recognition" would likely receive a warm

  17. Multi-view indoor human behavior recognition based on 3D skeleton

    NASA Astrophysics Data System (ADS)

    Peng, Ling; Lu, Tongwei; Min, Feng

    2015-12-01

    To address the problems caused by viewpoint changes in activity recognition, a multi-view indoor human behavior recognition method based on a 3D skeleton is presented. First, Microsoft's Kinect device is used to capture body motion video from frontal, oblique, and side viewpoints. Second, skeletal joints are extracted to obtain global body features together with local features of the arms and legs, forming the 3D skeletal feature set. Third, online dictionary learning is applied to the feature set to reduce the feature dimension. Finally, a linear support vector machine (LSVM) is used to obtain the behavior recognition results. The experimental results show that this method achieves a better recognition rate.

  18. Face recognition using transform domain texture features

    NASA Astrophysics Data System (ADS)

    Rangaswamy, Y.; S K, Ramya; Raja, K. B.; K. R., Venugopal; Patnaik, L. M.

    2013-12-01

    Face recognition is an effective biometric technique for identifying a person. In this paper, we propose Face Recognition using Transform Domain Texture Features (FRTDTF). The face images are preprocessed and two sets of texture features are extracted. For the first feature set, the Discrete Wavelet Transform (DWT) is applied to the face image, and only the high-frequency sub-band coefficients are considered, to extract edge information efficiently. The Dual-Tree Complex Wavelet Transform (DTCWT) is applied to the high-frequency sub-bands of the DWT to derive low- and high-frequency DTCWT coefficients. The texture features of the DTCWT coefficients are computed using the Overlapping Local Binary Pattern (OLBP) to generate feature set 1. For the second feature set, the DTCWT is applied to the preprocessed face image, and all frequency sub-band coefficients are considered, to extract significant information and edge information from the face image. The texture features of the DTCWT matrix are computed using OLBP to generate feature set 2. The final feature set is the concatenation of feature sets 1 and 2. The Euclidean distance (ED) is used to compare the test image features with the features of the face images in the database. It is observed that the performance parameter values are better for the proposed algorithm than for existing algorithms.

  19. Interactive Cosmetic Makeup of a 3D Point-Based Face Model

    NASA Astrophysics Data System (ADS)

    Kim, Jeong-Sik; Choi, Soo-Mi

    We present an interactive system for cosmetic makeup of a point-based face model acquired by 3D scanners. We first enhance the texture of a face model in 3D space using low-pass Gaussian filtering, median filtering, and histogram equalization. The user is provided with a stereoscopic display and haptic feedback, and can perform simulated makeup tasks including the application of foundation, color makeup, and lip gloss. Fast rendering is achieved by processing surfels using the GPU, and we use a BSP tree data structure and a dynamic local refinement of the facial surface to provide interactive haptics. We have implemented a prototype system and evaluated its performance.

  20. Face recognition with L1-norm subspaces

    NASA Astrophysics Data System (ADS)

    Maritato, Federica; Liu, Ying; Colonnese, Stefania; Pados, Dimitris A.

    2016-05-01

    We consider the problem of representing individual faces by maximum L1-norm projection subspaces calculated from available face-image ensembles. In contrast to conventional L2-norm subspaces, L1-norm subspaces are seen to offer significant robustness to image variations, disturbances, and rank selection. Face recognition becomes then the problem of associating a new unknown face image to the "closest," in some sense, L1 subspace in the database. In this work, we also introduce the concept of adaptively allocating the available number of principal components to different face image classes, subject to a given total number/budget of principal components. Experimental studies included in this paper illustrate and support the theoretical developments.
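
    The maximum-L1-projection direction mentioned above can be computed, for one component, with a known fixed-point iteration, w <- normalize(X * sign(X^T w)). This is a generic sketch of L1-subspace computation, not the paper's full recognition pipeline; multi-component deflation, the face data, and the class-assignment step are omitted, and the toy data is illustrative.

```python
import numpy as np

def l1_pc(X, iters=100, seed=0):
    """First maximum-L1-projection component of X (samples as columns,
    assumed zero-mean) via the fixed-point iteration
    w <- normalize(X sign(X^T w)). Tie-breaking is simplistic."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        s = np.sign(X.T @ w)
        s[s == 0] = 1          # break ties toward +1
        w_new = X @ s
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):
            break
        w = w_new
    return w

# Samples (columns) lie mostly along the first axis, with one
# off-axis point; the L1 direction should stay near that axis.
X = np.array([[4.0, -4.0, 3.0, -3.0, 0.0],
              [0.0, 0.0, 0.2, -0.2, 1.0]])
w = l1_pc(X)
```

Recognition then reduces to computing the residual of a query face against each subject's L1 subspace and picking the smallest.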

  1. Biometric watermarking based on face recognition

    NASA Astrophysics Data System (ADS)

    Satonaka, Takami

    2002-04-01

    We describe a biometric watermarking procedure based on object recognition for accurate facial signature authentication. An adaptive metric learning algorithm incorporating watermark and facial signatures is introduced to separate an arbitrary pattern of unknown intruder classes from that of known true-user classes. The verification rule for multiple signatures is formulated to map a facial signature pattern in the overlapping classes to a separable disjoint one. The watermark signature, which is uniquely assigned to each face image, reduces the uncertainty of modeling missing facial signature patterns of the unknown intruder classes. The proposed adaptive metric learning algorithm improves the recognition error rate from 2.4% to 0.07% on the ORL database, which is better than previously reported numbers using the Karhunen-Loeve transform, convolutional networks, and the hidden Markov model. Face recognition facilitates generation and distribution of the watermark key. The watermarking approach focuses on using salient facial features to make watermark signatures robust to various attacks and transformations. A coarse-to-fine approach is presented to integrate pyramidal face detection, geometry analysis, and face segmentation for watermarking. We conclude with an assessment of the strengths and weaknesses of the chosen approach as well as possible improvements to the biometric watermarking system.

  2. Face Pose Recognition Based on Monocular Digital Imagery and Stereo-Based Estimation of its Precision

    NASA Astrophysics Data System (ADS)

    Gorbatsevich, V.; Vizilter, Yu.; Knyaz, V.; Zheltov, S.

    2014-06-01

    A technique for automated face detection and pose estimation using a single image is developed. The algorithm includes face detection, facial feature localization, face/background segmentation, face pose estimation, and image transformation to a frontal view. Automatic face/background segmentation is performed by an original graph-cut technique based on detected feature points. The precision of face orientation estimation based on monocular digital imagery is addressed. The approach to precision estimation is based on a comparison of synthesized 2D facial images with a scanned 3D face model. Software for modelling and measurement was developed, and a special system for non-contact measurements was created; the required set of real 3D face models and colour facial textures was obtained using this system. The precision estimation results demonstrate that the precision of face pose estimation is sufficient for subsequent successful face recognition.

  3. Double linear regression classification for face recognition

    NASA Astrophysics Data System (ADS)

    Feng, Qingxiang; Zhu, Qi; Tang, Lin-Lin; Pan, Jeng-Shyang

    2015-02-01

    A new classifier built on the linear regression classification (LRC) classifier and the simple-fast representation-based classifier (SFR), named the double linear regression classification (DLRC) classifier, is proposed for image recognition in this paper. The traditional LRC classifier uses only the distance between the test image vector and the predicted image vector of each class subspace for classification, while the SFR classifier uses the test image vector and the nearest image vector of each class subspace to classify the test sample. The DLRC classifier, in contrast, computes the predicted image vectors of each class subspace and uses all the predicted vectors to construct a novel robust global space; it then uses this global space to obtain new predicted vectors of each class for classification. Extensive experiments on the AR, JAFFE, Yale, Extended YaleB, and PIE face databases are used to evaluate the performance of the proposed classifier. The experimental results show that the proposed classifier achieves a better recognition rate than the LRC classifier, the SFR classifier, and several other classifiers.
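
    The core LRC step the abstract builds on can be sketched in a few lines: regress the test vector onto each class's training images and pick the class with the smallest residual. This is a minimal illustration of plain LRC only (DLRC's global space of predicted vectors is omitted), with toy vectors in place of face images.

```python
import numpy as np

def lrc_predict(class_subspaces, y):
    """Linear Regression Classification: project test vector y onto each
    class subspace (columns = that class's training image vectors) and
    return the class whose predicted vector is closest to y."""
    best_class, best_dist = -1, np.inf
    for c, X in enumerate(class_subspaces):      # X: (d, n_c)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        dist = np.linalg.norm(y - X @ beta)      # residual to predicted vector
        if dist < best_dist:
            best_class, best_dist = c, dist
    return best_class

# toy data: y lies exactly in class 1's subspace, so its residual is ~0
rng = np.random.default_rng(0)
X0 = rng.normal(size=(20, 3))
X1 = rng.normal(size=(20, 3))
y = X1 @ np.array([1.0, -2.0, 0.5])
print(lrc_predict([X0, X1], y))                  # → 1
```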

  4. Gender-Based Prototype Formation in Face Recognition

    ERIC Educational Resources Information Center

    Baudouin, Jean-Yves; Brochard, Renaud

    2011-01-01

    The role of gender categories in prototype formation during face recognition was investigated in 2 experiments. The participants were asked to learn individual faces and then to recognize them. During recognition, individual faces were mixed with blended faces of the same or different genders. The results of the 2 experiments showed…

  5. Super-resolution benefit for face recognition

    NASA Astrophysics Data System (ADS)

    Hu, Shuowen; Maschal, Robert; Young, S. Susan; Hong, Tsai Hong; Phillips, Jonathon P.

    2011-06-01

    Vast amounts of video footage are being continuously acquired by surveillance systems on private premises, commercial properties, government compounds, and military installations. Facial recognition systems have the potential to identify suspicious individuals on law enforcement watchlists, but accuracy is severely hampered by the low resolution of typical surveillance footage and the far distance of suspects from the cameras. To improve accuracy, super-resolution can enhance suspect details by utilizing a sequence of low resolution frames from the surveillance footage to reconstruct a higher resolution image for input into the facial recognition system. This work measures the improvement of face recognition with super-resolution in a realistic surveillance scenario. Low resolution and super-resolved query sets are generated using a video database at different eye-to-eye distances corresponding to different distances of subjects from the camera. Performance of a face recognition algorithm using the super-resolved and baseline query sets was calculated by matching against galleries consisting of frontal mug shots. The results show that super-resolution improves performance significantly at the examined mid and close ranges.

  6. Face recognition: a model specific ability.

    PubMed

    Wilmer, Jeremy B; Germine, Laura T; Nakayama, Ken

    2014-01-01

    In our everyday lives, we view it as a matter of course that different people are good at different things. It can be surprising, in this context, to learn that most of what is known about cognitive ability variation across individuals concerns the broadest of all cognitive abilities; an ability referred to as general intelligence, general mental ability, or just g. In contrast, our knowledge of specific abilities, those that correlate little with g, is severely constrained. Here, we draw upon our experience investigating an exceptionally specific ability, face recognition, to make the case that many specific abilities could easily have been missed. In making this case, we derive key insights from earlier false starts in the measurement of face recognition's variation across individuals, and we highlight the convergence of factors that enabled the recent discovery that this variation is specific. We propose that the case of face recognition ability illustrates a set of tools and perspectives that could accelerate fruitful work on specific cognitive abilities. By revealing relatively independent dimensions of human ability, such work would enhance our capacity to understand the uniqueness of individual minds. PMID:25346673

  7. Block error correction codes for face recognition

    NASA Astrophysics Data System (ADS)

    Hussein, Wafaa R.; Sellahewa, Harin; Jassim, Sabah A.

    2011-06-01

    Face recognition is one of the most desirable biometric-based authentication schemes for controlling access to sensitive information/locations and as a proof of identity to claim entitlement to services. The aim of this paper is to develop block-based mechanisms to reduce recognition errors that result from varying illumination conditions, with emphasis on using error correction codes. We investigate the modelling of error patterns in different parts/blocks of face images caused by differences in illumination conditions, and we use appropriate error correction codes to deal with the corresponding distortion. We test the performance of our proposed schemes using the Extended Yale-B Face Database, which consists of face images belonging to 5 illumination subsets depending on the direction of the light source relative to the camera. In our experiments each image is divided into three horizontal regions: region 1, the three rows above the eyebrows, the eyebrows, and the eyes; region 2, the nose; and region 3, the mouth and chin. By estimating statistical parameters for the errors in each region we select suitable BCH error correction codes that yield improved recognition accuracy for that particular region in comparison to applying error correction codes to the entire image. A Discrete Wavelet Transform (DWT) to a depth of 3 is used for face feature extraction, followed by global/local binarization of the coefficients in each subband. We demonstrate that the use of BCH improves separation of the distribution of Hamming distances of client-client samples from the distribution of Hamming distances of imposter-client samples.
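
    The pipeline just described (binarize DWT features, then correct illumination-induced bit errors before Hamming-distance matching) can be illustrated with a toy stand-in. Real BCH encoding is more involved, so this sketch substitutes a simple 3x repetition code with majority-vote decoding; the feature bits are invented for illustration.

```python
import numpy as np

def encode_rep(bits, r=3):
    """Repeat each feature bit r times (toy stand-in for a BCH encoder)."""
    return np.repeat(bits, r)

def decode_rep(code, r=3):
    """Majority vote per r-bit group corrects up to (r - 1) // 2 flips."""
    return (code.reshape(-1, r).sum(axis=1) > r // 2).astype(int)

def hamming(a, b):
    return int(np.sum(a != b))

feat = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # toy binarized DWT features
code = encode_rep(feat)
noisy = code.copy()
noisy[[0, 7, 13]] ^= 1                       # illumination-induced bit errors
print(hamming(feat, decode_rep(noisy)))      # → 0, all single errors corrected
```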

  8. Face and body recognition show similar improvement during childhood.

    PubMed

    Bank, Samantha; Rhodes, Gillian; Read, Ainsley; Jeffery, Linda

    2015-09-01

    Adults are proficient in extracting identity cues from faces. This proficiency develops slowly during childhood, with performance not reaching adult levels until adolescence. Bodies are similar to faces in that they convey identity cues and rely on specialized perceptual mechanisms. However, it is currently unclear whether body recognition mirrors the slow development of face recognition during childhood. Recent evidence suggests that body recognition develops faster than face recognition. Here we measured body and face recognition in 6- and 10-year-old children and adults to determine whether these two skills show different amounts of improvement during childhood. We found no evidence that they do. Face and body recognition showed similar improvement with age, and children, like adults, were better at recognizing faces than bodies. These results suggest that the mechanisms of face and body memory mature at a similar rate or that improvement of more general cognitive and perceptual skills underlies improvement of both face and body recognition. PMID:25909913

  9. Face recognition using 4-PSK joint transform correlation

    NASA Astrophysics Data System (ADS)

    Moniruzzaman, Md.; Alam, Mohammad S.

    2016-04-01

    This paper presents an efficient phase-encoded and 4-phase shift keying (PSK)-based fringe-adjusted joint transform correlation (FJTC) technique for face recognition applications. The proposed technique uses phase encoding and a 4-channel phase shifting method on the reference image, which can be pre-calculated without affecting the system processing speed. The 4-channel PSK step eliminates the unwanted zero-order term and the autocorrelation among multiple similar input-scene objects while yielding enhanced cross-correlation output. For each channel, discrete wavelet decomposition preprocessing is used to accommodate the impact of various 3D facial expressions, noise, and illumination variations. The performance of the proposed technique has been tested on image datasets such as Yale and Extended Yale B under different environments, including illumination variation and 3D changes in facial expression. The test results show that the proposed technique yields significantly better performance than existing JTC-based face recognition techniques.
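
    The workhorse behind any JTC-style recognizer is locating a correlation peak in the frequency domain. The sketch below shows plain FFT-based cross-correlation on a toy scene; the paper's fringe-adjusted filtering, phase encoding, and 4-channel PSK refinements are omitted.

```python
import numpy as np

def corr_peak(scene, ref):
    """Frequency-domain cross-correlation: multiply the scene spectrum by
    the conjugate reference spectrum and inverse-transform; the location
    of the correlation peak gives the target position."""
    S = np.fft.fft2(scene)
    R = np.fft.fft2(ref, s=scene.shape)          # zero-pad ref to scene size
    c = np.abs(np.fft.ifft2(S * np.conj(R)))
    return np.unravel_index(np.argmax(c), c.shape)

scene = np.zeros((32, 32))
ref = np.array([[1., 2.], [3., 4.]])
scene[10:12, 5:7] = ref                          # target embedded at (10, 5)
print(corr_peak(scene, ref))                     # → (10, 5)
```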

  10. 3D comparison of average faces in subjects with oral clefts.

    PubMed

    Bugaighis, Iman; Tiddeman, Bernard; Mattick, Claire R; Hobson, Ross

    2014-08-01

    This prospective, cross-sectional, case-controlled morphometric study assessed three-dimensional (3D) facial morphological differences between the average faces of 103 children aged 8-12 years: 40 with unilateral cleft lip and palate (UCLP), 23 with unilateral cleft lip and alveolus (UCLA), 19 with bilateral cleft lip and palate (BCLP), 21 with isolated cleft palate (ICP), and 80 gender- and age-matched controls. 3D stereophotogrammetric facial scans were recorded for each participant at rest. Thirty-nine landmarks were digitized for each scan, and x-, y-, z-coordinates for each landmark were extracted. A 3D photorealistic average face was constructed for each participating group, and subjective and objective comparisons were carried out between each cleft group's average face and that of the controls. Marked differences were observed between all groups. The most severely affected were the groups in which the lip and palate were affected and repaired (UCLP and UCLA). The group with midsagittal palatal deformity and repair (ICP) was the most similar to the control group. The results revealed that 3D shape analysis allows morphometric discrimination between subjects with craniofacial anomalies and controls, and underlines the potential value of statistical shape analysis in assessing the outcomes of cleft lip and palate surgery and orthodontic treatment. PMID:23172581
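
    Building an average face from digitized landmarks amounts to aligning the landmark sets and averaging them. A minimal sketch of Procrustes-style averaging (landmarks only, no texture; reflection handling omitted) with an invented toy shape:

```python
import numpy as np

def procrustes_align(X, ref):
    """Align landmark set X (n x 3) to a centered, unit-norm reference by
    removing translation and scale, then solving for the best rotation."""
    Xc = X - X.mean(axis=0)
    Xc = Xc / np.linalg.norm(Xc)
    U, _, Vt = np.linalg.svd(Xc.T @ ref)         # orthogonal Procrustes
    return Xc @ (U @ Vt)

def average_shape(shapes):
    """Average all landmark sets after aligning them to the first one."""
    ref = shapes[0] - shapes[0].mean(axis=0)
    ref = ref / np.linalg.norm(ref)
    return np.mean([procrustes_align(s, ref) for s in shapes], axis=0)

# toy "face": a triangle, plus a rotated, scaled, shifted copy of itself
tri = np.array([[0., 0., 0.], [1., 0., 0.], [0., 2., 0.]])
th = 0.7
Rz = np.array([[np.cos(th), -np.sin(th), 0.],
               [np.sin(th),  np.cos(th), 0.],
               [0., 0., 1.]])
copies = [tri, 3.0 * tri @ Rz.T + 1.0]
avg = average_shape(copies)                      # recovers the common shape
```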

  11. A new method of 3D scene recognition from still images

    NASA Astrophysics Data System (ADS)

    Zheng, Li-ming; Wang, Xing-song

    2014-04-01

    Most methods of monocular visual three-dimensional (3D) scene recognition involve supervised machine learning. However, these methods often rely on prior knowledge: they learn the image scene from a training dataset. For this reason, when the sampling equipment or scene is changed, monocular visual 3D scene recognition may fail. To cope with this problem, a new unsupervised learning method for monocular visual 3D scene recognition is proposed here. First, the image is partitioned by superpixel segmentation based on the CIELAB color space values L, a, and b and on the coordinate values x and y of pixels, forming a superpixel image with a specific density. Second, a spectral clustering algorithm based on the superpixels' color characteristics and neighboring relationships is used to reduce the dimensions of the superpixel image. Third, fuzzy distribution density functions representing sky, ground, and façade are multiplied with the segment pixels to obtain the expectations of these segments, generating a preliminary classification of sky, ground, and façade. Fourth, more accurate classification images of sky, ground, and façade are extracted through tier-1 wavelet sampling and the Manhattan direction feature. Finally, a depth perception map is generated based on the pinhole imaging model and the linear perspective information of the ground surface. Here, 400 images of Make3D Image data from the Cornell University website were used to test the algorithm. The experimental results showed that this unsupervised learning method provides a more effective monocular visual 3D scene recognition model than other methods.
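
    The second step, spectral clustering of superpixels, can be illustrated on a toy affinity graph: two groups of mutually similar superpixels joined by one weak edge are split by thresholding the Fiedler vector of the normalized graph Laplacian. The weights below are invented.

```python
import numpy as np

def spectral_bipartition(W):
    """Split a weighted superpixel-adjacency graph W into two groups using
    the sign of the Fiedler vector of the normalized graph Laplacian."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
    _, vecs = np.linalg.eigh(L)                  # eigenvalues ascending
    fiedler = vecs[:, 1]                         # 2nd-smallest eigenvalue
    return (fiedler > 0).astype(int)

# two tight similarity cliques (e.g. sky vs facade) with one weak link
W = np.array([[0.0, 5.0, 5.0, 0.1, 0.0, 0.0],
              [5.0, 0.0, 5.0, 0.0, 0.0, 0.0],
              [5.0, 5.0, 0.0, 0.0, 0.0, 0.0],
              [0.1, 0.0, 0.0, 0.0, 5.0, 5.0],
              [0.0, 0.0, 0.0, 5.0, 0.0, 5.0],
              [0.0, 0.0, 0.0, 5.0, 5.0, 0.0]])
labels = spectral_bipartition(W)
print(labels)                                    # nodes 0-2 vs nodes 3-5
```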

  12. 3-D Human Action Recognition by Shape Analysis of Motion Trajectories on Riemannian Manifold.

    PubMed

    Devanne, Maxime; Wannous, Hazem; Berretti, Stefano; Pala, Pietro; Daoudi, Mohamed; Del Bimbo, Alberto

    2015-07-01

    Recognizing human actions in 3-D video sequences is an important open problem that is currently at the heart of many research domains including surveillance, natural interfaces and rehabilitation. However, the design and development of models for action recognition that are both accurate and efficient is a challenging task due to the variability of the human pose, clothing and appearance. In this paper, we propose a new framework to extract a compact representation of a human action captured through a depth sensor, and enable accurate action recognition. The proposed solution builds on fitting a human skeleton model to the acquired data so as to represent the 3-D coordinates of the joints and their change over time as a trajectory in a suitable action space. Thanks to such a 3-D joint-based framework, the proposed solution is capable of capturing both the shape and the dynamics of the human body simultaneously. The action recognition problem is then formulated as the problem of computing the similarity between the shape of trajectories in a Riemannian manifold. Classification using k-nearest neighbors is finally performed on this manifold, taking advantage of Riemannian geometry in the open curve shape space. Experiments are carried out on four representative benchmarks to demonstrate the potential of the proposed solution in terms of accuracy/latency for low-latency action recognition. Comparative results with state-of-the-art methods are reported. PMID:25216492
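
    As a rough illustration of trajectory-based action matching (not the paper's elastic Riemannian shape metric), the sketch below resamples joint trajectories to a common length, removes translation, and classifies with a nearest-neighbor rule on plain Euclidean distance; the toy "actions" are synthetic curves.

```python
import numpy as np

def resample(traj, n=20):
    """Linearly resample a (T x d) joint trajectory to n samples."""
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n)
    return np.stack([np.interp(t_new, t_old, traj[:, k])
                     for k in range(traj.shape[1])], axis=1)

def traj_dist(a, b):
    """Euclidean distance after resampling and translation removal."""
    a, b = resample(a), resample(b)
    a -= a.mean(axis=0)
    b -= b.mean(axis=0)
    return np.linalg.norm(a - b)

def nn_classify(query, gallery):
    """1-NN over a gallery of (trajectory, label) pairs."""
    return min(gallery, key=lambda g: traj_dist(query, g[0]))[1]

t = np.linspace(0.0, 1.0, 30)[:, None]
raise_arm = np.hstack([t, t ** 2])               # toy 2-D joint trajectories
wave = np.hstack([np.sin(6 * t), t])
gallery = [(raise_arm, "raise"), (wave, "wave")]
query = raise_arm[::2] + 0.01                    # subsampled, shifted copy
print(nn_classify(query, gallery))               # → raise
```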

  13. A moving mesh algorithm for 3-D regional groundwater flow with water table and seepage face

    NASA Astrophysics Data System (ADS)

    Knupp, P.

    A numerical algorithm is described for solving the free-surface groundwater flow equations in 3-D large-scale unconfined aquifers with strongly heterogeneous conductivity and surface recharge. The algorithm uses a moving mesh to track the water-table as it evolves according to kinematic and seepage face boundary conditions. Both steady-state and transient algorithms are implemented in the SECO-Flow 3-D code and demonstrated on stratigraphy based on the Delaware Basin of south-eastern New Mexico.

  14. A Two-Stage Framework for 3D Face Reconstruction from RGBD Images.

    PubMed

    Wang, Kangkan; Wang, Xianwang; Pan, Zhigeng; Liu, Kai

    2014-08-01

    This paper proposes a new approach for 3D face reconstruction with RGBD images from an inexpensive commodity sensor. The challenges we face are: 1) substantial random noise and corruption are present in low-resolution depth maps; and 2) there is a high degree of variability in pose and facial expression. We develop a novel two-stage algorithm that effectively maps low-quality depth maps to realistic face models. Each stage is targeted toward a certain type of noise. The first stage extracts sparse errors from depth patches through data-driven local sparse coding, while the second stage smooths noise on the boundaries between patches and reconstructs the global shape by combining local shapes using our template-based surface refinement. Our approach does not require any markers or user interaction. We perform quantitative and qualitative evaluations on both synthetic and real test sets. Experimental results show that the proposed approach is able to produce high-resolution 3D face models with high accuracy, even when inputs are of low quality and have large variations in viewpoint and facial expression. PMID:26353333

  15. 3D video analysis of the novel object recognition test in rats.

    PubMed

    Matsumoto, Jumpei; Uehara, Takashi; Urakawa, Susumu; Takamura, Yusaku; Sumiyoshi, Tomiki; Suzuki, Michio; Ono, Taketoshi; Nishijo, Hisao

    2014-10-01

    The novel object recognition (NOR) test has been widely used to test memory function. We developed a 3D computerized video analysis system that estimates nose contact with an object in Long Evans rats to analyze object exploration during NOR tests. The results indicate that the 3D system reproducibly and accurately scores the NOR test. Furthermore, the 3D system captures a 3D trajectory of the nose during object exploration, enabling detailed analyses of spatiotemporal patterns of object exploration. The 3D trajectory analysis revealed a specific pattern of object exploration in the sample phase of the NOR test: normal rats first explored the lower parts of objects and then gradually explored the upper parts. A systematic injection of MK-801 suppressed changes in these exploration patterns. The results, along with those of previous studies, suggest that the changes in the exploration patterns reflect neophobia to a novel object and/or changes from spatial learning to object learning. These results demonstrate that the 3D tracking system is useful not only for detailed scoring of animal behaviors but also for investigation of characteristic spatiotemporal patterns of object exploration. The system has the potential to facilitate future investigation of neural mechanisms underlying object exploration that result from dynamic and complex brain activity. PMID:24991752
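
    The abstract does not spell out its scoring formula, but NOR results are conventionally summarized with a discrimination index over exploration times, which is easy to state:

```python
def discrimination_index(t_novel, t_familiar):
    """Conventional NOR discrimination index: (novel - familiar) exploration
    time over total exploration time. Positive values indicate memory for
    the familiar object (preference for the novel one)."""
    total = t_novel + t_familiar
    return (t_novel - t_familiar) / total if total else 0.0

# e.g. 30 s exploring the novel object vs 10 s on the familiar one
print(discrimination_index(30.0, 10.0))          # → 0.5
```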

  16. 3D Imaging for hand gesture recognition: Exploring the software-hardware interaction of current technologies

    NASA Astrophysics Data System (ADS)

    Periverzov, Frol; Ilieş, Horea T.

    2012-09-01

    Interaction with 3D information is one of the fundamental and most familiar tasks in virtually all areas of engineering and science. Several recent technological advances pave the way for developing hand gesture recognition capabilities available to all, which will lead to more intuitive and efficient 3D user interfaces (3DUI). These developments can unlock new levels of expression and productivity in all activities concerned with the creation and manipulation of virtual 3D shapes and, specifically, in engineering design. Building fully automated systems for tracking and interpreting hand gestures requires robust and efficient 3D imaging techniques as well as potent shape classifiers. We survey and explore current and emerging 3D imaging technologies, and focus, in particular, on those that can be used to build interfaces between the users' hands and the machine. The purpose of this paper is to categorize and highlight the relevant differences between these existing 3D imaging approaches in terms of the nature of the information provided, output data format, as well as the specific conditions under which these approaches yield reliable data. Furthermore we explore the impact of each of these approaches on the computational cost and reliability of the required image processing algorithms. Finally we highlight the main challenges and opportunities in developing natural user interfaces based on hand gestures, and conclude with some promising directions for future research.

  18. Uniformly spaced 3D modeling of human face from two images using parallel particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng

    2011-09-01

    This paper proposes a scheme for finding the correspondence between uniformly spaced locations on images of a human face captured from different viewpoints at the same instant. The correspondence is dedicated to 3D reconstruction for the registration procedure in neurosurgery, where exposure to projectors must be seriously restricted. The approach utilizes structured light to enhance patterns on the images and is initialized with the scale-invariant feature transform (SIFT). Successive locations are found according to spatial order using a parallel version of the particle swarm optimization algorithm. Furthermore, false locations are singled out for correction by searching for outliers from fitted curves. Case studies show that the scheme is able to correctly generate 456 evenly spaced 3D coordinate points in 23 seconds from a single shot of a projected human face on a PC with a 2.66 GHz Intel Q9400 CPU and 4 GB RAM.
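
    The paper runs a parallel particle swarm optimizer; a minimal serial PSO, shown here on an invented quadratic objective rather than the correspondence cost, conveys the update rule:

```python
import numpy as np

def pso(f, dim, n=30, iters=100, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal (serial) particle swarm optimization: each particle is pulled
    toward its personal best and the swarm's global best, with inertia w."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))            # positions
    v = np.zeros((n, dim))                       # velocities
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()         # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g

best = pso(lambda p: np.sum((p - 1.2) ** 2), dim=2)
print(best)                                      # near [1.2, 1.2]
```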

  19. FaceWarehouse: a 3D facial expression database for visual computing.

    PubMed

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image. PMID:24434222
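
    The bilinear model described here reconstructs a face by contracting the rank-3 core tensor with an identity weight vector and an expression weight vector. A sketch with invented, tiny dimensions (real models have thousands of vertices and many more modes):

```python
import numpy as np

# Toy bilinear face model: core tensor C of shape
# (flattened vertex coords, identity modes, expression modes).
rng = np.random.default_rng(1)
C = rng.normal(size=(9, 4, 3))                   # 3 vertices x (x, y, z)
w_id = np.array([0.7, 0.3, 0.0, 0.0])            # identity weights
w_expr = np.array([0.2, 0.8, 0.0])               # expression weights

# Contract the core tensor with both attribute vectors to get the mesh.
verts = np.einsum('vie,i,e->v', C, w_id, w_expr)
print(verts.shape)                               # → (9,)
```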

  20. FaceWarehouse: A 3D Facial Expression Database for Visual Computing.

    PubMed

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2013-10-25

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Microsoft's Kinect system to capture 150 individuals from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions. For every raw data record, a set of facial feature points on the color image such as eye corners and mouth contour are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-three tensor to build a bilinear face model with two attributes, identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image. PMID:24166613

  1. Interactive and Stereoscopic Hybrid 3D Viewer of Radar Data with Gesture Recognition

    NASA Astrophysics Data System (ADS)

    Goenetxea, Jon; Moreno, Aitor; Unzueta, Luis; Galdós, Andoni; Segura, Álvaro

    This work presents an interactive and stereoscopic 3D viewer of weather information coming from a Doppler radar. The hybrid system shows a GIS model of the regional zone where the radar is located and the corresponding reconstructed 3D volume weather data. To enhance the immersiveness of the navigation, stereoscopic visualization has been added to the viewer, using a polarized-glasses-based system. The user can interact with the 3D virtual world using a Nintendo Wiimote for navigating through it and a Nintendo Wii Nunchuk for giving commands by means of hand gestures. We also present a dynamic gesture recognition procedure that measures the temporal advance of the performed gesture postures. Experimental results show how dynamic gestures are effectively recognized, so that more natural interaction and immersive navigation in the virtual world are achieved.
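
    Gesture classifiers of this kind must tolerate differences in execution speed. The paper measures the temporal advance of gesture postures; a common alternative with the same goal, shown here on invented 1-D posture sequences, is dynamic time warping:

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two 1-D posture sequences,
    tolerant to differences in gesture execution speed."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

swipe = [0, 1, 2, 3, 4]
slow_swipe = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]      # same gesture, half speed
shake = [0, 2, 0, 2, 0]
print(dtw(swipe, slow_swipe))                    # → 0.0
print(dtw(swipe, shake) > dtw(swipe, slow_swipe))  # → True
```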

  2. 3D automatic anatomy recognition based on iterative graph-cut-ASM

    NASA Astrophysics Data System (ADS)

    Chen, Xinjian; Udupa, Jayaram K.; Bagci, Ulas; Alavi, Abass; Torigian, Drew A.

    2010-02-01

    We call the computerized assistive process of recognizing, delineating, and quantifying organs and tissue regions in medical imaging, occurring automatically during clinical image interpretation, automatic anatomy recognition (AAR). The AAR system we are developing includes five main parts: model building, object recognition, object delineation, pathology detection, and organ system quantification. In this paper, we focus on the delineation part. For the modeling part, we employ the active shape model (ASM) strategy. For recognition and delineation, we integrate several hybrid strategies combining purely image-based methods with ASM. In this paper, an iterative Graph-Cut ASM (IGCASM) method is proposed for object delineation. An algorithm called GC-ASM, which attempted to synergistically combine ASM and GC, was presented at this symposium last year for object delineation in 2D images. Here, we extend this method to 3D medical image delineation. The IGCASM method effectively combines the rich statistical shape information embodied in ASM with the globally optimal delineation capability of the GC method. We propose a new GC cost function, which effectively integrates the specific image information with the ASM shape model information. The proposed methods are tested on a clinical abdominal CT data set. The preliminary results show that: (a) it is feasible to explicitly bring prior 3D statistical shape information into the GC framework; (b) the 3D IGCASM delineation method improves on ASM and GC and can provide practical operational times on clinical images.

  3. A Global Hypothesis Verification Framework for 3D Object Recognition in Clutter.

    PubMed

    Aldoma, Aitor; Tombari, Federico; Stefano, Luigi Di; Vincze, Markus

    2016-07-01

    Pipelines to recognize 3D objects despite clutter and occlusions usually end up with a final verification stage whereby recognition hypotheses are validated or dismissed based on how well they explain sensor measurements. Unlike previous work, we propose a Global Hypothesis Verification (GHV) approach which regards all hypotheses jointly so as to account for mutual interactions. GHV provides a principled framework to tackle the complexity of our visual world by leveraging a plurality of recognition paradigms and cues. Accordingly, we present a 3D object recognition pipeline deploying both global and local 3D features as well as shape and color. Thereby, and facilitated by the robustness of the verification process, diverse object hypotheses can be gathered and weak hypotheses need not be suppressed too early to trade sensitivity for specificity. Experiments demonstrate the effectiveness of our proposal, which significantly improves over the state of the art and attains ideal performance (no false negatives, no false positives) on three of the six most relevant and challenging benchmark datasets. PMID:26485476

  4. A 3D approach for object recognition in illuminated scenes with adaptive correlation filters

    NASA Astrophysics Data System (ADS)

    Picos, Kenia; Díaz-Ramírez, Víctor H.

    2015-09-01

    In this paper we solve the problem of pose recognition of a 3D object in non-uniformly illuminated and noisy scenes. The recognition system employs a bank of space-variant correlation filters constructed with an adaptive approach based on local statistical parameters of the input scene. The position and orientation of the target are estimated with the help of the filter bank. For an observed input frame, the algorithm computes the correlation process between the observed image and the bank of filters using a combination of data and task parallelism by taking advantage of a graphics processing unit (GPU) architecture. The pose of the target is estimated by finding the template that better matches the current view of target within the scene. The performance of the proposed system is evaluated in terms of recognition accuracy, location and orientation errors, and computational performance.
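
    At its core, pose estimation with a filter bank picks the template that best matches the current view. A serial, toy-scale sketch using normalized cross-correlation scores (the paper's space-variant adaptive filters and GPU parallelism are omitted):

```python
import numpy as np

def estimate_pose(scene_patch, filter_bank):
    """Return the pose label of the template that correlates best
    (normalized cross-correlation) with the observed patch."""
    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return float((a * b).mean())
    scores = {pose: ncc(scene_patch, tmpl)
              for pose, tmpl in filter_bank.items()}
    return max(scores, key=scores.get)

# toy "views" of a target at three orientations (binary half-plane masks)
yy, xx = np.mgrid[0:16, 0:16]
bank = {0: (xx < 8).astype(float),
        45: ((xx + yy) < 16).astype(float),
        90: (yy < 8).astype(float)}
observed = bank[45] + 0.05 * np.random.default_rng(2).normal(size=(16, 16))
print(estimate_pose(observed, bank))             # → 45
```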

  5. The Significance of Hair for Face Recognition

    PubMed Central

    Toseeb, Umar; Keeble, David R. T.; Bryant, Eleanor J.

    2012-01-01

    Hair is a feature of the head that frequently changes in different situations. For this reason much research in the area of face perception has employed stimuli without hair. To investigate the effect of the presence of hair we used faces with and without hair in a recognition task. Participants took part in trials in which the state of the hair either remained consistent (Same) or switched between learning and test (Switch). It was found that in the Same trials performance did not differ for stimuli presented with and without hair. This implies that there is sufficient information in the internal features of the face for optimal performance in this task. It was also found that performance in the Switch trials was substantially lower than in the Same trials. This drop in accuracy when the stimuli were switched suggests that faces are represented in a holistic manner and that manipulation of the hair causes disruption to this, with implications for the interpretation of some previous studies. PMID:22461902

  6. Individual differences in holistic processing predict face recognition ability.

    PubMed

    Wang, Ruosi; Li, Jingguang; Fang, Huizhen; Tian, Moqian; Liu, Jia

    2012-02-01

    Why do some people recognize faces easily and others frequently make mistakes in recognizing faces? Classic behavioral work has shown that faces are processed in a distinctive holistic manner that is unlike the processing of objects. In the study reported here, we investigated whether individual differences in holistic face processing have a significant influence on face recognition. We found that the magnitude of face-specific recognition accuracy correlated with the extent to which participants processed faces holistically, as indexed by the composite-face effect and the whole-part effect. This association is due to face-specific processing in particular, not to a more general aspect of cognitive processing, such as general intelligence or global attention. This finding provides constraints on computational models of face recognition and may elucidate mechanisms underlying cognitive disorders, such as prosopagnosia and autism, that are associated with deficits in face recognition. PMID:22222218

  7. Realistic texture extraction for 3D face models robust to self-occlusion

    NASA Astrophysics Data System (ADS)

    Qu, Chengchao; Monari, Eduardo; Schuchert, Tobias; Beyerer, Jürgen

    2015-02-01

    In the context of face modeling, probably the most well-known approach to representing 3D faces is the 3D Morphable Model (3DMM). When a 3DMM is fitted to a 2D image, the shape as well as the texture and illumination parameters are simultaneously estimated. However, if the real facial texture is needed, texture extraction from the 2D image is necessary. This paper addresses the problems in texture extraction from a single image caused by self-occlusion. Unlike common approaches that leverage the symmetry of the face by mirroring the visible facial part, which is sensitive to inhomogeneous illumination, this work first generates a virtual texture map for the skin area iteratively by averaging the color of neighboring vertices. Although this step creates unrealistic, overly smoothed texture, illumination stays consistent between the real and virtual texture. In the second pass, the mirrored texture is gradually blended with the real or generated texture according to visibility. This scheme ensures gentle handling of illumination and yet yields realistic texture. Because the blending affects only non-informative areas, the main facial features retain a unique appearance in each face half. Evaluation results reveal realistic rendering in novel poses, robust to challenging illumination conditions and small registration errors.

  8. 3D shape descriptors for face segmentation and fiducial points detection: an anatomical-based analysis

    NASA Astrophysics Data System (ADS)

    Salazar, Augusto E.; Cerón, Alexander; Prieto, Flavio A.

    2011-03-01

    The behavior of nine 3D shape descriptors computed on the surface of 3D face models is studied. The set of descriptors includes six curvature-based ones, SPIN images, Folded SPIN images, and Fingerprints. Instead of defining clusters of vertices based on the value of a given primitive surface feature, a face template composed of 28 anatomical regions is used to segment the models and to extract the locations of different landmarks and fiducial points. Vertices are grouped by region, by region boundaries, and by subsampled versions of both. The aim of this study is to analyze the discriminant capacity of each descriptor to characterize regions and to identify key points on the facial surface. The experiment includes testing with data from neutral faces and faces showing expressions. Also, in order to assess the usefulness of the bending-invariant canonical form (BICF) in handling variations due to facial expressions, the descriptors are computed both directly from the surface and from its BICF. The values, distributions, and relevance indexes of each set of vertices were analyzed.

  9. Nonparametric discriminant analysis for face recognition.

    PubMed

    Li, Zhifeng; Lin, Dahua; Tang, Xiaoou

    2009-04-01

    In this paper, we develop a new framework for face recognition based on nonparametric discriminant analysis (NDA) and multi-classifier integration. Traditional LDA-based methods suffer from a fundamental limitation originating from the parametric nature of scatter matrices, which rest on a Gaussian distribution assumption; the performance of these methods notably degrades when the actual distribution is non-Gaussian. To address this problem, we propose a new formulation of scatter matrices to extend two-class nonparametric discriminant analysis to multi-class cases. We then develop two improved multi-class NDA-based algorithms (NSA and NFA), each with two complementary variants based on the principal space and the null space of the intra-class scatter matrix, respectively. Compared to NSA, NFA is more effective in exploiting classification-boundary information. To exploit the complementary nature of the two kinds of NFA (PNFA and NNFA), we finally develop a dual NFA-based multi-classifier fusion framework that employs the overcomplete Gabor representation to boost recognition performance. We show the improvements of the new algorithms over traditional subspace methods through comparative experiments on two challenging face databases, the Purdue AR database and the XM2VTS database. PMID:19229090

  10. Towards Robust Face Recognition from Video

    SciTech Connect

    Price, JR

    2001-10-18

    A novel, template-based method for face recognition is presented. The goals of the proposed method are to integrate multiple observations for improved robustness and to provide auxiliary confidence data for subsequent use in an automated video surveillance system. The proposed framework consists of a parallel system of classifiers, referred to as observers, where each observer is trained on one face region. The observer outputs are combined to yield the final recognition result. Three of the four confounding factors--expression, illumination, and decoration--are specifically addressed in this paper. The extension of the proposed approach to address the fourth confounding factor--pose--is straightforward and well supported in previous work. A further contribution of the proposed approach is the computation of a revealing confidence measure. This confidence measure will aid the subsequent application of the proposed method to video surveillance scenarios. Results are reported for a database comprising 676 images of 160 subjects under a variety of challenging circumstances. These results indicate significant performance improvements over previous methods and demonstrate the usefulness of the confidence data.

  11. The fast and accurate 3D-face scanning technology based on laser triangle sensors

    NASA Astrophysics Data System (ADS)

    Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Chen, Yang; Kong, Bin

    2013-08-01

    A laser triangle scanning method and the structure of a 3D-face measurement system are introduced. In the presented system, a line laser source serves as the optical probe, so that one profile line is scanned at a time, and a CCD image sensor captures the image of the laser line as modulated by the human face. The system parameters were obtained by calibration: the lens parameters of the imaging section were calibrated with a machine-vision method, and the triangulation geometry was calibrated using finely arranged parallel wires. The CCD sensor and line laser are mounted on a linear motorized carriage, which scans the laser line from the top of the head to the neck. Because the nose protrudes and the eye sockets are recessed, a single CCD sensor cannot capture the complete laser line; in this system, two CCD sensors are therefore placed symmetrically on either side of the laser, effectively forming two laser triangulation units. A further novel design uses three laser indicators to reduce the scanning time, since it is difficult for a person to remain still for long. The 3D data are computed after scanning, and further data processing includes 3D coordinate refinement, mesh generation, and surface rendering. Experiments show that the system has a simple structure, high scanning speed, and good accuracy; the scanning range covers the whole head of an adult, and the typical resolution is 0.5 mm.
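The depth recovery behind such a scanner is classic active triangulation: the camera ray through the imaged laser spot is intersected with the known laser ray. A minimal planar sketch under an assumed geometry (all parameter names are hypothetical, not taken from the paper):

```python
import math

def triangulate_depth(u, f, b, alpha):
    """Depth of a laser spot from its image coordinate (planar model).

    Assumed geometry: the camera sits at the origin looking along +z with
    focal length f (in pixels); the laser exits from (b, 0) tilted by alpha
    (radians) toward the optical axis, so its ray is x = b - z*tan(alpha).
    The camera ray through pixel u is x = z*u/f; intersecting the two rays
    and solving for z gives the depth.
    """
    return b / (u / f + math.tan(alpha))
```

A real system solves the same intersection per pixel along the whole laser line, after calibrating f, b, and alpha; the two symmetric cameras simply provide two such solutions so occluded parts of the line are still covered.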

  12. [Face recognition in patients with autism spectrum disorders].

    PubMed

    Kita, Yosuke; Inagaki, Masumi

    2012-07-01

    The present study aimed to review previous research on face recognition in patients with autism spectrum disorders (ASD). Face recognition is a key question in the ASD research field because it can provide clues for elucidating the neural substrates responsible for the social impairment of these patients. Historically, behavioral studies have reported low performance and/or unique strategies of face recognition among ASD patients. However, the performance and strategies of ASD patients can be comparable to those of control groups, depending on the experimental situation or developmental stage, suggesting that face recognition in ASD patients is not entirely impaired. Recent brain function studies, including event-related potential and functional magnetic resonance imaging studies, have investigated the cognitive process of face recognition in ASD patients and revealed impaired function in the brain's neural network comprising the fusiform gyrus and amygdala. This impaired function is potentially involved in the diminished preference for faces and in the atypical development of face recognition, eliciting the unstable behavioral characteristics seen in these patients. Additionally, face recognition in ASD patients has been examined from other perspectives, namely self-face recognition and facial emotion recognition. While the former topic is intimately linked to basic social abilities such as self-other discrimination, the latter is closely associated with mentalizing. Further research on face recognition in ASD patients should investigate the connection between behavioral and neurological specifics in these patients, considering developmental changes and the spectrum nature of the clinical condition of ASD. PMID:22764354

  13. 3D imaging by serial block face scanning electron microscopy for materials science using ultramicrotomy.

    PubMed

    Hashimoto, Teruo; Thompson, George E; Zhou, Xiaorong; Withers, Philip J

    2016-04-01

    Mechanical serial block face scanning electron microscopy (SBFSEM) has emerged as a means of obtaining three dimensional (3D) electron images over volumes much larger than possible by focused ion beam (FIB) serial sectioning and at higher spatial resolution than achievable with conventional X-ray computed tomography (CT). Such high resolution 3D electron images can be employed for precisely determining the shape, volume fraction, distribution and connectivity of important microstructural features. While soft (fixed or frozen) biological samples are particularly well suited for nanoscale sectioning using an ultramicrotome, the technique can also produce excellent 3D images at electron microscope resolution in a time and resource-efficient manner for engineering materials. Currently, a lack of appreciation of the capabilities of ultramicrotomy and the operational challenges associated with minimising artefacts for different materials is limiting its wider application to engineering materials. Consequently, this paper outlines the current state of the art for SBFSEM examining in detail how damage is introduced during slicing and highlighting strategies for minimising such damage. A particular focus of the study is the acquisition of 3D images for a variety of metallic and coated systems. PMID:26855205

  14. Direct Gaze Modulates Face Recognition in Young Infants

    ERIC Educational Resources Information Center

    Farroni, Teresa; Massaccesi, Stefano; Menon, Enrica; Johnson, Mark H.

    2007-01-01

    From birth, infants prefer to look at faces that engage them in direct eye contact. In adults, direct gaze is known to modulate the processing of faces, including the recognition of individuals. In the present study, we investigate whether direction of gaze has any effect on face recognition in four-month-old infants. Four-month infants were shown…

  15. Robust and Blind 3D Mesh Watermarking in Spatial Domain Based on Faces Categorization and Sorting

    NASA Astrophysics Data System (ADS)

    Molaei, Amir Masoud; Ebrahimnezhad, Hossein; Sedaaghi, Mohammad Hossein

    2016-06-01

    In this paper, a 3D watermarking algorithm in the spatial domain with blind detection is presented. The proposed method introduces only negligible visual distortion into the host model. Initially, preprocessing is applied to the 3D model to make it robust against geometric-transformation attacks. Then, a number of triangle faces are selected as mark triangles using a novel systematic approach in which faces are robustly categorized and sorted. To improve watermark recovery after attacks, block watermarks are encoded with a Reed-Solomon block error-correcting code before being embedded into the mark triangles. Next, the encoded watermarks are embedded in spherical coordinates. The proposed method is robust against additive noise, mesh smoothing, and quantization attacks, as well as against geometric transformation and vertex- and face-reordering attacks. Moreover, the algorithm is designed to be robust against the cropping attack. Simulation results confirm that the watermarked models exhibit very low distortion if the control parameters are selected properly. Comparison with other methods demonstrates that the proposed method performs well against mesh-smoothing attacks.

  16. Survey on RGB, 3D, Thermal, and Multimodal Approaches for Facial Expression Recognition: History, Trends, and Affect-Related Applications.

    PubMed

    Corneanu, Ciprian Adrian; Simon, Marc Oliu; Cohn, Jeffrey F; Guerrero, Sergio Escalera

    2016-08-01

    Facial expressions are an important way through which humans interact socially. Building a system capable of automatically recognizing facial expressions from images and video has been an intense field of study in recent years. Interpreting such expressions remains challenging, and much research is needed about the way they relate to human affect. This paper presents a general overview of automatic RGB, 3D, thermal, and multimodal facial expression analysis. We define a new taxonomy for the field, encompassing all steps from face detection to facial expression recognition, and describe and classify the state-of-the-art methods accordingly. We also present the important datasets and the benchmarking of the most influential methods. We conclude with a general discussion about trends, important questions, and future lines of research. PMID:26761193

  17. Prediction of 3D chip formation in the facing cutting with lathe machine using FEM

    NASA Astrophysics Data System (ADS)

    Prasetyo, Yudhi; Tauviqirrahman, Mohamad; Rusnaldy

    2016-04-01

    This paper presents the prediction of chip formation during machining on a lathe, focusing specifically on facing cuts (face turning). The main purpose is to propose a new approach to predicting chip formation under varying cutting directions, i.e., the backward and forward directions. In addition, the interaction between stress analysis and chip formation in the cutting process was also investigated. The simulations were conducted using the three-dimensional (3D) finite element method in ABAQUS, with aluminum as the workpiece material and high-speed steel (HSS) as the tool material. The simulation results showed that the chip produced in the backward direction exhibits better formation than that produced in the conventional (forward) direction.

  18. Familiar Person Recognition: Is Autonoetic Consciousness More Likely to Accompany Face Recognition Than Voice Recognition?

    NASA Astrophysics Data System (ADS)

    Barsics, Catherine; Brédart, Serge

    2010-11-01

    Autonoetic consciousness is a fundamental property of human memory, enabling us to experience mental time travel, to recollect past events with a feeling of self-involvement, and to project ourselves into the future. Autonoetic consciousness is a characteristic of episodic memory. By contrast, awareness of the past associated with a mere feeling of familiarity or knowing relies on noetic consciousness, which depends on semantic memory integrity. The present research aimed to evaluate whether conscious recollection of episodic memories is more likely to occur following the recognition of a familiar face than following the recognition of a familiar voice. Recall of semantic (biographical) information was also assessed. Previous studies that investigated the recall of biographical information following person recognition used faces and voices of famous people as stimuli. In this study, the participants were presented with personally familiar people's voices and faces, thus avoiding the presence of identity cues in the spoken extracts and allowing stricter control of exposure frequency for both types of stimuli (voices and faces). In the present study, the rate of retrieved episodic memories associated with autonoetic awareness was significantly higher for familiar faces than for familiar voices, even though the level of overall recognition was similar for both stimulus domains. The same pattern was observed for semantic information retrieval. These results and their implications for current Interactive Activation and Competition person recognition models are discussed.

  19. Image preprocessing study on KPCA-based face recognition

    NASA Astrophysics Data System (ADS)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural, and convenient advantages, has attracted more and more attention. This paper investigates a face recognition system comprising face detection, feature extraction, and recognition, mainly by examining the related theory and key technology of various preprocessing methods in the face detection process and, using the KPCA method, by focusing on the recognition results obtained under different preprocessing methods. We choose the YCbCr color space for skin segmentation and integral projection for face location. Face images are preprocessed with erosion and dilation (opening and closing operations) and with illumination compensation, and a face recognition method based on kernel principal component analysis is then applied; the experiments were carried out on a typical face database, with the algorithms implemented on the MATLAB platform. Experimental results show that the kernel extension of the PCA algorithm, being a nonlinear feature extraction method, makes the extracted features represent the original image information better under certain conditions and can thus achieve a higher recognition rate. In the image preprocessing stage, we found that different operations on the images can produce different results, and hence different recognition rates in the recognition stage. At the same time, in kernel principal component analysis, the degree of the polynomial kernel function can affect the recognition result.
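As a rough illustration of the recognition stage, kernel PCA with a polynomial kernel can be sketched in a few lines of NumPy: center the kernel matrix in feature space, take the leading eigenvectors, and project. The polynomial degree is the knob the abstract notes can affect the recognition result. This is the generic textbook formulation, not the authors' code:

```python
import numpy as np

def kernel_pca(X, n_components=2, degree=3):
    """Project the rows of X (samples) onto the top kernel principal
    components of a polynomial kernel (x.y + 1)**degree."""
    K = (X @ X.T + 1.0) ** degree                      # polynomial kernel matrix
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one         # center in feature space
    vals, vecs = np.linalg.eigh(Kc)                    # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]        # keep the largest ones
    vals, vecs = vals[idx], vecs[:, idx]
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))   # normalize expansion coefficients
    return Kc @ alphas                                 # projections of the training samples
```

In a recognition pipeline, the projected feature vectors (rather than raw pixels) are what gets compared or fed to a classifier.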

  20. Ball-scale based hierarchical multi-object recognition in 3D medical images

    NASA Astrophysics Data System (ADS)

    Bağci, Ulas; Udupa, Jayaram K.; Chen, Xinjian

    2010-03-01

    This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image, so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) a semi-automatic way of constructing a multi-object shape model assembly; (b) a novel strategy of encoding, via b-scale, the pose relationship between objects in the training images and their intensity patterns captured in b-scale images; and (c) a hierarchical mechanism of positioning the model, in a one-shot way, in a given image from knowledge of the learnt pose relationship and the b-scale image of the image to be segmented. The evaluation results on a set of 20 routine clinical abdominal female and male CT data sets indicate the following: (1) incorporating a large number of objects improves the recognition accuracy dramatically; (2) the recognition algorithm can be thought of as a hierarchical framework in which the quick placement of the model assembly constitutes coarse recognition and delineation itself constitutes the finest recognition; (3) scale yields useful information about the relationship between the model assembly and any given image, such that recognition places the model close to the actual pose without any elaborate searches or optimization; and (4) effective object recognition can make delineation most accurate.

  1. Heritability of Face Shape in Twins: A Preliminary Study using 3D Stereophotogrammetry and Geometric Morphometrics

    PubMed Central

    Weinberg, Seth M.; Parsons, Trish E.; Marazita, Mary L.; Maher, Brion S.

    2014-01-01

    Introduction: Previous research suggests that aspects of facial surface morphology are heritable. Traditionally, heritability studies have used a limited set of linear distances to quantify facial morphology and often employ statistical methods poorly designed to deal with biological shape. In this preliminary report, we use a combination of 3D photogrammetry and landmark-based morphometrics to explore which aspects of face shape show the strongest evidence of heritability in a sample of twins. Methods: 3D surface images were obtained from 21 twin pairs (10 monozygotic, 11 same-sex dizygotic). Thirteen 3D landmarks were collected from each facial surface and their coordinates subjected to geometric morphometric analysis. This involved superimposing the individual landmark configurations and then subjecting the resulting shape coordinates to a principal components analysis. The resulting PC scores were then used to calculate rough narrow-sense heritability estimates. Results: Three principal components displayed evidence of moderate to high heritability and were associated with variation in the breadth of orbital and nasal structures, upper lip height and projection, and the vertical and forward projection of the root of the nose due to variation in the position of nasion. Conclusions: Aspects of facial shape, primarily related to variation in length and breadth of central midfacial structures, were shown to demonstrate evidence of strong heritability. An improved understanding of which facial features are under strong genetic control is an important step in the identification of specific genes that underlie normal facial variation. PMID:24501696
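The geometric-morphometric pipeline described above begins by superimposing the landmark configurations (removing translation, scale, and rotation) before the principal components analysis. A minimal NumPy sketch of that ordinary Procrustes alignment step (a generic formulation, not the authors' implementation):

```python
import numpy as np

def procrustes_align(A, B):
    """Superimpose landmark configuration B onto A (ordinary Procrustes):
    center both, scale to unit centroid size, then find the rotation of B
    that best matches A via SVD. Note: this variant may return a
    reflection if that fits better; shape analyses often forbid it."""
    A0 = A - A.mean(axis=0)
    B0 = B - B.mean(axis=0)
    A0 = A0 / np.linalg.norm(A0)           # unit centroid size
    B0 = B0 / np.linalg.norm(B0)
    U, _, Vt = np.linalg.svd(B0.T @ A0)
    R = U @ Vt                             # optimal rotation of B0 onto A0
    return A0, B0 @ R
```

After all configurations are aligned this way (in practice via generalized Procrustes analysis across the whole sample), the flattened shape coordinates go into PCA and the PC scores feed the heritability estimates.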

  2. Novel irregular mesh tagging algorithm for wound synthesis on a 3D face.

    PubMed

    Lee, Sangyong; Chin, Seongah

    2015-01-01

    Recently, advanced visualization techniques in computer graphics have considerably enhanced the visual appearance of synthetic models. To achieve convincing synthetic medical effects, the first step, which precedes rendering, is to attach albedo textures to the region where a given effect is to be rendered. For instance, in order to render wound textures efficiently, the first step is to recognize the area where the user wants to attach a wound. However, in general, face indices are not stored in sequential order, which makes sub-texturing difficult. In this paper, we present a novel mesh tagging algorithm that performs mesh traversal and level extension for the general case of wound sub-texture mapping and selected-region deformation on a three-dimensional (3D) model. This method works automatically on both regular and irregular mesh surfaces. The approach consists of mesh selection (MS), mesh leveling (ML), and mesh tagging (MT). To validate our approach, we performed experiments synthesizing wounds on a 3D face model and on a simulated mesh. PMID:26405904
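The mesh leveling step described above amounts to a breadth-first traversal over the face-adjacency graph, assigning each face a ring level outward from a seed; because it only follows adjacency, it is indifferent to the order in which faces are stored and works on irregular meshes. A minimal sketch (function and parameter names are illustrative, not from the paper):

```python
from collections import deque

def tag_levels(adjacency, seed, max_level):
    """Assign each face reachable from the seed its ring level (BFS
    distance over face adjacency), stopping at max_level. `adjacency`
    maps a face index to the indices of its edge-neighboring faces."""
    levels = {seed: 0}
    queue = deque([seed])
    while queue:
        face = queue.popleft()
        if levels[face] == max_level:
            continue                      # do not extend beyond the last ring
        for nb in adjacency[face]:
            if nb not in levels:
                levels[nb] = levels[face] + 1
                queue.append(nb)
    return levels
```

The tagged level map then delimits the patch that receives the wound sub-texture or the region deformation.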

  3. 3D CARS image reconstruction and pattern recognition on SHG images

    NASA Astrophysics Data System (ADS)

    Medyukhina, Anna; Vogler, Nadine; Latka, Ines; Dietzek, Benjamin; Cicchi, Riccardo; Pavone, Francesco S.; Popp, Jürgen

    2012-06-01

    Nonlinear optical imaging techniques based, e.g., on coherent anti-Stokes Raman scattering (CARS) or second-harmonic generation (SHG) show great potential for in-vivo investigations of tissue. While the microspectroscopic imaging tools are established, automated data evaluation, i.e., image pattern recognition and automated image classification of nonlinear optical images, still bears great possibilities for future developments towards an objective clinical diagnosis. This contribution details the capability of nonlinear microscopy for both 3D visualization of human tissues and automated discrimination between healthy and diseased patterns using ex-vivo human skin samples. By means of CARS image alignment we show how to obtain a quasi-3D model of a skin biopsy, which allows us to trace the tissue structure in different projections. Furthermore, the potential of automated pattern and organization recognition to distinguish between healthy and keloidal skin tissue is discussed. A first classification algorithm employs the intrinsic geometrical features of collagen, which can be efficiently visualized by SHG microscopy. The shape of the collagen pattern allows conclusions about the physiological state of the skin, as the typical wavy collagen structure of healthy skin is disturbed, e.g., in keloid formation. Based on the different collagen patterns, a quantitative score characterizing the collagen waviness - and hence reflecting the physiological state of the tissue - is obtained. Further, two additional scoring methods for collagen organization, based respectively on a statistical analysis of the mutual organization of fibers and on the FFT, are presented.

  4. Face age and sex modulate the other-race effect in face recognition.

    PubMed

    Wallis, Jennifer; Lipp, Ottmar V; Vanman, Eric J

    2012-11-01

    Faces convey a variety of socially relevant cues that have been shown to affect recognition, such as age, sex, and race, but few studies have examined the interactive effect of these cues. White participants of two distinct age groups were presented with faces that differed in race, age, and sex in a face recognition paradigm. Replicating the other-race effect, young participants recognized young own-race faces better than young other-race faces. However, recognition performance did not differ across old faces of different races (Experiments 1, 2A). In addition, participants showed an other-age effect, recognizing White young faces better than White old faces. Sex affected recognition performance only when age was not varied (Experiment 2B). Overall, older participants showed a similar recognition pattern (Experiment 3) as young participants, displaying an other-race effect for young, but not old, faces. However, they recognized young and old White faces on a similar level. These findings indicate that face cues interact to affect recognition performance such that age and sex information reliably modulate the effect of race cues. These results extend accounts of face recognition that explain recognition biases (such as the other-race effect) as a function of dichotomous ingroup/outgroup categorization, in that outgroup characteristics are not simply additive but interactively determine recognition performance. PMID:22933042

  5. Newborns' Face Recognition: Role of Inner and Outer Facial Features

    ERIC Educational Resources Information Center

    Turati, Chiara; Macchi Cassia, Viola; Simion, Francesca; Leo, Irene

    2006-01-01

    Existing data indicate that newborns are able to recognize individual faces, but little is known about what perceptual cues drive this ability. The current study showed that either the inner or outer features of the face can act as sufficient cues for newborns' face recognition (Experiment 1), but the outer part of the face enjoys an advantage…

  6. Neural Substrates for Episodic Encoding and Recognition of Unfamiliar Faces

    ERIC Educational Resources Information Center

    Hofer, Alex; Siedentopf, Christian M.; Ischebeck, Anja; Rettenbacher, Maria A.; Verius, Michael; Golaszewski, Stefan M.; Felber, Stephan; Fleischhacker, W. Wolfgang

    2007-01-01

    Functional MRI was used to investigate brain activation in healthy volunteers during encoding of unfamiliar faces as well as during correct recognition of newly learned faces (CR) compared to correct identification of distractor faces (CF), missed alarms (not recognizing previously presented faces, MA), and false alarms (incorrectly recognizing…

  7. Graph optimized Laplacian eigenmaps for face recognition

    NASA Astrophysics Data System (ADS)

    Dornaika, F.; Assoum, A.; Ruichek, Y.

    2015-01-01

    In recent years, a variety of nonlinear dimensionality reduction (NLDR) techniques have been proposed in the literature. They aim to address the limitations of traditional techniques such as PCA and classical scaling. Most of these techniques assume that the data of interest lie on an embedded nonlinear manifold within the higher-dimensional space. They provide a mapping from the high-dimensional space to the low-dimensional embedding and may be viewed, in the context of machine learning, as a preliminary feature extraction step after which pattern recognition algorithms are applied. Laplacian Eigenmaps (LE) is a nonlinear graph-based dimensionality reduction method that has been successfully applied to practical problems such as face recognition. However, the construction of the LE graph suffers, like other graph-based DR techniques, from the following issues: (1) the neighborhood graph is artificially defined in advance and thus does not necessarily benefit the desired DR task; (2) the graph is built using the nearest-neighbor criterion, which tends to work poorly due to the high dimensionality of the original space; and (3) its computation depends on two parameters whose values are generally difficult to assign, the neighborhood size and the heat-kernel parameter. To address the above-mentioned problems, for the particular case of the LPP method (a linear version of LE), L. Zhang et al. [1] developed a novel DR algorithm whose idea is to integrate graph construction with the specific DR process into a unified framework. This algorithm results in an optimized graph rather than a predefined one.
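For reference, the standard Laplacian Eigenmaps construction the abstract critiques, with its two hard-to-set parameters (neighborhood size k and heat-kernel parameter t), can be sketched in NumPy as follows. This is the generic textbook algorithm, not the optimized-graph method of Zhang et al.:

```python
import numpy as np

def laplacian_eigenmaps(X, k=5, t=1.0, dim=2):
    """Embed the rows of X: build a kNN graph with heat-kernel weights,
    form the symmetric normalized graph Laplacian, and return the
    eigenvectors of its smallest nontrivial eigenvalues."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    n = len(X)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D2[i])[1:k + 1]                 # k nearest neighbors (skip self)
        W[i, nbrs] = np.exp(-D2[i, nbrs] / t)             # heat-kernel weights
    W = np.maximum(W, W.T)                                # symmetrize the graph
    d = W.sum(axis=1)
    Dm = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(n) - Dm @ W @ Dm                           # normalized Laplacian
    vals, vecs = np.linalg.eigh(L)                        # ascending eigenvalues
    return vecs[:, 1:dim + 1]                             # skip the trivial eigenvector
```

Both k (in the argsort line) and t (in the exponential) must be fixed in advance, which is exactly the arbitrariness the optimized-graph approach seeks to remove.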

  8. Face recognition across makeup and plastic surgery from real-world images

    NASA Astrophysics Data System (ADS)

    Moeini, Ali; Faez, Karim; Moeini, Hossein

    2015-09-01

    A feature extraction method is proposed to handle the problem of facial appearance changes, including facial makeup and plastic surgery, in face recognition. To make face recognition robust to such appearance changes, features are extracted from the facial depth, on which makeup and plastic surgery have no effect, and added to facial texture features. Accordingly, a three-dimensional (3-D) face is reconstructed from only a single two-dimensional (2-D) frontal image in real-world scenarios, and the facial depth is then extracted from the reconstructed model. Afterward, the dual-tree complex wavelet transform (DT-CWT) is applied to both the texture and the reconstructed depth images to extract feature vectors. Finally, the final feature vectors, generated by combining the 2-D and 3-D feature vectors, are classified with a support vector machine. Promising results have been achieved for makeup-invariant face recognition on two available image databases (YouTube makeup and virtual makeup) and for plastic-surgery-invariant face recognition on a plastic surgery face database, compared with several state-of-the-art feature extraction methods. Several real-world scenarios are also planned to evaluate the performance of the proposed method on a combination of these three databases with 1102 subjects.

  9. Isolating the Special Component of Face Recognition: Peripheral Identification and a Mooney Face

    ERIC Educational Resources Information Center

    McKone, Elinor

    2004-01-01

    A previous finding argues that, for faces, configural (holistic) processing can operate even in the complete absence of part-based contributions to recognition. Here, this result is confirmed using 2 methods. In both, recognition of inverted faces (parts only) was removed altogether (chance identification of faces in the periphery; no perception…

  10. Familiar Face Recognition in Children with Autism: The Differential Use of Inner and Outer Face Parts

    ERIC Educational Resources Information Center

    Wilson, Rebecca; Pascalis, Olivier; Blades, Mark

    2007-01-01

    We investigated whether children with autistic spectrum disorders (ASD) have a deficit in recognising familiar faces. Children with ASD were given a forced choice familiar face recognition task with three conditions: full faces, inner face parts and outer face parts. Control groups were children with developmental delay (DD) and typically…

  11. Children's Recognition of Unfamiliar Faces: Developments and Determinants.

    ERIC Educational Resources Information Center

    Soppe, H. J. G.

    1986-01-01

    Eight- to 12-year-old primary school children and 13-year-old secondary school children were given a live and photographed face recognition task and several other figural tasks. While scores on most tasks increased with age, face recognition scores were affected by age, decreasing at age 12 (puberty onset). (Author/BB)

  12. Transfer between Pose and Illumination Training in Face Recognition

    ERIC Educational Resources Information Center

    Liu, Chang Hong; Bhuiyan, Md. Al-Amin; Ward, James; Sui, Jie

    2009-01-01

    The relationship between pose and illumination learning in face recognition was examined in a yes-no recognition paradigm. The authors assessed whether pose training can transfer to a new illumination or vice versa. Results show that an extensive level of pose training through a face-name association task was able to generalize to a new…

  13. Recognition of Moving and Static Faces by Young Infants

    ERIC Educational Resources Information Center

    Otsuka, Yumiko; Konishi, Yukuo; Kanazawa, So; Yamaguchi, Masami K.; Abdi, Herve; O'Toole, Alice J.

    2009-01-01

    This study compared 3- to 4-month-olds' recognition of previously unfamiliar faces learned in a moving or a static condition. Infants in the moving condition showed successful recognition with only 30 s familiarization, even when different images of a face were used in the familiarization and test phase (Experiment 1). In contrast, infants in the…

  14. When the face fits: recognition of celebrities from matching and mismatching faces and voices.

    PubMed

    Stevenage, Sarah V; Neil, Greg J; Hamlin, Iain

    2014-01-01

    The results of two experiments are presented in which participants engaged in a face-recognition or a voice-recognition task. The stimuli were face-voice pairs in which the face and voice were co-presented and were either "matched" (same person), "related" (two highly associated people), or "mismatched" (two unrelated people). Analysis in both experiments confirmed that accuracy and confidence in face recognition were consistently high regardless of the identity of the accompanying voice. However, accuracy of voice recognition was increasingly affected as the relationship between the voice and the accompanying face declined. Moreover, confidence in voice recognition remained high for correct responses despite the proportion of these responses declining across conditions. These results converge with existing evidence indicating the vulnerability of voice recognition as a relatively weak signaller of identity, and the results are discussed in the context of a person-recognition framework. PMID:23531227

  15. Street curb recognition in 3d point cloud data using morphological operations

    NASA Astrophysics Data System (ADS)

    Rodríguez-Cuenca, Borja; Concepción Alonso-Rodríguez, María; García-Cortés, Silverio; Ordóñez, Celestino

    2015-04-01

    Accurate and automatic detection of cartographic entities saves a great deal of time and money when creating and updating cartographic databases. The current trend in remote sensing feature extraction is to develop methods that are as automatic as possible, with the aim of obtaining accurate results with the least possible human intervention. Automated curb detection is an important issue in road maintenance, 3D urban modeling, and autonomous navigation. This paper focuses on the semi-automatic recognition of curbs and street boundaries using a 3D point cloud registered by a mobile laser scanner (MLS) system. The work is divided into four steps. First, a coordinate system transformation is carried out, moving from a global coordinate system to a local one. Next, to simplify the calculations involved, the measured point cloud is rasterized by projecting it onto the XY plane, passing from the original 3D data to a 2D image. To determine the location of curbs in the image, image processing techniques such as thresholding and morphological operations are applied. Finally, the upper and lower edges of the curbs are detected by an unsupervised classification algorithm on the curvature and roughness of the points that represent curbs. The proposed method is valid in both straight and curved road sections and is applicable to both laser scanner and stereo vision 3D data, owing to its independence from the scanning geometry. The method has been successfully tested with two datasets measured by different sensors. The first dataset was measured by a TOPCON sensor in the Spanish town of Cudillero; that point cloud comprises more than 6,000,000 points and covers a 400-meter street. The second dataset was measured by a RIEGL sensor in the Austrian town of Horn. 
That point cloud comprises 8,000,000 points and represents a
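
    The rasterization, thresholding and morphological steps described in this abstract can be sketched roughly as follows; the cell size, height-jump threshold and the 3x3 closing are illustrative assumptions, not the authors' actual parameters:

```python
import numpy as np

def rasterize(points, cell=0.1):
    """Project an (N, 3) point cloud onto the XY plane: each grid cell
    stores the maximum Z of the points that fall into it."""
    xy = points[:, :2]
    mn = xy.min(axis=0)
    idx = np.floor((xy - mn) / cell).astype(int)
    h, w = idx.max(axis=0) + 1
    grid = np.full((h, w), -np.inf)
    for (i, j), z in zip(idx, points[:, 2]):
        grid[i, j] = max(grid[i, j], z)
    grid[np.isinf(grid)] = 0.0       # empty cells default to ground
    return grid

def dilate(m):
    """Binary dilation with a 3x3 cross structuring element."""
    out = m.copy()
    out[1:, :] |= m[:-1, :]; out[:-1, :] |= m[1:, :]
    out[:, 1:] |= m[:, :-1]; out[:, :-1] |= m[:, 1:]
    return out

def erode(m):
    """Binary erosion, by duality with dilation."""
    return ~dilate(~m)

def curb_mask(grid, jump=0.05):
    """Threshold height discontinuities, then close small gaps with a
    morphological dilation followed by erosion."""
    gy, gx = np.gradient(grid)
    mask = np.hypot(gx, gy) > jump
    return erode(dilate(mask))
```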

  16. Imaging system for creating 3D block-face cryo-images of whole mice

    NASA Astrophysics Data System (ADS)

    Roy, Debashish; Breen, Michael; Salvado, Olivier; Heinzel, Meredith; McKinley, Eliot; Wilson, David

    2006-03-01

    We developed a cryomicrotome/imaging system that provides high resolution, high sensitivity block-face images of whole mice or excised organs, and applied it to a variety of biological applications. With this cryo-imaging system, we sectioned cryo-preserved tissues at 2-40 μm thickness and acquired high resolution brightfield and fluorescence images with microscopic in-plane resolution (as good as 1.2 μm). Brightfield images of normal and pathological anatomy show exquisite detail, especially in the abdominal cavity. Multi-planar reformatting and 3D renderings allow one to interrogate 3D structures. In this report, we present brightfield images of mouse anatomy, as well as 3D renderings of organs. For the BPK mouse model of polycystic kidney disease, we compared brightfield cryo-images and kidney volumes to MRI. The color images provided greater contrast and resolution of cysts as compared to in vivo MRI. We note that color cryo-images are closer to what a researcher sees in dissection, making the image data easier to interpret. The combination of field of view, depth of field, ultra high resolution and color/fluorescence contrast enables cryo-image volumes to provide details that cannot be found through in vivo imaging or other ex vivo optical imaging approaches. We believe that this novel imaging system will have applications that include identification of mouse phenotypes; characterization of diseases like blood vessel disease, kidney disease, and cancer; assessment of drug and gene therapy delivery and efficacy; and validation of other imaging modalities.

  17. Flow control on a 3D backward facing ramp by pulsed jets

    NASA Astrophysics Data System (ADS)

    Joseph, Pierric; Bortolus, Dorian; Grasso, Francesco

    2014-06-01

    This paper presents an experimental study of flow separation control over a 3D backward facing ramp by means of pulsed jets. Such geometry has been selected to reproduce flow phenomena of interest for the automotive industry. The base flow has been characterised using PIV and pressure measurements. The results show that the classical notchback topology is correctly reproduced. A control system based on magnetic valves has been used to produce the pulsed jets whose properties have been characterised by hot wire anemometry. In order to shed some light on the role of the different parameters affecting the suppression of the slant recirculation area, a parametric study has been carried out by varying the frequency and the momentum coefficient of the jets for several Reynolds numbers.

  18. Face averages enhance user recognition for smartphone security.

    PubMed

    Robertson, David J; Kramer, Robin S S; Burton, A Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average'--a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings. PMID:25807251

  19. Face Averages Enhance User Recognition for Smartphone Security

    PubMed Central

    Robertson, David J.; Kramer, Robin S. S.; Burton, A. Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual’s ‘face-average’ – a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user’s face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings. PMID:25807251

  20. Face Recognition Using ALLE and SIFT for Human Robot Interaction

    NASA Astrophysics Data System (ADS)

    Pan, Yaozhang; Ge, Shuzhi Sam; He, Hongsheng

    Face recognition is a very important aspect of developing human-robot interaction (HRI) for social robots. In this paper, an efficient face recognition algorithm is introduced for building an intelligent robot vision system that recognizes human faces. The dimensionality reduction algorithms locally linear embedding (LLE) and adaptive locally linear embedding (ALLE) are combined with the feature extraction algorithm scale-invariant feature transform (SIFT) to form new methods, called LLE-SIFT and ALLE-SIFT, for finding compact and distinctive descriptors for face images. The new feature descriptors are demonstrated to perform better in face recognition applications than standard SIFT descriptors, which shows that the proposed method is promising for developing robot vision systems for face recognition.

  1. Developmental Commonalities between Object and Face Recognition in Adolescence

    PubMed Central

    Jüttner, Martin; Wakui, Elley; Petters, Dean; Davidoff, Jules

    2016-01-01

    In the visual perception literature, the recognition of faces has often been contrasted with that of non-face objects, in terms of differences with regard to the role of parts, part relations and holistic processing. However, recent evidence from developmental studies has begun to blur this sharp distinction. We review evidence for a protracted development of object recognition that is reminiscent of the well-documented slow maturation observed for faces. The prolonged development manifests itself in a retarded processing of metric part relations as opposed to that of individual parts and offers surprising parallels to developmental accounts of face recognition, even though the interpretation of the data is less clear with regard to holistic processing. We conclude that such results might indicate functional commonalities between the mechanisms underlying the recognition of faces and non-face objects, which are modulated by different task requirements in the two stimulus domains. PMID:27014176

  2. Human face recognition by Euclidean distance and neural network

    NASA Astrophysics Data System (ADS)

    Pornpanomchai, Chomtip; Inkuna, Chittrapol

    2010-02-01

    The aim of this project is to improve on existing human face recognition techniques in order to recognize human faces more precisely and effectively, offering agencies an alternative for their access control systems. To accomplish this, distances between face features are computed and faces are recognized with a neural network. The system uses an image processing pipeline consisting of three major processes: 1) preprocessing, or preparation of the images; 2) feature extraction from images of the eyes, ears, nose and mouth, used to calculate the Euclidean distances between each organ; and 3) face recognition using a neural network. In experiments on a total of 200 images from 100 human faces, the system correctly recognized 96% with an average access time of 3.304 s per image.
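
    The distance-based feature extraction in step 2 can be sketched as below; the landmark set and the nearest-neighbour matcher (standing in for the paper's neural network) are assumptions for illustration:

```python
import numpy as np
from itertools import combinations

# Hypothetical landmark order: left eye, right eye, nose tip, mouth,
# left ear, right ear -- each an (x, y) image coordinate.
def distance_signature(landmarks):
    """Feature vector of all pairwise Euclidean distances, normalized
    by the inter-ocular distance so the signature is scale-invariant."""
    pts = np.asarray(landmarks, dtype=float)
    d = np.array([np.linalg.norm(pts[i] - pts[j])
                  for i, j in combinations(range(len(pts)), 2)])
    return d / d[0]   # d[0] is the eye-to-eye distance

def match(query, gallery):
    """Nearest-neighbour stand-in for the paper's neural-net matcher:
    return the index of the gallery face with the closest signature."""
    q = distance_signature(query)
    sigs = [distance_signature(g) for g in gallery]
    return int(np.argmin([np.linalg.norm(q - s) for s in sigs]))
```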

  3. Neural network techniques for invariant recognition and motion tracking of 3-D objects

    SciTech Connect

    Hwang, J.N.; Tseng, Y.H.

    1995-12-31

    Invariant recognition and motion tracking of 3-D objects under partial object viewing are difficult tasks. In this paper, we introduce a new neural network solution that is robust to noise corruption and partial viewing of objects. This method directly utilizes the acquired range data and requires no feature extraction. In the proposed approach, the object is first parametrically represented by a continuous distance transformation neural network (CDTNN) which is trained by the surface points of the exemplar object. When later presented with the surface points of an unknown object, this parametric representation allows the mismatch information to back-propagate through the CDTNN to gradually determine the best similarity transformation (translation and rotation) of the unknown object. The mismatch can be directly measured in the reconstructed representation domain between the model and the unknown object.

  4. Fusion of 3D laser scanner and depth images for obstacle recognition in mobile applications

    NASA Astrophysics Data System (ADS)

    Budzan, Sebastian; Kasprzyk, Jerzy

    2016-02-01

    The problem of obstacle detection and recognition or, generally, scene mapping is one of the most investigated problems in computer vision, especially in mobile applications. In this paper a fused optical system using depth information with color images gathered from the Microsoft Kinect sensor and 3D laser range scanner data is proposed for obstacle detection and ground estimation in real-time mobile systems. The algorithm consists of feature extraction in the laser range images, processing of the depth information from the Kinect sensor, fusion of the sensor information, and classification of the data into two separate categories: road and obstacle. Exemplary results are presented and it is shown that fusion of information gathered from different sources increases the effectiveness of the obstacle detection in different scenarios, and it can be used successfully for road surface mapping.

  5. An optimal sensing strategy for recognition and localization of 3-D natural quadric objects

    NASA Technical Reports Server (NTRS)

    Lee, Sukhan; Hahn, Hernsoo

    1991-01-01

    An optimal sensing strategy for an optical proximity sensor system engaged in the recognition and localization of 3-D natural quadric objects is presented. The optimal sensing strategy consists of the selection of an optimal beam orientation and the determination of an optimal probing plane that compose an optimal data collection operation known as an optimal probing. The decision of an optimal probing is based on the measure of discrimination power of a cluster of surfaces on a multiple interpretation image (MII), where the measure of discrimination power is defined in terms of a utility function computing the expected number of interpretations that can be pruned out by a probing. An object representation suitable for active sensing based on a surface description vector (SDV) distribution graph and hierarchical tables is presented. Experimental results are shown.

  6. Analyzing the relevance of shape descriptors in automated recognition of facial gestures in 3D images

    NASA Astrophysics Data System (ADS)

    Rodriguez A., Julian S.; Prieto, Flavio

    2013-03-01

    This paper presents and analyzes shape descriptors (DESIRE and Spherical Spin Image) for facial recognition in 3D images. DESIRE is a descriptor composed of depth images, silhouettes and rays extended from a polygonal mesh, whereas the Spherical Spin Image (SSI), associated with a polygonal mesh point, is a 2D histogram built from neighboring points using position information that captures features of the local shape. On a database of facial expressions, the first descriptor achieved average recognition rates of 88.16% with a neural network and 91.11% with a Bayesian classifier; in contrast, the second descriptor achieved on average only 32% and 23.6% with the same classifiers, respectively.

  7. Development of 3D Vertically Integrated Pattern Recognition Associative Memory (VIPRAM)

    SciTech Connect

    Deputch, G.; Hoff, J.; Lipton, R.; Liu, T.; Olsen, J.; Ramberg, E.; Wu, Jin-Yuan; Yarema, R.; Shochet, M.; Tang, F.; Demarteau, M. (Argonne; INFN, Padova)

    2011-04-13

    Many next-generation physics experiments will be characterized by the collection of large quantities of data, taken in rapid succession, from which scientists will have to unravel the underlying physical processes. In most cases, large backgrounds will overwhelm the physics signal. Since the quantity of data that can be stored for later analysis is limited, real-time event selection is imperative to retain the interesting events while rejecting the background. Scaling of current technologies is unlikely to satisfy the scientific needs of future projects, so investments in transformational new technologies need to be made. For example, future particle physics experiments looking for rare processes will have to address the demanding challenges of fast pattern recognition in triggering as detector hit density becomes significantly higher due to the high luminosity required to produce the rare processes. In this proposal, we intend to develop hardware-based technology that significantly advances the state-of-the-art for fast pattern recognition within and outside HEP using the 3D vertical integration technology that has emerged recently in industry. The ultimate physics reach of the LHC experiments will crucially depend on the tracking trigger's ability to help discriminate between interesting rare events and the background. Hardware-based pattern recognition for fast triggering on particle tracks has been successfully used in high-energy physics experiments for some time. The CDF Silicon Vertex Trigger (SVT) at the Fermilab Tevatron is an excellent example. The method used there, developed in the 1990's, is based on algorithms that use a massively parallel associative memory architecture to identify patterns efficiently at high speed. 
However, due to much higher occupancy and event rates at the LHC, and the fact that the LHC detectors have a much larger number of channels in their tracking detectors, there is an enormous challenge in implementing pattern recognition

  8. A method of 3D object recognition and localization in a cloud of points

    NASA Astrophysics Data System (ADS)

    Bielicki, Jerzy; Sitnik, Robert

    2013-12-01

    The method proposed in this article is designed for the analysis of data in the form of a cloud of points taken directly from 3D measurements. It is intended for end-user applications and can be integrated directly with 3D scanning software. The method utilizes locally calculated feature vectors (FVs) in the point cloud data. Recognition is based on comparing the analyzed scene with a reference object library. A global descriptor in the form of a set of spatially distributed FVs is created for each reference model. During the detection process, the correlation of subsets of reference FVs with FVs calculated in the scene is computed. The features used in the algorithm are based on parameters that qualitatively estimate mean and Gaussian curvatures. Replacing differentiation with averaging in the curvature estimation makes the algorithm more resistant to discontinuities and poor quality of the input data. The use of FV subsets makes it possible to detect partially occluded and cluttered objects in the scene, while the additional spatial information keeps the false-positive rate reasonably low.

  9. Hybrid system of optics and computer for 3-D object recognition

    NASA Astrophysics Data System (ADS)

    Li, Qun Z.; Miao, Peng C.; He, Anzhi

    1992-03-01

    In this paper, a hybrid optical-computer system for 3D object recognition is presented. The system consists of a Twyman-Green interferometer, a He-Ne laser, a computer, a TV camera, and an image processor. The structured light produced by the Twyman-Green interferometer is split and illuminates the object from two directions at the same time, forming a moire contour on the object's surface. To avoid unwanted patterns, this moire contour is not used directly. Instead, a TV camera placed on the bisector of the angle between the two illuminating directions captures two groups of deformed fringes on the object's surface. The two groups of deformed fringes are combined with an XOR operation in the digital image processing system controlled by the computer, and the moire fringes are thereby extracted from the complicated environment. The 3D coordinates of points on the object are obtained after the moire fringes are traced, with points belonging to the same fringe assigned the same altitude. The object is described by its projected drawings in three coordinate planes. The projected drawings of known objects are stored in a judgment library, and an object is recognized by querying this library.

  10. The activation of visual face memory and explicit face recognition are delayed in developmental prosopagnosia.

    PubMed

    Parketny, Joanna; Towler, John; Eimer, Martin

    2015-08-01

    Individuals with developmental prosopagnosia (DP) are strongly impaired in recognizing faces, but the causes of this deficit are not well understood. We employed event-related brain potentials (ERPs) to study the time-course of neural processes involved in the recognition of previously unfamiliar faces in DPs and in age-matched control participants with normal face recognition abilities. Faces of different individuals were presented sequentially in one of three possible views, and participants had to detect a specific Target Face ("Joe"). EEG was recorded during task performance to Target Faces, Nontarget Faces, or the participants' Own Face (which had to be ignored). The N250 component was measured as a marker of the match between a seen face and a stored representation in visual face memory. The subsequent P600f was measured as an index of attentional processes associated with the conscious awareness and recognition of a particular face. Target Faces elicited reliable N250 and P600f in the DP group, but both of these components emerged later in DPs than in control participants. This shows that the activation of visual face memory for previously unknown learned faces and the subsequent attentional processing and conscious recognition of these faces are delayed in DP. N250 and P600f components to Own Faces did not differ between the two groups, indicating that the processing of long-term familiar faces is less affected in DP. However, P600f components to Own Faces were absent in two participants with DP who failed to recognize their Own Face during the experiment. These results provide new evidence that face recognition deficits in DP may be linked to a delayed activation of visual face memory and explicit identity recognition mechanisms. PMID:26169316

  11. Impaired processing of self-face recognition in anorexia nervosa.

    PubMed

    Hirot, France; Lesage, Marine; Pedron, Lya; Meyer, Isabelle; Thomas, Pierre; Cottencin, Olivier; Guardia, Dewi

    2016-03-01

    Body image disturbances and massive weight loss are major clinical symptoms of anorexia nervosa (AN). The aim of the present study was to examine the influence of body changes and eating attitudes on self-face recognition ability in AN. Twenty-seven subjects suffering from AN and 27 control participants performed a self-face recognition task (SFRT). During the task, digital morphs between their own face and a gender-matched unfamiliar face were presented in a random sequence. Participants' self-face recognition failures, cognitive flexibility, body concern and eating habits were assessed with the Self-Face Recognition Questionnaire (SFRQ), Trail Making Test (TMT), Body Shape Questionnaire (BSQ) and Eating Disorder Inventory-2 (EDI-2), respectively. Subjects suffering from AN exhibited significantly greater difficulties than control participants in identifying their own face (p = 0.028). No significant difference was observed between the two groups for TMT (all p > 0.1, non-significant). Regarding predictors of self-face recognition skills, there was a negative correlation between SFRT and body mass index (p = 0.01) and a positive correlation between SFRQ and EDI-2 (p < 0.001) or BSQ (p < 0.001). Among factors involved, nutritional status and intensity of eating disorders could play a part in impaired self-face recognition. PMID:26420298

  12. The effect of distraction on face and voice recognition.

    PubMed

    Stevenage, Sarah V; Neil, Greg J; Barlow, Jess; Dyson, Amy; Eaton-Brown, Catherine; Parsons, Beth

    2013-03-01

    The results of two experiments are presented which explore the effect of distractor items on face and voice recognition. Following from the suggestion that voice processing is relatively weak compared to face processing, it was anticipated that voice recognition would be more affected by the presentation of distractor items between study and test compared to face recognition. Using a sequential matching task with a fixed interval between study and test that either incorporated distractor items or did not, the results supported our prediction. Face recognition remained strong irrespective of the number of distractor items between study and test. In contrast, voice recognition was significantly impaired by the presence of distractor items regardless of their number (Experiment 1). This pattern remained whether distractor items were highly similar to the targets or not (Experiment 2). These results offer support for the proposal that voice processing is a relatively vulnerable method of identification. PMID:22926436

  13. Face Recognition Using Local Quantized Patterns and Gabor Filters

    NASA Astrophysics Data System (ADS)

    Khryashchev, V.; Priorov, A.; Stepanova, O.; Nikitin, A.

    2015-05-01

    The problem of face recognition in a natural or artificial environment has received a great deal of researchers' attention over the last few years, and many methods for accurate face recognition have been proposed. Nevertheless, these methods often fail to accurately recognize a person in difficult scenarios, e.g. low resolution, low contrast, or pose variations. We therefore propose an approach for accurate and robust face recognition using local quantized patterns and Gabor filters. Estimation of the eye centers is used as a preprocessing stage. The evaluation of our algorithm on different samples from the standardized FERET database shows that our method is invariant to general variations of lighting, expression, occlusion and aging. The proposed approach yields about a 20% increase in correct recognition accuracy compared with the known face recognition algorithms from the OpenCV library. The additional use of Gabor filters significantly improves robustness to changes in lighting conditions.
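
    Local quantized patterns generalize local binary patterns (LBP); a minimal 3x3 LBP histogram, which conveys the basic idea, can be written as:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 3x3 local binary patterns: each interior pixel is encoded
    by thresholding its 8 neighbours against the centre, and the image
    is summarized as a normalized 256-bin histogram of those codes."""
    c = img[1:-1, 1:-1]
    # neighbours in a fixed clockwise order, one bit per neighbour
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256)
    return hist / hist.sum()
```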

  14. Recognition of Faces of Ingroup and Outgroup Children and Adults

    ERIC Educational Resources Information Center

    Corenblum, B.; Meissner, Christian A.

    2006-01-01

    People are often more accurate in recognizing faces of ingroup members than in recognizing faces of outgroup members. Although own-group biases in face recognition are well established among adults, less attention has been given to such biases among children. This is surprising considering how often children give testimony in criminal and civil…

  15. Knowledge Based 3d Building Model Recognition Using Convolutional Neural Networks from LIDAR and Aerial Imageries

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2016-06-01

    In recent years, with the development of high-resolution data acquisition technologies, many approaches have been presented to extract accurate and up-to-date 3D building models, a key element of city structures, for numerous urban mapping applications. In this paper, a novel model-based approach is proposed for automatic recognition of building roof models, such as flat, gable, hip, and pyramid-hip roofs, based on deep structures for hierarchical learning of features extracted from both LiDAR data and aerial ortho-photos. The main steps of this approach are building segmentation, feature extraction and learning, and building roof labeling in a supervised, pre-trained Convolutional Neural Network (CNN) framework, yielding an automatic recognition system for various types of buildings over an urban area. In this framework, the height information provides invariant geometric features that allow the CNN to localize the boundary of each individual roof. A CNN is a feed-forward neural network based on the multilayer perceptron concept, consisting of a number of convolutional and subsampling layers in an adaptable structure; it is widely used in pattern recognition and object detection applications. Since the training dataset is a small library of labeled models for different roof shapes, the computation time of learning can be decreased significantly using pre-trained models. The experimental results highlight the effectiveness of the deep learning approach in detecting and extracting building roof patterns automatically, given the complementary nature of the height and RGB information.
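
    The convolutional and subsampling layers mentioned above can be illustrated with a minimal forward pass in NumPy (single channel, no learned weights; a sketch of the operations only, not the authors' network):

```python
import numpy as np

def conv2d(x, k):
    """'Valid' 2-D convolution (really cross-correlation, as in most
    CNN libraries) of a single-channel image x with kernel k."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    """Elementwise rectified-linear activation."""
    return np.maximum(x, 0.0)

def maxpool2(x):
    """2x2 max-pooling: the 'subsampling' layer of a classic CNN."""
    H, W = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:H, :W]
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))
```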

  16. Two dimensional LDA using volume measure in face recognition

    NASA Astrophysics Data System (ADS)

    Meng, Jicheng; Feng, Li; Zheng, Xiaolong

    2007-11-01

    The classification criterion for two-dimensional LDA (2DLDA)-based face recognition has received little attention, as most work focuses on 2DLDA-based feature extraction. The typical classification measure used in 2DLDA-based face recognition is the sum of the Euclidean distances between corresponding feature vectors in the feature matrices, called the traditional distance measure (TDM). However, this classification criterion does not match high-dimensional geometry theory. We therefore apply the volume measure (VM), which is based on high-dimensional geometry, to 2DLDA-based face recognition in this paper. To test its performance, experiments were performed on the Yale face database. The experimental results show that the volume measure is more effective than the TDM in 2DLDA-based face recognition.
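
    The abstract contrasts the TDM with a volume measure but gives neither formula. The sketch below shows the TDM as stated (sum of column-wise Euclidean distances between two feature matrices) together with one plausible formalization of a volume measure via the Gram determinant of the difference matrix; the latter is an assumption, not necessarily the authors' exact definition.

```python
import numpy as np

def tdm(F1, F2):
    """Traditional distance measure: sum of Euclidean distances
    between corresponding feature (column) vectors."""
    return np.sum(np.linalg.norm(F1 - F2, axis=0))

def volume_measure(F1, F2):
    """Assumed VM: volume of the parallelotope spanned by the k
    difference vectors, via the Gram determinant sqrt(det(D^T D))."""
    D = F1 - F2                      # d x k difference matrix, k <= d
    return np.sqrt(max(np.linalg.det(D.T @ D), 0.0))

rng = np.random.default_rng(1)
A = rng.normal(size=(10, 3))         # feature matrix: 10-dim, 3 columns
B = rng.normal(size=(10, 3))
```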

  17. Multi-feature fusion for thermal face recognition

    NASA Astrophysics Data System (ADS)

    Bi, Yin; Lv, Mingsong; Wei, Yangjie; Guan, Nan; Yi, Wang

    2016-07-01

    Human face recognition has been researched for three decades. Face recognition with thermal images now attracts significant attention because it can be used in low-light or unilluminated environments. However, thermal face recognition performance is still insufficient for practical applications. One main reason is that most existing work leverages only a single feature to characterize a face in a thermal image. To solve this problem, we propose multi-feature fusion, a technique that combines multiple features in thermal face characterization and recognition. In this work, we designed a systematic way to combine four features: the local binary pattern, the Gabor jet descriptor, the Weber local descriptor, and a down-sampling feature. Experimental results show that our approach outperforms methods that leverage only a single feature and is robust to noise, occlusion, expression, low resolution, and different l1-minimization methods.

  18. Optical Correlator for Face Recognition Using Collinear Holographic System

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Kodate, Kashiko

    2006-08-01

    We have constructed an optical correlator for fast face recognition. The recognition rate can be markedly improved if reference images are optically recorded and accessed directly, without converting them to digital signals. In addition, the large capacity of optical storage allows us to increase the size of the reference database. We propose a new optical correlator that integrates the optical correlation technology used in our face recognition system with collinear holography. In preliminary correlation experiments using the collinear optical set-up, we achieved excellent performance, with high correlation peaks and low error rates. We expect an optical correlation time of 10 μs/frame, i.e., 100,000 faces/s when applied to face recognition. This system can also be applied to various image searches.
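
    A digital analogue of the matched-filter correlation that such a system performs optically can be sketched with the FFT: a matching probe/reference pair produces a sharp correlation peak, a non-matching pair a low one. The array sizes and random "faces" below are illustrative assumptions.

```python
import numpy as np

def correlation_peak(probe, reference):
    """Cross-correlate via the FFT and return the highest peak magnitude,
    a digital stand-in for the optical matched-filter correlation."""
    F = np.fft.fft2(probe)
    G = np.fft.fft2(reference)
    corr = np.fft.ifft2(F * np.conj(G))
    return np.abs(corr).max()

rng = np.random.default_rng(2)
face = rng.normal(size=(32, 32))
other = rng.normal(size=(32, 32))
match = correlation_peak(face, face)       # autocorrelation: sharp peak
mismatch = correlation_peak(face, other)   # cross-correlation: low peak
```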

  19. Face recognition algorithms surpass humans matching faces over changes in illumination.

    PubMed

    O'Toole, Alice J; Jonathon Phillips, P; Jiang, Fang; Ayyad, Janet; Penard, Nils; Abdi, Hervé

    2007-09-01

    There has been significant progress in improving the performance of computer-based face recognition algorithms over the last decade. Although algorithms have been tested and compared extensively with each other, there has been remarkably little work comparing the accuracy of computer-based face recognition systems with humans. We compared seven state-of-the-art face recognition algorithms with humans on a face-matching task. Humans and algorithms determined whether pairs of face images, taken under different illumination conditions, were pictures of the same person or of different people. Three algorithms surpassed human performance matching face pairs prescreened to be "difficult" and six algorithms surpassed humans on "easy" face pairs. Although illumination variation continues to challenge face recognition algorithms, current algorithms compete favorably with humans. The superior performance of the best algorithms over humans, in light of the absolute performance levels of the algorithms, underscores the need to compare algorithms with the best current control: humans. PMID:17627050

  20. Integration of faces and voices, but not faces and names, in person recognition.

    PubMed

    O'Mahony, Christiane; Newell, Fiona N

    2012-02-01

    Recent studies on cross-modal recognition suggest that face and voice information are linked for the purpose of person identification. We tested whether congruent associations between familiarized faces and voices facilitated subsequent person recognition relative to incongruent associations. Furthermore, we investigated whether congruent face and name associations would similarly benefit person identification relative to incongruent face and name associations. Participants were familiarized with a set of talking video-images of actors, their names, and their voices. They were then tested on their recognition of either the face, voice, or name of each actor from bimodal stimuli which were either congruent or novel (incongruent) associations between the familiarized face and voice or face and name. We found that response times to familiarity decisions based on congruent face and voice stimuli were facilitated relative to incongruent associations. In contrast, we failed to find a benefit for congruent face and name pairs. Our findings suggest that faces and voices, but not faces and names, are integrated in memory for the purpose of person recognition. These findings have important implications for current models of face perception and support growing evidence for multisensory effects in face perception areas of the brain for the purpose of person recognition. PMID:22229775

  1. The Impact of Early Bilingualism on Face Recognition Processes.

    PubMed

    Kandel, Sonia; Burfin, Sabine; Méary, David; Ruiz-Tada, Elisa; Costa, Albert; Pascalis, Olivier

    2016-01-01

    Early linguistic experience has an impact on the way we decode audiovisual speech in face-to-face communication. The present study examined whether differences in visual speech decoding could be linked to a broader difference in face processing. To identify a phoneme we have to do an analysis of the speaker's face to focus on the relevant cues for speech decoding (e.g., locating the mouth with respect to the eyes). Face recognition processes were investigated through two classic effects in face recognition studies: the Other-Race Effect (ORE) and the Inversion Effect. Bilingual and monolingual participants did a face recognition task with Caucasian faces (own race), Chinese faces (other race), and cars that were presented in an Upright or Inverted position. The results revealed that monolinguals exhibited the classic ORE. Bilinguals did not. Overall, bilinguals were slower than monolinguals. These results suggest that bilinguals' face processing abilities differ from monolinguals'. Early exposure to more than one language may lead to a perceptual organization that goes beyond language processing and could extend to face analysis. We hypothesize that these differences could be due to the fact that bilinguals focus on different parts of the face than monolinguals, making them more efficient in other race face processing but slower. However, more studies using eye-tracking techniques are necessary to confirm this explanation. PMID:27486422

  3. Effective face recognition using bag of features with additive kernels

    NASA Astrophysics Data System (ADS)

    Yang, Shicai; Bebis, George; Chu, Yongjie; Zhao, Lindu

    2016-01-01

    Over the past decades, many techniques have been used to improve face recognition performance. The most common and well-studied approach is to use the whole face image to build a subspace through dimensionality reduction. In contrast to such methods, we treat face recognition as an image classification problem: face images of the same person are considered to fall into the same category, and each category and each face image can be represented by a simple pyramid histogram. Dense spatial scale-invariant feature transform (SIFT) features and the bag-of-features method are used to build the categories and face representations. To make the method more efficient, a linear support vector machine solver, Pegasos, is used for classification in the kernel space with additive kernels instead of nonlinear SVMs. Our experimental results demonstrate that the proposed method achieves very high recognition accuracy on the ORL, YALE, and FERET databases.
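
    The bag-of-features representation described above can be sketched as nearest-codeword assignment followed by a normalized histogram. The codebook here is random for illustration (in practice it would be learned, e.g. by k-means over dense SIFT descriptors), and the Pegasos/additive-kernel classification stage is omitted.

```python
import numpy as np

def assign_words(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def bof_histogram(descriptors, codebook):
    """Represent one face image as a normalized visual-word histogram."""
    words = assign_words(descriptors, codebook)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(3)
codebook = rng.normal(size=(50, 128))       # 50 visual words, SIFT-like 128-D
descriptors = rng.normal(size=(200, 128))   # dense descriptors from one image
hist = bof_histogram(descriptors, codebook)
```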

  4. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition

    PubMed Central

    Wang, Rong

    2015-01-01

    In real-world applications, face images vary with illumination, facial expression, and pose. More training samples can reveal more of the possible appearances of a face. Although minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from the limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples. We generate mirror faces from the original training samples and combine both kinds of samples into a new training set. The face recognition experiments show that our method achieves high classification accuracy. PMID:26576452
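
    The mirror-face augmentation and a least-squares variant of MSEC can be sketched as follows. The tiny random "faces" and the one-hot least-squares formulation are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def augment_with_mirrors(images):
    """Add the horizontal mirror of every training face as a virtual sample."""
    mirrored = images[:, :, ::-1]
    return np.concatenate([images, mirrored], axis=0)

def msec_train(X, y, n_classes):
    """Minimum squared error classification: least-squares map from
    flattened faces to one-hot class indicators."""
    Y = np.eye(n_classes)[y]
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def msec_predict(X, W):
    return (X @ W).argmax(axis=1)

rng = np.random.default_rng(4)
faces = rng.normal(size=(6, 8, 8))              # 6 tiny "face" images
labels = np.array([0, 0, 0, 1, 1, 1])
train = augment_with_mirrors(faces)             # 12 samples after mirroring
train_labels = np.concatenate([labels, labels])
X = train.reshape(len(train), -1)
W = msec_train(X, train_labels, 2)
pred = msec_predict(X, W)
```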

  5. Feature based sliding window technique for face recognition

    NASA Astrophysics Data System (ADS)

    Javed, Muhammad Younus; Mohsin, Syed Maajid; Anjum, Muhammad Almas

    2010-02-01

    Human beings are commonly identified by biometric schemes, which identify individuals by their unique physical characteristics. Passwords and personal identification numbers have been used to identify people for years; their disadvantages are that someone else may use them and that they are easily forgotten. In view of these problems, biometric approaches such as face recognition, fingerprint, iris/retina, and voice recognition have been developed, which provide a far better solution for identifying individuals. A number of methods have been developed for face recognition. This paper illustrates the use of Gabor filters to extract facial features within a sliding window frame. Classification is done by assigning to the unknown image the class label of the database image with the most similar features. The proposed system gives a recognition rate of 96%, which is better than many similar techniques used for face recognition.
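
    A minimal sketch of Gabor-filter features computed over a sliding window, assuming a small bank of real-valued Gabor kernels at four orientations; the kernel size, stride, and filter parameters are arbitrary illustrations, and the classification stage is omitted.

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lambd):
    """Real part of a Gabor filter at orientation theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return (np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
            * np.cos(2 * np.pi * xr / lambd))

def window_features(window, kernels):
    """One Gabor response magnitude per orientation for a window."""
    return np.array([np.abs(np.sum(window * k)) for k in kernels])

rng = np.random.default_rng(5)
image = rng.normal(size=(32, 32))
kernels = [gabor_kernel(9, 2.0, t, 4.0)
           for t in np.linspace(0, np.pi, 4, endpoint=False)]
# slide a 9x9 window with stride 8 and collect one feature vector per window
feats = np.array([window_features(image[i:i + 9, j:j + 9], kernels)
                  for i in range(0, 24, 8) for j in range(0, 24, 8)])
```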

  6. FaceIt: face recognition from static and live video for law enforcement

    NASA Astrophysics Data System (ADS)

    Atick, Joseph J.; Griffin, Paul M.; Redlich, A. N.

    1997-01-01

    Recent advances in image and pattern recognition technology, especially face recognition, are leading to the development of a new generation of information systems of great value to the law enforcement community. With these systems it is now possible to pool and manage vast amounts of biometric intelligence, such as face and fingerprint records, and conduct computerized searches on them. We review one of the enabling technologies underlying these systems, the FaceIt face recognition engine, and discuss three applications that illustrate its benefits as a problem-solving technology and an efficient and cost-effective investigative tool.

  7. Face engagement during infancy predicts later face recognition ability in younger siblings of children with autism.

    PubMed

    de Klerk, Carina C J M; Gliga, Teodora; Charman, Tony; Johnson, Mark H

    2014-07-01

    Face recognition difficulties are frequently documented in children with autism spectrum disorders (ASD). It has been hypothesized that these difficulties result from a reduced interest in faces early in life, leading to decreased cortical specialization and atypical development of the neural circuitry for face processing. However, a recent study by our lab demonstrated that infants at increased familial risk for ASD, irrespective of their diagnostic status at 3 years, exhibit a clear orienting response to faces. The present study was conducted as a follow-up on the same cohort to investigate how measures of early engagement with faces relate to face-processing abilities later in life. We also investigated whether face recognition difficulties are specifically related to an ASD diagnosis, or whether they are present at a higher rate in all those at familial risk. At 3 years we found a reduced ability to recognize unfamiliar faces in the high-risk group that was not specific to those children who received an ASD diagnosis, consistent with face recognition difficulties being an endophenotype of the disorder. Furthermore, we found that longer looking at faces at 7 months was associated with poorer performance on the face recognition task at 3 years in the high-risk group. These findings suggest that longer looking at faces in infants at risk for ASD might reflect early face-processing difficulties and predicts difficulties with recognizing faces later in life. PMID:24314028

  8. Culture moderates the relationship between interdependence and face recognition

    PubMed Central

    Ng, Andy H.; Steele, Jennifer R.; Sasaki, Joni Y.; Sakamoto, Yumiko; Williams, Amanda

    2015-01-01

    Recent theory suggests that face recognition accuracy is affected by people’s motivations, with people being particularly motivated to remember ingroup versus outgroup faces. In the current research we suggest that those higher in interdependence should have a greater motivation to remember ingroup faces, but this should depend on how ingroups are defined. To examine this possibility, we used a joint individual difference and cultural approach to test (a) whether individual differences in interdependence would predict face recognition accuracy, and (b) whether this effect would be moderated by culture. In Study 1 European Canadians higher in interdependence demonstrated greater recognition for same-race (White), but not cross-race (East Asian) faces. In Study 2 we found that culture moderated this effect. Interdependence again predicted greater recognition for same-race (White), but not cross-race (East Asian) faces among European Canadians; however, interdependence predicted worse recognition for both same-race (East Asian) and cross-race (White) faces among first-generation East Asians. The results provide insight into the role of motivation in face perception as well as cultural differences in the conception of ingroups. PMID:26579011

  9. Face recognition in simulated prosthetic vision: face detection-based image processing strategies

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Wu, Xiaobei; Lu, Yanyu; Wu, Hao; Kan, Han; Chai, Xinyu

    2014-08-01

    Objective. Given the limited visual percepts elicited by current prosthetic devices, it is essential to optimize image content in order to assist implant wearers to achieve better performance of visual tasks. This study focuses on recognition of familiar faces using simulated prosthetic vision. Approach. Combined with region-of-interest (ROI) magnification, three face extraction strategies based on a face detection technique were used: the Viola-Jones face region, the statistical face region (SFR) and the matting face region. Main results. These strategies significantly enhanced recognition performance compared to directly lowering resolution (DLR) with Gaussian dots. The inclusion of certain external features, such as hairstyle, was beneficial for face recognition. Given the high recognition accuracy achieved and applicable processing speed, SFR-ROI was the preferred strategy. DLR processing resulted in significant face gender recognition differences (i.e. females were more easily recognized than males), but these differences were not apparent with other strategies. Significance. Face detection-based image processing strategies improved visual perception by highlighting useful information. Their use is advisable for face recognition when using low-resolution prosthetic vision. These results provide information for the continued design of image processing modules for use in visual prosthetics, thus maximizing the benefits for future prosthesis wearers.

  10. Real-time automated 3D sensing, detection, and recognition of dynamic biological micro-organic events

    NASA Astrophysics Data System (ADS)

    Javidi, Bahram; Yeom, Seokwon; Moon, Inkyu; Daneshpanah, Mehdi

    2006-05-01

    In this paper, we present an overview of three-dimensional (3D) optical imaging techniques for real-time automated sensing, visualization, and recognition of dynamic biological microorganisms. Real-time sensing and 3D reconstruction of dynamic biological microscopic objects can be performed by single-exposure on-line (SEOL) digital holographic microscopy. A coherent 3D microscope-based interferometer is constructed to record digital holograms of dynamic micro-biological events. Complex amplitude 3D images of the biological microorganisms are computationally reconstructed at different depths by digital signal processing. Bayesian segmentation algorithms are applied to identify regions of interest for further processing. A number of pattern recognition approaches are addressed to identify and recognize the microorganisms. One approach uses the 3D morphology of the microorganisms, analyzing 3D geometrical shapes composed of magnitude and phase. Segmentation, feature extraction, graph matching, feature selection, and training and decision rules are used to recognize the biological microorganisms. A different approach uses 3D techniques that are tolerant of the varying shapes of non-rigid biological microorganisms. After segmentation, a number of sampling patches are arbitrarily extracted from the complex amplitudes of the reconstructed 3D biological microorganism. These patches are processed using a number of cost functions and statistical inference theory to test the equality of means and equality of variances between the sampling segments. We also discuss the possibility of employing computational integral imaging for 3D sensing, visualization, and recognition of biological microorganisms illuminated under incoherent light. Experimental results with several biological microorganisms are presented to illustrate the detection, segmentation, and identification of micro-biological events.

  11. What Types of Visual Recognition Tasks Are Mediated by the Neural Subsystem that Subserves Face Recognition?

    ERIC Educational Resources Information Center

    Brooks, Brian E.; Cooper, Eric E.

    2006-01-01

    Three divided visual field experiments tested current hypotheses about the types of visual shape representation tasks that recruit the cognitive and neural mechanisms underlying face recognition. Experiment 1 found a right hemisphere advantage for subordinate but not basic-level face recognition. Experiment 2 found a right hemisphere advantage for…

  12. Eye movements during emotion recognition in faces.

    PubMed

    Schurgin, M W; Nelson, J; Iida, S; Ohira, H; Chiao, J Y; Franconeri, S L

    2014-01-01

    When distinguishing whether a face displays a certain emotion, some regions of the face may contain more useful information than others. Here we ask whether people differentially attend to distinct regions of a face when judging different emotions. Experiment 1 measured eye movements while participants discriminated between emotional (joy, anger, fear, sadness, shame, and disgust) and neutral facial expressions. Participant eye movements primarily fell in five distinct regions (eyes, upper nose, lower nose, upper lip, nasion). Distinct fixation patterns emerged for each emotion, such as a focus on the lips for joyful faces and a focus on the eyes for sad faces. These patterns were strongest for emotional faces but were still present when viewers sought evidence of emotion within neutral faces, indicating a goal-driven influence on eye-gaze patterns. Experiment 2 verified that these fixation patterns tended to reflect attention to the most diagnostic regions of the face for each emotion. Eye movements appear to follow both stimulus-driven and goal-driven perceptual strategies when decoding emotional information from a face. PMID:25406159

  13. SIFT fusion of kernel eigenfaces for face recognition

    NASA Astrophysics Data System (ADS)

    Kisku, Dakshina R.; Tistarelli, Massimo; Gupta, Phalguni; Sing, Jamuna K.

    2015-10-01

    In this paper, we investigate an application that integrates a holistic appearance-based method and a feature-based method for face recognition. The automatic face recognition system makes use of multiscale Kernel PCA (Principal Component Analysis) to characterize approximated face images and reduces the number of invariant SIFT (Scale Invariant Feature Transform) keypoints extracted from the projected face feature space. To achieve higher variance among inter-class face images, we compute principal components in a higher-dimensional feature space to project a face image onto approximated kernel eigenfaces. As long as the feature spaces retain their distinctive characteristics, a reduced number of SIFT keypoints is detected for a number of principal components; the keypoints are then fused using a user-dependent weighting scheme to form a feature vector. The proposed method is tested on the ORL face database, and the efficacy of the system is demonstrated by test results computed using the proposed algorithm.

  14. Improving cross-modal face recognition using polarimetric imaging.

    PubMed

    Short, Nathaniel; Hu, Shuowen; Gurram, Prudhvi; Gurton, Kristan; Chan, Alex

    2015-03-15

    We investigate the performance of polarimetric imaging in the long-wave infrared (LWIR) spectrum for cross-modal face recognition. For this work, polarimetric imagery is generated as stacks of three components: the conventional thermal intensity image (referred to as S0) and the two Stokes images, S1 and S2, which contain combinations of different polarizations. The proposed face recognition algorithm extracts and combines local gradient magnitude and orientation information from S0, S1, and S2 to generate a robust feature set that is well suited for cross-modal face recognition. Initial results show that polarimetric LWIR-to-visible face recognition achieves an 18% increase in Rank-1 identification rate compared to conventional LWIR-to-visible face recognition. We conclude that a substantial improvement in automatic face recognition performance can be achieved by exploiting the polarization state of radiance, as compared to using conventional thermal imagery. PMID:25768137
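
    The local gradient magnitude and orientation features extracted from S0, S1, and S2 can be sketched as orientation histograms weighted by gradient magnitude, concatenated across the three Stokes components. The histogram binning is an assumption, not the authors' exact descriptor.

```python
import numpy as np

def grad_orientation_hist(img, nbins=8):
    """Histogram of gradient orientations weighted by gradient magnitude."""
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                      # orientation in [-pi, pi]
    hist, _ = np.histogram(ang, bins=nbins, range=(-np.pi, np.pi), weights=mag)
    return hist

def polarimetric_features(S0, S1, S2, nbins=8):
    """Concatenate gradient histograms from the three Stokes components."""
    return np.concatenate([grad_orientation_hist(s, nbins) for s in (S0, S1, S2)])

rng = np.random.default_rng(6)
S0, S1, S2 = (rng.normal(size=(24, 24)) for _ in range(3))
feat = polarimetric_features(S0, S1, S2)
```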

  15. Fast face recognition by using an inverted index

    NASA Astrophysics Data System (ADS)

    Herrmann, Christian; Beyerer, Jürgen

    2015-02-01

    This contribution addresses the task of searching for faces in large video datasets. Despite vast progress in the field, face recognition remains a challenge for uncontrolled large-scale applications such as searching for persons in surveillance footage or internet videos. Current productive systems focus on the best-shot approach, where only one representative frame of a face track is selected, thus sacrificing recognition performance; systems achieving state-of-the-art recognition performance, like the recently published DeepFace, ignore recognition speed, which makes them impractical for large-scale applications. We suggest a set of measures to address this problem. First, considering the feature location allows collecting the extracted features in corresponding sets. Second, the inverted index approach, which became popular in the area of image retrieval, is applied to these feature sets. A face track is thus described by a set of local, indexed visual words, which enables a fast search. In this way, all information from a face track is collected, which allows better recognition performance than best-shot approaches, while the inverted index permits consistently high recognition speed. Evaluation on a dataset of several thousand videos shows the validity of the proposed approach.
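
    The inverted-index idea, where each visual word maps to the face tracks containing it and a query votes over shared words, can be sketched with the standard library alone; the track names and word IDs are invented for illustration.

```python
from collections import defaultdict

class InvertedIndex:
    """Map each visual word to the face tracks that contain it;
    answer queries by voting over shared words."""
    def __init__(self):
        self.index = defaultdict(set)

    def add(self, track_id, words):
        for w in set(words):
            self.index[w].add(track_id)

    def query(self, words):
        votes = defaultdict(int)
        for w in set(words):
            for track in self.index[w]:
                votes[track] += 1
        return sorted(votes, key=votes.get, reverse=True)

idx = InvertedIndex()
idx.add("track_a", [1, 2, 3, 4])
idx.add("track_b", [3, 4, 5])
idx.add("track_c", [7, 8])
ranking = idx.query([1, 2, 3])   # track_a shares 3 words, track_b shares 1
```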

  16. Robust textural features for real time face recognition

    NASA Astrophysics Data System (ADS)

    Cui, Chen; Asari, Vijayan K.; Braun, Andrew D.

    2015-03-01

    Automatic face recognition in real-life environments is challenged by various issues such as object motion, lighting conditions, poses, and expressions. In this paper, we present the development of a system based on a refined Enhanced Local Binary Pattern (ELBP) feature set and a Support Vector Machine (SVM) classifier to perform face recognition in a real-life environment. Instead of counting the number of 1's in the ELBP, we use the 8-bit code of the thresholded data as per the ELBP rule, then binarize the image with a predefined threshold value and remove small connections in the binarized image. The proposed system is currently trained with face images of several people obtained from video sequences captured by a surveillance camera. One test set contains disjoint images of the trained people's faces to test accuracy, and a second test set contains images of non-trained people's faces to measure the false positive rate. The recognition rate on 570 images of 9 trained faces is around 94%, and the false positive rate on 2600 images of 34 non-trained faces is around 1%. Work is in progress on the recognition of partially occluded faces as well; an appropriate weighting strategy will be applied to different parts of the face area to achieve better performance.
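
    The 8-bit ELBP-style code described above (thresholding the eight neighbors against the center rather than counting 1's) can be sketched as follows; the neighbor ordering and the final binarization threshold are illustrative assumptions.

```python
import numpy as np

def lbp_codes(img):
    """8-bit LBP: each of the 8 neighbors that is >= the center sets one bit."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise from top-left
    h, w = img.shape
    center = img[1:h - 1, 1:w - 1]
    codes = np.zeros((h - 2, w - 2), dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbor >= center).astype(int) << (7 - bit)
    return codes

img = np.array([[9, 9, 9],
                [9, 5, 9],
                [9, 9, 9]])
code = lbp_codes(img)                 # every neighbor exceeds the center
binary = (code >= 128).astype(int)    # subsequent binarization step (assumed threshold)
```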

  17. Newborns' Face Recognition over Changes in Viewpoint

    ERIC Educational Resources Information Center

    Turati, Chiara; Bulf, Hermann; Simion, Francesca

    2008-01-01

    The study investigated the origins of the ability to recognize faces despite rotations in depth. Four experiments are reported that tested, using the habituation technique, whether 1-to-3-day-old infants are able to recognize the invariant aspects of a face over changes in viewpoint. Newborns failed to recognize facial perceptual invariances…

  18. Error Rates in Users of Automatic Face Recognition Software

    PubMed Central

    White, David; Dunn, James D.; Schmid, Alexandra C.; Kemp, Richard I.

    2015-01-01

    In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated ‘candidate lists’ selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared the performance of student participants to trained passport officers, who use the system in their daily work, and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced “facial examiners” outperformed these groups by 20 percentage points. We conclude that human performance curtails the accuracy of face recognition systems, potentially reducing benchmark estimates by 50% in operational settings. Mere practise does not attenuate these limits, but the superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems. PMID:26465631

  19. Developmental Changes in Face Recognition during Childhood: Evidence from Upright and Inverted Faces

    ERIC Educational Resources Information Center

    de Heering, Adelaide; Rossion, Bruno; Maurer, Daphne

    2012-01-01

    Adults are experts at recognizing faces but there is controversy about how this ability develops with age. We assessed 6- to 12-year-olds and adults using a digitized version of the Benton Face Recognition Test, a sensitive tool for assessing face perception abilities. Children's response times for correct responses did not decrease between ages 6…

  20. Face Engagement during Infancy Predicts Later Face Recognition Ability in Younger Siblings of Children with Autism

    ERIC Educational Resources Information Center

    de Klerk, Carina C. J. M.; Gliga, Teodora; Charman, Tony; Johnson, Mark H.

    2014-01-01

    Face recognition difficulties are frequently documented in children with autism spectrum disorders (ASD). It has been hypothesized that these difficulties result from a reduced interest in faces early in life, leading to decreased cortical specialization and atypical development of the neural circuitry for face processing. However, a recent study…

  1. Extraction and refinement of building faces in 3D point clouds

    NASA Astrophysics Data System (ADS)

    Pohl, Melanie; Meidow, Jochen; Bulatov, Dimitri

    2013-10-01

    In this paper, we present an approach to generate a 3D model of an urban scene from sensor data. The first milestone is to classify the sensor data into the main parts of a scene, such as ground, vegetation, buildings, and their outlines; this has already been accomplished in our previous work. Here, we propose a four-step algorithm to model the building structure, which is assumed to consist of several dominant planes. First, we extract small elevated objects, like chimneys, using a hot-spot detector and handle the detected regions separately. To model the variety of roof structures precisely, we split complex building blocks into parts using two different approaches: when underlying 2D ground polygons are available, we use geometric methods to divide them into sub-polygons; without polygons, we use morphological operations and segmentation methods. In the third step, dominant planes are extracted using either the RANSAC or the J-linkage algorithm. These operate on point clouds of sufficient confidence within the previously separated building parts and give robust results even with noisy, outlier-rich data. Last, we refine the previously determined plane parameters using geometric relations of the building faces. Due to noise, the expected properties of roofs and walls are not fulfilled; hence, we enforce them as hard constraints and use the previously extracted plane parameters as initial values for an optimization method. To test the proposed workflow, we use several data sets, including noisy data from depth maps and data computed by laser scanning.
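
    The dominant-plane extraction step can be sketched with a basic RANSAC loop: sample three points, fit a plane, and keep the hypothesis with the most inliers. The thresholds, iteration count, and synthetic roof points are illustrative assumptions (the paper's alternative, the J-linkage algorithm, is not shown).

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.05, seed=0):
    """Keep the plane (through 3 random points) with the most inliers."""
    rng = np.random.default_rng(seed)
    best = (None, 0.0, -1)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p0
        count = int(np.sum(np.abs(points @ n + d) < tol))
        if count > best[2]:
            best = (n, d, count)
    return best

rng = np.random.default_rng(7)
# synthetic roof face: points near the plane z = 3, plus clutter
roof = np.column_stack([rng.uniform(0, 10, 300), rng.uniform(0, 10, 300),
                        3.0 + rng.normal(0, 0.01, 300)])
clutter = rng.uniform(0, 10, (60, 3))
normal, d, count = ransac_plane(np.vstack([roof, clutter]))
```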

  2. Cross-modal face recognition using multi-matcher face scores

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2015-05-01

The performance of face recognition can be improved using information fusion of multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, a probe face may be a thermal image (especially at nighttime), while only visible face images are available in the gallery database. Matching a thermal probe face against the visible gallery faces requires cross-modal matching approaches. A few such studies have been implemented in facial feature space, with only moderate recognition performance. In this paper, we propose a cross-modal recognition approach in which multimodal faces are cross-matched in feature space and the recognition performance is enhanced with stereo fusion at the image, feature, and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed with the three cross-matched face scores from the aforementioned algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logistic regression [BLR]) is trained and then tested on the score vectors using 10-fold cross-validation. The proposed approach was validated with a multispectral stereo face dataset from 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84% and FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces onto the fused visible faces using three face scores and the BLR classifier.
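As a rough illustration of the score-level fusion step described above, the sketch below trains a logistic-regression classifier on three-score vectors and evaluates it with 10-fold cross-validation. The score distributions are invented for illustration and are not the paper's data or matchers.

```python
# Hedged sketch of score-vector fusion with a logistic-regression classifier
# and 10-fold cross-validation (synthetic scores, not the paper's matchers).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic score vectors: three matcher scores per face comparison.
# Genuine comparisons tend to score higher than impostor comparisons.
genuine = rng.normal(loc=0.7, scale=0.1, size=(200, 3))
impostor = rng.normal(loc=0.4, scale=0.1, size=(200, 3))
X = np.vstack([genuine, impostor])
y = np.array([1] * 200 + [0] * 200)

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy").mean()
print(f"10-fold CV accuracy: {acc:.3f}")
```

On well-separated synthetic scores the fused classifier reaches near-perfect accuracy; real thermal-to-visible score distributions overlap far more.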

  3. Face-Recognition Memory: Implications for Children's Eyewitness Testimony.

    ERIC Educational Resources Information Center

    Chance, June E.; Goldstein, Alvin G.

    1984-01-01

    Reviews studies of face-recognition memory and considers implications for assessing the dependability of children's performances as eyewitnesses. Considers personal factors (age, intellectual differences, and gender) and situational factors (familiarity of face, retention interval, and others). Also identifies developmental questions for future…

  4. Supervised Filter Learning for Representation Based Face Recognition

    PubMed Central

    Bi, Chao; Zhang, Lei; Qi, Miao; Zheng, Caixia; Yi, Yugen; Wang, Jianzhong; Zhang, Baoxue

    2016-01-01

Representation-based classification methods, such as Sparse Representation Classification (SRC) and Linear Regression Classification (LRC), have been successfully developed for the face recognition problem. However, most of these methods use the original face images without any preprocessing for recognition. Thus, their performance may be affected by problematic factors (such as illumination and expression variations) in the face images. To overcome this limitation, a novel supervised filter learning algorithm for representation-based face recognition is proposed in this paper. The underlying idea of our algorithm is to learn a filter such that the within-class representation residuals of the faces' Local Binary Pattern (LBP) features are minimized and the between-class representation residuals of the faces' LBP features are maximized. Therefore, the LBP features of filtered face images are more discriminative for representation-based classifiers. Furthermore, we also extend our algorithm to the heterogeneous face recognition problem. Extensive experiments are carried out on five databases, and the experimental results verify the efficacy of the proposed algorithm. PMID:27416030
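For readers unfamiliar with the representation-based classifiers this work builds on, here is a minimal sketch of Linear Regression Classification (LRC): a probe is assigned to the class whose gallery samples reconstruct it with the smallest least-squares residual. Synthetic vectors stand in for the LBP features of the paper; all data and dimensions are invented for illustration.

```python
# Minimal LRC sketch: classify a probe by minimum class-wise
# reconstruction residual (synthetic stand-ins for LBP features).
import numpy as np

rng = np.random.default_rng(1)

def lrc_predict(galleries, probe):
    """galleries: dict label -> (d, n) matrix of gallery feature columns."""
    residuals = {}
    for label, X in galleries.items():
        beta, *_ = np.linalg.lstsq(X, probe, rcond=None)
        residuals[label] = np.linalg.norm(probe - X @ beta)
    return min(residuals, key=residuals.get)

# Two classes living near different 2-D subspaces of a 50-D feature space.
basis_a, basis_b = rng.normal(size=(50, 2)), rng.normal(size=(50, 2))
gal = {"A": basis_a @ rng.normal(size=(2, 5)),
       "B": basis_b @ rng.normal(size=(2, 5))}
probe = basis_a @ rng.normal(size=2) + 0.01 * rng.normal(size=50)
print(lrc_predict(gal, probe))  # probe lies in class A's subspace
```

The paper's contribution is a learned filter applied before this step so that within-class residuals shrink and between-class residuals grow; that learning stage is not shown here.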

  5. Development of Face Recognition in Infant Chimpanzees (Pan Troglodytes)

    ERIC Educational Resources Information Center

    Myowa-Yamakoshi, M.; Yamaguchi, M.K.; Tomonaga, M.; Tanaka, M.; Matsuzawa, T.

    2005-01-01

    In this paper, we assessed the developmental changes in face recognition by three infant chimpanzees aged 1-18 weeks, using preferential-looking procedures that measured the infants' eye- and head-tracking of moving stimuli. In Experiment 1, we prepared photographs of the mother of each infant and an ''average'' chimpanzee face using…

  7. The Development of Spatial Frequency Biases in Face Recognition

    ERIC Educational Resources Information Center

    Leonard, Hayley C.; Karmiloff-Smith, Annette; Johnson, Mark H.

    2010-01-01

    Previous research has suggested that a mid-band of spatial frequencies is critical to face recognition in adults, but few studies have explored the development of this bias in children. We present a paradigm adapted from the adult literature to test spatial frequency biases throughout development. Faces were presented on a screen with particular…

  8. Fusion of visible and infrared imagery for face recognition

    NASA Astrophysics Data System (ADS)

    Chen, Xuerong; Jing, Zhongliang; Sun, Shaoyuan; Xiao, Gang

    2004-12-01

In recent years face recognition has received substantial attention, but it remains very challenging in real applications. Despite the variety of approaches and tools studied, face recognition is not accurate or robust enough to be used in uncontrolled environments. Infrared (IR) imagery of human faces offers a promising alternative to visible imagery; however, IR has its own limitations. In this paper, a scheme to fuse information from the two modalities is proposed. The scheme is based on eigenfaces and a probabilistic neural network (PNN), using a fuzzy integral to fuse the objective evidence supplied by each modality. Recognition rate is used to evaluate the fusion scheme. Experimental results show that the scheme improves recognition performance substantially.
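The eigenfaces stage of such a scheme can be sketched with PCA, which projects face images into a low-dimensional subspace before classification. The random arrays below stand in for aligned visible/IR face images, and the PNN and fuzzy-integral fusion stages are not shown.

```python
# Hedged sketch of the eigenfaces step: PCA over flattened face images
# (random arrays stand in for real aligned visible/IR faces).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
faces = rng.random((40, 32 * 32))      # 40 flattened 32x32 "face" images

pca = PCA(n_components=10)
features = pca.fit_transform(faces)    # eigenface coefficients per image
print(features.shape)                  # (40, 10)
```

Each modality would be projected this way separately, with the resulting coefficient vectors fed to the per-modality classifiers whose outputs the fuzzy integral combines.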

  9. Recognition memory in developmental prosopagnosia: electrophysiological evidence for abnormal routes to face recognition

    PubMed Central

    Burns, Edwin J.; Tree, Jeremy J.; Weidemann, Christoph T.

    2014-01-01

    Dual process models of recognition memory propose two distinct routes for recognizing a face: recollection and familiarity. Recollection is characterized by the remembering of some contextual detail from a previous encounter with a face whereas familiarity is the feeling of finding a face familiar without any contextual details. The Remember/Know (R/K) paradigm is thought to index the relative contributions of recollection and familiarity to recognition performance. Despite researchers measuring face recognition deficits in developmental prosopagnosia (DP) through a variety of methods, none have considered the distinct contributions of recollection and familiarity to recognition performance. The present study examined recognition memory for faces in eight individuals with DP and a group of controls using an R/K paradigm while recording electroencephalogram (EEG) data at the scalp. Those with DP were found to produce fewer correct “remember” responses and more false alarms than controls. EEG results showed that posterior “remember” old/new effects were delayed and restricted to the right posterior (RP) area in those with DP in comparison to the controls. A posterior “know” old/new effect commonly associated with familiarity for faces was only present in the controls whereas individuals with DP exhibited a frontal “know” old/new effect commonly associated with words, objects and pictures. These results suggest that individuals with DP do not utilize normal face-specific routes when making face recognition judgments but instead process faces using a pathway more commonly associated with objects. PMID:25177283

  10. The own-age face recognition bias is task dependent.

    PubMed

    Proietti, Valentina; Macchi Cassia, Viola; Mondloch, Catherine J

    2015-08-01

    The own-age bias (OAB) in face recognition (more accurate recognition of own-age than other-age faces) is robust among young adults but not older adults. We investigated the OAB under two different task conditions. In Experiment 1 young and older adults (who reported more recent experience with own than other-age faces) completed a match-to-sample task with young and older adult faces; only young adults showed an OAB. In Experiment 2 young and older adults completed an identity detection task in which we manipulated the identity strength of target and distracter identities by morphing each face with an average face in 20% steps. Accuracy increased with identity strength and facial age influenced older adults' (but not younger adults') strategy, but there was no evidence of an OAB. Collectively, these results suggest that the OAB depends on task demands and may be absent when searching for one identity. PMID:25491773
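The identity-strength manipulation described above can be sketched as a simple linear morph toward an average face in 20% steps. This is a simplification: real face morphs also warp geometry, not just pixel intensities, and the arrays here are random stand-ins for face images.

```python
# Hedged sketch of identity-strength morphing in 20% steps
# (pixel-wise blend only; real morphs also warp facial geometry).
import numpy as np

rng = np.random.default_rng(3)
face = rng.random((64, 64))       # stand-in for a target identity
average = rng.random((64, 64))    # stand-in for the average face

morphs = {s: s / 100 * face + (1 - s / 100) * average
          for s in range(0, 101, 20)}   # 0%, 20%, ..., 100% identity
assert np.allclose(morphs[100], face)      # full identity strength
assert np.allclose(morphs[0], average)     # pure average face
```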

  11. Sparse representation based face recognition using weighted regions

    NASA Astrophysics Data System (ADS)

    Bilgazyev, Emil; Yeniaras, E.; Uyanik, I.; Unan, Mahmut; Leiss, E. L.

    2013-12-01

Face recognition is a challenging research topic, especially when the training (gallery) and recognition (probe) images are acquired with different cameras under varying conditions. Even small amounts of noise or occlusion in the images can compromise recognition accuracy. Lately, sparse-encoding-based classification algorithms have given promising results for such uncontrolled scenarios. In this paper, we introduce a novel methodology that models the sparse encoding with weighted patches to increase the robustness of face recognition even further. In the training phase, we define a mask (i.e., a weight matrix) using a sparse representation that selects the facial regions, and in the recognition phase, we perform the comparison on the selected facial regions. The algorithm was evaluated both quantitatively and qualitatively using two comprehensive surveillance facial image databases, SCface and MFPV, with results clearly superior to common state-of-the-art methodologies in different scenarios.
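The sparse-encoding classification that this work extends can be sketched as follows: a probe is coded as a sparse combination of all gallery samples and assigned to the class whose own coefficients reconstruct it best. The data here are synthetic, and a plain Lasso coder stands in for the paper's weighted-patch formulation.

```python
# Hedged SRC sketch: sparse-code the probe over the whole gallery, then
# classify by class-wise reconstruction residual (synthetic features).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
# Gallery: 5 samples per class for 2 classes, 30-D features per sample.
A = rng.normal(size=(30, 2))
B = rng.normal(size=(30, 2))
gallery = np.hstack([A @ rng.normal(size=(2, 5)),
                     B @ rng.normal(size=(2, 5))])
labels = np.array(["A"] * 5 + ["B"] * 5)
probe = A @ rng.normal(size=2)          # probe drawn from class A's subspace

coder = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000)
coder.fit(gallery, probe)
coef = coder.coef_

best = min(set(labels), key=lambda c: np.linalg.norm(
    probe - gallery[:, labels == c] @ coef[labels == c]))
print(best)
```

The paper's weighting would additionally emphasize discriminative facial regions when forming the residuals; that step is omitted here.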

  12. Individual differences in cortical face selectivity predict behavioral performance in face recognition.

    PubMed

    Huang, Lijie; Song, Yiying; Li, Jingguang; Zhen, Zonglei; Yang, Zetian; Liu, Jia

    2014-01-01

    In functional magnetic resonance imaging studies, object selectivity is defined as a higher neural response to an object category than other object categories. Importantly, object selectivity is widely considered as a neural signature of a functionally-specialized area in processing its preferred object category in the human brain. However, the behavioral significance of the object selectivity remains unclear. In the present study, we used the individual differences approach to correlate participants' face selectivity in the face-selective regions with their behavioral performance in face recognition measured outside the scanner in a large sample of healthy adults. Face selectivity was defined as the z score of activation with the contrast of faces vs. non-face objects, and the face recognition ability was indexed as the normalized residual of the accuracy in recognizing previously-learned faces after regressing out that for non-face objects in an old/new memory task. We found that the participants with higher face selectivity in the fusiform face area (FFA) and the occipital face area (OFA), but not in the posterior part of the superior temporal sulcus (pSTS), possessed higher face recognition ability. Importantly, the association of face selectivity in the FFA and face recognition ability cannot be accounted for by FFA response to objects or behavioral performance in object recognition, suggesting that the association is domain-specific. Finally, the association is reliable, confirmed by the replication from another independent participant group. In sum, our finding provides empirical evidence on the validity of using object selectivity as a neural signature in defining object-selective regions in the human brain. PMID:25071513

  13. [Neural basis of self-face recognition: social aspects].

    PubMed

    Sugiura, Motoaki

    2012-07-01

Considering the importance of the face in social survival and evidence from evolutionary psychology of visual self-recognition, it is reasonable to expect neural mechanisms for higher social-cognitive processes to underlie self-face recognition. A decade of neuroimaging studies so far has, however, not provided an encouraging finding in this respect. Self-face specific activation has typically been reported in the areas for sensory-motor integration in the right lateral cortices. This observation appears to reflect the physical nature of the self-face, whose representation is developed via the detection of contingency between one's own action and sensory feedback. We have recently revealed that the medial prefrontal cortex, implicated in socially nuanced self-referential processing, is activated during self-face recognition under a rich social context where multiple other faces are available for reference. The posterior cingulate cortex has also exhibited this activation modulation, and in a separate experiment it responded to an attractively manipulated self-face, suggesting its relevance to positive self-value. Furthermore, the regions in the right lateral cortices typically showing self-face-specific activation have responded also to the face of one's close friend under the rich social context. This observation is potentially explained by the fact that the contingency detection underlying physical self-recognition also plays a role in physical social interaction, which characterizes the representation of personally familiar people. These findings demonstrate that neuroscientific exploration reveals multiple facets of the relationship between self-face recognition and social-cognitive processes, and that, technically, the manipulation of social context is key to its success. PMID:22764347

  14. Robust Point Set Matching for Partial Face Recognition.

    PubMed

    Weng, Renliang; Lu, Jiwen; Tan, Yap-Peng

    2016-03-01

Over the past three decades, a number of face recognition methods have been proposed in computer vision, and most of them use holistic face images for person identification. In many real-world scenarios, especially unconstrained environments, human faces may be occluded by other objects, and it is difficult to obtain fully holistic face images for recognition. To address this, we propose a new partial face recognition approach to recognize persons of interest from their partial faces. Given a gallery image and a probe face patch, we first detect keypoints and extract their local textural features. Then, we propose a robust point set matching method to discriminatively match the two extracted local feature sets, where both the textural and geometrical information of the local features are explicitly used for matching simultaneously. Finally, the similarity of two faces is computed as the distance between the two aligned feature sets. Experimental results on four public face data sets show the effectiveness of the proposed approach. PMID:26761775

  15. Object Recognition in Flight: How Do Bees Distinguish between 3D Shapes?

    PubMed

    Werner, Annette; Stürzl, Wolfgang; Zanker, Johannes

    2016-01-01

Honeybees (Apis mellifera) discriminate multiple object features such as colour, pattern and 2D shape, but it remains unknown whether and how bees recover three-dimensional shape. Here we show that bees can recognize objects by their three-dimensional form, whereby they employ an active strategy to uncover the depth profiles. We trained individual, free flying honeybees to collect sugar water from small three-dimensional objects made of styrofoam (sphere, cylinder, cuboids) or folded paper (convex, concave, planar) and found that bees can easily discriminate between these stimuli. We also tested possible strategies employed by the bees to uncover the depth profiles. For the card stimuli, we excluded overall shape and pictorial features (shading, texture gradients) as cues for discrimination. Lacking sufficient stereo vision, bees are known to use speed gradients in optic flow to detect edges; could the bees apply this strategy also to recover the fine details of a surface depth profile? Analysing the bees' flight tracks in front of the stimuli revealed specific combinations of flight maneuvers (lateral translations in combination with yaw rotations), which are particularly suitable to extract depth cues from motion parallax. We modelled the generated optic flow and found characteristic patterns of angular displacement corresponding to the depth profiles of our stimuli: optic flow patterns from pure translations successfully recovered depth relations from the magnitude of angular displacements, while additional rotation provided robust depth information based on the direction of the displacements; thus, the bees' flight maneuvers may reflect an optimized visuo-motor strategy to extract depth structure from motion signals. The robustness and simplicity of this strategy offers an efficient solution for 3D-object-recognition without stereo vision, and could be employed by other flying insects, or mobile robots. PMID:26886006
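The motion-parallax cue discussed above can be illustrated numerically: under a small lateral translation of the viewer, nearby surface points shift by a larger visual angle than distant ones. The geometry below is a toy sketch, not the authors' optic-flow model, and the distances are arbitrary.

```python
# Toy motion-parallax demo: bearing change of a point after a small
# sideways translation of the eye, for two different depths.
import math

def angular_shift(depth, lateral_offset=0.0, translation=0.02):
    """Bearing change (rad) of a point after a small lateral translation."""
    before = math.atan2(lateral_offset, depth)
    after = math.atan2(lateral_offset - translation, depth)
    return abs(after - before)

near, far = angular_shift(depth=0.05), angular_shift(depth=0.10)
print(near > far)  # nearer points produce larger angular displacement
```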

  17. Recognition of face and non-face stimuli in autistic spectrum disorder.

    PubMed

    Arkush, Leo; Smith-Collins, Adam P R; Fiorentini, Chiara; Skuse, David H

    2013-12-01

    The ability to remember faces is critical for the development of social competence. From childhood to adulthood, we acquire a high level of expertise in the recognition of facial images, and neural processes become dedicated to sustaining competence. Many people with autism spectrum disorder (ASD) have poor face recognition memory; changes in hairstyle or other non-facial features in an otherwise familiar person affect their recollection skills. The observation implies that they may not use the configuration of the inner face to achieve memory competence, but bolster performance in other ways. We aimed to test this hypothesis by comparing the performance of a group of high-functioning unmedicated adolescents with ASD and a matched control group on a "surprise" face recognition memory task. We compared their memory for unfamiliar faces with their memory for images of houses. To evaluate the role that is played by peripheral cues in assisting recognition memory, we cropped both sets of pictures, retaining only the most salient central features. ASD adolescents had poorer recognition memory for faces than typical controls, but their recognition memory for houses was unimpaired. Cropping images of faces did not disproportionately influence their recall accuracy, relative to controls. House recognition skills (cropped and uncropped) were similar in both groups. In the ASD group only, performance on both sets of task was closely correlated, implying that memory for faces and other complex pictorial stimuli is achieved by domain-general (non-dedicated) cognitive mechanisms. Adolescents with ASD apparently do not use domain-specialized processing of inner facial cues to support face recognition memory. PMID:23894016

  18. A Benchmark and Comparative Study of Video-Based Face Recognition on COX Face Database.

    PubMed

    Huang, Zhiwu; Shan, Shiguang; Wang, Ruiping; Zhang, Haihong; Lao, Shihong; Kuerban, Alifu; Chen, Xilin

    2015-12-01

Face recognition with still face images has been widely studied, while research on video-based face recognition remains relatively inadequate, especially in terms of benchmark datasets and comparisons. Real-world video-based face recognition applications require techniques for three distinct scenarios: 1) Video-to-Still (V2S); 2) Still-to-Video (S2V); and 3) Video-to-Video (V2V), respectively taking video or still images as query or target. To the best of our knowledge, few datasets and evaluation protocols have been benchmarked for all three scenarios. In order to facilitate the study of this specific topic, this paper contributes a benchmark and comparative study based on a newly collected still/video face database, named COX Face DB. Specifically, we make three contributions. First, we collect and release a large-scale still/video face database to simulate video surveillance with three different video-based face recognition scenarios (i.e., V2S, S2V, and V2V). Second, for benchmarking the three scenarios designed on our database, we review and experimentally compare a number of existing set-based methods. Third, we further propose a novel Point-to-Set Correlation Learning (PSCL) method, and experimentally show that it can be used as a promising baseline method for V2S/S2V face recognition on COX Face DB. Extensive experimental results clearly demonstrate that video-based face recognition needs more effort, and that our COX Face DB is a good benchmark database for evaluation. PMID:26513790
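To make the V2S scenario concrete, the sketch below implements a naive point-to-set baseline: a still-image feature vector is matched to each subject's set of video-frame features by minimum Euclidean distance. This is not the paper's PSCL method, and all features are synthetic stand-ins.

```python
# Naive point-to-set baseline for Video-to-Still matching
# (synthetic features; not the paper's PSCL method).
import numpy as np

rng = np.random.default_rng(5)

def point_to_set_distance(point, frame_set):
    """Minimum Euclidean distance from one vector to a set of vectors."""
    return np.linalg.norm(frame_set - point, axis=1).min()

# Two subjects, each with 20 video-frame feature vectors around a centroid.
centroids = {"s1": rng.normal(size=64), "s2": rng.normal(size=64)}
videos = {k: c + 0.1 * rng.normal(size=(20, 64))
          for k, c in centroids.items()}

still = centroids["s1"] + 0.1 * rng.normal(size=64)   # probe still image
match = min(videos, key=lambda k: point_to_set_distance(still, videos[k]))
print(match)
```

Set-based methods like those benchmarked in the paper replace this minimum-distance rule with richer models of the frame set (subspaces, manifolds, or learned correlations).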

  19. Challenges faced in applying 3D noncontact metrology to turbine engine blade inspection

    NASA Astrophysics Data System (ADS)

    Ross, Joseph; Harding, Kevin; Hogarth, Eric

    2011-08-01

3D non-contact inspection systems are becoming more capable and affordable; however, successful application to complex parts requires understanding the remaining system limitations. Turbine airfoils are key components used in several important industries, and they present unique challenges for any metrology application. Issues such as surface finish, complicated shapes, and unique geometries exercise many of the key capabilities of a non-contact 3D measurement system. Therefore, many of the shortcomings of any 3D method become evident in airfoil measurement applications. This paper addresses the key challenges posed by complicated shapes such as airfoils, and the gaps that still exist in the application of the technology.

  20. Framework for performance evaluation of face recognition algorithms

    NASA Astrophysics Data System (ADS)

    Black, John A., Jr.; Gargesha, Madhusudhana; Kahol, Kanav; Kuchi, Prem; Panchanathan, Sethuraman

    2002-07-01

Face detection and recognition is becoming increasingly important in the contexts of surveillance, credit card fraud detection, assistive devices for the visually impaired, etc. A number of face recognition algorithms have been proposed in the literature. The availability of a comprehensive face database is crucial for testing the performance of these face recognition algorithms. However, while existing publicly available face databases contain face images with a wide variety of pose angles, illumination angles, gestures, face occlusions, and illuminant colors, these images have not been adequately annotated, thus limiting their usefulness for evaluating the relative performance of face detection algorithms. For example, many of the images in existing databases are not annotated with the exact pose angles at which they were taken. In order to compare the performance of various face recognition algorithms presented in the literature, there is a need for a comprehensive, systematically annotated database populated with face images that have been captured (1) at a variety of pose angles (to permit testing of pose invariance), (2) with a wide variety of illumination angles (to permit testing of illumination invariance), and (3) under a variety of commonly encountered illumination color temperatures (to permit testing of illumination color invariance). In this paper, we present a methodology for creating such an annotated database that employs a novel set of apparatus for the rapid capture of face images from a wide variety of pose and illumination angles. Four different types of illumination are used, including daylight, skylight, incandescent, and fluorescent. The entire set of images, as well as the annotations and the experimental results, is being placed in the public domain and made available for download over the World Wide Web.

  1. Real-time optoelectronic morphological processor for human face recognition

    NASA Astrophysics Data System (ADS)

    Liu, Haisong; Wu, Minxian; Jin, Guofan; Cheng, Gang; He, Qingsheng

    1998-01-01

Many commercial and law enforcement applications of face recognition need to be high-speed and real-time, such as passing through customs quickly while ensuring security. However, face recognition using computers alone is time-consuming due to the intensive calculation involved. Recently, optical implementations of real-time face recognition have attracted much attention. In this paper, a real-time optoelectronic morphological processor for face recognition is presented. It is based on an original-complementary composite encoding hit-or-miss transformation, which combines the foreground and background of an image into a whole. One liquid-crystal display panel is used as two real-time SLMs for both the stored images and the face images to be recognized, each of 256 × 256 pixels. A speed of 40 frames/s and four-channel recognition ability have been achieved. The experimental results show that the processor has an accuracy of over 90% and tolerance to rotation up to 8 deg, noise disturbance up to 25%, and image loss up to 40%.
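The hit-or-miss transformation at the heart of the processor has a standard digital analogue, sketched here with SciPy: the output fires only where a foreground template and a background template both match, mirroring the original-complementary pairing described above. The templates and image below are illustrative only.

```python
# Digital hit-or-miss transform sketch: detect an isolated "on" pixel
# by requiring both a foreground and a background template to match.
import numpy as np
from scipy import ndimage

image = np.zeros((7, 7), dtype=int)
image[3, 3] = 1                        # a single isolated "on" pixel

hit = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]])   # foreground must match
miss = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])  # background must match

out = ndimage.binary_hit_or_miss(image, structure1=hit, structure2=miss)
print(bool(out[3, 3]), int(out.sum()))  # fires only at the isolated pixel
```

The optical processor performs this same foreground/background pairing in parallel across the whole image plane, which is what makes the 40 frames/s throughput possible.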

  2. Preadolescents' recognition of faces of unfamiliar peers: the effect of attractiveness of faces.

    PubMed

    Mallet, Pascal; Lallemand, Noëlle

    2003-12-01

    The authors examined preadolescents' ability to recognize faces of unfamiliar peers according to their attractiveness. They hypothesized that highly attractive faces would be less accurately recognized than moderately attractive faces because the former are more typical. In Experiment 1, 106 participants (M age = 10 years) were asked to recognize faces of unknown peers who varied in gender and attractiveness (high- vs. medium-attractiveness). Results showed that attractiveness enhanced the accuracy of recognition for boys' faces and impaired recognition of girls' faces. The same interaction was found in Experiment 2, in which 92 participants (M age = 12 years) were tested for their recognition of another set of faces of unfamiliar peers. The authors conducted Experiment 3 to examine whether the reason for that interaction is that high- and medium-attractive girls' faces differ more in typicality than do boys' faces. The effect size of attractiveness on typicality was similar for boys' and girls' faces. The overall results are discussed with reference to the development of face encoding and biological gender differences with respect to the typicality of faces during preadolescence. PMID:14719778

  3. 3D Exploration of Meteorological Data: Facing the challenges of operational forecasters

    NASA Astrophysics Data System (ADS)

    Koutek, Michal; Debie, Frans; van der Neut, Ian

    2016-04-01

In the past years the Royal Netherlands Meteorological Institute (KNMI) has been working on innovation in the field of meteorological data visualization. We are dealing with Numerical Weather Prediction (NWP) model data and observational data, i.e. satellite images, precipitation radar, and ground and air-borne measurements. These multidimensional, multivariate data are geo-referenced and can be combined in 3D space to provide more intuitive views of atmospheric phenomena. We developed the Weather3DeXplorer (W3DX), a visualization framework for processing, interactive exploration, and visualization using Virtual Reality (VR) technology, and have had considerable success applying it in research studies of extreme weather situations. In this paper we will elaborate on what we have learned from applying interactive 3D visualization in the operational weather room. We will explain how important it is to control the degrees of freedom given to the users (forecasters and scientists) during interaction; 3D camera and 3D slicing-plane navigation appear to be rather difficult for users when not implemented properly. We will present a novel approach to operational 3D visualization user interfaces (UIs) that largely eliminates the obstacle of, and the time it usually takes for, setting up the visualization parameters and an appropriate camera view on a given atmospheric phenomenon. We found our inspiration in the way our operational forecasters work in the weather room, and decided to form a bridge between 2D visualization images and interactive 3D exploration. Our method combines WEB-based 2D UIs and a pre-rendered 3D visualization catalog for the latest NWP model runs with immediate entry into an interactive 3D session for a selected visualization setting. Finally, we present the first user experiences with this approach.

  4. Face recognition using facial expression: a novel approach

    NASA Astrophysics Data System (ADS)

    Singh, Deepak Kumar; Gupta, Priya; Tiwary, U. S.

    2008-04-01

    Facial expressions are undoubtedly the most effective form of nonverbal communication. The face has always been the equation of a person's identity; it draws the demarcation line between identity and extinction. Each line on the face adds an attribute to the identity. These lines become prominent when we experience an emotion, and they do not change completely with age. In this paper we propose a new technique for face recognition which focuses on the facial expressions of the subject to identify his or her face. This is a grey area on which not much light has been thrown earlier. According to earlier research it is difficult to alter one's natural expression, so our technique will be beneficial for identifying occluded or intentionally disguised faces. The test results of the experiments conducted show that this technique can give a new direction to the field of face recognition, provide a strong base for the area, and serve as a core method for critical defense and security related issues.

  5. Face Recognition by Metropolitan Police Super-Recognisers

    PubMed Central

    Robertson, David J.; Noyes, Eilidh; Dowsett, Andrew J.; Jenkins, Rob; Burton, A. Mike

    2016-01-01

    Face recognition is used to prove identity across a wide variety of settings. Despite this, research consistently shows that people are typically rather poor at matching faces to photos. Some professional groups, such as police and passport officers, have been shown to perform just as poorly as the general public on standard tests of face recognition. However, face recognition skills are subject to wide individual variation, with some people showing exceptional ability—a group that has come to be known as ‘super-recognisers’. The Metropolitan Police Force (London) recruits ‘super-recognisers’ from within its ranks, for deployment on various identification tasks. Here we test four working super-recognisers from within this police force, and ask whether they are really able to perform at levels above control groups. We consistently find that the police ‘super-recognisers’ perform at well above normal levels on tests of unfamiliar and familiar face matching, with degraded as well as high quality images. Recruiting employees with high levels of skill in these areas, and allocating them to relevant tasks, is an efficient way to overcome some of the known difficulties associated with unfamiliar face recognition. PMID:26918457

  7. Familiarity is not notoriety: phenomenological accounts of face recognition

    PubMed Central

    Liccione, Davide; Moruzzi, Sara; Rossi, Federica; Manganaro, Alessia; Porta, Marco; Nugrahaningsih, Nahumi; Caserio, Valentina; Allegri, Nicola

    2014-01-01

    From a phenomenological perspective, faces are perceived differently from objects as their perception always involves the possibility of a relational engagement (Bredlau, 2011). This is especially true for familiar faces, i.e., faces of people with a history of real relational engagements. Similarly, the valence of emotional expressions assumes a key role, as it defines the sense and direction of this engagement. Following these premises, the aim of the present study is to demonstrate that face recognition is facilitated by at least two variables, familiarity and emotional expression, and that perception of familiar faces is not influenced by orientation. In order to verify this hypothesis, we implemented a 3 × 3 × 2 factorial design, showing 17 healthy subjects three types of faces (unfamiliar, personally familiar, famous) characterized by three different emotional expressions (happy, angry/sad, neutral) and in two different orientations (upright vs. inverted). We showed every subject a total of 180 faces with the instruction to give a familiarity judgment. Reaction times (RTs) were recorded, and we found that recognition of a face is facilitated by personal familiarity and emotional expression, that this process is independent of cognitive elaboration of the stimuli, and that it remains stable despite orientation. These results highlight the need to make a distinction between famous and personally familiar faces when studying face perception and to consider its historical aspects from a phenomenological point of view. PMID:25225476

  8. Eye contrast polarity is critical for face recognition by infants.

    PubMed

    Otsuka, Yumiko; Motoyoshi, Isamu; Hill, Harold C; Kobayashi, Megumi; Kanazawa, So; Yamaguchi, Masami K

    2013-07-01

    Just as faces share the same basic arrangement of features, with two eyes above a nose above a mouth, human eyes all share the same basic contrast polarity relations, with a sclera lighter than an iris and a pupil, and this is unique among primates. The current study examined whether this bright-dark relationship of sclera to iris plays a critical role in face recognition from early in development. Specifically, we tested face discrimination in 7- and 8-month-old infants while independently manipulating the contrast polarity of the eye region and of the rest of the face. This gave four face contrast polarity conditions: fully positive condition, fully negative condition, positive face with negated eyes ("negative eyes") condition, and negated face with positive eyes ("positive eyes") condition. In a familiarization and novelty preference procedure, we found that 7- and 8-month-olds could discriminate between faces only when the contrast polarity of the eyes was preserved (positive) and that this did not depend on the contrast polarity of the rest of the face. This demonstrates the critical role of eye contrast polarity for face recognition in 7- and 8-month-olds and is consistent with previous findings for adults. PMID:23499321
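
    The four contrast-polarity conditions described above can be sketched in a few lines of image code (a sketch only; the hypothetical rectangular eye box stands in for the study's actual eye-region masks):

```python
import numpy as np

def polarity_conditions(face, eye_box):
    """Return the four contrast-polarity conditions used in the study.

    face    : 2-D uint8 array (grayscale face image)
    eye_box : (top, bottom, left, right) rectangle covering the eye region
              (a hypothetical placeholder for the study's eye masks)
    """
    t, b, l, r = eye_box
    mask = np.zeros(face.shape, dtype=bool)
    mask[t:b, l:r] = True

    positive = face.copy()                     # fully positive condition
    negative = 255 - face                      # fully negative condition
    neg_eyes = face.copy()
    neg_eyes[mask] = 255 - neg_eyes[mask]      # positive face, negated eyes
    pos_eyes = negative.copy()
    pos_eyes[mask] = face[mask]                # negated face, positive eyes
    return positive, negative, neg_eyes, pos_eyes
```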

  9. The hows and whys of face memory: level of construal influences the recognition of human faces

    PubMed Central

    Wyer, Natalie A.; Hollins, Timothy J.; Pahl, Sabine; Roper, Jean

    2015-01-01

    Three experiments investigated the influence of level of construal (i.e., the interpretation of actions in terms of their meaning or their details) on different stages of face memory. We employed a standard multiple-face recognition paradigm, with half of the faces inverted at test. Construal level was manipulated prior to recognition (Experiment 1), during study (Experiment 2) or both (Experiment 3). The results support a general advantage for high-level construal over low-level construal at both study and at test, and suggest that matching processing style between study and recognition has no advantage. These experiments provide additional evidence in support of a link between semantic processing (i.e., construal) and visual (i.e., face) processing. We conclude with a discussion of implications for current theories relating to both construal and face processing. PMID:26500586

  10. Cellular Phone Face Recognition System Based on Optical Phase Correlation

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Sayuri; Ohta, Maiko; Kodate, Kashiko

    We propose a high-security facial recognition system using a cellular phone on the mobile network. The system is composed of a face recognition engine based on optical phase correlation, which uses phase information with emphasis on the Fourier domain, a control server, and the cellular phone with a compact camera as a portable picture-taking terminal. Compared with various correlation methods, our face recognition engine achieved the best equal error rate (EER), below 1%. Using the JAVA interface of this system, we implemented a stable picture-taking system with functions to prevent spoofing while transferring images. The recognition system was tested on 300 female students, and the results proved the system effective.
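
    The optical engine itself is hardware, but its core idea, correlating with phase information only in the Fourier domain, has a simple digital analogue (a sketch, not the authors' implementation):

```python
import numpy as np

def phase_correlation_peak(img_a, img_b):
    """Digital analogue of Fourier-domain phase correlation: keep only the
    phase of the cross-power spectrum and return the correlation peak height
    (close to 1.0 for matching images, much lower for mismatched ones)."""
    Fa = np.fft.fft2(img_a)
    Fb = np.fft.fft2(img_b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12          # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    return corr.max()
```

    Because only phase is used, the peak stays near 1.0 even when the second image is a shifted copy of the first.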

  11. Image quality-based adaptive illumination normalisation for face recognition

    NASA Astrophysics Data System (ADS)

    Sellahewa, Harin; Jassim, Sabah A.

    2009-05-01

    Automatic face recognition is a challenging task due to intra-class variations. Changes in lighting conditions during enrolment and identification stages contribute significantly to these intra-class variations. A common approach to addressing the effects of such varying conditions is to pre-process the biometric samples in order to normalise intra-class variations. Histogram equalisation is a widely used illumination normalisation technique in face recognition. However, a recent study has shown that applying histogram equalisation to well-lit face images can decrease recognition accuracy. This paper presents a dynamic approach to illumination normalisation, based on face image quality. The quality of a given face image is measured in terms of its luminance distortion by comparing the image against a known reference face image. Histogram equalisation is applied to a probe image only if its luminance distortion is higher than a predefined threshold. We tested the proposed adaptive illumination normalisation method on the widely used Extended Yale Face Database B. Identification results demonstrate that our adaptive normalisation produces better identification accuracy than the conventional approach, where every image is normalised irrespective of the lighting conditions under which it was acquired.
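
    A minimal sketch of the adaptive scheme. The luminance-distortion measure here (the luminance term of the universal image-quality index) and the threshold value are assumptions for illustration, not necessarily the paper's exact choices:

```python
import numpy as np

def luminance_quality(img, ref):
    """Luminance term of the universal image-quality index (an assumed
    choice of luminance-distortion measure): 1.0 means identical mean
    brightness; smaller values mean larger luminance distortion."""
    mu_x, mu_y = img.mean(), ref.mean()
    return 2 * mu_x * mu_y / (mu_x ** 2 + mu_y ** 2)

def histeq(img):
    """Plain histogram equalisation for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size
    return (cdf[img] * 255).astype(np.uint8)

def adaptive_normalise(img, ref, threshold=0.99):
    """Equalise only probe images whose luminance quality falls below the
    (hypothetical) threshold; well-lit probes are left untouched."""
    if luminance_quality(img, ref) < threshold:
        return histeq(img)
    return img
```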

  12. A Markov Random Field Groupwise Registration Framework for Face Recognition

    PubMed Central

    Liao, Shu; Shen, Dinggang; Chung, Albert C.S.

    2014-01-01

    In this paper, we propose a new framework for tackling the face recognition problem, which is formulated as a groupwise deformable image registration and feature matching problem. The main contributions of the proposed method lie in the following aspects: (1) Each pixel in a facial image is represented by an anatomical signature obtained from its corresponding most salient scale local region, determined by the survival exponential entropy (SEE) information-theoretic measure. (2) Based on the anatomical signature calculated from each pixel, a novel Markov random field based groupwise registration framework is proposed to formulate the face recognition problem as a feature-guided deformable image registration problem. The similarity between different facial images is measured on a nonlinear Riemannian manifold based on the deformable transformations. (3) The proposed method does not suffer from the generalizability problem that commonly exists in learning-based algorithms. The proposed method has been extensively evaluated on four publicly available databases: FERET, CAS-PEAL-R1, FRGC ver 2.0, and LFW. It is also compared with several state-of-the-art face recognition approaches, and experimental results demonstrate that the proposed method consistently achieves the highest recognition rates among all the methods under comparison. PMID:25506109

  13. Face recognition by using optical correlator with wavelet preprocessing

    NASA Astrophysics Data System (ADS)

    Strzelecki, Jacek; Chalasinska-Macukow, Katarzyna

    2004-08-01

    A method of face recognition using an optical correlator with wavelet preprocessing is presented. The wavelet transform is used to improve the performance of a standard Vander Lugt correlator with a phase-only filter (POF). The influence of various wavelet transforms of images of human faces on the recognition results has been analyzed. The quality of the face recognition process was tested according to two criteria: the peak-to-correlation-energy ratio (PCE) and the discrimination capability (DC). Additionally, proper localization of the correlation peak was verified. During the preprocessing step a set of three wavelets (Mexican hat, Haar, and Gabor) with various scales was used; in addition, Gabor wavelets were tested at various orientation angles. During the recognition procedure the input scene and the POF are transformed by the same wavelet. We show the results of computer simulations for a variety of images of human faces: original images without any distortions, noisy images, and images with non-uniform illumination. A comparison of recognition results obtained with and without wavelet preprocessing is given.
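
    The two quality criteria, and a digital stand-in for the Vander Lugt correlator with a phase-only filter, can be sketched as follows (common textbook definitions; the paper's exact formulas may differ):

```python
import numpy as np

def pof_correlate(scene, ref):
    """Digital sketch of a Vander Lugt correlator with a phase-only filter:
    the filter keeps only the phase of the reference spectrum."""
    S = np.fft.fft2(scene)
    R = np.fft.fft2(ref)
    pof = np.conj(R) / (np.abs(R) + 1e-12)   # phase-only filter
    return np.fft.ifft2(S * pof)

def pce(corr):
    """Peak-to-correlation-energy ratio: peak intensity divided by the
    total energy of the correlation plane."""
    return np.abs(corr).max() ** 2 / np.sum(np.abs(corr) ** 2)

def dc(true_corr, false_corr):
    """Discrimination capability (one common definition): how much lower
    the best false-class peak is than the true-class peak."""
    return 1.0 - np.abs(false_corr).max() ** 2 / np.abs(true_corr).max() ** 2
```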

  14. FACELOCK-Lock Control Security System Using Face Recognition-

    NASA Astrophysics Data System (ADS)

    Hirayama, Takatsugu; Iwai, Yoshio; Yachida, Masahiko

    A security system using biometric person authentication technologies is suited to various high-security situations. Technology based on face recognition has advantages such as lower user resistance and lower stress. However, facial appearance changes with pose, expression, lighting, and age. We have developed the FACELOCK security system based on our face recognition methods, which are robust to variations in facial appearance other than pose. Our system consists of clients and a server; each client communicates with the server through our protocol over a LAN. Users of our system do not need to be careful about their facial appearance.

  15. The Role of Higher Level Adaptive Coding Mechanisms in the Development of Face Recognition

    ERIC Educational Resources Information Center

    Pimperton, Hannah; Pellicano, Elizabeth; Jeffery, Linda; Rhodes, Gillian

    2009-01-01

    Developmental improvements in face identity recognition ability are widely documented, but the source of children's immaturity in face recognition remains unclear. Differences in the way in which children and adults visually represent faces might underlie immaturities in face recognition. Recent evidence of a face identity aftereffect (FIAE),…

  16. Two dimensional discriminant neighborhood preserving embedding in face recognition

    NASA Astrophysics Data System (ADS)

    Pang, Meng; Jiang, Jifeng; Lin, Chuang; Wang, Binghui

    2015-03-01

    One of the key issues in face recognition is how to extract features from face images. In this paper, we propose a novel method, named two-dimensional discriminant neighborhood preserving embedding (2DDNPE), for image feature extraction and face recognition. 2DDNPE benefits from four techniques: neighborhood preserving embedding (NPE), locality preserving projection (LPP), image-based projection, and the Fisher criterion. Firstly, NPE and LPP are two popular manifold learning techniques which can optimally preserve the local geometric structure of the original samples from different angles. Secondly, image-based projection enables us to extract the optimal projection vectors directly from two-dimensional image matrices rather than vectors, which avoids the small sample size problem and preserves useful structural information embedded in the original images. Finally, the Fisher criterion applied in 2DDNPE can boost face recognition rates by minimizing the within-class distance while maximizing the between-class distance. To evaluate the performance of 2DDNPE, several experiments are conducted on the ORL and Yale face datasets. The results corroborate that 2DDNPE outperforms existing 1D feature extraction methods, such as NPE, LPP, LDA and PCA, across all experiments with respect to recognition rate and training time. 2DDNPE also delivers consistently promising results compared with other competing 2D methods such as 2DNPP, 2DLPP, 2DLDA and 2DPCA.
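
    The image-based projection idea that 2DDNPE builds on can be illustrated with a 2DPCA-style sketch (this is not the full 2DDNPE objective, which additionally combines NPE, LPP and the Fisher criterion):

```python
import numpy as np

def image_based_projection(images, k):
    """Learn k projection vectors directly from 2-D image matrices
    (2DPCA-style sketch of image-based projection, not full 2DDNPE):
    no flattening, so the column scatter matrix stays small."""
    mean = np.mean(images, axis=0)
    # image scatter matrix: sum of (A - mean)^T (A - mean), size cols x cols
    G = sum((a - mean).T @ (a - mean) for a in images)
    vals, vecs = np.linalg.eigh(G)
    W = vecs[:, np.argsort(vals)[::-1][:k]]   # top-k eigenvectors
    return [a @ W for a in images], W
```

    Each image of shape (rows, cols) is reduced to a (rows, k) feature matrix, which is why the small-sample-size problem of vector-based methods is avoided.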

  17. Efficient Detection of Occlusion prior to Robust Face Recognition

    PubMed Central

    Dugelay, Jean-Luc

    2014-01-01

    While there has been an enormous amount of research on face recognition under pose/illumination/expression changes and image degradations, problems caused by occlusions have attracted relatively less attention. Facial occlusions, due, for example, to sunglasses, hat/cap, scarf, and beard, can significantly deteriorate the performance of face recognition systems in uncontrolled environments such as video surveillance. The goal of this paper is to explore face recognition in the presence of partial occlusions, with emphasis on real-world scenarios (e.g., sunglasses and scarf). In this paper, we propose an efficient approach which consists of first analysing the presence of potential occlusion on a face and then conducting face recognition on the nonoccluded facial regions based on selective local Gabor binary patterns. Experiments demonstrate that the proposed method outperforms the state-of-the-art works including KLD-LGBPHS, S-LNMF, OA-LBP, and RSC. Furthermore, evaluations of the proposed approach under illumination and extreme facial expression changes also yield significant results. PMID:24526902

  18. Can Massive but Passive Exposure to Faces Contribute to Face Recognition Abilities?

    ERIC Educational Resources Information Center

    Yovel, Galit; Halsband, Keren; Pelleg, Michel; Farkash, Naomi; Gal, Bracha; Goshen-Gottstein, Yonatan

    2012-01-01

    Recent studies have suggested that individuation of other-race faces is more crucial for enhancing recognition performance than exposure that involves categorization of these faces to an identity-irrelevant criterion. These findings were primarily based on laboratory training protocols that dissociated exposure and individuation by using…

  19. Generating virtual training samples for sparse representation of face images and face recognition

    NASA Astrophysics Data System (ADS)

    Du, Yong; Wang, Yu

    2016-03-01

    There are many challenges in face recognition. In real-world scenes, images of the same face vary with changing illuminations, different expressions and poses, multiform ornaments, or even altered mental status. Limited available training samples cannot convey these possible changes in the training phase sufficiently, and this has become one of the restrictions to improve the face recognition accuracy. In this article, we view the multiplication of two images of the face as a virtual face image to expand the training set and devise a representation-based method to perform face recognition. The generated virtual samples really reflect some possible appearance and pose variations of the face. By multiplying a training sample with another sample from the same subject, we can strengthen the facial contour feature and greatly suppress the noise. Thus, more human essential information is retained. Also, uncertainty of the training data is simultaneously reduced with the increase of the training samples, which is beneficial for the training phase. The devised representation-based classifier uses both the original and new generated samples to perform the classification. In the classification phase, we first determine K nearest training samples for the current test sample by calculating the Euclidean distances between the test sample and training samples. Then, a linear combination of these selected training samples is used to represent the test sample, and the representation result is used to classify the test sample. The experimental results show that the proposed method outperforms some state-of-the-art face recognition methods.
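
    The two ingredients described above, virtual samples from pairwise image products and a K-nearest representation-based classifier, can be sketched as follows (an illustration of the described scheme; details such as the value of K are chosen arbitrarily):

```python
import numpy as np

def virtual_samples(class_samples):
    """Element-wise products of image pairs from the same subject,
    used as extra (virtual) training samples."""
    out = []
    for i in range(len(class_samples)):
        for j in range(i + 1, len(class_samples)):
            out.append(class_samples[i] * class_samples[j])
    return out

def knn_representation_classify(test, train, labels, k=3):
    """Represent the test sample as a linear combination of its K nearest
    training samples (Euclidean distance), then assign the class whose
    selected samples leave the smallest representation residual."""
    X = np.stack([t.ravel() for t in train])
    y = test.ravel()
    idx = np.argsort(np.linalg.norm(X - y, axis=1))[:k]
    coeffs, *_ = np.linalg.lstsq(X[idx].T, y, rcond=None)
    best, best_err = None, np.inf
    for c in set(labels[i] for i in idx):
        sel = [n for n, i in enumerate(idx) if labels[i] == c]
        err = np.linalg.norm(y - X[idx][sel].T @ coeffs[sel])
        if err < best_err:
            best, best_err = c, err
    return best
```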

  20. Face recognition with histograms of fractional differential gradients

    NASA Astrophysics Data System (ADS)

    Yu, Lei; Ma, Yan; Cao, Qi

    2014-05-01

    It has proved that fractional differentiation can enhance the edge information and nonlinearly preserve textural detailed information in an image. This paper investigates its ability for face recognition and presents a local descriptor called histograms of fractional differential gradients (HFDG) to extract facial visual features. HFDG encodes a face image into gradient patterns using multiorientation fractional differential masks, from which histograms of gradient directions are computed as the face representation. Experimental results on Yale, face recognition technology (FERET), Carnegie Mellon University pose, illumination, and expression (CMU PIE), and A. Martinez and R. Benavente (AR) databases validate the feasibility of the proposed method and show that HFDG outperforms local binary patterns (LBP), histograms of oriented gradients (HOG), enhanced local directional patterns (ELDP), and Gabor feature-based methods.
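
    A simplified, single-scale version of the descriptor can be sketched with Grünwald-Letnikov fractional-difference coefficients (the paper uses multi-orientation masks; this sketch uses only horizontal and vertical ones):

```python
import numpy as np

def gl_mask(v, n=3):
    """First n Grünwald-Letnikov coefficients of the order-v fractional
    derivative: 1, -v, v(v-1)/2, ... (v=1 recovers the first difference)."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - v) / k)
    return np.array(w)

def fractional_gradients(img, v=0.5):
    """Fractional-differential gradients along x and y (valid region only):
    output at position p is sum_k w[k] * img[p - k]."""
    w = gl_mask(v)
    n = len(w)
    gx = sum(w[k] * img[:, n - 1 - k:img.shape[1] - k] for k in range(n))
    gy = sum(w[k] * img[n - 1 - k:img.shape[0] - k, :] for k in range(n))
    return gx, gy

def hfdg_histogram(img, v=0.5, bins=8):
    """Magnitude-weighted histogram of fractional-gradient directions
    (a simplified, single-mask version of the HFDG descriptor)."""
    gx, gy = fractional_gradients(img, v)
    n = len(gl_mask(v))
    gx, gy = gx[n - 1:, :], gy[:, n - 1:]    # crop to a common valid region
    ang = np.arctan2(gy, gx)
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi),
                           weights=np.hypot(gx, gy))
    return hist / (hist.sum() + 1e-12)
```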

  1. Holistic face processing can inhibit recognition of forensic facial composites.

    PubMed

    McIntyre, Alex H; Hancock, Peter J B; Frowd, Charlie D; Langton, Stephen R H

    2016-04-01

    Facial composite systems help eyewitnesses to show the appearance of criminals. However, likenesses created by unfamiliar witnesses will not be completely accurate, and people familiar with the target can find them difficult to identify. Faces are processed holistically; we explore whether this impairs identification of inaccurate composite images and whether recognition can be improved. In Experiment 1 (n = 64) an imaging technique was used to make composites of celebrity faces more accurate and identification was contrasted with the original composite images. Corrected composites were better recognized, confirming that errors in production of the likenesses impair identification. The influence of holistic face processing was explored by misaligning the top and bottom parts of the composites (cf. Young, Hellawell, & Hay, 1987). Misalignment impaired recognition of corrected composites but identification of the original, inaccurate composites significantly improved. This effect was replicated with facial composites of noncelebrities in Experiment 2 (n = 57). We conclude that, like real faces, facial composites are processed holistically: recognition is impaired because unlike real faces, composites contain inaccuracies and holistic face processing makes it difficult to perceive identifiable features. This effect was consistent across composites of celebrities and composites of people who are personally familiar. Our findings suggest that identification of forensic facial composites can be enhanced by presenting composites in a misaligned format. PMID:26436334
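
    The misaligned format can be produced by shifting the bottom half of the composite sideways (a sketch; the offset value is a hypothetical parameter, and vacated pixels are zero-padded):

```python
import numpy as np

def misalign(face, offset):
    """Misalign a composite (cf. Young et al., 1987): shift the bottom
    half of the image sideways to disrupt holistic processing."""
    h = face.shape[0] // 2
    out = face.copy()
    bottom = np.zeros_like(face[h:])
    if offset > 0:
        bottom[:, offset:] = face[h:, :-offset]   # shift right
    elif offset < 0:
        bottom[:, :offset] = face[h:, -offset:]   # shift left
    else:
        bottom = face[h:].copy()
    out[h:] = bottom
    return out
```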

  2. Road Signs Detection and Recognition Utilizing Images and 3d Point Cloud Acquired by Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Li, Y. H.; Shinohara, T.; Satoh, T.; Tachibana, K.

    2016-06-01

    High-definition and highly accurate road maps are necessary for the realization of automated driving, and road signs are among the most important elements in the road map. Therefore, a technique is needed which can acquire information about all kinds of road signs automatically and efficiently. Due to the continuous technical advancement of Mobile Mapping Systems (MMS), it has become possible to acquire a large number of images and 3D point clouds efficiently, with highly precise position information. In this paper, we present an automatic road sign detection and recognition approach utilizing both images and 3D point clouds acquired by MMS. The proposed approach consists of three stages: 1) detection of road signs from images based on their color and shape features using an object-based image analysis method, 2) filtering out of over-detected candidates utilizing size and position information estimated from the 3D point cloud, the candidate regions, and camera information, and 3) road sign recognition using a template matching method after shape normalization. The effectiveness of the proposed approach was evaluated on a test dataset acquired from more than 180 km of different types of roads in Japan. The results show a very high success rate in detection and recognition of road signs, even under challenging conditions such as discoloration, deformation, and partial occlusion.
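
    Stage 3, template matching after shape normalization, can be sketched with zero-mean normalised cross-correlation (one standard matching score; the paper does not specify its exact variant):

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalised cross-correlation between equal-size patches,
    in [-1, 1]; 1 means a perfect (affine-brightness-invariant) match."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum()) + 1e-12
    return float((a * b).sum() / denom)

def match_template(image, template):
    """Slide the template over the image; return the best score and the
    top-left position where it occurs (brute-force sketch)."""
    th, tw = template.shape
    best, pos = -1.0, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            s = ncc(image[i:i + th, j:j + tw], template)
            if s > best:
                best, pos = s, (i, j)
    return best, pos
```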

  3. Orienting to face expression during encoding improves men's recognition of own gender faces.

    PubMed

    Fulton, Erika K; Bulluck, Megan; Hertzog, Christopher

    2015-10-01

    It is unclear why women have superior episodic memory of faces, but the benefit may be partially the result of women engaging in superior processing of facial expressions. Therefore, we hypothesized that orienting instructions to attend to facial expression at encoding would significantly improve men's memory of faces and possibly reduce gender differences. We directed 203 college students (122 women) to study 120 faces under instructions to orient to either the person's gender or their emotional expression. They later took a recognition test of these faces by either judging whether they had previously studied the same person or that person with the exact same expression; the latter test evaluated recollection of specific facial details. Orienting to facial expressions during encoding significantly improved men's recognition of own-gender faces and eliminated the advantage that women had for male faces under gender orienting instructions. Although gender differences in spontaneous strategy use when orienting to faces cannot fully account for gender differences in face recognition, orienting men to facial expression during encoding is one way to significantly improve their episodic memory for male faces. PMID:26295282

  4. Suitable models for face geometry normalization in facial expression recognition

    NASA Astrophysics Data System (ADS)

    Sadeghi, Hamid; Raie, Abolghasem A.

    2015-01-01

    Recently, facial expression recognition has attracted much attention in machine vision research because of its various applications. Accordingly, many facial expression recognition systems have been proposed. However, the majority of existing systems suffer from a critical problem: geometric variability. It directly affects the performance of geometric feature-based facial expression recognition approaches, and it is also a crucial challenge in appearance feature-based techniques. This variability appears in both neutral faces and facial expressions. Appropriate face geometry normalization can improve the accuracy of any facial expression recognition system. Therefore, this paper proposes different geometric models or shapes for normalization. Face geometry normalization removes geometric variability from facial images, so that appearance feature extraction methods can represent facial images accurately. Thus, several expression-based geometric models are proposed for facial image normalization. Next, local binary patterns and local phase quantization are used for appearance feature extraction. A combination of effective geometric normalization with accurate appearance representations results in more than a 4% accuracy improvement compared to several state-of-the-art methods in facial expression recognition. Moreover, utilizing models of facial expressions with larger mouth and eye regions gives higher accuracy, due to the importance of these regions in facial expression.
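
    The local binary pattern features used for appearance representation can be sketched in their basic 8-neighbour form (the paper also uses local phase quantization, which is omitted here):

```python
import numpy as np

def lbp_8_1(img):
    """Basic 8-neighbour local binary pattern codes for interior pixels:
    each neighbour at least as bright as the centre sets one bit, giving
    a code in 0..255 per pixel."""
    c = img[1:-1, 1:-1].astype(np.int32)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx].astype(np.int32)
        code += (nb >= c).astype(np.int32) << bit
    return code.astype(np.uint8)
```

    In practice the face is divided into regions and a histogram of these codes per region forms the feature vector.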

  5. Face Recognition with Multi-Resolution Spectral Feature Images

    PubMed Central

    Sun, Zhan-Li; Lam, Kin-Man; Dong, Zhao-Yang; Wang, Han; Gao, Qing-Wei; Zheng, Chun-Hou

    2013-01-01

    The one-sample-per-person problem has become an active research topic for face recognition in recent years because of its challenges and significance for real-world applications. However, achieving relatively higher recognition accuracy is still a difficult problem due to, usually, too few training samples being available and variations of illumination and expression. To alleviate the negative effects caused by these unfavorable factors, in this paper we propose a more accurate spectral feature image-based 2DLDA (two-dimensional linear discriminant analysis) ensemble algorithm for face recognition, with one sample image per person. In our algorithm, multi-resolution spectral feature images are constructed to represent the face images; this can greatly enlarge the training set. The proposed method is inspired by our finding that, among these spectral feature images, features extracted from some orientations and scales using 2DLDA are not sensitive to variations of illumination and expression. In order to maintain the positive characteristics of these filters and to make correct category assignments, the strategy of classifier committee learning (CCL) is designed to combine the results obtained from different spectral feature images. Using the above strategies, the negative effects caused by those unfavorable factors can be alleviated efficiently in face recognition. Experimental results on the standard databases demonstrate the feasibility and efficiency of the proposed method. PMID:23418451

  6. Impact of Intention on the ERP Correlates of Face Recognition

    ERIC Educational Resources Information Center

    Guillaume, Fabrice; Tiberghien, Guy

    2013-01-01

    The present study investigated the impact of study-test similarity on face recognition by manipulating, in the same experiment, the expression change (same vs. different) and the task-processing context (inclusion vs. exclusion instructions) as within-subject variables. Consistent with the dual-process framework, the present results showed that…

  7. Simulationist Models of Face-Based Emotion Recognition

    ERIC Educational Resources Information Center

    Goldman, Alvin I.; Sripada, Chandra Sekhar

    2005-01-01

    Recent studies of emotion mindreading reveal that for three emotions, fear, disgust, and anger, deficits in face-based recognition are paired with deficits in the production of the same emotion. What type of mindreading process would explain this pattern of paired deficits? The simulation approach and the theorizing approach are examined to…

  8. Emotional Recognition in Autism Spectrum Conditions from Voices and Faces

    ERIC Educational Resources Information Center

    Stewart, Mary E.; McAdam, Clair; Ota, Mitsuhiko; Peppe, Sue; Cleland, Joanne

    2013-01-01

    The present study reports on a new vocal emotion recognition task and assesses whether people with autism spectrum conditions (ASC) perform differently from typically developed individuals on tests of emotional identification from both the face and the voice. The new test of vocal emotion contained trials in which the vocal emotion of the sentence…

  9. An Inner Face Advantage in Children's Recognition of Familiar Peers

    ERIC Educational Resources Information Center

    Ge, Liezhong; Anzures, Gizelle; Wang, Zhe; Kelly, David J.; Pascalis, Olivier; Quinn, Paul C.; Slater, Alan M.; Yang, Zhiliang; Lee, Kang

    2008-01-01

    Children's recognition of familiar own-age peers was investigated. Chinese children (4-, 8-, and 14-year-olds) were asked to identify their classmates from photographs showing the entire face, the internal facial features only, the external facial features only, or the eyes, nose, or mouth only. Participants from all age groups were familiar with…

  10. Effect of severe image compression on face recognition algorithms

    NASA Astrophysics Data System (ADS)

    Zhao, Peilong; Dong, Jiwen; Li, Hengjian

    2015-10-01

    In today's information age, people depend more and more on computers to obtain and make use of information, yet there is a big gap between the volume of digitized multimedia data and the storage capacity and network bandwidth that current hardware can provide; the storage and transmission of large numbers of images is a case in point. Image compression makes it less costly to transmit images across networks by reducing the data volume and thereby the transmission time. This paper discusses the effect of image compression on a face recognition system. For compression purposes, we adopted the JPEG, JPEG2000, and JPEG XR coding standards; the face recognition algorithm studied is SIFT. Experimental results show that the system still maintains a high recognition rate under high compression ratios, and that the JPEG XR standard is superior to the other two in terms of performance and complexity.
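    The lossy step shared by the codecs above is transform coding followed by coefficient quantisation. A JPEG-like toy (not any of the actual standards; block size and quantisation step are illustrative) that shows how a larger step discards more detail:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are frequencies)."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C

def jpeg_like(block, q):
    """Transform a square block, quantise the DCT coefficients with step q,
    and invert; larger q means heavier compression and more loss."""
    C = dct_matrix(block.shape[0])
    coeff = C @ block @ C.T
    coeff = np.round(coeff / q) * q
    return C.T @ coeff @ C
```

    Feeding a recognition pipeline with jpeg_like(face, q) for increasing q is a cheap way to probe how gracefully a matcher degrades with compression.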

  11. Quaternion-based discriminant analysis method for color face recognition.

    PubMed

    Xu, Yong

    2012-01-01

    Pattern recognition techniques have been used to automatically recognize objects and personal identities, predict the function of proteins and the category of cancers, identify lesions, perform product inspection, and so on. In this paper we propose a novel quaternion-based discriminant method. This method represents and classifies color images in a simple and mathematically tractable way. The proposed method is suitable for a large variety of real-world applications such as color face recognition and classification of ground targets shown in multispectrum remote images. The method first uses quaternion numbers to denote the pixels of a color image and exploits a quaternion vector to represent the image. It then uses the linear discriminant analysis algorithm to transform the quaternion vector into a lower-dimensional quaternion vector and classifies it in this space. The experimental results show that the proposed method can obtain a very high accuracy for color face recognition. PMID:22937054
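    The representational idea is to store each RGB pixel as a pure quaternion 0 + R·i + G·j + B·k, which keeps the three channels coupled under quaternion algebra. A sketch of the encoding plus the Hamilton product that makes the representation tractable (the paper's quaternion LDA itself is not reproduced here):

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as (..., 4) arrays [w, x, y, z]."""
    w1, x1, y1, z1 = np.moveaxis(p, -1, 0)
    w2, x2, y2, z2 = np.moveaxis(q, -1, 0)
    return np.stack([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ], axis=-1)

def rgb_to_quaternion(img):
    """Encode an (h, w, 3) RGB image as pure quaternions: 0 + R i + G j + B k."""
    return np.concatenate([np.zeros(img.shape[:2] + (1,)), img], axis=-1)
```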

  12. Automatic 3D object recognition and reconstruction based on neuro-fuzzy modelling

    NASA Astrophysics Data System (ADS)

    Samadzadegan, Farhad; Azizi, Ali; Hahn, Michael; Lucas, Caro

    Three-dimensional object recognition and reconstruction (ORR) is a research area of major interest in computer vision and photogrammetry. Virtual cities, for example, are one of the exciting application fields of ORR that became very popular during the last decade. Natural and man-made city objects such as trees and buildings are complex structures, and the automatic recognition and reconstruction of these objects from digital aerial images, as well as from other data sources, is a big challenge. In this paper a novel approach for object recognition is presented based on neuro-fuzzy modelling. Structural, textural and spectral information is extracted and integrated in a fuzzy reasoning process. The learning capability of neural networks is introduced to the fuzzy recognition process by taking adaptable parameter sets into account, which leads to the neuro-fuzzy approach. Object reconstruction follows recognition seamlessly by using the recognition output and the descriptors which have been extracted for recognition. A first successful application of this new ORR approach is demonstrated for the three object classes 'buildings', 'cars' and 'trees' by using aerial colour images of an urban area of the town of Engen in Germany.

  13. Fixation Patterns During Recognition of Personally Familiar and Unfamiliar Faces

    PubMed Central

    van Belle, Goedele; Ramon, Meike; Lefèvre, Philippe; Rossion, Bruno

    2010-01-01

    Previous studies recording eye gaze during face perception have rendered somewhat inconclusive findings with respect to fixation differences between familiar and unfamiliar faces. This can be attributed to a number of factors that differ across studies: the type and extent of familiarity with the faces presented, the definition of areas of interest subject to analyses, as well as a lack of consideration for the time course of scan patterns. Here we sought to address these issues by recording fixations in a recognition task with personally familiar and unfamiliar faces. After a first common fixation on a central superior location of the face in between features, suggesting initial holistic encoding, and a subsequent left eye bias, local features were focused and explored more for familiar than unfamiliar faces. Although the number of fixations did not differ for un-/familiar faces, the locations of fixations began to differ before familiarity decisions were provided. This suggests that in the context of familiarity decisions without time constraints, differences in processing familiar and unfamiliar faces arise relatively early – immediately upon initiation of the first fixation to identity-specific information – and that the local features of familiar faces are processed more than those of unfamiliar faces. PMID:21607074

  14. Face recognition using local gradient binary count pattern

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaochao; Lin, Yaping; Ou, Bo; Yang, Junfeng; Wu, Zhelun

    2015-11-01

    A local feature descriptor, the local gradient binary count pattern (LGBCP), is proposed for face recognition. Unlike some current methods that extract features directly from a face image in the spatial domain, LGBCP encodes the local gradient information of the face's texture in an effective way and provides a more discriminative code than other methods. We compute the gradient information of a face image through convolutions with compass masks. The gradient information is encoded using the local binary count operator. We divide a face into several subregions and extract the distribution of the LGBCP codes from each subregion. Then all the histograms are concatenated into a vector, which is used for face description. For recognition, the chi-square statistic is used to measure the similarity of different feature vectors. Besides directly calculating the similarity of two feature vectors, we provide a weighted matching scheme in which different weights are assigned to different subregions. The nearest-neighborhood classifier is exploited for classification. Experiments are conducted on the FERET, CAS-PEAL, and AR face databases. LGBCP achieves 96.15% on the Fb set of FERET. For CAS-PEAL, LGBCP gets 96.97%, 98.91%, and 90.89% on the aging, distance, and expression sets, respectively.
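    The local binary count operator mentioned above differs from LBP in that it counts, rather than orders, the neighbours that exceed the centre pixel, giving a 9-valued code per pixel. A sketch of the counting step and the chi-square histogram distance used for matching (the compass-mask gradient stage and subregion weighting of LGBCP are omitted):

```python
import numpy as np

def local_binary_count(img):
    """3x3 local binary count: for every interior pixel, the number of its
    8 neighbours that are >= the centre value (codes 0..8)."""
    c = img[1:-1, 1:-1]
    h, w = c.shape
    shifts = [img[i:i + h, j:j + w]
              for i in range(3) for j in range(3) if (i, j) != (1, 1)]
    return sum((s >= c).astype(int) for s in shifts)

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two code histograms."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    return float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))
```

    A face descriptor is then the concatenation of per-subregion histograms of these codes, compared with chi_square under a nearest-neighbour rule.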

  15. Anti Theft Mechanism Through Face recognition Using FPGA

    NASA Astrophysics Data System (ADS)

    Sundari, Y. B. T.; Laxminarayana, G.; Laxmi, G. Vijaya

    2012-11-01

    The use of a vehicle is a must for many people, and at the same time protection from theft is very important. Vehicle theft can be prevented remotely by an authorized person: the location of the car can be found using GPS and GSM controlled by an FPGA. In this paper, face recognition is used to identify persons, and a comparison is made with preloaded faces for authorization. The vehicle will start only when an authorized person's face is identified. In the event of a theft attempt, or if an unauthorized person attempts to drive the vehicle, an MMS/SMS will be sent to the owner along with the location. The authorized person can then alert security personnel to track and catch the vehicle. For face recognition, a Principal Component Analysis (PCA) algorithm is developed using MATLAB. The control technique for GPS and GSM is developed using VHDL over a SPARTAN 3E FPGA. The MMS sending method is written in VB6.0. The proposed application can be implemented, with some modifications, in systems wherever face recognition or detection is needed, such as airports, international borders, banking applications, etc.
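    The PCA (eigenface) matching the system relies on reduces each face to a handful of projection coefficients and compares probes by nearest neighbour. A minimal sketch in Python rather than the paper's MATLAB (the component count is an illustrative assumption):

```python
import numpy as np

def train_eigenfaces(faces, n_components):
    """faces: (n, d) flattened training images. Returns the mean face and the
    top principal directions (eigenfaces)."""
    mean = faces.mean(axis=0)
    # SVD of the centred data gives the eigenvectors of the covariance matrix
    _, _, Vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, Vt[:n_components]

def identify(probe, mean, basis, gallery_coeffs):
    """Nearest neighbour in eigenface-coefficient space; returns gallery index."""
    c = basis @ (probe - mean)
    return int(np.argmin(np.linalg.norm(gallery_coeffs - c, axis=1)))
```

    The gallery coefficients are precomputed once as (faces - mean) @ basis.T, so authorisation at the vehicle only costs one projection and one distance scan.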

  16. Face recognition with the Karhunen-Loeve transform

    NASA Astrophysics Data System (ADS)

    Suarez, Pedro F.

    1991-12-01

    The major goal of this research was to investigate machine recognition of faces. The approach taken to achieve this goal was to investigate the use of the Karhunen-Loève Transform (KLT) by implementing flexible and practical code. The KLT utilizes the eigenvectors of the covariance matrix as a basis set. Faces were projected onto the eigenvectors, called eigenfaces, and the resulting projection coefficients were used as features. Face recognition accuracies for the KLT coefficients were superior to Fourier-based techniques. Additionally, this thesis demonstrated the image compression and reconstruction capabilities of the KLT. The thesis also developed the use of the KLT as a facial feature detector. The ability to differentiate between facial features provides a computer communications interface for non-vocal people with cerebral palsy. Lastly, this thesis developed a KLT-based axis system for laser scanner data of human heads. The scanner data axis system provides the anthropometric community a more precise method of fitting custom helmets.
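    The compression-and-reconstruction capability mentioned above follows directly from the KLT: keeping only k projection coefficients and inverting the projection recovers an approximation of the face. A sketch under the assumption of flattened (n, d) training images:

```python
import numpy as np

def klt_reconstruct(train_faces, probe, k):
    """Project a face onto the top-k KLT (eigenface) basis and reconstruct it;
    k coefficients stand in for d pixels, so k controls the compression."""
    mean = train_faces.mean(axis=0)
    _, _, Vt = np.linalg.svd(train_faces - mean, full_matrices=False)
    basis = Vt[:k]                    # (k, d) top eigenfaces
    coeffs = basis @ (probe - mean)   # the compressed representation
    return mean + basis.T @ coeffs
```

    Reconstruction error shrinks monotonically as k grows, which is the trade-off the thesis exploited for compression.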

  17. Functional aspects of recollective experience in face recognition.

    PubMed

    Parkin, A J; Gardiner, J M; Rosser, R

    1995-12-01

    This article describes two experiments on awareness in recognition memory for novel faces. Two kinds of awareness, recollective experience and feelings of familiarity in the absence of recollective experience, were measured by "remember" and "know" responses. Experiment 1 showed that "remember" but not "know" responses were reduced by divided attention at study. Experiment 2 showed that massed versus spaced repetition of faces in the study list had opposite effects on "remember" and "know" responses. Massed repetition increased "know" responses and reduced "remember" responses. Spaced repetition increased "remember" responses and reduced "know" responses. The results of both experiments replicate previous findings from the verbal domain in the domain of face recognition, and hence they increase the ecological validity of this experimental approach to memory and awareness and the generality of its database. These findings are discussed from a rehearsal perspective on factors influencing the two states of awareness and in relation to the alternative "process dissociation" procedure. PMID:8750414

  18. Multi-stream face recognition for crime-fighting

    NASA Astrophysics Data System (ADS)

    Jassim, Sabah A.; Sellahewa, Harin

    2007-04-01

    Automatic face recognition (AFR) is a challenging task that is increasingly becoming the preferred biometric trait for identification and has the potential of becoming an essential tool in the fight against crime and terrorism. Closed-circuit television (CCTV) cameras have increasingly been used over the last few years for surveillance in public places such as airports, train stations and shopping centers. They are used to detect and prevent crime, shoplifting, public disorder and terrorism. The work of law-enforcement and intelligence agencies is becoming more reliant on the use of databases of biometric data for large sections of the population. The face is one of the most natural biometric traits that can be used for identification and surveillance. However, variations in lighting conditions, facial expressions, face size and pose are a great obstacle to AFR. This paper is concerned with using wavelet-based face recognition schemes in the presence of variations of expression and illumination. In particular, we investigate the use of a combination of wavelet frequency channels for multi-stream face recognition, using various wavelet subbands as different face signal streams. The proposed schemes extend our recently developed face verification scheme for implementation on mobile devices. We present experimental results on the performance of our proposed schemes for a number of face databases, including a new AV database recorded on a PDA. By analyzing the various experimental data, we demonstrate that the multi-stream approach is more robust against variations in illumination and facial expressions than the previous single-stream approach.

  19. Evaluation of Model Recognition for Grammar-Based Automatic 3d Building Model Reconstruction

    NASA Astrophysics Data System (ADS)

    Yu, Qian; Helmholz, Petra; Belton, David

    2016-06-01

    In recent years, 3D city models have been in high demand by many public and private organisations, and the steadily growing capacity in both quality and quantity is increasing that demand further. The quality evaluation of these 3D models is a relevant issue from both the scientific and practical points of view. In this paper, we present a method for the quality evaluation of 3D building models which are reconstructed automatically from terrestrial laser scanning (TLS) data based on an attributed building grammar. The entire evaluation is performed in all three dimensions in terms of the completeness and correctness of the reconstruction. Six quality measures are introduced and applied to four datasets of reconstructed building models in order to describe the quality of the automatic reconstruction, and their validity is also assessed from the evaluation point of view.
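    Completeness and correctness as used above are the standard detection-rate and false-alarm-complement measures. A minimal sketch on boolean occupancy grids (the paper's six measures and their 3D formulation are not reproduced; the voxel representation is an illustrative assumption):

```python
import numpy as np

def quality_measures(reconstructed, reference):
    """Completeness and correctness of a reconstruction against a reference,
    both given as boolean occupancy arrays (e.g. voxelised building models)."""
    reconstructed = np.asarray(reconstructed, bool)
    reference = np.asarray(reference, bool)
    tp = np.logical_and(reconstructed, reference).sum()
    completeness = tp / reference.sum()     # how much of the real model was found
    correctness = tp / reconstructed.sum()  # how much of the result is real
    return float(completeness), float(correctness)
```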

  20. Are Face and Object Recognition Independent? A Neurocomputational Modeling Exploration.

    PubMed

    Wang, Panqu; Gauthier, Isabel; Cottrell, Garrison

    2016-04-01

    Are face and object recognition abilities independent? Although it is commonly believed that they are, Gauthier et al. [Gauthier, I., McGugin, R. W., Richler, J. J., Herzmann, G., Speegle, M., & VanGulick, A. E. Experience moderates overlap between object and face recognition, suggesting a common ability. Journal of Vision, 14, 7, 2014] recently showed that these abilities become more correlated as experience with nonface categories increases. They argued that there is a single underlying visual ability, v, that is expressed in performance with both face and nonface categories as experience grows. Using the Cambridge Face Memory Test and the Vanderbilt Expertise Test, they showed that the shared variance between Cambridge Face Memory Test and Vanderbilt Expertise Test performance increases monotonically as experience increases. Here, we ask why a shared resource across different visual domains does not lead to competition and an inverse correlation in abilities. We explain this conundrum using our neurocomputational model of face and object processing ["The Model", TM, Cottrell, G. W., & Hsiao, J. H. Neurocomputational models of face processing. In A. J. Calder, G. Rhodes, M. Johnson, & J. Haxby (Eds.), The Oxford handbook of face perception. Oxford, UK: Oxford University Press, 2011]. We model the domain-general ability v as the available computational resources (number of hidden units) in the mapping from input to label, and experience as the frequency with which individual exemplars of an object category appear during network training. Our results show that, as in the behavioral data, the correlation between subordinate-level face and object recognition accuracy increases as experience grows. We suggest that different domains do not compete for resources because the relevant features are shared between faces and objects. The essential power of experience is to generate a "spreading transform" for faces (separating them in representational space) that

  1. Recognition by Humans and Pigeons of Novel Views of 3-D Objects and Their Photographs

    ERIC Educational Resources Information Center

    Friedman, Alinda; Spetch, Marcia L.; Ferrey, Anne

    2005-01-01

    Humans and pigeons were trained to discriminate between 2 views of actual 3-D objects or their photographs. They were tested on novel views that were either within the closest rotational distance between the training views (interpolated) or outside of that range (extrapolated). When training views were 60° apart, pigeons, but not humans,…

  2. Neural Mechanism for Mirrored Self-face Recognition

    PubMed Central

    Sugiura, Motoaki; Miyauchi, Carlos Makoto; Kotozaki, Yuka; Akimoto, Yoritaka; Nozawa, Takayuki; Yomogida, Yukihito; Hanawa, Sugiko; Yamamoto, Yuki; Sakuma, Atsushi; Nakagawa, Seishu; Kawashima, Ryuta

    2015-01-01

    Self-face recognition in the mirror is considered to involve multiple processes that integrate 2 perceptual cues: temporal contingency of the visual feedback on one's action (contingency cue) and matching with self-face representation in long-term memory (figurative cue). The aim of this study was to examine the neural bases of these processes by manipulating 2 perceptual cues using a “virtual mirror” system. This system allowed online dynamic presentations of real-time and delayed self- or other facial actions. Perception-level processes were identified as responses to only a single perceptual cue. The effect of the contingency cue was identified in the cuneus. The regions sensitive to the figurative cue were subdivided by the response to a static self-face, which was identified in the right temporal, parietal, and frontal regions, but not in the bilateral occipitoparietal regions. Semantic- or integration-level processes, including amodal self-representation and belief validation, which allow modality-independent self-recognition and the resolution of potential conflicts between perceptual cues, respectively, were identified in distinct regions in the right frontal and insular cortices. The results are supportive of the multicomponent notion of self-recognition and suggest a critical role for contingency detection in the co-emergence of self-recognition and empathy in infants. PMID:24770712

  3. Face recognition using spatially constrained earth mover's distance.

    PubMed

    Xu, Dong; Yan, Shuicheng; Luo, Jiebo

    2008-11-01

    Face recognition is a challenging problem, especially when the face images are not strictly aligned (e.g., images can be captured from different viewpoints or the faces may not be accurately cropped by a human or automatic algorithm). In this correspondence, we investigate face recognition under the scenarios with potential spatial misalignments. First, we formulate an asymmetric similarity measure based on Spatially constrained Earth Mover's Distance (SEMD), for which the source image is partitioned into nonoverlapping local patches while the destination image is represented as a set of overlapping local patches at different positions. Assuming that faces are already roughly aligned according to the positions of their eyes, one patch in the source image can be matched only to one of its neighboring patches in the destination image under the spatial constraint of reasonably small misalignments. Because the similarity measure as defined by SEMD is asymmetric, we propose two schemes to combine the two similarity measures computed in both directions. Moreover, we adopt a distance-as-feature approach by treating the distances to the reference images as features in a Kernel Discriminant Analysis (KDA) framework. Experiments on three benchmark face databases, namely the CMU PIE, FERET, and FRGC databases, demonstrate the effectiveness of the proposed SEMD. PMID:18854252
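    The spatial constraint above lets each source patch match only nearby destination patches. A heavily simplified sketch of that asymmetric measure (the true SEMD solves an earth mover's transportation problem over patch flows; this sketch replaces it with a best-single-match approximation, and the two-direction combination and KDA steps are omitted; patch and shift sizes are illustrative):

```python
import numpy as np

def semd_distance(source, dest, patch=4, shift=1):
    """Asymmetric, spatially constrained patch distance: each non-overlapping
    source patch is matched to its best destination patch within +/- shift
    pixels of the same position (small misalignments only)."""
    h, w = source.shape
    total = 0.0
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            p = source[i:i + patch, j:j + patch]
            best = np.inf
            for di in range(-shift, shift + 1):
                for dj in range(-shift, shift + 1):
                    y, x = i + di, j + dj
                    if 0 <= y <= h - patch and 0 <= x <= w - patch:
                        d = float(np.sum((p - dest[y:y + patch, x:x + patch]) ** 2))
                        best = min(best, d)
            total += best
    return total
```

    Because the measure is asymmetric, a symmetric score can be formed, for example, as the average of semd_distance(a, b) and semd_distance(b, a), in the spirit of the combination schemes the paper proposes.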

  4. The neural plasticity of other-race face recognition.

    PubMed

    Tanaka, James W; Pierce, Lara J

    2009-03-01

    Although it is well established that people are better at recognizing own-race faces than at recognizing other-race faces, the neural mechanisms mediating this advantage are not well understood. In this study, Caucasian participants were trained to differentiate African American (or Hispanic) faces at the individual level (e.g., Joe, Bob) and to categorize Hispanic (or African American) faces at the basic level of race (e.g., Hispanic, African American). Behaviorally, subordinate-level individuation training led to improved performance on a posttraining recognition test, relative to basic-level training. As measured by event-related potentials, subordinate- and basic-level training had relatively little effect on the face N170 component. However, as compared with basic-level training, subordinate-level training elicited an increased response in the posterior expert N250 component. These results demonstrate that learning to discriminate other-race faces at the subordinate level of the individual leads to improved recognition and enhanced activation of the expert N250 component. PMID:19246333

  5. Spatial location in brief, free-viewing face encoding modulates contextual face recognition

    PubMed Central

    Felisberti, Fatima M.; McDermott, Mark R.

    2013-01-01

    The effect of the spatial location of faces in the visual field during brief, free-viewing encoding in subsequent face recognition is not known. This study addressed this question by tagging three groups of faces with cheating, cooperating or neutral behaviours and presenting them for encoding in two visual hemifields (upper vs. lower or left vs. right). Participants then had to indicate if a centrally presented face had been seen before or not. Head and eye movements were free in all phases. Findings showed that the overall recognition of cooperators was significantly better than cheaters, and it was better for faces encoded in the upper hemifield than in the lower hemifield, both in terms of a higher d′ and faster reaction time (RT). The d′ for any given behaviour in the left and right hemifields was similar. The RT in the left hemifield did not vary with tagged behaviour, whereas the RT in the right hemifield was longer for cheaters than for cooperators. The results showed that memory biases in contextual face recognition were modulated by the spatial location of briefly encoded faces and are discussed in terms of scanning reading habits, top-left bias in lighting preference and peripersonal space. PMID:24349694

  6. A Multi-Modal Face Recognition Method Using Complete Local Derivative Patterns and Depth Maps

    PubMed Central

    Yin, Shouyi; Dai, Xu; Ouyang, Peng; Liu, Leibo; Wei, Shaojun

    2014-01-01

    In this paper, we propose a multi-modal 2D + 3D face recognition method for a smart city application based on a Wireless Sensor Network (WSN) and various kinds of sensors. Depth maps are exploited for the 3D face representation. As for feature extraction, we propose a new feature called Complete Local Derivative Pattern (CLDP). It adopts the idea of layering and has four layers. In the whole system, we apply CLDP separately on Gabor features extracted from a 2D image and depth map. Then, we obtain two features: CLDP-Gabor and CLDP-Depth. The two features weighted by the corresponding coefficients are combined together in the decision level to compute the total classification distance. At last, the probe face is assigned the identity with the smallest classification distance. Extensive experiments are conducted on three different databases. The results demonstrate the robustness and superiority of the new approach. The experimental results also prove that the proposed multi-modal 2D + 3D method is superior to other multi-modal ones and CLDP performs better than other Local Binary Pattern (LBP) based features. PMID:25333290
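    The decision-level fusion described above reduces to a weighted sum of the per-gallery classification distances from the 2D (CLDP-Gabor) and 3D (CLDP-Depth) channels, with the identity taken as the smallest total. A minimal sketch (the weights shown are illustrative assumptions, not the paper's coefficients):

```python
import numpy as np

def fused_identity(d_gabor, d_depth, w_gabor=0.6, w_depth=0.4):
    """Decision-level fusion: weighted sum of per-gallery distances from the
    2D and 3D channels; the probe gets the identity with the smallest total."""
    total = w_gabor * np.asarray(d_gabor) + w_depth * np.asarray(d_depth)
    return int(np.argmin(total))
```

    In practice the two distance lists would be normalised to a common scale before weighting, otherwise one modality can silently dominate.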

  8. Dialog-Based 3D-Image Recognition Using a Domain Ontology

    NASA Astrophysics Data System (ADS)

    Hois, Joana; Wünstel, Michael; Bateman, John A.; Röfer, Thomas

    The combination of vision and speech, together with the resulting necessity for formal representations, builds a central component of an autonomous system. A robot that is supposed to navigate autonomously through space must be able to perceive its environment as automatically as possible. But each recognition system has its own inherent limits. Especially a robot whose task is to navigate through unknown terrain has to deal with unidentified or even unknown objects, thus compounding the recognition problem still further. The system described in this paper takes this into account by trying to identify objects based on their functionality where possible. To handle cases where recognition is insufficient, we examine here two further strategies: on the one hand, the linguistic reference and labeling of the unidentified objects and, on the other hand, ontological deduction. This approach then connects the probabilistic area of object recognition with the logical area of formal reasoning. In order to support formal reasoning, additional relational scene information has to be supplied by the recognition system. Moreover, for a sound ontological basis for these reasoning tasks, it is necessary to define a domain ontology that provides for the representation of real-world objects and their corresponding spatial relations in linguistic and physical respects. Physical spatial relations and objects are measured by the visual system, whereas linguistic spatial relations and objects are required for interactions with a user.

  9. A wavelet-based approach to face verification/recognition

    NASA Astrophysics Data System (ADS)

    Jassim, Sabah; Sellahewa, Harin

    2005-10-01

    Face verification/recognition is a tough challenge in comparison to identification based on other biometrics such as the iris or fingerprints. Yet, due to its unobtrusive nature, the face is naturally suitable for security-related applications. The face verification process relies on feature extraction from face images. Current schemes are either geometric-based or template-based. In the latter, the face image is statistically analysed to obtain a set of feature vectors that best describe it. The performance of a face verification system is affected by image variations due to illumination, pose, occlusion, expressions and scale. This paper extends our recent work on face verification for constrained platforms, where the feature vector of a face image consists of the coefficients in the wavelet-transformed LL-subbands at depth 3 or more. It was demonstrated that the wavelet-only feature vector scheme has a performance comparable to sophisticated state-of-the-art schemes when tested on two benchmark databases (ORL and BANCA). The significance of those results stems from the fact that the size of the k-th LL-subband is 1/4^k of the original image size. Here, we investigate the use of wavelet coefficients in various subbands at level 3 or 4 using various wavelet filters. We compare the performance of the wavelet-based scheme for different filters at different subbands with a number of state-of-the-art face verification/recognition schemes on two benchmark databases, namely ORL and the control section of BANCA. We demonstrate that our schemes have comparable performance to (or outperform) the best-performing other schemes.
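    The 1/4^k size claim is easy to verify: each wavelet level halves both image dimensions, so the depth-k LL-subband keeps 1/4^k of the pixels. A sketch using the Haar low-pass filter as the simplest case of the wavelet filters the paper compares:

```python
import numpy as np

def ll_subband(img, depth):
    """Repeated 2x2 Haar low-pass (LL) filtering: each level quarters the
    number of pixels, so depth k leaves 1/4**k of the original image size."""
    out = np.asarray(img, dtype=float)
    for _ in range(depth):
        # Orthonormal Haar LL coefficient: (a + b + c + d) / 2 per 2x2 block
        out = (out[0::2, 0::2] + out[0::2, 1::2] +
               out[1::2, 0::2] + out[1::2, 1::2]) / 2.0
    return out
```

    For a 128 × 128 face, the depth-3 LL-subband is a 16 × 16 array of 256 coefficients, which is what makes the scheme attractive for constrained platforms.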

  10. FACE-R--a 3D database of 400 living individuals' full head CT- and face scans and preliminary GMM analysis for craniofacial reconstruction.

    PubMed

    Kustár, Agnes; Forró, Laszlo; Kalina, Ildiko; Fazekas, Ferenc; Honti, Szabolcs; Makra, Szabolcs; Friess, Martin

    2013-11-01

    In the past, improvements in craniofacial reconstruction (CFR) methodology languished due to the lack of 3D databases that were sufficiently large and appropriate for 3-dimensional shape statistics. In our study, we created the "FACE-R" database from CT records and 3D surface scans of 400 clinical patients from Hungary, providing a significantly larger sample than was available before. The uniqueness of our database is the linking of two data types, which makes it possible to investigate the bone and skin surface of the same individual in an upright position, thus eliminating many of the gravitational effects on the face that occur during CT scanning. We performed a preliminary geometric morphometric (GMM) study using 3D data that provides a general idea of skull and face shape correlations. The vertical position of the tip of the (soft) nose relative to the skull and landmarks such as rhinion need to be taken into account. Likewise, the anterior nasal spine appears to exert some influence in this regard. PMID:24020394

  11. Improving representation-based classification for robust face recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Hongzhi; Zhang, Zheng; Li, Zhengming; Chen, Yan; Shi, Jian

    2014-06-01

    The sparse representation classification (SRC) method proposed by Wright et al. is considered a breakthrough in face recognition because of its good performance. Nevertheless, it still cannot perfectly address the face recognition problem. The main reason is that variations in pose, facial expression, and illumination can be rather severe, and the number of available facial images is smaller than the dimensionality of the facial image, so a linear combination of all the training samples is not able to fully represent the test sample. In this study, we propose a novel framework to improve representation-based classification (RBC). The framework first runs the sparse representation algorithm and determines the unavoidable deviation between the test sample and the optimal linear combination of all the training samples that represents it. It then exploits the deviation and all the training samples to resolve the linear combination coefficients. Finally, the classification rule, the training samples, and the renewed linear combination coefficients are used to classify the test sample. Generally, the proposed framework can work for most RBC methods. From the viewpoint of regression analysis, the proposed framework has solid theoretical soundness. Because it can, to an extent, identify the bias effect of the RBC method, it enables RBC to obtain more robust face recognition results. Experimental results on a variety of face databases demonstrate that the proposed framework can improve collaborative representation classification, SRC, and the nearest neighbor classifier.
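    The RBC family the abstract builds on shares one core step: represent the test sample as a linear combination of all training samples, then assign the class whose samples contribute the smallest reconstruction residual. A minimal collaborative-representation-style sketch (ridge-regularised least squares instead of the paper's deviation-corrected scheme; the regularisation constant is an illustrative assumption):

```python
import numpy as np

def rbc_classify(train, labels, test, reg=1e-3):
    """train: (n, d) samples, labels: class id per sample, test: (d,) probe.
    Solves a ridge-regularised representation and returns the class whose
    columns best explain the probe (smallest residual)."""
    A = train.T                                            # (d, n) dictionary
    coef = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ test)
    labels = np.asarray(labels)
    residuals = {c: np.linalg.norm(test - A[:, labels == c] @ coef[labels == c])
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get)
```

    Swapping the ridge penalty for an L1 penalty turns this into SRC proper; the class-wise residual rule is identical.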

  12. A Highly Accurate Face Recognition System Using Filtering Correlation

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Sayuri; Kodate, Kashiko

    2007-09-01

    The authors previously constructed a highly accurate fast face recognition optical correlator (FARCO) [E. Watanabe and K. Kodate: Opt. Rev. 12 (2005) 460], and subsequently developed an improved, super high-speed FARCO (S-FARCO), which is able to process several hundred thousand frames per second. The principal advantage of our new system is its wide applicability to any correlation scheme. Three different configurations were proposed, each depending on correlation speed. This paper describes and evaluates a software correlation filter. The face recognition function proved highly accurate even for low-resolution facial images (64 × 64 pixels). An operation speed of less than 10 ms was achieved using a personal computer with a 3 GHz central processing unit (CPU) and 2 GB of memory. When we applied the software correlation filter to a high-security cellular phone face recognition system, experiments on 30 female students over a period of three months yielded low error rates: a 0% false acceptance rate and a 2% false rejection rate. Therefore, the filtering correlation works effectively when applied to low-resolution images such as web-based images or faces captured by a monitoring camera.
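    A software correlation filter of this general kind can be approximated digitally with an FFT-based normalized cross-correlation. The sketch below is not the FARCO/S-FARCO filter itself, only the underlying matching principle; the function names and the 0.8 acceptance threshold are arbitrary choices for illustration.

```python
import numpy as np

def correlation_peak(reference, probe):
    """Peak of the circular cross-correlation between two images,
    computed in the frequency domain; 1.0 for identical (or merely
    circularly shifted) images, near 0 for unrelated ones."""
    r = reference - reference.mean()
    p = probe - probe.mean()
    corr = np.fft.ifft2(np.fft.fft2(r) * np.conj(np.fft.fft2(p))).real
    return corr.max() / (np.linalg.norm(r) * np.linalg.norm(p))

def is_match(reference, probe, threshold=0.8):
    # Accept when the normalized correlation peak clears the threshold.
    return correlation_peak(reference, probe) >= threshold
```

    Because the correlation is circular, the peak is invariant to translations of the probe, which is one reason correlation filters tolerate low-resolution, loosely registered images.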

  13. Efficient face recognition using local derivative pattern and shifted phase-encoded fringe-adjusted joint transform correlation

    NASA Astrophysics Data System (ADS)

    Biswas, Bikram K.; Alam, Mohammad S.; Chowdhury, Suparna

    2016-04-01

    An improved shifted phase-encoded fringe-adjusted joint transform correlation technique is proposed in this paper for face recognition; it can accommodate the detrimental effects of noise, illumination, and 3D distortions such as expression and rotation variations. The technique applies a third-order local derivative pattern operator (LDP3) followed by a shifted phase-encoded fringe-adjusted joint transform correlation (SPFJTC) operation. The local derivative pattern operator ensures better facial feature extraction in a variable environment, while the SPFJTC yields robust correlation output for the desired signals. The performance of the proposed method is evaluated using the Yale Face Database, Yale Face Database B, and the Georgia Institute of Technology Face Database. The technique has been found to yield a better face recognition rate than alternative JTC-based techniques.

  14. Near-infrared face recognition utilizing OpenCV software

    NASA Astrophysics Data System (ADS)

    Sellami, Louiza; Ngo, Hau; Fowler, Chris J.; Kearney, Liam M.

    2014-06-01

    Commercially available hardware, freely available algorithms, and the authors' own software are synergized successfully to detect and recognize subjects in an environment without visible light. This project integrates three major components: an illumination device operating in the near-infrared (NIR) spectrum, a NIR-capable camera, and a software algorithm capable of performing image manipulation, facial detection, and recognition. Focusing our efforts on the near-infrared spectrum allows the low-budget system to operate covertly while still allowing for accurate face recognition. In doing so, a valuable capability has been developed that presents potential benefits in future civilian and military security and surveillance operations.

  15. Impact of intention on the ERP correlates of face recognition.

    PubMed

    Guillaume, Fabrice; Tiberghien, Guy

    2013-02-01

    The present study investigated the impact of study-test similarity on face recognition by manipulating, in the same experiment, the expression change (same vs. different) and the task-processing context (inclusion vs. exclusion instructions) as within-subject variables. Consistent with the dual-process framework, the present results showed that participants performed better on the inclusion task than on the exclusion task, with no response bias. A mid-frontal FN400 old/new effect and a parietal old/new effect were found in both tasks. However, modulations of the ERP old/new effects generated by the expression change on recognized faces differed across tasks. The modulations of the ERP old/new effects were proportional to the degree of matching between the study face and the recognition face in the inclusion task, but not in the exclusion task. The observed modulation of the FN400 old/new effect by the task instructions when familiarity and conceptual priming were kept constant indicates that these early ERP correlates of recognition depend on voluntary task-related control. The present results question the idea that FN400 reflects implicit memory processes such as conceptual priming and show that the extent to which the FN400 discriminates between conditions depends on the retrieval orientation at test. They are discussed in relation to recent controversies about the ERP correlates of familiarity in face recognition. This study suggests that while both conceptual and perceptual information can contribute to the familiarity signal reflected by the FN400 effect, their relative contributions vary with the task demands. PMID:23174431

  16. Robust Detection of Round Shaped Pits Lying on 3D Meshes: Application to Impact Crater Recognition

    NASA Astrophysics Data System (ADS)

    Schmidt, Martin-Pierre; Muscato, Jennifer; Viseur, Sophie; Jorda, Laurent; Bouley, Sylvain; Mari, Jean-Luc

    2015-04-01

    Most celestial bodies display the impacts of collisions with asteroids and meteoroids. These traces are called craters. Observing and identifying these craters and their characteristics (radius, depth, and morphology) is the only available method for measuring the age of different units at the surface of a body, which in turn makes it possible to constrain its conditions of formation. Interplanetary space probes always carry at least one imaging instrument on board. The visible images of the target are used to reconstruct high-resolution 3D models of its surface, as a cloud of points in the case of multi-image dense stereo, or as a triangular mesh in the case of stereo and shape-from-shading. The goal of this work is to develop a methodology to automatically detect the craters lying on these 3D models. The robust extraction of feature areas on surface objects embedded in 3D, such as circular pits, is a challenging problem. Classical approaches generally rely on image processing and template matching on a 2D flat projection of the 3D object (i.e., a high-resolution photograph). In this work, we propose a fully 3D method that relies mainly on curvature analysis. Mean and Gaussian curvatures are estimated on the surface. They are used to label vertices that belong to concave parts corresponding to specific pits on the surface. The surface is thus transformed into a binary map distinguishing potential crater features from other types of features. Centers are located in the targeted surface regions corresponding to potential crater features. Concentric rings are then built around the found centers. They consist of circular closed lines composed exclusively of edges of the initial mesh. The first ring built represents the nearest vertex neighborhood of the found center. The ring is then optimally expanded using a circularity constraint and the curvature values of the ring vertices. This method has been tested on a 3D model of the asteroid Lutetia observed by the ROSETTA (ESA
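    The curvature-labeling step can be illustrated on a simple height field instead of a full triangular mesh. Under a small-slope approximation, mean (H) and Gaussian (K) curvature reduce to second finite differences, and bowl-shaped pit cells are those with H > 0 and K > 0. This grid-based sketch is only a stand-in for the mesh analysis; the names and zero thresholds are invented here.

```python
import numpy as np

def label_pit_cells(z, h_thresh=0.0, k_thresh=0.0):
    """Binary map of bowl-shaped (pit) cells on a height field z,
    using small-slope approximations of mean (H) and Gaussian (K)
    curvature computed with finite differences."""
    zy, zx = np.gradient(z)          # first derivatives (rows, cols)
    zyy, zyx = np.gradient(zy)       # second derivatives of zy
    zxy, zxx = np.gradient(zx)       # second derivatives of zx
    H = 0.5 * (zxx + zyy)            # mean curvature (small slopes)
    K = zxx * zyy - zxy * zyx        # Gaussian curvature (small slopes)
    # Concave, bowl-like cells: the surface curves upward in both
    # principal directions (excludes domes, ridges, and saddles).
    return (H > h_thresh) & (K > k_thresh)
```

    On a mesh, the same labeling would use per-vertex curvature estimates, but the selection rule (concave in both principal directions) is identical.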

  17. Quantitative analysis and feature recognition in 3-D microstructural data sets

    NASA Astrophysics Data System (ADS)

    Lewis, A. C.; Suh, C.; Stukowski, M.; Geltmacher, A. B.; Spanos, G.; Rajan, K.

    2006-12-01

    A three-dimensional (3-D) reconstruction of an austenitic stainless-steel microstructure was used as input for an image-based finite-element model to simulate the anisotropic elastic mechanical response of the microstructure. The quantitative data-mining and data-warehousing techniques used to correlate regions of high stress with critical microstructural features are discussed. Initial analysis of elastic stresses near grain boundaries due to mechanical loading revealed low overall correlation with their location in the microstructure. However, the use of data-mining and feature-tracking techniques to identify high-stress outliers revealed that many of these high-stress points are generated near grain boundaries and grain edges (triple junctions). These techniques also allowed for the differentiation between high stresses due to boundary conditions of the finite volume reconstructed, and those due to 3-D microstructural features.

  18. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes.

    PubMed

    Yebes, J Javier; Bergasa, Luis M; García-Garrido, Miguel Ángel

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553

  20. Tolerance for distorted faces: challenges to a configural processing account of familiar face recognition.

    PubMed

    Sandford, Adam; Burton, A Mike

    2014-09-01

    Face recognition is widely held to rely on 'configural processing', an analysis of spatial relations between facial features. We present three experiments in which viewers were shown distorted faces and asked to resize these to their correct shape. Based on configural theories appealing to metric distances between features, we reasoned that this should be an easier task for familiar than unfamiliar faces (whose subtle arrangements of features are unknown). In fact, participants were inaccurate at this task, making between 8% and 13% errors across experiments. Importantly, we observed no advantage for familiar faces: in one experiment participants were more accurate with unfamiliar faces, and in two experiments there was no difference. These findings were not due to general task difficulty: participants were able to resize blocks of colour to target shapes (squares) more accurately. We also found an advantage of familiarity for resizing other stimuli (brand logos). If configural processing does underlie face recognition, these results place constraints on the definition of 'configural'. Alternatively, familiar face recognition might rely on more complex criteria, based on tolerance to within-person variation rather than on highly specific measurement. PMID:24853629

  1. Neural correlates of impaired emotional face recognition in cerebellar lesions.

    PubMed

    Adamaszek, Michael; Kirkby, Kenneth C; D'Agata, Fedrico; Olbrich, Sebastian; Langner, Sönke; Steele, Christopher; Sehm, Bernhard; Busse, Stefan; Kessler, Christof; Hamm, Alfons

    2015-07-10

    Clinical and neuroimaging data indicate a cerebellar contribution to emotional processing, which may account for affective-behavioral disturbances in patients with cerebellar lesions. We studied the neurophysiology of cerebellar involvement in recognition of emotional facial expression. Participants comprised eight patients with discrete ischemic cerebellar lesions and eight control patients without any cerebrovascular stroke. Event-related potentials (ERP) were used to measure responses to faces from the Karolinska Directed Emotional Faces Database (KDEF), interspersed in a stream of images with salient contents. Images of faces augmented N170 in both groups, but increased late positive potential (LPP) only in control patients without brain lesions. Dipole analysis revealed altered activation patterns for negative emotions in patients with cerebellar lesions, including activation of the left inferior prefrontal area to images of faces showing fear, contralateral to controls. Correlation analysis indicated that lesions of cerebellar area Crus I contribute to ERP deviations. Overall, our results implicate the cerebellum in integrating emotional information at different higher order stages, suggesting distinct cerebellar contributions to the proposed large-scale cerebral network of emotional face recognition. PMID:25912431

  2. Recognition of faces of ingroup and outgroup children and adults.

    PubMed

    Corenblum, B; Meissner, Christian A

    2006-03-01

    People are often more accurate in recognizing faces of ingroup members than in recognizing faces of outgroup members. Although own-group biases in face recognition are well established among adults, less attention has been given to such biases among children. This is surprising considering how often children give testimony in criminal and civil cases. In the current two studies, Euro-Canadian children attending public school and young adults enrolled in university-level classes were asked whether previously presented photographs of Euro-American and African American adults (Study 1) or photographs of Native Canadian, Euro-Canadian, and African American children (Study 2) were new or old. In both studies, own-group biases were found on measures of discrimination accuracy and response bias as well as on estimates of reaction time, confidence, and confidence-accuracy relations. Results of both studies were consistent with predictions derived from multidimensional face space theory of face recognition. Implications of the current studies for the validity of children's eyewitness testimony are also discussed. PMID:16243349

  3. Still-to-video face recognition in unconstrained environments

    NASA Astrophysics Data System (ADS)

    Wang, Haoyu; Liu, Changsong; Ding, Xiaoqing

    2015-02-01

    Face images from video sequences captured in unconstrained environments usually contain several kinds of variations, e.g., pose, facial expression, illumination, image resolution, and occlusion. Motion blur and compression artifacts also degrade recognition performance. Moreover, in practical systems such as law enforcement, video surveillance, and e-passport identification, only a single still image per person is enrolled as the gallery set. Many existing methods may fail to work due to variations in face appearance and the limited number of gallery samples. In this paper, we propose a novel approach for still-to-video face recognition in unconstrained environments. By assuming that faces from still images and video frames share the same identity space, a regularized least squares regression method is utilized to tackle the multi-modality problem. Regularization terms based on heuristic assumptions are employed to avoid overfitting. To deal with the single-image-per-person problem, we exploit face variations learned from training sets to synthesize virtual samples for the gallery samples. We adopt a learning algorithm that combines an affine/convex hull-based approach with regularization to match image sets. Experimental results on a real-world dataset consisting of unconstrained video sequences demonstrate that our method clearly outperforms state-of-the-art methods.
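    The virtual-sample idea (transferring variations learned on a training set onto a single-still gallery) can be sketched as follows. The function names, the additive variation model, and the nearest-neighbour matcher are simplifying assumptions for illustration, not the paper's exact regression formulation.

```python
import numpy as np

def synthesize_virtual_gallery(gallery, variation_pairs):
    """Augment a single-sample-per-person gallery with virtual samples.

    gallery: (d, m) matrix, one column per enrolled identity.
    variation_pairs: list of (neutral, varied) training vectors; their
    differences approximate generic appearance changes (pose, blur...).
    """
    deltas = [varied - neutral for neutral, varied in variation_pairs]
    blocks = [gallery]  # keep the original stills as the first block
    for d_vec in deltas:
        # Apply each learned variation to every enrolled identity.
        blocks.append(gallery + d_vec[:, None])
    return np.concatenate(blocks, axis=1)

def match(probe, aug_gallery, m):
    """Nearest-neighbour identity lookup; columns cycle over m ids."""
    j = np.argmin(np.linalg.norm(aug_gallery - probe[:, None], axis=0))
    return j % m
```

    A probe exhibiting one of the learned variations then lands near a virtual column of the correct identity rather than being rejected for not resembling the lone enrolled still.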

  4. Friends with Faces: How Social Networks Can Enhance Face Recognition and Vice Versa

    NASA Astrophysics Data System (ADS)

    Mavridis, Nikolaos; Kazmi, Wajahat; Toulis, Panos

    The "friendship" relation, a social relation among individuals, is one of the primary relations modeled in some of the world's largest online social networking sites, such as "Facebook." On the other hand, the "co-occurrence" relation, a relation among faces appearing in pictures, is one that is easily detectable using modern face detection techniques. These two relations, though belonging to different realms (social vs. visual sensory), have a strong correlation: faces that co-occur in photos often belong to individuals who are friends. We use real-world data gathered from "Facebook" as part of the "FaceBots" project, which established the world's first physical face-recognizing and conversing robot that can utilize and publish information on "Facebook." We present here methods as well as results for utilizing this correlation in both directions: algorithms for utilizing knowledge of the social context for faster and better face recognition, and algorithms for estimating the friendship network of a number of individuals given photos containing their faces. The results are quite encouraging. In the primary example, a doubling of recognition accuracy as well as a sixfold improvement in speed is demonstrated. Various improvements, interesting statistics, and an empirical investigation leading to predictions of scalability to much bigger data sets are discussed.

  5. Determination of candidate subjects for better recognition of faces

    NASA Astrophysics Data System (ADS)

    Wang, Xuansheng; Chen, Zhen; Teng, Zhongming

    2016-05-01

    To improve the accuracy of face recognition and address the problem of pose variation, we present an improved collaborative representation classification (CRC) algorithm that uses the original training samples together with their mirror images. First, mirror images are generated from the original training samples. Second, both the original training samples and their mirror images are used simultaneously to represent the test sample via improved collaborative representation. Then, classes that are "close" to the test sample are coarsely selected as candidate classes. Finally, the candidate classes are used to represent the test sample again, and the class most similar to the test sample is determined precisely. The experimental results show that our proposed algorithm is more robust than the original CRC algorithm and can effectively improve the accuracy of face recognition.
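    A minimal sketch of the two-stage procedure, assuming a ridge-regularized collaborative representation and class-wise residuals for both the coarse candidate selection and the fine decision (all names invented here):

```python
import numpy as np

def crc_with_mirrors(train, labels, y, lam=0.01, n_candidates=2):
    """Two-stage CRC sketch: original samples plus their horizontally
    mirrored versions jointly represent the test sample; the closest
    candidate classes are then re-used for a fine second stage.

    train: list of (h, w) images; labels: class ids; y: (h, w) image.
    """
    t = y.ravel()

    def solve(cols):
        # Ridge-regularized collaborative representation coefficients.
        A = np.stack(cols, axis=1)
        c = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ t)
        return A, c

    def class_residuals(A, c, labs, cls):
        # Reconstruction error of t using each class's columns only.
        res = []
        for k in cls:
            idx = [i for i, l in enumerate(labs) if l == k]
            res.append(np.linalg.norm(t - A[:, idx] @ c[idx]))
        return res

    # Stage 1: original samples plus horizontal mirrors, all classes.
    cols = [im.ravel() for im in train] + [im[:, ::-1].ravel() for im in train]
    labs = list(labels) * 2
    A, c = solve(cols)
    classes = sorted(set(labels))
    coarse = class_residuals(A, c, labs, classes)
    keep = [classes[i] for i in np.argsort(coarse)[:n_candidates]]
    # Stage 2: represent t again using candidate classes only.
    idx = [i for i, l in enumerate(labs) if l in keep]
    A2, c2 = solve([cols[i] for i in idx])
    fine = class_residuals(A2, c2, [labs[i] for i in idx], keep)
    return keep[int(np.argmin(fine))]
```

    Mirroring doubles the dictionary at no labeling cost, which is why a probe seen under a flipped pose can still be represented almost exactly by some column of the correct class.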

  6. An integrated modeling approach to age invariant face recognition

    NASA Astrophysics Data System (ADS)

    Alvi, Fahad Bashir; Pears, Russel

    2015-03-01

    This research study proposes a novel method for face recognition based on anthropometric features, making use of an integrated approach comprising a global model and a personalized model. The system is aimed at situations where lighting, illumination, and pose variations cause problems for face recognition. The personalized model covers an individual's aging patterns, while the global model captures the general aging patterns in the database. We introduced a de-aging factor that de-ages each individual in the database's test and training sets. We used the k-nearest-neighbor approach for building the personalized and global models, and regression analysis was applied to build the models. During the test phase, we resort to voting on different features. We used the FG-NET database to evaluate our technique and achieved a 65 percent rank-1 identification rate.

  7. A fast 3-D object recognition algorithm for the vision system of a special-purpose dexterous manipulator

    NASA Technical Reports Server (NTRS)

    Hung, Stephen H. Y.

    1989-01-01

    A fast 3-D object recognition algorithm that can be used as a quick-look subsystem to the vision system for the Special-Purpose Dexterous Manipulator (SPDM) is described. Global features that can be easily computed from range data are used to characterize the images of a viewer-centered model of an object. This algorithm will speed up the processing by eliminating the low level processing whenever possible. It may identify the object, reject a set of bad data in the early stage, or create a better environment for a more powerful algorithm to carry the work further.

  8. Design and implementation of face recognition system based on Windows

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Liu, Ting; Li, Ailan

    2015-07-01

    Given that the basic Windows password login lacks both safety and convenience, we introduce a biometric technology, face recognition, into the computer login system. Not only can it secure the computer system, it can also identify administrators at different authorization levels. With the enhancement of system security, users need neither go through cumbersome password entry nor worry about their passwords being stolen.

  9. Possible use of small UAV to create high resolution 3D model of vertical rock faces

    NASA Astrophysics Data System (ADS)

    Mészáros, János; Kerkovits, Krisztian

    2014-05-01

    One of the newest and most rapidly emerging acquisition technologies is the use of small unmanned aerial vehicles (UAVs) for photogrammetry and remote sensing. Several successful research projects and industrial applications can be found worldwide (mine investigation, precision agriculture, mapping, etc.), but those surveys focus mainly on horizontal areas. In our research, a mixed acquisition method was developed and tested to create a dense 3D model of a columnar outcrop close to Kő-hegy (Pest County). Our primary goal was to create a model in which the pattern of the different layers is clearly visible and measurable, as well as to test the robustness of our idea. Our method uses a consumer-grade camera to take digital photographs of the outcrop. A small, custom-made tricopter was built to carry the camera above the middle and top parts of the rock; the bottom part can be photographed only from several ground positions. During the field survey, ground control points were installed and measured using a GPS with kinematic correction. These data were later used to georeference the generated point cloud. Free online services built on Structure from Motion (SfM) algorithms, as well as desktop software, were also tested to generate the relative point cloud and for further processing and analysis.

  10. Effects of Lateral Reversal on Recognition Memory for Photographs of Faces.

    ERIC Educational Resources Information Center

    McKelvie, Stuart J.

    1983-01-01

    Examined recognition memory for photographs of faces in four experiments using students and adults. Results supported a feature (rather than Gestalt) model of facial recognition in which the two sides of the face are different in its memory representation. (JAC)

  11. Familiar and unfamiliar face recognition in crested macaques (Macaca nigra)

    PubMed Central

    Micheletta, Jérôme; Whitehouse, Jamie; Parr, Lisa A.; Marshman, Paul; Engelhardt, Antje; Waller, Bridget M.

    2015-01-01

    Many species use facial features to identify conspecifics, which is necessary to navigate a complex social environment. The fundamental mechanisms underlying face processing are starting to be well understood in a variety of primate species. However, most studies focus on a limited subset of species tested with unfamiliar faces. As well as limiting our understanding of how widely distributed across species these skills are, this also limits our understanding of how primates process faces of individuals they know, and whether social factors (e.g. dominance and social bonds) influence how readily they recognize others. In this study, socially housed crested macaques voluntarily participated in a series of computerized matching-to-sample tasks investigating their ability to discriminate (i) unfamiliar individuals and (ii) members of their own social group. The macaques performed above chance on all tasks. Familiar faces were not easier to discriminate than unfamiliar faces. However, the subjects were better at discriminating higher ranking familiar individuals, but not unfamiliar ones. This suggests that our subjects applied their knowledge of their dominance hierarchies to the pictorial representation of their group mates. Faces of high-ranking individuals garner more social attention, and therefore might be more deeply encoded than other individuals. Our results extend the study of face recognition to a novel species, and consequently provide valuable data for future comparative studies. PMID:26064665

  12. Emotion recognition: the role of featural and configural face information.

    PubMed

    Bombari, Dario; Schmid, Petra C; Schmid Mast, Marianne; Birri, Sandra; Mast, Fred W; Lobmaier, Janek S

    2013-01-01

    Several studies have investigated the role of featural and configural information when processing facial identity. Much less is known about their contribution to emotion recognition. In this study, we addressed this issue by inducing either a featural or a configural processing strategy (Experiment 1) and by investigating the attentional strategies in response to emotional expressions (Experiment 2). In Experiment 1, participants identified emotional expressions in faces that were presented in three different versions (intact, blurred, and scrambled) and in two orientations (upright and inverted). Blurred faces contain mainly configural information, and scrambled faces contain mainly featural information. Inversion is known to selectively hinder configural processing. Analyses of the discriminability measure (A') and response times (RTs) revealed that configural processing plays a more prominent role in expression recognition than featural processing, but their relative contribution varies depending on the emotion. In Experiment 2, we qualified these differences between emotions by investigating the relative importance of specific features by means of eye movements. Participants had to match intact expressions with the emotional cues that preceded the stimulus. The analysis of eye movements confirmed that the recognition of different emotions relies on different types of information. While the mouth is important for the detection of happiness and fear, the eyes are more relevant for anger, fear, and sadness. PMID:23679155

  13. Combination of direct matching and collaborative representation for face recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Chongyang

    2013-06-01

    It has been shown that representation-based classification (RBC) can achieve high accuracy in face recognition. However, conventional RBC has a very high computational cost. Collaborative representation, proposed in [1], not only retains the advantages of RBC but is also computationally very efficient. In this paper, a combination of direct image matching and collaborative representation is proposed for face recognition. Experimental results show that the proposed method consistently classifies more accurately than collaborative representation alone. The underlying reason is that direct image matching and collaborative representation calculate the dissimilarity between the test sample and the training samples in different ways. As a result, the score obtained from direct image matching is highly complementary to the score obtained from collaborative representation; indeed, our analysis shows that the matching scores generated by the two methods always have a low correlation. This allows the proposed method to exploit more information for face recognition and to produce a better result.
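    One common way to combine two dissimilarity scores of this kind is a min-max-normalized weighted sum per class. The sketch below illustrates that fusion rule with a ridge-based collaborative representation; the paper's exact score combination may differ, and all names here are invented.

```python
import numpy as np

def fused_classify(train, labels, y, lam=0.01, w=0.5):
    """Fuse two per-class dissimilarity scores: direct image matching
    (min Euclidean distance) and the collaborative-representation
    residual, each min-max normalized before the weighted sum."""
    X = np.stack([im.ravel() for im in train], axis=1)
    t = y.ravel()
    c = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ t)
    classes = sorted(set(labels))
    direct, collab = [], []
    for k in classes:
        idx = [i for i, l in enumerate(labels) if l == k]
        # Score 1: distance to the closest raw training image of class k.
        direct.append(min(np.linalg.norm(t - X[:, i]) for i in idx))
        # Score 2: residual of t reconstructed from class k's columns.
        collab.append(np.linalg.norm(t - X[:, idx] @ c[idx]))

    def norm(s):
        s = np.asarray(s, float)
        rng = s.max() - s.min()
        return (s - s.min()) / rng if rng else np.zeros_like(s)

    fused = w * norm(direct) + (1 - w) * norm(collab)
    return classes[int(np.argmin(fused))]
```

    Because the two scores are weakly correlated, their sum tends to demote classes that look good under only one criterion.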

  14. Using Regression to Measure Holistic Face Processing Reveals a Strong Link with Face Recognition Ability

    ERIC Educational Resources Information Center

    DeGutis, Joseph; Wilmer, Jeremy; Mercado, Rogelio J.; Cohan, Sarah

    2013-01-01

    Although holistic processing is thought to underlie normal face recognition ability, widely discrepant reports have recently emerged about this link in an individual differences context. Progress in this domain may have been impeded by the widespread use of subtraction scores, which lack validity due to their contamination with control condition…

  15. Comparison of computer-based and optical face recognition paradigms

    NASA Astrophysics Data System (ADS)

    Alorf, Abdulaziz A.

    The main objectives of this thesis are to validate an improved principal components analysis (IPCA) algorithm on images; to design and simulate a digital model for image compression, face recognition, and image detection using a principal components analysis (PCA) algorithm and the IPCA algorithm; to design and simulate an optical model for face recognition and object detection using the joint transform correlator (JTC); to establish detection and recognition thresholds for each model; to compare the performance of the PCA algorithm with that of the IPCA algorithm in compression, recognition, and detection; and to compare the performance of the digital model with that of the optical model in recognition and detection. The MATLAB(c) software was used for simulating the models. PCA is a technique for identifying patterns in data and representing the data so as to highlight similarities and differences. Identifying patterns in high-dimensional data (more than three dimensions) is difficult because graphical representation of such data is impossible; PCA is therefore a powerful method for analyzing such data. IPCA is another statistical tool for identifying patterns in data; it uses information theory to improve on PCA. The joint transform correlator (JTC) is an optical correlator used for synthesizing a frequency-plane filter for coherent optical systems. The IPCA algorithm, in general, behaves better than the PCA algorithm in most applications. It is better than the PCA algorithm in image compression because it obtains higher compression, more accurate reconstruction, and faster processing speed with acceptable errors; in addition, it is better than the PCA algorithm in real-time image detection because it achieves the lowest error rate as well as remarkable speed. On the other hand, the PCA algorithm performs better than the IPCA algorithm in face recognition because it offers
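As a concrete illustration of the PCA step this record describes (compression by projection onto the top principal components, recognition by minimum distance in the compressed space), here is a minimal NumPy sketch. The function name and shapes are our own illustrative choices, not the thesis's implementation, and the IPCA variant is not reproduced here.

```python
import numpy as np

def pca_project(images, k):
    """Project flattened face images onto the top-k principal components.

    `images` is an (n_samples, n_pixels) array; returns the compressed
    coefficients and the reconstruction. A sketch of plain PCA only.
    """
    mean = images.mean(axis=0)
    centered = images - mean
    # SVD of the centered data gives the principal axes directly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:k]                  # (k, n_pixels) principal axes
    coeffs = centered @ components.T     # compressed representation
    reconstructed = coeffs @ components + mean
    return coeffs, reconstructed

# Recognition would then assign a probe image to the gallery face whose
# coefficient vector is nearest (a minimum-distance classifier).
```

Higher `k` trades compression ratio for reconstruction accuracy, which is the trade-off the thesis measures.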

  16. A motivational determinant of facial emotion recognition: regulatory focus affects recognition of emotions in faces.

    PubMed

    Sassenrath, Claudia; Sassenberg, Kai; Ray, Devin G; Scheiter, Katharina; Jarodzka, Halszka

    2014-01-01

    Two studies examined an unexplored motivational determinant of facial emotion recognition: observer regulatory focus. It was predicted that a promotion focus would enhance facial emotion recognition relative to a prevention focus, because the attentional strategies associated with a promotion focus enhance performance on well-learned or innate tasks, such as facial emotion recognition. In Study 1, a promotion or a prevention focus was experimentally induced, and better facial emotion recognition was observed under a promotion focus than under a prevention focus. In Study 2, individual differences in chronic regulatory focus were assessed, and attention allocation was measured using eye tracking during the facial emotion recognition task. Results indicated that the positive relation between a promotion focus and facial emotion recognition is mediated by shorter fixation duration on the face, which reflects a pattern of attention allocation matched to the eager strategy of a promotion focus (i.e., striving to make hits). A prevention focus had an impact on neither perceptual processing nor facial emotion recognition. Taken together, these findings demonstrate important mechanisms and consequences of observer motivational orientation for facial emotion recognition. PMID:25380247

  17. The Effect of Inversion on Face Recognition in Adults with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Hedley, Darren; Brewer, Neil; Young, Robyn

    2015-01-01

    Face identity recognition has widely been shown to be impaired in individuals with autism spectrum disorders (ASD). In this study we examined the influence of inversion on face recognition in 26 adults with ASD and 33 age and IQ matched controls. Participants completed a recognition test comprising upright and inverted faces. Participants with ASD…

  18. The 3-D image recognition based on fuzzy neural network technology

    NASA Technical Reports Server (NTRS)

    Hirota, Kaoru; Yamauchi, Kenichi; Murakami, Jun; Tanaka, Kei

    1993-01-01

    A three-dimensional stereoscopic image recognition system based on fuzzy neural network technology was developed. The system consists of three parts: a preprocessing part, a feature extraction part, and a matching part. Images from two CCD color cameras are fed to the preprocessing part, where several operations, including an RGB-HSV transformation, are performed. A multilayer perceptron is used for line detection in the feature extraction part. A fuzzy matching technique is then introduced in the matching part. The system is realized on a Sun SPARCstation with a special image-input hardware system. An experimental result on bottle images is also presented.
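The RGB-HSV transformation mentioned as a preprocessing step can be sketched per pixel with Python's standard-library `colorsys` module. This is an illustrative stand-in; the original system's exact transform and value scaling are not specified in the abstract.

```python
import colorsys

def rgb_image_to_hsv(pixels):
    """Convert a list of (r, g, b) tuples in [0, 1] to (h, s, v) tuples.

    Sketch of the RGB-to-HSV preprocessing step using the stdlib
    colorsys conversion; hue is returned in [0, 1).
    """
    return [colorsys.rgb_to_hsv(r, g, b) for (r, g, b) in pixels]

# Pure red maps to hue 0 with full saturation and full value.
```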

  19. Genome-Wide Identification and 3D Modeling of Proteins involved in DNA Damage Recognition and Repair (Final Report)

    SciTech Connect

    Abagyan, Ruben; An, Jianghong

    2005-08-12

    DNA Damage Recognition and Repair (DDR&R) proteins play a critical role in cellular responses to low-dose radiation and are associated with cancer. We have performed a systematic, genome-wide computational analysis of genomic data for human genes involved in the DDR&R process. The significant achievements of this project include: 1) Construction of the computational pipeline for searching DDR&R genes, and building and validation of 3D models of proteins involved in DDR&R; 2) Functional and structural annotation of the 3D models, generation of comprehensive lists of suggested knock-out mutations, development of a method to predict the effects of mutations, and large-scale testing of technology to identify novel small binding pockets in protein structures, leading to new DDR&R inhibitor strategies; 3) Improvements to macromolecular docking technology (see the CAPRI 1-3 and 4-5 results); 4) Development of a new algorithm for improved analysis of high-density oligonucleotide arrays for gene expression profiling; 5) Construction and maintenance of the DNA Damage Recognition and Repair Database; and 6) Production of 15 research papers (12 published and 3 in preparation).

  20. Genome-Wide Identification and 3D Modeling of Proteins involved in DNA Damage Recognition and Repair (Final Report)

    SciTech Connect

    Ruben A. Abagyan, PhD

    2004-04-15

    OAK-B135 DNA Damage Recognition and Repair (DDR and R) proteins play a critical role in cellular responses to low-dose radiation and are associated with cancer. The authors have performed a systematic, genome-wide computational analysis of genomic data for human genes involved in the DDR and R process. The significant achievements of this project include: (1) Construction of the computational pipeline for searching DDR and R genes, and building and validation of 3D models of proteins involved in DDR and R; (2) Functional and structural annotation of the 3D models and generation of comprehensive lists of suggested knock-out mutations; (3) Important improvement of macromolecular docking technology and its application to predicting DNA-protein complex conformations; (4) Development of a new algorithm for improved analysis of high-density oligonucleotide arrays for gene expression profiling; (5) Construction and maintenance of the DNA Damage Recognition and Repair Database; and (6) Production of 14 research papers (10 published and 4 in preparation).

  1. Presentation attack detection for face recognition using light field camera.

    PubMed

    Raghavendra, R; Raja, Kiran B; Busch, Christoph

    2015-03-01

    The vulnerability of face recognition systems is a growing concern that has drawn interest from both the academic and research communities. Despite the availability of a broad range of face presentation attack detection (PAD) (or countermeasure, or antispoofing) schemes, no single superior PAD technique exists, owing to the evolution of sophisticated presentation attacks (or spoof attacks). In this paper, we present a new perspective on face presentation attack detection by introducing the light field camera (LFC). Because an LFC records the direction of each incoming ray in addition to its intensity, it exhibits the unique characteristic of rendering multiple depth (or focus) images in a single capture. We therefore present a novel approach that explores the variation of focus between the multiple depth (or focus) images rendered by the LFC, which in turn can be used to reveal presentation attacks. To this end, we first collect a new face artefact database, acquired with an LFC, that comprises 80 subjects. Face artefacts are generated by simulating two widely used attacks: photo print and electronic screen. Extensive experiments carried out on the light field face artefact database reveal the outstanding performance of the proposed PAD scheme when benchmarked against various well-established state-of-the-art schemes. PMID:25622320
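The idea of exploiting focus variation across an LFC's rendered depth images can be illustrated with a standard sharpness score. The variance-of-Laplacian measure below is a common focus metric chosen for illustration; it is an assumption on our part, not the paper's actual PAD feature. A flat printed photo would show little focus variation across depth renderings, whereas a real face would not.

```python
import numpy as np

def laplacian_focus_measure(img):
    """Variance of a 4-neighbour Laplacian as a simple sharpness score.

    Illustrative focus measure: comparing this score across the multiple
    depth (focus) images rendered from one LFC capture would quantify
    focus variation. `img` is a 2D float array.
    """
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

# A perfectly flat (defocused) region scores 0; textured, in-focus
# regions score higher.
```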

  2. Occluded human recognition for a leader following system using 3D range and image data in forest environment

    NASA Astrophysics Data System (ADS)

    Cho, Kuk; Ilyas, Muhammad; Baeg, Seung-Ho; Park, Sangdeok

    2014-06-01

    This paper describes an occluded-target recognition and tracking method for a leader-following system that fuses 3D range and image data acquired from a 3D light detection and ranging (LIDAR) sensor and a color camera installed on an autonomous vehicle in a forest environment. During 3D data processing, distance-based clustering has an intrinsic problem with close encounters. In the tracking phase, we divide the object tracking process into three phases based on the occlusion scenario: a before-occlusion (BO) phase, a partial or full occlusion phase, and an after-occlusion (AO) phase. To improve data association performance, we use the camera's rich information to find correspondences among objects during these three phases. We solve the correspondence problem using the color features of human objects with the sum of squared distances (SSD) and the normalized cross-correlation (NCC); the features are integrated with windows derived from Harris corners. Experimental results for leader following on an autonomous vehicle equipped with LIDAR and a camera are shown, demonstrating improved data association in a multiple-object tracking system.
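The NCC score used above for matching color patches across occlusion phases can be sketched in a few lines of NumPy. This is the textbook zero-mean NCC, shown for illustration; patch extraction around Harris corners is omitted.

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Zero-mean normalized cross-correlation of two equal-sized patches.

    Returns a value in [-1, 1]; 1.0 for identical patches. The SSD
    counterpart would simply be np.sum((patch_a - patch_b) ** 2).
    """
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)

# NCC is invariant to brightness offset and gain, which makes it a
# robust choice for re-associating a target after occlusion.
```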

  3. Effects of compression and individual variability on face recognition performance

    NASA Astrophysics Data System (ADS)

    McGarry, Delia P.; Arndt, Craig M.; McCabe, Steven A.; D'Amato, Donald P.

    2004-08-01

    The Enhanced Border Security and Visa Entry Reform Act of 2002 requires that the Visa Waiver Program be available only to countries that have a program to issue to their nationals machine-readable passports incorporating biometric identifiers complying with applicable standards established by the International Civil Aviation Organization (ICAO). In June 2002, the New Technologies Working Group of ICAO unanimously endorsed the use of face recognition (FR) as the globally interoperable biometric for machine-assisted identity confirmation with machine-readable travel documents (MRTDs), although Member States may elect to use fingerprint and/or iris recognition as additional biometric technologies. The means and formats are still being developed through which biometric information might be stored in the constrained space of integrated circuit chips embedded within travel documents. Such information will be stored in an open, yet unalterable and very compact format, probably as digitally signed and efficiently compressed images. The objective of this research is to characterize the many factors that affect FR system performance with respect to the legislated mandates concerning FR. A photograph acquisition environment and a commercial face recognition system have been installed at Mitretek, and over 1,400 images have been collected of volunteers. The image database and FR system are being used to analyze the effects of lossy image compression, individual differences, such as eyeglasses and facial hair, and the acquisition environment on FR system performance. Images are compressed by varying ratios using JPEG2000 to determine the trade-off points between recognition accuracy and compression ratio. The various acquisition factors that contribute to differences in FR system performance among individuals are also being measured. 
The results of this study will be used to refine and test efficient face image interchange standards that ensure highly accurate recognition, both

  4. Markov Network-Based Unified Classifier for Face Recognition.

    PubMed

    Hwang, Wonjun; Kim, Junmo

    2015-11-01

    In this paper, we propose a novel unifying framework using a Markov network to learn the relationships among multiple classifiers. In face recognition, we assume that we have several complementary classifiers available, and assign observation nodes to the features of a query image and hidden nodes to those of gallery images. Under the Markov assumption, we connect each hidden node to its corresponding observation node and the hidden nodes of neighboring classifiers. For each observation-hidden node pair, we collect the set of gallery candidates most similar to the observation instance, and capture the relationship between the hidden nodes in terms of a similarity matrix among the retrieved gallery images. Posterior probabilities in the hidden nodes are computed using the belief propagation algorithm, and we use marginal probability as the new similarity value of the classifier. The novelty of our proposed framework lies in the method that considers classifier dependence using the results of each neighboring classifier. We present the extensive evaluation results for two different protocols, known and unknown image variation tests, using four publicly available databases: 1) the Face Recognition Grand Challenge ver. 2.0; 2) XM2VTS; 3) BANCA; and 4) Multi-PIE. The result shows that our framework consistently yields improved recognition rates in various situations. PMID:26219095

  5. 3D scene's object detection and recognition using depth layers and SIFT-based machine learning

    NASA Astrophysics Data System (ADS)

    Kounalakis, T.; Triantafyllidis, G. A.

    2011-09-01

    This paper presents a novel system that fuses efficient, state-of-the-art techniques from stereo vision and machine learning, aiming at object detection and recognition. To this end, the system initially creates depth maps by employing the Graph-Cut technique. The depth information is then used for object detection by separating the objects from the whole scene. Next, the Scale-Invariant Feature Transform (SIFT) provides the system with unique feature key-points for each object, which are used to train an Artificial Neural Network (ANN). The system is then able to classify and recognize the nature of these objects, creating knowledge from the real world.

  6. The 3D Recognition, Generation, Fusion, Update and Refinement (RG4) Concept

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Cheeseman, Peter; Smelyanskyi, Vadim N.; Kuehnel, Frank; Morris, Robin D.; Norvig, Peter (Technical Monitor)

    2001-01-01

    This paper describes an active (real-time) recognition strategy whereby information is inferred iteratively across several viewpoints in descent imagery. We show how we use inverse theory within the context of parametric model generation, namely height and spectral reflection functions, to generate model assertions. Using this strategy in an active context implies that, from every viewpoint, the proposed system must refine its hypotheses, taking into account the image and the effect of uncertainties as well. The proposed system employs probabilistic solutions to the problem of iteratively merging information (images) from several viewpoints. This involves feeding the posterior distribution from all previous images as a prior for the next view. Novel approaches are developed to accelerate the inversion search using new statistical implementations and to reduce model complexity using foveated vision. Foveated vision refers to imagery in which the resolution varies across the image. In this paper, we allow the model to be foveated, where the highest-resolution region is called the foveation region. Typically, the images will have dynamic control of the location of the foveation region; for descent imagery in the Entry, Descent, and Landing (EDL) process, it is possible to have more than one foveation region. This research initiative is directed towards descent imagery in connection with NASA's EDL applications. Three-Dimensional Model Recognition, Generation, Fusion, Update, and Refinement (RGFUR or RG4) for height and spectral reflection characteristics is in focus for various reasons, one of which is the prospect that its interpretation will provide real-time active vision for automated EDL.

  7. Thermal-to-visible face recognition using multiple kernel learning

    NASA Astrophysics Data System (ADS)

    Hu, Shuowen; Gurram, Prudhvi; Kwon, Heesung; Chan, Alex L.

    2014-06-01

    Recognizing faces acquired in the thermal spectrum from a gallery of visible face images is a desired capability for the military and homeland security, especially for nighttime surveillance and intelligence gathering. However, thermal-to-visible face recognition is a highly challenging problem, due to the large modality gap between thermal and visible imaging. In this paper, we propose a thermal-to-visible face recognition approach based on multiple kernel learning (MKL) with support vector machines (SVMs). We first subdivide the face into non-overlapping spatial regions, or blocks, using a method based on coalitional game theory; for comparison purposes, we also investigate uniform spatial subdivisions. Following this subdivision, histogram of oriented gradients (HOG) features are extracted from each block and used to compute a kernel for each region. We apply sparse multiple kernel learning (SMKL), an MKL-based approach that learns a set of sparse kernel weights, as well as the decision function of a one-vs-all SVM classifier for each of the subjects in the gallery. We also apply equal (non-sparse) kernel weights and obtain one-vs-all SVM models for the same subjects. Only visible images of each subject are used for MKL training, while thermal images are used as probe images during testing. With the subdivision generated by game theory, we achieved a Rank-1 identification rate of 50.7% for SMKL and 93.6% for equal kernel weighting using a multimodal dataset of 65 subjects. With uniform subdivisions, we achieved a Rank-1 identification rate of 88.3% for SMKL, but 92.7% for equal kernel weighting.
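The per-block HOG features described above can be approximated by a gradient-orientation histogram. The sketch below is a deliberately simplified stand-in (no cell grid, block normalization, or bin interpolation); the bin count and gradient operator are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

def orientation_histogram(block, n_bins=9):
    """Unsigned gradient-orientation histogram for one image block.

    Each pixel votes its gradient magnitude into one of n_bins
    orientation bins over [0, pi); the histogram is L1-normalized.
    """
    gy, gx = np.gradient(block.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    for b, m in zip(bins.ravel(), mag.ravel()):
        hist[b] += m
    return hist / (hist.sum() + 1e-12)

# In the MKL setting above, one kernel per spatial block would then be
# computed from such per-block feature vectors.
```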

  8. Log-Gabor Weber descriptor for face recognition

    NASA Astrophysics Data System (ADS)

    Li, Jing; Sang, Nong; Gao, Changxin

    2015-09-01

    The Log-Gabor transform, which is suitable for analyzing gradually changing data such as iris and face images, has been widely used in image processing, pattern recognition, and computer vision. In most cases, only the magnitude or phase information of the Log-Gabor transform is considered. However, the complementary effect of combining magnitude and phase information simultaneously for an image-feature extraction problem has not been systematically explored in existing works. We propose a local image descriptor for face recognition, called the Log-Gabor Weber descriptor (LGWD). The novelty of our LGWD is twofold: (1) To fully utilize the information from the magnitude and phase features of the multiscale, multi-orientation Log-Gabor transform, we apply the Weber local binary pattern operator to each transform response. (2) The encoded Log-Gabor magnitude and phase information are fused at the feature level using a kernel canonical correlation analysis strategy, considering that feature-level information fusion is effective when the modalities are correlated. Experimental results on the AR, Extended Yale B, and UMIST face databases, compared with those available from recent experiments reported in the literature, show that our descriptor yields better performance than state-of-the-art methods.

  9. Exact asymptotic statistics of the n-edged face in a 3D Poisson-Voronoi tessellation

    NASA Astrophysics Data System (ADS)

    Hilhorst, H. J.

    2016-05-01

    This work considers the 3D Poisson-Voronoi tessellation. It investigates the joint probability distribution π_n(L) for an arbitrarily selected cell face to be n-edged and for the distance between the seeds of the two adjacent cells to equal 2L. An exact expression for this quantity is derived, valid in the limit n → ∞ with n^(1/6)L fixed. The leading-order correction term is determined. Good agreement with earlier Monte Carlo data is obtained. The cell face is shown to be surrounded by a three-dimensional domain that is empty of seeds and is the union of n balls; it is pumpkin-shaped and analogous to the flower of the 2D Voronoi cell. For n → ∞ this domain tends towards a torus of equal major and minor radii. The radii scale as n^(1/3), in agreement with earlier heuristic work. A detailed understanding is achieved of several other statistical properties of the n-edged cell face.

  10. Driver face recognition as a security and safety feature

    NASA Astrophysics Data System (ADS)

    Vetter, Volker; Giefing, Gerd-Juergen; Mai, Rudolf; Weisser, Hubert

    1995-09-01

    We present a driver face recognition system for comfortable access control and individual settings of automobiles. The primary goals are the prevention of car thefts and heavy accidents caused by unauthorized use (joy-riders), as well as the increase of safety through optimal settings, e.g. of the mirrors and the seat position. The person sitting on the driver's seat is observed automatically by a small video camera in the dashboard. All he has to do is to behave cooperatively, i.e. to look into the camera. A classification system validates his access. Only after a positive identification, the car can be used and the driver-specific environment (e.g. seat position, mirrors, etc.) may be set up to ensure the driver's comfort and safety. The driver identification system has been integrated in a Volkswagen research car. Recognition results are presented.

  11. Recognition of 3-D symmetric objects from range images in automated assembly tasks

    NASA Technical Reports Server (NTRS)

    Alvertos, Nicolas; Dcunha, Ivan

    1990-01-01

    A new technique is presented for the three-dimensional recognition of symmetric objects from range images. Beginning from the implicit representation of quadrics, a set of ten coefficients is determined for symmetric objects such as spheres, cones, cylinders, ellipsoids, and parallelepipeds. Instead of fitting these ten coefficients to smooth surface patches in the traditional manner of determining curvatures, a new approach based on two-dimensional geometry is used. For each symmetric object, a unique set of two-dimensional curves is obtained from the various angles at which the object is intersected with a plane. Using the same ten coefficients obtained earlier and based on the discriminant method, each of these curves is classified as a parabola, circle, ellipse, or hyperbola. Each symmetric object is found to possess a unique set of these two-dimensional curves by which it can be differentiated from the others. It is shown that instead of using the three-dimensional discriminant, which involves evaluating the rank of its matrix, it is sufficient to use the two-dimensional discriminant, which requires only three arithmetic operations.
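The two-dimensional discriminant test described above is the classical conic classification. For a conic A·x² + B·xy + C·y² + D·x + E·y + F = 0, the sign of B² − 4AC determines the curve type; a minimal sketch (degenerate conics ignored):

```python
def classify_conic(A, B, C):
    """Classify A*x^2 + B*x*y + C*y^2 + D*x + E*y + F = 0 by the sign
    of its discriminant B^2 - 4AC (degenerate cases not handled)."""
    disc = B * B - 4.0 * A * C
    if disc < 0:
        # Negative discriminant: ellipse; a circle when B = 0 and A = C.
        return "circle" if (B == 0 and A == C) else "ellipse"
    if disc == 0:
        return "parabola"
    return "hyperbola"

# The "three arithmetic operations" in the abstract are the two
# multiplications and one subtraction needed to form B^2 - 4AC.
```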

  12. A roadmap to global illumination in 3D scenes: solutions for GPU object recognition applications

    NASA Astrophysics Data System (ADS)

    Picos, Kenia; Díaz-Ramírez, Victor H.; Tapia, Juan J.

    2014-09-01

    The interaction of light with matter is of remarkable complexity. Adequate modeling of global illumination has been an extensively studied topic since the beginning of computer graphics, and it remains an unsolved problem. The rendering equation for global illumination is based on the refraction and reflection of light in interaction with matter within an environment. This physical process has high computational complexity when implemented on a digital computer. The appearance of an object depends on light interactions with the surface of the material, such as emission, scattering, and absorption. Several image-synthesis methods have been used to realistically render the appearance of light incident on an object. Recent global illumination algorithms employ mathematical models and computational strategies that improve the efficiency of the simulation. This work reviews the state of the art in global illumination algorithms and focuses on the efficiency of the solution when implemented on a graphics processing unit. A reliable system is developed to simulate realistic scenes in the context of real-time object recognition under different lighting conditions. Computer simulation results are presented and discussed in terms of discrimination capability and robustness to additive noise when considering several lighting-model reflections and multiple light sources.

  13. Facial emotion recognition deficits: The new face of schizophrenia.

    PubMed

    Behere, Rishikesh V

    2015-01-01

    Schizophrenia has classically been described as having positive, negative, and cognitive symptom dimensions. Emerging evidence strongly supports a fourth dimension of social cognitive symptoms, with facial emotion recognition deficits (FERD) representing a new face in our understanding of this complex disorder. FERD are among the important deficits described in schizophrenia and could be trait markers for the disorder. FERD are associated with socio-occupational dysfunction and hence are of important clinical relevance. This review discusses FERD in schizophrenia, challenges in their assessment in our cultural context, and their implications for understanding neurobiological mechanisms and for clinical applications. PMID:26600574

  15. Gender and ethnicity specific generic elastic models from a single 2D image for novel 2D pose face synthesis and recognition.

    PubMed

    Heo, Jingu; Savvides, Marios

    2012-12-01

    In this paper, we propose a novel method for generating a realistic 3D human face from a single 2D face image for the purpose of synthesizing new 2D face images at arbitrary poses using gender and ethnicity specific models. We employ the Generic Elastic Model (GEM) approach, which elastically deforms a generic 3D depth-map based on the sparse observations of an input face image in order to estimate the depth of the face image. Particularly, we show that Gender and Ethnicity specific GEMs (GE-GEMs) can approximate the 3D shape of the input face image more accurately, achieving a better generalization of 3D face modeling and reconstruction compared to the original GEM approach. We qualitatively validate our method using publicly available databases by showing each reconstructed 3D shape generated from a single image and new synthesized poses of the same person at arbitrary angles. For quantitative comparisons, we compare our synthesized results against 3D scanned data and also perform face recognition using synthesized images generated from a single enrollment frontal image. We obtain promising results for handling pose and expression changes based on the proposed method. PMID:22201062

  16. Magnetic fields end-face effect investigation of HTS bulk over PMG with 3D-modeling numerical method

    NASA Astrophysics Data System (ADS)

    Qin, Yujie; Lu, Yiyun

    2015-09-01

    In this paper, the magnetic-field end-face effect of a high-temperature superconducting (HTS) bulk over a permanent magnetic guideway (PMG) is investigated with a 3D-modeling numerical method. The electromagnetic behavior of the bulk is simulated using the finite element method (FEM). The framework is formulated with the magnetic field vector method (H-method). A superconducting levitation system composed of one rectangular HTS bulk and one infinitely long PMG is successfully investigated using the proposed method. The simulation results show that for an HTS bulk of finite geometry, even though the applied magnetic field is distributed only in the x-y plane, a magnetic field component Hz along the z-axis can be observed inside the HTS bulk.

  17. Face recognition: Eigenface, elastic matching, and neural nets

    SciTech Connect

    Zhang, J.; Yan, Y.; Lades, M.

    1997-09-01

    This paper is a comparative study of three recently proposed algorithms for face recognition: eigenface, autoassociation and classification neural nets, and elastic matching. After these algorithms were analyzed under a common statistical decision framework, they were evaluated experimentally on four individual databases, each with a moderate number of subjects, and on a combined database with more than a hundred different subjects. Analysis and experimental results indicate that the eigenface algorithm, which is essentially a minimum-distance classifier, works well when lighting variation is small; its performance deteriorates significantly as lighting variation increases. The elastic matching algorithm, on the other hand, is insensitive to lighting, face position, and expression variations and is therefore more versatile. The performance of the autoassociation and classification nets is upper-bounded by that of the eigenface algorithm but is more difficult to achieve in practice.

  18. Effects of distance on face recognition: implications for eyewitness identification.

    PubMed

    Lampinen, James Michael; Erickson, William Blake; Moore, Kara N; Hittson, Aaron

    2014-12-01

    Eyewitnesses sometimes view faces from a distance, but little research has examined the accuracy of witnesses as a function of distance. The purpose of the present project is to examine the relationship between identification accuracy and distance under carefully controlled conditions. This is one of the first studies to examine the ability to recognize the faces of strangers at a distance under free-field conditions. Participants viewed eight live human targets, displayed at one of six outdoor distances varying between 5 and 40 yards. Participants were then shown 16 photographs: 8 of the previously viewed targets and 8 of nonviewed foils that matched a verbal description of the target counterpart. Participants rated their confidence in having seen or not having seen each individual on an 8-point scale. Long distances were associated with poor recognition memory and response bias shifts. PMID:24820456

  19. Quality labeled faces in the wild (QLFW): a database for studying face recognition in real-world environments

    NASA Astrophysics Data System (ADS)

    Karam, Lina J.; Zhu, Tong

    2015-03-01

    The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.

  20. A Comparative Study of 2D PCA Face Recognition Method with Other Statistically Based Face Recognition Methods

    NASA Astrophysics Data System (ADS)

    Senthilkumar, R.; Gnanamurthy, R. K.

    2015-07-01

    In this paper, two-dimensional principal component analysis (2D PCA) is compared with other algorithms used for image representation and face recognition, such as 1D PCA, Fisher discriminant analysis (FDA), independent component analysis (ICA), and kernel PCA (KPCA). As opposed to PCA, 2D PCA is based on 2D image matrices rather than 1D vectors, so the image matrix does not need to be transformed into a vector prior to feature extraction. Instead, an image covariance matrix is constructed directly from the original image matrices, and its eigenvectors are derived for image feature extraction. To test 2D PCA and evaluate its performance, a series of experiments is performed on three face image databases: the ORL, Senthil, and Yale face databases. The recognition rate across all trials was higher using 2D PCA than PCA, FDA, ICA, or KPCA. The experimental results also indicate that the extraction of image features is computationally more efficient using 2D PCA than PCA.
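The image covariance matrix that distinguishes 2D PCA from 1D PCA can be sketched directly. For images A_i of size h × w with mean image Ā, 2D PCA forms G = (1/n) Σ (A_i − Ā)ᵀ(A_i − Ā), a w × w matrix, and projects each image onto the top eigenvectors of G. Function names are ours for illustration:

```python
import numpy as np

def image_covariance(images):
    """Image covariance matrix G = (1/n) * sum_i (A_i - mean)^T (A_i - mean).

    `images` is an (n, h, w) stack of 2D image matrices; no vectorization
    of the images is needed, which is the point of 2D PCA.
    """
    mean = images.mean(axis=0)
    n, h, w = images.shape
    G = np.zeros((w, w))
    for A in images:
        d = A - mean
        G += d.T @ d
    return G / n

def projection_axes(G, k):
    """Top-k eigenvectors of G; 2D PCA features are Y_i = A_i @ axes."""
    vals, vecs = np.linalg.eigh(G)       # ascending eigenvalues
    return vecs[:, ::-1][:, :k]          # take the largest k
```

Each image is thus reduced to an h × k feature matrix, rather than a single long vector as in 1D PCA, which is why the covariance matrix stays small (w × w) and the method is computationally cheaper.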

  1. Simultaneous Versus Sequential Presentation in Testing Recognition Memory for Faces.

    PubMed

    Finley, Jason R; Roediger, Henry L; Hughes, Andrea D; Wahlheim, Christopher N; Jacoby, Larry L

    2015-01-01

Three experiments examined the issue of whether faces could be better recognized in a simultaneous test format (2-alternative forced choice [2AFC]) or a sequential test format (yes-no). All experiments showed that when target faces were present in the test, the simultaneous procedure led to superior performance (area under the ROC curve), whether lures were high or low in similarity to the targets. However, when a target-absent condition was used in which no lures resembled the targets but the lures were similar to each other, the simultaneous procedure yielded higher false alarm rates (Experiments 2 and 3) and worse overall performance (Experiment 3). This pattern persisted even when we excluded responses that participants opted to withhold rather than volunteer. We conclude that for the basic recognition procedures used in these experiments, simultaneous presentation of alternatives (2AFC) generally leads to better discriminability than does sequential presentation (yes-no) when a target is among the alternatives. However, our results also show that the opposite can occur when there is no target among the alternatives. An important future step is to see whether these patterns extend to more realistic eyewitness lineup procedures. The pictures used in the experiment are available online at http://www.press.uillinois.edu/journals/ajp/media/testing_recognition/. PMID:26255438

  2. A blur-robust descriptor with applications to face recognition.

    PubMed

    Gopalan, Raghuraman; Taheri, Sima; Turaga, Pavan; Chellappa, Rama

    2012-06-01

Understanding the effect of blur is an important problem in unconstrained visual analysis. We address this problem in the context of image-based recognition by a fusion of image-formation models and differential geometric tools. First, we discuss the space spanned by blurred versions of an image and then, under certain assumptions, provide a differential geometric analysis of that space. More specifically, we create a subspace resulting from convolution of an image with a complete set of orthonormal basis functions of a prespecified maximum size (that can represent an arbitrary blur kernel within that size), and show that the corresponding subspaces created from a clean image and its blurred versions are equal under the ideal case of zero noise and some assumptions on the properties of blur kernels. We then study the practical utility of this subspace representation for the problem of direct recognition of blurred faces by viewing the subspaces as points on the Grassmann manifold and present methods to perform recognition for cases where the blur is both homogeneous and spatially varying. We empirically analyze the effect of noise, as well as the presence of other facial variations between the gallery and probe images, and provide comparisons with existing approaches on standard data sets. PMID:22231594
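The subspace idea above can be illustrated with a toy numpy sketch. Under the simplifying assumption of circular convolution, the delta kernels of a given s x s support form a complete orthonormal basis for blur kernels of that support, so the blur subspace of an image is spanned by its s*s circular shifts — and any image blurred by a kernel of that support lies exactly in that subspace. Sizes, seed, and the averaging kernel are arbitrary choices for the demonstration, not the paper's settings.

```python
import numpy as np

def blur_subspace(img, s):
    """Orthonormal basis for the span of img convolved with every delta
    kernel of s x s support; with circular convolution these convolutions
    are just the s*s circular shifts of the image."""
    cols = [np.roll(img, (dy, dx), axis=(0, 1)).ravel()
            for dy in range(s) for dx in range(s)]
    Q, _ = np.linalg.qr(np.stack(cols, axis=1))
    return Q

rng = np.random.default_rng(1)
clean = rng.standard_normal((16, 16))
kernel = np.ones((3, 3)) / 9.0                       # an arbitrary 3 x 3 blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(clean) *
                               np.fft.fft2(kernel, s=clean.shape)))
Q = blur_subspace(clean, 3)                          # 256 x 9 orthonormal basis
b = blurred.ravel()
residual = b - Q @ (Q.T @ b)                         # projection residual
print(np.allclose(residual, 0.0))                    # blurred image lies in the subspace
```

Comparing such subspaces from a probe and a gallery image (e.g., via principal angles) is then a Grassmann-manifold distance, which is the geometric viewpoint the paper develops.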

  3. Infrared face recognition based on binary particle swarm optimization and SVM-wrapper model

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Liu, Guodong

    2015-10-01

Infrared facial imaging, being light-independent and not vulnerable to changes in facial skin, expression and posture, can avoid or limit the drawbacks of face recognition in visible light. Robust feature selection and representation is a key issue in infrared face recognition research. This paper proposes a novel infrared face recognition method based on local binary patterns (LBP), which can improve the robustness of infrared face recognition under different environmental conditions. How to make full use of the discriminative ability of LBP patterns is an important problem. A search algorithm combining binary particle swarm optimization with an SVM wrapper is used to find the most discriminative subset of LBP features. Experimental results show that the proposed method outperforms traditional LBP-based infrared face recognition methods and significantly improves recognition performance.
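For readers unfamiliar with the LBP descriptor that the wrapper selects from, here is a minimal numpy sketch of the basic 8-neighbour operator and the histogram descriptor it produces. The binary-PSO/SVM feature selection that is the paper's contribution is not shown; the input is synthetic.

```python
import numpy as np

def lbp_codes(gray):
    """Basic 8-neighbour LBP: each interior pixel becomes an 8-bit code,
    one bit per neighbour that is >= the centre pixel."""
    c = gray[1:-1, 1:-1]
    neighbours = [gray[:-2, :-2], gray[:-2, 1:-1], gray[:-2, 2:],
                  gray[1:-1, 2:], gray[2:, 2:],    gray[2:, 1:-1],
                  gray[2:, :-2],  gray[1:-1, :-2]]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        code |= (n >= c).astype(np.uint8) << bit
    return code

def lbp_histogram(gray):
    """256-bin normalized LBP histogram used as the face descriptor."""
    h = np.bincount(lbp_codes(gray).ravel(), minlength=256).astype(float)
    return h / h.sum()

rng = np.random.default_rng(2)
face = rng.integers(0, 256, size=(32, 32))           # synthetic infrared patch
hist = lbp_histogram(face)
print(hist.shape)                                    # (256,)
```

Feature selection then amounts to choosing a subset of these 256 histogram bins (or of per-region histograms) that maximizes SVM classification accuracy.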

  4. Examination of Consumption of Processing Performance in Face Recognition on Working Memory

    NASA Astrophysics Data System (ADS)

    Yonemura, Keiichi; Sugiura, Akihiko

In this study, we examined the consumption of processing resources by face recognition in working memory. Selective interference was induced with a dual-task method using matching-to-sample, and we assessed the processing delay and the percentage of correct responses on the concurrent task. As recognition categories, we used simple figures, language, objects, scenes, and faces (chosen with everyday life and vascular dementia in mind). The experimental results showed that face recognition consumes the most processing resources of all the categories in working memory. Using the relative processing-resource consumption of face recognition and the other recognition categories in working memory, we hope to assess vascular dementia involving frontal lobe dysfunction.

  5. Ambient temperature normalization for infrared face recognition based on the second-order polynomial model

    NASA Astrophysics Data System (ADS)

    Wang, Zhengzi

    2015-08-01

The influence of ambient temperature is a big challenge to robust infrared face recognition. This paper proposes a new ambient temperature normalization algorithm to improve the performance of infrared face recognition under variable ambient temperatures. Based on statistical regression theory, a second-order polynomial model is learned to describe the impact of ambient temperature on the infrared face image. The infrared image is then normalized to a reference ambient temperature using the second-order polynomial model. Finally, this normalization method is applied to infrared face recognition to verify its efficiency. The experiments demonstrate that the proposed temperature normalization method is feasible and can significantly improve the robustness of infrared face recognition.

  6. Low Resolution Face Recognition Across Variations in Pose and Illumination.

    PubMed

    Mudunuri, Sivaram Prasad; Biswas, Soma

    2016-05-01

    We propose a completely automatic approach for recognizing low resolution face images captured in uncontrolled environment. The approach uses multidimensional scaling to learn a common transformation matrix for the entire face which simultaneously transforms the facial features of the low resolution and the high resolution training images such that the distance between them approximates the distance had both the images been captured under the same controlled imaging conditions. Stereo matching cost is used to obtain the similarity of two images in the transformed space. Though this gives very good recognition performance, the time taken for computing the stereo matching cost is significant. To overcome this limitation, we propose a reference-based approach in which each face image is represented by its stereo matching cost from a few reference images. Experimental evaluation on the real world challenging databases and comparison with the state-of-the-art super-resolution, classifier based and cross modal synthesis techniques show the effectiveness of the proposed algorithm. PMID:27046843
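The reference-based speed-up mentioned above — describe each face by its matching costs to a few fixed reference images, then compare those short cost vectors — can be sketched generically. The Euclidean distance below is only a stand-in for the expensive stereo matching cost, and all data are synthetic.

```python
import numpy as np

def reference_representation(feature, references, dist):
    """Represent a face by its vector of matching costs to a small fixed
    set of reference faces; recognition then compares these short vectors
    instead of computing the expensive cost against every gallery image."""
    return np.array([dist(feature, r) for r in references])

rng = np.random.default_rng(7)
dist = lambda a, b: np.linalg.norm(a - b)            # stand-in for stereo matching cost
refs = [rng.standard_normal(16) for _ in range(5)]   # reference faces
probe = rng.standard_normal(16)
gallery = [probe + 0.01 * rng.standard_normal(16),   # near-duplicate of the probe
           rng.standard_normal(16)]                  # unrelated identity
p = reference_representation(probe, refs, dist)
g = [reference_representation(x, refs, dist) for x in gallery]
match = int(np.argmin([np.linalg.norm(p - v) for v in g]))
print(match)                                         # 0: the near-duplicate wins
```

The expensive cost is evaluated only len(refs) times per image instead of once per gallery entry, which is the trade-off the abstract describes.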

  7. The Effects of Inversion and Familiarity on Face versus Body Cues to Person Recognition

    ERIC Educational Resources Information Center

    Robbins, Rachel A.; Coltheart, Max

    2012-01-01

    Extensive research has focused on face recognition, and much is known about this topic. However, much of this work seems to be based on an assumption that faces are the most important aspect of person recognition. Here we test this assumption in two experiments. We show that when viewers are forced to choose, they "do" use the face more than the…

  8. Influence of Emotional Facial Expressions on 3-5-Year-Olds' Face Recognition

    ERIC Educational Resources Information Center

    Freitag, Claudia; Schwarzer, Gudrun

    2011-01-01

    Three experiments examined 3- and 5-year-olds' recognition of faces in constant and varied emotional expressions. Children were asked to identify repeatedly presented target faces, distinguishing them from distractor faces, during an immediate recognition test and during delayed assessments after 10 min and one week. Emotional facial expression…

  9. 3D-front-face fluorescence spectroscopy and independent components analysis: A new way to monitor bread dough development.

    PubMed

    Garcia, Rebeca; Boussard, Aline; Rakotozafy, Lalatiana; Nicolas, Jacques; Potus, Jacques; Rutledge, Douglas N; Cordella, Christophe B Y

    2016-01-15

    Following bread dough development can be a hard task as no reliable method exists to give the optimal mixing time. Dough development is linked to the evolution of gluten proteins, carbohydrates and lipids which can result in modifications in the spectral properties of the various fluorophores naturally present in the system. In this paper, we propose to use 3-D-front-face-fluorescence (3D-FFF) spectroscopy in the 250-550nm domain to follow the dough development as influenced by formulation (addition or not of glucose, glucose oxidase and ferulic acid in the dough recipe) and mixing time (2, 4, 6 and 8min). In all the 32 dough samples as well as in flour, three regions of maximum fluorescence intensities have been observed at 320nm after excitation at 295nm (Region 1), at 420nm after excitation at 360nm (Region 2) and 450nm after excitation at 390nm (Region 3). The principal components analysis (PCA) of the evolution of these maxima shows that the formulations with and without ferulic acid are clearly separated since the presence of ferulic acid induces a decrease of fluorescence in Region 1 and an increase in Regions 2 and 3. In addition, a kinetic effect of the mixing time can be observed (decrease of fluorescence in the Regions 1 and 2) mainly in the absence of ferulic acid. The analysis of variance (ANOVA) on these maximum values statistically confirms these observations. Independent components analysis (ICA) is also applied to the complete 3-D-FFF spectra in order to extract interpretable signals from spectral data which reflect the complex contribution of several fluorophores as influenced by their environment. In all cases, 3 signals can be clearly separated matching the 3 regions of maximal fluorescence. The signals corresponding to regions 1 and 2 can be ascribed to proteins and ferulic acid respectively, whereas the fluorophores associated with the 3rd signal (corresponding to region 3) remain unidentified. Good correlations are obtained between the IC

  10. Nonlinear Topological Component Analysis: Application to Age-Invariant Face Recognition.

    PubMed

    Bouchaffra, Djamel

    2015-07-01

    We introduce a novel formalism that performs dimensionality reduction and captures topological features (such as the shape of the observed data) to conduct pattern classification. This mission is achieved by: 1) reducing the dimension of the observed variables through a kernelized radial basis function technique and expressing the latent variables probability distribution in terms of the observed variables; 2) disclosing the data manifold as a 3-D polyhedron via the α -shape constructor and extracting topological features; and 3) classifying a data set using a mixture of multinomial distributions. We have applied our methodology to the problem of age-invariant face recognition. Experimental results obtained demonstrate the efficiency of the proposed methodology named nonlinear topological component analysis when compared with some state-of-the-art approaches. PMID:25134092

  11. Formal Implementation of a Performance Evaluation Model for the Face Recognition System

    PubMed Central

    Shin, Yong-Nyuo; Kim, Jason; Lee, Yong-Jun; Shin, Woochang; Choi, Jin-Young

    2008-01-01

Due to its usability, practical applications, and lack of intrusiveness, face recognition technology, based on information derived from individuals' facial features, has been attracting considerable attention recently. Reported recognition rates of commercialized face recognition systems cannot be accepted as official recognition rates, as they are based on assumptions that are beneficial to the specific system and face database. Therefore, performance evaluation methods and tools are necessary to objectively measure the accuracy and performance of any face recognition system. In this paper, we propose and formalize a performance evaluation model for the biometric recognition system, implementing an evaluation tool for face recognition systems based on the proposed model. Furthermore, we performed evaluations objectively by providing guidelines for the design and implementation of a performance evaluation system, formalizing the performance test process. PMID:18317524
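Objective evaluations of the kind formalized above usually report error rates at an operating threshold. As a minimal illustration (with made-up similarity scores, not the paper's evaluation protocol), the two standard biometric error rates can be computed like this:

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """False accept rate (impostor scores at or above the threshold) and
    false reject rate (genuine scores below it) for a similarity threshold."""
    far = float(np.mean(np.asarray(impostor) >= threshold))
    frr = float(np.mean(np.asarray(genuine) < threshold))
    return far, frr

genuine = [0.90, 0.80, 0.85, 0.60]                  # same-person similarity scores
impostor = [0.20, 0.40, 0.55, 0.30]                 # different-person scores
far, frr = far_frr(genuine, impostor, threshold=0.5)
print(far, frr)                                     # 0.25 0.0
```

Sweeping the threshold and plotting FAR against 1-FRR yields the ROC curve on which systems are objectively compared.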

  12. Postencoding cognitive processes in the cross-race effect: Categorization and individuation during face recognition.

    PubMed

    Ho, Michael R; Pezdek, Kathy

    2016-06-01

    The cross-race effect (CRE) describes the finding that same-race faces are recognized more accurately than cross-race faces. According to social-cognitive theories of the CRE, processes of categorization and individuation at encoding account for differential recognition of same- and cross-race faces. Recent face memory research has suggested that similar but distinct categorization and individuation processes also occur postencoding, at recognition. Using a divided-attention paradigm, in Experiments 1A and 1B we tested and confirmed the hypothesis that distinct postencoding categorization and individuation processes occur during the recognition of same- and cross-race faces. Specifically, postencoding configural divided-attention tasks impaired recognition accuracy more for same-race than for cross-race faces; on the other hand, for White (but not Black) participants, postencoding featural divided-attention tasks impaired recognition accuracy more for cross-race than for same-race faces. A social categorization paradigm used in Experiments 2A and 2B tested the hypothesis that the postencoding in-group or out-group social orientation to faces affects categorization and individuation processes during the recognition of same-race and cross-race faces. Postencoding out-group orientation to faces resulted in categorization for White but not for Black participants. This was evidenced by White participants' impaired recognition accuracy for same-race but not for cross-race out-group faces. Postencoding in-group orientation to faces had no effect on recognition accuracy for either same-race or cross-race faces. The results of Experiments 2A and 2B suggest that this social orientation facilitates White but not Black participants' individuation and categorization processes at recognition. Models of recognition memory for same-race and cross-race faces need to account for processing differences that occur at both encoding and recognition. PMID:26391033

  13. Face recognition using multiple maximum scatter difference discrimination dictionary learning

    NASA Astrophysics Data System (ADS)

    Zhu, Yanyong; Dong, Jiwen; Li, Hengjian

    2015-10-01

Based on multiple maximum scatter difference discrimination dictionary learning, a novel face recognition algorithm is proposed. The dictionary used for sparse coding plays a key role in sparse representation classification. In this paper, a multiple maximum scatter difference discrimination criterion is used for dictionary learning. During dictionary learning, this criterion computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The proposed algorithm is theoretically elegant and easy to compute. Extensive experiments on the AR database and the Extended Yale B database, in comparison with basic sparse representation and other classification methods, show that its performance is slightly better than that of the original sparse representation methods, with lower complexity.

  14. Local ICA for the Most Wanted face recognition

    NASA Astrophysics Data System (ADS)

    Guan, Xin; Szu, Harold H.; Markowitz, Zvi

    2000-04-01

Facial disguises of FBI Most Wanted criminals are inevitable and anticipated in our design of automatic/aided target recognition (ATR) imaging systems. For example, a man's facial hair may hide his mouth and chin but not necessarily the nose and eyes. Sunglasses will cover the eyes but not the nose, mouth, and chin. This fact motivates us to build sets of independent component analysis bases separately for each facial region of the entire alleged criminal group. Then, given an alleged criminal face, collective votes are obtained from all facial regions in terms of 'yes, no, abstain' and are tallied for a potential alarm. Moreover, an innocent outsider should fall below the alarm threshold and be allowed to pass the checkpoint. In this way a curve of probability of detection (PD) versus false alarm rate (FAR), i.e., an ROC curve, is obtained.

  15. An investigation of matching symmetry in the human pinnae with possible implications for 3D ear recognition and sound localization.

    PubMed

    Claes, Peter; Reijniers, Jonas; Shriver, Mark D; Snyders, Jonatan; Suetens, Paul; Nielandt, Joachim; De Tré, Guy; Vandermeulen, Dirk

    2015-01-01

The human external ears, or pinnae, have an intriguing shape and, like most parts of the human external body, bilateral symmetry is observed between left and right. It is a well-known part of our auditory sensory system and mediates the spatial localization of incoming sounds in 3D from monaural cues due to its shape-specific filtering as well as binaural cues due to the paired bilateral locations of the left and right ears. Another less broadly appreciated aspect of the human pinna shape is its uniqueness from one individual to another, which is on the level of what is seen in fingerprints and facial features. This makes pinnae very useful in human identification, which is of great interest in biometrics and forensics. Anatomically, the type of symmetry observed is known as matching symmetry, with structures present as separate mirror copies on both sides of the body, and in this work we report the first such investigation of the human pinna in 3D. Within the framework of geometric morphometrics, we started by partitioning ear shape, represented in a spatially dense way, into patterns of symmetry and asymmetry, following a two-factor ANOVA design. Matching symmetry was measured in all substructures of the pinna anatomy. However, substructures that 'stick out' such as the helix, tragus, and lobule also contained a fair degree of asymmetry. In contrast, substructures such as the conchae, antitragus, and antihelix expressed relatively stronger degrees of symmetric variation in relation to their levels of asymmetry. Insights gained from this study were injected into an accompanying identification setup exploiting matching symmetry where improved performance is demonstrated. Finally, possible implications of the results in the context of ear recognition as well as sound localization are discussed. PMID:25382291

  16. A novel method of target recognition based on 3D-color-space locally adaptive regression kernels model

    NASA Astrophysics Data System (ADS)

    Liu, Jiaqi; Han, Jing; Zhang, Yi; Bai, Lianfa

    2015-10-01

A locally adaptive regression kernels model can describe the edge shape of images accurately and their overall structural trends, but it does not consider color information, even though color is an important element of an image. Therefore, we present a novel method of target recognition based on a 3D-color-space locally adaptive regression kernels model. Unlike methods that treat color as supplementary information, this method directly calculates local similarity features from the 3D data of the color image. The proposed method uses a few examples of an object as a query to detect generic objects with incompact, complex and changeable shapes. Our method involves three phases. First, we calculate novel color-space descriptors from the RGB color space of the query image, which measure the likeness of a voxel to its surroundings. Salient features that include spatial-dimensional and color-dimensional information are extracted from these descriptors and simplified by principal components analysis (PCA) to construct a non-similar local structure feature set of the object class. Second, we compare the salient features with analogous features from the target image; this comparison is done using a matrix generalization of the cosine similarity measure. The similar structures in the target image are then obtained using local similarity structure statistical matching. Finally, we apply non-maxima suppression to the similarity image to extract the object position and mark the object in the test image. Experimental results demonstrate that our approach is effective and accurate in improving the ability to identify targets.
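One common matrix generalization of the cosine similarity measure — the Frobenius inner product of two feature matrices normalized by their Frobenius norms — can be written in a few lines. This is an assumption about the form meant; the paper's exact formulation may differ, and the matrices below are synthetic.

```python
import numpy as np

def matrix_cosine(A, B):
    """Cosine similarity between two feature matrices: the Frobenius
    inner product normalized by the Frobenius norms."""
    return float(np.sum(A * B) / (np.linalg.norm(A) * np.linalg.norm(B)))

rng = np.random.default_rng(6)
F = rng.standard_normal((5, 4))                      # a local descriptor matrix
G = rng.standard_normal((5, 4))                      # a candidate from the target image
print(matrix_cosine(F, F) > 0.999)                   # identical features score ~1
print(matrix_cosine(F, 2.0 * F) > 0.999)             # invariant to feature scaling
```

The scale invariance is what makes this measure attractive for matching local structures whose contrast varies between query and target.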

  17. A 3D-1D substitution matrix for protein fold recognition that includes predicted secondary structure of the sequence.

    PubMed

    Rice, D W; Eisenberg, D

    1997-04-11

    In protein fold recognition, a probe amino acid sequence is compared to a library of representative folds of known structure to identify a structural homolog. In cases where the probe and its homolog have clear sequence similarity, traditional residue substitution matrices have been used to predict the structural similarity. In cases where the probe is sequentially distant from its homolog, we have developed a (7 x 3 x 2 x 7 x 3) 3D-1D substitution matrix (called H3P2), calculated from a database of 119 structural pairs. Members of each pair share a similar fold, but have sequence identity less than 30%. Each probe sequence position is defined by one of seven residue classes and three secondary structure classes. Each homologous fold position is defined by one of seven residue classes, three secondary structure classes, and two burial classes. Thus the matrix is five-dimensional and contains 7 x 3 x 2 x 7 x 3 = 882 elements or 3D-1D scores. The first step in assigning a probe sequence to its homologous fold is the prediction of the three-state (helix, strand, coil) secondary structure of the probe; here we use the profile based neural network prediction of secondary structure (PHD) program. Then a dynamic programming algorithm uses the H3P2 matrix to align the probe sequence with structures in a representative fold library. To test the effectiveness of the H3P2 matrix a challenging, fold class diverse, and cross-validated benchmark assessment is used to compare the H3P2 matrix to the GONNET, PAM250, BLOSUM62 and a secondary structure only substitution matrix. For distantly related sequences the H3P2 matrix detects more homologous structures at higher reliabilities than do these other substitution matrices, based on sensitivity versus specificity plots (or SENS-SPEC plots). The added efficacy of the H3P2 matrix arises from its information on the statistical preferences for various sequence-structure environment combinations from very distantly related proteins. 
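The shape of the H3P2 matrix and how a 3D-1D score would be looked up during alignment can be sketched as follows. The class encodings and random scores are placeholders — the paper's actual residue/secondary-structure/burial class definitions and learned log-odds values differ.

```python
import numpy as np

# Placeholder encodings: probe positions carry (residue class, predicted SS);
# template positions carry (residue class, observed SS, burial state).
N_RES, N_SS, N_BURIAL = 7, 3, 2
rng = np.random.default_rng(4)
H3P2 = rng.standard_normal((N_RES, N_SS, N_RES, N_SS, N_BURIAL))
print(H3P2.size)                                     # 882 = 7 * 3 * 7 * 3 * 2

def score(probe_res, probe_ss, tmpl_res, tmpl_ss, tmpl_buried):
    """3D-1D score for aligning one probe position with one template
    position; a dynamic programming aligner sums these along a path."""
    return H3P2[probe_res, probe_ss, tmpl_res, tmpl_ss, tmpl_buried]
```

With the matrix in this form, the fold-recognition step reduces to standard sequence alignment with `score` in place of a residue substitution matrix.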

  18. Face learning and the emergence of view-independent face recognition: an event-related brain potential study.

    PubMed

    Zimmermann, Friederike G S; Eimer, Martin

    2013-06-01

    Recognizing unfamiliar faces is more difficult than familiar face recognition, and this has been attributed to qualitative differences in the processing of familiar and unfamiliar faces. Familiar faces are assumed to be represented by view-independent codes, whereas unfamiliar face recognition depends mainly on view-dependent low-level pictorial representations. We employed an electrophysiological marker of visual face recognition processes in order to track the emergence of view-independence during the learning of previously unfamiliar faces. Two face images showing either the same or two different individuals in the same or two different views were presented in rapid succession, and participants had to perform an identity-matching task. On trials where both faces showed the same view, repeating the face of the same individual triggered an N250r component at occipito-temporal electrodes, reflecting the rapid activation of visual face memory. A reliable N250r component was also observed on view-change trials. Crucially, this view-independence emerged as a result of face learning. In the first half of the experiment, N250r components were present only on view-repetition trials but were absent on view-change trials, demonstrating that matching unfamiliar faces was initially based on strictly view-dependent codes. In the second half, the N250r was triggered not only on view-repetition trials but also on view-change trials, indicating that face recognition had now become more view-independent. This transition may be due to the acquisition of abstract structural codes of individual faces during face learning, but could also reflect the formation of associative links between sets of view-specific pictorial representations of individual faces. PMID:23583970

  19. A Reciprocal Model of Face Recognition and Autistic Traits: Evidence from an Individual Differences Perspective

    PubMed Central

    Halliday, Drew W. R.; MacDonald, Stuart W. S.; Sherf, Suzanne K.; Tanaka, James W.

    2014-01-01

    Although not a core symptom of the disorder, individuals with autism often exhibit selective impairments in their face processing abilities. Importantly, the reciprocal connection between autistic traits and face perception has rarely been examined within the typically developing population. In this study, university participants from the social sciences, physical sciences, and humanities completed a battery of measures that assessed face, object and emotion recognition abilities, general perceptual-cognitive style, and sub-clinical autistic traits (the Autism Quotient (AQ)). We employed separate hierarchical multiple regression analyses to evaluate which factors could predict face recognition scores and AQ scores. Gender, object recognition performance, and AQ scores predicted face recognition behaviour. Specifically, males, individuals with more autistic traits, and those with lower object recognition scores performed more poorly on the face recognition test. Conversely, university major, gender and face recognition performance reliably predicted AQ scores. Science majors, males, and individuals with poor face recognition skills showed more autistic-like traits. These results suggest that the broader autism phenotype is associated with lower face recognition abilities, even among typically developing individuals. PMID:24853862

  1. Face recognition: database acquisition, hybrid algorithms, and human studies

    NASA Astrophysics Data System (ADS)

    Gutta, Srinivas; Huang, Jeffrey R.; Singh, Dig; Wechsler, Harry

    1997-02-01

One of the most important technologies absent in traditional and emerging frontiers of computing is the management of visual information. Faces are accessible `windows' into the mechanisms that govern our emotional and social lives. The corresponding face recognition tasks considered herein include: (1) surveillance, (2) CBIR, and (3) CBIR subject to correct ID (`match') displaying specific facial landmarks, such as wearing glasses. We developed robust matching (`classification') and retrieval schemes based on hybrid classifiers and showed their feasibility using the FERET database. The hybrid classifier architecture consists of an ensemble of connectionist networks--radial basis functions--and decision trees. The specific characteristics of our hybrid architecture include (a) query by consensus as provided by ensembles of networks for coping with the inherent variability of the image formation and data acquisition process, and (b) flexible and adaptive thresholds as opposed to ad hoc and hard thresholds. Experimental results, proving the feasibility of our approach, yield (i) 96% accuracy, using cross validation (CV), for surveillance on a database consisting of 904 images, (ii) 97% accuracy for CBIR tasks on a database of 1084 images, and (iii) 93% accuracy, using CV, for CBIR subject to correct ID match tasks on a database of 200 images.

  2. Pose-robust recognition of low-resolution face images.

    PubMed

    Biswas, Soma; Aggarwal, Gaurav; Flynn, Patrick J; Bowyer, Kevin W

    2013-12-01

    Face images captured by surveillance cameras usually have poor resolution in addition to uncontrolled poses and illumination conditions, all of which adversely affect the performance of face matching algorithms. In this paper, we develop a completely automatic, novel approach for matching surveillance quality facial images to high-resolution images in frontal pose, which are often available during enrollment. The proposed approach uses multidimensional scaling to simultaneously transform the features from the poor quality probe images and the high-quality gallery images in such a manner that the distances between them approximate the distances had the probe images been captured in the same conditions as the gallery images. Tensor analysis is used for facial landmark localization in the low-resolution uncontrolled probe images for computing the features. Thorough evaluation on the Multi-PIE dataset and comparisons with state-of-the-art super-resolution and classifier-based approaches are performed to illustrate the usefulness of the proposed approach. Experiments on surveillance imagery further signify the applicability of the framework. We also show the usefulness of the proposed approach for the application of tracking and recognition in surveillance videos. PMID:24136439

  3. Structural attributes of the temporal lobe predict face recognition ability in youth.

    PubMed

    Li, Jun; Dong, Minghao; Ren, Aifeng; Ren, Junchan; Zhang, Jinsong; Huang, Liyu

    2016-04-01

The face recognition ability varies across individuals. However, it remains elusive how brain anatomical structure is related to face recognition ability in healthy subjects. In this study, we adopted voxel-based morphometry analysis and a machine learning approach to investigate the neural basis of individual face recognition ability using anatomical magnetic resonance imaging. We demonstrated that the gray matter volume (GMV) of the right ventral anterior temporal lobe (vATL), an area sensitive to face identity, is significantly positively correlated with the subject's face recognition ability as measured by the Cambridge face memory test (CFMT) score. Furthermore, a predictive model established by balanced cross-validation combined with linear regression revealed that the right vATL GMV can predict subjects' face recognition ability. However, the subjects' CFMT scores could not be predicted by the GMV of the core regions of the face processing network, including the right occipital face area (OFA) and the right fusiform face area (FFA). Our results suggest that the right vATL may play an important role in face recognition and might provide insight into the neural mechanisms underlying face recognition deficits in patients with pathophysiological conditions such as prosopagnosia. PMID:26802942

  4. 3D face reconstruction from 2D pictures: first results of a web-based computer aided system for aesthetic procedures.

    PubMed

    Oliveira-Santos, Thiago; Baumberger, Christian; Constantinescu, Mihai; Olariu, Radu; Nolte, Lutz-Peter; Alaraibi, Salman; Reyes, Mauricio

    2013-05-01

    The human face is a vital component of our identity and many people undergo medical aesthetics procedures in order to achieve an ideal or desired look. However, communication between physician and patient is fundamental to understand the patient's wishes and to achieve the desired results. To date, most plastic surgeons rely on either "free hand" 2D drawings on picture printouts or computerized picture morphing. Alternatively, hardware dependent solutions allow facial shapes to be created and planned in 3D, but they are usually expensive or complex to handle. To offer a simple and hardware independent solution, we propose a web-based application that uses 3 standard 2D pictures to create a 3D representation of the patient's face on which facial aesthetic procedures such as filling, skin clearing or rejuvenation, and rhinoplasty are planned in 3D. The proposed application couples a set of well-established methods together in a novel manner to optimize 3D reconstructions for clinical use. Face reconstructions performed with the application were evaluated by two plastic surgeons and also compared to ground truth data. Results showed the application can provide accurate 3D face representations to be used in clinics (within an average of 2 mm error) in less than 5 min. PMID:23319167

  5. A robust face recognition algorithm under varying illumination using adaptive retina modeling

    NASA Astrophysics Data System (ADS)

    Cheong, Yuen Kiat; Yap, Vooi Voon; Nisar, Humaira

    2013-10-01

Variation in illumination has a drastic effect on the appearance of a face image and may hinder the automatic face recognition process. This paper presents a novel approach for face recognition under varying lighting conditions. The proposed algorithm uses illumination normalization based on adaptive retina modeling. In the proposed approach, retina modeling is employed along with histogram remapping following a normal distribution. Retina modeling is an approach that combines two adaptive nonlinear equations and a difference-of-Gaussians filter. Two databases, the Extended Yale B database and the CMU PIE database, are used to verify the proposed algorithm. For face recognition, the Gabor Kernel Fisher Analysis method is used. Experimental results show that the recognition rate for face images under different illumination conditions is improved by the proposed approach. The average recognition rate for the Extended Yale B database is 99.16%, whereas the recognition rate for the CMU PIE database is 99.64%.
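A minimal sketch of the retina-modeling idea: a compressive nonlinearity followed by a difference-of-Gaussians bandpass filter. The log compression below is a generic stand-in for the paper's two adaptive nonlinear equations, and the image size, kernel scales, and illumination gradient are invented for illustration.

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = int(3 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def blur(img, sigma):
    """Separable Gaussian blur via 1-D convolutions along rows, then columns."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def retina_normalize(img, s1=1.0, s2=3.0):
    img = np.log1p(img.astype(float))          # compressive nonlinearity (stand-in
                                               # for the two adaptive equations)
    dog = blur(img, s1) - blur(img, s2)        # difference-of-Gaussians bandpass
    return (dog - dog.mean()) / (dog.std() + 1e-8)

rng = np.random.default_rng(1)
face = rng.uniform(0, 255, (32, 32))
shadow = np.linspace(0.2, 1.0, 32)[None, :]    # simulated illumination gradient
out_lit = retina_normalize(face)
out_shadowed = retina_normalize(face * shadow)
```

After log compression, a multiplicative illumination gradient becomes an additive low-frequency component, which the bandpass filter largely removes.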

  6. Orientation and Affective Expression Effects on Face Recognition in Williams Syndrome and Autism

    ERIC Educational Resources Information Center

    Rose, Fredric E.; Lincoln, Alan J.; Lai, Zona; Ene, Michaela; Searcy, Yvonne M.; Bellugi, Ursula

    2007-01-01

    We sought to clarify the nature of the face processing strength commonly observed in individuals with Williams syndrome (WS) by comparing the face recognition ability of persons with WS to that of persons with autism and to healthy controls under three conditions: Upright faces with neutral expressions, upright faces with varying affective…

  7. The Cambridge Face Memory Test for Children (CFMT-C): a new tool for measuring face recognition skills in childhood.

    PubMed

    Croydon, Abigail; Pimperton, Hannah; Ewing, Louise; Duchaine, Brad C; Pellicano, Elizabeth

    2014-09-01

    Face recognition ability follows a lengthy developmental course, not reaching maturity until well into adulthood. Valid and reliable assessments of face recognition memory ability are necessary to examine patterns of ability and disability in face processing, yet there is a dearth of such assessments for children. We modified a well-known test of face memory in adults, the Cambridge Face Memory Test (Duchaine & Nakayama, 2006, Neuropsychologia, 44, 576-585), to make it developmentally appropriate for children. To establish its utility, we administered either the upright or inverted versions of the computerised Cambridge Face Memory Test - Children (CFMT-C) to 401 children aged between 5 and 12 years. Our results show that the CFMT-C is sufficiently sensitive to demonstrate age-related gains in the recognition of unfamiliar upright and inverted faces, does not suffer from ceiling or floor effects, generates robust inversion effects, and is capable of detecting difficulties in face memory in children diagnosed with autism. Together, these findings indicate that the CFMT-C constitutes a new valid assessment tool for children's face recognition skills. PMID:25054837

  8. Face shape and face identity processing in behavioral variant fronto-temporal dementia: A specific deficit for familiarity and name recognition of famous faces.

    PubMed

    De Winter, François-Laurent; Timmers, Dorien; de Gelder, Beatrice; Van Orshoven, Marc; Vieren, Marleen; Bouckaert, Miriam; Cypers, Gert; Caekebeke, Jo; Van de Vliet, Laura; Goffin, Karolien; Van Laere, Koen; Sunaert, Stefan; Vandenberghe, Rik; Vandenbulcke, Mathieu; Van den Stock, Jan

    2016-01-01

Deficits in face processing have been described in the behavioral variant of fronto-temporal dementia (bvFTD), primarily regarding the recognition of facial expressions. Less is known about face shape and face identity processing. Here we used a hierarchical strategy targeting face shape and face identity recognition in bvFTD and matched healthy controls. Participants performed 3 psychophysical experiments targeting face shape detection (Experiment 1), unfamiliar face identity matching (Experiment 2), familiarity categorization and famous face-name matching (Experiment 3). The results revealed group differences only in Experiment 3, with a deficit in the bvFTD group for both familiarity categorization and famous face-name matching. Voxel-based morphometry regression analyses in the bvFTD group revealed an association between grey matter volume of the left ventral anterior temporal lobe and familiarity recognition, while face-name matching correlated with grey matter volume of the bilateral ventral anterior temporal lobes. Subsequently, we quantified familiarity-specific and name-specific recognition deficits as the number of celebrities for which only the name or only the familiarity, respectively, was accurately recognized. Both indices were associated with grey matter volume of the bilateral anterior temporal cortices. These findings extend previous results by documenting the involvement of the left anterior temporal lobe (ATL) in familiarity detection and the right ATL in name recognition deficits in fronto-temporal lobar degeneration. PMID:27298765

  9. Internal versus external features in triggering the brain waveforms for conjunction and feature faces in recognition.

    PubMed

    Nie, Aiqing; Jiang, Jingguo; Fu, Qiao

    2014-08-20

Previous research has found that conjunction faces (whose internal features, e.g. eyes, nose, and mouth, and external features, e.g. hairstyle and ears, are from separate studied faces) and feature faces (of which partial features were studied) can produce higher false alarms than both old and new faces (i.e. those that are exactly the same as the studied faces and those that have not been previously presented) in recognition. The event-related potentials (ERPs) that relate to conjunction and feature faces at recognition, however, have not yet been described; in addition, the contributions of different facial features toward ERPs have not been differentiated. To address these issues, the present study compared the ERPs elicited by old faces, conjunction faces (the internal and the external features were from two studied faces), old internal feature faces (whose internal features were studied), and old external feature faces (whose external features were studied) with those of new faces separately. The results showed that old faces elicited not only an early familiarity-related FN400, but also a more anteriorly distributed late old/new effect that reflected recollection. Conjunction faces evoked late brain waveforms similar to those of old internal feature faces, but not to those of old external feature faces. These results suggest that, at recognition, old faces hold higher familiarity than compound faces in the profiles of ERPs, and that internal facial features are more crucial than external ones in triggering the brain waveforms characterized as reflecting the result of familiarity. PMID:25003951

  10. The "parts and wholes" of face recognition: A review of the literature.

    PubMed

    Tanaka, James W; Simonyi, Diana

    2016-10-01

It has been claimed that faces are recognized as a "whole" rather than by the recognition of individual parts. In a paper published in the Quarterly Journal of Experimental Psychology in 1993, Martha Farah and I attempted to operationalize the holistic claim using the part/whole task. In this task, participants studied a face and then their memory for a face part was tested both in isolation and in the whole face. Consistent with the holistic view, recognition of the part was superior when tested in the whole-face condition compared to when it was tested in isolation. The "whole face" or holistic advantage was not found for faces that were inverted, or scrambled, nor for non-face objects, suggesting that holistic encoding was specific to normal, intact faces. In this paper, we reflect on the part/whole paradigm and how it has contributed to our understanding of what it means to recognize a face as a "whole" stimulus. We describe the value of the part/whole task for developing theories of holistic and non-holistic recognition of faces and objects. We discuss the research that has probed the neural substrates of holistic processing in healthy adults and people with prosopagnosia and autism. Finally, we examine how experience shapes holistic face recognition in children and recognition of own- and other-race faces in adults. The goal of this article is to summarize the research on the part/whole task and speculate on how it has informed our understanding of holistic face processing. PMID:26886495

  11. Development of Face Recognition in 5- to 15-Year-Olds

    ERIC Educational Resources Information Center

    Kinnunen, Suna; Korkman, Marit; Laasonen, Marja; Lahti-Nuuttila, Pekka

    2013-01-01

    This study focuses on the development of face recognition in typically developing preschool- and school-aged children (aged 5 to 15 years old, "n" = 611, 336 girls). Social predictors include sex differences and own-sex bias. At younger ages, the development of face recognition was rapid and became more gradual as the age increased up…

  12. Tracking and recognition face in videos with incremental local sparse representation model

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang

    2013-10-01

This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First, a robust face tracking algorithm is proposed that employs local sparse appearance and a covariance pooling method. In the subsequent face recognition stage, with the employment of a novel template update strategy that incorporates incremental subspace learning, our recognition algorithm adapts the template to appearance changes and reduces the influence of occlusion and illumination variation. This leads to robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition in real-world noisy videos on the YouTube database, which includes 47 celebrities. Our proposed method produces a high face recognition rate of 95% across all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. On the challenging dataset in which faces undergo occlusion and illumination variation, and in tracking and recognition experiments under significant pose variation on the University of California, San Diego (Honda/UCSD) database, our proposed method also consistently demonstrates a high recognition rate.

  13. Face recognition and emotional valence: processing without awareness by neurologically intact participants does not simulate covert recognition in prosopagnosia.

    PubMed

    Stone, A; Valentine, T; Davis, R

    2001-06-01

    Covert face recognition in neurologically intact participants was investigated with the use of very brief stimulus presentation to prevent awareness of the stimulus. In Experiment 1, skin conductance response (SCR) to photographs of celebrity and unfamiliar faces was recorded; the faces were displayed for 220 msec and for 17 msec in a within-participants design. SCR to faces presented for 220 msec was larger and more likely to occur with familiar faces than with unfamiliar faces. Face familiarity did not affect the SCR to faces presented for 17 msec. SCR was larger for faces of good than for faces of evil celebrities presented for 17 msec, but valence did not affect SCR to faces displayed for 220 msec. In Experiment 2, associative priming was found in a face familiarity decision task when the prime face was displayed for 220 msec, but no facilitation occurred when primes were presented for 17 msec. In Experiment 3, participants were able to differentiate evil and good faces presented without awareness in a two-alternative forced-choice decision. The results provide no evidence of familiarity detection outside awareness in normal participants and suggest that, contrary to previous research, very brief presentation to neurologically intact participants is not a useful model for the types of covert recognition found in prosopagnosia. However, a response based on affective valence appears to be available from brief presentation. PMID:12467113

  14. Towards Perceptual Interface for Visualization Navigation of Large Data Sets Using Gesture Recognition with Bezier Curves and Registered 3-D Data

    SciTech Connect

    Shin, M C; Tsap, L V; Goldgof, D B

    2003-03-20

    This paper presents a gesture recognition system for visualization navigation. Scientists are interested in developing interactive settings for exploring large data sets in an intuitive environment. The input consists of registered 3-D data. A geometric method using Bezier curves is used for the trajectory analysis and classification of gestures. The hand gesture speed is incorporated into the algorithm to enable correct recognition from trajectories with variations in hand speed. The method is robust and reliable: correct hand identification rate is 99.9% (from 1641 frames), modes of hand movements are correct 95.6% of the time, recognition rate (given the right mode) is 97.9%. An application to gesture-controlled visualization of 3D bioinformatics data is also presented.
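Fitting a Bezier curve to a gesture trajectory can be done by least squares on the Bernstein basis, roughly along the lines below. This is a generic cubic-Bezier fit with a uniform parameterization, not the authors' full trajectory-classification pipeline; the example trajectory is synthetic.

```python
import numpy as np

def bernstein_matrix(t):
    """Cubic Bernstein basis evaluated at parameter values t, shape (n, 4)."""
    t = np.asarray(t, dtype=float)[:, None]
    return np.hstack([(1 - t)**3,
                      3 * t * (1 - t)**2,
                      3 * t**2 * (1 - t),
                      t**3])

def fit_cubic_bezier(points):
    """Least-squares cubic Bezier control points for a 2-D trajectory,
    assuming a uniform parameterization of the samples."""
    t = np.linspace(0.0, 1.0, len(points))
    B = bernstein_matrix(t)
    ctrl, *_ = np.linalg.lstsq(B, points, rcond=None)
    return ctrl                                # (4, 2) control points

# a noiseless cubic trajectory should be recovered almost exactly
true_ctrl = np.array([[0, 0], [1, 2], [3, 2], [4, 0]], dtype=float)
traj = bernstein_matrix(np.linspace(0, 1, 50)) @ true_ctrl
ctrl = fit_cubic_bezier(traj)
```

The fitted control points form a compact, resolution-independent descriptor of the hand trajectory that can then be fed to a classifier.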

  15. Image Description with Local Patterns: An Application to Face Recognition

    NASA Astrophysics Data System (ADS)

    Zhou, Wei; Ahrary, Alireza; Kamata, Sei-Ichiro

In this paper, we propose a novel approach for representing the local features of a digital image using 1D Local Patterns by Multi-Scans (1DLPMS). We also consider extensions and simplifications of the proposed approach for facial image analysis. The proposed approach consists of three steps. In the first step, the gray values of pixels in the image are represented as a vector giving the local neighborhood intensity distributions of the pixels. Then, multi-scans are applied to capture different spatial information on the image, with the advantage of less computation than other traditional methods, such as Local Binary Patterns (LBP). The second step encodes the local features based on different encoding rules using 1D local patterns. This transformation is expected to be less sensitive to illumination variations while preserving the appearance of images embedded in the original gray scale. In the final step, Grouped 1D Local Patterns by Multi-Scans (G1DLPMS) is applied to make the proposed approach computationally simpler and easier to extend. Next, we further formulate a boosted algorithm to extract the most discriminant local features. The evaluation results demonstrate that the proposed approach outperforms conventional approaches in terms of accuracy in face recognition, gender estimation and facial expression applications.
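For reference, the classic LBP descriptor that the abstract compares against can be computed as below. This is a generic 8-neighbour LBP with a 256-bin histogram, not the authors' 1DLPMS encoding; note that the codes are invariant to monotonic gray-level shifts, which is the source of the illumination robustness mentioned above.

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour Local Binary Pattern codes for interior pixels."""
    c = img[1:-1, 1:-1]
    # neighbours in clockwise order, starting at the top-left pixel
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << np.uint8(bit)
    return code

def lbp_histogram(img):
    """256-bin normalized LBP histogram, a common holistic face descriptor."""
    hist = np.bincount(lbp_image(img).ravel(), minlength=256).astype(float)
    return hist / hist.sum()

img = np.random.default_rng(2).integers(0, 256, (16, 16)).astype(np.int32)
hist = lbp_histogram(img)
```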

  16. Combining quantitative 2D and 3D image analysis in the serial block face SEM: application to secretory organelles of pancreatic islet cells.

    PubMed

    Shomorony, A; Pfeifer, C R; Aronova, M A; Zhang, G; Cai, T; Xu, H; Notkins, A L; Leapman, R D

    2015-08-01

    A combination of two-dimensional (2D) and three-dimensional (3D) analyses of tissue volume ultrastructure acquired by serial block face scanning electron microscopy can greatly shorten the time required to obtain quantitative information from big data sets that contain many billions of voxels. Thus, to analyse the number of organelles of a specific type, or the total volume enclosed by a population of organelles within a cell, it is possible to estimate the number density or volume fraction of that organelle using a stereological approach to analyse randomly selected 2D block face views through the cells, and to combine such estimates with precise measurement of 3D cell volumes by delineating the plasma membrane in successive block face images. The validity of such an approach can be easily tested since the entire 3D tissue volume is available in the serial block face scanning electron microscopy data set. We have applied this hybrid 3D/2D technique to determine the number of secretory granules in the endocrine α and β cells of mouse pancreatic islets of Langerhans, and have been able to estimate the total insulin content of a β cell. PMID:26139222
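The hybrid estimator rests on the stereological (Delesse) principle that the expected area fraction on a random section equals the volume fraction. A toy sketch on a synthetic granule volume follows; the volume size, granule count, and radii are invented, and the full 3-D stack serves as ground truth exactly as the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic 3-D "cell": a 64^3 boolean volume with random spherical granules
vol = np.zeros((64, 64, 64), dtype=bool)
zz, yy, xx = np.mgrid[:64, :64, :64]
for _ in range(30):
    cz, cy, cx = rng.integers(8, 56, 3)
    r = rng.integers(3, 6)
    vol |= ((zz - cz)**2 + (yy - cy)**2 + (xx - cx)**2) <= r**2

true_vv = vol.mean()                           # ground-truth volume fraction

# Delesse estimator: average the area fraction over a few random block-face
# slices instead of segmenting every granule in the full 3-D stack
slice_ids = rng.choice(64, size=8, replace=False)
est_vv = np.mean([vol[z].mean() for z in slice_ids])
```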

  18. Facial expression influences recognition memory for faces: robust enhancement effect of fearful expression.

    PubMed

    Wang, Bo

    2013-04-01

    Memory for faces is important for social interactions. However, it is unclear whether negative or positive expression affects recollection and familiarity for faces and whether the effect can be modulated by retention interval. Two experiments examined the effect of emotional expression on recognition for faces at two delay conditions. In Experiment 1 participants viewed neutral, positive, and negative (including fearful, sad, angry etc.) faces and made gender discrimination for each face. In Experiment 2 they viewed and made gender discrimination for neutral, positive, and fearful faces. Following the incidental learning they were randomly assigned into the immediate and 24-hour (24-h) delay conditions. Findings from the two experiments are as follows: (1) In the immediate and 24-h delay conditions overall recognition and recollection for negative faces (fearful faces in Experiment 2) were better than for neutral faces and positive faces. (2) In the immediate and 24-h delay conditions recollection and familiarity for positive faces was equivalent to recollection for neutral faces. (3) The enhancement effect of fearful expression on recognition and recollection was not due to greater discriminability between the old and new faces in the fearful category. The results indicate that recognition and recollection for faces and the enhancement effect of fearful expression is robust within 24 hours. PMID:23016604

  19. Kruskal-Wallis-Based Computationally Efficient Feature Selection for Face Recognition

    PubMed Central

    Hussain, Ayyaz; Basit, Abdul

    2014-01-01

Face recognition has attained much importance in today's technological world, and face recognition applications are widespread. Most existing work uses frontal face images for classification; however, these techniques fail when applied to real-world face images. The proposed technique effectively extracts the prominent facial features. Many of the extracted features are redundant and do not contribute to representing the face; in order to eliminate these redundant features, a computationally efficient algorithm is used to select the more discriminative face features. The selected features are then passed to the classification step, in which different classifiers are combined in an ensemble to enhance the recognition accuracy, since a single classifier is unable to achieve high accuracy. Experiments are performed on standard face database images and results are compared with existing techniques. PMID:24967437
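A Kruskal-Wallis-based feature ranking can be sketched as follows. The H statistic below omits tie correction, the data are synthetic, and ranking features by H then keeping the top k is a generic filter-selection scheme rather than the authors' exact procedure.

```python
import numpy as np

def kruskal_h(groups):
    """Kruskal-Wallis H statistic (no tie correction) for a list of samples."""
    pooled = np.concatenate(groups)
    ranks = pooled.argsort().argsort() + 1.0   # ranks 1..N
    n = len(pooled)
    h, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + len(g)]
        h += r.sum()**2 / len(g)
        start += len(g)
    return 12.0 / (n * (n + 1)) * h - 3.0 * (n + 1)

def select_features(X, y, k):
    """Keep the k features whose per-class distributions differ most (largest H)."""
    classes = np.unique(y)
    scores = np.array([kruskal_h([X[y == c, j] for c in classes])
                       for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(4)
y = np.repeat([0, 1, 2], 30)
X = rng.normal(size=(90, 10))
X[:, 3] += y * 2.0                             # make feature 3 discriminative
X[:, 7] += y * 1.5                             # and feature 7
top = select_features(X, y, 2)
```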

  20. Understanding gender bias in face recognition: effects of divided attention at encoding.

    PubMed

    Palmer, Matthew A; Brewer, Neil; Horry, Ruth

    2013-03-01

    Prior research has demonstrated a female own-gender bias in face recognition, with females better at recognizing female faces than male faces. We explored the basis for this effect by examining the effect of divided attention during encoding on females' and males' recognition of female and male faces. For female participants, divided attention impaired recognition performance for female faces to a greater extent than male faces in a face recognition paradigm (Study 1; N=113) and an eyewitness identification paradigm (Study 2; N=502). Analysis of remember-know judgments (Study 2) indicated that divided attention at encoding selectively reduced female participants' recollection of female faces at test. For male participants, divided attention selectively reduced recognition performance (and recollection) for male stimuli in Study 2, but had similar effects on recognition of male and female faces in Study 1. Overall, the results suggest that attention at encoding contributes to the female own-gender bias by facilitating the later recollection of female faces. PMID:23422290

  1. Laterality effects in normal subjects' recognition of familiar faces, voices and names. Perceptual and representational components.

    PubMed

    Gainotti, Guido

    2013-06-01

A growing body of evidence suggests that a different hemispheric specialization may exist for different modalities of person identification, with a prevalent right lateralization of the sensory-motor systems allowing face and voice recognition and a prevalent left lateralization of the name recognition system. Data supporting this claim stem, however, much more from disorders of familiar-people recognition observed in patients with focal brain lesions than from experimental studies conducted in normal subjects. These latter data are sparse and in part controversial, but are important from the theoretical point of view, because it is not clear whether hemispheric asymmetries in the recognition of faces, voices and names are limited to their perceptual processing or also extend to the domain of their cortical representations. The present review has tried to clarify these issues, taking into account investigations that have evaluated, in normal subjects, laterality effects in the recognition of familiar names, faces and voices by means of behavioural, neurophysiological and neuroimaging techniques. The results of this survey indicate that: (a) recognition of familiar faces and voices shows a prevalent right lateralization, whereas recognition of familiar names is lateralized to the left hemisphere; (b) the right hemisphere prevalence is greater in tasks involving familiar than unfamiliar faces and voices, and the left hemisphere superiority is greater in the recognition of familiar than unfamiliar names. Taken together, these data suggest that hemispheric asymmetries in the recognition of faces, voices and names are not limited to their perceptual processing, but also extend to the domain of their cortical representations. PMID:23542500

  2. Novel image fusion scheme based on maximum ratio combining for robust multispectral face recognition

    NASA Astrophysics Data System (ADS)

    Omri, Faten; Foufou, Sebti

    2015-04-01

Recently, research in multispectral face recognition has focused on developing efficient frameworks for improving face recognition performance at close-up distances. However, few studies have investigated multispectral face images captured at long distance. In fact, great challenges still exist in recognizing human faces in images captured at long distance, as the image quality might be affected and some important features masked. Therefore, multispectral face recognition tools and algorithms should evolve from close-up distances to long distances. To address these issues, we present in this paper a novel image fusion scheme based on the Maximum Ratio Combining algorithm to improve multispectral face recognition at long distance. The proposed method is compared with a similar super-resolution method based on the Maximum Likelihood algorithm. Simulation results show the efficiency of the proposed approach in terms of average variance of detection error.
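Maximum ratio combining weights each channel in inverse proportion to its noise variance. A sketch on synthetic band images (noise levels invented) shows the fused image beating the best single band in mean squared error; the real system estimates the channel quality from the data rather than assuming it known.

```python
import numpy as np

def mrc_fuse(bands, noise_vars):
    """Maximum-ratio combining: weight each band inversely to its noise variance."""
    w = 1.0 / np.asarray(noise_vars, dtype=float)
    w = w / w.sum()                            # normalized MRC weights
    return sum(wi * b for wi, b in zip(w, bands))

rng = np.random.default_rng(5)
clean = rng.uniform(0, 1, (24, 24))            # stand-in "face" image
noise_vars = [0.01, 0.04, 0.25]                # three bands of varying quality
bands = [clean + rng.normal(0, v**0.5, clean.shape) for v in noise_vars]

fused = mrc_fuse(bands, noise_vars)
mse_best_band = ((bands[0] - clean)**2).mean() # least noisy single band
mse_fused = ((fused - clean)**2).mean()
```

With weights proportional to 1/variance, the fused noise variance is below that of even the best single band, which is the classical MRC guarantee.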

  3. 3D Object Recognition using Gabor Feature Extraction and PCA-FLD Projections of Holographically Sensed Data

    NASA Astrophysics Data System (ADS)

    Yeom, Seokwon; Javidi, Bahram

In this research, a 3D object classification technique using a single hologram has been presented. The PCA-FLD classifier with feature vectors based on Gabor wavelets has been utilized for this purpose. Training and test data of the 3D objects were obtained by computational holographic imaging. We were able to classify the 3D objects used in the experiments with a few reconstructed planes of the hologram. The Gabor approach appears to be a good feature extractor for hologram-based 3D classification. The FLD combined with the PCA proved to be a very efficient classifier even with only a few training samples. Substantial dimensionality reduction was achieved by using the proposed technique for the 3D classification problem using holographic imaging. As a consequence, we were able to classify different classes of 3D objects using computer-reconstructed holographic images.
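The PCA-FLD pipeline (dimensionality reduction followed by a Fisher linear discriminant) can be sketched on synthetic feature vectors standing in for the Gabor features. The dimensions, class separation, and two-class restriction below are all invented for illustration.

```python
import numpy as np

def pca(X, k):
    """Mean and top-k principal directions via SVD."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k].T

def fld_direction(X, y):
    """Fisher discriminant direction for two classes (regularized)."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)   # within-class scatter
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(6)
X = rng.normal(size=(80, 200))                 # synthetic "Gabor" feature vectors
y = np.repeat([0, 1], 40)
X[y == 1, :5] += 3.0                           # class separation in a few dims

mu, P = pca(X, 20)                             # PCA: 200 -> 20 dimensions
Z = (X - mu) @ P
w = fld_direction(Z, y)                        # FLD in the PCA subspace
scores = Z @ w
threshold = (scores[y == 0].mean() + scores[y == 1].mean()) / 2
acc = ((scores > threshold).astype(int) == y).mean()
```

Running PCA first keeps the within-class scatter matrix well conditioned, which is why the PCA+FLD combination works even with few training samples.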

  4. Eye tracking reveals a crucial role for facial motion in recognition of faces by infants.

    PubMed

    Xiao, Naiqi G; Quinn, Paul C; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-06-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces, and then their face recognition was tested with static face images. Eye-tracking methodology was used to record eye movements during the familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better their face recognition was, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development. PMID:26010387

  5. Face Recognition Is Affected by Similarity in Spatial Frequency Range to a Greater Degree Than Within-Category Object Recognition

    ERIC Educational Resources Information Center

    Collin, Charles A.; Liu, Chang Hong; Troje, Nikolaus F.; McMullen, Patricia A.; Chaudhuri, Avi

    2004-01-01

    Previous studies have suggested that face identification is more sensitive to variations in spatial frequency content than object recognition, but none have compared how sensitive the 2 processes are to variations in spatial frequency overlap (SFO). The authors tested face and object matching accuracy under varying SFO conditions. Their results…

  6. Experience moderates overlap between object and face recognition, suggesting a common ability

    PubMed Central

    Gauthier, Isabel; McGugin, Rankin W.; Richler, Jennifer J.; Herzmann, Grit; Speegle, Magen; Van Gulick, Ana E.

    2014-01-01

    Some research finds that face recognition is largely independent from the recognition of other objects; a specialized and innate ability to recognize faces could therefore have little or nothing to do with our ability to recognize objects. We propose a new framework in which recognition performance for any category is the product of domain-general ability and category-specific experience. In Experiment 1, we show that the overlap between face and object recognition depends on experience with objects. In 256 subjects we measured face recognition, object recognition for eight categories, and self-reported experience with these categories. Experience predicted neither face recognition nor object recognition but moderated their relationship: Face recognition performance is increasingly similar to object recognition performance with increasing object experience. If a subject has a lot of experience with objects and is found to perform poorly, they also prove to have a low ability with faces. In a follow-up survey, we explored the dimensions of experience with objects that may have contributed to self-reported experience in Experiment 1. Different dimensions of experience appear to be more salient for different categories, with general self-reports of expertise reflecting judgments of verbal knowledge about a category more than judgments of visual performance. The complexity of experience and current limitations in its measurement support the importance of aggregating across multiple categories. Our findings imply that both face and object recognition are supported by a common, domain-general ability expressed through experience with a category and best measured when accounting for experience. PMID:24993021

  7. Component Structure of Individual Differences in True and False Recognition of Faces

    ERIC Educational Resources Information Center

    Bartlett, James C.; Shastri, Kalyan K.; Abdi, Herve; Neville-Smith, Marsha

    2009-01-01

    Principal-component analyses of 4 face-recognition studies uncovered 2 independent components. The first component was strongly related to false-alarm errors with new faces as well as to facial "conjunctions" that recombine features of previously studied faces. The second component was strongly related to hits as well as to the conjunction/new…

  8. Fusion of Visible and Thermal Descriptors Using Genetic Algorithms for Face Recognition Systems

    PubMed Central

    Hermosilla, Gabriel; Gallardo, Francisco; Farias, Gonzalo; San Martin, Cesar

    2015-01-01

    The aim of this article is to present a new face recognition system based on the fusion of visible and thermal features obtained from the most current local matching descriptors by maximizing face recognition rates through the use of genetic algorithms. The article considers a comparison of the performance of the proposed fusion methodology against five current face recognition methods and classic fusion techniques used commonly in the literature. These were selected by considering their performance in face recognition. The five local matching methods and the proposed fusion methodology are evaluated using the standard visible/thermal database, the Equinox database, along with a new database, the PUCV-VTF, designed for visible-thermal studies in face recognition and described for the first time in this work. The latter is created considering visible and thermal image sensors with different real-world conditions, such as variations in illumination, facial expression, pose, occlusion, etc. The main conclusions of this article are that two variants of the proposed fusion methodology surpass current face recognition methods and the classic fusion techniques reported in the literature, attaining recognition rates of over 97% and 99% for the Equinox and PUCV-VTF databases, respectively. The fusion methodology is very robust to illumination and expression changes, as it combines thermal and visible information efficiently by using genetic algorithms, thus allowing it to choose optimal face areas where one spectrum is more representative than the other. PMID:26213932
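
    The abstract's central mechanism, a genetic algorithm choosing per-region weights for blending visible and thermal matching scores, can be sketched on synthetic data. This is a minimal illustration, not the authors' implementation: the population size, operators, score arrays, and region count are all invented for the sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def recognition_rate(w, vis, thr):
        """Rank-1 rate when per-region visible/thermal match scores are blended
        with weights w; probe i's correct gallery entry is index i."""
        fused = (w * vis + (1.0 - w) * thr).sum(axis=2)   # (probe, gallery)
        return (fused.argmax(axis=1) == np.arange(len(fused))).mean()

    def ga_fuse(vis, thr, pop=30, gens=40, mut=0.1):
        """Evolve fusion weights (one per face region) that maximize the
        recognition rate, the fitness criterion described in the abstract."""
        n_regions = vis.shape[2]
        P = rng.random((pop, n_regions))
        for _ in range(gens):
            fit = np.array([recognition_rate(w, vis, thr) for w in P])
            P = P[np.argsort(fit)[::-1]]          # sort best-first
            elite = P[: pop // 2]                 # keep the top half
            # uniform crossover between randomly paired elite parents
            pa = elite[rng.integers(0, len(elite), pop - len(elite))]
            pb = elite[rng.integers(0, len(elite), pop - len(elite))]
            child = np.where(rng.random(pa.shape) < 0.5, pa, pb)
            # Gaussian mutation, clipped back to valid weights in [0, 1]
            child = np.clip(child + mut * rng.standard_normal(child.shape), 0, 1)
            P = np.vstack([elite, child])
        fit = np.array([recognition_rate(w, vis, thr) for w in P])
        return P[fit.argmax()]

    # Synthetic scores for 20 identities and 2 face regions: region 0 is only
    # reliable in the visible spectrum, region 1 only in the thermal spectrum.
    n = 20
    vis = 0.2 * rng.random((n, n, 2))
    thr = 0.2 * rng.random((n, n, 2))
    vis[..., 0] += np.eye(n)
    thr[..., 1] += np.eye(n)
    best_w = ga_fuse(vis, thr)
    ```

    Because fitness is the recognition rate itself, the GA naturally favors one spectrum per region where it is more representative, which is the behavior the article attributes to its fusion methodology.
    
    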

  10. Fearful contextual expression impairs the encoding and recognition of target faces: an ERP study.

    PubMed

    Lin, Huiyan; Schulz, Claudia; Straube, Thomas

    2015-01-01

    Previous event-related potential (ERP) studies have shown that the N170 to faces is modulated by the emotion of the face and its context. However, it is unclear how the encoding of emotional target faces as reflected in the N170 is modulated by the preceding contextual facial expression when temporal onset and identity of target faces are unpredictable. In addition, no study as yet has investigated whether contextual facial expression modulates later recognition of target faces. To address these issues, participants in the present study were asked to identify target faces (fearful or neutral) that were presented after a sequence of fearful or neutral contextual faces. The number of sequential contextual faces was random and contextual and target faces were of different identities so that temporal onset and identity of target faces were unpredictable. Electroencephalography (EEG) data was recorded during the encoding phase. Subsequently, participants had to perform an unexpected old/new recognition task in which target face identities were presented in either the encoded or the non-encoded expression. ERP data showed a reduced N170 to target faces in fearful as compared to neutral context regardless of target facial expression. In the later recognition phase, recognition rates were reduced for target faces in the encoded expression when they had been encountered in fearful as compared to neutral context. The present findings suggest that fearful compared to neutral contextual faces reduce the allocation of attentional resources towards target faces, which results in limited encoding and recognition of target faces. PMID:26388751

  11. Single-Sample Face Recognition Based on Intra-Class Differences in a Variation Model

    PubMed Central

    Cai, Jun; Chen, Jing; Liang, Xing

    2015-01-01

    In this paper, a novel random facial variation modeling system for sparse representation face recognition is presented. Although recently Sparse Representation-Based Classification (SRC) has represented a breakthrough in the field of face recognition due to its good performance and robustness, there is the critical problem that SRC needs sufficiently large training samples to achieve good performance. To address this issue, we tackle the single-sample face recognition problem using intra-class differences of variation in a facial image model based on random projection and sparse representation. In this paper, we present a facial variation modeling system composed only of various facial variations. We further propose a novel facial random noise dictionary learning method that is invariant across different faces. The experimental results on the AR, Yale B, Extended Yale B, MIT and FEI databases validate that our method leads to substantial improvements, particularly in single-sample face recognition problems. PMID:25580904

  12. Recognition of personally familiar faces and functional connectivity in Alzheimer's disease.

    PubMed

    Kurth, Sophie; Moyse, Evelyne; Bahri, Mohamed A; Salmon, Eric; Bastin, Christine

    2015-06-01

    Studies have reported that patients in the severe stages of Alzheimer's disease (AD) experience difficulties recognizing their own faces in recent photographs. Two case reports of late-stage AD showed that this loss of self-face recognition was temporally graded: photographs from the remote past were recognized more easily than more recent photographs. Little is known about the neural correlates of own face recognition abilities in AD patients, while neuroimaging studies in healthy adults have related these abilities to a bilateral fronto-parieto-occipital network. In this study, two behavioral experiments (experiments 1 and 2) and one functional magnetic resonance imaging (fMRI) experiment (second part of experiment 2) were conducted to compare mild AD patients (experiment 1) and moderate AD patients (experiment 2) with healthy older participants in a recognition task involving self and familiar faces from different decades of the participants' life. In moderate AD patients, variable performance allowed us to examine correlations between scores and resting-state fMRI in order to link behavioral data to cerebral activity. At the behavioral level, the results revealed that, in mild AD, self and familiar face recognition was preserved. Moreover, mild AD patients and healthy older participants showed an inverse temporal gradient, with faster recognition of self and familiar recent photographs than self and familiar remote photographs. However, in moderate AD, both self and familiar face recognition were affected. fMRI results showed that the higher the connectivity between the dorsomedial prefrontal cortex (dMPFC) and the right superior frontal gyrus (rSFG), the lower the self and familiar face recognition scores in moderate AD patients. Given that previous studies have related the superior frontal region to control processes rather than face recognition processes, these results might reflect less segregation and more interference between brain networks in AD.

  13. Visual scanning behavior is related to recognition performance for own- and other-age faces

    PubMed Central

    Proietti, Valentina; Macchi Cassia, Viola; dell’Amore, Francesca; Conte, Stefania; Bricolo, Emanuela

    2015-01-01

    It is well-established that our recognition ability is enhanced for faces belonging to familiar categories, such as own-race faces and own-age faces. Recent evidence suggests that, for race, the recognition bias is also accompanied by different visual scanning strategies for own- compared to other-race faces. Here, we tested the hypothesis that these differences in visual scanning patterns extend also to the comparison between own- and other-age faces and contribute to the own-age recognition advantage. Participants (young adults with limited experience with infants) were tested in an old/new recognition memory task where they encoded and subsequently recognized a series of adult and infant faces while their eye movements were recorded. Consistent with findings on the other-race bias, we found evidence of an own-age bias in recognition which was accompanied by differential scanning patterns, and consequently differential encoding strategies, for own- compared to other-age faces. Gaze patterns for own-age faces involved a more dynamic sampling of the internal features and longer viewing time on the eye region compared to the other regions of the face. This latter strategy was extensively employed during learning (vs. recognition) and was positively correlated with discriminability. These results suggest that deeply encoding the eye region is functional for recognition and that the own-age bias is evident not only in differential recognition performance, but also in the employment of different sampling strategies found to be effective for accurate recognition. PMID:26579056

  14. Solving the Border Control Problem: Evidence of Enhanced Face Matching in Individuals with Extraordinary Face Recognition Skills.

    PubMed

    Bobak, Anna Katarzyna; Dowsett, Andrew James; Bate, Sarah

    2016-01-01

    Photographic identity documents (IDs) are commonly used despite clear evidence that unfamiliar face matching is a difficult and error-prone task. The current study set out to examine the performance of seven individuals with extraordinary face recognition memory, so-called "super recognisers" (SRs), on two face matching tasks resembling border control identity checks. In Experiment 1, the SRs as a group outperformed control participants on the "Glasgow Face Matching Test", and some case-by-case comparisons also reached significance. In Experiment 2, a perceptually difficult face matching task was used: the "Models Face Matching Test". Once again, SRs outperformed controls at the group level and in most case-by-case analyses. These findings suggest that SRs are considerably better at face matching than typical perceivers, and would make proficient personnel for border control agencies. PMID:26829321

  16. Emotional recognition from face, voice, and music in dementia of the Alzheimer type.

    PubMed

    Drapeau, Joanie; Gosselin, Nathalie; Gagnon, Lise; Peretz, Isabelle; Lorrain, Dominique

    2009-07-01

    Persons with dementia of the Alzheimer type (DAT) are impaired in recognizing emotions from face and voice. Yet clinical practitioners use these media to communicate with DAT patients. Music is also used in clinical practice, but little is known about emotional processing from music in DAT. This study aims to assess emotional recognition in mild DAT. Seven patients with DAT and 16 healthy elderly adults were given three tasks of emotional recognition for face, prosody, and music. DAT participants were impaired only in emotional recognition from the face. These preliminary results suggest that dynamic auditory emotions are preserved in DAT. PMID:19673804

  17. Locality Constrained Joint Dynamic Sparse Representation for Local Matching Based Face Recognition

    PubMed Central

    Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun

    2014-01-01

    Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images degrade the performance of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC. PMID:25419662
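
    The plain SRC step that LCJDSRC builds on can be sketched in a few lines: code a probe over a dictionary of training faces with an l1 penalty, then pick the class whose atoms best reconstruct it. A minimal single-image sketch, without the sub-image partitioning or joint dynamic sparsity of the paper; the ISTA solver and toy data are stand-ins chosen for self-containment.

    ```python
    import numpy as np

    def ista(D, y, lam=0.1, iters=200):
        """min_a 0.5*||y - D a||^2 + lam*||a||_1 via iterative soft-thresholding."""
        L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
        a = np.zeros(D.shape[1])
        for _ in range(iters):
            g = a + D.T @ (y - D @ a) / L          # gradient step
            a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink
        return a

    def src_classify(D, labels, y, lam=0.1):
        """SRC decision rule: zero out all but one class's coefficients and
        return the class with the smallest reconstruction residual."""
        a = ista(D, y, lam)
        classes = np.unique(labels)
        resid = [np.linalg.norm(y - D @ np.where(labels == c, a, 0.0))
                 for c in classes]
        return classes[int(np.argmin(resid))]

    # Toy dictionary: 6 noisy exemplars per class, unit-normalized columns.
    rng = np.random.default_rng(1)
    proto = rng.standard_normal((2, 60))           # one prototype "face" per class
    cols, labels = [], []
    for c in (0, 1):
        for _ in range(6):
            v = proto[c] + 0.1 * rng.standard_normal(60)
            cols.append(v / np.linalg.norm(v))
            labels.append(c)
    D, labels = np.column_stack(cols), np.array(labels)
    y = proto[0] + 0.1 * rng.standard_normal(60)   # probe from class 0
    y /= np.linalg.norm(y)
    ```

    LCJDSRC's contribution is to run this kind of coding jointly over all sub-images of a face with a shared support pattern, rather than independently as above.
    
    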

  18. Improving Preschoolers' Recognition Memory for Faces with Orienting Information.

    ERIC Educational Resources Information Center

    Montepare, Joann M.

    To determine whether preschool children's memory for unfamiliar faces could be facilitated by giving them orienting information about faces, 4- and 5-year-old subjects were told that they were going to play a guessing game in which they would be looking at faces and guessing which ones they had seen before. In study 1, 6 boys and 6 girls within…

  19. Separability oriented fusion of LBP and CS-LDP for infrared face recognition

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Liu, Guodong

    2015-10-01

    Due to the low resolution of infrared face images, local texture features are particularly well suited to infrared face feature extraction. To extract rich facial texture features, infrared face recognition based on the local binary pattern (LBP) and the center-symmetric local derivative pattern (CS-LDP) is proposed. First, LBP is used to extract first-order texture from the original infrared face image; second, second-order features are extracted with CS-LDP. Finally, an adaptive weighted fusion algorithm based on a separability discriminant criterion is proposed to obtain the final recognition features. Experimental results on our infrared face database demonstrate that separability-oriented fusion of LBP and CS-LDP contributes complementary discriminant ability, which improves the performance of infrared face recognition.
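
    The first-order LBP stage can be sketched in plain NumPy; the CS-LDP second-order stage and the separability-weighted fusion are omitted, and the grid size and bin count below are illustrative choices, not taken from the paper.

    ```python
    import numpy as np

    def lbp_8(img):
        """3x3 local binary pattern: compare each interior pixel's 8 neighbours
        with the centre and pack the comparison bits into a code in [0, 255]."""
        c = img[1:-1, 1:-1]
        code = np.zeros(c.shape, dtype=int)
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        for bit, (dy, dx) in enumerate(offsets):
            nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
            code |= (nb >= c).astype(int) << bit
        return code

    def lbp_histogram(img, grid=(2, 2)):
        """Texture descriptor: split the LBP code map into blocks and
        concatenate the per-block normalized 256-bin histograms."""
        codes = lbp_8(img)
        gy, gx = grid
        H, W = codes.shape
        feats = []
        for by in range(gy):
            for bx in range(gx):
                block = codes[by * H // gy:(by + 1) * H // gy,
                              bx * W // gx:(bx + 1) * W // gx]
                h, _ = np.histogram(block, bins=256, range=(0, 256))
                feats.append(h / block.size)
        return np.concatenate(feats)
    ```

    CS-LDP follows the same pattern-coding idea but compares center-symmetric neighbour pairs of first-order codes, which is why the two descriptors carry complementary information worth fusing.
    
    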

  20. Coupled bias-variance tradeoff for cross-pose face recognition.

    PubMed

    Li, Annan; Shan, Shiguang; Gao, Wen

    2012-01-01

    Subspace-based face representation can be viewed as a regression problem. From this viewpoint, we first revisit the problem of recognizing faces across pose differences, which is a bottleneck in face recognition. We then propose a new approach for cross-pose face recognition using a regressor with a coupled bias-variance tradeoff. We found that striking a coupled balance between bias and variance in regression for different poses could improve the regressor-based cross-pose face representation, i.e., the regressor can be made more stable against pose differences. Building on this idea, ridge regression and lasso regression are explored. Experimental results on the CMU PIE, FERET, and Multi-PIE face databases show that the proposed bias-variance tradeoff yields considerable gains in recognition performance. PMID:21724510

  1. Impairments in Monkey and Human Face Recognition in 2-Year-Old Toddlers with Autism Spectrum Disorder and Developmental Delay

    ERIC Educational Resources Information Center

    Chawarska, Katarzyna; Volkmar, Fred

    2007-01-01

    Face recognition impairments are well documented in older children with Autism Spectrum Disorders (ASD); however, the developmental course of the deficit is not clear. This study investigates the progressive specialization of face recognition skills in children with and without ASD. Experiment 1 examines human and monkey face recognition in…

  2. 3D SMoSIFT: three-dimensional sparse motion scale invariant feature transform for activity recognition from RGB-D videos

    NASA Astrophysics Data System (ADS)

    Wan, Jun; Ruan, Qiuqi; Li, Wei; An, Gaoyun; Zhao, Ruizhen

    2014-03-01

    Human activity recognition based on RGB-D data has received increasing attention in recent years. We propose a spatiotemporal feature named three-dimensional (3D) sparse motion scale-invariant feature transform (SIFT) from RGB-D data for activity recognition. First, we build pyramids as scale space for each RGB and depth frame, and then use the Shi-Tomasi corner detector and sparse optical flow to quickly detect and track robust keypoints around the motion pattern in the scale space. Subsequently, local patches around keypoints, which are extracted from RGB-D data, are used to build 3D gradient and motion spaces. Then SIFT-like descriptors are calculated on both 3D spaces, respectively. The proposed feature is invariant to scale, translation, and partial occlusion. More importantly, the proposed feature is fast to compute, so it is well-suited for real-time applications. We have evaluated the proposed feature under a bag-of-words model on three public RGB-D datasets: the one-shot learning ChaLearn Gesture Dataset, the Cornell Activity Dataset-60, and the MSR Daily Activity 3D dataset. Experimental results show that the proposed feature outperforms other spatiotemporal features and is comparable to other state-of-the-art approaches, even though there is only one training sample for each class.

  3. Transverse injection into Mach 2 flow behind a rearward-facing step - A 3-D, compressible flow test case for hypersonic combustor CFD validation

    SciTech Connect

    Mcdaniel, J.C.; Fletcher, D.G.; Hartfield, R.J.; Hollo, S.D. (NASA, Ames Research Center, Moffett Field, CA)

    1991-12-01

    A spatially-complete data set of the important primitive flow variables is presented for the complex, nonreacting, 3D unit combustor flow field employing transverse injection into a Mach 2 flow behind a rearward-facing step. A unique wind tunnel facility providing the capability for iodine seeding was built specifically for these measurements. Two optical techniques based on laser-induced-iodine fluorescence were developed and utilized for nonintrusive, in situ flow field measurements. LDA provided both mean and fluctuating velocity component measurements. A thermographic phosphor wall temperature measurement technique was developed and employed. Data from the 2D flow over a rearward-facing step and the complex 3D mixing flow with injection are reported. 25 refs.

  4. Effects of exposure to facial expression variation in face learning and recognition.

    PubMed

    Liu, Chang Hong; Chen, Wenfeng; Ward, James

    2015-11-01

    Facial expression is a major source of image variation in face images. Linking numerous expressions to the same face can be a huge challenge for face learning and recognition. It remains largely unknown what level of exposure to this image variation is critical for expression-invariant face recognition. We examined this issue in a recognition memory task, where the number of facial expressions of each face being exposed during a training session was manipulated. Faces were either trained with multiple expressions or a single expression, and they were later tested in either the same or different expressions. We found that recognition performance after learning three emotional expressions had no improvement over learning a single emotional expression (Experiments 1 and 2). However, learning three emotional expressions improved recognition compared to learning a single neutral expression (Experiment 3). These findings reveal both the limitation and the benefit of multiple exposures to variations of emotional expression in achieving expression-invariant face recognition. The transfer of expression training to a new type of expression is likely to depend on a relatively extensive level of training and a certain degree of variation across the types of expressions. PMID:25398479

  5. KD-tree based clustering algorithm for fast face recognition on large-scale data

    NASA Astrophysics Data System (ADS)

    Wang, Yuanyuan; Lin, Yaping; Yang, Junfeng

    2015-07-01

    This paper proposes an acceleration method for large-scale face recognition systems. When dealing with a large-scale database, face recognition is time-consuming. In order to tackle this problem, we employ the k-means clustering algorithm to classify face data. Specifically, the data in each cluster are stored in the form of a kd-tree, and face feature matching is conducted with kd-tree based nearest-neighbor search. Experiments on CAS-PEAL and a self-collected database show the effectiveness of our proposed method.
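
    The cluster-then-search idea can be sketched without external libraries: cluster the gallery features with k-means, then match a probe only against the cluster whose centroid is nearest. The paper stores each cluster in a kd-tree for the final nearest-neighbour step; brute force within the (small) cluster keeps this sketch self-contained, and all sizes below are invented.

    ```python
    import numpy as np

    def kmeans(X, k, iters=20, seed=0):
        """Plain Lloyd's k-means; returns centroids and final assignments."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
        for _ in range(iters):
            assign = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
            for j in range(k):
                if np.any(assign == j):        # guard against empty clusters
                    centers[j] = X[assign == j].mean(axis=0)
        assign = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        return centers, assign

    def cluster_nn(X, centers, assign, q):
        """Match probe q against only the nearest centroid's cluster,
        instead of scanning the whole gallery."""
        j = ((centers - q) ** 2).sum(1).argmin()
        idx = np.flatnonzero(assign == j)
        return idx[((X[idx] - q) ** 2).sum(1).argmin()]

    # Toy gallery of 200 16-dimensional face feature vectors.
    rng = np.random.default_rng(2)
    gallery = rng.standard_normal((200, 16))
    centers, assign = kmeans(gallery, k=8)
    probe = gallery[5] + 1e-6                  # near-duplicate of entry 5
    ```

    With the gallery pre-partitioned, each query touches roughly 1/k of the database, which is where the speed-up comes from; the paper's per-cluster kd-trees then make the within-cluster search sub-linear as well.
    
    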

  6. Partial least squares regression on DCT domain for infrared face recognition

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua

    2014-09-01

    Compact and discriminative feature extraction is a challenging task for infrared face recognition. In this paper, we propose an infrared face recognition method that applies Partial Least Squares (PLS) regression to Discrete Cosine Transform (DCT) coefficients. Owing to its strong de-correlation and energy-compaction properties, the DCT is used to obtain compact features from infrared faces. To extract discriminative information from the DCT coefficients, a class-specific one-to-rest PLS classifier is learned for accurate classification. The infrared data were collected with a Thermo Vision A40 infrared camera supplied by FLIR Systems Inc. The experimental results show that the recognition rate of the proposed algorithm reaches 95.8%, outperforming state-of-the-art infrared face recognition methods based on Linear Discriminant Analysis (LDA) and the DCT.
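
    The DCT feature stage can be illustrated with an explicit orthonormal DCT-II basis; the class-specific PLS classifier that follows it in the paper is omitted here, and the truncation size is an arbitrary choice for the sketch.

    ```python
    import numpy as np

    def dct_matrix(n):
        """Orthonormal DCT-II basis matrix (rows are cosine basis vectors)."""
        k = np.arange(n)[:, None]
        x = np.arange(n)[None, :]
        M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
        M[0] /= np.sqrt(2)                     # DC row scaling for orthonormality
        return M

    def dct2_features(img, keep=8):
        """2-D DCT of an image; keep only the top-left keep x keep block of
        low-frequency coefficients as a compact feature vector."""
        h, w = img.shape
        C = dct_matrix(h) @ img @ dct_matrix(w).T
        return C[:keep, :keep].ravel()
    ```

    A flat image compacts all of its energy into the single DC coefficient; this energy-compaction property is what lets the method keep only a few low-frequency coefficients per face without losing much signal.
    
    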

  7. Color face recognition based on steerable pyramid transform and extreme learning machines.

    PubMed

    Uçar, Ayşegül

    2014-01-01

    This paper presents a novel color face recognition algorithm that fuses color and local information. The proposed algorithm fuses multiple features derived from different color spaces. Multiorientation and multiscale information relating to the color face features are extracted by applying the Steerable Pyramid Transform (SPT) to local face regions. First, three new hybrid color spaces, YSCr, ZnSCr, and BnSCr, are constructed using the Cb and Cr component images of the YCbCr color space, the S component of the HSV color space, and the Zn and Bn components of the normalized XYZ color space. Second, the color component face images are partitioned into local patches. Third, the SPT is applied to the local face regions and some statistical features are extracted. Fourth, all features are fused in a decision fusion framework, and combinations of Extreme Learning Machine classifiers are applied to achieve fast and accurate color face recognition. The experiments show that the proposed Local Color Steerable Pyramid Transform (LCSPT) face recognition algorithm substantially improves face recognition performance with the new color spaces compared to conventional and other hybrid ones. Furthermore, it achieves faster recognition than state-of-the-art methods. PMID:24558319

  8. Image-Invariant Responses in Face-Selective Regions Do Not Explain the Perceptual Advantage for Familiar Face Recognition

    PubMed Central

    Davies-Thompson, Jodie; Newling, Katherine

    2013-01-01

    The ability to recognize familiar faces across different viewing conditions contrasts with the inherent difficulty in the perception of unfamiliar faces across similar image manipulations. It is widely believed that this difference in perception and recognition is based on the neural representation for familiar faces being less sensitive to changes in the image than it is for unfamiliar faces. Here, we used an functional magnetic resonance-adaptation paradigm to investigate image invariance in face-selective regions of the human brain. We found clear evidence for a degree of image-invariant adaptation to facial identity in face-selective regions, such as the fusiform face area. However, contrary to the predictions of models of face processing, comparable levels of image invariance were evident for both familiar and unfamiliar faces. This suggests that the marked differences in the perception of familiar and unfamiliar faces may not depend on differences in the way multiple images are represented in core face-selective regions of the human brain. PMID:22345357

  9. Confidence-Accuracy Calibration in Absolute and Relative Face Recognition Judgments

    ERIC Educational Resources Information Center

    Weber, Nathan; Brewer, Neil

    2004-01-01

    Confidence-accuracy (CA) calibration was examined for absolute and relative face recognition judgments as well as for recognition judgments from groups of stimuli presented simultaneously or sequentially (i.e., simultaneous or sequential mini-lineups). When the effect of difficulty was controlled, absolute and relative judgments produced…

  10. Principal patterns of fractional-order differential gradients for face recognition

    NASA Astrophysics Data System (ADS)

    Yu, Lei; Cao, Qi; Zhao, Anping

    2015-01-01

    We investigate the ability of fractional-order differentiation (FD) to represent facial texture and present a local descriptor, called the principal patterns of fractional-order differential gradients (PPFDGs), for face recognition. In PPFDG, multiple FD gradient patterns of a face image are obtained using multiorientation FD masks. As a result, each pixel of the face image can be represented as a high-dimensional gradient vector. Then, by applying principal component analysis to the gradient vectors over the centered neighborhood of each pixel, we capture the principal gradient patterns and meanwhile compute the corresponding orientation patterns, from which oriented gradient magnitudes are computed. Histogram features are finally extracted from these oriented gradient magnitude patterns as the face representation using local binary patterns. Experimental results on the FERET, AR (A.M. Martinez and R. Benavente), Extended Yale B, and Labeled Faces in the Wild (LFW) face datasets validate the effectiveness of the proposed method.

  11. Maximized Posteriori Attributes Selection from Facial Salient Landmarks for Face Recognition

    NASA Astrophysics Data System (ADS)

    Gupta, Phalguni; Kisku, Dakshina Ranjan; Sing, Jamuna Kanta; Tistarelli, Massimo

    This paper presents a robust and dynamic face recognition technique based on the extraction and matching of devised probabilistic graphs drawn on SIFT features related to independent face areas. The face matching strategy is based on matching individual salient facial graph characterized by SIFT features as connected to facial landmarks such as the eyes and the mouth. In order to reduce the face matching errors, the Dempster-Shafer decision theory is applied to fuse the individual matching scores obtained from each pair of salient facial features. The proposed algorithm is evaluated with the ORL and the IITK face databases. The experimental results demonstrate the effectiveness and potential of the proposed face recognition technique also in case of partially occluded faces.

  12. Super-resolution method for face recognition using nonlinear mappings on coherent features.

    PubMed

    Huang, Hua; He, Huiting

    2011-01-01

    Low resolution (LR) of face images significantly decreases the performance of face recognition. To address this problem, we present a super-resolution method that uses nonlinear mappings to infer coherent features that favor higher recognition by nearest neighbor (NN) classifiers for a single LR face image. Canonical correlation analysis is applied to establish the coherent subspaces between the principal component analysis (PCA) based features of high-resolution (HR) and LR face images. Then, a nonlinear mapping between HR/LR features can be built by radial basis functions (RBFs) with lower regression errors in the coherent feature space than in the PCA feature space. Thus, we can compute super-resolved coherent features corresponding to an input LR image efficiently and accurately from the trained RBF model. Face identity can then be obtained by feeding these super-resolved features to a simple NN classifier. Extensive experiments on the Facial Recognition Technology (FERET), University of Manchester Institute of Science and Technology (UMIST), and Olivetti Research Laboratory (ORL) databases show that the proposed method outperforms state-of-the-art face recognition algorithms for single LR images in terms of both recognition rate and robustness to facial variations of pose and expression. PMID:21062679

  13. Capturing specific abilities as a window into human individuality: The example of face recognition

    PubMed Central

    Wilmer, Jeremy B.; Germine, Laura; Chabris, Christopher F.; Chatterjee, Garga; Gerbasi, Margaret; Nakayama, Ken

    2013-01-01

    Proper characterization of each individual's unique pattern of strengths and weaknesses requires good measures of diverse abilities. Here, we advocate combining our growing understanding of neural and cognitive mechanisms with modern psychometric methods in a renewed effort to capture human individuality through a consideration of specific abilities. We articulate five criteria for the isolation and measurement of specific abilities, then apply these criteria to face recognition. We cleanly dissociate face recognition from more general visual and verbal recognition. This dissociation stretches across ability as well as disability, suggesting that specific developmental face recognition deficits are a special case of a broader specificity that spans the entire spectrum of human face recognition performance. Item-by-item results from 1,471 web-tested participants, included as supplementary information, fuel item analyses, validation, norming, and item response theory (IRT) analyses of our three tests: (a) the widely used Cambridge Face Memory Test (CFMT); (b) an Abstract Art Memory Test (AAMT), and (c) a Verbal Paired-Associates Memory Test (VPMT). The availability of this data set provides a solid foundation for interpreting future scores on these tests. We argue that the allied fields of experimental psychology, cognitive neuroscience, and vision science could fuel the discovery of additional specific abilities to add to face recognition, thereby providing new perspectives on human individuality. PMID:23428079

  14. Face recognition using artificial neural network group-based adaptive tolerance (GAT) trees.

    PubMed

    Zhang, M; Fulcher, J

    1996-01-01

    Recent artificial neural network research has focused on simple models, but such models have not been very successful in describing complex systems (such as face recognition). This paper introduces the artificial neural network group-based adaptive tolerance (GAT) tree model for translation-invariant face recognition, suitable for use in an airport security system. GAT trees use a two-stage divide-and-conquer tree-type approach. The first stage determines general properties of the input, such as whether the facial image contains glasses or a beard. The second stage identifies the individual. Face perception classification, detection of front faces with glasses and/or beards, and face recognition results using GAT trees under laboratory conditions are presented. We conclude that the neural network group-based model offers significant improvement over conventional neural network trees for this task. PMID:18263454

  15. Recognition Memory Measures Yield Disproportionate Effects of Aging on Learning Face-Name Associations

    PubMed Central

    James, Lori E.; Fogler, Kethera A.; Tauber, Sarah K.

    2008-01-01

    No previous research has tested whether the specific age-related deficit in learning face-name associations that has been identified using recall tasks also occurs for recognition memory measures. Young and older participants saw pictures of unfamiliar people with a name and an occupation for each person, and were tested on a matching (in Experiment 1) or multiple-choice (in Experiment 2) recognition memory test. For both recognition measures, the pattern of effects was the same as that obtained using a recall measure: more face-occupation associations were remembered than face-name associations, young adults remembered more associated information than older adults overall, and older adults had disproportionately poorer memory for face-name associations. Findings implicate age-related difficulty in forming and retrieving the association between the face and the name as the primary cause of obtained deficits in previous name learning studies. PMID:18808254

  16. Correlations between psychometric schizotypy, scan path length, fixations on the eyes and face recognition.

    PubMed

    Hills, Peter J; Eaton, Elizabeth; Pake, J Michael

    2016-01-01

    Psychometric schizotypy in the general population correlates negatively with face recognition accuracy, potentially due to deficits in inhibition, social withdrawal, or eye-movement abnormalities. We report an eye-tracking face recognition study in which participants were required to match one of two faces (target and distractor) to a cue face presented immediately before. All faces could be presented with or without paraphernalia (e.g., hats, glasses, facial hair). Results showed that paraphernalia distracted participants, most strongly when the cue and the distractor face had paraphernalia but the target face did not; distractibility did not correlate with participants' scores on the Schizotypal Personality Questionnaire (SPQ). Schizotypy was negatively correlated with the proportion of time spent fixating on the eyes and positively correlated with not fixating on a feature. It was also negatively correlated with scan path length, and scan path length correlated with face recognition accuracy. These results are interpreted as schizotypal traits being associated with a restricted scan path leading to face recognition deficits. PMID:25835241

  17. Using Computerized Games to Teach Face Recognition Skills to Children with Autism Spectrum Disorder: The "Let's Face It!" Program

    ERIC Educational Resources Information Center

    Tanaka, James W.; Wolf, Julie M.; Klaiman, Cheryl; Koenig, Kathleen; Cockburn, Jeffrey; Herlihy, Lauren; Brown, Carla; Stahl, Sherin; Kaiser, Martha D.; Schultz, Robert T.

    2010-01-01

    Background: An emerging body of evidence indicates that relative to typically developing children, children with autism are selectively impaired in their ability to recognize facial identity. A critical question is whether face recognition skills can be enhanced through a direct training intervention. Methods: In a randomized clinical trial,…

  18. Effect of Partial Occlusion on Newborns' Face Preference and Recognition

    ERIC Educational Resources Information Center

    Gava, Lucia; Valenza, Eloisa; Turati, Chiara; de Schonen, Scania

    2008-01-01

    Many studies have shown that newborns prefer (e.g. Goren, Sarty & Wu, 1975; Valenza, Simion, Macchi Cassia & Umilta, 1996) and recognize (e.g. Bushnell, Say & Mullin, 1989; Pascalis & de Schonen, 1994) faces. However, it is not known whether, at birth, faces are still preferred and recognized when some of their parts are not visible because…

  19. Atypical Development of Face and Greeble Recognition in Autism

    ERIC Educational Resources Information Center

    Scherf, K. Suzanne; Behrmann, Marlene; Minshew, Nancy; Luna, Beatriz

    2008-01-01

    Background: Impaired face processing is a widely documented deficit in autism. Although the origin of this deficit is unclear, several groups have suggested that a lack of perceptual expertise is contributory. We investigated whether individuals with autism develop expertise in visuoperceptual processing of faces and whether any deficiency in such…

  20. A new face of sleep: The impact of post-learning sleep on recognition memory for face-name associations.

    PubMed

    Maurer, Leonie; Zitting, Kirsi-Marja; Elliott, Kieran; Czeisler, Charles A; Ronda, Joseph M; Duffy, Jeanne F

    2015-12-01

    Sleep has been demonstrated to improve consolidation of many types of new memories. However, few prior studies have examined how sleep impacts learning of face-name associations. The recognition of a new face along with the associated name is an important human cognitive skill. Here we investigated whether post-presentation sleep impacts recognition memory of new face-name associations in healthy adults. Fourteen participants were tested twice. Each time, they were presented 20 photos of faces with a corresponding name. Twelve hours later, they were shown each face twice, once with the correct and once with an incorrect name, and asked if each face-name combination was correct and to rate their confidence. In one condition the 12-h interval between presentation and recall included an 8-h nighttime sleep opportunity ("Sleep"), while in the other condition they remained awake ("Wake"). There were more correct and highly confident correct responses when the interval between presentation and recall included a sleep opportunity, although improvement between the "Wake" and "Sleep" conditions was not related to duration of sleep or any sleep stage. These data suggest that a nighttime sleep opportunity improves the ability to correctly recognize face-name associations. Further studies investigating the mechanism of this improvement are important, as this finding has implications for individuals with sleep disturbances and/or memory impairments. PMID:26549626

  1. Factor G utilizes a carbohydrate-binding cleft that is conserved between horseshoe crab and bacteria for the recognition of beta-1,3-D-glucans.

    PubMed

    Ueda, Yuki; Ohwada, Shuhei; Abe, Yoshito; Shibata, Toshio; Iijima, Manabu; Yoshimitsu, Yukiko; Koshiba, Takumi; Nakata, Munehiro; Ueda, Tadashi; Kawabata, Shun-ichiro

    2009-09-15

    In the horseshoe crab, the recognition of beta-1,3-D-glucans by factor G triggers hemolymph coagulation. Factor G contains a domain of two tandem xylanase Z-like modules (Z1-Z2), each of which recognizes beta-1,3-D-glucans. To gain insight into the recognition of beta-1,3-D-glucans from a structural viewpoint, recombinants of Z1-Z2, the C-terminal module Z2, Z2 with a Cys to Ala substitution (Z2A), and its tandem repeat Z2A-Z2A were characterized. Z2 and Z1-Z2, but not Z2A and Z2A-Z2A, formed insoluble aggregates at concentrations higher than approximately 30 and 3 microM, respectively. Z1-Z2 and Z2A-Z2A bound more strongly to an insoluble beta-1,3-D-glucan (curdlan) than Z2A. The affinity of Z2A for a soluble beta-1,3-D-glucan (laminarin) was equivalent to those of Z1-Z2, Z2A-Z2A, and native factor G, suggesting that the binding of a single xylanase Z-like module prevents the subsequent binding of another module to laminarin. Interestingly, Z2A as well as intact factor G exhibited fungal agglutinating activity, and fungi were specifically detected with fluorescently tagged Z2A by microscopy. The chemical shift perturbation of Z2A induced by the interaction with laminaripentaose was analyzed by nuclear magnetic resonance spectroscopy. The ligand-binding site of Z2A was located in a cleft on a beta-sheet in a predicted beta-sandwich structure, which was superimposed onto cleft B in a cellulose-binding module of endoglucanase 5A from the soil bacterium Cellvibrio mixtus. We conclude that the pattern recognition of beta-1,3-D-glucans by factor G is accomplished via a carbohydrate-binding cleft that is evolutionarily conserved between horseshoe crab and bacteria. PMID:19710471

  2. Face Processing and Facial Emotion Recognition in Adults with Down Syndrome

    ERIC Educational Resources Information Center

    Barisnikov, Koviljka; Hippolyte, Loyse; Van der Linden, Martial

    2008-01-01

    Face processing and facial expression recognition was investigated in 17 adults with Down syndrome, and results were compared with those of a child control group matched for receptive vocabulary. On the tasks involving faces without emotional content, the adults with Down syndrome performed significantly worse than did the controls. However, their…

  3. Face Recognition in Children with a Pervasive Developmental Disorder Not Otherwise Specified.

    ERIC Educational Resources Information Center

    Serra, M.; Althaus, M.; de Sonneville, L. M. J.; Stant, A. D.; Jackson, A. E.; Minderaa, R. B.

    2003-01-01

    A study investigated the accuracy and speed of face recognition in 26 children (ages 7-10) with Pervasive Developmental Disorder Not Otherwise Specified. Subjects needed almost as much time to recognize the faces as they needed to recognize abstract patterns that were difficult to distinguish. (Contains references.)…

  4. The Simon Then Garfunkel Effect: Semantic Priming, Sensitivity, and the Modularity of Face Recognition.

    ERIC Educational Resources Information Center

    Rhodes, Gillian; Tremewan, Tanya

    1993-01-01

    In 5 experiments involving 306 adults, the mechanisms underlying semantic priming in the domain of face recognition, particularly famous faces, and the plausibility of modularity were assessed. Results suggest that sensitivity changes that occur when direct associative connections within the module can be ruled out pose a problem for modularity.…

  5. Brief Report: Face-Specific Recognition Deficits in Young Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Bradshaw, Jessica; Shic, Frederick; Chawarska, Katarzyna

    2011-01-01

    This study used eyetracking to investigate the ability of young children with autism spectrum disorders (ASD) to recognize social (faces) and nonsocial (simple objects and complex block patterns) stimuli using the visual paired comparison (VPC) paradigm. Typically developing (TD) children showed evidence for recognition of faces and simple…

  6. Deficits in Other-Race Face Recognition: No Evidence for Encoding-Based Effects

    PubMed Central

    Papesh, Megan H.; Goldinger, Stephen D.

    2010-01-01

    The other-race effect (ORE) in face recognition is typically observed in tasks which require long-term memory. Several studies, however, have found the effect early in face encoding (Lindsay, Jack, & Christian, 1991; Walker & Hewstone, 2006). In 6 experiments, with over 300 participants, we found no evidence that the recognition deficit associated with the ORE reflects deficits in immediate encoding. In Experiment 1, with a study-to-test retention interval of 4 min, participants were better able to recognise White faces, relative to Asian faces. Experiment 1 also validated the use of computer-generated faces in subsequent experiments. In Experiments 2 through 4, performance was virtually identical to Asian and White faces in match-to-sample, immediate recognition. In Experiment 5, decreasing target-foil similarity and disrupting the retention interval with trivia questions elicited a re-emergence of the ORE. Experiments 6A and 6B replicated this effect, and showed that memory for Asian faces was particularly susceptible to distraction; White faces were recognised equally well, regardless of trivia questions during the retention interval. The recognition deficit in the ORE apparently emerges from retention or retrieval deficits, not differences in immediate perceptual processing. PMID:20025384

  7. Brief Report: Developing Spatial Frequency Biases for Face Recognition in Autism and Williams Syndrome

    ERIC Educational Resources Information Center

    Leonard, Hayley C.; Annaz, Dagmara; Karmiloff-Smith, Annette; Johnson, Mark H.

    2011-01-01

    The current study investigated whether contrasting face recognition abilities in autism and Williams syndrome could be explained by different spatial frequency biases over developmental time. Typically-developing children and groups with Williams syndrome and autism were asked to recognise faces in which low, middle and high spatial frequency…

  8. An Own-Race Advantage for Components as Well as Configurations in Face Recognition

    ERIC Educational Resources Information Center

    Hayward, William G.; Rhodes, Gillian; Schwaninger, Adrian

    2008-01-01

    The own-race advantage in face recognition has been hypothesized as being due to a superiority in the processing of configural information for own-race faces. Here we examined the contributions of both configural and component processing to the own-race advantage. We recruited 48 Caucasian participants in Australia and 48 Chinese participants in…

  9. Deficits in other-race face recognition: no evidence for encoding-based effects.

    PubMed

    Papesh, Megan H; Goldinger, Stephen D

    2009-12-01

    The other-race effect (ORE) in face recognition is typically observed in tasks which require long-term memory. Several studies, however, have found the effect early in face encoding (Lindsay, Jack, & Christian, 1991; Walker & Hewstone, 2006). In 6 experiments, with over 300 participants, we found no evidence that the recognition deficit associated with the ORE reflects deficits in immediate encoding. In Experiment 1, with a study-to-test retention interval of 4 min, participants were better able to recognise White faces, relative to Asian faces. Experiment 1 also validated the use of computer-generated faces in subsequent experiments. In Experiments 2 through 4, performance was virtually identical to Asian and White faces in match-to-sample, immediate recognition. In Experiment 5, decreasing target-foil similarity and disrupting the retention interval with trivia questions elicited a re-emergence of the ORE. Experiments 6A and 6B replicated this effect, and showed that memory for Asian faces was particularly susceptible to distraction; White faces were recognised equally well, regardless of trivia questions during the retention interval. The recognition deficit in the ORE apparently emerges from retention or retrieval deficits, not differences in immediate perceptual processing. PMID:20025384

  10. Verbal Overshadowing and Face Recognition in Young and Old Adults

    ERIC Educational Resources Information Center

    Kinlen, Thomas J.; Adams-Price, Carolyn E.; Henley, Tracy B.

    2007-01-01

    Verbal overshadowing has been found to disrupt recognition accuracy when hard-to-describe stimuli are used. The current study replicates previous research on verbal overshadowing with younger people and extends this research into an older population to examine the possible link between verbal expertise and verbal overshadowing. It was hypothesized…

  11. Determining optimally orthogonal discriminant vectors in DCT domain for multiscale-based face recognition

    NASA Astrophysics Data System (ADS)

    Niu, Yanmin; Wang, Xuchu

    2011-02-01

    This paper presents a new face recognition method that extracts multiple discriminant features based on a multiscale image enhancement technique and kernel-based orthogonal feature extraction, with several interesting characteristics. First, it can extract more discriminative multiscale face features than traditional pixel-based or Gabor-based features. Second, it can effectively deal with the small sample size problem as well as the feature correlation problem by using eigenvalue decomposition on scatter matrices. Finally, the extractor handles nonlinearity efficiently by using the kernel trick. Multiple recognition experiments on open face datasets, with comparison to several related methods, show the effectiveness and superiority of the proposed method.

  12. The design and implementation of effective face detection and recognition system

    NASA Astrophysics Data System (ADS)

    Sun, Yigui

    2011-06-01

    In the paper, a face detection and recognition system (FDRS) based on video sequences and still images is proposed. It uses the AdaBoost algorithm to detect human faces in an image or frame, and adopts the Discrete Cosine Transform (DCT) for feature extraction and recognition in face images. The related technologies are first outlined. Then, the system requirements and UML use case diagram are described. In addition, the paper mainly introduces the design solution and key procedures. The FDRS's source code is written in VC++ using the Standard Template Library (STL) and the Intel Open Source Computer Vision Library (OpenCV).
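The DCT feature-extraction stage this abstract describes can be sketched in a few lines. This is only an assumed sketch: the AdaBoost detection stage (e.g. an OpenCV Haar cascade) is omitted, and the image size and coefficient-block size are illustrative choices, not the paper's.

```python
# Minimal sketch of DCT-based feature extraction for face recognition.
# Detection (AdaBoost/Haar cascade) is omitted; sizes are assumptions.
import numpy as np
from scipy.fft import dctn

def dct_features(face_img, k=8):
    """Keep the k x k low-frequency 2D-DCT coefficients as the feature vector."""
    coeffs = dctn(face_img, norm="ortho")   # 2D DCT-II of the whole face image
    return coeffs[:k, :k].ravel()           # low frequencies carry most energy

face = np.random.default_rng(1).random((64, 64))  # stand-in for a cropped face
feat = dct_features(face)
print(feat.shape)   # (64,)
```

In a real system the feature vectors would be compared with a distance metric or fed to a classifier; keeping only the low-frequency block is what makes the representation compact and somewhat robust to noise.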

  13. Blurred face recognition by fusing blur-invariant texture and structure features

    NASA Astrophysics Data System (ADS)

    Zhu, Mengyu; Cao, Zhiguo; Xiao, Yang; Xie, Xiaokang

    2015-10-01

    Blurred face recognition remains a challenging task with wide applications. Image blur can largely degrade recognition performance. The local phase quantization (LPQ) descriptor was proposed to extract blur-invariant texture information; it has been used for blurred face recognition and achieved good performance. However, LPQ captures only phase-based blur-invariant texture information, which is not sufficient. In addition, LPQ is extracted holistically, which cannot fully exploit its discriminative power on local spatial properties. In this paper, we propose a novel method for blurred face recognition. Texture and structure blur-invariant features are extracted and fused to generate a more complete description of the blurred image. For the texture blur-invariant feature, LPQ is extracted in a densely sampled way and the vector of locally aggregated descriptors (VLAD) is employed to enhance its performance. For the structure blur-invariant feature, the histogram of oriented gradients (HOG) is used. To further enhance its blur invariance, we improve HOG by eliminating weak gradient magnitudes, which are more sensitive to image blur than strong gradients. The improved HOG is then fused with the original HOG by canonical correlation analysis (CCA). Finally, the texture and structure features are fused by CCA to form the blur-invariant representation of the face image. Experiments on three face datasets demonstrate that the proposed improvements and the overall method perform well in blurred face recognition.
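The "eliminating weak gradient magnitudes" idea above can be sketched for a single HOG cell: magnitudes below a threshold are zeroed before voting into the orientation histogram, so blur-sensitive weak gradients are discarded. The threshold and bin count here are illustrative assumptions, not values from the paper.

```python
# Sketch of an "improved HOG" cell histogram that suppresses weak gradients.
# Threshold and bin count are assumptions for illustration.
import numpy as np

def improved_hog_cell(img, n_bins=9, mag_thresh=0.1):
    gy, gx = np.gradient(img.astype(float))        # image gradients
    mag = np.hypot(gx, gy)                         # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # unsigned orientation [0, pi)
    mag = np.where(mag >= mag_thresh, mag, 0.0)    # eliminate weak gradients
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (np.linalg.norm(hist) + 1e-9)    # L2-normalized cell histogram

cell = np.random.default_rng(2).random((8, 8))     # stand-in for one 8x8 cell
h = improved_hog_cell(cell)
print(h.shape)   # (9,)
```

A full descriptor would tile the face into cells, concatenate the histograms, and (per the abstract) fuse this variant with the original HOG via CCA.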

  14. Catechol-O-methyltransferase val158met Polymorphism Interacts with Sex to Affect Face Recognition Ability

    PubMed Central

    Lamb, Yvette N.; McKay, Nicole S.; Singh, Shrimal S.; Waldie, Karen E.; Kirk, Ian J.

    2016-01-01

    The catechol-O-methyltransferase (COMT) val158met polymorphism affects the breakdown of synaptic dopamine. Consequently, this polymorphism has been associated with a variety of neurophysiological and behavioral outcomes. Some of the effects have been found to be sex-specific and it appears estrogen may act to down-regulate the activity of the COMT enzyme. The dopaminergic system has been implicated in face recognition, a form of cognition for which a female advantage has typically been reported. This study aimed to investigate potential joint effects of sex and COMT genotype on face recognition. A sample of 142 university students was genotyped and assessed using the Faces I subtest of the Wechsler Memory Scale – Third Edition (WMS-III). A significant two-way interaction between sex and COMT genotype on face recognition performance was found. Of the male participants, COMT val homozygotes and heterozygotes had significantly lower scores than met homozygotes. Scores did not differ between genotypes for female participants. While male val homozygotes had significantly lower scores than female val homozygotes, no sex differences were observed in the heterozygotes and met homozygotes. This study contributes to the accumulating literature documenting sex-specific effects of the COMT polymorphism by demonstrating a COMT-sex interaction for face recognition, and is consistent with a role for dopamine in face recognition. PMID:27445927

  15. Oxytocin increases bias, but not accuracy, in face recognition line-ups.

    PubMed

    Bate, Sarah; Bennetts, Rachel; Parris, Benjamin A; Bindemann, Markus; Udale, Robert; Bussunt, Amanda

    2015-07-01

    Previous work indicates that intranasal inhalation of oxytocin improves face recognition skills, raising the possibility that it may be used in security settings. However, it is unclear whether oxytocin directly acts upon the core face-processing system itself or indirectly improves face recognition via affective or social salience mechanisms. In a double-blind procedure, 60 participants received either an oxytocin or placebo nasal spray before completing the One-in-Ten task, a standardized test of unfamiliar face recognition containing target-present and target-absent line-ups. Participants in the oxytocin condition outperformed those in the placebo condition on target-present trials, yet were more likely to make false-positive errors on target-absent trials. Signal detection analyses indicated that oxytocin induced a more liberal response bias, rather than increasing accuracy per se. These findings support a social salience account of the effects of oxytocin on face recognition and indicate that oxytocin may impede face recognition in certain scenarios. PMID:25433464

  16. Self-Face Recognition in Schizophrenia: An Eye-Tracking Study.

    PubMed

    Bortolon, Catherine; Capdevielle, Delphine; Salesse, Robin N; Raffard, Stéphane

    2016-01-01

    Self-face recognition has been shown to be impaired in schizophrenia (SZ), according to studies using behavioral tasks implicating cognitive demands. Here, we employed an eye-tracking methodology, which is a relevant tool for investigating self-face recognition deficits in SZ because it provides a natural, continuous and online record of face processing. Moreover, it allows identification of the most relevant and informative features each individual looks at during self-face recognition. These advantages are especially relevant considering the fundamental role played by patterns of visual exploration in face processing. Thus, this paper aims to investigate self-face recognition deficits in SZ using eye-tracking methodology. Visual scan paths were monitored in 20 patients with SZ and 20 healthy controls. Self, famous, and unknown faces were morphed in steps of 20%. Location, number, and duration of fixations on relevant areas were recorded with an eye-tracking system. Participants performed a passive exploration task (no specific instruction was provided), followed by an active decision making task (individuals were explicitly requested to recognize the different faces). Results showed that patients with SZ had fewer and longer fixations compared to controls. Nevertheless, both groups focused their attention on relevant facial features in a similar way. No significant difference was found between groups when participants were requested to recognize the faces (active task). In conclusion, using an eye-tracking methodology and two tasks with low levels of cognitive demands, our results suggest that patients with SZ are able to: (1) explore faces and focus on relevant features of the face in a similar way as controls; and (2) recognize their own face. PMID:26903833

  17. The Own-Age Bias in Face Recognition: A Meta-Analytic and Theoretical Review

    ERIC Educational Resources Information Center

    Rhodes, Matthew G.; Anastasi, Jeffrey S.

    2012-01-01

    A large number of studies have examined the finding that recognition memory for faces of one's own age group is often superior to memory for faces of another age group. We examined this "own-age bias" (OAB) in the meta-analyses reported. These data showed that hits were reliably greater for same-age relative to other-age faces (g = 0.23) and that…

  18. Self-Face Recognition in Schizophrenia: An Eye-Tracking Study

    PubMed Central

    Bortolon, Catherine; Capdevielle, Delphine; Salesse, Robin N.; Raffard, Stéphane

    2016-01-01

    Self-face recognition has been shown to be impaired in schizophrenia (SZ), according to studies using behavioral tasks implicating cognitive demands. Here, we employed an eye-tracking methodology, which is a relevant tool for investigating self-face recognition deficits in SZ because it provides a natural, continuous and online record of face processing. Moreover, it allows identification of the most relevant and informative features each individual looks at during self-face recognition. These advantages are especially relevant considering the fundamental role played by patterns of visual exploration in face processing. Thus, this paper aims to investigate self-face recognition deficits in SZ using eye-tracking methodology. Visual scan paths were monitored in 20 patients with SZ and 20 healthy controls. Self, famous, and unknown faces were morphed in steps of 20%. Location, number, and duration of fixations on relevant areas were recorded with an eye-tracking system. Participants performed a passive exploration task (no specific instruction was provided), followed by an active decision making task (individuals were explicitly requested to recognize the different faces). Results showed that patients with SZ had fewer and longer fixations compared to controls. Nevertheless, both groups focused their attention on relevant facial features in a similar way. No significant difference was found between groups when participants were requested to recognize the faces (active task). In conclusion, using an eye-tracking methodology and two tasks with low levels of cognitive demands, our results suggest that patients with SZ are able to: (1) explore faces and focus on relevant features of the face in a similar way as controls; and (2) recognize their own face. PMID:26903833

  19. Infrared face recognition based on LBP histogram and KW feature selection

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua

    2014-07-01

    The conventional LBP-based feature as represented by the local binary pattern (LBP) histogram still has room for performance improvements. This paper focuses on the dimension reduction of LBP micro-patterns and proposes an improved infrared face recognition method based on LBP histogram representation. To extract local robust features in infrared face images, LBP is chosen to obtain the composition of micro-patterns of sub-blocks. Based on statistical test theory, a Kruskal-Wallis (KW) feature selection method is proposed to obtain the LBP patterns that are suitable for infrared face recognition. The experimental results show that the combination of LBP and KW feature selection improves the performance of infrared face recognition; the proposed method outperforms traditional methods based on the LBP histogram, discrete cosine transform (DCT), or principal component analysis (PCA).
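The LBP-histogram-plus-Kruskal-Wallis pipeline above can be sketched as follows. This is an assumed illustration on synthetic images: the image sizes, class structure, and top-64 bin cutoff are not the paper's values, and the per-class smoothing only stands in for genuinely different subjects.

```python
# Sketch: basic 8-neighbor LBP histograms per image, then a Kruskal-Wallis
# test per histogram bin to keep the bins that best separate subjects.
# Sizes, class structure, and the selection cutoff are assumptions.
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.stats import kruskal

def lbp_histogram(img):
    """8-neighbor LBP codes pooled into a normalized 256-bin histogram."""
    c = img[1:-1, 1:-1]
    shifts = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
              img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    code = sum((s >= c).astype(int) << k for k, s in enumerate(shifts))
    return np.bincount(code.ravel(), minlength=256) / code.size

rng = np.random.default_rng(3)
labels = np.repeat(np.arange(3), 10)    # 3 "subjects", 10 images each
hists = np.array([lbp_histogram(uniform_filter(rng.random((32, 32)), size=1 + y))
                  for y in labels])     # class-dependent smoothing fakes identity texture

# Kruskal-Wallis H per LBP bin; constant bins get H = 0 and are never selected.
H = np.zeros(256)
for b in range(256):
    groups = [hists[labels == y, b] for y in range(3)]
    if np.ptp(np.concatenate(groups)) > 0:
        H[b] = kruskal(*groups).statistic
selected = np.argsort(H)[-64:]          # keep the 64 most discriminative bins
print(hists[:, selected].shape)         # reduced feature matrix: (30, 64)
```

The reduced histograms would then feed a standard classifier; the KW test only ranks bins, so the cutoff is a tunable design choice.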

  20. Maximum margin sparse representation discriminative mapping with application to face recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Qiang; Cai, Yunze; Xu, Xiaoming

    2013-02-01

    Sparse subspace learning has drawn increasing attention recently. We propose a novel sparse subspace learning algorithm called maximum margin sparse representation discriminative mapping (MSRDM), which adds discriminative information to sparse neighborhood preservation. Based on a combination of the maximum margin discriminant criterion and sparse representation, MSRDM can preserve both local geometric structure and classification information. MSRDM naturally avoids the small sample size problem in face recognition, and its computation is efficient. To improve face recognition performance, we propose to integrate Gabor-like complex wavelet and natural image features via complex vectors as input features of MSRDM. Experimental results on the ORL, UMIST, Yale, and PIE face databases demonstrate the effectiveness of the proposed face recognition method.

  1. On the particular vulnerability of face recognition to aging: a review of three hypotheses

    PubMed Central

    Boutet, Isabelle; Taler, Vanessa; Collin, Charles A.

    2015-01-01

    Age-related face recognition deficits are characterized by high false alarms to unfamiliar faces, are not as pronounced for other complex stimuli, and are only partially related to general age-related impairments in cognition. This paper reviews some of the underlying processes likely to be implicated in these deficits by focusing on areas where contradictions abound as a means to highlight avenues for future research. Research pertaining to the three following hypotheses is presented: (i) perceptual deterioration, (ii) encoding of configural information, and (iii) difficulties in recollecting contextual information. The evidence surveyed provides support for the idea that all three factors are likely to contribute, under certain conditions, to the deficits in face recognition seen in older adults. We discuss how these different factors might interact in the context of a generic framework of the different stages implicated in face recognition. Several suggestions for future investigations are outlined. PMID:26347670

  2. Fashioning the Face: Sensorimotor Simulation Contributes to Facial Expression Recognition.

    PubMed

    Wood, Adrienne; Rychlowska, Magdalena; Korb, Sebastian; Niedenthal, Paula

    2016-03-01

    When we observe a facial expression of emotion, we often mimic it. This automatic mimicry reflects underlying sensorimotor simulation that supports accurate emotion recognition. Why this is so is becoming more obvious: emotions are patterns of expressive, behavioral, physiological, and subjective feeling responses. Activation of one component can therefore automatically activate other components. When people simulate a perceived facial expression, they partially activate the corresponding emotional state in themselves, which provides a basis for inferring the underlying emotion of the expresser. We integrate recent evidence in favor of a role for sensorimotor simulation in emotion recognition. We then connect this account to a domain-general understanding of how sensory information from multiple modalities is integrated to generate perceptual predictions in the brain. PMID:26876363

  3. Recognition by association: Within- and cross-modality associative priming with faces and voices.

    PubMed

    Stevenage, Sarah V; Hale, Sarah; Morgan, Yasmin; Neil, Greg J

    2014-02-01

    Recent literature has raised the suggestion that voice recognition runs in parallel to face recognition. As a result, a prediction can be made that voices should prime faces and faces should prime voices. A traditional associative priming paradigm was used in two studies to explore within-modality priming and cross-modality priming. In the within-modality condition where both prime and target were faces, analysis indicated the expected associative priming effect: The familiarity decision to the second target celebrity was made more quickly if preceded by a semantically related prime celebrity, than if preceded by an unrelated prime celebrity. In the cross-modality condition, where a voice prime preceded a face target, analysis indicated no associative priming when a 3-s stimulus onset asynchrony (SOA) was used. However, when a relatively longer SOA was used, providing time for robust recognition of the prime, significant cross-modality priming emerged. These data are explored within the context of a unified account of face and voice recognition, which recognizes weaker voice processing than face processing. PMID:24387093

  4. The fusiform face area is not sufficient for face recognition: evidence from a patient with dense prosopagnosia and no occipital face area.

    PubMed

    Steeves, Jennifer K E; Culham, Jody C; Duchaine, Bradley C; Pratesi, Cristiana Cavina; Valyear, Kenneth F; Schindler, Igor; Humphrey, G Keith; Milner, A David; Goodale, Melvyn A

    2006-01-01

    We tested functional activation for faces in patient D.F., who following acquired brain damage has a profound deficit in object recognition based on form (visual form agnosia) and also prosopagnosia that is undocumented to date. Functional imaging demonstrated that, like our control observers, D.F. shows significantly more activation when passively viewing face compared to scene images in an area that is consistent with the fusiform face area (FFA) (p < 0.01). Control observers also show occipital face area (OFA) activation; D.F.'s lesions, however, appear to overlap the OFA bilaterally. We asked, given that D.F. shows FFA activation for faces, to what extent is she able to recognize faces? D.F. demonstrated a severe impairment in higher level face processing--she could not recognize face identity, gender or emotional expression. In contrast, she performed relatively normally on many face categorization tasks. D.F. can differentiate faces from non-faces given sufficient texture information and processing time, and she can do this independently of color and illumination information. D.F. can use configural information for categorizing faces when they are presented in an upright but not a sideways orientation, and given that she also cannot discriminate half-faces, she may rely on a spatially symmetric feature arrangement. Faces appear to be a unique category, which she can classify even when she has no advance knowledge that she will be shown face images. Together, these imaging and behavioral data support the importance of the integrity of a complex network of regions for face identification, including more than just the FFA--in particular the OFA, a region believed to be associated with low-level processing. PMID:16125741

  5. Face and Emotion Recognition in MCDD versus PDD-NOS

    ERIC Educational Resources Information Center

    Herba, Catherine M.; de Bruin, Esther; Althaus, Monika; Verheij, Fop; Ferdinand, Robert F.

    2008-01-01

    Previous studies indicate that Multiple Complex Developmental Disorder (MCDD) children differ from PDD-NOS and autistic children on a symptom level and on psychophysiological functioning. Children with MCDD (n = 21) and PDD-NOS (n = 62) were compared on two facets of social-cognitive functioning: identification of neutral faces and facial…

  6. Facilitating recognition of crowded faces with presaccadic attention

    PubMed Central

    Wolfe, Benjamin A.; Whitney, David

    2014-01-01

    In daily life, we make several saccades per second to objects we cannot normally recognize in the periphery due to visual crowding. While we are aware of the presence of these objects, we cannot identify them and may, at best, only know that an object is present at a particular location. The process of planning a saccade involves a presaccadic attentional component known to be critical for saccadic accuracy, but whether this or other presaccadic processes facilitate object identification as opposed to object detection—especially with high level natural objects like faces—is less clear. In the following experiments, we show that presaccadic information about a crowded face reduces the deleterious effect of crowding, facilitating discrimination of two emotional faces, even when the target face is never foveated. While accurate identification of crowded objects is possible in the absence of a saccade, accurate identification of a crowded object is considerably facilitated by presaccadic attention. Our results provide converging evidence for a selective increase in available information about high level objects, such as faces, at a presaccadic stage. PMID:24592233

  7. Face recognition using fuzzy integral and wavelet decomposition method.

    PubMed

    Kwak, Keun-Chang; Pedrycz, Witold

    2004-08-01

    In this paper, we develop a method for recognizing face images by combining wavelet decomposition, the Fisherface method, and the fuzzy integral. The proposed approach is comprised of four main stages. The first stage uses wavelet decomposition, which helps extract intrinsic features of face images. As a result of this decomposition, we obtain four subimages (namely approximation, horizontal, vertical, and diagonal detailed images). The second stage of the approach concerns the application of the Fisherface method to these four decompositions. The choice of the Fisherface method in this setting is motivated by its insensitivity to large variation in light direction, face pose, and facial expression. The last two stages are concerned with the aggregation of the individual classifiers by means of the fuzzy integral. Both the Sugeno and Choquet types of fuzzy integral are considered as aggregation methods. In the experiments we use n-fold cross-validation to assure high consistency of the produced classification outcomes. The experimental results obtained for the Chungbuk National University (CNU) and Yale University face databases reveal that the approach presented in this paper yields better classification performance in comparison to the results obtained by other classifiers. PMID:15462434
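
The four subimages named above come from a standard single-level 2-D wavelet decomposition. A minimal numpy sketch using the Haar wavelet (the simplest basis; the abstract does not say which wavelet the authors used) shows how a face image splits into approximation, horizontal, vertical, and diagonal detail bands:

```python
import numpy as np

def haar_decompose(img):
    """Single-level 2-D Haar wavelet decomposition.

    Returns the approximation (LL), horizontal (LH), vertical (HL),
    and diagonal (HH) subimages, each half the input size.
    """
    img = np.asarray(img, dtype=float)
    a = img[0::2, 0::2]  # top-left pixel of each 2x2 block
    b = img[0::2, 1::2]  # top-right
    c = img[1::2, 0::2]  # bottom-left
    d = img[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0   # approximation
    lh = (a + b - c - d) / 2.0   # horizontal detail
    hl = (a - b + c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

face = np.arange(64, dtype=float).reshape(8, 8)  # toy stand-in for a face image
ll, lh, hl, hh = haar_decompose(face)
print(ll.shape)  # (4, 4)
```

In the approach described, each of the four bands would then feed its own Fisherface classifier, and the four classifier outputs would be aggregated with a fuzzy integral.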

  8. Self-Face and Self-Body Recognition in Autism

    ERIC Educational Resources Information Center

    Gessaroli, Erica; Andreini, Veronica; Pellegri, Elena; Frassinetti, Francesca

    2013-01-01

    The advantage in responding to self vs. others' body and face-parts (the so called self-advantage) is considered to reflect the implicit access to the bodily self representation and has been studied in healthy and brain-damaged adults in previous studies. If the distinction of the self from others is a key aspect of social behaviour and is a…

  9. Semantic and visual determinants of face recognition in a prosopagnosic patient.

    PubMed

    Dixon, M J; Bub, D N; Arguin, M

    1998-05-01

    Prosopagnosia is the neuropathological inability to recognize familiar people by their faces. It can occur in isolation or can coincide with recognition deficits for other nonface objects. Often, patients whose prosopagnosia is accompanied by object recognition difficulties have more trouble identifying certain categories of objects relative to others. In previous research, we demonstrated that objects that shared multiple visual features and were semantically close posed severe recognition difficulties for a patient with temporal lobe damage. We now demonstrate that this patient's face recognition is constrained by these same parameters. The prosopagnosic patient ELM had difficulties pairing faces to names when the faces shared visual features and the names were semantically related (e.g., Tonya Harding, Nancy Kerrigan, and Josee Chouinard, three ice skaters). He made tenfold fewer errors when the exact same faces were associated with semantically unrelated people (e.g., singer Celine Dion, actress Betty Grable, and First Lady Hillary Clinton). We conclude that prosopagnosia and co-occurring category-specific recognition problems both stem from difficulties disambiguating the stored representations of objects that share multiple visual features and refer to semantically close identities or concepts. PMID:9869710

  10. When less is more: Impact of face processing ability on recognition of visually degraded faces.

    PubMed

    Royer, Jessica; Blais, Caroline; Gosselin, Frédéric; Duncan, Justin; Fiset, Daniel

    2015-10-01

    It is generally thought that faces are perceived as indissociable wholes. As a result, many assume that hiding large portions of the face by the addition of noise or by masking limits or qualitatively alters natural "expert" face processing by forcing observers to use atypical processing mechanisms. We addressed this question by measuring face processing abilities with whole faces and with Bubbles (Gosselin & Schyns, 2001), an extreme masking method thought by some to bias the observers toward the use of atypical processing mechanisms by limiting the use of whole-face strategies. We obtained a strong and negative correlation between individual face processing ability and the number of bubbles (r = -.79), and this correlation remained strong even after controlling for general visual/cognitive processing ability (rpartial = -.72). In other words, the better someone is at processing faces, the fewer facial parts they need to accurately carry out this task. Thus, contrary to what many researchers assume, face processing mechanisms appear to be quite insensitive to the visual impoverishment of the face stimulus. PMID:26168140
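
The Bubbles masking technique referenced above reveals a face only through a set of randomly placed Gaussian apertures. A hedged numpy sketch (the aperture count and width below are illustrative parameters, not values from the study):

```python
import numpy as np

def bubbles_mask(shape, n_bubbles, sigma, rng):
    """Random mask of Gaussian apertures ('bubbles') revealing parts of a face.

    Each bubble is a 2-D Gaussian of width sigma centred at a random pixel;
    overlapping bubbles are combined by taking the maximum.
    """
    h, w = shape
    yy, xx = np.mgrid[:h, :w]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        bubble = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * sigma ** 2))
        mask = np.maximum(mask, bubble)
    return mask  # multiply with the face image to reveal only the apertures
```

Multiplying this mask with a face image yields a stimulus in which only the bubbled regions are visible, which is how the number of bubbles needed for accurate recognition can be measured per observer.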

  11. Face Recognition System for Set-Top Box-Based Intelligent TV

    PubMed Central

    Lee, Won Oh; Kim, Yeong Gon; Hong, Hyung Gil; Park, Kang Ryoung

    2014-01-01

    Despite the prevalence of smart TVs, many consumers continue to use conventional TVs with supplementary set-top boxes (STBs) because of the high cost of smart TVs. However, because the processing power of a STB is quite low, the smart TV functionalities that can be implemented in a STB are very limited. Because of this, negligible research has been conducted regarding face recognition for conventional TVs with supplementary STBs, even though many such studies have been conducted with smart TVs. In terms of camera sensors, previous face recognition systems have used high-resolution cameras, cameras with high magnification zoom lenses, or camera systems with panning and tilting devices that can be used for face recognition from various positions. However, these cameras and devices cannot be used in intelligent TV environments because of limitations related to size and cost, and only small, low cost web-cameras can be used. The resulting face recognition performance is degraded because of the limited resolution and quality levels of the images. Therefore, we propose a new face recognition system for intelligent TVs in order to overcome the limitations associated with low resource set-top box and low cost web-cameras. We implement the face recognition system using a software algorithm that does not require special devices or cameras. Our research has the following four novelties: first, the candidate regions in a viewer's face are detected in an image captured by a camera connected to the STB via low processing background subtraction and face color filtering; second, the detected candidate regions of face are transmitted to a server that has high processing power in order to detect face regions accurately; third, in-plane rotations of the face regions are compensated based on similarities between the left and right half sub-regions of the face regions; fourth, various poses of the viewer's face region are identified using five templates obtained during the initial user

  12. Toward fast feature adaptation and localization for real-time face recognition systems

    NASA Astrophysics Data System (ADS)

    Zuo, Fei; de With, Peter H.

    2003-06-01

    In a home environment, video surveillance employing face detection and recognition is attractive for new applications. Facial feature (e.g. eyes and mouth) localization in the face is an essential task for face recognition because it constitutes an indispensable step for face geometry normalization. This paper presents a new and efficient feature localization approach for real-time personal surveillance applications with low-quality images. The proposed approach consists of three major steps: (1) self-adaptive iris tracing, which is preceded by a trace-point selection process with multiple initializations to overcome the local convergence problem, (2) eye structure verification using an eye template with limited deformation freedom, and (3) eye-pair selection based on a combination of metrics. We have tested our facial feature localization method on about 100 randomly selected face images from the AR database and 30 face images downloaded from the Internet. The results show that our approach achieves a correct detection rate of 96%. Since our eye-selection technique does not involve time-consuming deformation processes, it yields relatively fast processing. The proposed algorithm has been successfully applied to a real-time home video surveillance system and proven to be an effective and computationally efficient face normalization method preceding face recognition.

  13. Super resolution based face recognition: do we need training image set?

    NASA Astrophysics Data System (ADS)

    Al-Hassan, Nadia; Sellahewa, Harin; Jassim, Sabah A.

    2013-05-01

    This paper is concerned with face recognition under uncontrolled conditions, e.g., at-a-distance surveillance scenarios and post-riot forensics, whereby captured face images are severely degraded/blurred and of low resolution. This is a tough challenge due to many factors, including capturing conditions. We present the results of our investigations into recently developed Compressive Sensing (CS) theory to develop scalable face recognition schemes using a variety of overcomplete dictionaries that construct super-resolved face images from any input low-resolution degraded face image. We shall demonstrate that deterministic as well as non-deterministic dictionaries that do not involve the use of face image information, but satisfy some form of the Restricted Isometry Property used for CS, can achieve face recognition accuracy levels as good as, if not better than, those achieved by dictionaries proposed in the literature that are learned from face image databases using elaborate procedures. We shall elaborate on how this approach helps in crime fighting and terrorism.

  14. The cross-race effect in face recognition memory by bicultural individuals.

    PubMed

    Marsh, Benjamin U; Pezdek, Kathy; Ozery, Daphna Hausman

    2016-09-01

    Social-cognitive models of the cross-race effect (CRE) generally specify that cross-race faces are automatically categorized as an out-group, and that different encoding processes are then applied to same-race and cross-race faces, resulting in better recognition memory for same-race faces. We examined whether cultural priming moderates the cognitive categorization of cross-race faces. In Experiment 1, monoracial Latino-Americans, considered to have a bicultural self, were primed to focus on either a Latino or American cultural self and then viewed Latino and White faces. Latino-Americans primed as Latino exhibited higher recognition accuracy (A') for Latino than White faces; those primed as American exhibited higher recognition accuracy for White than Latino faces. In Experiment 2, as predicted, prime condition did not moderate the CRE in European-Americans. These results suggest that for monoracial biculturals, priming either of their cultural identities influences the encoding processes applied to same- and cross-race faces, thereby moderating the CRE. PMID:27219532

  15. Age-Related Differences in Brain Electrical Activity during Extended Continuous Face Recognition in Younger Children, Older Children and Adults

    ERIC Educational Resources Information Center

    Van Strien, Jan W.; Glimmerveen, Johanna C.; Franken, Ingmar H. A.; Martens, Vanessa E. G.; de Bruin, Eveline A.

    2011-01-01

    To examine the development of recognition memory in primary-school children, 36 healthy younger children (8-9 years old) and 36 healthy older children (11-12 years old) participated in an ERP study with an extended continuous face recognition task (Study 1). Each face of a series of 30 faces was shown randomly six times interspersed with…

  16. Adaptive weighted local textural features for illumination, expression, and occlusion invariant face recognition

    NASA Astrophysics Data System (ADS)

    Cui, Chen; Asari, Vijayan K.

    2014-03-01

    Biometric features such as fingerprints, iris patterns, and face features help to identify people and restrict access to secure areas by performing advanced pattern analysis and matching. Face recognition is one of the most promising biometric methodologies for human identification in a non-cooperative security environment. However, the recognition results obtained by face recognition systems are affected by several variations that may happen to the patterns in an unrestricted environment. As a result, several algorithms have been developed for extracting different facial features for face recognition. Due to the various possible challenges of data captured at different lighting conditions, viewing angles, facial expressions, and partial occlusions in natural environmental conditions, automatic face recognition still remains a difficult issue that needs to be resolved. In this paper, we propose a novel approach to tackling some of these issues by analyzing the local textural descriptions for facial feature representation. The textural information is extracted by an enhanced local binary pattern (ELBP) description of all the local regions of the face. The relationship of each pixel with respect to its neighborhood is extracted and employed to calculate the new representation. ELBP reconstructs a much better textural feature extraction vector from an original gray level image in different lighting conditions. The dimensionality of the texture image is reduced by principal component analysis performed on each local face region. Each low-dimensional vector representing a local region is then weighted based on the significance of the sub-region. The weight of each sub-region is determined by employing the local variance estimate of the respective region, which represents the significance of the region. The final facial textural feature vector is obtained by concatenating the reduced dimensional weight sets of all the modules (sub-regions) of the face image
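
The region-weighting step described above (weighting each sub-region by its local variance estimate) can be sketched in a few lines of numpy; the 4 x 4 grid is an illustrative choice, not necessarily the paper's:

```python
import numpy as np

def region_weights(img, grid=(4, 4)):
    """Per-region significance weights from local variance.

    The face image is split into grid[0] x grid[1] sub-regions and each
    region is weighted by its intensity variance, normalised to sum to 1.
    """
    img = np.asarray(img, dtype=float)
    gh, gw = grid
    bh, bw = img.shape[0] // gh, img.shape[1] // gw
    var = np.empty(grid)
    for i in range(gh):
        for j in range(gw):
            var[i, j] = img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].var()
    total = var.sum()
    if total == 0.0:  # completely flat image: fall back to uniform weights
        return np.full(grid, 1.0 / (gh * gw))
    return var / total
```

High-variance regions such as the eyes and mouth end up with large weights, while flat regions such as the cheeks contribute little to the final concatenated feature vector.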

  17. Local binary pattern based face recognition by estimation of facial distinctive information distribution

    NASA Astrophysics Data System (ADS)

    da, Bangyou; Sang, Nong

    2009-11-01

    We present a novel approach for face recognition by combining a local binary pattern (LBP)-based face descriptor and the distinctive information of faces. Several studies of psychophysics have shown that the eyes or mouth can be an important cue in human face perception, and the nose plays an insignificant role. This means that there exists a distinctive information distribution of faces. First, we give a quantitative estimation of the density for each pixel in a frontal face image by combining the Parzen-window approach and scale invariant feature transform detector, which is taken as the measure of the distinctive information of the faces. Second, we integrate the density function in the subwindow region of the face to obtain the weight set used in the LBP-based face descriptor to produce weighted chi-square statistics. As an elementary application of the estimation of distinctive information of faces, the proposed method is tested on the FERET FA/FB image sets and yields a recognition rate of 98.2% compared to the 97.3% produced by the method adopted by Ahonen, Hadid, and Pietikainen.
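
A minimal sketch of the two ingredients named above: basic 8-neighbour LBP codes and a weighted chi-square statistic between regional histograms. This is the textbook formulation, not the authors' exact implementation:

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour local binary pattern codes (no interpolation)."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    c = img[1:-1, 1:-1]                 # centre pixels (border excluded)
    codes = np.zeros(c.shape, dtype=int)
    # neighbours in a fixed clockwise order, each contributing one bit
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= ((n >= c).astype(int) << bit)
    return codes                        # values in 0..255

def weighted_chi2(h1, h2, w):
    """Weighted chi-square statistic between two sets of regional histograms.

    h1, h2: (regions x bins) histogram arrays; w: per-region weights.
    """
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    num = (h1 - h2) ** 2
    den = h1 + h2
    d = np.where(den > 0, num / np.maximum(den, 1e-12), 0.0)
    return float((np.asarray(w, dtype=float)[:, None] * d).sum())
```

In the method described, the per-region weights would come from the estimated distinctive-information density rather than being chosen by hand.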

  18. Effects of surface materials on polarimetric-thermal measurements: applications to face recognition.

    PubMed

    Short, Nathaniel J; Yuffa, Alex J; Videen, Gorden; Hu, Shuowen

    2016-07-01

    Materials, such as cosmetics, applied to the face can severely inhibit biometric face-recognition systems operating in the visible spectrum. These products are typically made up of materials having different spectral properties and color pigmentation that distorts the perceived shape of the face. The surface of the face emits thermal radiation, due to the living tissue beneath the surface of the skin. The emissivity of skin is approximately 0.99; in comparison, oil- and plastic-based materials, commonly found in cosmetics and face paints, have an emissivity range of 0.9-0.95 in the long-wavelength infrared part of the spectrum. Due to these properties, all three are good thermal emitters and have little impact on the heat transferred from the face. Polarimetric-thermal imaging provides additional details of the face and is also dependent upon the thermal radiation from the face. In this paper, we provide a theoretical analysis on the thermal conductivity of various materials commonly applied to the face using a metallic sphere. Additionally, we observe the impact of environmental conditions on the strength of the polarimetric signature and the ability to recover geometric details. Finally, we show how these materials degrade the performance of traditional face-recognition methods and provide an approach to mitigating this effect using polarimetric-thermal imaging. PMID:27409214

  19. Gradient feature matching for in-plane rotation invariant face sketch recognition

    NASA Astrophysics Data System (ADS)

    Alex, Ann Theja; Asari, Vijayan K.; Mathew, Alex

    2013-03-01

    Automatic recognition of face sketches is a challenging and interesting problem. An artist-drawn sketch is compared against a mugshot database to identify criminals. It is a very cumbersome task to manually compare images. This necessitates a pattern recognition system to perform the comparisons. Existing methods fall into two main categories: those that allow recognition across modalities, and those that require a sketch/photo synthesis step followed by comparison within a single modality. The synthesis-based methods demand substantial computing power owing to their high time and space complexity. Our method allows recognition across modalities. It uses the edge feature of a face sketch and face photo image to create a feature string called 'edge-string', which is a polar coordinate representation of the edge image. To generate a polar coordinate representation, we need a reference point and a reference line. Using the center point of the edge image as the reference point and a horizontal line as the reference line is the simplest solution. However, it cannot handle in-plane rotations. For this reason, we propose an approach for finding the reference line and the centroid point. The edge-strings of the face photo and face sketch are then compared using the Smith-Waterman algorithm for local string alignments. The face photo that gave the highest similarity score is the photo that matches the test face sketch input. The results on CUHK (Chinese University of Hong Kong) student dataset show the effectiveness of the proposed approach in face sketch recognition.
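
The Smith-Waterman local-alignment step used to compare edge-strings can be sketched directly from its recurrence. The match/mismatch/gap scores below are illustrative; the abstract does not state the scoring scheme used:

```python
def smith_waterman(s1, s2, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score between two symbol strings.

    H[i][j] holds the best score of any local alignment ending at
    s1[i-1], s2[j-1]; scores are clamped at zero so alignments can
    restart anywhere, which is what makes the alignment 'local'.
    """
    n, m = len(s1), len(s2)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    best = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = H[i - 1][j - 1] + (match if s1[i - 1] == s2[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

In the method described, the symbols would be quantised polar-coordinate entries of the two edge-strings rather than letters, and the photo whose edge-string attains the highest score is returned as the match.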

  20. Correlation based efficient face recognition and color change detection

    NASA Astrophysics Data System (ADS)

    Elbouz, M.; Alfalou, A.; Brosseau, C.; Alam, M. S.; Qasmi, S.

    2013-01-01

    Identifying the human face via correlation is a topic attracting widespread interest. At the heart of this technique lies the comparison of an unknown target image to a known reference database of images. However, the color information in the target image remains notoriously difficult to interpret. In this paper, we report a new technique which: (i) is robust against illumination change, (ii) offers discrimination ability to detect color change between faces having similar shape, and (iii) is specifically designed to detect red colored stains (i.e. facial bleeding). We adopt the Vanderlugt correlator (VLC) architecture with a segmented phase filter and we decompose the color target image using normalized red, green, and blue (RGB), and hue, saturation, and value (HSV) scales. We propose a new strategy to effectively utilize color information in signatures for further increasing the discrimination ability. The proposed algorithm has been found to be very efficient for discriminating face subjects with different skin colors, and those having color stains in different areas of the facial image.
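
The optical VanderLugt correlator compares images in the Fourier plane; digitally, the closely related phase-only correlation makes the idea concrete. A numpy sketch, offered as a software stand-in rather than the authors' optical segmented-filter setup:

```python
import numpy as np

def phase_correlation_peak(target, reference):
    """Peak of the phase-only correlation between two equally sized images.

    A sharp peak near 1.0 indicates the target matches the reference;
    unrelated images give a flat, low correlation surface.
    """
    F1 = np.fft.fft2(target)
    F2 = np.fft.fft2(reference)
    cross = F1 * np.conj(F2)
    cross /= np.maximum(np.abs(cross), 1e-12)  # keep phase only
    corr = np.fft.ifft2(cross).real
    return float(corr.max())
```

For the colour-discrimination part of the approach described, a correlation of this kind would be run per channel after decomposing the target image into its RGB and HSV components.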

  1. A novel window based method for approximating the Hausdorff in 3D range imagery.

    SciTech Connect

    Koch, Mark William

    2004-10-01

    Matching a set of 3D points to another set of 3D points is an important part of any 3D object recognition system. The Hausdorff distance is known for its robustness in the face of obscuration, clutter, and noise. We show how to approximate the 3D Hausdorff fraction with linear time complexity and quadratic space complexity. We empirically demonstrate that the approximation is very good when compared to actual Hausdorff distances.
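
For reference, the directed Hausdorff fraction being approximated can be written down in a few lines. This is the brute-force O(nm) definition; the paper's contribution is a windowed approximation that reduces this to linear time:

```python
import numpy as np

def hausdorff_fraction(model, scene, eps):
    """Directed Hausdorff fraction: the share of model points that lie
    within distance eps of some scene point (brute force, for clarity).

    model, scene: (n x 3) and (m x 3) arrays of 3D points.
    """
    model = np.asarray(model, dtype=float)
    scene = np.asarray(scene, dtype=float)
    # pairwise distances between every model point and every scene point
    d = np.linalg.norm(model[:, None, :] - scene[None, :, :], axis=-1)
    return float((d.min(axis=1) <= eps).mean())
```

A fraction near 1.0 means most of the model is supported by the scene data, which is what makes the measure tolerant of partial obscuration.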

  2. Image Generation Using Bidirectional Integral Features for Face Recognition with a Single Sample per Person

    PubMed Central

    Lee, Yonggeol; Lee, Minsik; Choi, Sang-Il

    2015-01-01

    In face recognition, most appearance-based methods require several images of each person to construct the feature space for recognition. However, in the real world it is difficult to collect multiple images per person, and in many cases there is only a single sample per person (SSPP). In this paper, we propose a method to generate new images with various illuminations from a single image taken under frontal illumination. Motivated by the integral image, which was developed for face detection, we extract the bidirectional integral feature (BIF) to obtain the characteristics of the illumination condition at the time of the picture being taken. The experimental results for various face databases show that the proposed method results in improved recognition performance under illumination variation. PMID:26414018
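
The integral image that motivates the bidirectional integral feature is a summed-area table: once built, any rectangular pixel sum costs only four lookups. A minimal numpy sketch:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y+1, :x+1]."""
    return np.cumsum(np.cumsum(np.asarray(img, dtype=float), axis=0), axis=1)

def block_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1+1, x0:x1+1] in O(1) using the integral image."""
    total = ii[y1, x1]
    if y0 > 0:
        total -= ii[y0 - 1, x1]
    if x0 > 0:
        total -= ii[y1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += ii[y0 - 1, x0 - 1]
    return float(total)
```

The bidirectional integral feature described in the paper builds on this idea by accumulating in more than one direction to characterise the illumination gradient across the face; the exact construction is not reproduced here.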

  3. Illumination-invariant face recognition with a contrast sensitive silicon retina

    SciTech Connect

    Buhmann, J.M.; Lades, M.; Eeckman, F.

    1993-11-29

    Changes in lighting conditions strongly affect the performance and reliability of computer vision systems. We report face recognition results under drastically changing lighting conditions for a computer vision system which concurrently uses a contrast sensitive silicon retina and a conventional, gain-controlled CCD camera. For both input devices the face recognition system employs an elastic matching algorithm with wavelet-based features to classify unknown faces. To assess the effect of analog on-chip preprocessing by the silicon retina, the CCD images have been digitally preprocessed with a bandpass filter to adjust the power spectrum. The silicon retina, with its ability to adjust sensitivity, increases the recognition rate up to 50 percent. These comparative experiments demonstrate that preprocessing with an analog VLSI silicon retina generates image data enriched with object-constant features.
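
The digital bandpass used here to mimic the retina's contrast sensitivity can be sketched as a difference-of-Gaussians filter applied in the Fourier domain. The centre/surround widths below are illustrative, not the paper's values:

```python
import numpy as np

def bandpass(img, sigma_c=1.0, sigma_s=3.0):
    """Difference-of-Gaussians bandpass applied in the Fourier domain.

    Subtracting a wide (surround) Gaussian low-pass from a narrow
    (centre) one removes the DC component and slow illumination
    gradients while keeping mid-frequency structure such as edges.
    """
    f = np.fft.fft2(np.asarray(img, dtype=float))
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    k2 = (2.0 * np.pi) ** 2 * (fy ** 2 + fx ** 2)       # squared spatial frequency
    H = np.exp(-k2 * sigma_c ** 2 / 2) - np.exp(-k2 * sigma_s ** 2 / 2)
    return np.fft.ifft2(f * H).real
```

Because the transfer function is zero at zero frequency, a uniformly lit region maps to zero output, which is the sense in which such preprocessing yields illumination-tolerant, "object-constant" features.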

  4. Sensor fusion III: 3-D perception and recognition; Proceedings of the Meeting, Boston, MA, Nov. 5-8, 1990

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1991-01-01

    The volume on data fusion from multiple sources discusses fusing multiple views, temporal analysis and 3D motion interpretation, sensor fusion and eye-to-hand coordination, and integration in human shape perception. Attention is given to surface reconstruction, statistical methods in sensor fusion, fusing sensor data with environmental knowledge, computational models for sensor fusion, and evaluation and selection of sensor fusion techniques. Topics addressed include the structure of a scene from two and three projections, optical flow techniques for moving target detection, tactical sensor-based exploration in a robotic environment, and the fusion of human and machine skills for remote robotic operations. Also discussed are K-nearest-neighbor concepts for sensor fusion, surface reconstruction with discontinuities, a sensor-knowledge-command fusion paradigm for man-machine systems, coordinating sensing and local navigation, and terrain map matching using multisensing techniques for applications to autonomous vehicle navigation.

  5. Using pH variations to improve the discrimination of wines by 3D front face fluorescence spectroscopy associated to Independent Components Analysis.

    PubMed

    Saad, Rita; Bouveresse, Delphine Jouan-Rimbaud; Locquet, Nathalie; Rutledge, Douglas N

    2016-06-01

    A wine's polyphenol composition is related to the grape variety it contains. These polyphenols play an essential role in its quality and may have a protective effect on human health. Their conjugated aromatic structure renders them fluorescent, which means that 3D front-face fluorescence spectroscopy could be a useful tool to differentiate among the grape varieties that characterize each wine. However, fluorescence spectra acquired simply at the natural pH of wine are not always sufficient to discriminate the wines. The structural changes in the polyphenols resulting from modifications in the pH induce significant changes in their fluorescence spectra, making it possible to more clearly separate different wines. Nine wines belonging to three different grape varieties (Shiraz, Cabernet Sauvignon, and Pinot Noir), from nine different producers, were analyzed over a range of pHs. Independent Components Analysis (ICA) was used to extract characteristic signals from the matrix of unfolded 3D front-face fluorescence spectra and showed that the introduction of pH as an additional parameter in the study of wine fluorescence improved the discrimination of wines. PMID:27130119
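
Before ICA can be applied, each sample's 3D excitation-emission matrix must be unfolded into a row vector, giving the samples x variables matrix the decomposition expects. A small numpy sketch of this preprocessing step (the column-centring is a common convention, not necessarily the authors'):

```python
import numpy as np

def unfold_eem(eems):
    """Unfold a stack of excitation-emission matrices into a 2-D data matrix.

    eems: array of shape (samples, n_excitation, n_emission).
    Returns a column-centred (samples, n_excitation * n_emission) matrix
    suitable as input to ICA or PCA.
    """
    eems = np.asarray(eems, dtype=float)
    n = eems.shape[0]
    X = eems.reshape(n, -1)          # flatten each EEM into one row
    return X - X.mean(axis=0)        # centre each variable before decomposition
```

In the study described, spectra acquired at several pH values would be stacked as additional rows (or concatenated variables) before extracting independent components.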

  6. Stereotype Priming in Face Recognition: Interactions between Semantic and Visual Information in Face Encoding

    ERIC Educational Resources Information Center

    Hills, Peter J.; Lewis, Michael B.; Honey, R. C.

    2008-01-01

    The accuracy with which previously unfamiliar faces are recognised is increased by the presentation of a stereotype-congruent occupation label [Klatzky, R. L., Martin, G. L., & Kane, R. A. (1982a). "Semantic interpretation effects on memory for faces." "Memory & Cognition," 10, 195-206; Klatzky, R. L., Martin, G. L., & Kane, R. A. (1982b).…

  7. Cross-age effect in recognition performance and memory monitoring for faces.

    PubMed

    Bryce, Margaret S; Dodson, Chad S

    2013-03-01

    The cross-age effect refers to the finding of better memory for own- than other-age faces. We examined 3 issues about this effect: (1) Does it extend to the ability to monitor the likely accuracy of memory judgments for young and old faces? (2) Does it apply to source information that is associated with young and old faces? And (3) what is a likely mechanism underlying the cross-age effect? In Experiment 1, young and older adults viewed young and old faces appearing in different contexts. Young adults exhibited a cross-age effect in their recognition of faces and in their memory-monitoring performance for these faces. Older adults, by contrast, showed no age-of-face effects. Experiment 2 examined whether young adults' cross-age effect depends on or is independent of encoding a mixture of young and old faces. Young adults encoded either a mixture of young and old faces, a set of all young faces, or a set of all old faces. In the mixed-list condition we replicated our finding of young adults' superior memory for own-age faces; in the pure-list conditions, however, there were absolutely no differences in performance between young and old faces. The fact that the pure-list design abolishes the cross-age effect supports social-cognitive theories of this phenomenon. PMID:23066807

  8. Face recognition via edge-based Gabor feature representation for plastic surgery-altered images

    NASA Astrophysics Data System (ADS)

    Chude-Olisah, Chollette C.; Sulong, Ghazali; Chude-Okonkwo, Uche A. K.; Hashim, Siti Z. M.

    2014-12-01

    Plastic surgery procedures on the face introduce skin texture variations between images of the same person (intra-subject), thereby making the task of face recognition more difficult than in the normal scenario. Usually, in contemporary face recognition systems, the original gray-level face image is used as input to the Gabor descriptor, which translates to encoding some texture properties of the face image. The texture-encoding process significantly degrades the performance of such systems in the case of plastic surgery due to the presence of surgically induced intra-subject variations. Based on the proposition that the shape of significant facial components such as eyes, nose, eyebrow, and mouth remains unchanged after plastic surgery, this paper employs an edge-based Gabor feature representation approach for the recognition of surgically altered face images. We use the edge information, which is dependent on the shapes of the significant facial components, to address the plastic surgery-induced texture variation problems. To ensure that the significant facial components represent useful edge information with little or no false edges, a simple illumination normalization technique is proposed for preprocessing. Gabor wavelet is applied to the edge image to accentuate the uniqueness of the significant facial components for discriminating among different subjects. The performance of the proposed method is evaluated on the Georgia Tech (GT) and the Labeled Faces in the Wild (LFW) databases with illumination and expression problems, and the plastic surgery database with texture changes. Results show that the proposed edge-based Gabor feature representation approach is robust against plastic surgery-induced face variations amidst expression and illumination problems and outperforms the existing plastic surgery face recognition methods reported in the literature.
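The pipeline this abstract describes (edge extraction first, then Gabor filtering of the edge map rather than the raw gray levels) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the Sobel edge detector, the 9x9 kernel size, and all filter parameters here are illustrative assumptions.

```python
import numpy as np

def convolve2d(img, k):
    # 'same'-size convolution via zero padding (minimal, for illustration)
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * k[::-1, ::-1])
    return out

def sobel_edges(img):
    # edge magnitude, meant to capture the shape of facial components
    gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gy = gx.T
    return np.hypot(convolve2d(img, gx), convolve2d(img, gy))

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0):
    # real part of a Gabor filter at orientation theta
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(xr**2 + (y * np.cos(theta) - x * np.sin(theta))**2)
                  / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

face = np.random.default_rng(0).random((32, 32))      # stand-in face image
edges = sobel_edges(face)                              # step 1: edge map
# step 2: Gabor responses computed on the edge image, not the gray levels
features = [np.abs(convolve2d(edges, gabor_kernel(theta=t)))
            for t in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
```

In practice a full filter bank (several scales as well as orientations) would be used, but the key design choice from the abstract is visible: texture changes introduced by surgery perturb the gray-level input, while the edge map feeding the Gabor stage is driven mainly by component shape.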

  9. An in-depth cognitive examination of individuals with superior face recognition skills.

    PubMed

    Bobak, Anna K; Bennetts, Rachel J; Parris, Benjamin A; Jansari, Ashok; Bate, Sarah

    2016-09-01

    Previous work has reported the existence of "super-recognisers" (SRs), or individuals with extraordinary face recognition skills. However, the precise underpinnings of this ability have not yet been investigated. In this paper we examine (a) the face-specificity of super recognition, (b) perception of facial identity in SRs, (c) whether SRs present with enhancements in holistic processing and (d) the consistency of these findings across different SRs. A detailed neuropsychological investigation into six SRs indicated domain-specificity in three participants, with some evidence of enhanced generalised visuo-cognitive or socio-emotional processes in the remaining individuals. While superior face-processing skills were restricted to face memory in three of the SRs, enhancements to facial identity perception were observed in the others. Notably, five of the six participants showed at least some evidence of enhanced holistic processing. These findings indicate cognitive heterogeneity in the presentation of superior face recognition, and have implications for our theoretical understanding of the typical face-processing system and the identification of superior face-processing skills in applied settings. PMID:27344238

  10. Development of holistic vs. featural processing in face recognition

    PubMed Central

    Nakabayashi, Kazuyo; Liu, Chang Hong

    2014-01-01

    According to a classic view developed by Carey and Diamond (1977), young children process faces in a piecemeal fashion before adult-like holistic processing starts to emerge at the age of around 10 years. This is known as the encoding switch hypothesis. Since then, a growing body of studies have challenged the theory. This article will provide a critical appraisal of this literature, followed by an analysis of some more recent developments. We will conclude, quite contrary to the classical view, that holistic processing is not only present in early child development, but could even precede the development of part-based processing. PMID:25368565

  11. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.

    PubMed

    Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina

    2015-07-01

    It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as the active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Secondly, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line with the 'auditory-visual view'.

  12. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    NASA Astrophysics Data System (ADS)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems in the static as well as the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks, and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. Face recognition refers to identifying a person from facial features; principal component analysis resembles factor analysis in that it extracts the principal components of an image. Principal component analysis suffers from some drawbacks, mainly poor discriminatory power and, in particular, the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the face images in both the spatial and frequency domains. The experimental results show that this face recognition method achieves a significant improvement in recognition rate as well as better computational efficiency.
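The combination described above (wavelet decomposition to shrink the input, then PCA for representation and nearest-neighbour classification) can be sketched in a few lines of NumPy. This is not the paper's MATLAB code; the one-level Haar approximation, the toy 16x16 images, and the choice of 4 components are illustrative assumptions.

```python
import numpy as np

def haar_approx(img):
    # one-level 2D Haar decomposition: keep the low-frequency (LL) subband,
    # which quarters the data fed to PCA and suppresses high-frequency noise
    img = img[:img.shape[0] // 2 * 2, :img.shape[1] // 2 * 2]
    return (img[0::2, 0::2] + img[0::2, 1::2]
            + img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def pca_fit(X, n_components):
    # rows of X are flattened LL subbands; principal axes via SVD,
    # avoiding an explicit (and expensive) eigendecomposition of X.T @ X
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

rng = np.random.default_rng(1)
gallery = rng.random((10, 16, 16))            # 10 stand-in "face" images
X = np.stack([haar_approx(f).ravel() for f in gallery])
mean, basis = pca_fit(X, 4)
coords = (X - mean) @ basis.T                 # 4-D signature per face

# identify a probe by nearest neighbour in the reduced space
probe = haar_approx(gallery[3]).ravel()
q = (probe - mean) @ basis.T
match = int(np.argmin(np.linalg.norm(coords - q, axis=1)))
# the probe is gallery image 3, so the nearest signature is index 3
```

The wavelet step is what delivers the computational gain the abstract claims: PCA runs on 64-dimensional LL vectors here rather than the 256 raw pixels, and the ratio grows quadratically with image size.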

  13. Effects of facial emotion recognition remediation on visual scanning of novel face stimuli.

    PubMed

    Marsh, Pamela J; Luckett, Gemma; Russell, Tamara; Coltheart, Max; Green, Melissa J

    2012-11-01

    Previous research shows that emotion recognition in schizophrenia can be improved with targeted remediation that draws attention to important facial features (eyes, nose, mouth). Moreover, the effects of training have been shown to last for up to one month after training. The aim of this study was to investigate whether improved emotion recognition of novel faces is associated with concomitant changes in visual scanning of these same novel facial expressions. Thirty-nine participants with schizophrenia received emotion recognition training using Ekman's Micro-Expression Training Tool (METT), with emotion recognition and visual scanpath (VSP) recordings to face stimuli collected simultaneously. Baseline ratings of interpersonal and cognitive functioning were also collected from all participants. Post-METT training, participants showed changes in foveal attention to the features of facial expressions of emotion not used in METT training, which were generally consistent with the information about important features from the METT. In particular, there were changes in how participants looked at the features of facial expressions of the emotions surprise, disgust, fear, happiness, and neutral, demonstrating that improved emotion recognition is paralleled by changes in the way participants with schizophrenia viewed novel facial expressions of emotion. However, there were overall decreases in foveal attention to sad and neutral faces that indicate more intensive instruction might be needed for these faces during training. Most importantly, the evidence shows that participant gender may affect training outcomes. PMID:22959743

  14. Toward a unified model of face and object recognition in the human visual system

    PubMed Central

    Wallis, Guy

    2013-01-01

    Our understanding of the mechanisms and neural substrates underlying visual recognition has made considerable progress over the past 30 years. During this period, accumulating evidence has led many scientists to conclude that objects and faces are recognised in fundamentally distinct ways, and in fundamentally distinct cortical areas. In the psychological literature, in particular, this dissociation has led to a palpable disconnect between theories of how we process and represent the two classes of object. This paper follows a trend in part of the recognition literature to try to reconcile what we know about these two forms of recognition by considering the effects of learning. Taking a widely accepted, self-organizing model of object recognition, this paper explains how such a system is affected by repeated exposure to specific stimulus classes. In so doing, it explains how many aspects of recognition generally regarded as unusual to faces (holistic processing, configural processing, sensitivity to inversion, the other-race effect, the prototype effect, etc.) are emergent properties of category-specific learning within such a system. Overall, the paper describes how a single model of recognition learning can and does produce the seemingly very different types of representation associated with faces and objects. PMID:23966963

  15. Oxytocin eliminates the own-race bias in face recognition memory.

    PubMed

    Blandón-Gitlin, Iris; Pezdek, Kathy; Saldivar, Sesar; Steelman, Erin

    2014-09-11

    The neuropeptide Oxytocin influences a number of social behaviors, including processing of faces. We examined whether Oxytocin facilitates the processing of out-group faces and reduces the own-race bias (ORB). The ORB is a robust phenomenon characterized by poorer recognition memory for other-race faces compared to same-race faces. In Experiment 1, participants received intranasal solutions of Oxytocin or placebo prior to viewing White and Black faces. On a subsequent recognition test, whereas in the placebo condition the same-race faces were better recognized than other-race faces, in the Oxytocin condition Black and White faces were equally well recognized, effectively eliminating the ORB. In Experiment 2, Oxytocin was administered after the study phase. The ORB resulted, but Oxytocin did not significantly reduce the effect. This study is the first to show that Oxytocin can enhance face memory of out-group members and underscores the importance of social encoding mechanisms underlying the own-race bias. This article is part of a Special Issue entitled Oxytocin and Social Behav. PMID:23872107

  16. Recognition memory for distractor faces depends on attentional load at exposure.

    PubMed

    Jenkins, Rob; Lavie, Nilli; Driver, Jon

    2005-04-01

    Incidental recognition memory for faces previously exposed as task-irrelevant distractors was assessed as a function of the attentional load of an unrelated task performed on superimposed letter strings at exposure. In Experiment 1, subjects were told to ignore the faces and either to judge the color of the letters (low load) or to search for an angular target letter among other angular letters (high load). A surprise recognition memory test revealed that despite the irrelevance of all faces at exposure, those exposed under low-load conditions were later recognized, but those exposed under high-load conditions were not. Experiment 2 found a similar pattern when both the high- and low-load tasks required shape judgments for the letters but made differing attentional demands. Finally, Experiment 3 showed that high load in a nonface task can significantly reduce even immediate recognition of a fixated face from the preceding trial. These results demonstrate that load in a nonface domain (e.g., letter shape) can reduce face recognition, in accord with Lavie's load theory. In addition to their theoretical impact, these results may have practical implications for eyewitness testimony. PMID:16082812

  17. Is that me or my twin? Lack of self-face recognition advantage in identical twins.

    PubMed

    Martini, Matteo; Bufalari, Ilaria; Stazi, Maria Antonietta; Aglioti, Salvatore Maria

    2015-01-01

    Despite the increasing interest in twin studies and the stunning amount of research on face recognition, the ability of adult identical twins to discriminate their own faces from those of their co-twins has been scarcely investigated. One's own face is the most distinctive feature of the bodily self, and people typically show a clear advantage in recognizing their own face even more than other very familiar identities. Given the very high level of resemblance of their faces, monozygotic twins represent a unique model for exploring self-face processing. Herein we examined the ability of monozygotic twins to distinguish their own face from the face of their co-twin and of a highly familiar individual. Results show that twins equally recognize their own face and their twin's face. This lack of self-face advantage was negatively predicted by how much they felt physically similar to their co-twin and by their anxious or avoidant attachment style. We speculate that in monozygotic twins, the visual representation of the self-face overlaps with that of the co-twin. Thus, to distinguish the self from the co-twin, monozygotic twins have to rely much more than control participants on the multisensory integration processes upon which the sense of bodily self is based. Moreover, in keeping with the notion that attachment style influences perception of self and significant others, we propose that the observed self/co-twin confusion may depend upon insecure attachment. PMID:25853249

  19. A Kernel Gabor-Based Weighted Region Covariance Matrix for Face Recognition

    PubMed Central

    Qin, Huafeng; Qin, Lan; Xue, Lian; Li, Yantao

    2012-01-01

    This paper proposes a novel image region descriptor for face recognition, named kernel Gabor-based weighted region covariance matrix (KGWRCM). As different facial parts differ in how effective they are at characterizing and recognizing faces, we construct a weighting matrix by computing the similarity of each pixel within a face sample to emphasize features. We then incorporate the weighting matrices into a region covariance matrix, named weighted region covariance matrix (WRCM), to obtain the discriminative features of faces for recognition. Finally, to further preserve discriminative features in higher dimensional space, we develop the kernel Gabor-based weighted region covariance matrix (KGWRCM). Experimental results show that the KGWRCM outperforms other algorithms including the kernel Gabor-based region covariance matrix (KGCRM). PMID:22969351
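The region covariance matrix at the core of this descriptor family is simple to compute: each pixel in a region is mapped to a feature vector, and the region is summarized by the covariance of those vectors. The sketch below is a plain (unweighted, non-kernel) region covariance, not the authors' KGWRCM; the five-feature choice of position, intensity, and gradient magnitudes is a common illustrative baseline, not taken from the paper.

```python
import numpy as np

def region_covariance(region):
    # per-pixel feature vector: (x, y, intensity, |dI/dx|, |dI/dy|)
    h, w = region.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    dy, dx = np.gradient(region.astype(float))
    F = np.stack([x.ravel(), y.ravel(), region.astype(float).ravel(),
                  np.abs(dx).ravel(), np.abs(dy).ravel()])
    # covariance over all pixels: a fixed-size 5x5 descriptor regardless
    # of how large the region is
    return np.cov(F)

face_region = np.random.default_rng(2).random((24, 24))  # stand-in region
C = region_covariance(face_region)
```

The appeal of the descriptor is visible in the shapes: any region, whatever its size, collapses to the same small symmetric matrix, which makes regions directly comparable. The paper's extensions replace the raw pixel features with Gabor responses, weight pixels by their estimated importance, and compare the matrices in a kernel-induced space.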

  20. Differential outcomes training improves face recognition memory in children and in adults with Down syndrome.

    PubMed

    Esteban, Laura; Plaza, Victoria; López-Crespo, Ginesa; Vivas, Ana B; Estévez, Angeles F

    2014-06-01

    Previous studies have demonstrated that the differential outcomes procedure (DOP), which involves paring a unique reward with a specific stimulus, enhances discriminative learning and memory performance in several populations. The present study aimed to further investigate whether this procedure would improve face recognition memory in 5- and 7-year-old children (Experiment 1) and adults with Down syndrome (Experiment 2). In a delayed matching-to-sample task, participants had to select the previously shown face (sample stimulus) among six alternatives faces (comparison stimuli) in four different delays (1, 5, 10, or 15s). Participants were tested in two conditions: differential, where each sample stimulus was paired with a specific outcome; and non-differential outcomes, where reinforcers were administered randomly. The results showed a significantly better face recognition in the differential outcomes condition relative to the non-differential in both experiments. Implications for memory training programs and future research are discussed. PMID:24713518

  2. Preliminary study of statistical pattern recognition-based coin counterfeit detection by means of high resolution 3D scanners

    NASA Astrophysics Data System (ADS)

    Leich, Marcus; Kiltz, Stefan; Krätzer, Christian; Dittmann, Jana; Vielhauer, Claus

    2011-03-01

    According to the European Commission around 200,000 counterfeit Euro coins are removed from circulation every year. While approaches exist to automatically detect these coins, satisfying error rates are usually only reached for low quality forgeries, so-called "local classes". High-quality minted forgeries ("common classes") pose a problem for these methods as well as for trained humans. This paper presents a first approach for statistical analysis of coins based on high resolution 3D data acquired with a chromatic white light sensor. The goal of this analysis is to determine whether two coins are of common origin. The test set for these initial investigations consists of 62 coins from not more than five different sources. The analysis is based on the assumption that, apart from markings caused by wear such as scratches and residue consisting of grease and dust, coins of common origin have a more similar height field than coins from different mints. First results suggest that the selected approach is heavily affected by wear such as dents and scratches, and further research is required to eliminate this influence. A course for future work is outlined.

  3. Uncorrelated regularized local Fisher discriminant analysis for face recognition

    NASA Astrophysics Data System (ADS)

    Wang, Zhan; Ruan, Qiuqi; An, Gaoyun

    2014-07-01

    A local Fisher discriminant analysis can work well for a multimodal problem. However, it often suffers from the undersampled problem, which makes the local within-class scatter matrix singular. We develop a supervised discriminant analysis technique called uncorrelated regularized local Fisher discriminant analysis for image feature extraction. In this technique, the local within-class scatter matrix is approximated by a full-rank matrix that not only solves the undersampled problem but also mitigates the adverse impact of small and zero eigenvalues. Statistically uncorrelated features are obtained to remove redundancy. A trace ratio criterion and the corresponding iterative algorithm are employed to globally solve the objective function. Experimental results on four well-known face databases indicate that our proposed method is effective and outperforms conventional dimensionality reduction methods.
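The undersampled problem and its regularization fix can be demonstrated concretely. With n samples in d dimensions and n < d, the within-class scatter matrix is rank-deficient; shrinking it toward a scaled identity restores full rank. The sketch below shows one standard shrinkage form, not the paper's exact approximation or its trace ratio solver; the shrinkage weight alpha is an illustrative assumption.

```python
import numpy as np

def regularized_within_scatter(X, y, alpha=0.1):
    # within-class scatter; shrinkage toward a scaled identity yields a
    # full-rank (positive definite) matrix even when n_samples < n_features
    d = X.shape[1]
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c] - X[y == c].mean(axis=0)
        Sw += Xc.T @ Xc
    return (1 - alpha) * Sw + alpha * (np.trace(Sw) / d) * np.eye(d)

rng = np.random.default_rng(3)
X = rng.random((6, 20))            # 6 samples in 20-D: undersampled (6 < 20)
y = np.array([0, 0, 1, 1, 2, 2])

# raw scatter has rank at most n_samples - n_classes = 3 (singular);
# the regularized version is full rank, so Fisher-style objectives that
# invert it become well posed
Sw = regularized_within_scatter(X, y)
rank = np.linalg.matrix_rank(Sw)
```

The scaling by trace(Sw)/d keeps the identity term on the same magnitude as the data-driven term, so a single alpha works across feature scalings.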

  4. Contribution of Bodily and Gravitational Orientation Cues to Face and Letter Recognition.

    PubMed

    Barnett-Cowan, Michael; Snow, Jacqueline C; Culham, Jody C

    2015-01-01

    Sensory information provided by the vestibular system is crucial in cognitive processes such as the ability to recognize objects. The orientation at which objects are most easily recognized--the perceptual upright (PU)--is influenced by body orientation with respect to gravity as detected from the somatosensory and vestibular systems. To date, the influence of these sensory cues on the PU has been measured using a letter recognition task. Here we assessed whether gravitational influences on letter recognition also extend to human face recognition. 13 right-handed observers were positioned in four body orientations (upright, left-side-down, right-side-down, supine) and visually discriminated ambiguous characters ('p'-from-'d'; 'i'-from-'!') and ambiguous faces used in popular visual illusions ('young woman'-from-'old woman'; 'grinning man'-from-'frowning man') in a forced-choice paradigm. The two transition points (e.g., 'p-to-d' and 'd-to-p'; 'young woman-to-old woman' and 'old woman-to-young woman') were fit with a sigmoidal psychometric function and the average of these transitions was taken as the PU for each stimulus category. The results show that both faces and letters are more influenced by body orientation than gravity. However, faces are more optimally recognized when closer in alignment with body orientation than letters--which are more influenced by gravity. Our results indicate that the brain does not utilize a common representation of upright that governs recognition of all object categories. Distinct areas of ventro-temporal cortex that represent faces and letters may weight bodily and gravitational cues differently--possibly to facilitate the specific demands of face and letter recognition. PMID:26595950

  5. A Lack of Sexual Dimorphism in Width-to-Height Ratio in White European Faces Using 2D Photographs, 3D Scans, and Anthropometry

    PubMed Central

    Kramer, Robin S. S.; Jones, Alex L.; Ward, Robert

    2012-01-01

    Facial width-to-height ratio has received a great deal of attention in recent research. Evidence from human skulls suggests that males have a larger relative facial width than females, and that this sexual dimorphism is an honest signal of masculinity, aggression, and related traits. However, evidence that this measure is sexually dimorphic in faces, rather than skulls, is surprisingly weak. We therefore investigated facial width-to-height ratio in three White European samples using three different methods of measurement: 2D photographs, 3D scans, and anthropometry. By measuring the same individuals with multiple methods, we demonstrated high agreement across all measures. However, we found no evidence of sexual dimorphism in the face. In our third study, we also found a link between facial width-to-height ratio and body mass index for both males and females, although this relationship did not account for the lack of dimorphism in our sample. While we showed sufficient power to detect differences between male and female width-to-height ratio, our results failed to support the general hypothesis of sexual dimorphism in the face. PMID:22880088

  6. Sparsity preserving discriminative learning with applications to face recognition

    NASA Astrophysics Data System (ADS)

    Ren, Yingchun; Wang, Zhicheng; Chen, Yufei; Shan, Xiaoying; Zhao, Weidong

    2016-01-01

    The extraction of effective features is extremely important for understanding the intrinsic structure hidden in high-dimensional data. In recent years, sparse representation models have been widely used in feature extraction. A supervised learning method, called sparsity preserving discriminative learning (SPDL), is proposed. SPDL, which attempts to preserve the sparse representation structure of the data and simultaneously maximize the between-class separability, can be regarded as a combiner of manifold learning and sparse representation. More specifically, SPDL first creates a concatenated dictionary by class-wise principal component analysis decompositions and learns the sparse representation structure of each sample under the constructed dictionary using the least squares method. Second, a local between-class separability function is defined to characterize the scatter of the samples in the different submanifolds. Then, SPDL integrates the learned sparse representation information with the local between-class relationship to construct a discriminant function. Finally, the proposed method is transformed into a generalized eigenvalue problem. Extensive experimental results on several popular face databases demonstrate the effectiveness of the proposed approach.
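The first two stages the abstract names (a dictionary concatenated from class-wise PCA decompositions, then least-squares representation coefficients for each sample) can be sketched directly. This is a minimal illustration of those two steps only, not the full SPDL method; the dictionary size k and the toy data dimensions are assumptions.

```python
import numpy as np

def classwise_pca_dictionary(X, y, k=2):
    # concatenate the top-k principal directions of each class into one
    # dictionary, so every atom carries class structure
    atoms = []
    for c in np.unique(y):
        Xc = X[y == c] - X[y == c].mean(axis=0)
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        atoms.append(vt[:k])
    return np.vstack(atoms).T          # shape: d x (k * n_classes)

rng = np.random.default_rng(4)
X = rng.random((12, 8))                # 12 samples, 8-D features
y = np.repeat([0, 1, 2], 4)            # 3 classes of 4 samples each

D = classwise_pca_dictionary(X, y)     # 8 x 6 dictionary (k=2, 3 classes)

# representation coefficients of every sample under D via least squares,
# as in the abstract; a sparsity penalty would be layered on top of this
coef, *_ = np.linalg.lstsq(D, X.T, rcond=None)
recon = (D @ coef).T                   # reconstructions from the dictionary
```

In the full method these coefficients feed a graph-style preservation term that is traded off against a local between-class separability term, and the combined objective reduces to a generalized eigenvalue problem.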

  7. Emotional Faces in Context: Age Differences in Recognition Accuracy and Scanning Patterns

    PubMed Central

    Noh, Soo Rim; Isaacowitz, Derek M.

    2014-01-01

    While age-related declines in facial expression recognition are well documented, previous research relied mostly on isolated faces devoid of context. We investigated the effects of context on age differences in recognition of facial emotions and in visual scanning patterns of emotional faces. While their eye movements were monitored, younger and older participants viewed facial expressions (i.e., anger, disgust) in contexts that were emotionally congruent, incongruent, or neutral to the facial expression to be identified. Both age groups had highest recognition rates of facial expressions in the congruent context, followed by the neutral context, and recognition rates in the incongruent context were worst. These context effects were more pronounced for older adults. Compared to younger adults, older adults exhibited a greater benefit from congruent contextual information, regardless of facial expression. Context also influenced the pattern of visual scanning characteristics of emotional faces in a similar manner across age groups. In addition, older adults initially attended more to context overall. Our data highlight the importance of considering the role of context in understanding emotion recognition in adulthood. PMID:23163713

  8. Emotional face recognition deficit in amnestic patients with mild cognitive impairment: behavioral and electrophysiological evidence

    PubMed Central

    Yang, Linlin; Zhao, Xiaochuan; Wang, Lan; Yu, Lulu; Song, Mei; Wang, Xueyi

    2015-01-01

    Amnestic mild cognitive impairment (MCI) has been conceptualized as a transitional stage between healthy aging and Alzheimer’s disease. Thus, understanding emotional face recognition deficit in patients with amnestic MCI could be useful in determining progression of amnestic MCI. The purpose of this study was to investigate the features of emotional face processing in amnestic MCI by using event-related potentials (ERPs). Patients with amnestic MCI and healthy controls performed a face recognition task, giving old/new responses to previously studied and novel faces with different emotional messages as the stimulus material. Using the learning-recognition paradigm, the experiments were divided into two steps, i.e., a learning phase and a test phase. ERPs were analyzed on electroencephalographic recordings. The behavior data indicated high emotion classification accuracy for patients with amnestic MCI and for healthy controls. The mean percentage of correct classifications was 81.19% for patients with amnestic MCI and 96.46% for controls. Our ERP data suggest that patients with amnestic MCI were still able to undertake personalizing processing for negative faces, but not for neutral or positive faces, in the early frontal processing stage. In the early time window, no differences in frontal old/new effect were found between patients with amnestic MCI and normal controls. However, in the late time window, the three types of stimuli did not elicit any old/new parietal effects in patients with amnestic MCI, suggesting their recollection was impaired. This impairment may be closely associated with amnestic MCI disease. We conclude from our data that face recognition processing and emotional memory is impaired in patients with amnestic MCI. Such damage mainly occurred in the early coding stages. In addition, we found that patients with amnestic MCI had difficulty in post-processing of positive and neutral facial emotions. PMID:26347065

  9. A Smile Enhances 3-Month-Olds' Recognition of an Individual Face

    ERIC Educational Resources Information Center

    Turati, Chiara; Montirosso, Rosario; Brenna, Viola; Ferrara, Veronica; Borgatti, Renato

    2011-01-01

    Recent studies demonstrated that in adults and children recognition of face identity and facial expression mutually interact (Bate, Haslam, & Hodgson, 2009; Spangler, Schwarzer, Korell, & Maier-Karius, 2010). Here, using a familiarization paradigm, we explored the relation between these processes in early infancy, investigating whether 3-month-old…

  10. Cultural In-Group Advantage: Emotion Recognition in African American and European American Faces and Voices

    ERIC Educational Resources Information Center

    Wickline, Virginia B.; Bailey, Wendy; Nowicki, Stephen

    2009-01-01

    The authors explored whether there were in-group advantages in emotion recognition of faces and voices by culture or geographic region. Participants were 72 African American students (33 men, 39 women), 102 European American students (30 men, 72 women), 30 African international students (16 men, 14 women), and 30 European international students…

  11. A Normed Study of Face Recognition in Autism and Related Disorders.

    ERIC Educational Resources Information Center

    Klin, Ami; Sparrow, Sara S.; de Bildt, Annelies; Cicchetti, Domenic V.; Cohen, Donald J.; Volkmar, Fred R.

    1999-01-01

    This study used a well-normed task of face recognition with 102 young children with autism, pervasive developmental disorder (PDD) not otherwise specified, and non-PDD disorders (mental retardation and language disorders) matched for chronological age and either verbal or nonverbal mental age. Autistic subjects exhibited pronounced deficits in…

  12. Kernel-based discriminant image filter learning: application in face recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Lingchen; Wei, Sui; Qu, Lei

    2014-11-01

    The extraction of discriminative and robust features is a crucial issue in pattern recognition and classification. In this paper, we propose a kernel-based discriminant image filter learning method (KDIFL) for local feature enhancement and demonstrate its superiority in the application of face recognition. Instead of designing the image filter in a handcrafted or analytical way, we propose to learn the image filter so that, after filtering, the between-class difference is amplified and the within-class difference is attenuated, thus facilitating subsequent recognition. During filter learning, the kernel trick is employed to cope with the nonlinear feature-space problem caused by expression, pose, illumination, and so on. We show that the proposed filter generalizes well and can be concatenated with classic feature descriptors (e.g., LBP) to further increase the discriminability of the extracted features. Our extensive experiments on the Yale, ORL, and AR face databases validate the effectiveness and robustness of the proposed method.
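    A fully kernelized version is beyond the scope of this record, but the discriminant objective can be sketched in a simplified linear form: learn a small image filter whose per-image responses maximize between-class scatter relative to within-class scatter (a Fisher-style criterion). The patch size, the regularization constant, and the use of mean patch responses per image are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def extract_patches(img, r=1):
    """All (2r+1) x (2r+1) patches of img, flattened to rows."""
    h, w = img.shape
    out = []
    for y in range(r, h - r):
        for x in range(r, w - r):
            out.append(img[y - r:y + r + 1, x - r:x + r + 1].ravel())
    return np.array(out)

def learn_discriminant_filter(images, labels, r=1, reg=1e-3):
    """Learn a linear image filter with a Fisher-style criterion.

    Simplified linear stand-in for the kernelized method: the filter w
    maximizes between-class scatter of per-image mean patch responses
    relative to within-class scatter, via a regularized generalized
    eigenproblem.
    """
    d = (2 * r + 1) ** 2
    feats = np.array([extract_patches(im, r).mean(axis=0) for im in images])
    mu = feats.mean(axis=0)
    Sb = np.zeros((d, d))   # between-class scatter
    Sw = np.zeros((d, d))   # within-class scatter
    for c in np.unique(labels):
        Fc = feats[labels == c]
        mc = Fc.mean(axis=0)
        Sb += len(Fc) * np.outer(mc - mu, mc - mu)
        Sw += (Fc - mc).T @ (Fc - mc)
    # leading eigenvector of (Sw + reg*I)^-1 Sb gives the filter weights
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw + reg * np.eye(d), Sb))
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w.reshape(2 * r + 1, 2 * r + 1)
```

    Filtering an image with the learned kernel (e.g., via 2-D convolution) would then precede descriptor extraction, mirroring the enhancement step the abstract describes.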

  13. Study on local Gabor binary patterns for face representation and recognition

    NASA Astrophysics Data System (ADS)

    Ge, Wei; Han, Chunling; Quan, Wei

    2015-12-01

    Recently, Local Binary Patterns (LBP) have received much attention in face representation and recognition. The original LBP operator captures spatial structure information, essentially the edge and angle features of local facial regions, which are important cues for distinguishing different faces. However, the scale and orientation of these edge features carry additional detail that could be used to discriminate between individuals, and the original LBP operator cannot extract this information. In this paper, building on a review of LBP-based facial representation and recognition, histogram sequences of local Gabor binary patterns are used to represent facial images. Principal Component Analysis (PCA) is then applied to classify the histogram sequences, which have been converted to vectors. Recognition experiments show that the method used in this paper improves classification performance by nearly 6% over the original LBP operator.
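    The pipeline described above, Gabor filtering followed by LBP coding and block-wise histograms, can be sketched as follows. The filter-bank parameters, the 4x4 block grid, and the FFT-based circular convolution are illustrative choices, not those of the paper.

```python
import numpy as np

def gabor_kernel(size=9, theta=0.0, lam=4.0, sigma=2.0):
    """A single real Gabor kernel (illustrative parameters)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def lbp_codes(img):
    """Basic 8-neighbour LBP codes for the interior pixels of img."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code

def lgbp_histogram(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4), grid=4):
    """Concatenated LBP histograms of Gabor-magnitude responses over a block grid."""
    feats = []
    for theta in thetas:
        k = gabor_kernel(theta=theta)
        # circular 2-D convolution via FFT, same size as img
        resp = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, s=img.shape)))
        codes = lbp_codes(np.abs(resp))
        for block in np.array_split(codes, grid, axis=0):
            for cell in np.array_split(block, grid, axis=1):
                h, _ = np.histogram(cell, bins=256, range=(0, 256))
                feats.append(h / max(cell.size, 1))
    return np.concatenate(feats)
```

    The concatenated histogram vectors would then be stacked into a matrix and reduced with PCA before nearest-neighbour classification, as the abstract outlines.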

  14. Always on My Mind? Recognition of Attractive Faces May Not Depend on Attention

    PubMed Central

    Silva, André; Macedo, António F.; Albuquerque, Pedro B.; Arantes, Joana

    2016-01-01

    Little research has examined what happens to attention and memory as a whole when humans see someone attractive. Hence, we investigated whether attractive stimuli gather more attention and are better remembered than unattractive stimuli. Participants took part in an attention task – in which matrices containing attractive and unattractive male naturalistic photographs were presented to 54 females, and measures of eye-gaze location and fixation duration using an eye-tracker were taken – followed by a recognition task. Eye-gaze was higher for the attractive stimuli compared to unattractive stimuli. Also, attractive photographs produced more hits and false recognitions than unattractive photographs which may indicate that regardless of attention allocation, attractive photographs produce more correct but also more false recognitions. We present an evolutionary explanation for this, as attending to more attractive faces but not always remembering them accurately and differentially compared with unseen attractive faces, may help females secure mates with higher reproductive value. PMID:26858683

  15. PEM-PCA: A Parallel Expectation-Maximization PCA Face Recognition Architecture

    PubMed Central

    Rujirakul, Kanokmon; Arnonkijpanich, Banchar

    2014-01-01

    Principal component analysis (PCA) has traditionally been used as a feature extraction technique in face recognition systems, yielding high accuracy while requiring only a small number of features. However, the covariance matrix and eigenvalue decomposition stages incur high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, thereby reducing the complexity of these stages. To improve the computational time, a novel parallel architecture was employed to exploit the parallelization of matrix computation during the feature extraction and classification stages, including parallel preprocessing and their combinations, in a so-called Parallel Expectation-Maximization PCA architecture. Compared to traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems, that is, speed-ups of over nine and three times relative to PCA and Parallel PCA, respectively. PMID:24955405
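    The core idea, recovering the principal subspace by EM without forming the covariance matrix or running a full eigendecomposition, can be sketched as a serial NumPy routine (in the style of Roweis' EM algorithm for PCA). The parallel architecture itself is not reproduced here; iteration count and initialization are illustrative.

```python
import numpy as np

def em_pca(X, k, n_iter=50, seed=0):
    """EM-style PCA sketch: find a basis for the k-dimensional
    principal subspace of X without an explicit covariance matrix.

    X : (n_samples, n_features) data matrix.
    Returns an orthonormal (n_features, k) basis.
    """
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)          # center the data
    d = Xc.shape[1]
    W = rng.standard_normal((d, k))  # random initial basis
    for _ in range(n_iter):
        # E-step: latent coordinates of each sample in the current basis
        Z = np.linalg.solve(W.T @ W, W.T @ Xc.T)        # (k, n)
        # M-step: re-estimate the basis from those coordinates
        W = Xc.T @ Z.T @ np.linalg.inv(Z @ Z.T)         # (d, k)
    # orthonormalize; the span converges to the leading principal subspace
    Q, _ = np.linalg.qr(W)
    return Q
```

    Each iteration costs O(ndk) rather than the O(nd^2) needed to build the covariance matrix, which is the complexity saving the abstract refers to; both matrix products also parallelize naturally.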

  16. Multi-Directional Multi-Level Dual-Cross Patterns for Robust Face Recognition.

    PubMed

    Ding, Changxing; Choi, Jonghyun; Tao, Dacheng; Davis, Larry S

    2016-03-01

    To perform unconstrained face recognition robust to variations in illumination, pose and expression, this paper presents a new scheme to extract "Multi-Directional Multi-Level Dual-Cross Patterns" (MDML-DCPs) from face images. Specifically, the MDML-DCPs scheme exploits the first derivative of Gaussian operator to reduce the impact of differences in illumination and then computes the DCP feature at both the holistic and component levels. DCP is a novel face image descriptor inspired by the unique textural structure of human faces. It is computationally efficient and only doubles the cost of computing local binary patterns, yet is extremely robust to pose and expression variations. MDML-DCPs comprehensively yet efficiently encodes the invariant characteristics of a face image from multiple levels into patterns that are highly discriminative of inter-personal differences but robust to intra-personal variations. Experimental results on the FERET, CAS-PEAL-R1, FRGC 2.0, and LFW databases indicate that DCP outperforms the state-of-the-art local descriptors (e.g., LBP, LTP, LPQ, POEM, tLBP, and LGXP) for both face identification and face verification tasks. More impressively, the best performance is achieved on the challenging LFW and FRGC 2.0 databases by deploying MDML-DCPs in a simple recognition scheme. PMID:27046495
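    As a rough illustration of the dual-cross idea, the sketch below encodes, for each pixel, two sign bits per direction (inner sample versus the center, outer sample versus the inner sample) over eight directions, and splits the directions into two crosses that each yield a byte-valued code map. The radii and the king-move sampling on the pixel grid are simplifying assumptions and may differ from the descriptor defined in the paper.

```python
import numpy as np

def dcp_codes(img, r_in=1, r_ex=2):
    """Simplified Dual-Cross Patterns encoder (illustrative sketch).

    Returns two code maps (even and odd cross), each with values 0..255,
    covering the interior of img with a border of r_ex pixels.
    """
    h, w = img.shape
    m = r_ex
    O = img[m:h - m, m:w - m].astype(float)   # central pixels
    # eight king-move directions on the pixel grid
    dirs = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]
    quats = []
    for dx, dy in dirs:
        ay, ax = r_in * dy, r_in * dx         # inner sample offset
        by, bx = r_ex * dy, r_ex * dx         # outer sample offset
        A = img[m + ay:h - m + ay, m + ax:w - m + ax].astype(float)
        B = img[m + by:h - m + by, m + bx:w - m + bx].astype(float)
        # two sign bits per direction -> a value in 0..3
        quats.append(2 * (A >= O).astype(np.int32) + (B >= A).astype(np.int32))
    # split directions into two crosses; 4 directions x 2 bits = 8-bit codes
    even = sum(q * 4**i for i, q in enumerate(quats[0::2]))
    odd = sum(q * 4**i for i, q in enumerate(quats[1::2]))
    return even, odd
```

    Splitting the eight directions into two 4-direction codes keeps each code in 0..255, which is why the abstract can claim only double the cost of LBP: two LBP-sized histograms instead of one.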

  17. Activation reduction in anterior temporal cortices during repeated recognition of faces of personal acquaintances.

    PubMed

    Sugiura, M; Kawashima, R; Nakamura, K; Sato, N; Nakamura, A; Kato, T; Hatano, K; Schormann, T; Zilles, K; Sato, K; Ito, K; Fukuda, H

    2001-05-01

    Repeated recognition of the face of a familiar individual is known to show a semantic repetition priming effect. In this study, normal subjects were repeatedly presented with faces of their colleagues, and the effect of repetition on regional cerebral blood flow change was measured using positron emission tomography. They repeated a set of three tasks: the familiar-face detection (F) task, the facial direction discrimination (D) task, and the perceptual control (C) task. During five repetitions of the F task, familiar faces were presented six times from different views in a pseudorandom order. Activation reduction through repetition of the F task was observed in the bilateral anterior (anterolateral to the polar region) temporal cortices, which are suggested to be involved in access to long-term memory concerning people. The bilateral amygdala, the hypothalamus, and the medial frontal cortices were constantly activated during the F tasks, and are considered to be associated with the behavioral significance of the presented familiar faces. Constant activation was also observed in the bilateral occipitotemporal regions and fusiform gyri and the right medial temporal regions during perception of the faces, and in the left medial temporal regions during the facial familiarity detection task, consistent with the results of previous functional brain imaging studies. The results provide further information about the functional segregation of the anterior temporal regions in face recognition and long-term memory. PMID:11304083

  18. An ERP investigation of the co-development of hemispheric lateralization of face and word recognition

    PubMed Central

    Dundas, Eva M.; Plaut, David C.; Behrmann, Marlene

    2014-01-01

    The adult human brain would appear to have specialized and independent neural systems for the visual processing of words and faces. Extensive evidence has demonstrated greater selectivity for written words in the left over right hemisphere, and, conversely, greater selectivity for faces in the right over left hemisphere. This study examines the emergence of these complementary neural profiles, as well as the possible relationship between them. Using behavioral and neurophysiological measures, in adults, we observed the standard finding of greater accuracy and a larger N170 ERP component in the left over right hemisphere for words, and conversely, greater accuracy and a larger N170 in the right over the left hemisphere for faces. We also found that, although children aged 7-12 years revealed the adult hemispheric pattern for words, they showed neither a behavioral nor a neural hemispheric superiority for faces. Of particular interest, the magnitude of their N170 for faces in the right hemisphere was related to that of the N170 for words in their left hemisphere. These findings suggest that the hemispheric organization of