Reverse engineering the face space: Discovering the critical features for face identification.
Abudarham, Naphtali; Yovel, Galit
2016-01-01
How do we identify people? What are the critical facial features that define an identity and determine whether two faces belong to the same person or different people? To answer these questions, we applied the face space framework, according to which faces are represented as points in a multidimensional feature space, such that face space distances are correlated with perceptual similarities between faces. In particular, we developed a novel method that allowed us to reveal the critical dimensions (i.e., critical features) of the face space. To that end, we constructed a concrete face space, which included 20 facial features of natural face images, and asked human observers to evaluate feature values (e.g., how thick are the lips). Next, we systematically and quantitatively changed facial features, and measured the perceptual effects of these manipulations. We found that critical features were those for which participants have high perceptual sensitivity (PS) for detecting differences across identities (e.g., which of two faces has thicker lips). Furthermore, these high PS features vary minimally across different views of the same identity, suggesting high PS features support face recognition across different images of the same face. The methods described here set an infrastructure for discovering the critical features of other face categories not studied here (e.g., Asians, familiar) as well as other aspects of face processing, such as attractiveness or trait inferences.
Neural correlates of processing facial identity based on features versus their spacing.
Maurer, D; O'Craven, K M; Le Grand, R; Mondloch, C J; Springer, M V; Lewis, T L; Grady, C L
2007-04-08
Adults' expertise in recognizing facial identity involves encoding subtle differences among faces in the shape of individual facial features (featural processing) and in the spacing among features (a type of configural processing called sensitivity to second-order relations). We used fMRI to investigate the neural mechanisms that differentiate these two types of processing. Participants made same/different judgments about pairs of faces that differed only in the shape of the eyes and mouth, with minimal differences in spacing (featural blocks), or pairs of faces that had identical features but differed in the positions of those features (spacing blocks). From a localizer scan with faces, objects, and houses, we identified regions with comparatively more activity for faces, including the fusiform face area (FFA) in the right fusiform gyrus, other extrastriate regions, and prefrontal cortices. Contrasts between the featural and spacing conditions revealed distributed patterns of activity differentiating the two conditions. A region of the right fusiform gyrus (near but not overlapping the localized FFA) showed greater activity during the spacing task, along with multiple areas of right frontal cortex, whereas left prefrontal activity increased for featural processing. These patterns of activity were not related to differences in performance between the two tasks. The results indicate that the processing of facial features is distinct from the processing of second-order relations in faces, and that these functions are mediated by separate and lateralized networks involving the right fusiform gyrus, although the FFA as defined from a localizer scan is not differentially involved.
Face-space architectures: evidence for the use of independent color-based features.
Nestor, Adrian; Plaut, David C; Behrmann, Marlene
2013-07-01
The concept of psychological face space lies at the core of many theories of face recognition and representation. To date, much of the understanding of face space has been based on principal component analysis (PCA); the structure of the psychological space is thought to reflect some important aspects of a physical face space characterized by PCA applications to face images. In the present experiments, we investigated alternative accounts of face space and found that independent component analysis provided the best fit to human judgments of face similarity and identification. Thus, our results challenge an influential approach to the study of human face space and provide evidence for the role of statistically independent features in face encoding. In addition, our findings support the use of color information in the representation of facial identity, and we thus argue for the inclusion of such information in theoretical and computational constructs of face space.
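The PCA-versus-ICA comparison described above can be sketched with scikit-learn; the "faces" here are random stand-ins for vectorized face images, and the component counts are arbitrary choices, not the study's:

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
# Synthetic stand-in for vectorized face images: 200 "faces", 64 dimensions.
faces = rng.normal(size=(200, 64))

# PCA components are orthogonal and capture maximal variance;
# ICA components are statistically independent but need not be orthogonal.
pca = PCA(n_components=10).fit(faces)
ica = FastICA(n_components=10, random_state=0, max_iter=1000).fit(faces)

pca_codes = pca.transform(faces)   # coordinates in a PCA face space
ica_codes = ica.transform(faces)   # coordinates in an ICA face space

# Pairwise distances in either space could then be correlated with human
# similarity judgments to ask which space fits behavior better.
d_pca = pdist(pca_codes)
d_ica = pdist(ica_codes)
```

With real data, the comparison would correlate `d_pca` and `d_ica` against behavioral similarity ratings for the same face pairs.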
Yovel, Galit
2009-11-01
It is often argued that picture-plane face inversion impairs discrimination of the spacing among face features to a greater extent than the identity of the facial features. However, several recent studies have reported similar inversion effects for both types of face manipulations. In a recent review, Rossion (2008) claimed that similar inversion effects for spacing and features are due to methodological and conceptual shortcomings and that data still support the idea that inversion impairs the discrimination of features less than that of the spacing among them. Here I will claim that when facial features differ primarily in shape, the effect of inversion on features is not smaller than the one on spacing. It is when color/contrast information is added to facial features that the inversion effect on features decreases. This obvious observation accounts for the discrepancy in the literature and suggests that the large inversion effect that was found for features that differ in shape is not a methodological artifact. These findings together with other data that are discussed are consistent with the idea that the shape of facial features and the spacing among them are integrated rather than dissociated in the holistic representation of faces.
Limitations in 4-Year-Old Children's Sensitivity to the Spacing among Facial Features
ERIC Educational Resources Information Center
Mondloch, Catherine J.; Thomson, Kendra
2008-01-01
Four-year-olds' sensitivity to differences among faces in the spacing of features was tested under 4 task conditions: judging distinctiveness when the external contour was visible and when it was occluded, simultaneous match-to-sample, and recognizing the face of a friend. In each task, the foil differed only in the spacing of features, and…
Super-resolution method for face recognition using nonlinear mappings on coherent features.
Huang, Hua; He, Huiting
2011-01-01
The low resolution (LR) of face images significantly decreases the performance of face recognition. To address this problem, we present a super-resolution method that uses nonlinear mappings to infer coherent features that favor higher recognition rates with nearest neighbor (NN) classifiers for a single LR face image. Canonical correlation analysis is applied to establish coherent subspaces between the principal component analysis (PCA) based features of high-resolution (HR) and LR face images. A nonlinear mapping between HR/LR features can then be built by radial basis functions (RBFs) with lower regression errors in the coherent feature space than in the PCA feature space. Thus, we can compute super-resolved coherent features corresponding to an input LR image efficiently and accurately using the trained RBF model, and obtain the face identity by feeding these super-resolved features to a simple NN classifier. Extensive experiments on the Facial Recognition Technology, University of Manchester Institute of Science and Technology, and Olivetti Research Laboratory databases show that the proposed method outperforms state-of-the-art face recognition algorithms for single LR images in terms of both recognition rate and robustness to facial variations in pose and expression.
The organization of conspecific face space in nonhuman primates
Parr, Lisa A.; Taubert, Jessica; Little, Anthony C.; Hancock, Peter J. B.
2013-01-01
Humans and chimpanzees demonstrate numerous cognitive specializations for processing faces, but comparative studies with monkeys suggest that these may be the result of recent evolutionary adaptations. The present study utilized the novel approach of face space, a powerful theoretical framework used to understand the representation of face identity in humans, to further explore species differences in face processing. According to the theory, faces are represented by vectors in a multidimensional space, the centre of which is defined by an average face. Each dimension codes features important for describing a face’s identity, and vector length codes the feature’s distinctiveness. Chimpanzees and rhesus monkeys discriminated male and female conspecifics’ faces, rated by humans for their distinctiveness, using a computerized task. Multidimensional scaling analyses showed that the organization of face space was similar between humans and chimpanzees. Distinctive faces had the longest vectors and were the easiest for chimpanzees to discriminate. In contrast, distinctiveness did not correlate with the performance of rhesus monkeys. The feature dimensions for each species’ face space were visualized and described using morphing techniques. These results confirm species differences in the perceptual representation of conspecific faces, which are discussed within an evolutionary framework. PMID:22670823
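The multidimensional scaling step described above can be illustrated with scikit-learn; the dissimilarity matrix here is synthetic, standing in for behavioral discrimination data:

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(2)
# Stand-in dissimilarity matrix for 12 conspecific faces (e.g., derived
# from confusion rates in a discrimination task).
x = rng.normal(size=(12, 5))
diss = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)

# MDS embeds the faces in a low-dimensional "face space".
mds = MDS(n_components=2, dissimilarity='precomputed', random_state=0)
coords = mds.fit_transform(diss)

# Centre the space on the average face; vector length from the centre
# is then a proxy for each face's distinctiveness.
coords -= coords.mean(axis=0)
distinctiveness = np.linalg.norm(coords, axis=1)
```

Under the face-space account, `distinctiveness` should correlate with discrimination performance, as the abstract reports for chimpanzees but not rhesus monkeys.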
Transfer learning for bimodal biometrics recognition
NASA Astrophysics Data System (ADS)
Dan, Zhiping; Sun, Shuifa; Chen, Yanfei; Gan, Haitao
2013-10-01
Biometrics recognition aims to identify and predict new personal identities based on existing knowledge. Because using multiple biometric traits of an individual makes more information available for recognition, multi-biometrics has been shown to produce higher accuracy than single biometrics. However, a common limitation of traditional machine learning is that the training and test data must lie in the same feature space and have the same underlying distribution; when the distributions and features differ between training and future data, model performance often drops. In this paper, we propose a transfer learning method for face recognition with bimodal biometrics, where the training and test samples consist of visible-light face images and infrared face images. Our algorithm transfers knowledge across feature spaces, relaxing the assumptions of a shared feature space and a shared underlying distribution by automatically learning a mapping between two different but related types of face images. Experiments on face images show that the proposed method greatly improves face recognition accuracy compared with previous methods, demonstrating its effectiveness and robustness.
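One simple way to realize the cross-feature-space mapping the abstract describes is a regression from one modality's features to the other's; the abstract does not specify the mapping, so ridge regression and all data here are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
n, d_vis, d_ir = 80, 40, 40
visible = rng.normal(size=(n, d_vis))            # visible-light features
# Related but differently distributed modality (stand-in for infrared).
infrared = visible @ rng.normal(size=(d_vis, d_ir)) * 0.5 \
           + 0.1 * rng.normal(size=(n, d_ir))
labels = np.repeat(np.arange(20), 4)             # 20 identities, 4 images each

# Learn a mapping from the infrared feature space into the visible one,
# relaxing the "same feature space / same distribution" assumption.
mapping = Ridge(alpha=1.0).fit(infrared, visible)
ir_mapped = mapping.predict(infrared)

# Recognition is then done in the common (visible) space.
clf = KNeighborsClassifier(n_neighbors=1).fit(visible, labels)
accuracy = (clf.predict(ir_mapped) == labels).mean()
```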
Reading Faces: From Features to Recognition.
Guntupalli, J Swaroop; Gobbini, M Ida
2017-12-01
Chang and Tsao recently reported that the monkey face patch system encodes facial identity in a space of facial features as opposed to exemplars. Here, we discuss how such coding might contribute to face recognition, emphasizing the critical role of learning and interactions with other brain areas for optimizing the recognition of familiar faces. Copyright © 2017 Elsevier Ltd. All rights reserved.
Pose Invariant Face Recognition Based on Hybrid Dominant Frequency Features
NASA Astrophysics Data System (ADS)
Wijaya, I. Gede Pasek Suta; Uchimura, Keiichi; Hu, Zhencheng
Face recognition is one of the most active research areas in pattern recognition, not only because the face is a biometric characteristic of human beings but also because of its many potential applications, ranging from human-computer interaction to authentication, security, and surveillance. This paper presents an approach to pose-invariant human face image recognition. The proposed scheme is based on the analysis of discrete cosine transforms (DCT) and discrete wavelet transforms (DWT) of face images. From the DCT and DWT domain coefficients, which describe the facial information, we build a compact and meaningful feature vector using simple statistical measures and quantization; this vector is called the hybrid dominant frequency features. We then apply a combination of the L2 and Lq metrics to classify the hybrid dominant frequency features to a person's class. The aim of the proposed system is to overcome the high memory requirements, high computational load, and retraining problems of previous methods. The system is tested on several face databases and the experimental results are compared to the well-known Eigenface method. The proposed method shows good performance, robustness, stability, and accuracy without requiring geometrical normalization. Furthermore, it has low computational cost, requires little memory, and can overcome the retraining problem.
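A minimal sketch of such a hybrid frequency feature and a combined L2/Lq metric is below; the subband sizes, statistics, and metric weights are invented for illustration (the paper's exact construction is not specified in the abstract), and a one-level Haar average stands in for a full DWT:

```python
import numpy as np
from scipy.fft import dctn

def haar_dwt2(img):
    """One-level 2-D Haar transform: returns the approximation subband."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # average row pairs
    return (a[:, 0::2] + a[:, 1::2]) / 2.0    # average column pairs

def hybrid_features(img, k=8):
    """Hybrid dominant-frequency feature vector from DCT and DWT domains."""
    dct_block = dctn(img, norm='ortho')[:k, :k].ravel()   # low-frequency DCT
    dwt_block = haar_dwt2(img)[:k, :k].ravel()            # DWT approximation
    # Simple statistical summary, as the abstract suggests.
    stats = [dct_block.mean(), dct_block.std(),
             dwt_block.mean(), dwt_block.std()]
    return np.concatenate([dct_block, dwt_block, stats])

def combined_metric(f1, f2, q=1, w=0.5):
    """Weighted combination of the L2 and Lq distances."""
    l2 = np.linalg.norm(f1 - f2)
    lq = np.sum(np.abs(f1 - f2) ** q) ** (1.0 / q)
    return w * l2 + (1 - w) * lq

rng = np.random.default_rng(4)
face = rng.normal(size=(32, 32))   # stand-in face image
f = hybrid_features(face)          # 64 DCT + 64 DWT + 4 statistics = 132 values
```

Classification would then assign a probe to the class whose stored feature vector minimizes `combined_metric`.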
ERIC Educational Resources Information Center
Mondloch, Catherine J.; Leis, Anishka; Maurer, Daphne
2006-01-01
Four-year-olds were tested for their ability to use differences in the spacing among features to recognize familiar faces. They were given a storybook depicting multiple views of 2 children. They returned to the laboratory 2 weeks later and used a "magic wand" to play a computer game that tested their ability to recognize the familiarized faces…
Three-dimensional face model reproduction method using multiview images
NASA Astrophysics Data System (ADS)
Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio
1991-11-01
This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence. The goal of this research, as an integral component of such a system, is to generate a three-dimensional face model from facial images and to synthesize images of the model viewed virtually from different angles, with natural shadows to suit the lighting conditions of the virtual space. The proposed method is as follows. First, front and side view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from the front and side views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image, and a personal face model representing the individual character is reproduced. Next, an oblique view image is taken by a TV camera and its feature points are extracted with the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image; the modified boundary is determined using the face direction (rotation angle), which is detected from the extracted feature points. After the 3D model is established, new images are synthesized by mapping facial texture onto the model.
Centre-based restricted nearest feature plane with angle classifier for face recognition
NASA Astrophysics Data System (ADS)
Tang, Linlin; Lu, Huifen; Zhao, Liang; Li, Zuohua
2017-10-01
An improved classifier based on the nearest feature plane (NFP), called the centre-based restricted nearest feature plane with angle (RNFPA) classifier, is proposed here for face recognition problems. The well-known NFP uses the geometrical information of samples to increase the effective number of training samples, but it increases computational complexity and suffers from an inaccuracy problem caused by the extended feature plane. To solve these problems, RNFPA exploits a centre-based feature plane and uses an angle threshold to restrict the extended feature space. By choosing an appropriate angle threshold, RNFPA improves performance while decreasing computational complexity. Experiments on the AT&T, AR, and FERET face databases are used to evaluate the proposed classifier. Compared with the original NFP classifier, the nearest feature line (NFL) classifier, the nearest neighbour (NN) classifier, and some other improved NFP classifiers, the proposed classifier achieves competitive performance.
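The core NFP computation, distance from a query to a plane through training samples, with an added angle restriction, can be sketched as follows. The abstract does not give RNFPA's exact restriction rule, so the cone test around the centre direction here is one plausible interpretation, not the paper's definition:

```python
import numpy as np

def nfp_distance(query, centre, s1, s2, max_angle_deg=45.0):
    """Distance from `query` to the feature plane through `centre` spanned
    by (s1 - centre) and (s2 - centre). If the projection falls outside an
    angular cone around the mean sample direction, reject the plane."""
    basis = np.stack([s1 - centre, s2 - centre])            # 2 x d
    # Least-squares coefficients of the projection onto the plane.
    coeff, *_ = np.linalg.lstsq(basis.T, query - centre, rcond=None)
    proj = centre + basis.T @ coeff
    # Angle restriction: projection direction vs. mean sample direction.
    ref = (s1 - centre) + (s2 - centre)
    v = proj - centre
    denom = np.linalg.norm(v) * np.linalg.norm(ref)
    if denom > 0:
        angle = np.degrees(np.arccos(np.clip(v @ ref / denom, -1.0, 1.0)))
        if angle > max_angle_deg:
            return np.inf   # restricted: projection lies outside the cone
    return np.linalg.norm(query - proj)

rng = np.random.default_rng(5)
c, a, b = rng.normal(size=(3, 10))        # class centre and two samples
q = c + 0.3 * (a - c) + 0.2 * (b - c)     # a point exactly on the plane
d_on_plane = nfp_distance(q, c, a, b)     # near zero
```

Classification assigns the query to the class whose (restricted) plane distance is smallest.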
Cross-modal face recognition using multi-matcher face scores
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Blasch, Erik
2015-05-01
The performance of face recognition can be improved by information fusion across multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, a probe face may be a thermal image (especially at nighttime), while only visible face images are available in the gallery database; matching a thermal probe face against visible gallery faces requires cross-modal matching approaches. A few such studies have been implemented in facial feature space with medium recognition performance. In this paper, we propose a cross-modal recognition approach in which multimodal faces are cross-matched in feature space and recognition performance is enhanced with stereo fusion at the image, feature, and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed from the three cross-matched face scores produced by these algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logistic regression [BLR]) is trained and then tested on the score vectors using 10-fold cross-validation. The proposed approach was validated with a multispectral stereo face dataset from 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84% and FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces onto the fused visible faces using three face scores and the BLR classifier.
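The score-level fusion step can be sketched as training a classifier on per-comparison score vectors; the score distributions below are synthetic stand-ins for the three matchers' outputs:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n = 200
# Each row is a score vector from three hypothetical matchers for one
# probe/gallery comparison; label 1 = genuine match, 0 = impostor.
genuine = rng.normal(loc=0.8, scale=0.1, size=(n // 2, 3))
impostor = rng.normal(loc=0.4, scale=0.1, size=(n // 2, 3))
scores = np.vstack([genuine, impostor])
labels = np.concatenate([np.ones(n // 2), np.zeros(n // 2)])

# Logistic regression on the fused score vectors (in the spirit of the
# BLR classifier), evaluated with 10-fold cross-validation as described.
clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, scores, labels, cv=10).mean()
```

Swapping `LogisticRegression` for `KNeighborsClassifier` or `SVC` reproduces the other classifier options the abstract lists.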
Sensitivity to Spacing Changes in Faces and Nonface Objects in Preschool-Aged Children and Adults
ERIC Educational Resources Information Center
Cassia, Viola Macchi; Turati, Chiara; Schwarzer, Gudrun
2011-01-01
Sensitivity to variations in the spacing of features in faces and a class of nonface objects (i.e., frontal images of cars) was tested in 3- and 4-year-old children and adults using a delayed or simultaneous two-alternative forced choice matching-to-sample task. In the adults, detection of spacing information was robust against exemplar…
Face recognition by applying wavelet subband representation and kernel associative memory.
Zhang, Bai-Ling; Zhang, Haihong; Ge, Shuzhi Sam
2004-01-01
In this paper, we propose an efficient face recognition scheme with two features: 1) representation of face images by two-dimensional (2-D) wavelet subband coefficients and 2) recognition by a modular, personalised classification method based on kernel associative memory models. Compared to PCA projections and low-resolution "thumbnail" image representations, wavelet subband coefficients can efficiently capture substantial facial features while keeping computational complexity low. Because training samples are usually very limited, we constructed an associative memory (AM) model for each person and propose improving the performance of AM models with kernel methods. Specifically, we first apply kernel transforms to each possible pair of training face samples and then map the high-dimensional feature space back to the input space. Our scheme, which uses modular autoassociative memory for face recognition, is inspired by the same motivation as using autoencoders for optical character recognition (OCR), whose advantages have been demonstrated. In the associative memory, all the prototypical faces of one particular person are used to reconstruct themselves, and the reconstruction error for a probe face image is used to decide whether the probe face belongs to the corresponding person. We carried out extensive experiments on three standard face recognition datasets: FERET, XM2VTS, and ORL. Detailed comparisons with earlier published results are provided, and our proposed scheme offers better recognition accuracy on all of the face datasets.
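The per-person reconstruction-error idea can be sketched with a plain linear autoassociative memory (the kernel extension and wavelet front end are omitted); all data are synthetic stand-ins:

```python
import numpy as np

class AssociativeMemory:
    """Linear autoassociative memory for one person: stores prototype
    vectors and reconstructs inputs by projection onto their span."""

    def fit(self, prototypes):
        p = prototypes.T                       # dim x n_prototypes
        self.proj = p @ np.linalg.pinv(p)      # dim x dim projector
        return self

    def reconstruction_error(self, x):
        return np.linalg.norm(x - self.proj @ x)

rng = np.random.default_rng(7)
dim = 50
# Two "persons", each with a few prototype face vectors near a mean.
means = rng.normal(size=(2, dim))
memories = [AssociativeMemory().fit(m + 0.1 * rng.normal(size=(5, dim)))
            for m in means]

# A probe near person 0 should reconstruct best in person 0's memory.
probe = means[0] + 0.1 * rng.normal(size=dim)
errors = [mem.reconstruction_error(probe) for mem in memories]
predicted = int(np.argmin(errors))
```

The modular design means adding a new person only requires fitting one new memory, which is one way the scheme avoids retraining everything.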
ERIC Educational Resources Information Center
Robbins, Rachel A.; Shergill, Yaadwinder; Maurer, Daphne; Lewis, Terri L.
2011-01-01
Adults are expert at recognizing faces, in part because of exquisite sensitivity to the spacing of facial features. Children are poorer than adults at recognizing facial identity and less sensitive to spacing differences. Here we examined the specificity of the immaturity by comparing the ability of 8-year-olds, 14-year-olds, and adults to…
Fast hierarchical knowledge-based approach for human face detection in color images
NASA Astrophysics Data System (ADS)
Jiang, Jun; Gong, Jie; Zhang, Guilin; Hu, Ruolan
2001-09-01
This paper presents a fast hierarchical knowledge-based approach for automatically detecting multi-scale upright faces in still color images. The approach consists of three levels. At the highest level, skin-like regions are determined by a skin model based on the color attributes hue and saturation in HSV color space, as well as the red and green attributes in normalized color space. At level 2, a new eye model is devised to select face candidates within the segmented skin-like regions. An important feature of the eye model is that it is independent of face scale, so faces of different scales can be found while scanning the image only once, which greatly reduces the computation time of face detection. At level 3, a face mosaic image model, which corresponds well to the physical structure of the human face, is applied to judge whether faces are present in the candidate regions; this model includes edge and gray-level rules. Experimental results show that the approach is highly robust and fast, with broad application prospects in human-computer interaction, video telephony, and similar areas.
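The top-level skin segmentation can be sketched as thresholds on HSV hue/saturation plus normalized chromaticity. The HSV conversion below is the standard formula; the threshold values are invented for illustration and are not the paper's:

```python
import numpy as np

def rgb_to_hsv_hs(rgb):
    """Hue (degrees) and saturation for an (N, 3) float RGB array in [0, 1]."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    mx, mn = rgb.max(axis=1), rgb.min(axis=1)
    delta = mx - mn
    hue = np.zeros_like(mx)
    m = delta > 0
    rm = m & (mx == r)
    gm = m & (mx == g) & ~rm
    bm = m & ~rm & ~gm
    hue[rm] = 60 * ((g[rm] - b[rm]) / delta[rm] % 6)
    hue[gm] = 60 * ((b[gm] - r[gm]) / delta[gm] + 2)
    hue[bm] = 60 * ((r[bm] - g[bm]) / delta[bm] + 4)
    sat = np.where(mx > 0, delta / np.maximum(mx, 1e-9), 0.0)
    return hue, sat

def skin_mask(rgb):
    """Illustrative skin thresholds on hue/saturation plus normalized
    r, g chromaticity (example values, not the paper's)."""
    hue, sat = rgb_to_hsv_hs(rgb)
    total = rgb.sum(axis=1) + 1e-9
    r_n, g_n = rgb[:, 0] / total, rgb[:, 1] / total
    return ((hue < 50) | (hue > 340)) & (sat > 0.1) & (sat < 0.7) \
           & (r_n > 0.36) & (g_n > 0.28) & (g_n < 0.36)

pixels = np.array([[0.8, 0.5, 0.4],    # skin-like tone
                   [0.1, 0.2, 0.9]])   # blue background
mask = skin_mask(pixels)
```

Only pixels passing this cheap color test would reach the more expensive eye-model and mosaic-model stages.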
Developmental Changes in Face Processing Skills.
ERIC Educational Resources Information Center
Mondloch, Catherine J.; Geldart, Sybil; Maurer, Daphne; Le Grand, Richard
2003-01-01
Two experiments examined the impact of slow development of processing differences among faces in the spacing among facial features (second-order relations). Computerized tasks involving various face-processing skills were used. Results of experiment with 6-, 8-, and 10-year-olds and with adults indicated that slow development of sensitivity to…
Shy children are less sensitive to some cues to facial recognition.
Brunet, Paul M; Mondloch, Catherine J; Schmidt, Louis A
2010-02-01
Temperamental shyness in children is characterized by avoidance of faces and eye contact, beginning in infancy. We conducted two studies to determine whether temperamental shyness was associated with deficits in sensitivity to some cues to facial identity. In Study 1, 40 typically developing 10-year-old children made same/different judgments about pairs of faces that differed in the appearance of individual features, the shape of the external contour, or the spacing among features; their parent completed the Colorado childhood temperament inventory (CCTI). Children who scored higher on CCTI shyness made more errors than their non-shy counterparts only when discriminating faces based on the spacing of features. Differences in accuracy were not related to other scales of the CCTI. In Study 2, we showed that these differences were face-specific and cannot be attributed to differences in task difficulty. Findings suggest that shy children are less sensitive to some cues to facial recognition possibly underlying their inability to distinguish certain facial emotions in others, leading to a cascade of secondary negative effects in social behaviour.
A face and palmprint recognition approach based on discriminant DCT feature extraction.
Jing, Xiao-Yuan; Zhang, David
2004-12-01
In the field of image processing and recognition, the discrete cosine transform (DCT) and linear discrimination are two widely used techniques. Based on them, we present a new face and palmprint recognition approach in this paper. It first uses a two-dimensional separability judgment to select the DCT frequency bands with favorable linear separability. From the selected bands, it then extracts linear discriminative features with an improved Fisherface method and performs classification with the nearest neighbor classifier. We analyze in detail the theoretical advantages of our approach for feature extraction. Experiments on face databases and a palmprint database demonstrate that, compared to state-of-the-art linear discrimination methods, our approach obtains better classification performance: it significantly improves the recognition rates for face and palmprint data and effectively reduces the dimension of the feature space.
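The band selection plus discriminant pipeline can be sketched as follows, using a between/within variance ratio as a simple separability score (the paper's exact judgment may differ) and standard LDA in place of the improved Fisherface method; all data are synthetic:

```python
import numpy as np
from scipy.fft import dctn
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(8)
n_classes, per_class, side = 5, 8, 16
means = rng.normal(size=(n_classes, side, side))
images = np.concatenate([m + 0.3 * rng.normal(size=(per_class, side, side))
                         for m in means])
labels = np.repeat(np.arange(n_classes), per_class)

# DCT of every image; each coefficient position is a candidate "band".
coeffs = np.stack([dctn(im, norm='ortho') for im in images])
coeffs = coeffs.reshape(len(images), -1)

# Separability score per coefficient: between-class / within-class variance.
grand = coeffs.mean(axis=0)
class_means = np.stack([coeffs[labels == c].mean(axis=0)
                        for c in range(n_classes)])
between = ((class_means - grand) ** 2).mean(axis=0)
within = np.stack([coeffs[labels == c].var(axis=0)
                   for c in range(n_classes)]).mean(axis=0)
score = between / (within + 1e-9)

# Keep the most separable bands, then LDA + nearest-neighbour classification.
keep = np.argsort(score)[-40:]
lda = LinearDiscriminantAnalysis(n_components=n_classes - 1)
feats = lda.fit_transform(coeffs[:, keep], labels)
clf = KNeighborsClassifier(n_neighbors=1).fit(feats, labels)
accuracy = (clf.predict(feats) == labels).mean()   # training accuracy
```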
ERIC Educational Resources Information Center
Kallkvist, Marie; Gomez, Stephen; Andersson, Holger; Lush, David
2009-01-01
The purpose of this study was to create and evaluate personalised virtual learning spaces (PVLSs) in a course that was previously delivered face-to-face only. The study addressed three related questions: (1) Can a PVLS successfully be introduced into a course where IT has not previously featured? (2) Can the PVLSs be used to enhance the assessment…
Mapping the emotional face. How individual face parts contribute to successful emotion recognition.
Wegrzyn, Martin; Vogt, Maria; Kireclioglu, Berna; Schneider, Julia; Kissler, Johanna
2017-01-01
Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have been frequently shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated on a fine-grained level, which physical features are most relied on when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and assign it the correct label. For each part of the face, its contribution to successful recognition was computed, allowing to visualize the importance of different face areas for each expression. Overall, observers were mostly relying on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed to group the expressions in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (mouth). The face parts with highest diagnostic value for expression identification were typically located in areas corresponding to action units from the facial action coding system. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eyes or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation.
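The per-tile contribution analysis can be sketched as a simple diagnosticity index over trials; the trial data, the 6 x 8 grid layout, and the index itself are illustrative assumptions (the study's exact computation is not given in the abstract):

```python
import numpy as np

rng = np.random.default_rng(9)
n_tiles, n_trials = 48, 300
# Hypothetical trial data: which tiles were uncovered when the observer
# stopped the sequence, and whether the expression label was correct.
uncovered = rng.random((n_trials, n_tiles)) < 0.4
correct = rng.random(n_trials) < 0.7

# A tile's contribution: how much more often it was visible on correct
# than on incorrect trials.
p_vis_correct = uncovered[correct].mean(axis=0)
p_vis_incorrect = uncovered[~correct].mean(axis=0)
contribution = p_vis_correct - p_vis_incorrect

# Reshape to a tile grid (6 x 8 assumed here) to view as an importance map.
importance_map = contribution.reshape(6, 8)
```

Computing one such map per emotion, then comparing maps across emotions, yields the similarity space the abstract describes.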
Dimitriou, D; Leonard, H C; Karmiloff-Smith, A; Johnson, M H; Thomas, M S C
2015-05-01
Configural processing in face recognition is a sensitivity to the spacing between facial features. It has been argued both that its presence represents a high level of expertise in face recognition, and also that it is a developmentally vulnerable process. We report a cross-syndrome investigation of the development of configural face recognition in school-aged children with autism, Down syndrome and Williams syndrome compared with a typically developing comparison group. Cross-sectional trajectory analyses were used to compare configural and featural face recognition utilising the 'Jane faces' task. Trajectories were constructed linking featural and configural performance either to chronological age or to different measures of mental age (receptive vocabulary, visuospatial construction), as well as the Benton face recognition task. An emergent inversion effect across age for detecting configural but not featural changes in faces was established as the marker of typical development. Children from clinical groups displayed atypical profiles that differed across all groups. We discuss the implications for the nature of face processing within the respective developmental disorders, and how the cross-sectional syndrome comparison informs the constraints that shape the typical development of face recognition. © 2014 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.
Adaboost multi-view face detection based on YCgCr skin color model
NASA Astrophysics Data System (ADS)
Lan, Qi; Xu, Zhiyong
2016-09-01
The traditional Adaboost face detection algorithm uses Haar-like features to train face classifiers, whose detection error rate is low within face regions. Under complex backgrounds, however, the classifiers easily misclassify background regions whose gray-level distribution resembles that of faces, so the false detection rate of the traditional Adaboost algorithm is high. As one of the most important features of a face, skin color clusters well in the YCgCr color space, so a skin color model can quickly exclude non-face areas. Combining the advantages of the Adaboost algorithm and skin color detection, this paper proposes an Adaboost face detection method based on a YCgCr skin color model. Experiments show that, compared with the traditional algorithm, the proposed method significantly improves detection accuracy and reduces false detections.
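The skin-color prefilter can be sketched as an RGB-to-YCgCr conversion plus chrominance thresholds. The Y and Cr coefficients are the standard digital ITU-R BT.601 values and Cg is formed analogously from the G - Y difference; the threshold ranges are illustrative, not the paper's:

```python
import numpy as np

def rgb_to_ycgcr(rgb):
    """Convert an (N, 3) RGB array in [0, 1] to Y, Cg, Cr (digital range).
    Cg is defined analogously to Cr, using the green difference G - Y."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    y = 16 + 65.481 * r + 128.553 * g + 24.966 * b
    cg = 128 - 81.085 * r + 112.0 * g - 30.915 * b
    cr = 128 + 112.0 * r - 93.786 * g - 18.214 * b
    return y, cg, cr

def skin_candidate(rgb, cg_range=(85, 135), cr_range=(135, 180)):
    """Illustrative skin thresholds on the Cg/Cr plane; pixels passing
    the test would go on to the Adaboost cascade."""
    _, cg, cr = rgb_to_ycgcr(rgb)
    return (cg_range[0] <= cg) & (cg <= cg_range[1]) \
         & (cr_range[0] <= cr) & (cr <= cr_range[1])

pixels = np.array([[0.9, 0.6, 0.5],    # skin-like tone
                   [0.1, 0.8, 0.2]])   # green background
mask = skin_candidate(pixels)
```

Running the Haar-feature cascade only inside `mask` regions is what cuts the false detections on face-like backgrounds.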
Zhou, Xiaomei; Short, Lindsey A; Chan, Harmonie S J; Mondloch, Catherine J
2016-09-01
Young and older adults are more sensitive to deviations from normality in young than older adult faces, suggesting that the dimensions of face space are optimized for young adult faces. Here, we extend these findings to own-race faces and provide converging evidence using an attractiveness rating task. In Experiment 1, Caucasian and Chinese adults were shown own- and other-race face pairs; one member was undistorted and the other had compressed or expanded features. Participants indicated which member of each pair was more normal (a task that requires referencing a norm) and which was more expanded (a task that simply requires discrimination). Participants showed an own-race advantage in the normality task but not the discrimination task. In Experiment 2, participants rated the facial attractiveness of own- and other-race faces (Experiment 2a) or young and older adult faces (Experiment 2b). Between-rater variability in ratings of individual faces was higher for other-race and older adult faces; reduced consensus in attractiveness judgments reflects a less refined face space. Collectively, these results provide direct evidence that the dimensions of face space are optimized for own-race and young adult faces, which may underlie face race- and age-based deficits in recognition. © The Author(s) 2016.
Developmental changes in analytic and holistic processes in face perception.
Joseph, Jane E; DiBartolo, Michelle D; Bhatt, Ramesh S
2015-01-01
Although infants demonstrate sensitivity to some kinds of perceptual information in faces, many face capacities continue to develop throughout childhood. One debate is the degree to which children perceive faces analytically versus holistically and how these processes undergo developmental change. In the present study, school-aged children and adults performed a perceptual matching task with upright and inverted face and house pairs that varied in similarity of featural or 2nd order configural information. Holistic processing was operationalized as the degree of serial processing when discriminating faces and houses [i.e., increased reaction time (RT) as more features or spacing relations were shared between stimuli]. Analytical processing was operationalized as the degree of parallel processing (or no change in RT as a function of greater similarity of features or spatial relations). Adults showed the most evidence for holistic processing (most strongly for 2nd order faces), and holistic processing was weaker for inverted faces and houses. Younger children (6-8 years), in contrast, showed analytical processing across all experimental manipulations. Older children (9-11 years) showed an intermediate pattern, with a trend toward holistic processing of 2nd order faces like adults but parallel processing in other experimental conditions like younger children. These findings indicate that holistic face representations emerge around 10 years of age. In adults both 2nd order and featural information are incorporated into holistic representations, whereas older children only incorporate 2nd order information. Holistic processing was not evident in younger children. Hence, the development of holistic face representations relies on 2nd order processing initially, then incorporates featural information by adulthood.
NASA Astrophysics Data System (ADS)
Hsieh, Cheng-Ta; Huang, Kae-Horng; Lee, Chang-Hsing; Han, Chin-Chuan; Fan, Kuo-Chin
2017-12-01
Robust face recognition under illumination variations is an important and challenging task in a face recognition system, particularly for face recognition in the wild. In this paper, a face image preprocessing approach, called spatial adaptive shadow compensation (SASC), is proposed to eliminate shadows in the face image due to different lighting directions. First, spatial adaptive histogram equalization (SAHE), which uses a face intensity prior model, is proposed to enhance the contrast of each local face region without generating visible noise in smooth face areas. Adaptive shadow compensation (ASC), which performs shadow compensation in each local image block, is then used to produce a well-compensated face image appropriate for face feature extraction and recognition. Finally, null-space linear discriminant analysis (NLDA) is employed to extract discriminant features from SASC-compensated images. Experiments performed on the Yale B, extended Yale B, and CMU PIE face databases have shown that the proposed SASC always yields the best face recognition accuracy; that is, SASC is more robust to illumination variations than other shadow compensation approaches.
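The block-wise compensation idea can be illustrated with a much-simplified stand-in for ASC: rescale each local block so its mean brightness matches the global mean, lightening shadowed blocks and darkening over-lit ones. This sketch is not the SASC algorithm itself:

```python
import numpy as np

# Simplified block-wise shadow compensation in the spirit of ASC:
# each local block is rescaled so its mean matches the global mean.

def compensate_shadows(img, block=8, eps=1e-6):
    img = img.astype(float)
    out = img.copy()
    global_mean = img.mean()
    h, w = img.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = img[i:i + block, j:j + block]
            gain = global_mean / (patch.mean() + eps)  # lighten dark blocks
            out[i:i + block, j:j + block] = np.clip(patch * gain, 0, 255)
    return out
```

After compensation, a half-shadowed face image has roughly uniform local brightness, which is what makes it "appropriate for face feature extraction."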
A Robust Shape Reconstruction Method for Facial Feature Point Detection.
Tan, Shuqiu; Chen, Dongyi; Guo, Chenggang; Huang, Zhiqi
2017-01-01
Facial feature point detection has seen great research advances in recent years, and numerous methods have been developed and applied in practical face analysis systems. It remains a quite challenging task, however, because of the large variability of expressions and gestures and the presence of occlusions in real-world photographs. In this paper, we present a robust sparse reconstruction method for face alignment problems. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries, termed the shape increment dictionary and the local appearance dictionary, are learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalizable, we select the best-matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than state-of-the-art methods.
Facial recognition using multisensor images based on localized kernel eigen spaces.
Gundimada, Satyanadh; Asari, Vijayan K
2009-06-01
A feature selection technique along with an information fusion procedure for improving the recognition accuracy of a visual and thermal image-based facial recognition system is presented in this paper. A novel modular kernel eigenspaces approach is developed and implemented on the phase congruency feature maps extracted from the visual and thermal images individually. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher dimensional spaces using kernel methods. The proposed localized nonlinear feature selection procedure helps to overcome the bottlenecks of illumination variations, partial occlusions, expression variations and variations due to temperature changes that affect the visual and thermal face recognition techniques. AR and Equinox databases are used for experimentation and evaluation of the proposed technique. The proposed feature selection procedure has greatly improved the recognition accuracy for both the visual and thermal images when compared to conventional techniques. Also, a decision level fusion methodology is presented which along with the feature selection procedure has outperformed various other face recognition techniques in terms of recognition accuracy.
Rim seal arrangement having pumping feature
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Ching-Pang; Myers, Caleb
A rim seal arrangement for a gas turbine engine includes a first seal face on a rotor component, and a second seal face on a stationary annular rim centered about a rotation axis of the rotor component. The second seal face is spaced from the first seal face along an axial direction to define a seal gap. The seal gap is located between a radially outer hot gas path and a radially inner rotor cavity. The first seal face has a plurality of circumferentially spaced depressions, each having a depth in an axial direction and extending along a radial extent of the first seal face. The depressions influence flow in the seal gap such that during rotation of the rotor component, fluid in the seal gap is pumped in a radially outward direction to prevent ingestion of a gas path fluid from the hot gas path into the rotor cavity.
Processing deficits for familiar and novel faces in patients with left posterior fusiform lesions.
Roberts, Daniel J; Lambon Ralph, Matthew A; Kim, Esther; Tainturier, Marie-Josephe; Beeson, Pelagie M; Rapcsak, Steven Z; Woollams, Anna M
2015-11-01
Pure alexia (PA) arises from damage to the left posterior fusiform gyrus (pFG) and the striking reading disorder that defines this condition has meant that such patients are often cited as evidence for the specialisation of this region to processing of written words. There is, however, an alternative view that suggests this region is devoted to processing of high acuity foveal input, which is particularly salient for complex visual stimuli like letter strings. Previous reports have highlighted disrupted processing of non-linguistic visual stimuli after damage to the left pFG, both for familiar and unfamiliar objects and also for novel faces. This study explored the nature of face processing deficits in patients with left pFG damage. Identification of famous faces was found to be compromised in both expressive and receptive tasks. Discrimination of novel faces was also impaired, particularly for those that varied in terms of second-order spacing information, and this deficit was most apparent for the patients with the more severe reading deficits. Interestingly, discrimination of faces that varied in terms of feature identity was considerably better in these patients and it was performance in this condition that was related to the size of the length effects shown in reading. This finding complements functional imaging studies showing left pFG activation for faces varying only in spacing and frontal activation for faces varying only on features. These results suggest that the sequential part-based processing strategy that promotes the length effect in the reading of these patients also allows them to discriminate between faces on the basis of feature identity, but processing of second-order configural information is most compromised due to their left pFG lesion. This study supports a view in which the left pFG is specialised for processing of high acuity foveal visual information that supports processing of both words and faces. Copyright © 2015 The Authors. 
Published by Elsevier Ltd. All rights reserved.
14. VIEW OF MST, FACING SOUTHEAST, AND LAUNCH PAD TAKEN ...
14. VIEW OF MST, FACING SOUTHEAST, AND LAUNCH PAD TAKEN FROM NORTHEAST PHOTO TOWER WITH WINDOW OPEN. FEATURES LEFT TO RIGHT: SOUTH TELEVISION CAMERA TOWER, SOUTHWEST PHOTO TOWER, LAUNCHER, UMBILICAL MAST, MST, AND OXIDIZER APRON. - Vandenberg Air Force Base, Space Launch Complex 3, Launch Pad 3 East, Napa & Alden Roads, Lompoc, Santa Barbara County, CA
ERIC Educational Resources Information Center
Crookes, Kate; Hayward, William G.
2012-01-01
Presenting a face inverted (upside down) disrupts perceptual sensitivity to the spacing between the features. Recently, it has been shown that this disruption is greater for vertical than horizontal changes in eye position. One explanation for this effect proposed that inversion disrupts the processing of long-range (e.g., eye-to-mouth distance)…
ERIC Educational Resources Information Center
de Heering, Adelaide; Schiltz, Christine
2013-01-01
Sensitivity to spacing information within faces improves with age and reaches maturity only at adolescence. In this study, we tested 6-16-year-old children's sensitivity to vertical spacing when the eyes or the mouth is the facial feature selectively manipulated. Despite the similar discriminability of these manipulations when they are embedded in…
Image preprocessing study on KPCA-based face recognition
NASA Astrophysics Data System (ADS)
Li, Xuan; Li, Dehua
2015-12-01
Face recognition, as an important biometric identification method with friendly, natural, and convenient advantages, has attracted more and more attention. This paper investigates a face recognition system comprising face detection, feature extraction, and recognition, mainly by researching the theory and key technology of various preprocessing methods in the face detection process and, using the KPCA method, by focusing on how recognition results differ across preprocessing methods. We choose the YCbCr color space for skin segmentation and integral projection for face location. Face images are preprocessed with erosion and dilation (the opening and closing operations) and an illumination compensation method, and then analyzed with a face recognition method based on kernel principal component analysis; the experiments were carried out on a typical face database, with the algorithms implemented on the MATLAB platform. Experimental results show that, under certain conditions, the kernel method based on the PCA algorithm makes the extracted features represent the original image information better because of its nonlinear feature extraction, and can thus obtain a higher recognition rate. In the image preprocessing stage, we found that different operations on the images can produce different results and hence different recognition rates in the recognition stage. At the same time, in kernel principal component analysis, the power (degree) of the polynomial kernel function can affect the recognition result.
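The KPCA step described above can be sketched in a few lines. Face images would be flattened to the row vectors of X; here the data are generic, and the kernel degree `d` plays the role of the polynomial power whose choice the abstract notes can affect the recognition rate:

```python
import numpy as np

# Minimal kernel PCA with a polynomial kernel: build the kernel matrix,
# center it in feature space, and project training samples onto the
# leading eigenvectors.

def kernel_pca(X, n_components=2, d=2):
    n = X.shape[0]
    K = (X @ X.T + 1.0) ** d                      # polynomial kernel
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one    # double-center K
    vals, vecs = np.linalg.eigh(Kc)               # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]   # take the largest
    vals, vecs = vals[idx], vecs[:, idx]
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))
    return Kc @ alphas                            # sample projections
```

A recognizer would then classify the projected features, e.g. by nearest neighbor, exactly as with linear PCA features.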
New nonlinear features for inspection, robotics, and face recognition
NASA Astrophysics Data System (ADS)
Casasent, David P.; Talukder, Ashit
1999-10-01
Classification of real-time X-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items (pistachio nuts). This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k-nearest-neighbor classifier. In this work, the MRDF is applied to standard features (rather than iconic data). The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC (receiver operating characteristic) data. Other applications of these new feature spaces in robotics and face recognition are also noted.
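The classification stage can be illustrated with a plain k-nearest-neighbor classifier as a hedged stand-in for the paper's modified k-NN (the MRDF features it consumes are not reproduced here; any fixed-length feature vectors work):

```python
from collections import Counter

# Plain k-NN: label a query by majority vote among the k training
# samples closest in squared Euclidean distance.

def knn_predict(train_X, train_y, x, k=3):
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(row, x)), label)
        for row, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

In the paper's pipeline the inputs would be MRDF-transformed feature vectors rather than the raw 2-D points used in this toy example.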
Face recognition increases during saccade preparation.
Lin, Hai; Rizak, Joshua D; Ma, Yuan-ye; Yang, Shang-chuan; Chen, Lin; Hu, Xin-tian
2014-01-01
Face perception is integral to the human perceptual system, as it underlies social interactions. Saccadic eye movements are frequently made to bring interesting visual information, such as faces, onto the fovea for detailed processing. Just before eye movement onset, the processing of some basic features of an object, such as its orientation, improves at the saccade landing point. Interestingly, there is also evidence that faces are processed in early visual processing stages, similar to basic features. However, it is not known whether this early enhancement of processing includes face recognition. In this study, three experiments were performed that mapped the timing of face presentation to the beginning of the eye movement in order to evaluate pre-saccadic face recognition. Faces were found to be processed similarly to simple objects immediately prior to saccadic movements. Starting ∼120 ms before a saccade to a target face, independent of whether or not the face was surrounded by other faces, face recognition gradually improved and the critical spacing of crowding decreased as saccade onset approached. These results suggest that an upcoming saccade prepares the visual system for new information about faces at the saccade landing site and may reduce the background in a crowd to target the intended face. This indicates an important role of pre-saccadic eye movement signals in human face recognition.
NASA Astrophysics Data System (ADS)
Houjoh, Haruo
1992-12-01
One specific feature of the aerodynamic sound produced at the face end region is that the radiation becomes weaker when the root spaces are filled, just as it does when the center distance is shortened. One would expect, however, that such actions make the air flow faster and consequently make the sound louder. This paper attempts to reveal the reason for this feature. First, the air flow induced by the pumping action of the gear pair was analyzed, regarding a series of root spaces as volume-varying cavities that have channels to adjacent cavities as well as exits/inlets at the face ends. The numerical analysis was verified by hot-wire anemometer measurements. Next, from the obtained flow response, the sound source was estimated to be a combination of symmetrically distributed simple sources. Taking the effect of either the center distance or root filling into consideration, it is shown that the simplified model can explain this feature rationally.
Kuo, Po-Chih; Chen, Yong-Sheng; Chen, Li-Fen
2018-05-01
The main challenge in decoding neural representations lies in linking neural activity to representational content or abstract concepts. The transformation from a neural-based to a low-dimensional representation may hold the key to encoding perceptual processes in the human brain. In this study, we developed a novel model by which to represent two changeable features of faces: face viewpoint and gaze direction. These features are embedded in spatiotemporal brain activity derived from magnetoencephalographic data. Our decoding results demonstrate that face viewpoint and gaze direction can be represented by manifold structures constructed from brain responses in the bilateral occipital face area and right superior temporal sulcus, respectively. Our results also show that the superposition of brain activity in the manifold space reveals the viewpoints of faces as well as directions of gazes as perceived by the subject. The proposed manifold representation model provides a novel opportunity to gain further insight into the processing of information in the human brain. © 2018 Wiley Periodicals, Inc.
Learning representative features for facial images based on a modified principal component analysis
NASA Astrophysics Data System (ADS)
Averkin, Anton; Potapov, Alexey
2013-05-01
The paper is devoted to facial image analysis and particularly addresses the problem of automatic evaluation of the attractiveness of human faces. We propose a new approach for automatic construction of a feature space based on a modified principal component analysis. The input data sets for the algorithm are learning sets of facial images rated by one person. The proposed approach allows one to extract features of an individual's subjective perception of facial beauty and to predict attractiveness values for new facial images that were not included in the learning data set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness ratings equals 0.89. This means that the new approach is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.
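The overall pipeline, project faces into a PCA feature space, fit a linear predictor of one rater's scores, and evaluate with a Pearson correlation on held-out faces, can be sketched on synthetic data standing in for rated facial images (this is a generic PCA-plus-regression sketch, not the paper's modified PCA):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))                 # 60 "faces", 10 raw features
w_true = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=60)    # one rater's scores

# PCA via SVD on centered data; all components are kept here so the
# linear fit can recover the rating function (a real system would
# truncate to the leading components)
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt.T                                  # PCA coordinates

# fit on 40 faces, predict ratings for the 20 held-out faces
coef, *_ = np.linalg.lstsq(Z[:40], y[:40], rcond=None)
pred = Z[40:] @ coef
r = np.corrcoef(pred, y[40:])[0, 1]            # Pearson correlation
```

On this synthetic linear data the held-out correlation is near 1; the paper's 0.89 on real faces reflects the harder, noisier real-world task.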
Implicit face prototype learning from geometric information.
Or, Charles C-F; Wilson, Hugh R
2013-04-19
There is evidence that humans implicitly learn an average or prototype of previously studied faces, as the unseen face prototype is falsely recognized as having been learned (Solso & McCarthy, 1981). Here we investigated the extent and nature of face prototype formation where observers' memory was tested after they studied synthetic faces defined purely in geometric terms in a multidimensional face space. We found a strong prototype effect: The basic results showed that the unseen prototype averaged from the studied faces was falsely identified as learned at a rate of 86.3%, whereas individual studied faces were identified correctly 66.3% of the time and the distractors were incorrectly identified as having been learned only 32.4% of the time. This prototype learning lasted at least 1 week. Face prototype learning occurred even when the studied faces were further from the unseen prototype than the median variation in the population. Prototype memory formation was evident in addition to memory formation of studied face exemplars as demonstrated in our models. Additional studies showed that the prototype effect can be generalized across viewpoints, and head shape and internal features separately contribute to prototype formation. Thus, implicit face prototype extraction in a multidimensional space is a very general aspect of geometric face learning. Copyright © 2013 Elsevier Ltd. All rights reserved.
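One geometric reason a similarity-based memory falsely recognizes the unseen prototype can be shown directly: the centroid of the studied faces minimizes mean squared distance to the studied set, so it is at least as "familiar" as any other probe. Synthetic coordinates stand in for the paper's synthetic faces:

```python
import numpy as np

# Prototype effect in a toy multidimensional face space.
rng = np.random.default_rng(1)
studied = rng.normal(size=(20, 6))     # 20 studied faces, 6 dimensions
prototype = studied.mean(axis=0)       # the unseen average face
distractor = rng.normal(size=6)        # a novel face

def mean_sq_dist(probe, faces):
    """Mean squared distance from a probe to a set of faces."""
    return float(((faces - probe) ** 2).sum(axis=1).mean())

# The centroid minimizes mean squared distance to the studied set, so
# under a distance-based familiarity rule it outscores any distractor
# and even the studied exemplars themselves.
```

This is only the geometric core of the effect; the paper's behavioral results (86.3% false recognition, persistence over a week, generalization across viewpoints) of course go well beyond it.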
2006-11-10
features based on shape are easy to come by. The Great Pyramids at Giza are instantly identified from space, even at the very coarse spatial... Pyramids at Giza, Egypt, are recognized by their triangular faces in this 1 m resolution Ikonos image, as are nearby rectangular tombs (credit: Space
2012-10-14
ISS033-E-012429 (14 Oct. 2012) --- Attached to the Earth-facing side of the Harmony node, the SpaceX Dragon commercial cargo craft is featured in this image photographed by an Expedition 33 crew member on the International Space Station. Dragon was berthed to Harmony on Oct. 10 and is scheduled to spend 18 days attached to the station.
2012-10-14
ISS033-E-012422 (14 Oct. 2012) --- Attached to the Earth-facing side of the Harmony node, the SpaceX Dragon commercial cargo craft is featured in this image photographed by an Expedition 33 crew member on the International Space Station. Dragon was berthed to Harmony on Oct. 10 and is scheduled to spend 18 days attached to the station.
2012-10-14
ISS033-E-012424 (14 Oct. 2012) --- Attached to the Earth-facing side of the Harmony node, the SpaceX Dragon commercial cargo craft is featured in this image photographed by an Expedition 33 crew member on the International Space Station. Dragon was berthed to Harmony on Oct. 10 and is scheduled to spend 18 days attached to the station.
The Face-Processing Network Is Resilient to Focal Resection of Human Visual Cortex
Jonas, Jacques; Gomez, Jesse; Maillard, Louis; Brissart, Hélène; Hossu, Gabriela; Jacques, Corentin; Loftus, David; Colnat-Coulbois, Sophie; Stigliani, Anthony; Barnett, Michael A.; Grill-Spector, Kalanit; Rossion, Bruno
2016-01-01
Human face perception requires a network of brain regions distributed throughout the occipital and temporal lobes with a right hemisphere advantage. Present theories consider this network as either a processing hierarchy beginning with the inferior occipital gyrus (occipital face area; IOG-faces/OFA) or a multiple-route network with nonhierarchical components. The former predicts that removing IOG-faces/OFA will detrimentally affect downstream stages, whereas the latter does not. We tested this prediction in a human patient (Patient S.P.) requiring removal of the right inferior occipital cortex, including IOG-faces/OFA. We acquired multiple fMRI measurements in Patient S.P. before and after a preplanned surgery and multiple measurements in typical controls, enabling both within-subject/across-session comparisons (Patient S.P. before resection vs Patient S.P. after resection) and between-subject/across-session comparisons (Patient S.P. vs controls). We found that the spatial topology and selectivity of downstream ipsilateral face-selective regions were stable 1 and 8 month(s) after surgery. Additionally, the reliability of distributed patterns of face selectivity in Patient S.P. before versus after resection was not different from across-session reliability in controls. Nevertheless, postoperatively, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1 of the resected hemisphere. Diffusion weighted imaging in Patient S.P. and controls identifies white matter tracts connecting retinotopic areas to downstream face-selective regions, which may contribute to the stable and plastic features of the face network in Patient S.P. after surgery. Together, our results support a multiple-route network of face processing with nonhierarchical components and shed light on stable and plastic features of high-level visual cortex following focal brain damage. 
SIGNIFICANCE STATEMENT Brain networks consist of interconnected functional regions commonly organized in processing hierarchies. Prevailing theories predict that damage to the input of the hierarchy will detrimentally affect later stages. We tested this prediction with multiple brain measurements in a rare human patient requiring surgical removal of the putative input to a network processing faces. Surprisingly, the spatial topology and selectivity of downstream face-selective regions are stable after surgery. Nevertheless, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1. White matter connections from outside the face network may support these stable and plastic features. As processing hierarchies are ubiquitous in biological and nonbiological systems, our results have pervasive implications for understanding the construction of resilient networks. PMID:27511014
Study on identifying deciduous forest by the method of feature space transformation
NASA Astrophysics Data System (ADS)
Zhang, Xuexia; Wu, Pengfei
2009-10-01
Thematic information extraction from remotely sensed imagery remains one of the puzzling problems facing remote sensing science, and many remote sensing scientists are diligently devoted to research in this domain. Methods of thematic information extraction fall into two kinds, visual interpretation and computer interpretation, and their developing direction is intellectualization and comprehensive modularization. This paper develops an intelligent feature space transformation method for extracting deciduous forest thematic information in Changping district of Beijing. Chinese-Brazil resources satellite images received in 2005 are used to extract the deciduous forest coverage area by the feature space transformation method and a linear spectral decomposition method, and the remote sensing result is similar to the woodland resource census data of the Chinese forestry bureau in 2004.
NASA Technical Reports Server (NTRS)
Fitzpatrick, Austin J.; Novati, Alexander; Fisher, Diane K.; Leon, Nancy J.; Netting, Ruth
2013-01-01
Space Place Prime is public engagement and education software for use on iPad. It targets a multi-generational audience with news, images, videos, and educational articles from the Space Place Web site and other NASA sources. New content is downloaded daily (or whenever the user accesses the app) via the wireless connection. In addition to the Space Place Web site, several NASA RSS feeds are tapped to provide new content. Content is retained for the previous several days, or some number of editions of each feed. All content is controlled on the server side, so features about the latest news, or changes to any content, can be made without updating the app in the Apple Store. It gathers many popular NASA features into one app. The interface is a boundless, slidable-in-any-direction grid of images, unique for each feature, and iconized as image, video, or article. A tap opens the feature. An alternate list mode presents menus of images, videos, and articles separately. Favorites can be tagged for permanent archive. Facebook, Twitter, and e-mail connections make any feature shareable.
Appearance-based face recognition and light-fields.
Gross, Ralph; Matthews, Iain; Baker, Simon
2004-04-01
Arguably the most important decision to be made when developing an object recognition algorithm is selecting the scene measurements or features on which to base the algorithm. In appearance-based object recognition, the features are chosen to be the pixel intensity values in an image of the object. These pixel intensities correspond directly to the radiance of light emitted from the object along certain rays in space. The set of all such radiance values over all possible rays is known as the plenoptic function or light-field. In this paper, we develop a theory of appearance-based object recognition from light-fields. This theory leads directly to an algorithm for face recognition across pose that uses as many images of the face as are available, from one upwards. All of the pixels, whichever image they come from, are treated equally and used to estimate the (eigen) light-field of the object. The eigen light-field is then used as the set of features on which to base recognition, analogously to how the pixel intensities are used in appearance-based face and object recognition.
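As a toy illustration of the appearance-based baseline that the eigen light-field generalises, the sketch below performs classic eigen-decomposition of pixel vectors and nearest-neighbour matching in coefficient space. The gallery data, dimensions, and names are invented for illustration and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "gallery": 5 identities, each a flattened 8x8 image (64-dim pixel vector).
gallery = rng.normal(size=(5, 64))

# Eigen-decomposition of the appearance space (classic eigenfaces; the paper
# generalises this step to light-fields rather than single images).
mean = gallery.mean(axis=0)
U, S, Vt = np.linalg.svd(gallery - mean, full_matrices=False)
basis = Vt[:3]                      # keep 3 principal components

def project(img):
    """Coefficients of an image in the eigen basis."""
    return basis @ (img - mean)

def identify(probe):
    """Nearest gallery identity in coefficient space."""
    coeffs = np.array([project(g) for g in gallery])
    return int(np.argmin(np.linalg.norm(coeffs - project(probe), axis=1)))

# A lightly corrupted version of identity 2 should still match identity 2.
probe = gallery[2] + 0.05 * rng.normal(size=64)
```

In the paper's formulation, the pixel vectors above would be replaced by (partially observed) light-field samples, with the eigen coefficients estimated by least squares from whichever rays are available.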
If You Give a Nurse a Cookie: Sharing Teaching Strategies for Nurse Educator Development.
Wingo, Nancy P
2017-01-01
Nurse educators often do not have time or a space to discuss ideas about effective teaching. To address this issue, an instructor at one school of nursing initiated Cookie Swap, a bimonthly, school-wide e-mail featuring stories about teaching strategies and tools used in face-to-face, online, and clinical courses. J Contin Educ Nurs. 2017;48(1):12-13. Copyright 2017, SLACK Incorporated.
Video-based face recognition via convolutional neural networks
NASA Astrophysics Data System (ADS)
Bao, Tianlong; Ding, Chunhui; Karmoshi, Saleem; Zhu, Ming
2017-06-01
Face recognition has been widely studied, but video-based face recognition remains a challenging task because of the low quality and large intra-class variation of face images captured from video. In this paper, we focus on two scenarios of video-based face recognition: 1) Still-to-Video (S2V) face recognition, i.e., querying a still face image against a gallery of video sequences; 2) Video-to-Still (V2S) face recognition, the converse of the S2V scenario. A novel method is proposed to map still and video face images into a Euclidean space with a carefully designed convolutional neural network; Euclidean metrics are then used to measure the distance between still and video images. Identities of paired still and video images are used as supervision. In the training stage, a joint loss function that measures the Euclidean distance between the predicted features of training pairs and expanding vectors of still images is optimized to minimize the intra-class variation, while the inter-class variation is guaranteed by the large margin of still images. Transferred features are finally learned via the designed convolutional neural network. Experiments are performed on the COX face dataset. Experimental results show that our method achieves reliable performance compared with other state-of-the-art methods.
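The matching stage described above (Euclidean metrics on learned features) can be sketched as follows. The embeddings are fabricated stand-ins for CNN outputs, and mean-pooling of frame features is an illustrative choice, not necessarily the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical embeddings: in the paper these come from a trained CNN; here we
# fabricate 4 identity "anchors" and derive still/video features from them.
anchors = rng.normal(size=(4, 16))
stills = anchors + 0.01 * rng.normal(size=(4, 16))                   # one still per identity
videos = anchors[:, None, :] + 0.05 * rng.normal(size=(4, 10, 16))   # 10 frames each

def video_feature(frames):
    """Pool frame embeddings into a single video descriptor (mean pooling)."""
    return frames.mean(axis=0)

def s2v_match(still, video_gallery):
    """Still-to-Video: query a still against a gallery of video sequences."""
    feats = np.array([video_feature(v) for v in video_gallery])
    return int(np.argmin(np.linalg.norm(feats - still, axis=1)))
```

The V2S scenario is symmetric: pool the query video's frames and take the nearest still in the gallery under the same Euclidean metric.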
The uncrowded window of object recognition
Pelli, Denis G; Tillman, Katharine A
2009-01-01
It is now emerging that vision is usually limited by object spacing rather than size. The visual system recognizes an object by detecting and then combining its features. ‘Crowding’ occurs when objects are too close together and features from several objects are combined into a jumbled percept. Here, we review the explosion of studies on crowding—in grating discrimination, letter and face recognition, visual search, selective attention, and reading—and find a universal principle, the Bouma law. The critical spacing required to prevent crowding is equal for all objects, although the effect is weaker between dissimilar objects. Furthermore, critical spacing at the cortex is independent of object position, and critical spacing at the visual field is proportional to object distance from fixation. The region where object spacing exceeds critical spacing is the ‘uncrowded window’. Observers cannot recognize objects outside of this window and its size limits the speed of reading and search. PMID:18828191
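The Bouma law reviewed here can be stated in a few lines of code. The proportionality factor of 0.5 is the commonly cited approximate value for human vision; exact values vary across observers and tasks:

```python
BOUMA_FACTOR = 0.5  # approximate proportionality constant in humans

def critical_spacing(eccentricity_deg):
    """Centre-to-centre spacing (deg) below which flankers crowd a target,
    proportional to the target's distance from fixation (Bouma law)."""
    return BOUMA_FACTOR * eccentricity_deg

def is_crowded(spacing_deg, eccentricity_deg):
    """An object is crowded when its neighbours fall inside critical spacing."""
    return spacing_deg < critical_spacing(eccentricity_deg)
```

The 'uncrowded window' is then simply the region of the visual field where actual object spacing exceeds `critical_spacing` at that eccentricity.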
Adaptive skin segmentation via feature-based face detection
NASA Astrophysics Data System (ADS)
Taylor, Michael J.; Morris, Tim
2014-05-01
Variations in illumination can have significant effects on the apparent colour of skin, which can be damaging to the efficacy of any colour-based segmentation approach. We attempt to overcome this issue by presenting a new adaptive approach, capable of generating skin colour models at run-time. Our approach adopts a Viola-Jones feature-based face detector, in a moderate-recall, high-precision configuration, to sample faces within an image, with an emphasis on avoiding potentially detrimental false positives. From these samples, we extract a set of pixels that are likely to be from skin regions, filter them according to their relative luma values in an attempt to eliminate typical non-skin facial features (eyes, mouths, nostrils, etc.), and hence establish a set of pixels that we can be confident represent skin. Using this representative set, we train a unimodal Gaussian function to model the skin colour in the given image in the normalised rg colour space - a combination of modelling approach and colour space that benefits us in a number of ways. A generated function can subsequently be applied to every pixel in the given image, and, hence, the probability that any given pixel represents skin can be determined. Segmentation of the skin, therefore, can be as simple as applying a binary threshold to the calculated probabilities. In this paper, we touch upon a number of existing approaches, describe the methods behind our new system, present the results of its application to arbitrary images of people with detectable faces, which we have found to be extremely encouraging, and investigate its potential to be used as part of real-time systems.
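A minimal sketch of the run-time skin model described above: fit a unimodal Gaussian in the normalised rg colour space to face-sampled pixels, then binary-threshold its likelihood. The sample values and threshold are hypothetical, and this uses an axis-aligned Gaussian where the paper's model may use a full covariance:

```python
import math

def to_rg(r, g, b):
    """Map RGB to the normalised rg colour space (discards overall intensity)."""
    s = r + g + b
    return (r / s, g / s) if s else (1 / 3, 1 / 3)

def fit_gaussian(samples_rg):
    """Fit an axis-aligned unimodal Gaussian to sampled skin pixels."""
    n = len(samples_rg)
    mr = sum(p[0] for p in samples_rg) / n
    mg = sum(p[1] for p in samples_rg) / n
    vr = sum((p[0] - mr) ** 2 for p in samples_rg) / n + 1e-9
    vg = sum((p[1] - mg) ** 2 for p in samples_rg) / n + 1e-9
    return (mr, mg, vr, vg)

def skin_likelihood(rgb, model):
    """Unnormalised Gaussian likelihood that a pixel is skin."""
    r, g = to_rg(*rgb)
    mr, mg, vr, vg = model
    return math.exp(-0.5 * ((r - mr) ** 2 / vr + (g - mg) ** 2 / vg))

# Face-sampled skin pixels (hypothetical values) train the model at run-time.
skin_samples = [to_rg(200, 140, 120), to_rg(210, 150, 125), to_rg(190, 135, 115)]
model = fit_gaussian(skin_samples)

def is_skin(rgb, model, threshold=0.5):
    """Binary threshold on the likelihood, as in the paper's final step."""
    return skin_likelihood(rgb, model) >= threshold
```

In the full system, `skin_samples` would come from luma-filtered pixels inside Viola-Jones face detections rather than hand-picked values.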
The Face-Processing Network Is Resilient to Focal Resection of Human Visual Cortex.
Weiner, Kevin S; Jonas, Jacques; Gomez, Jesse; Maillard, Louis; Brissart, Hélène; Hossu, Gabriela; Jacques, Corentin; Loftus, David; Colnat-Coulbois, Sophie; Stigliani, Anthony; Barnett, Michael A; Grill-Spector, Kalanit; Rossion, Bruno
2016-08-10
Human face perception requires a network of brain regions distributed throughout the occipital and temporal lobes with a right hemisphere advantage. Present theories consider this network as either a processing hierarchy beginning with the inferior occipital gyrus (occipital face area; IOG-faces/OFA) or a multiple-route network with nonhierarchical components. The former predicts that removing IOG-faces/OFA will detrimentally affect downstream stages, whereas the latter does not. We tested this prediction in a human patient (Patient S.P.) requiring removal of the right inferior occipital cortex, including IOG-faces/OFA. We acquired multiple fMRI measurements in Patient S.P. before and after a preplanned surgery and multiple measurements in typical controls, enabling both within-subject/across-session comparisons (Patient S.P. before resection vs Patient S.P. after resection) and between-subject/across-session comparisons (Patient S.P. vs controls). We found that the spatial topology and selectivity of downstream ipsilateral face-selective regions were stable 1 and 8 month(s) after surgery. Additionally, the reliability of distributed patterns of face selectivity in Patient S.P. before versus after resection was not different from across-session reliability in controls. Nevertheless, postoperatively, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1 of the resected hemisphere. Diffusion weighted imaging in Patient S.P. and controls identifies white matter tracts connecting retinotopic areas to downstream face-selective regions, which may contribute to the stable and plastic features of the face network in Patient S.P. after surgery. Together, our results support a multiple-route network of face processing with nonhierarchical components and shed light on stable and plastic features of high-level visual cortex following focal brain damage. 
Brain networks consist of interconnected functional regions commonly organized in processing hierarchies. Prevailing theories predict that damage to the input of the hierarchy will detrimentally affect later stages. We tested this prediction with multiple brain measurements in a rare human patient requiring surgical removal of the putative input to a network processing faces. Surprisingly, the spatial topology and selectivity of downstream face-selective regions are stable after surgery. Nevertheless, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1. White matter connections from outside the face network may support these stable and plastic features. As processing hierarchies are ubiquitous in biological and nonbiological systems, our results have pervasive implications for understanding the construction of resilient networks. Copyright © 2016 the authors 0270-6474/16/368426-16$15.00/0.
Locally Linear Embedding of Local Orthogonal Least Squares Images for Face Recognition
NASA Astrophysics Data System (ADS)
Hafizhelmi Kamaru Zaman, Fadhlan
2018-03-01
Dimensionality reduction is very important in face recognition since it ensures that high-dimensional data can be mapped to a lower-dimensional space without losing salient and integral facial information. Locally Linear Embedding (LLE) has previously been used for this purpose; however, computing LLE features requires substantial computation and resources. To overcome this limitation, we propose a locally-applied Local Orthogonal Least Squares (LOLS) model that can be used as an initial feature extraction step before applying LLE. By constructing least squares regression under orthogonal constraints, we can preserve more discriminant information in the local subspace of facial features while reducing the overall features into a more compact form that we call LOLS images. LLE can then be applied to the LOLS images to map their representation into a global coordinate system of much lower dimensionality. Several experiments carried out on publicly available face datasets such as AR, ORL, YaleB, and FERET under the Single Sample Per Person (SSPP) constraint demonstrate that our proposed method reduces the time required to compute LLE features while delivering better accuracy than either LLE or OLS alone. Comparison against several other feature extraction methods and a more recent feature-learning method, state-of-the-art Convolutional Neural Networks (CNN), also reveals the superiority of the proposed method under the SSPP constraint.
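For reference, the core LLE computation (Roweis & Saul) that the proposed LOLS step front-ends can be sketched as below. Neighbourhood size, regulariser, and the toy data are illustrative choices:

```python
import numpy as np

def lle(X, k=4, d=2, reg=1e-3):
    """Minimal Locally Linear Embedding sketch.
    X: (n, D) data; k: neighbours; d: output dimensionality."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        # k nearest neighbours of point i (excluding itself)
        dists = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(dists)[1:k + 1]
        # Reconstruction weights from a regularised local Gram matrix
        Z = X[nbrs] - X[i]
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(k)
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs] = w / w.sum()
    # Embedding: bottom eigenvectors of (I - W)^T (I - W), skipping the constant one
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:d + 1]

# Toy data: 30 points on a noisy 1-D curve embedded in 10-D space.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 30)
X = np.outer(t, rng.normal(size=10)) + 0.001 * rng.normal(size=(30, 10))
Y = lle(X, k=4, d=2)
```

The per-point weight solve is where the cost accumulates on raw high-dimensional images, which is why compressing to LOLS images first reduces the overall computation.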
Nie, Aiqing; Jiang, Jingguo; Fu, Qiao
2014-08-20
Previous research has found that conjunction faces (whose internal features, e.g. eyes, nose, and mouth, and external features, e.g. hairstyle and ears, are taken from separate studied faces) and feature faces (only some of whose features were studied) produce higher false alarms than both old and new faces (i.e. those that are exactly the same as the studied faces and those that have not been previously presented) in recognition. The event-related potentials (ERPs) elicited by conjunction and feature faces at recognition, however, have not yet been described; in addition, the contributions of different facial features to these ERPs have not been differentiated. To address these issues, the present study compared the ERPs elicited by old faces, conjunction faces (whose internal and external features came from two studied faces), old internal feature faces (whose internal features were studied), and old external feature faces (whose external features were studied) with those of new faces separately. The results showed that old faces elicited not only an early familiarity-related FN400, but also a more anteriorly distributed late old/new effect that reflected recollection. Conjunction faces evoked late brain waveforms similar to those of old internal feature faces, but not to those of old external feature faces. These results suggest that, at recognition, old faces hold higher familiarity than conjunction faces in their ERP profiles, and that internal facial features are more crucial than external ones in triggering the brain waveforms characterized as reflecting familiarity.
Music-Elicited Emotion Identification Using Optical Flow Analysis of Human Face
NASA Astrophysics Data System (ADS)
Kniaz, V. V.; Smirnova, Z. N.
2015-05-01
Human emotion identification from image sequences is in high demand nowadays. Possible applications range from the automatic smile-shutter function of consumer-grade digital cameras to Biofied Building technologies, which enable communication between a building space and its residents. The highly perceptual nature of human emotions makes their classification and identification complex. The main question arises from the subjective quality of emotional classification of the events that elicit human emotions. A variety of methods for the formal classification of emotions have been developed in musical psychology. This work focuses on identifying human emotions evoked by musical pieces using human face tracking and optical flow analysis. A facial feature tracking algorithm used for estimating facial feature speed and position is presented. Facial features were extracted from each image sequence using human face tracking with local binary pattern (LBP) features. Accurate relative speeds of facial features were estimated using optical flow analysis. The obtained relative positions and speeds were used as the output facial emotion vector. The algorithm was tested using original software and recorded image sequences. The proposed technique provides robust identification of human emotions elicited by musical pieces. The estimated models could be used for human emotion identification from image sequences in fields such as emotion-based musical backgrounds or mood-dependent radio.
Genetics Home Reference: Hajdu-Cheney syndrome
... of the face ( midface hypoplasia ), and a large space between the nose and upper lip (a long philtrum ). Some affected children are born with an opening in the roof of the mouth called a cleft palate or with a high arched palate. In affected adults, the facial features ...
Animatronics, Children and Computation
ERIC Educational Resources Information Center
Sempere, Andrew
2005-01-01
In this article, we present CTRL_SPACE: a design for a software environment with companion hardware, developed to introduce preliterate children to basic computational concepts by means of an animatronic face, whose individual features serve as an analogy for a programmable object. In addition to presenting the environment, this article briefly…
Tensor manifold-based extreme learning machine for 2.5-D face recognition
NASA Astrophysics Data System (ADS)
Chong, Lee Ying; Ong, Thian Song; Teoh, Andrew Beng Jin
2018-01-01
We explore the use of the Gabor regional covariance matrix (GRCM), a flexible matrix-based descriptor that embeds the Gabor features in the covariance matrix, as a 2.5-D facial descriptor and an effective means of feature fusion for 2.5-D face recognition problems. Despite its promise, matching is not a trivial problem for GRCM since it is a special instance of a symmetric positive definite (SPD) matrix that resides in non-Euclidean space as a tensor manifold. This implies that GRCM is incompatible with the existing vector-based classifiers and distance matchers. Therefore, we bridge the gap of the GRCM and extreme learning machine (ELM), a vector-based classifier for the 2.5-D face recognition problem. We put forward a tensor manifold-compliant ELM and its two variants by embedding the SPD matrix randomly into reproducing kernel Hilbert space (RKHS) via tensor kernel functions. To preserve the pair-wise distance of the embedded data, we orthogonalize the random-embedded SPD matrix. Hence, classification can be done using a simple ridge regressor, an integrated component of ELM, on the random orthogonal RKHS. Experimental results show that our proposed method is able to improve the recognition performance and further enhance the computational efficiency.
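The non-Euclidean geometry of SPD descriptors can be illustrated with the widely used log-Euclidean flattening, which, like the paper's tensor kernel embedding (though not identical to it), makes covariance matrices comparable by ordinary vector distances. The matrices below are toy 2x2 stand-ins for GRCM descriptors:

```python
import numpy as np

def logm_spd(S):
    """Matrix logarithm of a symmetric positive definite matrix,
    via its eigen-decomposition."""
    vals, vecs = np.linalg.eigh(S)
    return vecs @ np.diag(np.log(vals)) @ vecs.T

def log_euclidean_dist(A, B):
    """Log-Euclidean distance between two SPD matrices: flatten each to the
    tangent space with the matrix log, then compare with the Frobenius norm."""
    return np.linalg.norm(logm_spd(A) - logm_spd(B), ord="fro")

# Two toy covariance descriptors (SPD); identical matrices have distance 0.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.5, 0.2], [0.2, 1.2]])
```

Once flattened this way, SPD descriptors can be fed to vector-based classifiers such as the ridge regressor inside ELM, which is the gap the paper's RKHS embedding addresses.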
A novel weld seam detection method for space weld seam of narrow butt joint in laser welding
NASA Astrophysics Data System (ADS)
Shao, Wen Jun; Huang, Yu; Zhang, Yong
2018-02-01
Structured light measurement is widely used for weld seam detection owing to its high measurement precision and robustness. However, there is almost no geometrical deformation of a stripe projected onto a weld face whose seam width is less than 0.1 mm and which has no misalignment, so it is very difficult to ensure exact retrieval of the seam feature. This issue has become prominent as laser welding of butt joints in thin metal plates is widely applied. Moreover, simultaneous measurement of the seam width, seam center, and the normal vector of the weld face during the welding process is of great importance to welding quality but is rarely reported. Consequently, a vision-sensor-based measurement method for the space weld seam of a narrow butt joint is proposed in this article. Three laser stripes with different wavelengths are projected onto the weldment: two red laser stripes are used to measure the three-dimensional profile of the weld face by the principle of optical triangulation, and the third, green, laser stripe is used as a light source to measure the edge and centerline of the seam by the principle of passive vision sensing. A corresponding image processing algorithm is proposed to extract the centerlines of the red laser stripes as well as the seam feature. All three laser stripes are captured and processed in a single image so that the three-dimensional position of the space weld seam can be obtained simultaneously. Finally, experimental results reveal that the proposed method can meet the precision demands of space narrow butt joints.
Wavelet decomposition based principal component analysis for face recognition using MATLAB
NASA Astrophysics Data System (ADS)
Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish
2016-03-01
For the realization of face recognition systems in both static and real-time settings, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks, and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. The term face recognition stands for identifying a person from his or her facial features, and bears some resemblance to factor analysis, i.e. extraction of the principal components of an image. Principal component analysis is subject to some drawbacks, mainly poor discriminatory power and, in particular, the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial images in both the spatial and frequency domains. The experimental results show that this face recognition method achieves a significant percentage improvement in recognition rate as well as better computational efficiency.
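A minimal sketch of the pipeline just described: a one-level 2-D Haar decomposition keeps the low-frequency approximation band, and PCA then projects the reduced images onto their leading components. Dimensions and data are illustrative, and the paper may use a different wavelet and decomposition depth:

```python
import numpy as np

def haar_ll(img):
    """One-level 2-D Haar decomposition; return the low-low (approximation)
    band, i.e. a smoothed half-resolution image."""
    h = (img[0::2, :] + img[1::2, :]) / 2.0   # average adjacent rows
    return (h[:, 0::2] + h[:, 1::2]) / 2.0    # average adjacent columns

def pca_features(X, d):
    """Project centred row-vectors onto the top-d principal components."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return (X - mean) @ Vt[:d].T

rng = np.random.default_rng(3)
faces = rng.normal(size=(6, 16, 16))                 # toy 16x16 "face" images
ll = np.array([haar_ll(f).ravel() for f in faces])   # 64-dim after one level
features = pca_features(ll, d=4)
```

Because the eigen-decomposition now runs on 64-dimensional vectors instead of 256-dimensional ones, the dominant cost of finding eigenvectors drops substantially, which is the computational benefit the abstract claims.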
The evolution of face processing in primates
Parr, Lisa A.
2011-01-01
The ability to recognize faces is an important socio-cognitive skill that is associated with a number of cognitive specializations in humans. While numerous studies have examined the presence of these specializations in non-human primates, species where face recognition would confer distinct advantages in social situations, results have been mixed. The majority of studies in chimpanzees support homologous face-processing mechanisms with humans, but results from monkey studies appear largely dependent on the type of testing methods used. Studies that employ passive viewing paradigms, like the visual paired comparison task, report evidence of similarities between monkeys and humans, but tasks that use more stringent, operant response tasks, like the matching-to-sample task, often report species differences. Moreover, the data suggest that monkeys may be less sensitive than chimpanzees and humans to the precise spacing of facial features, in addition to the surface-based cues reflected in those features, information that is critical for the representation of individual identity. The aim of this paper is to provide a comprehensive review of the available data from face-processing tasks in non-human primates with the goal of understanding the evolution of this complex cognitive skill. PMID:21536559
Sparse Feature Extraction for Pose-Tolerant Face Recognition.
Abiantun, Ramzi; Prabhu, Utsav; Savvides, Marios
2014-10-01
Automatic face recognition performance has been steadily improving over years of research; however, it remains significantly affected by a number of factors, such as illumination, pose, expression, and resolution, that can impact matching scores. The focus of this paper is the pose problem, which remains largely overlooked in most real-world applications. Specifically, we focus on one-to-one matching scenarios where a query face image of arbitrary pose is matched against a set of gallery images. We propose a method that relies on two fundamental components: (a) a 3D modeling step to geometrically correct the viewpoint of the face; for this purpose, we extend a recent technique for efficient synthesis of 3D face models called the 3D Generic Elastic Model; (b) a sparse feature extraction step using subspace modeling and ℓ1-minimization to induce pose tolerance in coefficient space. This in turn enables the synthesis of an equivalent frontal-looking face, which can be used for recognition. We show significant improvements in verification rates compared to commercial matchers, and also demonstrate the resilience of the proposed method with respect to degrading input quality. We find that the proposed technique is able to match non-frontal images to other non-frontal images of varying angles.
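The ℓ1-minimization at the heart of step (b) can be sketched with the standard ISTA solver for the lasso problem; the dictionary and signal here are synthetic, and the paper's exact formulation and solver may differ:

```python
import numpy as np

def ista(D, y, lam=0.1, steps=200):
    """Iterative shrinkage-thresholding for min 0.5*||y - Dx||^2 + lam*||x||_1."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(steps):
        g = x + D.T @ (y - D @ x) / L          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(4)
D = rng.normal(size=(20, 50))
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary atoms
x_true = np.zeros(50)
x_true[[3, 17]] = [1.0, -0.8]                  # a 2-sparse ground-truth code
y = D @ x_true
x_hat = ista(D, y, lam=0.01, steps=500)
```

The sparse coefficients recovered this way are what make the representation tolerant to the corruption introduced by out-of-plane rotation, since only a few subspace atoms need to explain the observed face.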
Modeling Human Dynamics of Face-to-Face Interaction Networks
NASA Astrophysics Data System (ADS)
Starnini, Michele; Baronchelli, Andrea; Pastor-Satorras, Romualdo
2013-04-01
Face-to-face interaction networks describe social interactions in human gatherings, and are the substrate for processes such as epidemic spreading and gossip propagation. The bursty nature of human behavior characterizes many aspects of empirical data, such as the distribution of conversation lengths, of conversations per person, or of interconversation times. Despite several recent attempts, a general theoretical understanding of the global picture emerging from data is still lacking. Here we present a simple model that reproduces quantitatively most of the relevant features of empirical face-to-face interaction networks. The model describes agents that perform a random walk in a two-dimensional space and are characterized by an attractiveness whose effect is to slow down the motion of people around them. The proposed framework sheds light on the dynamics of human interactions and can improve the modeling of dynamical processes taking place on the ensuing dynamical social networks.
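A simplified version of the model can be simulated directly: agents random-walk in a box, and a neighbour's attractiveness reduces an agent's probability of moving away. The box size, interaction range, and step size below are illustrative choices, not the paper's calibrated parameters:

```python
import random

def step(agents, box=10.0, v=0.5):
    """One update: each agent moves with probability one minus the maximum
    attractiveness among agents within interaction range (so attractive
    neighbours slow people down). Neighbour check ignores wrap-around
    for simplicity."""
    for a in agents:
        near = [b for b in agents if b is not a
                and abs(b["x"] - a["x"]) < 1 and abs(b["y"] - a["y"]) < 1]
        p_move = 1.0 - max((b["attr"] for b in near), default=0.0)
        if random.random() < p_move:
            a["x"] = (a["x"] + random.uniform(-v, v)) % box
            a["y"] = (a["y"] + random.uniform(-v, v)) % box

random.seed(5)
agents = [{"x": random.uniform(0, 10), "y": random.uniform(0, 10),
           "attr": random.random()} for _ in range(20)]
for _ in range(50):
    step(agents)
```

Recording which pairs are within range at each step yields a time-varying contact network whose conversation-length and inter-contact distributions can then be compared against the empirical data.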
An adaptation study of internal and external features in facial representations.
Hills, Charlotte; Romano, Kali; Davies-Thompson, Jodie; Barton, Jason J S
2014-07-01
Prior work suggests that internal features contribute more than external features to face processing. Whether this asymmetry also holds for the mental representations of faces is not known. We used face adaptation to determine whether the internal and external features of faces contribute differently to the representation of facial identity, whether this was affected by familiarity, and whether the results differed if the features were presented in isolation or as part of a whole face. In a first experiment, subjects performed an identity adaptation study with famous and novel faces, in which the adapting stimuli were whole faces, the internal features alone, or the external features alone. In a second experiment, the same faces were used, but the adapting internal and external features were superimposed on whole faces that were ambiguous in identity. The first experiment showed larger aftereffects for unfamiliar faces and greater aftereffects from internal than from external features, the latter being true for both familiar and unfamiliar faces. When internal and external features were presented in a whole-face context in the second experiment, aftereffects from either internal or external features were smaller than those from the whole face and did not differ from each other. While we reproduce the greater importance of internal features when presented in isolation, we find this is equally true for familiar and unfamiliar faces. The dominant influence of internal features is reduced when they are integrated into a whole-face context, suggesting another facet of expert face processing. Copyright © 2014 Elsevier B.V. All rights reserved.
A Generic multi-dimensional feature extraction method using multiobjective genetic programming.
Zhang, Yang; Rockett, Peter I
2009-01-01
In this paper, we present a generic feature extraction method for pattern classification using multiobjective genetic programming. This not only evolves the (near-)optimal set of mappings from a pattern space to a multi-dimensional decision space, but also simultaneously optimizes the dimensionality of that decision space. The presented framework evolves vector-to-vector feature extractors that maximize class separability. We demonstrate the efficacy of our approach by making statistically-founded comparisons with a wide variety of established classifier paradigms over a range of datasets and find that for most of the pairwise comparisons, our evolutionary method delivers statistically smaller misclassification errors. At very worst, our method displays no statistical difference in a few pairwise comparisons with established classifier/dataset combinations; crucially, none of the misclassification results produced by our method is worse than any comparator classifier. Although principally focused on feature extraction, feature selection is also performed as an implicit side effect; we show that both feature extraction and selection are important to the success of our technique. The presented method has the practical consequence of obviating the need to exhaustively evaluate a large family of conventional classifiers when faced with a new pattern recognition problem in order to attain a good classification accuracy.
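The multiobjective selection underlying such evolution rests on Pareto dominance. The sketch below computes the non-dominated front over hypothetical (misclassification error, decision-space dimensionality) pairs, both treated as minimisation objectives:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated set: the candidate feature extractors the evolution keeps."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical candidates as (error, decision-space dimensionality) pairs.
candidates = [(0.10, 5), (0.12, 3), (0.10, 4), (0.20, 2), (0.25, 6)]
front = pareto_front(candidates)
```

Here (0.10, 5) is discarded because (0.10, 4) achieves the same error with a smaller decision space, which mirrors how the framework simultaneously optimises class separability and dimensionality.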
2017-08-14
A substantial coronal hole rotated into a position where it is facing Earth (Aug. 9-11, 2017). Coronal holes are areas of open magnetic field that spew out charged particles as solar wind that spreads into space. If that solar wind interacts with our own magnetosphere it can generate aurora. In this view of the sun in extreme ultraviolet light, the coronal hole appears as the dark stretch near the center of the sun. It was the most distinctive feature on the sun over the past week. Movies are available at https://photojournal.jpl.nasa.gov/catalog/PIA21874
Contributions of individual face features to face discrimination.
Logan, Andrew J; Gordon, Gael E; Loffler, Gunter
2017-08-01
Faces are highly complex stimuli that contain a host of information. Such complexity poses the following questions: (a) do observers exhibit preferences for specific information? (b) how does sensitivity to individual face parts compare? These questions were addressed by quantifying sensitivity to different face features. Discrimination thresholds were determined for synthetic faces under the following conditions: (i) 'full face': all face features visible; (ii) 'isolated feature': single feature presented in isolation; (iii) 'embedded feature': all features visible, but only one feature modified. Mean threshold elevations for isolated features, relative to full-faces, were 0.84x, 1.08, 2.12, 3.34, 4.07 and 4.47 for head-shape, hairline, nose, mouth, eyes and eyebrows respectively. Hence, when two full faces can be discriminated at threshold, the difference between the eyes is about four times less than what is required when discriminating between isolated eyes. In all cases, sensitivity was higher when features were presented in isolation than when they were embedded within a face context (threshold elevations of 0.94x, 1.74, 2.67, 2.90, 5.94 and 9.94). This reveals a specific pattern of sensitivity to face information. Observers are between two and four times more sensitive to external than internal features. The pattern for internal features (higher sensitivity for the nose, compared to mouth, eyes and eyebrows) is consistent with lower sensitivity for those parts affected by facial dynamics (e.g. facial expressions). That isolated features are easier to discriminate than embedded features supports a holistic face processing mechanism which impedes extraction of information about individual features from full faces. Copyright © 2017 Elsevier Ltd. All rights reserved.
Discriminating Projections for Estimating Face Age in Wild Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tokola, Ryan A; Bolme, David S; Ricanek, Karl
2014-01-01
We introduce a novel approach to estimating the age of a human from a single uncontrolled image. Current face age estimation algorithms work well in highly controlled images, and some are robust to changes in illumination, but it is usually assumed that images are close to frontal. This bias is clearly seen in the datasets that are commonly used to evaluate age estimation, which either entirely or mostly consist of frontal images. Using pose-specific projections, our algorithm maps image features into a pose-insensitive latent space that is discriminative with respect to age. Age estimation is then performed using a multi-class SVM. We show that our approach outperforms other published results on the Images of Groups dataset, which is the only age-related dataset with a non-trivial number of off-axis face images, and that we are competitive with recent age estimation algorithms on the mostly-frontal FG-NET dataset. We also experimentally demonstrate that our feature projections introduce insensitivity to pose.
View of HTV3 berthed to Node 2
2012-07-27
ISS032-E-010473 (27 July 2012) --- The unpiloted Japan Aerospace Exploration Agency (JAXA) H-II Transfer Vehicle (HTV-3) is featured in this image photographed by an Expedition 32 crew member shortly after the HTV-3 was berthed to the Earth-facing port of the International Space Station's Harmony node using the Canadarm2 robotic arm. The attachment was completed at 10:34 a.m. (EDT) on July 27, 2012. Earth's horizon and the blackness of space provide the backdrop for the scene.
View of HTV3 berthed to Node 2
2012-07-27
ISS032-E-010464 (27 July 2012) --- The unpiloted Japan Aerospace Exploration Agency (JAXA) H-II Transfer Vehicle (HTV-3) is featured in this image photographed by an Expedition 32 crew member shortly after the HTV-3 was berthed to the Earth-facing port of the International Space Station's Harmony node using the Canadarm2 robotic arm. The attachment was completed at 10:34 a.m. (EDT) on July 27, 2012. Earth's horizon and the blackness of space provide the backdrop for the scene.
View of HTV3 berthed to Node 2
2012-07-27
ISS032-E-010476 (27 July 2012) --- The unpiloted Japan Aerospace Exploration Agency (JAXA) H-II Transfer Vehicle (HTV-3) is featured in this image photographed by an Expedition 32 crew member shortly after the HTV-3 was berthed to the Earth-facing port of the International Space Station's Harmony node using the Canadarm2 robotic arm. The attachment was completed at 10:34 a.m. (EDT) on July 27, 2012. Earth's horizon and the blackness of space provide the backdrop for the scene.
2015-01-01
Color is one of the most prominent features of an image and is used in many skin and face detection applications. Color space transformation is widely used by researchers to improve face and skin detection performance. Despite substantial research efforts in this area, choosing a color space that performs well for skin and face classification under illumination variations, differing camera characteristics, and diverse skin tones has remained an open issue. This research proposes a new three-dimensional hybrid color space, termed SKN, that employs a Genetic Algorithm heuristic and Principal Component Analysis to find the optimal representation of human skin color across over seventeen existing color spaces. The Genetic Algorithm searches for the color component combination that maximizes skin detection accuracy, while Principal Component Analysis projects the optimal Genetic Algorithm solution to a lower-dimensional space. Pixel-wise skin detection was used to evaluate the performance of the proposed color space. We employed four classifiers, namely Random Forest, Naïve Bayes, Support Vector Machine, and Multilayer Perceptron, to generate the human skin color predictive model. The proposed color space was compared to several existing color spaces and shows superior results in terms of pixel-wise skin detection accuracy. Experimental results show that, using the Random Forest classifier, the proposed SKN color space obtained an average F-score and True Positive Rate of 0.953 and a False Positive Rate of 0.0482, outperforming the existing color spaces in terms of pixel-wise skin detection accuracy. The results also indicate that, among the classifiers used in this study, Random Forest is the most suitable for pixel-wise skin detection applications. PMID:26267377
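The select-then-project pipeline can be illustrated end to end on synthetic pixels. Everything below is a hedged sketch: the candidate component pool, the "winning" subset, and the data are invented for illustration; in the actual method a genetic algorithm supplies the subset and real skin/non-skin pixels supply the fitness signal:

```python
import numpy as np

rng = np.random.default_rng(1)
pixels = rng.integers(0, 256, size=(1000, 3)).astype(float)  # synthetic RGB pixels
R, G, B = pixels.T

# Candidate components drawn from several color spaces (a small stand-in for
# the paper's pool of seventeen spaces): RGB plus YCbCr and normalised-rgb terms.
Y  = 0.299 * R + 0.587 * G + 0.114 * B
Cb = 128 - 0.168736 * R - 0.331264 * G + 0.5 * B
Cr = 128 + 0.5 * R - 0.418688 * G - 0.081312 * B
s = R + G + B + 1e-9
pool = np.column_stack([R, G, B, Y, Cb, Cr, R / s, G / s])

# A GA would search over subsets scored by skin-detection accuracy;
# here we simply pick one hypothetical winner.
selected = pool[:, [1, 4, 5, 6]]             # e.g. G, Cb, Cr, normalised r

# PCA projects the winning combination down to a 3-D hybrid space.
centered = selected - selected.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
skn_like = centered @ Vt[:3].T               # 3-D "SKN-like" coordinates
```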
From scores to face templates: a model-based approach.
Mohanty, Pranab; Sarkar, Sudeep; Kasturi, Rangachar
2007-12-01
Regeneration of templates from match scores has security and privacy implications related to any biometric authentication system. We propose a novel paradigm to reconstruct face templates from match scores using a linear approach. It proceeds by first modeling the behavior of the given face recognition algorithm by an affine transformation. The goal of the modeling is to approximate the distances computed by a face recognition algorithm between two faces by distances between points, representing these faces, in an affine space. Given this space, templates from an independent image set (break-in) are matched only once with the enrolled template of the targeted subject and match scores are recorded. These scores are then used to embed the targeted subject in the approximating affine (non-orthogonal) space. Given the coordinates of the targeted subject in the affine space, the original template of the targeted subject is reconstructed using the inverse of the affine transformation. We demonstrate our ideas using three, fundamentally different, face recognition algorithms: Principal Component Analysis (PCA) with Mahalanobis cosine distance measure, Bayesian intra-extrapersonal classifier (BIC), and a feature-based commercial algorithm. To demonstrate the independence of the break-in set with the gallery set, we select face templates from two different databases: Face Recognition Grand Challenge (FRGC) and Facial Recognition Technology (FERET) Database (FERET). With an operational point set at 1 percent False Acceptance Rate (FAR) and 99 percent True Acceptance Rate (TAR) for 1,196 enrollments (FERET gallery), we show that at most 600 attempts (score computations) are required to achieve a 73 percent chance of breaking in as a randomly chosen target subject for the commercial face recognition system. With similar operational set up, we achieve a 72 percent and 100 percent chance of breaking in for the Bayesian and PCA based face recognition systems, respectively. 
With three different levels of score quantization, we achieve 69 percent, 68 percent and 49 percent probability of break-in, indicating the robustness of our proposed scheme to score quantization. We also show that the proposed reconstruction scheme has 47 percent more probability of breaking in as a randomly chosen target subject for the commercial system as compared to a hill climbing approach with the same number of attempts. Given that the proposed template reconstruction method uses distinct face templates to reconstruct faces, this work exposes a more severe form of vulnerability than a hill climbing kind of attack where incrementally different versions of the same face are used. Also, the ability of the proposed approach to reconstruct actual face templates of the users increases privacy concerns in biometric systems.
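The affine-space modelling at the heart of this attack can be approximated with classical multidimensional scaling (MDS): given only the pairwise distances (match scores) among break-in templates, one recovers coordinates whose geometry matches the matcher's, after which a target's scores against the break-in set pin down its coordinates. The sketch below is an assumption-laden stand-in, using synthetic templates and exact Euclidean scores rather than the authors' algorithm or a real matcher:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical break-in set: 20 face templates in a 5-D feature space.
T = rng.normal(size=(20, 5))

# The attacker only observes distances (match scores) between templates.
D = np.linalg.norm(T[:, None, :] - T[None, :, :], axis=-1)

# Classical MDS: recover an embedding whose pairwise distances reproduce D,
# mirroring the paper's idea of modelling the matcher with an affine space.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
Bmat = -0.5 * J @ (D ** 2) @ J               # double-centered squared distances
vals, vecs = np.linalg.eigh(Bmat)
order = np.argsort(vals)[::-1][:5]
emb = vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

D_emb = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
# D_emb matches D up to numerical error, so the score geometry is recovered.
```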
Palmprint and Face Multi-Modal Biometric Recognition Based on SDA-GSVD and Its Kernelization
Jing, Xiao-Yuan; Li, Sheng; Li, Wen-Qian; Yao, Yong-Fang; Lan, Chao; Lu, Jia-Sen; Yang, Jing-Yu
2012-01-01
When extracting discriminative features from multimodal data, current methods rarely concern themselves with the data distribution. In this paper, we present an assumption that is consistent with the viewpoint of discrimination, that is, a person's overall biometric data should be regarded as one class in the input space, and his different biometric data can form different Gaussian distributions, i.e., different subclasses. Hence, we propose a novel multimodal feature extraction and recognition approach based on subclass discriminant analysis (SDA). Specifically, one person's different bio-data are treated as different subclasses of one class, and a transformed space is calculated, where the difference among subclasses belonging to different persons is maximized, and the difference within each subclass is minimized. Then, the obtained multimodal features are used for classification. Two solutions are presented to overcome the singularity problem encountered in calculation: using PCA preprocessing, and employing the generalized singular value decomposition (GSVD) technique. Further, we provide nonlinear extensions of SDA-based multimodal feature extraction, that is, feature fusion based on KPCA-SDA and KSDA-GSVD. In KPCA-SDA, we first apply Kernel PCA to each single modality before performing SDA. In KSDA-GSVD, we directly perform Kernel SDA to fuse multimodal data, applying GSVD to avoid the singularity problem. For simplicity, two typical types of biometric data are considered in this paper, i.e., palmprint data and face data. Compared with several representative multimodal biometric recognition methods, experimental results show that our approaches outperform related multimodal recognition methods and KSDA-GSVD achieves the best recognition performance. PMID:22778600
Face-space: A unifying concept in face recognition research.
Valentine, Tim; Lewis, Michael B; Hills, Peter J
2016-10-01
The concept of a multidimensional psychological space, in which faces can be represented according to their perceived properties, is fundamental to the modern theorist in face processing. Yet the idea was not clearly expressed until 1991. The background that led to the development of face-space is explained, and its continuing influence on theories of face processing is discussed. Research that has explored the properties of the face-space and sought to understand caricature, including facial adaptation paradigms, is reviewed. Face-space as a theoretical framework for understanding the effect of ethnicity and the development of face recognition is evaluated. Finally, two applications of face-space in the forensic setting are discussed. From initially being presented as a model to explain distinctiveness, inversion, and the effect of ethnicity, face-space has become a central pillar in many aspects of face processing. It is currently being developed to help us understand adaptation effects with faces. While being in principle a simple concept, face-space has shaped, and continues to shape, our understanding of face perception.
Effects of face feature and contour crowding in facial expression adaptation.
Liu, Pan; Montaser-Kouhsari, Leila; Xu, Hong
2014-12-01
Prolonged exposure to a visual stimulus, such as a happy face, biases the perception of a subsequently presented neutral face toward sadness, a phenomenon known as face adaptation. Face adaptation is affected by the visibility or awareness of the adapting face. However, whether it is affected by the discriminability of the adapting face is largely unknown. In the current study, we used crowding to manipulate the discriminability of the adapting face and tested its effect on face adaptation. Instead of presenting flanking faces near the target face, we shortened the distance between facial features (internal feature crowding) and reduced the size of the face contour (external contour crowding) to introduce crowding. In our first experiment, we asked whether internal feature crowding or external contour crowding is more effective in inducing a crowding effect. We found that combining internal feature and external contour crowding, but not either of them alone, induced a significant crowding effect. In Experiment 2, we went on to investigate its effect on adaptation. We found that both internal feature crowding and external contour crowding reduced the facial expression aftereffect (FEA) significantly. However, we did not find a significant correlation between the discriminability of the adapting face and its FEA. Interestingly, we found a significant correlation between the discriminabilities of the adapting and test faces. Experiment 3 found that the reduced adaptation aftereffect under combined crowding by the external face contour and the internal facial features cannot be decomposed linearly into the effects of the face contour and the facial features. This suggests a nonlinear integration between facial features and face contour in face adaptation.
NASA Astrophysics Data System (ADS)
Kawata, Y.; Niki, N.; Ohmatsu, H.; Aokage, K.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, M.
2015-03-01
The high resolution of modern CT scanners has improved the detection of lung cancers. The recent release of positive results from the National Lung Screening Trial (NLST) in the US showed that CT screening does in fact have a positive impact on the reduction of lung cancer-related mortality. While this study demonstrates the efficacy of CT-based screening, physicians often face the problem of deciding appropriate management strategies for maximizing patient survival and preserving lung function. Several key manifold-learning approaches efficiently reveal intrinsic low-dimensional structures latent in high-dimensional data spaces. This study was performed to investigate whether dimensionality reduction can identify embedded structures in the CT histogram feature space of non-small-cell lung cancer (NSCLC) to improve the performance of predicting the likelihood of RFS for patients with NSCLC.
Faciotopy—A face-feature map with face-like topology in the human occipital face area
Henriksson, Linda; Mur, Marieke; Kriegeskorte, Nikolaus
2015-01-01
The occipital face area (OFA) and fusiform face area (FFA) are brain regions thought to be specialized for face perception. However, their intrinsic functional organization and status as cortical areas with well-defined boundaries remains unclear. Here we test these regions for “faciotopy”, a particular hypothesis about their intrinsic functional organisation. A faciotopic area would contain a face-feature map on the cortical surface, where cortical patches represent face features and neighbouring patches represent features that are physically neighbouring in a face. The faciotopy hypothesis is motivated by the idea that face regions might develop from a retinotopic protomap and acquire their selectivity for face features through natural visual experience. Faces have a prototypical configuration of features, are usually perceived in a canonical upright orientation, and are frequently fixated in particular locations. To test the faciotopy hypothesis, we presented images of isolated face features at fixation to subjects during functional magnetic resonance imaging. The responses in V1 were best explained by low-level image properties of the stimuli. OFA, and to a lesser degree FFA, showed evidence for faciotopic organization. When a single patch of cortex was estimated for each face feature, the cortical distances between the feature patches reflected the physical distance between the features in a face. Faciotopy would be the first example, to our knowledge, of a cortical map reflecting the topology, not of a part of the organism itself (its retina in retinotopy, its body in somatotopy), but of an external object of particular perceptual significance. PMID:26235800
Lu, Jiwen; Erin Liong, Venice; Zhou, Jie
2017-08-09
In this paper, we propose a simultaneous local binary feature learning and encoding (SLBFLE) approach for both homogeneous and heterogeneous face recognition. Unlike existing hand-crafted face descriptors such as local binary pattern (LBP) and Gabor features which usually require strong prior knowledge, our SLBFLE is an unsupervised feature learning approach which automatically learns face representation from raw pixels. Unlike existing binary face descriptors such as the LBP, discriminant face descriptor (DFD), and compact binary face descriptor (CBFD) which use a two-stage feature extraction procedure, our SLBFLE jointly learns binary codes and the codebook for local face patches so that discriminative information from raw pixels from face images of different identities can be obtained by using a one-stage feature learning and encoding procedure. Moreover, we propose a coupled simultaneous local binary feature learning and encoding (C-SLBFLE) method to make the proposed approach suitable for heterogeneous face matching. Unlike most existing coupled feature learning methods which learn a pair of transformation matrices for each modality, we exploit both the common and specific information from heterogeneous face samples to characterize their underlying correlations. Experimental results on six widely used face datasets are presented to demonstrate the effectiveness of the proposed method.
Kruskal-Wallis-based computationally efficient feature selection for face recognition.
Ali Khan, Sajid; Hussain, Ayyaz; Basit, Abdul; Akram, Sheeraz
2014-01-01
Face recognition has attained great importance in today's technological world. Most existing work uses frontal face images for classification; however, these techniques fail when applied to real-world face images. The proposed technique effectively extracts the prominent facial features. Many of these features are redundant and do not contribute to representing the face. To eliminate the redundant features, a computationally efficient algorithm based on the Kruskal-Wallis test is used to select the more discriminative face features. The extracted features are then passed to the classification step, where different classifiers are ensembled to enhance the recognition accuracy rate, as a single classifier is unable to achieve high accuracy. Experiments are performed on standard face database images and the results are compared with existing techniques.
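A minimal version of Kruskal-Wallis-based feature selection can be written with `scipy.stats.kruskal`: rank each feature by its H-statistic across identity groups and keep the top scorers. The data below are synthetic and the top-k cutoff is an arbitrary illustrative choice, not the paper's configuration:

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(3)

# Synthetic stand-in for extracted face features: 60 samples, 3 identities,
# 8 features; feature 0 is made discriminative, the rest are noise.
labels = np.repeat([0, 1, 2], 20)
X = rng.normal(size=(60, 8))
X[:, 0] += labels * 2.0                      # identity-dependent shift

def kruskal_scores(X, labels):
    """H-statistic per feature: higher means better class separation."""
    groups = [X[labels == c] for c in np.unique(labels)]
    return np.array([kruskal(*[g[:, j] for g in groups]).statistic
                     for j in range(X.shape[1])])

scores = kruskal_scores(X, labels)
top4 = np.argsort(scores)[::-1][:4]          # keep the 4 most discriminative
```

Being rank-based, the Kruskal-Wallis test needs no normality assumption on the feature distributions, which is one reason it suits heterogeneous face features.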
Near infrared and visible face recognition based on decision fusion of LBP and DCT features
NASA Astrophysics Data System (ADS)
Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan
2018-03-01
Visible-light face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near-infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and low signal-to-noise ratio (SNR). Therefore, near-infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In order to extract the discriminative complementary features between near-infrared and visible images, in this paper we propose a novel near-infrared and visible face fusion recognition algorithm based on DCT and LBP features. Firstly, the effective features in the near-infrared face image are extracted from the low-frequency part of the DCT coefficients and the partition histograms of the LBP operator. Secondly, the LBP features of the visible-light face image are extracted to compensate for the missing detail features of the near-infrared face image. Then, the LBP features of the visible-light face image and the DCT and LBP features of the near-infrared face image are sent to separate classifiers for labeling. Finally, a decision-level fusion strategy is used to obtain the final recognition result. The method is tested on the HITSZ Lab2 visible and near-infrared face database. The experimental results show that the proposed method extracts the complementary features of near-infrared and visible face images and improves the robustness of unconstrained face recognition. Especially in the small-training-sample setting, the recognition rate of the proposed method reaches 96.13%, a significant improvement over the 92.75% of the method based on statistical feature fusion.
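The two feature extractors named in the abstract, LBP histograms and low-frequency DCT coefficients, can be sketched as follows. This is a generic textbook formulation (basic 8-neighbour LBP over one whole image, top-left DCT block), not necessarily the exact variants, partitionings, or parameters the authors used:

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(4)
face = rng.random((32, 32))                  # stand-in for a face image

def lbp_histogram(img):
    """Basic 8-neighbour LBP, pooled into a normalised 256-bin histogram."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
        codes |= (neigh >= c).astype(np.int32) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def dct_low_freq(img, k=8):
    """Keep the top-left k x k block of 2-D DCT coefficients."""
    coeffs = dct(dct(img, axis=0, norm='ortho'), axis=1, norm='ortho')
    return coeffs[:k, :k].ravel()

lbp_feat = lbp_histogram(face)               # texture detail (visible channel)
dct_feat = dct_low_freq(face)                # low-frequency shape (NIR channel)
```

In the paper, each feature stream is classified separately and the labels are combined by decision-level fusion; the fusion rule itself is not specified in the abstract.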
Where do spontaneous first impressions of faces come from?
Over, Harriet; Cook, Richard
2018-01-01
Humans spontaneously attribute a wide range of traits to strangers based solely on their facial features. These first impressions are known to exert striking effects on our choices and behaviours. In this paper, we provide a theoretical account of the origins of these spontaneous trait inferences. We describe a novel framework ('Trait Inference Mapping') in which trait inferences are products of mappings between locations in 'face space' and 'trait space'. These mappings are acquired during ontogeny and allow excitation of face representations to propagate automatically to associated trait representations. This conceptualization provides a framework within which the relative contribution of ontogenetic experience and genetic inheritance can be considered. Contrary to many existing ideas about the origins of trait inferences, we propose only a limited role for innate mechanisms and natural selection. Instead, our model explains inter-observer consistency by appealing to cultural learning and physiological responses that facilitate or 'canalise' particular face-trait mappings. Our TIM framework has both theoretical and substantive implications, and can be extended to trait inferences from non-facial cues to provide a unified account of first impressions.
The role of external features in face recognition with central vision loss: A pilot study
Bernard, Jean-Baptiste; Chung, Susana T.L.
2016-01-01
Purpose We evaluated how the performance for recognizing familiar face images depends on the internal (eyebrows, eyes, nose, mouth) and external face features (chin, outline of face, hairline) in individuals with central vision loss. Methods In Experiment 1, we measured eye movements for four observers with central vision loss to determine whether they fixated more often on the internal or the external features of face images while attempting to recognize the images. We then measured the accuracy for recognizing face images that contained only the internal, only the external, or both internal and external features (Experiment 2), and for hybrid images where the internal and external features came from two different source images (Experiment 3), for five observers with central vision loss and four age-matched control observers. Results When recognizing familiar face images, approximately 40% of the fixations of observers with central vision loss were centered on the external features of faces. The recognition accuracy was higher for images containing only external features (66.8±3.3% correct) than for images containing only internal features (35.8±15.0%), a finding contradicting that of control observers. For hybrid face images, observers with central vision loss responded more accurately to the external features (50.4±17.8%) than to the internal features (9.3±4.9%), while control observers did not show the same bias toward responding to the external features. Conclusions Contrary to people with normal vision who rely more on the internal features of face images for recognizing familiar faces, individuals with central vision loss show a higher dependence on using external features of face images. PMID:26829260
Face recognition: a convolutional neural-network approach.
Lawrence, S; Giles, C L; Tsoi, A C; Back, A D
1997-01-01
We present a hybrid neural-network approach for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.
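The SOM front end of this architecture can be sketched in a few lines: a 1-D self-organising map quantises local image samples while keeping neighbouring units similar, which is the topological property the abstract highlights. The map size, learning-rate schedule, and neighbourhood schedule below are illustrative guesses, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(6)
patches = rng.random((500, 25))              # local 5x5 image samples, flattened

# Minimal 1-D self-organising map.
n_units = 16
W = rng.random((n_units, 25))                # one prototype per map unit
n_steps = 2000
for t in range(n_steps):
    x = patches[rng.integers(len(patches))]
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))      # best-matching unit
    lr = 0.5 * (1 - t / n_steps)                     # decaying learning rate
    sigma = max(1e-3, 4.0 * (1 - t / n_steps))       # shrinking neighbourhood
    dist = np.abs(np.arange(n_units) - bmu)
    h = np.exp(-dist ** 2 / (2 * sigma ** 2))        # neighbourhood weights
    W += lr * h[:, None] * (x - W)                   # pull units toward sample

# Each patch is replaced by the index of its nearest map unit; in the paper
# these quantised codes feed the convolutional network.
codes = np.argmin(((patches[:, None, :] - W[None]) ** 2).sum(-1), axis=1)
```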
Fuentes, Christina T; Runa, Catarina; Blanco, Xenxo Alvarez; Orvalho, Verónica; Haggard, Patrick
2013-01-01
Despite extensive research on face perception, few studies have investigated individuals' knowledge about the physical features of their own face. In this study, 50 participants indicated the location of key features of their own face, relative to an anchor point corresponding to the tip of the nose, and the results were compared to the true location of the same individual's features from a standardised photograph. Horizontal and vertical errors were analysed separately. An overall bias to underestimate vertical distances revealed a distorted face representation, with reduced face height. Factor analyses were used to identify separable subconfigurations of facial features with correlated localisation errors. Independent representations of upper and lower facial features emerged from the data pattern. The major source of variation across individuals was in representation of face shape, with a spectrum from tall/thin to short/wide representation. Visual identification of one's own face is excellent, and facial features are routinely used for establishing personal identity. However, our results show that spatial knowledge of one's own face is remarkably poor, suggesting that face representation may not contribute strongly to self-awareness.
Brielmann, Aenne A; Bülthoff, Isabelle; Armann, Regine
2014-07-01
Race categorization of faces is a fast and automatic process and is known to affect further face processing profoundly and at earliest stages. Whether processing of own- and other-race faces might rely on different facial cues, as indicated by diverging viewing behavior, is much under debate. We therefore aimed to investigate two open questions in our study: (1) Do observers consider information from distinct facial features informative for race categorization or do they prefer to gain global face information by fixating the geometrical center of the face? (2) Does the fixation pattern, or, if facial features are considered relevant, do these features differ between own- and other-race faces? We used eye tracking to test where European observers look when viewing Asian and Caucasian faces in a race categorization task. Importantly, in order to disentangle centrally located fixations from those towards individual facial features, we presented faces in frontal, half-profile and profile views. We found that observers showed no general bias towards looking at the geometrical center of faces, but rather directed their first fixations towards distinct facial features, regardless of face race. However, participants looked at the eyes more often in Caucasian faces than in Asian faces, and there were significantly more fixations to the nose for Asian compared to Caucasian faces. Thus, observers rely on information from distinct facial features rather than facial information gained by centrally fixating the face. To what extent specific features are looked at is determined by the face's race.
Applying CASE Tools for On-Board Software Development
NASA Astrophysics Data System (ADS)
Brammer, U.; Hönle, A.
For many space projects the software development is facing great pressure with respect to quality, costs and schedule. One way to cope with these challenges is the application of CASE tools for automatic generation of code and documentation. This paper describes two CASE tools: Rhapsody (I-Logix) featuring UML and ISG (BSSE) that provides modeling of finite state machines. Both tools have been used at Kayser-Threde in different space projects for the development of on-board software. The tools are discussed with regard to the full software development cycle.
2008-08-01
identified for static experiments, target arrays have been designed and ground truth systems are already in place. Participation in field ...key objectives are rapid launch and on-orbit checkout, theater commanding, and near-real-time theater data integration. It will also feature a rapid... Organisation (DSTO) plan to participate in TacSat-3 experiments. 1. INTRODUCTION In future conflicts, military space forces will likely face
A multiple maximum scatter difference discriminant criterion for facial feature extraction.
Song, Fengxi; Zhang, David; Mei, Dayong; Guo, Zhongwei
2007-12-01
The maximum scatter difference (MSD) discriminant criterion is a recently proposed binary discriminant criterion for pattern classification that utilizes the generalized scatter difference rather than the generalized Rayleigh quotient as a class separability measure, thereby avoiding the singularity problem when addressing small-sample-size problems. MSD classifiers based on this criterion have been quite effective on face-recognition tasks, but as they are binary classifiers, they are not as efficient on large-scale classification tasks. To address this problem, this paper generalizes the classification-oriented binary criterion to its multiple counterpart: the multiple MSD (MMSD) discriminant criterion for facial feature extraction. The MMSD feature-extraction method, which is based on this novel discriminant criterion, is a new subspace-based feature-extraction method. Unlike most other subspace-based feature-extraction methods, the MMSD computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The MMSD is theoretically elegant and easy to calculate. Extensive experimental studies conducted on the benchmark FERET database show that the MMSD outperforms state-of-the-art facial feature-extraction methods such as the null space method, direct linear discriminant analysis (LDA), eigenfaces, Fisherfaces, and complete LDA.
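The scatter difference idea is simple to state: instead of maximizing the Rayleigh quotient w'Sb w / w'Sw w (which requires inverting Sw), one maximizes w'(Sb - c·Sw)w, a symmetric eigenproblem that remains well-posed even when Sw is singular. The following is a minimal NumPy sketch of that criterion, not the authors' full MMSD algorithm; the balance parameter c is an illustrative choice:

```python
import numpy as np

def scatter_matrices(X, y):
    """Between-class (Sb) and within-class (Sw) scatter matrices."""
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
        Sw += (Xc - mc).T @ (Xc - mc)
    return Sb, Sw

def msd_projections(X, y, c=1.0, n_components=2):
    """Directions maximising the scatter difference w'(Sb - c*Sw)w.
    No inversion of Sw is needed, so the small-sample-size
    singularity problem never arises."""
    Sb, Sw = scatter_matrices(X, y)
    vals, vecs = np.linalg.eigh(Sb - c * Sw)  # symmetric eigenproblem
    order = np.argsort(vals)[::-1]            # largest eigenvalues first
    return vecs[:, order[:n_components]]
```

Projecting data onto these vectors (`X @ W`) yields a low-dimensional feature space in which class means are pushed apart relative to within-class spread.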
Axelrod, Vadim; Yovel, Galit
2010-08-15
Most studies of face identity have excluded external facial features by either removing them or covering them with a hat. However, external facial features may modify the representation of internal facial features. Here we assessed whether the representation of face identity in the fusiform face area (FFA), which has been primarily studied for internal facial features, is modified by differences in external facial features. We presented faces in which external and internal facial features were manipulated independently. Our findings show that the FFA was sensitive to differences in external facial features, but this effect was significantly larger when the external and internal features were aligned than misaligned. We conclude that the FFA generates a holistic representation in which the internal and the external facial features are integrated. These results indicate that to better understand real-life face recognition both external and internal features should be included. Copyright (c) 2010 Elsevier Inc. All rights reserved.
Integration of internal and external facial features in 8- to 10-year-old children and adults.
Meinhardt-Injac, Bozana; Persike, Malte; Meinhardt, Günter
2014-06-01
Investigation of whole-part and composite effects in 4- to 6-year-old children gave rise to claims that face perception is fully mature within the first decade of life (Crookes & McKone, 2009). However, only internal features were tested, and the role of external features was not addressed, although external features are highly relevant for holistic face perception (Sinha & Poggio, 1996; Axelrod & Yovel, 2010, 2011). In this study, 8- to 10-year-old children and adults performed a same-different matching task with faces and watches. In this task participants attended to either internal or external features. Holistic face perception was tested using a congruency paradigm, in which face and non-face stimuli either agreed or disagreed in both features (congruent contexts) or just in the attended ones (incongruent contexts). In both age groups, pronounced context congruency and inversion effects were found for faces, but not for watches. These findings indicate holistic feature integration for faces. While inversion effects were highly similar in both age groups, context congruency effects were stronger for children. Moreover, children's face matching performance was generally better when attending to external compared to internal features. Adults tended to perform better when attending to internal features. Our results indicate that both adults and 8- to 10-year-old children integrate external and internal facial features into holistic face representations. However, in children's face representations external features are much more relevant. These findings suggest that face perception is holistic but still not adult-like at the end of the first decade of life. Copyright © 2014 Elsevier B.V. All rights reserved.
Feature-based attentional modulations in the absence of direct visual stimulation.
Serences, John T; Boynton, Geoffrey M
2007-07-19
When faced with a crowded visual scene, observers must selectively attend to behaviorally relevant objects to avoid sensory overload. Often this selection process is guided by prior knowledge of a target-defining feature (e.g., the color red when looking for an apple), which enhances the firing rate of visual neurons that are selective for the attended feature. Here, we used functional magnetic resonance imaging and a pattern classification algorithm to predict the attentional state of human observers as they monitored a visual feature (one of two directions of motion). We find that feature-specific attention effects spread across the visual field-even to regions of the scene that do not contain a stimulus. This spread of feature-based attention to empty regions of space may facilitate the perception of behaviorally relevant stimuli by increasing sensitivity to attended features at all locations in the visual field.
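Decoding an "attentional state" from voxel patterns is typically done with a multivariate pattern classifier. The abstract does not name the algorithm, so the sketch below uses synthetic data and a leave-one-out nearest-centroid (correlation) decoder, a common baseline in fMRI pattern analysis; all numbers are illustrative, not from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "voxel" patterns: 40 trials x 50 voxels, two attended
# motion directions.  Each direction adds a small fixed spatial
# pattern on top of noise (all numbers are illustrative).
n_trials, n_vox = 40, 50
signal = rng.normal(size=(2, n_vox))        # one pattern per direction
labels = np.repeat([0, 1], n_trials // 2)
data = rng.normal(size=(n_trials, n_vox)) + 0.8 * signal[labels]

# Leave-one-out nearest-centroid (correlation) decoding: each held-out
# trial is assigned to the class whose mean pattern it correlates with best.
correct = 0
for i in range(n_trials):
    train = np.delete(np.arange(n_trials), i)
    cents = [data[train][labels[train] == k].mean(axis=0) for k in (0, 1)]
    r = [np.corrcoef(data[i], c)[0, 1] for c in cents]
    correct += int(np.argmax(r) == labels[i])
accuracy = correct / n_trials
```

Above-chance accuracy in a visual region would indicate that the attended feature is represented there, even in retinotopic regions with no stimulus.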
Kramer, Robin S S; Manesi, Zoi; Towler, Alice; Reynolds, Michael G; Burton, A Mike
2018-01-01
As faces become familiar, we come to rely more on their internal features for recognition and matching tasks. Here, we assess whether this same pattern is also observed for a card sorting task. Participants sorted photos showing either the full face, only the internal features, or only the external features into multiple piles, one pile per identity. In Experiments 1 and 2, we showed the standard advantage for familiar faces-sorting was more accurate and showed very few errors in comparison with unfamiliar faces. However, for both familiar and unfamiliar faces, sorting was less accurate for external features and equivalent for internal and full faces. In Experiment 3, we asked whether external features can ever be used to make an accurate sort. Using familiar faces and instructions on the number of identities present, we nevertheless found worse performance for the external in comparison with the internal features, suggesting that less identity information was available in the former. Taken together, we show that full faces and internal features are similarly informative with regard to identity. In comparison, external features contain less identity information and produce worse card sorting performance. This research extends current thinking on the shift in focus, both in attention and importance, toward the internal features and away from the external features as familiarity with a face increases.
Automatic Detection of the Frontal Face Midline by Chain-coded Merlin-Farber Hough Transform
NASA Astrophysics Data System (ADS)
Okamoto, Daichi; Ohyama, Wataru; Wakabayashi, Tetsushi; Kimura, Fumitaka
We propose a novel approach for detecting the facial midline (facial symmetry axis) from a frontal face image. The facial midline has several applications, for instance reducing the computational cost required for facial feature extraction (FFE) and supporting postoperative assessment for cosmetic or dental surgery. The proposed method detects the facial midline of a frontal face from an edge image, as the symmetry axis, using the Merlin-Farber Hough transform (MFHT). A new performance improvement scheme for midline detection by MFHT is also presented. The main concept of the proposed scheme is the suppression of redundant votes in the Hough parameter space by introducing a chain-code representation of the binary edge image. Experimental results on an image dataset containing 2409 images from the FERET database indicate that the proposed algorithm improves the accuracy of midline detection from 89.9% to 95.1% for face images with different scales and rotations.
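The Merlin-Farber idea behind such symmetry-axis detection is that every pair of edge points votes for its perpendicular bisector in (rho, theta) Hough space; a mirror-symmetric pattern concentrates votes on its midline. A toy NumPy sketch of this voting step (without the paper's chain-coded vote-suppression scheme):

```python
import numpy as np
from itertools import combinations

def symmetry_axis(points, n_theta=180, rho_max=200):
    """Toy Merlin-Farber-style vote: every pair of edge points votes for
    its perpendicular bisector in (theta, rho) Hough space; the peak is
    taken as the symmetry (midline) axis.  A simplified sketch, not the
    chain-coded speed-up described in the paper."""
    acc = np.zeros((n_theta, 2 * rho_max))
    for p, q in combinations(points, 2):
        d = np.subtract(q, p).astype(float)
        norm = np.hypot(*d)
        if norm == 0:
            continue
        n = d / norm                       # axis normal: direction p -> q
        mid = np.add(p, q) / 2.0           # bisector passes through midpoint
        theta = np.arctan2(n[1], n[0]) % np.pi
        rho = mid @ n                      # signed distance of axis to origin
        ti = int(theta / np.pi * n_theta) % n_theta
        ri = int(round(rho)) + rho_max
        if 0 <= ri < 2 * rho_max:
            acc[ti, ri] += 1
    ti, ri = np.unravel_index(acc.argmax(), acc.shape)
    return ti * np.pi / n_theta, ri - rho_max   # (theta, rho) of the midline
```

For points mirror-symmetric about the vertical line x = 10, all symmetric pairs vote for the same (theta = 0, rho = 10) cell, which becomes the accumulator peak.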
Face recognition using slow feature analysis and contourlet transform
NASA Astrophysics Data System (ADS)
Wang, Yuehao; Peng, Lingling; Zhe, Fuchuan
2018-04-01
In this paper we propose a novel face recognition approach based on slow feature analysis (SFA) in the contourlet transform domain. The method first uses the contourlet transform to decompose a face image into low-frequency and high-frequency parts, and then takes advantage of slow feature analysis for facial feature extraction. We name the new method, which combines slow feature analysis and the contourlet transform, CT-SFA. Experimental results on international standard face databases demonstrate that the new face recognition method is effective and competitive.
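Linear slow feature analysis itself is compact: whiten the signal, then keep the whitened directions whose temporal derivative has the least variance. A minimal sketch of SFA alone (the contourlet decomposition stage of CT-SFA is omitted here):

```python
import numpy as np

def sfa(X, n_components=1):
    """Linear slow feature analysis: whiten the (time, dim) signal X,
    then keep the whitened directions whose temporal derivative has
    the least variance -- the 'slowest' features."""
    X = X - X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(X.T))
    W = vecs / np.sqrt(vals)            # whitening matrix (unit variance)
    Z = X @ W
    dZ = np.diff(Z, axis=0)             # discrete temporal derivative
    dvals, dvecs = np.linalg.eigh(np.cov(dZ.T))
    return Z @ dvecs[:, :n_components]  # eigh: smallest variance first
```

The classic demonstration: mix a slow and a fast sinusoid with an invertible matrix, and linear SFA recovers the slow component up to sign and scale.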
1998-07-16
STS-95 crew members gather around the Vestibular Function Experiment Unit (VFEU) which includes marine fish called toadfish. In foreground, from left, are Mission Specialist Pedro Duque of the European Space Agency (ESA), a technician from the National Space Development Agency of Japan (NASDA), Payload Specialist Chiaki Mukai of NASDA, Pilot Steven W. Lindsey, and Payload Specialist John H. Glenn Jr., who also is a senator from Ohio. At center, facing the camera, are Mission Specialist Scott E. Parazynski and Commander Curtis L. Brown Jr., in back. STS-95 will feature a variety of research payloads, including the Spartan solar-observing deployable spacecraft, the Hubble Space Telescope Orbital Systems Platform, the International Extreme Ultraviolet Hitchhiker, and experiments on space flight and the aging process. STS-95 is targeted for an Oct. 29 launch aboard the Space Shuttle Discovery
Tanaka, James W; Kaiser, Martha D; Hagen, Simen; Pierce, Lara J
2014-05-01
Given that all faces share the same set of features-two eyes, a nose, and a mouth-that are arranged in similar configuration, recognition of a specific face must depend on our ability to discern subtle differences in its featural and configural properties. An enduring question in the face-processing literature is whether featural or configural information plays a larger role in the recognition process. To address this question, the face dimensions task was designed, in which the featural and configural properties in the upper (eye) and lower (mouth) regions of a face were parametrically and independently manipulated. In a same-different task, two faces were sequentially presented and tested in their upright or in their inverted orientation. Inversion disrupted the perception of featural size (Exp. 1), featural shape (Exp. 2), and configural changes in the mouth region, but it had relatively little effect on the discrimination of featural size and shape and configural differences in the eye region. Inversion had little effect on the perception of information in the top and bottom halves of houses (Exp. 3), suggesting that the lower-half impairment was specific to faces. Spatial cueing to the mouth region eliminated the inversion effect (Exp. 4), suggesting that participants have a bias to attend to the eye region of an inverted face. The collective findings from these experiments suggest that inversion does not differentially impair featural or configural face perceptions, but rather impairs the perception of information in the mouth region of the face.
Olivares, Ela I; Saavedra, Cristina; Trujillo-Barreto, Nelson J; Iglesias, Jaime
2013-01-01
In face processing tasks, prior presentation of internal facial features, when compared with external ones, facilitates the recognition of subsequently displayed familiar faces. In a previous ERP study (Olivares & Iglesias, 2010) we found a visibly larger N400-like effect when identity-mismatching familiar faces were preceded by internal features, as compared to prior presentation of external ones. In the present study we contrasted the processing of familiar and unfamiliar faces in a face-feature matching task to assess whether the so-called "internal features advantage" relies mainly on the use of stored face-identity-related information or whether it might operate independently of stimulus familiarity. Our participants (N = 24) achieved better performance with internal features as primes and, significantly, with familiar faces. Importantly, ERPs elicited by identity-mismatching complete faces displayed a negativity around 300-600 msec which was clearly enhanced for familiar faces primed by internal features when compared with the other experimental conditions. Source reconstruction showed increased activity elicited by familiar stimuli in both posterior (ventral occipitotemporal) and more anterior (parahippocampal (ParaHIP) and orbitofrontal) brain regions. The activity elicited by unfamiliar stimuli was, in general, located in more posterior regions. Our findings suggest that the activation of multiple neural codes is required for optimal individuation in face-feature matching and that a cortical network related to long-term information for face-identity processing seems to support the internal feature effect. Copyright © 2013 Elsevier Ltd. All rights reserved.
SPACE PHYSICS: Developing resources for astrophysics at A-level: the TRUMP Astrophysics project
NASA Astrophysics Data System (ADS)
Swinbank, Elizabeth
1997-01-01
After outlining the astrophysical options now available in A-level physics syllabuses, this paper notes some of the particular challenges facing A-level teachers and students who chose these options and describes a project designed to support them. The paper highlights some key features of the project that could readily be incorporated into other areas of physics curriculum development.
Potter, Timothy; Corneille, Olivier; Ruys, Kirsten I; Rhodes, Gillian
2007-04-01
Findings on both attractiveness and memory for faces suggest that people should perceive more similarity among attractive than among unattractive faces. A multidimensional scaling approach was used to test this hypothesis in two studies. In Study 1, we derived a psychological face space from similarity ratings of attractive and unattractive Caucasian female faces. In Study 2, we derived a face space for attractive and unattractive male faces of Caucasians and non-Caucasians. Both studies confirm that attractive faces are indeed more tightly clustered than unattractive faces in people's psychological face spaces. These studies provide direct and original support for theoretical assumptions previously made in the face space and face memory literatures.
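A face space of this kind can be derived from similarity ratings with classical (Torgerson) multidimensional scaling, and cluster tightness can then be compared via mean within-group distances. An illustrative sketch on synthetic points (not the studies' rating data):

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed points from a pairwise
    distance matrix D by double-centering -D**2 / 2."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]            # largest eigenvalues first
    return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))

def mean_pairwise_dist(P):
    """Average distance between distinct points -- a tightness measure."""
    d = np.linalg.norm(P[:, None] - P[None, :], axis=-1)
    n = len(P)
    return d.sum() / (n * (n - 1))
```

Given dissimilarity ratings between faces, the embedding coordinates play the role of the psychological face space, and a "more tightly clustered" group is simply one with a smaller mean pairwise distance.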
Face sketch recognition based on edge enhancement via deep learning
NASA Astrophysics Data System (ADS)
Xie, Zhenzhu; Yang, Fumeng; Zhang, Yuming; Wu, Congzhong
2017-11-01
In this paper, we address the face sketch recognition problem. First, we utilize the eigenface algorithm to convert a sketch image into a synthesized face image. Then, to address the low-level vision problem in the synthesized image, a super-resolution reconstruction algorithm based on a CNN (convolutional neural network) is employed to improve the visual quality. Specifically, we use a lightweight super-resolution structure that learns a residual mapping instead of directly mapping feature maps from the low-level space to high-level patch representations, making the network easier to optimize and lowering its computational complexity. Finally, we adopt the LDA (linear discriminant analysis) algorithm to perform face sketch recognition on the synthesized face images before and after super resolution, respectively. Extensive experiments on the face sketch database (CUFS) from CUHK demonstrate that the recognition rate of the SVM (support vector machine) algorithm improves from 65% to 69%, and the recognition rate of the LDA algorithm improves from 69% to 75%. What is more, the synthesized face image after super resolution not only better describes image details such as hair, nose, and mouth, but also improves recognition accuracy effectively.
Real-time object-to-features vectorisation via Siamese neural networks
NASA Astrophysics Data System (ADS)
Fedorenko, Fedor; Usilin, Sergey
2017-03-01
Object-to-features vectorisation is a difficult problem for objects that are hard to distinguish. Siamese and triplet neural networks are among the more recent tools used for this task. However, most networks used are very deep, which makes them hard to compute in an Internet of Things setting. In this paper, a computationally efficient neural network is proposed for real-time object-to-features vectorisation into a Euclidean metric space. We use the L2 distance to reflect feature vector similarity during both training and testing. In this way, the feature vectors we develop can be easily classified with a K-nearest-neighbours classifier. Such an approach can be used to train networks to vectorise "problematic" objects such as images of human faces and keypoint image patches, for example keypoints on Arctic maps and surrounding marine areas.
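Once a network maps objects to vectors whose L2 distances reflect similarity, classification reduces to nearest-neighbour lookup in the embedding space. The sketch below assumes the embedding vectors are already computed (the network itself is omitted) and shows the standard triplet objective such embeddings are trained with:

```python
import numpy as np

def triplet_loss(a, p, n, margin=0.2):
    """Standard triplet objective: pull anchor-positive together and
    push anchor-negative apart by at least the margin."""
    return max(0.0, np.linalg.norm(a - p) ** 2
                    - np.linalg.norm(a - n) ** 2 + margin)

def knn_predict(train_vecs, train_labels, query_vecs, k=3):
    """k-NN by plain Euclidean distance in the embedding space."""
    d = np.linalg.norm(query_vecs[:, None] - train_vecs[None, :], axis=-1)
    nearest = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(v).argmax()
                     for v in train_labels[nearest]])
```

The margin value is an illustrative default; in practice it is tuned alongside the network.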
The importance of internal facial features in learning new faces.
Longmore, Christopher A; Liu, Chang Hong; Young, Andrew W
2015-01-01
For familiar faces, the internal features (eyes, nose, and mouth) are known to be differentially salient for recognition compared to external features such as hairstyle. Two experiments are reported that investigate how this internal feature advantage accrues as a face becomes familiar. In Experiment 1, we tested the contribution of internal and external features to the ability to generalize from a single studied photograph to different views of the same face. A recognition advantage for the internal features over the external features was found after a change of viewpoint, whereas there was no internal feature advantage when the same image was used at study and test. In Experiment 2, we removed the most salient external feature (hairstyle) from studied photographs and looked at how this affected generalization to a novel viewpoint. Removing the hair from images of the face assisted generalization to novel viewpoints, and this was especially the case when photographs showing more than one viewpoint were studied. The results suggest that the internal features play an important role in the generalization between different images of an individual's face by enabling the viewer to detect the common identity-diagnostic elements across non-identical instances of the face.
Neurocomputational bases of object and face recognition.
Biederman, I; Kalocsai, P
1997-01-01
A number of behavioural phenomena distinguish the recognition of faces and objects, even when members of a set of objects are highly similar. Because faces have the same parts in approximately the same relations, individuation of faces typically requires specification of the metric variation in a holistic and integral representation of the facial surface. The direct mapping of a hypercolumn-like pattern of activation onto a representation layer that preserves relative spatial filter values in a two-dimensional (2D) coordinate space, as proposed by C. von der Malsburg and his associates, may account for many of the phenomena associated with face recognition. An additional refinement, in which each column of filters (termed a 'jet') is centred on a particular facial feature (or fiducial point), allows selectivity of the input into the holistic representation to avoid incorporation of occluding or nearby surfaces. The initial hypercolumn representation also characterizes the first stage of object perception, but the image variation for objects at a given location in a 2D coordinate space may be too great to yield sufficient predictability directly from the output of spatial kernels. Consequently, objects can be represented by a structural description specifying qualitative (typically, non-accidental) characterizations of an object's parts, the attributes of the parts, and the relations among the parts, largely based on orientation and depth discontinuities (as shown by Hummel & Biederman). A series of experiments on the name priming or physical matching of complementary images (in the Fourier domain) of objects and faces documents that whereas face recognition is strongly dependent on the original spatial filter values, evidence from object recognition indicates strong invariance to these values, even when distinguishing among objects that are as similar as faces. PMID:9304687
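The "jet" construct used by von der Malsburg and colleagues is a column of Gabor filter responses sampled at one fiducial point. A minimal NumPy illustration, with filter sizes and wavelengths chosen arbitrarily rather than taken from the cited model:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor filter: oriented carrier in an
    isotropic Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def jet(image, cy, cx, size=15, wavelengths=(4, 8), n_orient=4):
    """A 'jet': responses of a column of Gabor filters of several
    wavelengths and orientations, centred on one fiducial point."""
    half = size // 2
    patch = image[cy - half:cy + half + 1, cx - half:cx + half + 1]
    thetas = np.arange(n_orient) * np.pi / n_orient
    return np.array([(patch * gabor_kernel(size, w, t, sigma=0.5 * w)).sum()
                     for w in wavelengths for t in thetas])
```

On a vertical grating of period 8 pixels, the strongest response in the jet comes from the filter whose wavelength and orientation match the grating.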
Effects of dissuasive packaging on young adult smokers.
Hoek, Janet; Wong, Christiane; Gendall, Philip; Louviere, Jordan; Cong, Karen
2011-05-01
Tobacco industry documents illustrate how packaging promotes smoking experimentation and reinforces existing smokers' behaviour. Plain packaging reduces the perceived attractiveness of smoking and creates an opportunity to introduce larger pictorial warnings that could promote cessation-linked behaviours. However, little is known about the effects such a combined policy measure would have on smokers' behaviour. A 3 (warning size) × 4 (branding level) plus control (completely plain pack) best-worst experiment was conducted via face-to-face interviews with 292 young adult smokers from a New Zealand provincial city. The Juster Scale was also used to estimate cessation-linked behaviours among participants. Of the 13 options tested, respondents were significantly less likely to choose those featuring fewer branding elements or larger health warnings. Options that featured more branding elements were still preferred even when they also featured a 50% health warning, but were significantly less likely to be chosen when they featured a 75% warning. Comparison of a control pack representing the status quo (branded with a 30% front-of-pack warning) and a plain pack (with a 75% warning) revealed the latter would be significantly more likely to elicit cessation-related behaviours. Plain packs that feature large graphic health warnings are significantly more likely to promote cessation among young adult smokers than fully or partially branded packs. The findings support the introduction of plain packaging and suggest use of unbranded package space to feature larger health warnings would further promote cessation.
Developmental changes in perceptions of attractiveness: a role of experience?
Cooper, Philip A; Geldart, Sybil S; Mondloch, Catherine J; Maurer, Daphne
2006-09-01
In three experiments, we traced the development of the adult pattern of judgments of attractiveness for faces that have been altered to have internal features in low, average, or high positions. Twelve-year-olds and adults demonstrated identical patterns of results: they rated faces with features in an average location as significantly more attractive than faces with either low or high features. Although both 4-year-olds and 9-year-olds rated faces with high features as least attractive, unlike adults and 12-year-olds, they rated faces with low and average features as equally attractive. Three-year-olds with high levels of peer interaction, but not those with low levels of peer interaction, chose faces with low features as significantly more attractive than those with high-placed features, possibly as a result of their increased experience with the proportions of the faces of peers. Overall, the pattern of results is consistent with the hypothesis that experience influences perceptions of attractiveness, with the proportions of the faces participants see in their everyday lives influencing their perceptions of attractiveness.
Brooks, Kevin R; Kemp, Richard I
2007-01-01
Previous studies of face recognition and of face matching have shown a general improvement in the processing of internal features as a face becomes more familiar to the participant. In this study, we used a psychophysical two-alternative forced-choice paradigm to investigate thresholds for the detection of a displacement of the eyes, nose, mouth, or ears for familiar and unfamiliar faces. No clear division between internal and external features was observed. Rather, for familiar (compared to unfamiliar) faces, participants were more sensitive to displacements of internal features such as the eyes or the nose; yet for our third internal feature, the mouth, no such difference was observed. Despite large displacements, many subjects were unable to perform above chance when stimuli involved shifts in the position of the ears. These results are consistent with the proposal that familiarity effects may be mediated by the construction of a robust representation of a face, although the involvement of attention in the encoding of face stimuli cannot be ruled out. Furthermore, these effects are mediated by information from a spatial configuration of features, rather than by purely feature-based information.
Serum N-propeptide of collagen IIA (PIIANP) as a marker of radiographic osteoarthritis burden.
Daghestani, Hikmat N; Jordan, Joanne M; Renner, Jordan B; Doherty, Michael; Wilson, A Gerry; Kraus, Virginia B
2017-01-01
Cartilage homeostasis relies on a balance of catabolism and anabolism of cartilage matrix. Our goal was to evaluate the burden of radiographic osteoarthritis and serum levels of type IIA procollagen amino terminal propeptide (sPIIANP), a biomarker representing type II collagen synthesis, in osteoarthritis. OA burden was quantified from radiographic features as the total number of joint faces with an osteophyte, joint space narrowing, or, in the spine, disc space narrowing. sPIIANP was measured in 1,235 participants from the Genetics of Generalized Osteoarthritis study using a competitive enzyme-linked immunosorbent assay. Separate multivariable linear regression models, adjusted for age, sex, and body mass index and additionally for ipsilateral osteophytes or joint/disc space narrowing, were used to assess the independent association of sPIIANP with osteophytes and with joint/disc space narrowing burden in knees, hips, hands and spine, individually and together. After full adjustment, sPIIANP was significantly associated with a lesser burden of hip joint space narrowing and knee osteophytes. sPIIANP was associated with a lesser burden of hand joint space narrowing but a greater burden of hand osteophytes; these results were only evident upon adjustment for osteoarthritic features in all other joints. There were no associations between sPIIANP and features of spine osteoarthritis. Higher cartilage collagen synthesis, as reflected in systemic PIIANP concentrations, was associated with lesser burden of osteoarthritic features in lower extremity joints (knees and hips), even accounting for osteoarthritis burden in hands and spine, age, sex and body mass index. These results suggest that pro-anabolic agents may be appropriate for early treatment to prevent severe lower extremity large joint osteoarthritis.
When false recognition is out of control: the case of facial conjunctions.
Jones, Todd C; Bartlett, James C
2009-03-01
In three experiments, a dual-process approach to face recognition memory is examined, with a specific focus on the idea that a recollection process can be used to retrieve configural information of a studied face. Subjects could avoid, with confidence, a recognition error to conjunction lure faces (each a reconfiguration of features from separate studied faces) or feature lure faces (each based on a set of old features and a set of new features) by recalling a studied configuration. In Experiment 1, study repetition (one vs. eight presentations) was manipulated, and in Experiments 2 and 3, retention interval over a short number of trials (0-20) was manipulated. Different measures converged on the conclusion that subjects were unable to use a recollection process to retrieve configural information in an effort to temper recognition errors for conjunction or feature lure faces. A single process, familiarity, appears to be the sole process underlying recognition of conjunction and feature faces, and familiarity contributes, perhaps in whole, to discrimination of old from conjunction faces.
A self-organized learning strategy for object recognition by an embedded line of attraction
NASA Astrophysics Data System (ADS)
Seow, Ming-Jung; Alex, Ann T.; Asari, Vijayan K.
2012-04-01
For humans, a picture is worth a thousand words, but to a machine, it is just a seemingly random array of numbers. Although machines are very fast and efficient, they are vastly inferior to humans for everyday information processing. Algorithms that mimic the way the human brain computes and learns may be the solution. In this paper we present a theoretical model based on the observation that images of similar visual perceptions reside in a complex manifold in an image space. The perceived features are often highly structured and hidden in a complex set of relationships or high-dimensional abstractions. To model the pattern manifold, we present a novel learning algorithm using a recurrent neural network. The brain memorizes information using a dynamical system made of interconnected neurons. Retrieval of information is accomplished in an associative sense. It starts from an arbitrary state that might be an encoded representation of a visual image and converges to another state that is stable. The stable state is what the brain remembers. In designing a recurrent neural network, it is usually of prime importance to guarantee the convergence in the dynamics of the network. We propose to modify this picture: if the brain remembers by converging to the state representing familiar patterns, it should also diverge from such states when presented with an unknown encoded representation of a visual image belonging to a different category. That is, the identification of an instability mode is an indication that a presented pattern is far away from any stored pattern and therefore cannot be associated with current memories. These properties can be used to circumvent the plasticity-stability dilemma by using the fluctuating mode as an indicator to create new states. We capture this behavior using a novel neural architecture and learning algorithm, in which the system performs self-organization utilizing a stability mode and an instability mode for the dynamical system. 
Based on this observation we developed a self-organizing line attractor, which is capable of generating new lines in the feature space to learn unrecognized patterns. Experiments performed on the UMIST pose database and the CMU face expression variant database for face recognition have shown that the proposed nonlinear line attractor is able to successfully identify individuals and provided a better recognition rate compared to state-of-the-art face recognition techniques. Experiments on the FRGC version 2 database have also yielded excellent recognition rates on images captured in complex lighting environments. Experiments performed on the Japanese female facial expression database and the Essex Grimace database using the self-organizing line attractor have also shown successful expression-invariant face recognition. These results show that the proposed model is able to create nonlinear manifolds in a multidimensional feature space to distinguish complex patterns.
Facebook and MySpace: complement or substitute for face-to-face interaction?
Kujath, Carlyne L
2011-01-01
Previous studies have claimed that social-networking sites are used as a substitute for face-to-face interaction, resulting in deteriorating relationship quality and decreased intimacy among its users. The present study hypothesized that this type of communication is not a substitute for face-to-face interaction; rather, that it is an extension of communication with face-to-face partners. A survey was administered to examine the use of Facebook and MySpace in this regard among 183 college students. The study confirmed that Facebook and MySpace do act as an extension of face-to-face interaction, but that some users do tend to rely on Facebook and MySpace for interpersonal communication more than face-to-face interaction.
SPACE: Vision and Reality: Face to Face. Proceedings Report
NASA Technical Reports Server (NTRS)
1995-01-01
The proceedings of the 11th National Space Symposium, entitled 'Vision and Reality: Face to Face', are presented. Technological areas discussed include the following sections: Vision for the future; Positioning for the future; Remote sensing, the emerging era; Space opportunities; Competitive vision with acquisition reality; National security requirements in space; The world is into space; and The outlook for space. An appendix is also attached.
Context-Aware Local Binary Feature Learning for Face Recognition.
Duan, Yueqi; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie
2018-05-01
In this paper, we propose a context-aware local binary feature learning (CA-LBFL) method for face recognition. Unlike existing learning-based local face descriptors such as discriminant face descriptor (DFD) and compact binary face descriptor (CBFD) which learn each feature code individually, our CA-LBFL exploits the contextual information of adjacent bits by constraining the number of shifts from different binary bits, so that more robust information can be exploited for face representation. Given a face image, we first extract pixel difference vectors (PDV) in local patches, and learn a discriminative mapping in an unsupervised manner to project each pixel difference vector into a context-aware binary vector. Then, we perform clustering on the learned binary codes to construct a codebook, and extract a histogram feature for each face image with the learned codebook as the final representation. In order to exploit local information from different scales, we propose a context-aware local binary multi-scale feature learning (CA-LBMFL) method to jointly learn multiple projection matrices for face representation. To make the proposed methods applicable for heterogeneous face recognition, we present a coupled CA-LBFL (C-CA-LBFL) method and a coupled CA-LBMFL (C-CA-LBMFL) method to reduce the modality gap of corresponding heterogeneous faces in the feature level, respectively. Extensive experimental results on four widely used face datasets clearly show that our methods outperform most state-of-the-art face descriptors.
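The descriptor pipeline above (pixel difference vectors, binary codes, codebook histogram) can be sketched in heavily simplified form. The random projection below merely stands in for the learned context-aware mapping, the synthetic image stands in for a face, and the clustering-based codebook is replaced by directly histogramming the binary codes.

```python
import numpy as np

def pixel_difference_vectors(img):
    """8-D PDVs: each pixel's 3x3 neighbours minus the centre pixel."""
    H, W = img.shape
    offs = [(-1,-1), (-1,0), (-1,1), (0,-1), (0,1), (1,-1), (1,0), (1,1)]
    pdvs = np.stack([img[1+dy:H-1+dy, 1+dx:W-1+dx] - img[1:H-1, 1:W-1]
                     for dy, dx in offs], axis=-1)
    return pdvs.reshape(-1, 8).astype(float)

def binary_codes(pdvs, proj):
    """Project PDVs and binarise; `proj` stands in for the learned mapping."""
    return (pdvs @ proj > 0).astype(np.uint8)

def code_histogram(codes):
    """Interpret each bit-vector as an integer code and histogram it."""
    ints = codes @ (1 << np.arange(codes.shape[1]))
    return np.bincount(ints, minlength=1 << codes.shape[1])

rng = np.random.default_rng(1)
face = rng.random((32, 32))           # stand-in for a grey-level face image
proj = rng.standard_normal((8, 6))    # random placeholder for the learned W
hist = code_histogram(binary_codes(pixel_difference_vectors(face), proj))
print(hist.shape)                     # one bin per 6-bit code
```

The paper's contribution, constraining bit shifts between adjacent codes during learning, happens where `proj` is trained and is not reproduced here.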
High-resolution face verification using pore-scale facial features.
Li, Dong; Zhou, Huiling; Lam, Kin-Man
2015-08-01
Face recognition methods, which usually represent face images using holistic or local facial features, rely heavily on alignment. Their performance also suffers severe degradation under variations in expression or pose, especially when there is only one gallery image per subject. With the easy access to high-resolution (HR) face images nowadays, some HR face databases have recently been developed. However, few studies have tackled the use of HR information for face recognition or verification. In this paper, we propose a pose-invariant face-verification method, robust to alignment errors, that uses HR information based on pore-scale facial features. A new keypoint descriptor, namely pore-Principal Component Analysis (PCA)-Scale Invariant Feature Transform (PPCASIFT), adapted from PCA-SIFT, is devised for the extraction of a compact set of distinctive pore-scale facial features. Having matched the pore-scale features of two face regions, an effective robust-fitting scheme is proposed for the face-verification task. Experiments show that, with only one frontal-view gallery image per subject, our proposed method outperforms a number of standard verification methods, and can achieve excellent accuracy even when the faces are under large variations in expression and pose.
Effects of Orientation on Recognition of Facial Affect
NASA Technical Reports Server (NTRS)
Cohen, M. M.; Mealey, J. B.; Hargens, Alan R. (Technical Monitor)
1997-01-01
The ability to discriminate facial features is often degraded when the orientation of the face and/or the observer is altered. Previous studies have shown that gross distortions of facial features can go unrecognized when the image of the face is inverted, as exemplified by the 'Margaret Thatcher' effect. This study examines how quickly erect and supine observers can distinguish between smiling and frowning faces that are presented at various orientations. The effects of orientation are of particular interest in space, where astronauts frequently view one another in orientations other than the upright. Sixteen observers viewed individual facial images of six people on a computer screen; on a given trial, the image was either smiling or frowning. Each image was viewed when it was erect and when it was rotated (rolled) by 45 degrees, 90 degrees, 135 degrees, 180 degrees, 225 degrees and 270 degrees about the line of sight. The observers were required to respond as rapidly and accurately as possible to identify if the face presented was smiling or frowning. Measures of reaction time were obtained when the observers were both upright and supine. Analyses of variance revealed that mean reaction time, which increased with stimulus rotation (F=18.54, df 7/15, p < 0.001), was 22% longer when the faces were inverted than when they were erect, but that the orientation of the observer had no significant effect on reaction time (F=1.07, df 1/15, p > .30). These data strongly suggest that the orientation of the image of a face on the observer's retina, but not its orientation with respect to gravity, is important in identifying the expression on the face.
Analysis of Big Data from Space
NASA Astrophysics Data System (ADS)
Tan, J.; Osborne, B.
2017-09-01
Massive amounts of data have been collected through various space missions. To maximize the investment, these data need to be exploited to the fullest. In this paper, we address key topics on big data from space, covering the current status and future development, using a systems-engineering method. First, we summarize space data, including operations data and mission data, in terms of their sources, access methods, '5V' big-data characteristics, and application models, as well as the challenges they face in application. Second, we give proposals on platform design and architecture to meet the demands and challenges of space data application, taking into account the features of space data and their application models and emphasizing high scalability and flexibility in storage, computing, and data mining. Third, we suggest typical and promising practices for space data application that offer valuable methodologies for improving intelligence in space applications, engineering, and science. Our work provides interdisciplinary knowledge for space engineers and information engineers.
Lo, L Y; Cheng, M Y
2017-06-01
Detection of angry and happy faces is generally found to be easier and faster than that of faces expressing emotions other than anger or happiness. This can be explained by the threatening account and by the feature account. Few empirical studies have explored the interaction between these two accounts, which are seemingly, but not necessarily, mutually exclusive. The present studies hypothesised that prominent facial features are important in facilitating the detection of both angry and happy expressions, and that the detection of happy faces is facilitated by prominent features more than that of angry faces. Results confirmed the hypotheses: participants reacted faster to emotional expressions with prominent features (Study 1), and the detection of happy faces was facilitated by the prominent feature more than that of angry faces (Study 2). The findings are compatible with the evolutionary speculation that the angry expression is an alarming signal of potential threats to survival: compared with angry faces, happy faces need more salient physical features to reach a similar level of processing efficiency. © 2015 International Union of Psychological Science.
NASA Astrophysics Data System (ADS)
Harit, Aditya; Joshi, J. C., Col; Gupta, K. K.
2018-03-01
The paper proposes an automatic facial emotion recognition algorithm comprising two main components: feature extraction and expression recognition. The algorithm uses a Gabor filter bank at fiducial points to find facial expression features. The resulting magnitudes of the Gabor transforms, along with 14 chosen FAPs (Facial Animation Parameters), compose the feature space. There are two stages: a training phase and a recognition phase. In the training stage, the system classifies all training expressions into six classes, one for each of the six emotions considered. In the recognition phase, it applies the Gabor bank to a face image, locates the fiducial points, and feeds the resulting features to the trained neural architecture to recognize the emotion.
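The Gabor-magnitude feature extraction step can be sketched as follows. The kernel sizes, orientations, wavelengths, and fiducial-point locations below are arbitrary illustrative assumptions, and the FAP features and the neural classifier are omitted.

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lam):
    """Complex Gabor kernel: an oriented plane wave under a Gaussian envelope."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.exp(1j * 2 * np.pi * xr / lam)

def gabor_magnitudes_at(img, points, thetas, lams, ksize=15, sigma=3.0):
    """Filter-bank response magnitudes sampled at fiducial points
    (patch dot-products rather than full-image convolution)."""
    half = ksize // 2
    feats = []
    for (r, c) in points:
        patch = img[r - half:r + half + 1, c - half:c + half + 1]
        for th in thetas:
            for lam in lams:
                feats.append(abs(np.sum(patch * gabor_kernel(ksize, sigma, th, lam))))
    return np.array(feats)

rng = np.random.default_rng(2)
img = rng.random((64, 64))                # stand-in for a face image
points = [(20, 20), (20, 44), (40, 32)]   # hypothetical fiducial points
feats = gabor_magnitudes_at(img, points,
                            thetas=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                            lams=[4, 8])
print(feats.shape)   # 3 points x 4 orientations x 2 wavelengths
```

In the paper these magnitudes would be concatenated with the 14 FAPs to form the feature vector fed to the classifier.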
Hole Feature on Conical Face Recognition for Turning Part Model
NASA Astrophysics Data System (ADS)
Zubair, A. F.; Abu Mansor, M. S.
2018-03-01
Computer Aided Process Planning (CAPP) is the bridge between CAD and CAM, and pre-processing of the CAD data in the CAPP system is essential. For a CNC turning part, the conical faces of the part model must be recognised in addition to cylindrical and planar faces. As the sine-cosine structure of the cone radius differs from model to model, face identification in automatic feature recognition of the part model needs special attention. This paper focuses on hole features on conical faces that can be detected by the CAD solid modeller ACIS via SAT files. Detection algorithms for face topology were generated and compared. The study shows different face setups for similar conical part models with different hole-type features. Three types of holes were compared, and the differences between merged and unmerged faces were studied.
Zachariou, Valentinos; Nikas, Christine V; Safiullah, Zaid N; Gotts, Stephen J; Ungerleider, Leslie G
2017-08-01
Human face recognition is often attributed to configural processing; namely, processing the spatial relationships among the features of a face. If configural processing depends on fine-grained spatial information, do visuospatial mechanisms within the dorsal visual pathway contribute to this process? We explored this question in human adults using functional magnetic resonance imaging and transcranial magnetic stimulation (TMS) in a same-different face detection task. Within localized, spatial-processing regions of the posterior parietal cortex, configural face differences led to significantly stronger activation compared to featural face differences, and the magnitude of this activation correlated with behavioral performance. In addition, detection of configural relative to featural face differences led to significantly stronger functional connectivity between the right FFA and the spatial processing regions of the dorsal stream, whereas detection of featural relative to configural face differences led to stronger functional connectivity between the right FFA and left FFA. Critically, TMS centered on these parietal regions impaired performance on configural but not featural face difference detections. We conclude that spatial mechanisms within the dorsal visual pathway contribute to the configural processing of facial features and, more broadly, that the dorsal stream may contribute to the veridical perception of faces. Published by Oxford University Press 2016.
Hwang, Wonjun; Wang, Haitao; Kim, Hyunwoo; Kee, Seok-Cheol; Kim, Junmo
2011-04-01
The authors present a robust face recognition system for large-scale data sets taken under uncontrolled illumination variations. The proposed face recognition system consists of a novel illumination-insensitive preprocessing method, hybrid Fourier-based facial feature extraction, and a score fusion scheme. First, in the preprocessing stage, a face image is transformed into an illumination-insensitive image, called an "integral normalized gradient image," by normalizing and integrating the smoothed gradients of a facial image. Then, for feature extraction by complementary classifiers, multiple face models based upon hybrid Fourier features are applied. The hybrid Fourier features are extracted from different Fourier domains in different frequency bandwidths, and each feature is individually classified by linear discriminant analysis. In addition, multiple face models are generated from plural normalized face images that have different eye distances. Finally, to combine scores from the multiple complementary classifiers, a log-likelihood-ratio-based score fusion scheme is applied. The proposed system is evaluated using the Face Recognition Grand Challenge (FRGC) experimental protocols; FRGC is a large publicly available data set. Experimental results on the FRGC version 2.0 data sets show that the proposed method achieves an average verification rate of 81.49% on 2-D face images under various environmental variations such as illumination changes, expression changes, and elapsed time.
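The idea of an "integral normalized gradient image" (normalize smoothed gradients by a local magnitude term, then integrate) can be sketched loosely as below. This is only an illustration of the general recipe; the paper's exact smoothing kernels, normalization, and integration scheme differ, and the box filter and cumulative-sum "integration" here are stand-ins.

```python
import numpy as np

def box3(img):
    """3x3 box smoothing via shifted slices of an edge-padded image."""
    p = np.pad(img, 1, mode='edge')
    return sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0

def integral_normalized_gradient(img, eps=1e-3):
    """Smooth the gradients, normalise them by the local gradient magnitude
    (suppressing multiplicative illumination), then 'integrate' the
    normalised field by cumulative summation."""
    gy, gx = np.gradient(img.astype(float))
    gx, gy = box3(gx), box3(gy)
    mag = box3(np.hypot(gx, gy)) + eps
    nx, ny = gx / mag, gy / mag
    return np.cumsum(nx, axis=1) + np.cumsum(ny, axis=0)

rng = np.random.default_rng(5)
face = rng.random((48, 48))        # stand-in for a face image
out = integral_normalized_gradient(face)
print(out.shape)
```

Because the gradients are divided by a smoothed local magnitude, a global brightness scaling of the input largely cancels out, which is the illumination-insensitivity the preprocessing aims for.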
Attractiveness as a Function of Skin Tone and Facial Features: Evidence from Categorization Studies.
Stepanova, Elena V; Strube, Michael J
2018-01-01
Participants rated the attractiveness and racial typicality of male faces varying in their facial features from Afrocentric to Eurocentric and in skin tone from dark to light in two experiments. Experiment 1 provided evidence that facial features and skin tone have an interactive effect on perceptions of attractiveness and mixed-race faces are perceived as more attractive than single-race faces. Experiment 2 further confirmed that faces with medium levels of skin tone and facial features are perceived as more attractive than faces with extreme levels of these factors. Black phenotypes (combinations of dark skin tone and Afrocentric facial features) were rated as more attractive than White phenotypes (combinations of light skin tone and Eurocentric facial features); ambiguous faces (combinations of Afrocentric and Eurocentric physiognomy) with medium levels of skin tone were rated as the most attractive in Experiment 2. Perceptions of attractiveness were relatively independent of racial categorization in both experiments.
The surprisingly high human efficiency at learning to recognize faces
Peterson, Matthew F.; Abbey, Craig K.; Eckstein, Miguel P.
2009-01-01
We investigated the ability of humans to optimize face recognition performance through rapid learning of individual relevant features. We created artificial faces with discriminating visual information heavily concentrated in single features (nose, eyes, chin or mouth). In each of 2500 learning blocks a feature was randomly selected and retained over the course of four trials, during which observers identified randomly sampled, noisy face images. Observers learned the discriminating feature through indirect feedback, leading to large performance gains. Performance was compared to a learning Bayesian ideal observer, resulting in unexpectedly high learning compared to previous studies with simpler stimuli. We explore various explanations and conclude that the higher learning measured with faces cannot be driven by adaptive eye movement strategies but can be mostly accounted for by suboptimalities in human face discrimination when observers are uncertain about the discriminating feature. We show that an initial bias of humans to use specific features to perform the task even though they are informed that each of four features is equally likely to be the discriminatory feature would lead to seemingly supra-optimal learning. We also examine the possibility of inefficient human integration of visual information across the spatially distributed facial features. Together, the results suggest that humans can show large performance improvement effects in discriminating faces as they learn to identify the feature containing the discriminatory information. PMID:19000918
The Roles of Featural and Configural Face Processing in Snap Judgments of Sexual Orientation
Tabak, Joshua A.; Zayas, Vivian
2012-01-01
Research has shown that people are able to judge sexual orientation from faces with above-chance accuracy, but little is known about how these judgments are formed. Here, we investigated the importance of well-established face processing mechanisms in such judgments: featural processing (e.g., an eye) and configural processing (e.g., spatial distance between eyes). Participants judged sexual orientation from faces presented for 50 milliseconds either upright, which recruits both configural and featural processing, or upside-down, when configural processing is strongly impaired and featural processing remains relatively intact. Although participants judged women’s and men’s sexual orientation with above-chance accuracy for upright faces and for upside-down faces, accuracy for upside-down faces was significantly reduced. The reduced judgment accuracy for upside-down faces indicates that configural face processing significantly contributes to accurate snap judgments of sexual orientation. PMID:22629321
Feature instructions improve face-matching accuracy
Bindemann, Markus
2018-01-01
Identity comparisons of photographs of unfamiliar faces are prone to error but important for applied settings, such as person identification at passport control. Finding techniques to improve face-matching accuracy is therefore an important contemporary research topic. This study investigated whether matching accuracy can be improved by instruction to attend to specific facial features. Experiment 1 showed that instruction to attend to the eyebrows enhanced matching accuracy for optimized same-day same-race face pairs but not for other-race faces. By contrast, accuracy was unaffected by instruction to attend to the eyes, and declined with instruction to attend to ears. Experiment 2 replicated the eyebrow-instruction improvement with a different set of same-race faces, comprising both optimized same-day and more challenging different-day face pairs. These findings suggest that instruction to attend to specific features can enhance face-matching accuracy, but feature selection is crucial and generalization across face sets may be limited. PMID:29543822
Adapting Local Features for Face Detection in Thermal Image.
Ma, Chao; Trung, Ngo Thanh; Uchiyama, Hideaki; Nagahara, Hajime; Shimada, Atsushi; Taniguchi, Rin-Ichiro
2017-11-27
A thermal camera captures the temperature distribution of a scene as a thermal image. In thermal images, the facial appearances of different people under different lighting conditions are similar, because facial temperature distribution is generally constant and unaffected by lighting conditions. This similarity in face appearance is advantageous for face detection. To detect faces in thermal images, cascade classifiers with Haar-like features are generally used; however, few studies have explored local features for face detection in thermal images. In this paper, we introduce two approaches relying on local features for face detection in thermal images. First, we create new feature types by extending Multi-Block LBP, considering a margin around the reference value and the generally constant distribution of facial temperature; this makes the features more robust to image noise and more effective for face detection in thermal images. Second, we propose an AdaBoost-based training method to obtain cascade classifiers with multiple types of local features; because these feature types have different advantages, this enhances the descriptive power of the local features. We conducted a hold-out validation experiment and a field experiment. In the hold-out validation experiment, we captured a dataset from 20 participants (14 males and 6 females). For each participant, we captured 420 images with 10 variations in camera distance, 21 poses, and 2 appearances (with and without glasses). We compared the performance of cascade classifiers trained with different feature sets; the results showed that the proposed approaches effectively improve the performance of face detection in thermal images. In the field experiment, we compared face detection performance in realistic scenes using thermal and RGB images, and discussed the results.
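The Multi-Block LBP comparison with a margin described above can be sketched roughly as follows. The 3x3 block layout and the toy patch are illustrative assumptions; the integral-image acceleration and the AdaBoost cascade training are omitted.

```python
import numpy as np

def mb_lbp_code(img, top, left, bs, margin=0.0):
    """Multi-Block LBP: compare the mean of each of 8 neighbour blocks with
    the centre block's mean. With `margin` > 0 a bit is set only if the
    neighbour exceeds the centre by more than the margin, which makes the
    comparison tolerant to small temperature/intensity noise."""
    def block_mean(r, c):
        return img[r:r + bs, c:c + bs].mean()

    centre = block_mean(top + bs, left + bs)
    # 8 neighbour blocks of the 3x3 grid, clockwise from top-left:
    offs = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (i, j) in enumerate(offs):
        if block_mean(top + i * bs, left + j * bs) > centre + margin:
            code |= 1 << bit
    return code

img = np.zeros((9, 9))
img[:, 6:] = 1.0          # bright right third of a toy 9x9 patch
print(mb_lbp_code(img, 0, 0, 3))   # bits 2,3,4 set -> 28
```

Raising `margin` suppresses codes produced by small fluctuations, which is useful in thermal images where neighbouring facial regions differ only slightly in temperature.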
ERIC Educational Resources Information Center
Goffaux, Valerie; Rossion, Bruno
2007-01-01
Upside-down inversion disrupts the processing of spatial relations between the features of a face, while largely preserving local feature analysis. However, recent studies on face inversion failed to observe a clear dissociation between relational and featural processing. To resolve these discrepancies and clarify how inversion affects face…
NASA Astrophysics Data System (ADS)
Willson, D.; Rask, J. C.; George, S. C.; de Leon, P.; Bonaccorsi, R.; Blank, J.; Slocombe, J.; Silburn, K.; Steele, H.; Gargarno, M.; McKay, C. P.
2014-01-01
We conducted simulated Apollo Extravehicular Activities (EVAs) at the 3.45 Ga Australian 'Pilbara Dawn of Life' trail (Western Australia) with field and non-field scientists using the University of North Dakota's NDX-1 pressurizable space suit, to assess how effectively scientist astronauts can employ their field observation skills while looking for stromatolite fossil evidence. Off-world scientist astronauts will face space suit limitations in vision, human sense perception, mobility, dexterity, and suit fit, as well as time limitations and the psychological fear of death from accidents; the resulting physical fatigue reduces field science performance. Finding evidence of visible biosignatures of past life, such as stromatolite fossils, on Mars would be a very significant discovery. Our preliminary trials showed that in simulated EVAs, 25% of stromatolite fossil evidence is missed and more identifications are incorrect compared with ground-truth surveys, but providing quality characterization descriptions becomes less affected by simulated EVA limitations as the scientific importance of the features increases. Field scientists focused more on capturing high-value characterization detail from the rock features, whereas non-field scientists focused more on finding many features. We identified technologies and training to improve off-world field science performance. The data collected are also useful for the requirements of NASA's "EVA performance and crew health" research program, but further work will be required to confirm the conclusions.
Recovering faces from memory: the distracting influence of external facial features.
Frowd, Charlie D; Skelton, Faye; Atherton, Chris; Pitchford, Melanie; Hepton, Gemma; Holden, Laura; McIntyre, Alex H; Hancock, Peter J B
2012-06-01
Recognition memory for unfamiliar faces is facilitated when contextual cues (e.g., head pose, background environment, hair and clothing) are consistent between study and test. By contrast, inconsistencies in external features, especially hair, promote errors in unfamiliar face-matching tasks. For the construction of facial composites, as carried out by witnesses and victims of crime, the role of external features (hair, ears, and neck) is less clear, although research does suggest their involvement. Here, over three experiments, we investigate the impact of external features for recovering facial memories using a modern, recognition-based composite system, EvoFIT. Participant-constructors inspected an unfamiliar target face and, one day later, repeatedly selected items from arrays of whole faces, with "breeding," to "evolve" a composite with EvoFIT; further participants (evaluators) named the resulting composites. In Experiment 1, the important internal-features (eyes, brows, nose, and mouth) were constructed more identifiably when the visual presence of external features was decreased by Gaussian blur during construction: higher blur yielded more identifiable internal-features. In Experiment 2, increasing the visible extent of external features (to match the target's) in the presented face-arrays also improved internal-features quality, although less so than when external features were masked throughout construction. Experiment 3 demonstrated that masking external-features promoted substantially more identifiable images than using the previous method of blurring external-features. Overall, the research indicates that external features are a distractive rather than a beneficial cue for face construction; the results also provide a much better method to construct composites, one that should dramatically increase identification of offenders.
Familiarity facilitates feature-based face processing.
Visconti di Oleggio Castello, Matteo; Wheeler, Kelsey G; Cipolli, Carlo; Gobbini, M Ida
2017-01-01
Recognition of personally familiar faces is remarkably efficient, effortless and robust. We asked if feature-based face processing facilitates detection of familiar faces by testing the effect of face inversion on a visual search task for familiar and unfamiliar faces. Because face inversion disrupts configural and holistic face processing, we hypothesized that inversion would diminish the familiarity advantage to the extent that it is mediated by such processing. Subjects detected personally familiar and stranger target faces in arrays of two, four, or six face images. Subjects showed significant facilitation of personally familiar face detection for both upright and inverted faces. The effect of familiarity on target absent trials, which involved only rejection of unfamiliar face distractors, suggests that familiarity facilitates rejection of unfamiliar distractors as well as detection of familiar targets. The preserved familiarity effect for inverted faces suggests that facilitation of face detection afforded by familiarity reflects mostly feature-based processes.
Structured Kernel Dictionary Learning with Correlation Constraint for Object Recognition.
Wang, Zhengjue; Wang, Yinghua; Liu, Hongwei; Zhang, Hao
2017-06-21
In this paper, we propose a new discriminative non-linear dictionary learning approach, called correlation-constrained structured kernel KSVD, for object recognition. The objective function for dictionary learning contains a reconstructive term and a discriminative term. In the reconstructive term, signals are implicitly non-linearly mapped into a space in which a structured kernel dictionary is established, each sub-dictionary of which lies in the span of the mapped signals from the corresponding class. In the discriminative term, by analyzing the classification mechanism, a correlation constraint is proposed in kernel form, constraining the correlations between different discriminative codes and restricting the coefficient vectors to be transformed into a feature space in which the features are highly correlated within classes and nearly independent between classes. The objective function is optimized by the proposed structured kernel KSVD. During the classification stage, the specific form of the discriminative feature need not be known; the inner product of the discriminative feature, with the kernel matrix embedded, is available and is suitable for a linear SVM classifier. Experimental results demonstrate that the proposed approach outperforms many state-of-the-art dictionary learning approaches for face, scene, and synthetic aperture radar (SAR) vehicle target recognition.
Face imagery is based on featural representations.
Lobmaier, Janek S; Mast, Fred W
2008-01-01
The effect of imagery on featural and configural face processing was investigated using blurred and scrambled faces. By means of blurring, featural information is reduced; by scrambling a face into its constituent parts configural information is lost. Twenty-four participants learned ten faces together with the sound of a name. In following matching-to-sample tasks participants had to decide whether an auditory presented name belonged to a visually presented scrambled or blurred face in two experimental conditions. In the imagery condition, the name was presented prior to the visual stimulus and participants were required to imagine the corresponding face as clearly and vividly as possible. In the perception condition name and test face were presented simultaneously, thus no facilitation via mental imagery was possible. Analyses of the hit values showed that in the imagery condition scrambled faces were recognized significantly better than blurred faces whereas there was no such effect for the perception condition. The results suggest that mental imagery activates featural representations more than configural representations.
Robust Point Set Matching for Partial Face Recognition.
Weng, Renliang; Lu, Jiwen; Tan, Yap-Peng
2016-03-01
Over the past three decades, a number of face recognition methods have been proposed in computer vision, and most of them use holistic face images for person identification. In many real-world scenarios especially some unconstrained environments, human faces might be occluded by other objects, and it is difficult to obtain fully holistic face images for recognition. To address this, we propose a new partial face recognition approach to recognize persons of interest from their partial faces. Given a pair of gallery image and probe face patch, we first detect keypoints and extract their local textural features. Then, we propose a robust point set matching method to discriminatively match these two extracted local feature sets, where both the textural information and geometrical information of local features are explicitly used for matching simultaneously. Finally, the similarity of two faces is converted as the distance between these two aligned feature sets. Experimental results on four public face data sets show the effectiveness of the proposed approach.
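The keypoint-matching step above can be illustrated with a plain nearest-neighbour descriptor matcher using a ratio test. This is a simplified stand-in for the paper's robust point set matching, which additionally exploits geometric consistency between keypoints; the descriptors and dimensions below are synthetic.

```python
import numpy as np

def match_descriptors(d1, d2, ratio=0.8):
    """Nearest-neighbour descriptor matching with a ratio test:
    accept a match only when the best candidate is clearly closer
    than the second-best (unambiguous matches)."""
    dists = np.linalg.norm(d1[:, None, :] - d2[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(dists):
        j, k = np.argsort(row)[:2]       # best and second-best candidates
        if row[j] < ratio * row[k]:
            matches.append((i, int(j)))
    return matches

rng = np.random.default_rng(3)
gallery = rng.standard_normal((10, 16))   # descriptors of 10 gallery keypoints
# A probe face patch whose keypoints 0..2 correspond to gallery 2, 5, 7:
probe = gallery[[2, 5, 7]] + 0.05 * rng.standard_normal((3, 16))
print(match_descriptors(probe, gallery))  # [(0, 2), (1, 5), (2, 7)]
```

In the full approach, the set of accepted correspondences would then be scored jointly with the keypoints' geometric layout, and the distance between the aligned feature sets used as the face similarity.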
Marečková, Klára; Chakravarty, M Mallar; Huang, Mei; Lawrence, Claire; Leonard, Gabriel; Perron, Michel; Pike, Bruce G; Richer, Louis; Veillette, Suzanne; Pausova, Zdenka; Paus, Tomáš
2013-10-01
In our previous work, we described facial features associated with a successful recognition of the sex of the face (Marečková et al., 2011). These features were based on landmarks placed on the surface of faces reconstructed from magnetic resonance (MR) images; their position was therefore influenced by both soft tissue (fat and muscle) and the bone structure of the skull. Here, we ask whether bone structure has dissociable influences on observers' identification of the sex of the face. To answer this question, we used a novel method of studying skull morphology using MR images and explored the relationship between skull features, facial features, and sex recognition in a large sample of adolescents (n=876, including 475 adolescents from our original report). To determine whether skull features mediate the relationship between facial features and identification accuracy, we performed mediation analysis using bootstrapping. In males, skull features fully mediated the relationship between facial features and sex judgments. In females, the skull mediated this relationship only after adjusting facial features for the amount of body fat (estimated with bioimpedance). While body fat had a very slight positive influence on correct sex judgments about male faces, there was a robust negative influence of body fat on correct sex judgments about female faces. Overall, these results suggest that craniofacial bone structure is essential for correct sex judgments about a male face. In females, body fat negatively influences the accuracy of sex judgments, and craniofacial bone structure alone cannot explain the relationship between facial features and identification of a face as female. Copyright © 2013 Elsevier Inc. All rights reserved.
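The bootstrap mediation test used above can be sketched as a percentile-bootstrap confidence interval for the indirect effect a*b of a predictor on an outcome through a mediator. The data below are synthetic and the variable roles (feature score, skull measure, judgment accuracy) are illustrative assumptions, not the study's actual measures.

```python
import numpy as np

def bootstrap_indirect(x, m, y, n_boot=2000, seed=0):
    """Percentile-bootstrap 95% CI for the indirect effect a*b of
    x -> m -> y, where a is the slope of m on x and b is the slope of
    y on m controlling for x."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                 # resample with replacement
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]                # m ~ x
        X = np.column_stack([ms, xs, np.ones(n)])   # y ~ m + x
        b = np.linalg.lstsq(X, ys, rcond=None)[0][0]
        est.append(a * b)
    lo, hi = np.percentile(est, [2.5, 97.5])
    return lo, hi

rng = np.random.default_rng(4)
x = rng.standard_normal(300)                  # e.g. a facial-feature score
m = 0.6 * x + 0.5 * rng.standard_normal(300)  # mediator, e.g. a skull feature
y = 0.7 * m + 0.5 * rng.standard_normal(300)  # outcome, e.g. judgment accuracy
lo, hi = bootstrap_indirect(x, m, y)
print(lo > 0)   # a CI excluding zero supports mediation
```

A confidence interval for a*b that excludes zero is the usual bootstrap criterion for a significant indirect (mediated) effect.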
Perception and Processing of Faces in the Human Brain Is Tuned to Typical Feature Locations
Schwarzkopf, D. Samuel; Alvarez, Ivan; Lawson, Rebecca P.; Henriksson, Linda; Kriegeskorte, Nikolaus; Rees, Geraint
2016-01-01
Faces are salient social stimuli whose features attract a stereotypical pattern of fixations. The implications of this gaze behavior for perception and brain activity are largely unknown. Here, we characterize and quantify a retinotopic bias implied by typical gaze behavior toward faces, which leads to eyes and mouth appearing most often in the upper and lower visual field, respectively. We found that the adult human visual system is tuned to these contingencies. In two recognition experiments, recognition performance for isolated face parts was better when they were presented at typical, rather than reversed, visual field locations. The recognition cost of reversed locations was equal to ∼60% of that for whole face inversion in the same sample. Similarly, an fMRI experiment showed that patterns of activity evoked by eye and mouth stimuli in the right inferior occipital gyrus could be separated with significantly higher accuracy when these features were presented at typical, rather than reversed, visual field locations. Our findings demonstrate that human face perception is determined not only by the local position of features within a face context, but by whether features appear at the typical retinotopic location given normal gaze behavior. Such location sensitivity may reflect fine-tuning of category-specific visual processing to retinal input statistics. Our findings further suggest that retinotopic heterogeneity might play a role in face inversion effects and in the understanding of conditions affecting gaze behavior toward faces, such as autism spectrum disorders and congenital prosopagnosia. SIGNIFICANCE STATEMENT Faces attract our attention and trigger stereotypical patterns of visual fixations, concentrating on inner features, like eyes and mouth. Here we show that the visual system represents face features better when they are shown at retinal positions where they typically fall during natural vision. When facial features were shown at typical (rather than reversed) visual field locations, they were discriminated better by humans and could be decoded with higher accuracy from brain activity patterns in the right occipital face area. This suggests that brain representations of face features do not cover the visual field uniformly. It may help us understand the well-known face-inversion effect and conditions affecting gaze behavior toward faces, such as prosopagnosia and autism spectrum disorders. PMID:27605606
Evidence of a Shift from Featural to Configural Face Processing in Infancy
ERIC Educational Resources Information Center
Schwarzer, Gudrun; Zauner, Nicola; Jovanovic, Bianca
2007-01-01
Two experiments examined whether 4-, 6-, and 10-month-old infants process natural looking faces by feature, i.e. processing internal facial features independently of the facial context or holistically by processing the features in conjunction with the facial context. Infants were habituated to two faces and looking time was measured. After…
1997-11-19
STS-87 Payload Specialist Leonid Kadenyuk of the National Space Agency of Ukraine is assisted with final preparations before launch in the white room at Launch Pad 39B by Danny Wyatt, NASA quality assurance specialist, at left; George Schram, USA mechanical technician, facing Kadenyuk; and Travis Thompson, USA orbiter vehicle closeout chief, at right. STS-87 is the fourth flight of the United States Microgravity Payload and Spartan-201. The 16-day mission will include the Collaborative Ukrainian Experiment (CUE), a collection of 10 plant space biology experiments that will fly in Columbia’s middeck and will feature an educational component that involves evaluating the effects of microgravity on Brassica rapa seedlings
Increased dead space in face mask continuous positive airway pressure in neonates.
Hishikawa, Kenji; Fujinaga, Hideshi; Ito, Yushi
2017-01-01
Continuous positive airway pressure (CPAP) by face mask is commonly performed in newborn resuscitation. In a CPAP model study, we evaluated the effect of face mask CPAP on system dead space, hypothesizing that face mask CPAP increases dead space. We estimated the volume of the inner space of the mask. We devised a face mask CPAP model, in which the outlet of the mask was covered with plastic, and three modified face mask CPAP models, in which holes were drilled near the cushion of the covered face mask to alter the air exit. We passed a continuous flow of 21% oxygen through each model and controlled the inner pressure to 5 cmH2O by adjusting the flow-relief valve. To evaluate ventilation in the inner space of each model, we measured the oxygen concentration rise time, that is, the time needed for the oxygen concentration of each model to reach 35% after the oxygen concentration of the continuous flow was raised from 21% to 40%. The volume of the inner space of the face mask was 38.3 ml. The oxygen concentration rise time in the face mask CPAP model was significantly longer, at various continuous flow rates and points in the inner space of the face mask, than in the modified face mask CPAP models. Our study indicates that face mask CPAP leads to an increase in dead space and a decrease in ventilation efficiency under certain circumstances. Pediatr Pulmonol. 2017;52:107-111. © 2016 Wiley Periodicals, Inc.
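The reported rise-time measurement can be approximated with a first-order well-mixed washout model: dC/dt = (Q/V)(Cin − C). This is only a sketch under stated assumptions (a perfectly mixed inner space and an illustrative 8 L/min flow rate); it is not the authors' experimental protocol, and the function name is made up here:

```python
import math

def rise_time(volume_ml, flow_ml_per_s, c0=21.0, c_in=40.0, c_target=35.0):
    """Time for the O2 concentration in a well-mixed space of volume V to
    reach c_target after the inflow concentration steps from c0 to c_in (%).
    Solving dC/dt = (Q/V)*(c_in - C) gives
    t = (V/Q) * ln((c_in - c0) / (c_in - c_target))."""
    return (volume_ml / flow_ml_per_s) * math.log(
        (c_in - c0) / (c_in - c_target))

# 38.3 ml inner mask volume (from the study); 8 L/min flow is assumed.
t = rise_time(38.3, 8000.0 / 60.0)
```

Under this model the rise time scales with V/Q, which is consistent with the finding that a larger effective dead space (poorer ventilation of the inner space) lengthens the rise time.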
The Van Sant AVHRR image projected onto a rhombicosidodecahedron
NASA Astrophysics Data System (ADS)
Baron, Michael; Morain, Stan
1996-03-01
IDEATION, a design and development corporation in Santa Fe, New Mexico, has modeled Tom Van Sant's "The Earth From Space" image onto a rhombicosidodecahedron. "The Earth From Space" image, produced by the Geosphere® Project in Santa Monica, California, was developed from hundreds of AVHRR pictures and published as a Mercator projection. IDEATION, utilizing a digitized Robinson projection, fitted the image to foldable paper components which, when interconnected by means of a unique tabular system, result in a rhombicosidodecahedron representation of the Earth exposing 30 square, 20 triangular, and 12 pentagonal faces. Because the resulting model is not spherical, the borders of the represented features were rectified to match the intersecting planes of the model's faces. The resulting product will be licensed and commercially produced for use by elementary and secondary students. Market research indicates the model will be used both to demonstrate geometric principles and to teach the fundamental spatial relations of the Earth's lands and oceans.
The Cliff Reconnaissance Vehicle: a tool to improve astronaut exploration efficiency.
Souchier, Alain
2014-05-01
The close examination of cliff strata on Mars may reveal important information about conditions that existed in the past on that planet. To have access to such difficult-to-reach locations, the Association Planète Mars (France) has, since 2001, been experimenting with designs of manually operated, instrumented vehicles capable of being lowered down the faces of cliffs. The latest tests in the series in which the Cliff Reconnaissance Vehicle (CRV) or Cliffbot was used were conducted as part of the Austrian Space Forum's MARS2013 field analog project in Morocco in February 2013. Experimentation centered on vehicle configuration for maximum all-terrain capabilities; operational procedures, which included use while the operator was wearing an analog space suit; and imaging, mapping, and geological/biological feature detection capabilities. The exercise demonstrated that Cliffbot is capable of examining hard-to-reach rock strata in cliff faces but that it needs further mechanical modification to improve its ability to overcome some particular terrain obstacles and situational awareness by the operator.
Uyghur face recognition method combining 2DDCT with POEM
NASA Astrophysics Data System (ADS)
Yi, Lihamu; Ya, Ermaimaiti
2017-11-01
In this paper, in light of the reduced recognition rate and poor robustness of Uyghur face recognition under illumination changes and partial occlusion, a Uyghur face recognition method combining the Two-Dimensional Discrete Cosine Transform (2DDCT) with Patterns of Oriented Edge Magnitudes (POEM) was proposed. Firstly, the Uyghur face images were divided into an 8×8 block matrix, and the blocks were converted into the frequency domain using the 2DDCT; secondly, the images were compressed to exclude the perceptually non-sensitive medium- and high-frequency parts, reducing the feature dimensions needed for the Uyghur face images and, in turn, the amount of computation; thirdly, the corresponding POEM histograms of the Uyghur face images were obtained by calculating the POEM feature quantities; fourthly, the POEM histograms were cascaded together as the texture histogram of the center feature point to obtain the texture features of the Uyghur face feature points; finally, classification of the training samples was carried out using a deep learning algorithm. The simulation experiment results showed that the proposed algorithm improved the recognition rate on the self-built Uyghur face database, greatly improved the computing speed, and had strong robustness.
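The blockwise 2DDCT step can be sketched in pure Python. The 8×8 block size follows the abstract; the orthonormal scaling, the `keep` truncation parameter, and the function names are illustrative choices, not the paper's code:

```python
import math

def dct2_8x8(block):
    """Naive 2-D DCT-II of an 8x8 block (orthonormal scaling)."""
    N = 8
    def a(k):  # per-axis normalization factor
        return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for i in range(N):
                for j in range(N):
                    s += (block[i][j]
                          * math.cos((2 * i + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * j + 1) * v * math.pi / (2 * N)))
            out[u][v] = a(u) * a(v) * s
    return out

def low_freq_features(block, keep=3):
    """Keep only the upper-left (low-frequency) triangle of DCT
    coefficients -- a rough stand-in for the compression step that
    discards the higher-frequency parts of each block."""
    coeffs = dct2_8x8(block)
    return [coeffs[u][v] for u in range(keep) for v in range(keep - u)]
```

For a constant block, only the DC coefficient survives, which is why truncating the coefficient matrix compresses smooth image regions so effectively.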
Dall'Asta, Andrea; Schievano, Silvia; Bruse, Jan L; Paramasivam, Gowrishankar; Kaihura, Christine Tita; Dunaway, David; Lees, Christoph C
2017-07-01
The antenatal detection of facial dysmorphism using 3-dimensional ultrasound may raise the suspicion of an underlying genetic condition but infrequently leads to a definitive antenatal diagnosis. Despite advances in array and noninvasive prenatal testing, not all genetic conditions can be ascertained from such testing. The aim of this study was to investigate the feasibility of quantitative assessment of fetal face features using prenatal 3-dimensional ultrasound volumes and statistical shape modeling. STUDY DESIGN: Thirteen normal and 7 abnormal stored 3-dimensional ultrasound fetal face volumes were analyzed, at a median gestation of 29+4 weeks (25+0 to 36+1). The 20 3-dimensional surface meshes generated were aligned and served as input for a statistical shape model, which computed the mean 3-dimensional face shape and 3-dimensional shape variations using principal component analysis. Ten shape modes explained more than 90% of the total shape variability in the population. While the first mode accounted for overall size differences, the second highlighted shape feature changes from an overall proportionate toward a more asymmetric face shape with a wide prominent forehead and an undersized, posteriorly positioned chin. Analysis of the Mahalanobis distance in principal component analysis shape space suggested differences between normal and abnormal fetuses (median and interquartile range distance values, 7.31 ± 5.54 for the normal group vs 13.27 ± 9.82 for the abnormal group) (P = .056). This feasibility study demonstrates that objective characterization and quantification of fetal facial morphology is possible from 3-dimensional ultrasound. This technique has the potential to assist in utero diagnosis, particularly of rare conditions in which facial dysmorphology is a feature. Copyright © 2017 Elsevier Inc. All rights reserved.
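In a PCA shape space, the Mahalanobis distance of a face from the mean shape reduces to a variance-weighted norm of its per-mode scores. A minimal sketch of that metric (the function name and inputs are illustrative, not the study's code; the mesh alignment and PCA fit are omitted):

```python
import math

def mahalanobis_in_shape_space(mode_scores, mode_variances):
    """Mahalanobis distance of one shape from the mean, given its PCA
    scores b_k along each mode and the variance lambda_k of that mode:
    d = sqrt(sum_k b_k^2 / lambda_k)."""
    return math.sqrt(sum(b * b / lam
                         for b, lam in zip(mode_scores, mode_variances)))
```

Because each mode's score is divided by that mode's variance, a given displacement counts for more along a low-variance (rare) mode, which is what makes the metric useful for flagging atypical face shapes.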
The recognition of emotional expression in prosopagnosia: decoding whole and part faces.
Stephan, Blossom Christa Maree; Breen, Nora; Caine, Diana
2006-11-01
Prosopagnosia is currently viewed within the constraints of two competing theories of face recognition, one highlighting the analysis of features, the other focusing on configural processing of the whole face. This study investigated the role of feature analysis versus whole-face configural processing in the recognition of facial expression. A prosopagnosic patient, SC, made expression decisions from whole and incomplete (eyes-only and mouth-only) faces where features had been obscured. SC was impaired at recognizing some (e.g., anger, sadness, and fear), but not all (e.g., happiness) emotional expressions from the whole face. Analyses of his performance on incomplete faces indicated that his recognition of some expressions actually improved relative to his performance in the whole-face condition. We argue that in SC, interference from damaged configural processes seems to override an intact ability to utilize part-based or local feature cues.
Face recognition algorithm using extended vector quantization histogram features.
Yan, Yan; Lee, Feifei; Wu, Xueqian; Chen, Qiu
2018-01-01
In this paper, we propose a face recognition algorithm based on a combination of vector quantization (VQ) and Markov stationary features (MSF). The VQ algorithm has been shown to be an effective method for generating features; it extracts a codevector histogram as a facial feature representation for face recognition. Still, the VQ histogram features are unable to convey spatial structural information, which to some extent limits their usefulness in discrimination. To alleviate this limitation of VQ histograms, we utilize Markov stationary features (MSF) to extend the VQ histogram-based features so as to add spatial structural information. We demonstrate the effectiveness of our proposed algorithm by achieving recognition results superior to those of several state-of-the-art methods on publicly available face databases.
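The codevector-histogram step that underlies the VQ features can be sketched as follows. The codebook is assumed to be given (in practice it would be trained, e.g. with k-means), and all names are illustrative; the MSF extension is omitted:

```python
def vq_histogram(vectors, codebook):
    """Quantize each local feature vector to its nearest codevector
    (squared Euclidean distance) and count the hits per codebook entry.
    The resulting histogram is the face's VQ feature representation."""
    hist = [0] * len(codebook)
    for v in vectors:
        best = min(range(len(codebook)),
                   key=lambda k: sum((a - b) ** 2
                                     for a, b in zip(v, codebook[k])))
        hist[best] += 1
    return hist
```

Note that the histogram records only how often each codevector wins, not where in the image it wins; that lost spatial information is exactly what the Markov stationary features are brought in to recover.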
Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung
2018-01-01
Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases. PMID:29495417
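The feature-level fusion of deep and handcrafted descriptors can be sketched as normalize-then-concatenate. This is one plausible reading of the hybrid-feature step, not the authors' exact pipeline; the CNN and MLBP extractors themselves, and the SVM, are omitted:

```python
import math

def l2_normalize(v):
    """Scale a feature vector to unit L2 norm (no-op for the zero vector)."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def hybrid_features(deep_feat, handcrafted_feat):
    """Feature-level fusion: L2-normalize each descriptor so that neither
    dominates by raw scale, then concatenate into one hybrid vector that
    a downstream classifier (e.g. an SVM) can consume."""
    return l2_normalize(deep_feat) + l2_normalize(handcrafted_feat)
```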
A regularized approach for geodesic-based semisupervised multimanifold learning.
Fan, Mingyu; Zhang, Xiaoqin; Lin, Zhouchen; Zhang, Zhongfei; Bao, Hujun
2014-05-01
Geodesic distance, as an essential measurement for data dissimilarity, has been successfully used in manifold learning. However, most geodesic distance-based manifold learning algorithms have two limitations when applied to classification: 1) class information is rarely used in computing the geodesic distances between data points on manifolds and 2) little attention has been paid to building an explicit dimension reduction mapping for extracting the discriminative information hidden in the geodesic distances. In this paper, we regard geodesic distance as a kind of kernel, which maps data from linearly inseparable space to linear separable distance space. In doing this, a new semisupervised manifold learning algorithm, namely regularized geodesic feature learning algorithm, is proposed. The method consists of three techniques: a semisupervised graph construction method, replacement of original data points with feature vectors which are built by geodesic distances, and a new semisupervised dimension reduction method for feature vectors. Experiments on the MNIST, USPS handwritten digit data sets, MIT CBCL face versus nonface data set, and an intelligent traffic data set show the effectiveness of the proposed algorithm.
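Geodesic distance on a sampled manifold is typically approximated by shortest paths through a k-nearest-neighbour graph of locally Euclidean hops. A minimal Dijkstra sketch of that approximation (the graph construction and the paper's semisupervised weighting are omitted; names are illustrative):

```python
import heapq

def geodesic_distances(adj, source):
    """Dijkstra over a neighbourhood graph: approximates the geodesic
    (along-manifold) distance from `source` to every reachable node as
    the shortest path through short Euclidean edges.
    adj[i] is a list of (neighbour, edge_length) pairs."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

Viewing the resulting distance matrix as a kernel, as the paper proposes, amounts to replacing each point's coordinates with its vector of geodesic distances to the other points.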
Reduced isothermal feature set for long wave infrared (LWIR) face recognition
NASA Astrophysics Data System (ADS)
Donoso, Ramiro; San Martín, Cesar; Hermosilla, Gabriel
2017-06-01
In this paper, we introduce a new concept in the thermal face recognition area: isothermal features. This consists of a feature vector built from a thermal signature that depends on the emission of the skin of the person and its temperature. A thermal signature is the appearance of the face to infrared sensors and is unique to each person. The infrared face is decomposed into isothermal regions that present the thermal features of the face. Each isothermal region is modeled as circles within a center representing the pixel of the image, and the feature vector is composed of a maximum radius of the circles at the isothermal region. This feature vector corresponds to the thermal signature of a person. The face recognition process is built using a modification of the Expectation Maximization (EM) algorithm in conjunction with a proposed probabilistic index to the classification process. Results obtained using an infrared database are compared with typical state-of-the-art techniques showing better performance, especially in uncontrolled acquisition conditions scenarios.
Hierarchical ensemble of global and local classifiers for face recognition.
Su, Yu; Shan, Shiguang; Chen, Xilin; Gao, Wen
2009-08-01
In the literature of psychophysics and neurophysiology, many studies have shown that both global and local features are crucial for face representation and recognition. This paper proposes a novel face recognition method which exploits both global and local discriminative features. In this method, global features are extracted from the whole face images by keeping the low-frequency coefficients of Fourier transform, which we believe encodes the holistic facial information, such as facial contour. For local feature extraction, Gabor wavelets are exploited considering their biological relevance. After that, Fisher's linear discriminant (FLD) is separately applied to the global Fourier features and each local patch of Gabor features. Thus, multiple FLD classifiers are obtained, each embodying different facial evidences for face recognition. Finally, all these classifiers are combined to form a hierarchical ensemble classifier. We evaluate the proposed method using two large-scale face databases: FERET and FRGC version 2.0. Experiments show that the results of our method are impressively better than the best known results with the same evaluation protocol.
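The holistic low-frequency Fourier feature can be sketched with a direct 2-D DFT over a small grayscale image. The `keep` truncation and names are illustrative, and the paper's Gabor and FLD stages are omitted:

```python
import cmath
import math

def low_freq_fourier(img, keep=4):
    """2-D DFT of a small grayscale image (list of rows), keeping only
    the magnitudes of the first `keep` x `keep` low-frequency
    coefficients as a holistic (contour-level) feature vector."""
    H, W = len(img), len(img[0])
    feats = []
    for u in range(keep):
        for v in range(keep):
            s = 0j
            for y in range(H):
                for x in range(W):
                    s += img[y][x] * cmath.exp(
                        -2j * math.pi * (u * y / H + v * x / W))
            feats.append(abs(s))
    return feats
```

Keeping only low-frequency coefficients discards fine texture while preserving the coarse intensity layout, which matches the paper's intuition that these coefficients encode holistic information such as facial contour.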
Learning Compact Binary Face Descriptor for Face Recognition.
Lu, Jiwen; Liong, Venice Erin; Zhou, Xiuzhuang; Zhou, Jie
2015-10-01
Binary feature descriptors such as local binary patterns (LBP) and its variations have been widely used in many face recognition systems due to their excellent robustness and strong discriminative power. However, most existing binary face descriptors are hand-crafted, which require strong prior knowledge to engineer them by hand. In this paper, we propose a compact binary face descriptor (CBFD) feature learning method for face representation and recognition. Given each face image, we first extract pixel difference vectors (PDVs) in local patches by computing the difference between each pixel and its neighboring pixels. Then, we learn a feature mapping to project these pixel difference vectors into low-dimensional binary vectors in an unsupervised manner, where 1) the variance of all binary codes in the training set is maximized, 2) the loss between the original real-valued codes and the learned binary codes is minimized, and 3) binary codes evenly distribute at each learned bin, so that the redundancy information in PDVs is removed and compact binary codes are obtained. Lastly, we cluster and pool these binary codes into a histogram feature as the final representation for each face image. Moreover, we propose a coupled CBFD (C-CBFD) method by reducing the modality gap of heterogeneous faces at the feature level to make our method applicable to heterogeneous face recognition. Extensive experimental results on five widely used face datasets show that our methods outperform state-of-the-art face descriptors.
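The pixel difference vector (PDV) extraction that feeds the binary code learning can be sketched directly. The 8-neighbour layout is the usual choice; the learned low-dimensional binary projection and the pooling stage are omitted:

```python
def pixel_difference_vector(img, y, x):
    """PDV at (y, x): the differences between each of the 8 neighbouring
    pixels and the centre pixel, in raster order. These raw vectors are
    what CBFD projects into compact binary codes."""
    offs = [(-1, -1), (-1, 0), (-1, 1),
            (0, -1),           (0, 1),
            (1, -1),  (1, 0),  (1, 1)]
    return [img[y + dy][x + dx] - img[y][x] for dy, dx in offs]
```

Subtracting the centre pixel makes each PDV invariant to a constant brightness offset, which is part of why descriptors built on local differences are robust to illumination change.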
Expectation and Surprise Determine Neural Population Responses in the Ventral Visual Stream
Egner, Tobias; Monti, Jim M.; Summerfield, Christopher
2014-01-01
Visual cortex is traditionally viewed as a hierarchy of neural feature detectors, with neural population responses being driven by bottom-up stimulus features. Conversely, “predictive coding” models propose that each stage of the visual hierarchy harbors two computationally distinct classes of processing unit: representational units that encode the conditional probability of a stimulus and provide predictions to the next lower level; and error units that encode the mismatch between predictions and bottom-up evidence, and forward prediction error to the next higher level. Predictive coding therefore suggests that neural population responses in category-selective visual regions, like the fusiform face area (FFA), reflect a summation of activity related to prediction (“face expectation”) and prediction error (“face surprise”), rather than a homogenous feature detection response. We tested the rival hypotheses of the feature detection and predictive coding models by collecting functional magnetic resonance imaging data from the FFA while independently varying both stimulus features (faces vs houses) and subjects’ perceptual expectations regarding those features (low vs medium vs high face expectation). The effects of stimulus and expectation factors interacted, whereby FFA activity elicited by face and house stimuli was indistinguishable under high face expectation and maximally differentiated under low face expectation. Using computational modeling, we show that these data can be explained by predictive coding but not by feature detection models, even when the latter are augmented with attentional mechanisms. Thus, population responses in the ventral visual stream appear to be determined by feature expectation and surprise rather than by stimulus features per se. PMID:21147999
Cost-Sensitive Local Binary Feature Learning for Facial Age Estimation.
Lu, Jiwen; Liong, Venice Erin; Zhou, Jie
2015-12-01
In this paper, we propose a cost-sensitive local binary feature learning (CS-LBFL) method for facial age estimation. Unlike the conventional facial age estimation methods that employ hand-crafted descriptors or holistically learned descriptors for feature representation, our CS-LBFL method learns discriminative local features directly from raw pixels for face representation. Motivated by the fact that facial age estimation is a cost-sensitive computer vision problem and local binary features are more robust to illumination and expression variations than holistic features, we learn a series of hashing functions to project raw pixel values extracted from face patches into low-dimensional binary codes, where binary codes with similar chronological ages are projected as close as possible, and those with dissimilar chronological ages are projected as far as possible. Then, we pool and encode these local binary codes within each face image as a real-valued histogram feature for face representation. Moreover, we propose a cost-sensitive local binary multi-feature learning method to jointly learn multiple sets of hashing functions using face patches extracted from different scales to exploit complementary information. Our methods achieve competitive performance on four widely used face aging data sets.
Nishino, Ken; Nakamura, Mutsuko; Matsumoto, Masayuki; Tanno, Osamu; Nakauchi, Shigeki
2011-03-28
Light reflected from an object's surface contains much information about its physical and chemical properties. Changes in the physical properties of an object are barely detectable in spectra. Conventional trichromatic systems, on the other hand, cannot detect most spectral features because spectral information is compressively represented as trichromatic signals forming a three-dimensional subspace. We propose a method for designing a filter that optically modulates a camera's spectral sensitivity to find an alternative subspace highlighting an object's spectral features more effectively than the original trichromatic space. We designed and developed a filter that detects cosmetic foundations on human face. Results confirmed that the filter can visualize and nondestructively inspect the foundation distribution.
Marečková, Klára; Weinbrand, Zohar; Chakravarty, M Mallar; Lawrence, Claire; Aleong, Rosanne; Leonard, Gabriel; Perron, Michel; Pike, G Bruce; Richer, Louis; Veillette, Suzanne; Pausova, Zdenka; Paus, Tomáš
2011-11-01
Sex identification of a face is essential for social cognition. Still, perceptual cues indicating the sex of a face, and mechanisms underlying their development, remain poorly understood. Previously, our group described objective age- and sex-related differences in faces of healthy male and female adolescents (12-18 years of age), as derived from magnetic resonance images (MRIs) of the adolescents' heads. In this study, we presented these adolescent faces to 60 female raters to determine which facial features most reliably predicted subjective sex identification. Identification accuracy correlated highly with specific MRI-derived facial features (e.g. broader forehead, chin, jaw, and nose). Facial features that most reliably cued male identity were associated with plasma levels of testosterone (above and beyond age). Perceptible sex differences in face shape are thus associated with specific facial features whose emergence may be, in part, driven by testosterone. Copyright © 2011 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Chen, Hai-Wen; McGurr, Mike
2016-05-01
We have developed a new way for detection and tracking of human full-body and body-parts with color (intensity) patch morphological segmentation and adaptive thresholding for security surveillance cameras. An adaptive threshold scheme has been developed for dealing with body size changes, illumination condition changes, and cross camera parameter changes. Tests with the PETS 2009 and 2014 datasets show that we can obtain high probability of detection and low probability of false alarm for full-body. Test results indicate that our human full-body detection method can considerably outperform the current state-of-the-art methods in both detection performance and computational complexity. Furthermore, in this paper, we have developed several methods using color features for detection and tracking of human body-parts (arms, legs, torso, and head, etc.). For example, we have developed a human skin color sub-patch segmentation algorithm by first conducting a RGB to YIQ transformation and then applying a Subtractive I/Q image Fusion with morphological operations. With this method, we can reliably detect and track human skin color related body-parts such as face, neck, arms, and legs. Reliable body-parts (e.g. head) detection allows us to continuously track the individual person even in the case that multiple closely spaced persons are merged. Accordingly, we have developed a new algorithm to split a merged detection blob back to individual detections based on the detected head positions. Detected body-parts also allow us to extract important local constellation features of the body-parts positions and angles related to the full-body. These features are useful for human walking gait pattern recognition and human pose (e.g. standing or falling down) estimation for potential abnormal behavior and accidental event detection, as evidenced with our experimental tests. 
Furthermore, based on the reliable head (face) tracking, we have applied a super-resolution algorithm to enhance the face resolution for improved human face recognition performance.
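The RGB-to-YIQ conversion uses the standard NTSC matrix; the subtractive I/Q fusion below is a simplified per-pixel reading of the described skin segmentation (the morphological operations and thresholding policy are omitted, and `skin_score` is a name invented here):

```python
def rgb_to_yiq(r, g, b):
    """Standard NTSC RGB -> YIQ transform (luma plus two chroma axes)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q

def skin_score(r, g, b):
    """Subtractive I/Q fusion: skin tones sit at high I and modest Q, so
    I - Q gives a simple per-pixel skin likelihood to threshold before
    the morphological clean-up."""
    _, i, q = rgb_to_yiq(r, g, b)
    return i - q
```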
Combined effects of inversion and feature removal on N170 responses elicited by faces and car fronts
Kloth, Nadine; Itier, Roxane J.; Schweinberger, Stefan R.
2014-01-01
The face-sensitive N170 is typically enhanced for inverted compared to upright faces. Itier, Alain, Sedore, and McIntosh (2007) recently suggested that this N170 inversion effect is mainly driven by the eye region which becomes salient when the face configuration is disrupted. Here we tested whether similar effects could be observed with non-face objects that are structurally similar to faces in terms of possessing a homogeneous within-class first-order feature configuration. We presented upright and inverted pictures of intact car fronts, car fronts without lights, and isolated lights, in addition to analogous face conditions. Upright cars elicited substantial N170 responses of similar amplitude to those evoked by upright faces. In strong contrast to face conditions however, the car-elicited N170 was mainly driven by the global shape rather than the presence or absence of lights, and was dramatically reduced for isolated lights. Overall, our data confirm a differential influence of the eye region in upright and inverted faces. Results for car fronts do not suggest similar interactive encoding of eye-like features and configuration for non-face objects, even when these objects possess a similar feature configuration as faces. PMID:23485023
Infrared face recognition based on LBP histogram and KW feature selection
NASA Astrophysics Data System (ADS)
Xie, Zhihua
2014-07-01
Conventional features based on the local binary pattern (LBP) histogram still leave room for performance improvement. This paper focuses on the dimension reduction of LBP micro-patterns and proposes an improved infrared face recognition method based on the LBP histogram representation. To extract locally robust features from infrared face images, LBP is chosen to capture the composition of micro-patterns in sub-blocks. Based on statistical test theory, a Kruskal-Wallis (KW) feature selection method is proposed to retain the LBP patterns that are suitable for infrared face recognition. The experimental results show that combining LBP with KW feature selection improves the performance of infrared face recognition; the proposed method outperforms traditional methods based on the LBP histogram, the discrete cosine transform (DCT), or principal component analysis (PCA).
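The basic 3×3 LBP operator behind the histogram representation can be sketched as follows. The bit ordering is one common convention, and the KW selection stage is omitted:

```python
def lbp_code(img, y, x):
    """Basic 3x3 LBP: threshold each of the 8 neighbours at the centre
    pixel's value and read the resulting bits as one byte (0-255)."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    c = img[y][x]
    code = 0
    for bit, (dy, dx) in enumerate(offs):
        if img[y + dy][x + dx] >= c:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over the interior pixels of a
    sub-block; concatenating these per-block histograms gives the face's
    micro-pattern representation."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist
```

With 256 bins per sub-block the concatenated descriptor grows quickly, which is the dimensionality problem that motivates selecting only the discriminative micro-patterns.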
Evidence for Working Memory Storage Operations in Perceptual Cortex
Sreenivasan, Kartik K.; Gratton, Caterina; Vytlacil, Jason; D’Esposito, Mark
2014-01-01
Isolating the short-term storage component of working memory (WM) from the myriad of associated executive processes has been an enduring challenge. Recent efforts have identified patterns of activity in visual regions that contain information about items being held in WM. However, it remains unclear (i) whether these representations withstand intervening sensory input and (ii) how communication between multimodal association cortex and unimodal perceptual regions supporting WM representations is involved in WM storage. We present evidence that the features of a face held in WM are stored within face processing regions, that these representations persist across subsequent sensory input, and that information about the match between sensory input and memory representation is relayed forward from perceptual to prefrontal regions. Participants were presented with a series of probe faces and indicated whether each probe matched a Target face held in WM. We parametrically varied the feature similarity between probe and Target faces. Activity within face processing regions scaled linearly with the degree of feature similarity between the probe face and the features of the Target face, suggesting that the features of the Target face were stored in these regions. Furthermore, directed connectivity measures revealed that the direction of information flow that was optimal for performance was from sensory regions that stored the features of the Target face to dorsal prefrontal regions, supporting the notion that sensory input is compared to representations stored within perceptual regions and relayed forward. Together, these findings indicate that WM storage operations are carried out within perceptual cortex. PMID:24436009
Face detection on distorted images using perceptual quality-aware features
NASA Astrophysics Data System (ADS)
Gunasekar, Suriya; Ghosh, Joydeep; Bovik, Alan C.
2014-02-01
We quantify the degradation in performance of a popular and effective face detector when human-perceived image quality is degraded by distortions due to additive white Gaussian noise, Gaussian blur, or JPEG compression. It is observed that, within a certain range of perceived image quality, a modest increase in image quality can drastically improve face detection performance. These results can be used to guide resource or bandwidth allocation in a communication/delivery system associated with face detection tasks. A new face detector based on QualHOG features is also proposed that augments face-indicative HOG features with perceptual quality-aware spatial Natural Scene Statistics (NSS) features, yielding improved tolerance against image distortions. The new detector provides statistically significant improvements over a strong baseline on a large database of face images representing a wide range of distortions. To facilitate this study, we created a new Distorted Face Database, containing face and non-face patches from images impaired by a variety of common distortion types and levels. This new dataset is available for download and further experimentation at www.ideal.ece.utexas.edu/~suriya/DFD/.
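A minimal sketch of one of the distortions studied (additive white Gaussian noise) together with a simple quality proxy, PSNR. The function names are illustrative, and the study itself used perceptual quality measures rather than PSNR:

```python
import math
import random

def awgn(image, sigma, seed=0):
    """Add white Gaussian noise with standard deviation `sigma`
    to a 2D list of pixel intensities (seeded for reproducibility)."""
    rng = random.Random(seed)
    return [[px + rng.gauss(0, sigma) for px in row] for row in image]

def psnr(ref, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images."""
    n = len(ref) * len(ref[0])
    mse = sum((a - b) ** 2
              for ra, rb in zip(ref, distorted)
              for a, b in zip(ra, rb)) / n
    return float('inf') if mse == 0 else 10 * math.log10(peak ** 2 / mse)
```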
Face features and face configurations both contribute to visual crowding.
Sun, Hsin-Mei; Balas, Benjamin
2015-02-01
Crowding refers to the inability to recognize an object in peripheral vision when other objects are presented nearby (Whitney & Levi Trends in Cognitive Sciences, 15, 160-168, 2011). A popular explanation of crowding is that features of the target and flankers are combined inappropriately when they are located within an integration field, thus impairing target recognition (Pelli, Palomares, & Majaj Journal of Vision, 4(12), 12:1136-1169, 2004). However, it remains unclear which features of the target and flankers are combined inappropriately to cause crowding (Levi Vision Research, 48, 635-654, 2008). For example, in a complex stimulus (e.g., a face), to what extent does crowding result from the integration of features at a part-based level or at the level of global processing of the configural appearance? In this study, we used a face categorization task and different types of flankers to examine how much the magnitude of visual crowding depends on the similarity of face parts or of global configurations. We created flankers with face-like features (e.g., the eyes, nose, and mouth) in typical and scrambled configurations to examine the impacts of part appearance and global configuration on the visual crowding of faces. Additionally, we used "electrical socket" flankers that mimicked first-order face configuration but had only schematic features, to examine the extent to which global face geometry impacted crowding. Our results indicated that both face parts and configurations contribute to visual crowding, suggesting that face similarity as realized under crowded conditions includes both aspects of facial appearance.
Neuron’s eye view: Inferring features of complex stimuli from neural responses
Chen, Xin; Beck, Jeffrey M.
2017-01-01
Experiments that study neural encoding of stimuli at the level of individual neurons typically choose a small set of features present in the world—contrast and luminance for vision, pitch and intensity for sound—and assemble a stimulus set that systematically varies along these dimensions. Subsequent analysis of neural responses to these stimuli typically focuses on regression models, with experimenter-controlled features as predictors and spike counts or firing rates as responses. Unfortunately, this approach requires knowledge in advance about the relevant features coded by a given population of neurons. For domains as complex as social interaction or natural movement, however, the relevant feature space is poorly understood, and an arbitrary a priori choice of features may give rise to confirmation bias. Here, we present a Bayesian model for exploratory data analysis that is capable of automatically identifying the features present in unstructured stimuli based solely on neuronal responses. Our approach is unique within the class of latent state space models of neural activity in that it assumes that firing rates of neurons are sensitive to multiple discrete time-varying features tied to the stimulus, each of which has Markov (or semi-Markov) dynamics. That is, we are modeling neural activity as driven by multiple simultaneous stimulus features rather than intrinsic neural dynamics. We derive a fast variational Bayesian inference algorithm and show that it correctly recovers hidden features in synthetic data, as well as ground-truth stimulus features in a prototypical neural dataset. To demonstrate the utility of the algorithm, we also apply it to cluster neural responses and demonstrate successful recovery of features corresponding to monkeys and faces in the image set. PMID:28827790
2010-05-18
ISS023-E-046806 (18 May 2010) --- Backdropped by Earth's horizon and the blackness of space, the docked space shuttle Atlantis is featured in this image photographed by an Expedition 23 crew member on the International Space Station. The Russian-built Mini-Research Module 1 (MRM-1) is visible in the payload bay as the shuttle robotic arm prepares to unberth the module from Atlantis and position it for handoff to the station robotic arm (visible at right). Named Rassvet, Russian for "dawn," the module is the second in a series of new pressurized components for Russia and will be permanently attached to the Earth-facing port of the Zarya Functional Cargo Block (FGB). Rassvet will be used for cargo storage and will provide an additional docking port to the station.
Electrophysiological evidence for parts and wholes in visual face memory.
Towler, John; Eimer, Martin
2016-10-01
It is often assumed that upright faces are represented in a holistic fashion, while representations of inverted faces are essentially part-based. To assess this hypothesis, we recorded event-related potentials (ERPs) during a sequential face identity matching task where successively presented pairs of upright or inverted faces were either identical or differed with respect to their internal features, their external features, or both. Participants' task was to report on each trial whether the face pair was identical or different. To track the activation of visual face memory representations, we measured N250r components that emerge over posterior face-selective regions during the activation of visual face memory representations by a successful identity match. N250r components to full identity repetitions were smaller and emerged later for inverted as compared to upright faces, demonstrating that image inversion impairs face identity matching processes. For upright faces, N250r components were also elicited by partial repetitions of external or internal features, which suggest that the underlying identity matching processes are not exclusively based on non-decomposable holistic representations. However, the N250r to full identity repetitions was super-additive (i.e., larger than the sum of the two N250r components to partial repetitions of external or internal features) for upright faces, demonstrating that holistic representations were involved in identity matching processes. For inverted faces, N250r components to full and partial identity repetitions were strictly additive, indicating that the identity matching of external and internal features operated in an entirely part-based fashion. These results provide new electrophysiological evidence for qualitative differences between representations of upright and inverted faces in the occipital-temporal face processing system. Copyright © 2016 Elsevier Ltd. All rights reserved.
What makes a cell face-selective: the importance of contrast
Ohayon, Shay; Freiwald, Winrich A; Tsao, Doris Y
2012-01-01
Faces are robustly detected by computer vision algorithms that search for characteristic coarse contrast features. Here, we investigated whether face-selective cells in the primate brain exploit contrast features as well. We recorded from face-selective neurons in macaque inferotemporal cortex, while presenting a face-like collage of regions whose luminances were changed randomly. Modulating contrast combinations between regions induced activity changes ranging from no response to a response greater than that to a real face in 50% of cells. The critical stimulus factor determining response magnitude was contrast polarity, e.g., nose region brighter than left eye. Contrast polarity preferences were consistent across cells, suggesting a common computational strategy across the population, and matched features used by computer vision algorithms for face detection. Furthermore, most cells were tuned both for contrast polarity and for the geometry of facial features, suggesting cells encode information useful both for detection and recognition. PMID:22578507
Automated Detection of Actinic Keratoses in Clinical Photographs
Hames, Samuel C.; Sinnya, Sudipta; Tan, Jean-Marie; Morze, Conrad; Sahebian, Azadeh; Soyer, H. Peter; Prow, Tarl W.
2015-01-01
Background: Clinical diagnosis of actinic keratosis is known to have intra- and inter-observer variability, and there is currently no non-invasive and objective measure to diagnose these lesions. Objective: The aim of this pilot study was to determine whether automatically detecting and circumscribing actinic keratoses in clinical photographs is feasible. Methods: Photographs of the face and dorsal forearms were acquired from 20 volunteers in two groups: the first with at least one actinic keratosis present on the face and each arm, the second with no actinic keratoses. The photographs were automatically analysed using colour space transforms and morphological features to detect erythema. The automated output was compared with a senior consultant dermatologist's assessment of the photographs, including the intra-observer variability. Performance was assessed by the correlation between the total lesions detected by the automated method and by the dermatologist, and by whether the individual lesions detected were in the same locations as the dermatologist-identified lesions. Additionally, the ability to limit false positives was assessed by automatic assessment of the photographs from the no actinic keratosis group in comparison with the high actinic keratosis group. Results: The correlation between the automatic and dermatologist counts was 0.62 on the face and 0.51 on the arms, compared to the dermatologist's intra-observer variation of 0.83 and 0.93, respectively. Sensitivity of automatic detection was 39.5% on the face and 53.1% on the arms. Positive predictive values were 13.9% on the face and 39.8% on the arms. Significantly more lesions (p<0.0001) were detected in the high actinic keratosis group than in the no actinic keratosis group. Conclusions: The proposed method was inferior to assessment by the dermatologist in terms of sensitivity and positive predictive value.
However, this pilot study used only a single, simple feature and was still able to achieve a detection sensitivity of 53.1% on the arms. This suggests that image analysis is a feasible avenue of investigation for overcoming variability in clinical assessment. Future studies should focus on more sophisticated features to improve sensitivity for actinic keratoses without erythema and to limit false positives associated with the anatomical structures of the face. PMID:25615930
Newborns' Face Recognition: Role of Inner and Outer Facial Features
ERIC Educational Resources Information Center
Turati, Chiara; Macchi Cassia, Viola; Simion, Francesca; Leo, Irene
2006-01-01
Existing data indicate that newborns are able to recognize individual faces, but little is known about what perceptual cues drive this ability. The current study showed that either the inner or outer features of the face can act as sufficient cues for newborns' face recognition (Experiment 1), but the outer part of the face enjoys an advantage…
Modeling Face Identification Processing in Children and Adults.
ERIC Educational Resources Information Center
Schwarzer, Gudrun; Massaro, Dominic W.
2001-01-01
Two experiments studied whether and how 5-year-olds integrate single facial features to identify faces. Results indicated that children could evaluate and integrate information from eye and mouth features to identify a face when salience of features was varied. A weighted Fuzzy Logical Model of Perception fit better than a Single Channel Model,…
Feature extraction with deep neural networks by a generalized discriminant analysis.
Stuhlsatz, André; Lippel, Jens; Zielke, Thomas
2012-04-01
We present an approach to feature extraction that is a generalization of the classical linear discriminant analysis (LDA) on the basis of deep neural networks (DNNs). As for LDA, discriminative features generated from independent Gaussian class conditionals are assumed. This modeling has the advantages that the intrinsic dimensionality of the feature space is bounded by the number of classes and that the optimal discriminant function is linear. Unfortunately, linear transformations are insufficient to extract optimal discriminative features from arbitrarily distributed raw measurements. The generalized discriminant analysis (GerDA) proposed in this paper uses nonlinear transformations that are learnt by DNNs in a semisupervised fashion. We show that the feature extraction based on our approach displays excellent performance on real-world recognition and detection tasks, such as handwritten digit recognition and face detection. In a series of experiments, we evaluate GerDA features with respect to dimensionality reduction, visualization, classification, and detection. Moreover, we show that GerDA DNNs can preprocess truly high-dimensional input data to low-dimensional representations that facilitate accurate predictions even if simple linear predictors or measures of similarity are used.
Kloth, Nadine; Itier, Roxane J; Schweinberger, Stefan R
2013-04-01
The face-sensitive N170 is typically enhanced for inverted compared to upright faces. Itier, Alain, Sedore, and McIntosh (2007) recently suggested that this N170 inversion effect is mainly driven by the eye region which becomes salient when the face configuration is disrupted. Here we tested whether similar effects could be observed with non-face objects that are structurally similar to faces in terms of possessing a homogeneous within-class first-order feature configuration. We presented upright and inverted pictures of intact car fronts, car fronts without lights, and isolated lights, in addition to analogous face conditions. Upright cars elicited substantial N170 responses of similar amplitude to those evoked by upright faces. In strong contrast to face conditions however, the car-elicited N170 was mainly driven by the global shape rather than the presence or absence of lights, and was dramatically reduced for isolated lights. Overall, our data confirm a differential influence of the eye region in upright and inverted faces. Results for car fronts do not suggest similar interactive encoding of eye-like features and configuration for non-face objects, even when these objects possess a similar feature configuration as faces. Copyright © 2013 Elsevier Inc. All rights reserved.
Colloff, Melissa F; Flowe, Heather D
2016-06-01
False face recognition rates are sometimes higher when faces are learned while under the influence of alcohol. Alcohol myopia theory (AMT) proposes that acute alcohol intoxication during face learning causes people to attend to only the most salient features of a face, impairing the encoding of less salient facial features. Yet, there is currently no direct evidence to support this claim. Our objective was to test whether acute alcohol intoxication impairs face learning by causing subjects to attend to a salient (i.e., distinctive) facial feature over other facial features, as per AMT. We employed a balanced placebo design (N = 100). Subjects in the alcohol group were dosed to achieve a blood alcohol concentration (BAC) of 0.06 %, whereas the no alcohol group consumed tonic water. Alcohol expectancy was controlled. Subjects studied faces with or without a distinctive feature (e.g., scar, piercing). An old-new recognition test followed. Some of the test faces were "old" (i.e., previously studied), and some were "new" (i.e., not previously studied). We varied whether the new test faces had a previously studied distinctive feature versus other familiar characteristics. Intoxicated and sober recognition accuracy was comparable, but subjects in the alcohol group made more positive identifications overall compared to the no alcohol group. The results are not in keeping with AMT. Rather, a more general cognitive mechanism appears to underlie false face recognition in intoxicated subjects. Specifically, acute alcohol intoxication during face learning results in more liberal choosing, perhaps because of an increased reliance on familiarity.
STS-87 Payload Specialist Kadenyuk in white room
NASA Technical Reports Server (NTRS)
1997-01-01
STS-87 Payload Specialist Leonid Kadenyuk of the National Space Agency of Ukraine is assisted with final preparations before launch in the white room at Launch Pad 39B by Danny Wyatt, NASA quality assurance specialist, at left; Dave Law, USA mechanical technician, facing Kadenyuk; and Travis Thompson, USA orbiter vehicle closeout chief, at right. STS-87 is the fourth flight of the United States Microgravity Payload and Spartan-201. The 16-day mission will include the Collaborative Ukrainian Experiment (CUE), a collection of 10 plant space biology experiments that will fly in Columbia's middeck and will feature an educational component that involves evaluating the effects of microgravity on Brassica rapa seedlings.
The Muslim Headscarf and Face Perception: “They All Look the Same, Don't They?”
Toseeb, Umar; Bryant, Eleanor J.; Keeble, David R. T.
2014-01-01
The headscarf conceals hair and other external features of a head (such as the ears). It therefore may have implications for the way in which such faces are perceived. Images of faces with hair (H) or alternatively, covered by a headscarf (HS) were used in three experiments. In Experiment 1 participants saw both H and HS faces in a yes/no recognition task in which the external features either remained the same between learning and test (Same) or switched (Switch). Performance was similar for H and HS faces in both the Same and Switch condition, but in the Switch condition it dropped substantially compared to the Same condition. This implies that the mere presence of the headscarf does not reduce performance, rather, the change between the type of external feature (hair or headscarf) causes the drop in performance. In Experiment 2, which used eye-tracking methodology, it was found that almost all fixations were to internal regions, and that there was no difference in the proportion of fixations to external features between the Same and Switch conditions, implying that the headscarf influenced processing by virtue of extrafoveal viewing. In Experiment 3, similarity ratings of the internal features of pairs of HS faces were higher than pairs of H faces, confirming that the internal and external features of a face are perceived as a whole rather than as separate components. PMID:24520313
A system for tracking and recognizing pedestrian faces using a network of loosely coupled cameras
NASA Astrophysics Data System (ADS)
Gagnon, L.; Laliberté, F.; Foucher, S.; Branzan Albu, A.; Laurendeau, D.
2006-05-01
A face recognition module has been developed for an intelligent multi-camera video surveillance system. The module can recognize a pedestrian face in terms of six basic emotions and the neutral state. Face and facial features detection (eyes, nasal root, nose and mouth) are first performed using cascades of boosted classifiers. These features are used to normalize the pose and dimension of the face image. Gabor filters are then sampled on a regular grid covering the face image to build a facial feature vector that feeds a nearest neighbor classifier with a cosine distance similarity measure for facial expression interpretation and face model construction. A graphical user interface allows the user to adjust the module parameters.
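The classification step described above (a nearest-neighbor classifier with a cosine similarity measure over precomputed feature vectors) can be sketched as follows; the Gabor feature extraction itself is omitted and the gallery entries are assumed to be already-built facial feature vectors:

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest_neighbor(query, gallery):
    """gallery: list of (label, feature_vector) pairs.
    Returns the label whose vector is most cosine-similar to `query`."""
    return max(gallery, key=lambda lv: cosine_sim(query, lv[1]))[0]
```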
Face-iris multimodal biometric scheme based on feature level fusion
NASA Astrophysics Data System (ADS)
Huo, Guang; Liu, Yuanning; Zhu, Xiaodong; Dong, Hongxing; He, Fei
2015-11-01
Unlike score level fusion, feature level fusion demands all the features extracted from unimodal traits with high distinguishability, as well as homogeneity and compatibility, which is difficult to achieve. Therefore, most multimodal biometric research focuses on score level fusion, whereas few investigate feature level fusion. We propose a face-iris recognition method based on feature level fusion. We build a special two-dimensional-Gabor filter bank to extract local texture features from face and iris images, and then transform them by histogram statistics into an energy-orientation variance histogram feature with lower dimensions and higher distinguishability. Finally, through a fusion-recognition strategy based on principal components analysis and support vector machine (FRSPS), feature level fusion and one-to-n identification are accomplished. The experimental results demonstrate that this method can not only effectively extract face and iris features but also provide higher recognition accuracy. Compared with some state-of-the-art fusion methods, the proposed method has a significant performance advantage.
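A toy sketch of the feature-level fusion idea: normalize each modality's feature vector so they are commensurable, then concatenate. The paper's actual pipeline uses Gabor energy-orientation histograms, PCA, and an SVM; the names below are illustrative:

```python
def zscore(v):
    """Z-score normalize a feature vector (zero mean, unit variance)."""
    m = sum(v) / len(v)
    sd = (sum((x - m) ** 2 for x in v) / len(v)) ** 0.5 or 1.0
    return [(x - m) / sd for x in v]

def fuse(face_feat, iris_feat):
    """Feature-level fusion: normalize each modality, then concatenate
    into a single joint feature vector."""
    return zscore(face_feat) + zscore(iris_feat)
```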
ERIC Educational Resources Information Center
Freire, Alejo; Lee, Kang
2001-01-01
Tested in two studies 4- to 7-year-olds' face recognition by manipulating the faces' configural and featural information. Found that even with only a single 5-second exposure, most children could use configural and featural cues to make identity judgments. Repeated exposure and feedback improved others' performance. Even proficient memories were…
Subject-specific and pose-oriented facial features for face recognition across poses.
Lee, Ping-Han; Hsu, Gee-Sern; Wang, Yun-Wen; Hung, Yi-Ping
2012-10-01
Most face recognition scenarios assume that frontal faces or mug shots are available for enrollment in the database, while faces in other poses are collected in the probe set. Given a face from the probe set, one needs to determine whether a match exists in the database. This is under the assumption that, in forensic applications, most suspects have mug shots available in the database, and face recognition aims at recognizing the suspects when their faces are captured in various poses by a surveillance camera. This paper considers a different scenario: given a face with multiple poses available, which may or may not include a mug shot, develop a method to recognize the face in poses different from those captured. That is, given two disjoint sets of poses of a face, one for enrollment and the other for recognition, this paper reports a method best suited for handling such cases. The proposed method includes feature extraction and classification. For feature extraction, we first cluster the poses of each subject's face in the enrollment set into a few pose classes and then decompose the appearance of the face in each pose class using an Embedded Hidden Markov Model, which allows us to define a set of subject-specific and pose-oriented (SSPO) facial components for each subject. For classification, an AdaBoost weighting scheme is used to fuse the component classifiers with SSPO component features. The proposed method is shown to outperform other approaches, including a component-based classifier with local facial features cropped manually, in an extensive performance evaluation study.
Face recognition via sparse representation of SIFT feature on hexagonal-sampling image
NASA Astrophysics Data System (ADS)
Zhang, Daming; Zhang, Xueyong; Li, Lu; Liu, Huayong
2018-04-01
This paper investigates a face recognition approach based on the Scale Invariant Feature Transform (SIFT) and sparse representation. The approach takes advantage of SIFT, which is a local feature rather than the holistic features used in the classical Sparse Representation based Classification (SRC) algorithm, and possesses strong robustness to expression, pose, and illumination variations. Since hexagonal images have more inherent merits than square images for making the recognition process efficient, we extract SIFT keypoints from hexagonally sampled images. Instead of matching SIFT features directly, the sparse representation of each SIFT keypoint is first computed against a constructed dictionary; these sparse vectors are then quantized according to the dictionary; finally, each face image is represented by a histogram, and these so-called Bag-of-Words vectors are classified by an SVM. Owing to the use of local features, the proposed method achieves good results even when the number of training samples is small. In the experiments, the proposed method achieved higher face recognition rates than other methods on the ORL and Yale B face databases; the effectiveness of hexagonal sampling in the proposed method is also verified.
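A minimal sketch of the final Bag-of-Words step described in this record: once each SIFT descriptor has been quantized to a dictionary atom index, the face image becomes a normalized histogram over the vocabulary (the sparse coding and SVM stages are omitted):

```python
def bow_histogram(descriptor_ids, vocab_size):
    """Turn a list of quantized descriptor indices into a normalized
    Bag-of-Words histogram of length `vocab_size`."""
    hist = [0] * vocab_size
    for i in descriptor_ids:
        hist[i] += 1
    total = sum(hist) or 1  # guard against an image with no keypoints
    return [c / total for c in hist]
```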
Face antispoofing based on frame difference and multilevel representation
NASA Astrophysics Data System (ADS)
Benlamoudi, Azeddine; Aiadi, Kamal Eddine; Ouafi, Abdelkrim; Samai, Djamel; Oussalah, Mourad
2017-07-01
Due to advances in technology, today's biometric systems have become vulnerable to spoof attacks made with fake faces. These attacks occur when an intruder attempts to fool an established face-based recognition system by presenting a fake face (e.g., a printed photo or a replayed video) in front of the camera instead of the intruder's genuine face. For this reason, face antispoofing has become a hot topic in the face analysis literature, and several applications with an antispoofing task have emerged recently. We propose a solution for distinguishing between real and fake faces. Our approach is based on extracting features from the difference between successive frames instead of from individual frames. We also use a multilevel representation that divides the frame difference into multiple blocks at several levels. Different texture descriptors (local binary patterns, local phase quantization, and binarized statistical image features) are then applied to each block. After the feature extraction step, a Fisher score is applied to sort the features in ascending order according to the associated weights. Finally, a support vector machine is used to differentiate between real and fake faces. We tested our approach on three publicly available databases: the CASIA Face Antispoofing database, the Replay-Attack database, and the MSU Mobile Face Spoofing database. The proposed approach outperforms other state-of-the-art methods across different media and quality metrics.
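The two preprocessing steps named in this record, frame differencing and block-wise (multilevel) decomposition, can be sketched as follows; texture description and classification are omitted, and a single level of the multilevel representation is shown:

```python
def frame_difference(prev, curr):
    """Absolute pixel-wise difference between two consecutive frames
    (2D lists of equal size)."""
    return [[abs(a - b) for a, b in zip(ra, rb)]
            for ra, rb in zip(prev, curr)]

def multiblock(image, rows, cols):
    """Split a 2D list into rows x cols non-overlapping blocks;
    a texture descriptor would then be applied to each block."""
    bh, bw = len(image) // rows, len(image[0]) // cols
    blocks = []
    for r in range(rows):
        for c in range(cols):
            blocks.append([row[c * bw:(c + 1) * bw]
                           for row in image[r * bh:(r + 1) * bh]])
    return blocks
```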
Segmentation of human face using gradient-based approach
NASA Astrophysics Data System (ADS)
Baskan, Selin; Bulut, M. Mete; Atalay, Volkan
2001-04-01
This paper describes a method for automatic segmentation of facial features such as the eyebrows, eyes, nose, mouth, and ears in color images. This work is an initial step for a wide range of applications based on feature-based approaches, such as face recognition, lip-reading, gender estimation, and facial expression analysis. The human face can be characterized by its skin color and nearly elliptical shape. For this purpose, face detection is performed using color and shape information. Uniform illumination is assumed; no restrictions on glasses, make-up, beard, etc. are imposed. Facial features are extracted using vertically and horizontally oriented gradient projections. The gradient of a minimum with respect to its neighboring maxima gives the boundaries of a facial feature. Each facial feature has a different horizontal characteristic; these characteristics were derived by extensive experimentation with many face images. Using fuzzy set theory, the similarity between a candidate and the feature characteristic under consideration is calculated. The gradient-based method is complemented by anthropometric information for robustness. Ear detection is performed using contour-based shape descriptors. The method detects the facial features and circumscribes each with the smallest possible rectangle. The AR database is used for testing. The developed method is also suitable for real-time systems.
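The projection idea in this record can be sketched simply: sum gradient magnitudes along rows and columns, and look for minima of these profiles between maxima as feature boundaries. The finite-difference gradient below is a stand-in for whatever operator the authors used:

```python
def gradient_magnitude(img):
    """Central-difference gradient magnitude of a 2D list
    (edges clamped to the image border)."""
    h, w = len(img), len(img[0])
    g = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            gx = img[r][min(c + 1, w - 1)] - img[r][max(c - 1, 0)]
            gy = img[min(r + 1, h - 1)][c] - img[max(r - 1, 0)][c]
            g[r][c] = (gx * gx + gy * gy) ** 0.5
    return g

def horizontal_projection(grad):
    """Row-wise sum of gradient magnitudes."""
    return [sum(row) for row in grad]

def vertical_projection(grad):
    """Column-wise sum of gradient magnitudes."""
    return [sum(row[c] for row in grad) for c in range(len(grad[0]))]
```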
Patient identification using a near-infrared laser scanner
NASA Astrophysics Data System (ADS)
Manit, Jirapong; Bremer, Christina; Schweikard, Achim; Ernst, Floris
2017-03-01
We propose a new biometric approach in which the tissue thickness of a person's forehead is used as a biometric feature. Given that the spatial registration of two 3D laser scans of the same human face usually produces a low error value, the principle of point cloud registration and its error metric can be applied to human classification techniques. However, by considering only the spatial error, it is not possible to reliably verify a person's identity. We propose to use a novel near-infrared laser-based head tracking system to determine an additional feature, the tissue thickness, and to include this in the error metric. Using MRI as ground truth, data from the foreheads of 30 subjects were collected, from which a 4D reference point cloud was created for each subject. The measurements from the near-infrared system were registered with all reference point clouds using the ICP algorithm. Afterwards, the spatial and tissue thickness errors were extracted, forming a 2D feature space. For all subjects, the lowest feature distance resulted from the registration of a measurement with the reference point cloud of the same person. The combined registration error features yielded two clusters in the feature space, one from the same subject and another from the other subjects. When only the tissue thickness error was considered, these clusters were less distinct but still present. These findings could help to raise safety standards for head and neck cancer patients and lay the foundation for a future human identification technique.
Robust kernel representation with statistical local features for face recognition.
Yang, Meng; Zhang, Lei; Shiu, Simon Chi-Keung; Zhang, David
2013-06-01
Factors such as misalignment, pose variation, and occlusion make robust face recognition a difficult problem. It is known that statistical features such as local binary pattern are effective for local feature extraction, whereas the recently proposed sparse or collaborative representation-based classification has shown interesting results in robust face recognition. In this paper, we propose a novel robust kernel representation model with statistical local features (SLF) for robust face recognition. Initially, multipartition max pooling is used to enhance the invariance of SLF to image registration error. Then, a kernel-based representation model is proposed to fully exploit the discrimination information embedded in the SLF, and robust regression is adopted to effectively handle the occlusion in face images. Extensive experiments are conducted on benchmark face databases, including extended Yale B, AR (A. Martinez and R. Benavente), multiple pose, illumination, and expression (multi-PIE), facial recognition technology (FERET), face recognition grand challenge (FRGC), and labeled faces in the wild (LFW), which have different variations of lighting, expression, pose, and occlusions, demonstrating the promising performance of the proposed method.
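The multipartition max pooling step mentioned in this record can be sketched in a toy form: split a local feature sequence into contiguous partitions and keep each partition's maximum, which makes the descriptor less sensitive to small registration errors (the real method pools 2D histogram features; this 1D version only illustrates the operation):

```python
def max_pool_partitions(values, parts):
    """Split a feature sequence into `parts` contiguous partitions of
    equal size and return the maximum of each partition."""
    size = len(values) // parts
    return [max(values[i * size:(i + 1) * size]) for i in range(parts)]
```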
NASA Astrophysics Data System (ADS)
Chidananda, H.; Reddy, T. Hanumantha
2017-06-01
This paper presents a natural representation of numerical digits using hand activity analysis, based on the number of fingers outstretched for each digit in a sequence extracted from a video. The analysis is based on determining a set of six features from a hand image. The most important features used from each frame in a video are the first fingertip from the top, the palm-line, the palm-center, and the valley points between the fingers that lie above the palm-line. With this work a user can naturally convey any number of numerical digits in a video using the right, left, or both hands. Each numerical digit ranges from 0 to 9. The hands (right/left/both) used to convey digits can be recognized accurately using the valley points, and from this recognition it can be determined whether the user is right- or left-handed in practice. In this work, the hand(s) and face are first detected using the YCbCr color space, and the face is removed using an ellipse-based method. Then, the hand(s) are analyzed to recognize the activity that represents a series of numerical digits in a video. This work uses a pixel continuity algorithm based on 2D coordinate geometry and does not rely on calculus, contours, convex hulls, or datasets.
Detection of emotional faces: salient physical features guide effective visual search.
Calvo, Manuel G; Nummenmaa, Lauri
2008-08-01
In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found under both upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features, especially the smiling mouth, is responsible for facilitated initial orienting, which thus shortens detection. (PsycINFO Database Record (c) 2008 APA, all rights reserved).
iFER: facial expression recognition using automatically selected geometric eye and eyebrow features
NASA Astrophysics Data System (ADS)
Oztel, Ismail; Yolcu, Gozde; Oz, Cemil; Kazan, Serap; Bunyak, Filiz
2018-03-01
Facial expressions have an important role in interpersonal communication and in the estimation of emotional states or intentions. Automatic recognition of facial expressions has led to many practical applications and has become one of the important topics in computer vision. We present a facial expression recognition system that relies on geometry-based features extracted from the eye and eyebrow regions of the face. The proposed system detects keypoints on frontal face images and forms a feature set using geometric relationships among groups of detected keypoints. The obtained feature set is refined and reduced using the sequential forward selection (SFS) algorithm and fed to a support vector machine classifier to recognize five facial expression classes. The proposed system, iFER (eye-eyebrow only facial expression recognition), is robust to lower-face occlusions that may be caused by beards, mustaches, scarves, etc., and to lower-face motion during speech production. Preliminary experiments on benchmark datasets produced promising results, outperforming previous facial expression recognition studies that use partial face features, and results comparable to studies using whole-face information: only about 2.5% lower than the best whole-face system while using only about one third of the facial region.
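Sequential forward selection (SFS), used above to refine the feature set, greedily grows a subset one feature at a time. A generic sketch, with the scoring callback (e.g., cross-validated SVM accuracy in the paper's setting) left abstract:

```python
import numpy as np

def sequential_forward_selection(score, n_features, k):
    """Greedy SFS: start from the empty set and repeatedly add the feature
    whose inclusion maximises score(subset), until k features are chosen."""
    selected = []
    remaining = list(range(n_features))
    for _ in range(k):
        best_f, best_s = None, -np.inf
        for f in remaining:
            s = score(selected + [f])
            if s > best_s:
                best_f, best_s = f, s
        selected.append(best_f)
        remaining.remove(best_f)
    return selected
```

In practice `score` would wrap a classifier evaluated on held-out data; SFS is greedy, so it trades optimality for a cost of O(n_features * k) score evaluations.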
Hierarchical and chemical space partitioning in new intermetallic borides MNi21B20 (M = In, Sn).
Wagner, Frank R; Zheng, Qiang; Gumeniuk, Roman; Bende, David; Prots, Yurii; Bobnar, Matej; Hu, Dong-Li; Burkhardt, Ulrich; Grin, Yuri; Leithe-Jasper, Andreas
2017-10-10
The compounds MNi21B20 (M = In, Sn) have been synthesized and their cubic crystal structure determined (space group Pm-3m, lattice parameters a = 7.1730(1) Å and a = 7.1834(1) Å, respectively). The structure can be described as a hierarchical partitioning of space based on a reo-e net formed by Ni3 species, with large cubical, cuboctahedral and rhombicuboctahedral voids being filled according to [Ni1@(Ni3)8], [M@(Ni3)12], and [(Ni2)6@B20@(Ni3)24], respectively. The [Ni6@B20] motif inside the rhombicuboctahedral voids features an empty [Ni6] octahedron surrounded by a [B20] cage recently described in E2Ni21B20 (E = Zn, Ga). Position-space bonding analysis using ELI-D and QTAIM space partitioning, as well as 2- and 3-center delocalization indices, gives strong support to an alternative chemical description of space partitioning based on face-condensed [B@Ni6] trigonal prisms as basic building blocks. The shortest B-B contacts display locally nested 3-center B-B-Ni bonding inside each trigonal prism. This clearly rules out the notion of [Ni6@B20] clusters and leads to an arrangement of 20 face-condensed [B@(Ni2)3(Ni3)3] trigonal prisms, resulting in a triple-shell-like situation (Ni2)6@B20@(Ni3)24 (reo-e), where the shells display comparable intra- and inter-shell bonding. Both compounds are Pauli paramagnets displaying metallic conductivity.
The Role of Familiarity for Representations in Norm-Based Face Space
Faerber, Stella J.; Kaufmann, Jürgen M.; Leder, Helmut; Martin, Eva Maria; Schweinberger, Stefan R.
2016-01-01
According to the norm-based version of the multidimensional face space model (nMDFS; Valentine, 1991), any given face and its corresponding anti-face (which deviates from the norm in exactly the opposite direction as the original face) should be equidistant from a hypothetical prototype face (the norm), such that by definition face and anti-face should bear the same level of perceived typicality. However, it has been argued that familiarity affects perceived typicality and that representations of familiar faces are qualitatively different (e.g., more robust and image-independent) from those of unfamiliar faces. Here we investigated the role of face familiarity for rated typicality, using two frequently used operationalisations, typicality (deviation-based: DEV) and distinctiveness (face in the crowd: FITC), for faces of celebrities and their corresponding anti-faces. We further assessed attractiveness, likeability and trustworthiness ratings of the stimuli, which are potentially related to typicality. For unfamiliar faces and their corresponding anti-faces, in line with the predictions of the nMDFS, our results demonstrate comparable levels of perceived typicality (DEV). In contrast, familiar faces were perceived as much less typical than their anti-faces. Furthermore, familiar faces were rated higher than their anti-faces in distinctiveness, attractiveness, likeability and trustworthiness. These findings suggest that familiarity strongly affects the distribution of facial representations in norm-based face space. Overall, our study suggests (1) that familiarity needs to be considered in studies of mental representations of faces, and (2) that familiarity, general distance-to-norm and more specific vector directions in face space make different and interactive contributions to different types of facial evaluations. PMID:27168323
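In a concrete face space, the anti-face described above is simply the reflection of a face vector about the norm, which makes the nMDFS equidistance prediction explicit. A minimal numeric sketch; the 3-D space and coordinates are illustrative, not taken from the study:

```python
import numpy as np

def anti_face(face, norm):
    """Reflect a face about the prototype (norm): the anti-face deviates
    from the norm by the same distance in exactly the opposite direction."""
    return 2.0 * norm - face

# In the nMDFS, face and anti-face are equidistant from the norm:
norm = np.array([0.5, 0.5, 0.5])    # hypothetical 3-D face space prototype
face = np.array([0.8, 0.2, 0.6])
anti = anti_face(face, norm)
d_face = np.linalg.norm(face - norm)
d_anti = np.linalg.norm(anti - norm)
```

By construction `d_face == d_anti`, which is exactly why equal perceived typicality is the model's prediction and why the familiarity-driven asymmetry reported above is informative.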
Cascaded face alignment via intimacy definition feature
NASA Astrophysics Data System (ADS)
Li, Hailiang; Lam, Kin-Man; Chiu, Man-Yau; Wu, Kangheng; Lei, Zhibin
2017-09-01
Recent years have witnessed the emerging popularity of regression-based face aligners, which directly learn mappings between facial appearance and shape-increment manifolds. We propose a random-forest-based, cascaded regression model for face alignment that uses a locally lightweight feature, the intimacy definition feature. This feature is more discriminative than the pose-indexed feature, more efficient than the histogram of oriented gradients and scale-invariant feature transform features, and more compact than the local binary feature (LBF). Experimental validation shows that our approach achieves state-of-the-art performance on several challenging datasets. Compared with the LBF-based algorithm, our method runs about twice as fast, improves alignment accuracy by 20%, and reduces the memory requirement by an order of magnitude.
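Cascaded regression aligners of this kind share one update rule: extract features indexed by the current shape estimate, regress a shape increment, add it, and repeat. A generic sketch; the stage functions are placeholders, not the paper's random-forest / intimacy-definition-feature machinery:

```python
import numpy as np

def run_cascade(image, stages, shape0):
    """Generic cascaded shape regression: each stage maps features extracted
    around the current shape estimate to an additive shape increment.
    `stages` is a list of (extract, regress) pairs; all names hypothetical."""
    shape = shape0.copy()
    for extract, regress in stages:
        phi = extract(image, shape)       # shape-indexed features
        shape = shape + regress(phi)      # additive shape increment
    return shape
```

Because each stage only has to correct the residual error left by the previous one, even weak per-stage regressors compose into an accurate aligner.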
3D Face Modeling Using the Multi-Deformable Method
Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun
2012-01-01
In this paper, we focus on the accuracy of 3D face modeling techniques that use corresponding features in multiple views, which is quite sensitive to feature extraction errors. To address the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. Our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model, and texture mapping using seamless cloning, a type of gradient-domain blending. To evaluate the method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution is that a wide range of face textures can be acquired by the mirror system; using this texture map, we generate realistic 3D faces for individuals at the end of the paper. PMID:23201976
Face recognition system and method using face pattern words and face pattern bytes
Zheng, Yufeng
2014-12-23
The present invention provides a novel system and method for face recognition and the identification of individuals based on facial features. The system and method comprise creating facial features or face patterns, called face pattern words and face pattern bytes, for face identification. The invention also provides pattern recognition for identification tasks other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals, utilizing computer software implemented by instructions on a computer or computer system, and a computer-readable medium containing instructions on a computer system for face recognition and identification.
The fractal based analysis of human face and DNA variations during aging.
Namazi, Hamidreza; Akrami, Amin; Hussaini, Jamal; Silva, Osmar N; Wong, Albert; Kulish, Vladimir V
2017-01-16
Human DNA is the main unit that shapes human characteristics and features such as behavior, so changes in DNA (DNA mutations) are expected to influence those characteristics and features. The face is one such feature: it is unique to each individual and depends on his or her genes. In this paper, for the first time, we analyze variations of human DNA and of the face simultaneously, by analyzing the fractal dimension of the DNA walk and of the face during human aging. The results of this study show that both the human DNA and the face become more complex with aging; these complexities are reflected in the fractal exponents of the DNA walk and of the human face. The method discussed in this paper can be developed further to investigate the direct influence of DNA mutation on facial variation during aging, and accordingly to build a model relating the fractality of the human face to the complexity of the DNA walk.
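The DNA walk mentioned above is conventionally constructed by cumulatively summing +1/-1 steps for purines/pyrimidines; its roughness can then be summarized by a scaling exponent. A simplified sketch, in which RMS-displacement scaling stands in for the fractal measures the authors may have used:

```python
import numpy as np

def dna_walk(seq):
    """1-D DNA walk: +1 for purines (A, G), -1 for pyrimidines (C, T)."""
    steps = np.array([1 if b in "AG" else -1 for b in seq.upper()])
    return np.cumsum(steps)

def scaling_exponent(walk, windows=(4, 8, 16, 32)):
    """Estimate the walk's scaling (roughness) exponent from the slope of
    log F(w) vs log w, where F(w) is the RMS displacement over window w.
    A crude stand-in for full fractal-dimension estimation."""
    fs = []
    for w in windows:
        disp = walk[w:] - walk[:-w]
        fs.append(np.sqrt(np.mean(disp.astype(float) ** 2)))
    slope, _ = np.polyfit(np.log(windows), np.log(fs), 1)
    return slope
```

For an uncorrelated (random) sequence the exponent is near 0.5; deviations from 0.5 indicate long-range correlations, the kind of change in complexity the study tracks across aging.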
Sexual Dimorphism Analysis and Gender Classification in 3D Human Face
NASA Astrophysics Data System (ADS)
Hu, Yuan; Lu, Li; Yan, Jingqi; Liu, Zhi; Shi, Pengfei
In this paper, we present a sexual dimorphism analysis of the 3D human face and perform gender classification based on its results. Four types of features are extracted from a 3D human-face image. Using statistical methods, the existence of sexual dimorphism in the 3D human face is demonstrated on the basis of these features, and the contribution of each feature to sexual dimorphism is quantified according to a novel criterion. The best gender classification rate is 94%, obtained using SVMs and the matcher-weighting fusion method. This research adds to the knowledge of sexual dimorphism in 3D faces and affords a foundation that could be used to distinguish between male and female 3D faces.
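Matcher-weighting fusion conventionally assigns each matcher a weight inversely proportional to its equal error rate (EER) and fuses match scores by a weighted sum. A minimal sketch; the EER values used below are illustrative, not the paper's:

```python
import numpy as np

def matcher_weights(eers):
    """Matcher-weighting fusion: each matcher gets a weight proportional to
    1/EER, normalised to sum to 1 (EERs here are assumed/illustrative)."""
    inv = 1.0 / np.asarray(eers, dtype=float)
    return inv / inv.sum()

def fuse_scores(score_matrix, weights):
    """Weighted-sum fusion of per-matcher match scores (rows = samples)."""
    return np.asarray(score_matrix, dtype=float) @ np.asarray(weights)
```

Matchers with lower EER (better standalone accuracy) thus dominate the fused score, which is the rationale for this fusion rule.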
Face to Face Communications in Space
NASA Technical Reports Server (NTRS)
Cohen, Malcolm M.; Davon, Bonnie P. (Technical Monitor)
1999-01-01
It has been reported that human face-to-face communications in space are compromised by facial edema, variations in the orientations of speakers and listeners, and background noises that are encountered in the shuttle and in space stations. To date, nearly all reports have been anecdotal or subjective, in the form of post-flight interviews or questionnaires; objective and quantitative data are generally lacking. Although it is acknowledged that efficient face-to-face communications are essential for astronauts to work safely and effectively, specific ways in which the space environment interferes with non-linguistic communication cues are poorly documented. Because we have only a partial understanding of how non-linguistic communication cues may change with mission duration, it is critically important to obtain objective data, and to evaluate these cues under well-controlled experimental conditions.
Holistic processing and reliance on global viewing strategies in older adults' face perception.
Meinhardt-Injac, Bozana; Persike, Malte; Meinhardt, Günter
2014-09-01
There is increasing evidence that face recognition might be impaired in older adults, but it is unclear whether the impairment is truly perceptual and face specific. To address this question, we compared performance in same/different matching tasks with face and non-face objects (watches) among young adults (mean age 23.7) and older adults (mean age 70.4), using a context congruency paradigm (Meinhardt-Injac, Persike, & Meinhardt, 2010, 2011a). Older adults were less accurate than young adults with both object classes, and face matching was notably impaired. Effects of context congruency and inversion, measured as the hallmarks of holistic processing, were equally strong in both age groups and were found only for faces, not for watches. The face-specific decline in older adults revealed deficits in handling internal facial features, whereas young adults matched external and internal features equally well. Comparison with non-face stimuli showed that this decline was face specific and did not concern processing of object features in general. Taken together, the results indicate no age-related decline in the capability to process faces holistically. Rather, strong holistic effects combined with a loss of precision in handling internal features indicate that older adults rely on global viewing strategies for faces, while access to the exact properties of inner face details becomes restricted. Copyright © 2014. Published by Elsevier B.V.
Coaching the exploration and exploitation in active learning for interactive video retrieval.
Wei, Xiao-Yong; Yang, Zhen-Qun
2013-03-01
Conventional active learning approaches for interactive video/image retrieval usually assume the query distribution is unknown, as it is difficult to estimate with only a limited number of labeled instances available. This easily places the system in a dilemma: whether to explore the feature space in uncertain areas for a better understanding of the query distribution, or to harvest in certain areas for more relevant instances. In this paper, we propose a novel approach called coached active learning that makes the query distribution predictable through training and therefore avoids the risk of searching a completely unknown space. The estimated distribution, which provides a more global view of the feature space, can be used to schedule not only the timing but also the step sizes of exploration and exploitation in a principled way. The results of experiments on a large-scale data set from TRECVID 2005-2009 validate the efficiency and effectiveness of our approach, which performs encouragingly in the face of domain shift, outperforms eight conventional active learning methods, and shows superiority over six state-of-the-art interactive video retrieval systems.
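The scheduling idea, shifting from exploration of uncertain regions toward exploitation of likely-relevant ones as labeling rounds progress, can be caricatured with a simple linear schedule. This is a toy stand-in for the paper's distribution-driven scheduling; the linear decay and the score blend are assumptions:

```python
import numpy as np

def select_next(uncertainty, relevance, round_idx, n_rounds):
    """Blend exploration (high-uncertainty instances) and exploitation
    (high-relevance instances), weighting exploration early in the
    labelling process and exploitation late."""
    lam = 1.0 - round_idx / max(n_rounds - 1, 1)   # explore early, exploit late
    scores = lam * np.asarray(uncertainty) + (1 - lam) * np.asarray(relevance)
    return int(np.argmax(scores))
```

In the actual method, the schedule is driven by the learned query distribution rather than by a fixed decay; the sketch only shows the exploration-to-exploitation handover.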
Searching for substructures in fragment spaces.
Ehrlich, Hans-Christian; Volkamer, Andrea; Rarey, Matthias
2012-12-21
A common task in drug development is the selection of compounds fulfilling specific structural features from a large data pool. While several methods that iteratively search through such data sets exist, their application is limited compared to the infinite character of molecular space. The introduction of the concept of fragment spaces (FSs), which are composed of molecular fragments and their connection rules, made the representation of large combinatorial data sets feasible. At the same time, search algorithms face the problem of structural features spanning over multiple fragments. Due to the combinatorial nature of FSs, an enumeration of all products is impossible. In order to overcome these time and storage issues, we present a method that is able to find substructures in FSs without explicit product enumeration. This is accomplished by splitting substructures into subsubstructures and mapping them onto fragments with respect to fragment connectivity rules. The method has been evaluated on three different drug discovery scenarios considering the exploration of a molecule class, the elaboration of decoration patterns for a molecular core, and the exhaustive query for peptides in FSs. FSs can be searched in seconds, and found products contain novel compounds not present in the PubChem database which may serve as hints for new lead structures.
Gender in facial representations: a contrast-based study of adaptation within and between the sexes.
Oruç, Ipek; Guo, Xiaoyue M; Barton, Jason J S
2011-01-18
Face aftereffects are proving to be an effective means of examining the properties of face-specific processes in the human visual system. We examined the role of gender in the neural representation of faces using a contrast-based adaptation method. If faces of different genders share the same representational face space, then adaptation to a face of one gender should affect both same- and different-gender faces. Further, if these aftereffects differ in magnitude, this may indicate distinct gender-related factors in the organization of this face space. To control for a potential confound between physical similarity and gender, we used a Bayesian ideal observer and human discrimination data to construct a stimulus set in which pairs of different-gender faces were as dissimilar as same-gender pairs. We found that the recognition of both same-gender and different-gender faces was suppressed following a brief exposure of 100 ms. Moreover, recognition was more suppressed for test faces of a different gender than for those of the same gender as the adaptor, despite the equivalence in physical and psychophysical similarity. Our results suggest that male and female faces likely occupy the same face space, allowing transfer of aftereffects between the genders, but that there are special properties that emerge along gender-defining dimensions of this space.
Robust Head-Pose Estimation Based on Partially-Latent Mixture of Linear Regressions.
Drouard, Vincent; Horaud, Radu; Deleforge, Antoine; Ba, Sileye; Evangelidis, Georgios
2017-03-01
Head-pose estimation has many applications, such as social event analysis, human-robot and human-computer interaction, driving assistance, and so forth. Head-pose estimation is challenging, because it must cope with changing illumination conditions, variabilities in face orientation and in appearance, partial occlusions of facial landmarks, as well as bounding-box-to-face alignment errors. We propose to use a mixture of linear regressions with partially-latent output. This regression method learns to map high-dimensional feature vectors (extracted from bounding boxes of faces) onto the joint space of head-pose angles and bounding-box shifts, such that they are robustly predicted in the presence of unobservable phenomena. We describe in detail the mapping method that combines the merits of unsupervised manifold learning techniques and of mixtures of regressions. We validate our method with three publicly available data sets and we thoroughly benchmark four variants of the proposed algorithm with several state-of-the-art head-pose estimation methods.
Lim, Meng-Hui; Teoh, Andrew Beng Jin; Toh, Kar-Ann
2013-06-01
Biometric discretization is a key component in biometric cryptographic key generation. It converts an extracted biometric feature vector into a binary string via typical steps such as segmentation of each feature element into a number of labeled intervals, mapping of each interval-captured feature element onto a binary space, and concatenation of the resulting binary outputs of all feature elements into a binary string. Currently, the detection rate optimized bit allocation (DROBA) scheme is one of the most effective biometric discretization schemes in terms of its capability to assign binary bits dynamically to user-specific features with respect to their discriminability. However, we learn that DROBA suffers from potential discriminative feature misdetection and underdiscretization in its bit allocation process. This paper highlights such drawbacks and improves upon DROBA with a novel two-stage algorithm: 1) a dynamic search method to efficiently recapture such misdetected features and to optimize the bit allocation of underdiscretized features, and 2) a genuine interval concealment technique to alleviate the crucial information leakage resulting from the dynamic search. Improvements in classification accuracy on two popular face data sets vindicate the feasibility of our approach compared with DROBA.
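The generic segmentation-mapping-concatenation pipeline that discretization schemes such as DROBA build on can be sketched as follows. Interval edges and per-feature bit budgets are plain inputs here; DROBA's dynamic, discriminability-driven bit allocation is not reproduced:

```python
import numpy as np

def gray_code(n, bits):
    """bits-wide binary-reflected Gray code of integer n, as a bit string."""
    g = n ^ (n >> 1)
    return format(g, f"0{bits}b")

def discretize(features, edges_per_feature, bits_per_feature):
    """Map each feature element into a labelled interval and concatenate the
    Gray-coded labels into one binary string (a generic sketch of the
    segmentation -> mapping -> concatenation steps, not DROBA itself)."""
    out = []
    for x, edges, bits in zip(features, edges_per_feature, bits_per_feature):
        idx = int(np.clip(np.searchsorted(edges, x), 0, 2 ** bits - 1))
        out.append(gray_code(idx, bits))
    return "".join(out)
```

Gray coding is the usual label encoding in this setting because adjacent intervals differ in only one bit, so small within-user feature variation flips at most one bit of the key material.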
How Category Structure Influences the Perception of Object Similarity: The Atypicality Bias
Tanaka, James William; Kantner, Justin; Bartlett, Marni
2011-01-01
Why do some faces appear more similar than others? Beyond structural factors, we speculate that similarity is governed by the organization of faces located in a multi-dimensional face space. To test this hypothesis, we morphed a typical face with an atypical face. If similarity judgments are guided purely by physical properties, the morph should be perceived as equally similar to its typical parent and its atypical parent. However, contrary to the structural prediction, our results showed that the morph face was perceived to be more similar to the atypical face than to the typical face. Our empirical studies show that the atypicality bias is not limited to faces, but extends to other object categories (birds) whose members share common shape properties. We also demonstrate that the atypicality bias is malleable and can change with category learning and experience. Collectively, the empirical evidence indicates that perceptions of face and object similarity are affected by the distribution of stimuli in a face or object space. In this framework, atypical stimuli are located in a sparser region of the space where there is less competition for recognition and, therefore, these representations capture a broader range of inputs. In contrast, typical stimuli are located in a denser region of category space where there is increased competition for recognition and hence these representations draw a more restricted range of face inputs. These results suggest that the perceived likeness of an object is influenced by the organization of surrounding exemplars in the category space. PMID:22685441
3. VIEW OF STORAGE (FEATURE 24), FACING NORTH. HEADFRAME AND ...
3. VIEW OF STORAGE (FEATURE 24), FACING NORTH. HEADFRAME AND STORAGE TANKS (FEATURE 18) VISIBLE AT RIGHT. - Copper Canyon Camp of the International Smelting & Refining Company, Storage, Copper Canyon, Battle Mountain, Lander County, NV
Miki, Kensaku; Takeshima, Yasuyuki; Watanabe, Shoko; Honda, Yukiko; Kakigi, Ryusuke
2011-04-06
We investigated the effects of inverting facial contour (hair and chin) and features (eyes, nose and mouth) on processing for static and dynamic face perception using magnetoencephalography (MEG). We used apparent motion, in which the first stimulus (S1) was replaced by a second stimulus (S2) with no interstimulus interval and subjects perceived visual motion, and presented three conditions as follows: (1) U&U: Upright contour and Upright features, (2) U&I: Upright contour and Inverted features, and (3) I&I: Inverted contour and Inverted features. In static face perception (S1 onset), the peak latency of the fusiform area's activity, which was related to static face perception, was significantly longer for U&I and I&I than for U&U in the right hemisphere and for U&I than for U&U and I&I in the left. In dynamic face perception (S2 onset), the strength (moment) of the occipitotemporal area's activity, which was related to dynamic face perception, was significantly larger for I&I than for U&U and U&I in the right hemisphere, but not the left. These results can be summarized as follows: (1) in static face perception, the activity of the right fusiform area was more affected by the inversion of features while that of the left fusiform area was more affected by the disruption of the spatial relation between the contour and features, and (2) in dynamic face perception, the activity of the right occipitotemporal area was affected by the inversion of the facial contour. Copyright © 2011 Elsevier B.V. All rights reserved.
Impairments in the Face-Processing Network in Developmental Prosopagnosia and Semantic Dementia
Mendez, Mario F.; Ringman, John M.; Shapira, Jill S.
2015-01-01
Background: Developmental prosopagnosia (DP) and semantic dementia (SD) may be the two most common neurologic disorders of face processing, but their main clinical and pathophysiologic differences have not been established. To identify those features, we compared patients with DP and SD. Methods: Five patients with DP, five with right temporal-predominant SD, and ten normal controls underwent cognitive, visual perceptual, and face-processing tasks. Results: Although the patients with SD were more cognitively impaired than those with DP, the two groups did not differ statistically on the visual perceptual tests. On the face-processing tasks, the DP group had difficulty with configural analysis and reported relying on serial, feature-by-feature analysis or awareness of salient features to recognize faces. By contrast, the SD group had problems with person knowledge and made semantically related errors. The SD group had better face familiarity scores, suggesting a potentially useful clinical test for distinguishing SD from DP. Conclusions: These two disorders of face processing represent clinically distinguishable disturbances along a right-hemisphere face-processing network: DP is characterized by early configural agnosia for faces, and SD primarily by a multimodal person knowledge disorder. We discuss these preliminary findings in the context of the current literature on the face-processing network; recent studies suggest an additional right anterior temporal, unimodal face familiarity-memory deficit consistent with an "associative prosopagnosia." PMID:26705265
Learning the spherical harmonic features for 3-D face recognition.
Liu, Peijiang; Wang, Yunhong; Huang, Di; Zhang, Zhaoxiang; Chen, Liming
2013-03-01
In this paper, a competitive method for 3-D face recognition (FR) using spherical harmonic features (SHF) is proposed. With this solution, 3-D face models are characterized by the energies contained in spherical harmonics with different frequencies, thereby enabling the capture of both gross shape and fine surface details of a 3-D facial surface. This is in clear contrast to most 3-D FR techniques which are either holistic or feature based, using local features extracted from distinctive points. First, 3-D face models are represented in a canonical representation, namely, spherical depth map, by which SHF can be calculated. Then, considering the predictive contribution of each SHF feature, especially in the presence of facial expression and occlusion, feature selection methods are used to improve the predictive performance and provide faster and more cost-effective predictors. Experiments have been carried out on three public 3-D face datasets, SHREC2007, FRGC v2.0, and Bosphorus, with increasing difficulties in terms of facial expression, pose, and occlusion, and which demonstrate the effectiveness of the proposed method.
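The energy-per-frequency idea behind SHF, grouping transform-coefficient energies by frequency, has a simple planar analogue using the 2-D Fourier transform of a depth map; the spherical-harmonic version groups coefficient energies by degree l instead. An illustrative sketch of the planar analogue only, not the paper's spherical pipeline:

```python
import numpy as np

def band_energies(depth_map, n_bands=8):
    """Energy per radial-frequency band of a (planar) depth map. The paper's
    SHF are the spherical analogue: energies of spherical-harmonic
    coefficients grouped by degree l."""
    F = np.fft.fftshift(np.fft.fft2(depth_map))
    power = np.abs(F) ** 2
    h, w = depth_map.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)          # radial frequency
    edges = np.linspace(0, r.max() + 1e-9, n_bands + 1)
    return np.array([power[(r >= lo) & (r < hi)].sum()
                     for lo, hi in zip(edges[:-1], edges[1:])])
```

Low bands capture gross shape and high bands fine surface detail, which mirrors the abstract's motivation for characterizing 3-D faces by energies at different frequencies.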
Deep features for efficient multi-biometric recognition with face and ear images
NASA Astrophysics Data System (ADS)
Omara, Ibrahim; Xiao, Gang; Amrani, Moussa; Yan, Zifei; Zuo, Wangmeng
2017-07-01
Recently, multimodal biometric systems have received considerable research interest in many applications, especially in security. Multimodal systems can increase resistance to spoof attacks, provide more detail and flexibility, and lead to better performance and lower error rates. In this paper, we present a multimodal biometric system based on face and ear images, and show how deep features extracted from them by Convolutional Neural Networks (CNNs) can provide more powerful, discriminative, and robust representations. First, the deep features for face and ear images are extracted with VGG-M Net. Second, the extracted deep features are fused using either traditional concatenation or the Discriminant Correlation Analysis (DCA) algorithm. Third, a multiclass support vector machine is adopted for matching and classification. The experimental results show that the proposed multimodal system based on deep features is efficient and achieves a promising recognition rate of up to 100% using face and ear. In addition, the results indicate that DCA-based fusion is superior to traditional fusion.
NASA Astrophysics Data System (ADS)
Feng, Guang; Li, Hengjian; Dong, Jiwen; Chen, Xi; Yang, Huiru
2018-04-01
In this paper, we propose joint and collaborative representation with Volterra kernel convolution features (JCR-VK) for face recognition. First, the candidate face images are divided into sub-blocks of equal size, and features are extracted from the blocks using two-dimensional Volterra kernel discriminant analysis, which better captures the discriminative information that distinguishes different faces. Next, the proposed joint and collaborative representation is employed to optimize and classify the local Volterra kernel features individually; it is very efficient, as its implementation depends only on matrix multiplication. Finally, recognition is completed using the majority voting principle. Extensive experiments on the Extended Yale B and AR face databases show that the proposed approach outperforms other recently presented, similar dictionary algorithms in recognition accuracy.
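The final majority-voting step, combining the per-block classifier outputs into one identity decision, is straightforward; a minimal sketch:

```python
from collections import Counter

def majority_vote(block_predictions):
    """Final identity = the label predicted by the most per-block
    classifiers (ties broken by first-seen order via Counter)."""
    return Counter(block_predictions).most_common(1)[0][0]
```

Voting over blocks gives robustness: a few occluded or misclassified sub-blocks are outvoted by the majority that still carry the correct identity.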
Facial expression identification using 3D geometric features from Microsoft Kinect device
NASA Astrophysics Data System (ADS)
Han, Dongxu; Al Jawad, Naseer; Du, Hongbo
2016-05-01
Facial expression identification is an important part of face recognition and closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. The Microsoft Kinect device has been widely used for multimedia interactions, and more recently it has been increasingly deployed to support scientific investigations. This paper explores the effectiveness of using the device to identify emotional facial expressions such as surprise, smiling, and sadness, and evaluates the usefulness of 3D data points on a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component that is derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames starting and ending with a neutral expression represent a whole expression. The feature vector eliminates the need for complex face orientation correction, simplifying the feature extraction process and making it more efficient. We applied a kNN classifier that exploits a feature-component-based similarity measure, following the principle of dynamic time warping, to determine the closest neighbors. Preliminary tests on a small-scale database of different facial expressions show the promise of the newly developed features and the usefulness of the Kinect device in facial expression identification.
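The dynamic-time-warping comparison at the heart of the classifier can be sketched as below. The toy per-frame features are hypothetical one-dimensional values; in the paper each frame's feature vector holds mesh-to-reference-point distances from the Kinect face mesh.

```python
import numpy as np

def dtw(a, b):
    """Dynamic-time-warping distance between two feature sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(np.asarray(a[i - 1]) - np.asarray(b[j - 1]))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# toy per-frame feature sequences for two expression templates
smile = [[0.0], [1.0], [2.0]]
surprise = [[5.0], [6.0], [7.0]]
query = [[0.0], [1.0], [2.0], [2.0]]   # same expression, different tempo

# 1-NN over DTW distances picks the closest training sequence
templates = {'smile': smile, 'surprise': surprise}
nearest = min(templates, key=lambda k: dtw(query, templates[k]))
```

DTW lets sequences of different lengths and speeds be compared, which matters because subjects perform the same expression at different tempos.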
Wang, Hailing; Ip, Chengteng; Fu, Shimin; Sun, Pei
2017-05-01
Face recognition theories suggest that our brains process invariant (e.g., gender) and changeable (e.g., emotion) facial dimensions separately. To investigate whether these two dimensions are processed over different time courses, we analyzed the selection negativity (SN, an event-related potential component reflecting attentional modulation) elicited by face gender and emotion during a feature-selective attention task. Participants were instructed to attend to a combination of face emotion and gender attributes in Experiment 1 (bi-dimensional task) and to either face emotion or gender in Experiment 2 (uni-dimensional task). The results revealed that face emotion did not elicit a substantial SN, whereas face gender consistently generated a substantial SN in both experiments. These results suggest that face gender is more sensitive to feature-selective attention and that face emotion is encoded relatively automatically, as indexed by the SN, implying the existence of different underlying processing mechanisms for invariant and changeable facial dimensions. Copyright © 2017 Elsevier Ltd. All rights reserved.
Gilad-Gutnick, Sharon; Harmatz, Elia Samuel; Tsourides, Kleovoulos; Yovel, Galit; Sinha, Pawan
2018-07-01
We report here an unexpectedly robust ability of healthy human individuals (n = 40) to recognize extremely distorted needle-like facial images, challenging the well-entrenched notion that veridical spatial configuration is necessary for extracting facial identity. In face identification tasks with parametrically compressed internal and external features, we found that the sum of performances on each cue falls significantly short of performance on full faces, despite the equal visual information available from both measures (with full faces essentially being a superposition of internal and external features). We hypothesize that this large deficit stems from the use of positional information about how the internal features are positioned relative to the external features. To test this, we systematically changed the relations between internal and external features and found preferential encoding of vertical but not horizontal spatial relationships in facial representations (n = 20). Finally, we employed magnetoencephalography imaging (n = 20) to demonstrate a close mapping between the behavioral psychometric curve and the amplitude of the M250 face-familiarity, but not the M170 face-sensitive, evoked response field component, providing evidence that the M250 can be modulated by faces that are perceptually identifiable, irrespective of extreme distortions to the face's veridical configuration. We theorize that the tolerance to compressive distortions has evolved from the need to recognize faces across varying viewpoints. Our findings help clarify the important, but poorly defined, concept of facial configuration and also enable an association between behavioral performance and previously reported neural correlates of face perception.
Tensor discriminant color space for face recognition.
Wang, Su-Jing; Yang, Jian; Zhang, Na; Zhou, Chun-Guang
2011-09-01
Recent research efforts reveal that color may provide useful information for face recognition. For different visual tasks, the choice of color space generally differs. How can a color space be sought for a specific face recognition problem? To address this problem, this paper represents a color image as a third-order tensor and presents the tensor discriminant color space (TDCS) model. The model can preserve the underlying spatial structure of color images. With the definition of n-mode between-class and within-class scatter matrices, TDCS constructs an iterative procedure to obtain one color space transformation matrix and two discriminant projection matrices by maximizing the ratio of these two scatter matrices. Experiments are conducted on two color face databases, the AR and Georgia Tech face databases, and the results show that both the performance and the efficiency of the proposed method are better than those of the state-of-the-art color image discriminant model, which involves one color space transformation matrix and one discriminant projection matrix, particularly on a complicated face database with various pose variations.
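The scatter-ratio criterion that TDCS iterates over each tensor mode reduces, in the ordinary vector (one-mode) case, to classical discriminant analysis. The sketch below shows only that vector-space analogue on synthetic two-class data; the tensor iteration and the color-space transformation matrix of the actual model are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy data: two classes of flattened 5-D "color patch" vectors
X0 = rng.normal(loc=0.0, size=(20, 5))
X1 = rng.normal(loc=2.0, size=(20, 5))
X = np.vstack([X0, X1])
y = np.array([0] * 20 + [1] * 20)

# between-class (Sb) and within-class (Sw) scatter matrices
mean = X.mean(axis=0)
Sb = np.zeros((5, 5))
Sw = np.zeros((5, 5))
for c in (0, 1):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    Sb += len(Xc) * np.outer(mc - mean, mc - mean)
    Sw += (Xc - mc).T @ (Xc - mc)

# the leading eigenvector of Sw^{-1} Sb maximizes the scatter ratio
evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
w = np.real(evecs[:, np.argmax(np.real(evals))])

proj = X @ w
ratio = float((w @ Sb @ w) / (w @ Sw @ w))
```

TDCS applies this kind of maximization alternately to the color mode and the two spatial modes, which is what preserves the image's spatial structure.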
1. VIEW OF STAFF HOUSE (FEATURE 10), FACING SOUTHWEST. DUPLEX (FEATURE 7) IS VISIBLE IN THE BACKGROUND AT RIGHT. - Copper Canyon Camp of the International Smelting & Refining Company, Staff House, Copper Canyon, Battle Mountain, Lander County, NV
Multiview face detection based on position estimation over multicamera surveillance system
NASA Astrophysics Data System (ADS)
Huang, Ching-chun; Chou, Jay; Shiu, Jia-Hou; Wang, Sheng-Jyh
2012-02-01
In this paper, we propose a multi-view face detection system that locates head positions and indicates the direction of each face in 3-D space over a multi-camera surveillance system. To locate 3-D head positions, conventional methods rely on face detection in 2-D images and project the face regions back to 3-D space for correspondence. However, inevitable false face detections and rejections usually degrade system performance. Instead, our system searches for heads and face directions over the 3-D space using a sliding cube. Each searched 3-D cube is projected onto the 2-D camera views to determine the existence and direction of human faces. Moreover, a pre-processing step that estimates the locations of candidate targets is introduced to speed up the search over the 3-D space. In summary, our proposed method can efficiently fuse multi-camera information and suppress the ambiguity caused by detection errors. Our evaluation shows that the proposed approach can efficiently indicate head position and face direction on real video sequences even under serious occlusion.
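The projection step of the sliding-cube search can be sketched with a standard pinhole camera model. The intrinsic matrix and camera poses below are made-up values, and a real system would verify a face detector's response in each projected region rather than merely checking image bounds.

```python
import numpy as np

def project(point_3d, K, R, t):
    """Pinhole projection of a 3-D world point into one camera view."""
    p = K @ (R @ point_3d + t)
    return p[:2] / p[2]

# one shared intrinsic matrix and two hypothetical camera poses
K = np.array([[100.0, 0.0, 50.0],
              [0.0, 100.0, 50.0],
              [0.0, 0.0, 1.0]])
views = [(np.eye(3), np.zeros(3)),
         (np.eye(3), np.array([-5.0, 0.0, 0.0]))]

def cube_visible(center, img_w=100, img_h=100):
    """Keep a sliding-cube candidate only if its center projects inside
    the image bounds of every camera view."""
    return all(0 <= u <= img_w and 0 <= v <= img_h
               for u, v in (project(center, K, R, t) for R, t in views))

inside = cube_visible(np.array([0.0, 0.0, 10.0]))
outside = cube_visible(np.array([30.0, 0.0, 10.0]))
```

Searching in 3-D and projecting down, rather than detecting in 2-D and back-projecting, is what lets the system fuse evidence from all cameras before committing to a detection.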
Xu, Xinxing; Li, Wen; Xu, Dong
2015-12-01
In this paper, we propose a new approach to improve face verification and person re-identification in the RGB images by leveraging a set of RGB-D data, in which we have additional depth images in the training data captured using depth cameras such as Kinect. In particular, we extract visual features and depth features from the RGB images and depth images, respectively. As the depth features are available only in the training data, we treat the depth features as privileged information, and we formulate this task as a distance metric learning with privileged information problem. Unlike the traditional face verification and person re-identification tasks that only use visual features, we further employ the extra depth features in the training data to improve the learning of distance metric in the training process. Based on the information-theoretic metric learning (ITML) method, we propose a new formulation called ITML with privileged information (ITML+) for this task. We also present an efficient algorithm based on the cyclic projection method for solving the proposed ITML+ formulation. Extensive experiments on the challenging face data sets EUROCOM and CurtinFaces for face verification as well as the BIWI RGBD-ID data set for person re-identification demonstrate the effectiveness of our proposed approach.
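What a learned Mahalanobis metric buys over plain Euclidean distance can be illustrated as below. The ITML+ optimization itself is omitted; the "learned" matrix is simply hand-set to discount a nuisance dimension, standing in for what the depth-privileged training would discover, and all feature values are hypothetical.

```python
import numpy as np

def mahalanobis(x, y, M):
    """Mahalanobis distance between feature vectors under metric M."""
    d = x - y
    return float(np.sqrt(d @ M @ d))

# two RGB-feature vectors of the same person; they differ mostly along a
# nuisance dimension (index 2) that depth-privileged training would flag
# as unreliable
a = np.array([1.0, 2.0, 5.0])
b = np.array([1.1, 2.1, 0.0])

M_euclid = np.eye(3)                      # plain Euclidean baseline
M_learned = np.diag([1.0, 1.0, 0.01])     # metric discounting dimension 2

d_euclid = mahalanobis(a, b, M_euclid)
d_learned = mahalanobis(a, b, M_learned)
```

Under the learned metric, the same-person pair becomes close even though the Euclidean distance between the two vectors is large.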
LDEF's map experiment foil perforations yield hypervelocity impact penetration parameters
NASA Technical Reports Server (NTRS)
Mcdonnell, J. A. M.
1992-01-01
The space exposure of LDEF for 5.75 years, forming a host target in low Earth orbit (LEO) for a wide distribution of hypervelocity particulates of varying dimensions and different impact velocities, has yielded a multiplicity of impact features. Although the projectile parameters are generally unknown and, in fact, not identical for any two impacts on a target, the great number of impacts provides a statistically meaningful basis for valid comparison of the response of different targets. Given sufficient impacts, for example, a comparison of impact features (even without knowledge of the projectile parameters) is possible between: (1) differing material types (for the same incident projectile distribution); (2) differing target configurations (e.g., thick and thin targets for the same material projectiles); and (3) different velocities (using LDEF's different faces). A comparison between different materials is presented for infinite targets of aluminum, Teflon, and brass in the same pointing direction; the maximum finite-target penetration (ballistic limit) is also compared to the penetration of similar materials comprising a semi-infinite target. For comparison of impacts on similar materials at different velocities, use is made of the pointing direction relative to LDEF's orbital motion. First, however, care must be exercised to separate the effect of spatial flux anisotropies from those resulting from the spacecraft's velocity through a geocentrically referenced dust distribution. Data comprising thick and thin target impacts, impacts on different materials, and impacts in different pointing directions are presented; hypervelocity impact parameters are derived. Results are also shown for flux modeling codes developed to decode the relative fluxes of Earth-orbital and unbound interplanetary components intercepting LDEF.
Modeling shows the west and space pointing faces are dominated by interplanetary particles and yields a mean velocity of 23.5 km/s at LDEF, corresponding to a V(infinity) Earth approach velocity of 20.9 km/s. Normally resolved average impact velocities on LDEF's cardinal point faces are shown. An 'excess' flux on the east, north, and south faces is observed, compatible with an Earth-orbital component below some 5 microns in particle diameter.
Mars Orbiter Camera Views the 'Face on Mars' - Best View from Viking
NASA Technical Reports Server (NTRS)
1998-01-01
Shortly after midnight Sunday morning (5 April 1998 12:39 AM PST), the Mars Orbiter Camera (MOC) on the Mars Global Surveyor (MGS) spacecraft successfully acquired a high resolution image of the 'Face on Mars' feature in the Cydonia region. The image was transmitted to Earth on Sunday, and retrieved from the mission computer data base Monday morning (6 April 1998). The image was processed at the Malin Space Science Systems (MSSS) facility by 9:15 AM, and the raw image was immediately transferred to the Jet Propulsion Laboratory (JPL) for release to the Internet. The images shown here were subsequently processed at MSSS.
The picture was acquired 375 seconds after the spacecraft's 220th close approach to Mars. At that time, the 'Face', located at approximately 40.8° N, 9.6° W, was 275 miles (444 km) from the spacecraft. The 'morning' sun was 25° above the horizon. The picture has a resolution of 14.1 feet (4.3 meters) per pixel, making it ten times higher resolution than the best previous image of the feature, which was taken by the Viking Mission in the mid-1970s. The full image covers an area 2.7 miles (4.4 km) wide and 25.7 miles (41.5 km) long. This Viking Orbiter image is one of the best Viking pictures of the Cydonia area where the 'Face' is located. Marked on the image are the 'footprint' of the high resolution (narrow angle) Mars Orbiter Camera image and the area seen in enlarged views (dashed box). See PIA01440-1442 for these images in raw and processed form. Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.
Moghadam, Saeed Montazeri; Seyyedsalehi, Seyyed Ali
2018-05-31
Nonlinear components extracted from deep structures of bottleneck neural networks exhibit a great ability to express the input space in a low-dimensional manifold. Sharing and combining the components boosts the capability of the neural networks to synthesize and interpolate new and imaginary data. This synthesis is possibly a simple model of imagination in the human brain, where the components are expressed in a nonlinear low-dimensional manifold. The current paper introduces a novel Dynamic Deep Bottleneck Neural Network to analyze and extract three main features of videos regarding the expression of emotions on the face. These main features are identity, emotion, and expression intensity, which lie in three different sub-manifolds of one nonlinear general manifold. The proposed model, enjoying the advantages of recurrent networks, was used to analyze the sequence and dynamics of information in videos. Notably, this model also has the potential to synthesize new videos showing variations of one specific emotion on the face of unknown subjects. Experiments on the discrimination and recognition ability of extracted components showed that the proposed model has an average accuracy of 97.77% in recognizing six prominent emotions (Fear, Surprise, Sadness, Anger, Disgust, and Happiness), and 78.17% accuracy in recognizing intensity. The produced videos revealed variations from neutral to the apex of an emotion on the face of the unfamiliar test subject, which were on average 0.8 similar to reference videos on the SSIM scale. Copyright © 2018 Elsevier Ltd. All rights reserved.
2010-05-18
ISS023-E-047431 (18 May 2010) --- Intersecting the thin line of Earth's atmosphere, the docked space shuttle Atlantis is featured in this image photographed by an Expedition 23 crew member on the International Space Station. The Russian-built Mini-Research Module 1 (MRM-1) is visible in the payload bay as the shuttle robotic arm prepares to unberth the module from Atlantis and position it for handoff to the station robotic arm. Named Rassvet, Russian for "dawn," the module is the second in a series of new pressurized components for Russia and will be permanently attached to the Earth-facing port of the Zarya Functional Cargo Block (FGB). Rassvet will be used for cargo storage and will provide an additional docking port to the station.
PROBING FOR EVIDENCE OF PLUMES ON EUROPA WITH HST/STIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sparks, W. B.; Bergeron, E.; Cracraft, M.
2016-10-01
Roth et al. (2014a) reported evidence for plumes of water venting from a southern high-latitude region on Europa: spectroscopic detection of off-limb line emission from the dissociation products of water. Here, we present Hubble Space Telescope direct images of Europa in the far-ultraviolet (FUV) as it transited the smooth face of Jupiter, to measure absorption from gas or aerosols beyond the Europa limb. Out of 10 observations, we found 3 in which plume activity could be implicated. Two observations showed statistically significant features at latitudes similar to Roth et al., and the third at a more equatorial location. We consider potential systematic effects that might influence the statistical analysis and create artifacts, and are unable to find any that can definitively explain the features, although there are reasons to be cautious. If the apparent absorption features are real, the magnitude of implied outgassing is similar to that of the Roth et al. feature; however, the apparent activity appears more frequently in our data.
Transferring of speech movements from video to 3D face space.
Pei, Yuru; Zha, Hongbin
2007-01-01
We present a novel method for transferring speech animation recorded in low quality videos to high resolution 3D face models. The basic idea is to synthesize the animated faces by an interpolation based on a small set of 3D key face shapes which span a 3D face space. The 3D key shapes are extracted by an unsupervised learning process in 2D video space to form a set of 2D visemes which are then mapped to the 3D face space. The learning process consists of two main phases: 1) Isomap-based nonlinear dimensionality reduction to embed the video speech movements into a low-dimensional manifold and 2) K-means clustering in the low-dimensional space to extract 2D key viseme frames. Our main contribution is that we use the Isomap-based learning method to extract intrinsic geometry of the speech video space and thus to make it possible to define the 3D key viseme shapes. To do so, we need only to capture a limited number of 3D key face models by using a general 3D scanner. Moreover, we also develop a skull movement recovery method based on simple anatomical structures to enhance 3D realism in local mouth movements. Experimental results show that our method can achieve realistic 3D animation effects with a small number of 3D key face models.
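The clustering phase of the pipeline can be sketched as below. A hand-made 2-D point set stands in for the Isomap embedding of video frames (the Isomap step itself is omitted), and a plain Lloyd's K-means with a deterministic initialization extracts one key viseme frame per cluster.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain Lloyd's K-means with a deterministic spread-out init."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

# stand-in 2-D embedding of video frames (the Isomap output in the paper)
frames = np.array([[0.0, 0.0], [0.1, 0.0],
                   [5.0, 5.0], [5.1, 5.0],
                   [10.0, 0.0], [10.1, 0.1]])
centers, labels = kmeans(frames, 3)

# the key viseme frame of each cluster = frame nearest its center
keys = [int(np.argmin(((frames - c) ** 2).sum(-1))) for c in centers]
```

The key frames found this way are the 2-D visemes that the method then maps to the small set of scanned 3-D key face shapes.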
Gender in Facial Representations: A Contrast-Based Study of Adaptation within and between the Sexes
Oruç, Ipek; Guo, Xiaoyue M.; Barton, Jason J. S.
2011-01-01
Face aftereffects are proving to be an effective means of examining the properties of face-specific processes in the human visual system. We examined the role of gender in the neural representation of faces using a contrast-based adaptation method. If faces of different genders share the same representational face space, then adaptation to a face of one gender should affect both same- and different-gender faces. Further, if these aftereffects differ in magnitude, this may indicate distinct gender-related factors in the organization of this face space. To control for a potential confound between physical similarity and gender, we used a Bayesian ideal observer and human discrimination data to construct a stimulus set in which pairs of different-gender faces were as dissimilar as same-gender pairs. We found that the recognition of both same-gender and different-gender faces was suppressed following a brief exposure of 100 ms. Moreover, recognition was more suppressed for test faces of a different gender than those of the same gender as the adaptor, despite the equivalence in physical and psychophysical similarity. Our results suggest that male and female faces likely occupy the same face space, allowing transfer of aftereffects between the genders, but that there are special properties that emerge along gender-defining dimensions of this space. PMID:21267414
Method for Face-Emotion Retrieval Using A Cartoon Emotional Expression Approach
NASA Astrophysics Data System (ADS)
Kostov, Vlaho; Yanagisawa, Hideyoshi; Johansson, Martin; Fukuda, Shuichi
A simple method for extracting emotion from a human face, as a form of non-verbal communication, was developed to cope with and optimize mobile communication in a globalized and diversified society. A cartoon face based model was developed and used to evaluate emotional content of real faces. After a pilot survey, basic rules were defined and student subjects were asked to express emotion using the cartoon face. Their face samples were then analyzed using principal component analysis and the Mahalanobis distance method. Feature parameters considered as having relations with emotions were extracted and new cartoon faces (based on these parameters) were generated. The subjects evaluated emotion of these cartoon faces again and we confirmed these parameters were suitable. To confirm how these parameters could be applied to real faces, we asked subjects to express the same emotions which were then captured electronically. Simple image processing techniques were also developed to extract these features from real faces and we then compared them with the cartoon face parameters. It is demonstrated via the cartoon face that we are able to express the emotions from very small amounts of information. As a result, real and cartoon faces correspond to each other. It is also shown that emotion could be extracted from still and dynamic real face images using these cartoon-based features.
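The PCA-plus-Mahalanobis analysis described above can be sketched as follows. The feature vectors are hypothetical cartoon-face measurements (e.g., mouth curvature, eyebrow slope, eye openness), not the study's data; the sketch only shows projecting samples onto principal components and scoring one sample's Mahalanobis distance in that reduced space.

```python
import numpy as np

# hypothetical cartoon-face feature vectors for four posed samples
X = np.array([[0.9, 0.8, 0.7],
              [1.0, 0.9, 0.6],
              [-0.8, -0.9, 0.2],
              [-1.0, -0.7, 0.3]])

Xc = X - X.mean(axis=0)
# principal components via SVD of the centered data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T          # project onto the first two components

def mahalanobis(x, mean, cov_inv):
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

cov_inv = np.linalg.inv(np.cov(scores.T))
d = mahalanobis(scores[0], scores.mean(axis=0), cov_inv)
```

Scoring in component space with the Mahalanobis distance, rather than raw Euclidean distance, accounts for the very different variances the emotion-related components typically have.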
The morphometrics of "masculinity" in human faces.
Mitteroecker, Philipp; Windhager, Sonja; Müller, Gerd B; Schaefer, Katrin
2015-01-01
In studies of social inference and human mate preference, a wide but inconsistent array of tools for computing facial masculinity has been devised. Several of these approaches implicitly assumed that the individual expression of sexually dimorphic shape features, which we refer to as maleness, resembles facial shape features perceived as masculine. We outline a morphometric strategy for estimating separately the face shape patterns that underlie perceived masculinity and maleness, and for computing individual scores for these shape patterns. We further show how faces with different degrees of masculinity or maleness can be constructed in a geometric morphometric framework. In an application of these methods to a set of human facial photographs, we found that shape features typically perceived as masculine are wide faces with a wide inter-orbital distance, a wide nose, thin lips, and a large and massive lower face. The individual expressions of this combination of shape features, the masculinity shape scores, were the best predictor of rated masculinity among the compared methods (r = 0.5). The shape features perceived as masculine only partly resembled the average face shape difference between males and females (sexual dimorphism). Discriminant functions and Procrustes distances to the female mean shape were poor predictors of perceived masculinity.
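Computing an individual score for a shape pattern amounts to projecting a centered shape vector onto the pattern's direction in shape space. The sketch below uses a made-up 3-D shape space and an arbitrary "masculinity axis"; in the paper this direction is estimated from rated photographs in a geometric morphometric framework.

```python
import numpy as np

# hypothetical masculinity axis in a toy 3-D shape space
axis = np.array([1.0, -0.5, 0.8])
axis /= np.linalg.norm(axis)

mean_shape = np.array([0.0, 0.0, 0.0])

def masculinity_score(shape):
    """Project a centered shape vector onto the masculinity axis."""
    return float((shape - mean_shape) @ axis)

face_a = np.array([2.0, -1.0, 1.6])    # exaggerated along the axis
face_b = np.array([-1.0, 0.5, -0.8])   # opposite direction

s_a = masculinity_score(face_a)
s_b = masculinity_score(face_b)
```

Because the score is a projection, faces can also be constructed at any chosen degree of masculinity by adding multiples of the axis to the mean shape.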
Face verification system for Android mobile devices using histogram based features
NASA Astrophysics Data System (ADS)
Sato, Sho; Kobayashi, Kazuhiro; Chen, Qiu
2016-07-01
This paper proposes a face verification system that runs on Android mobile devices. In this system, a facial image is first captured by the device's built-in camera, and then face detection is performed using Haar-like features and the AdaBoost learning algorithm. The proposed system verifies the detected face using histogram-based features, which are generated by a binary Vector Quantization (VQ) histogram using DCT coefficients in low-frequency domains, as well as an Improved Local Binary Pattern (Improved LBP) histogram in the spatial domain. Verification results with the different types of histogram-based features are first obtained separately and then combined by weighted averaging. We evaluate our proposed algorithm using the publicly available ORL database and facial images captured by an Android tablet.
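The histogram-based matching and weighted-averaging fusion can be sketched as follows. A basic 8-bit LBP code and histogram intersection stand in for the paper's Improved LBP and its exact similarity measure, and the weights and scores are hypothetical.

```python
import numpy as np

def lbp_code(patch):
    """8-bit Local Binary Pattern code for the center of a 3x3 patch."""
    c = patch[1, 1]
    ring = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
            patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(1 << i for i, n in enumerate(ring) if n >= c)

def hist_similarity(h1, h2):
    """Histogram intersection; 1.0 for identical normalized histograms."""
    return float(np.minimum(h1, h2).sum())

def fuse(sim_dct, sim_lbp, w=0.5):
    """Weighted average of the DCT-VQ and LBP similarity scores."""
    return w * sim_dct + (1 - w) * sim_lbp

flat = lbp_code(np.full((3, 3), 4.0))     # uniform patch: all bits set
peak = np.full((3, 3), 5.0)
peak[1, 1] = 9.0
dark = lbp_code(peak)                     # bright center: no bits set
score = fuse(0.9, 0.7)
```

Fusing two histogram channels this way lets a weak score in one domain (frequency or spatial) be compensated by the other before thresholding.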
Spaced Learning Enhances Subsequent Recognition Memory by Reducing Neural Repetition Suppression
Xue, Gui; Mei, Leilei; Chen, Chuansheng; Lu, Zhong-Lin; Poldrack, Russell; Dong, Qi
2012-01-01
Spaced learning usually leads to better recognition memory as compared with massed learning, yet the underlying neural mechanisms remain elusive. One open question is whether the spacing effect is achieved by reducing neural repetition suppression. In this fMRI study, participants were scanned while intentionally memorizing 120 novel faces, half under the massed learning condition (i.e., four consecutive repetitions with jittered interstimulus interval) and the other half under the spaced learning condition (i.e., the four repetitions were interleaved). Recognition memory tests afterward revealed a significant spacing effect: Participants recognized more items learned under the spaced learning condition than under the massed learning condition. Successful face memory encoding was associated with stronger activation in the bilateral fusiform gyrus, which showed a significant repetition suppression effect modulated by subsequent memory status and spaced learning. Specifically, remembered faces showed smaller repetition suppression than forgotten faces under both learning conditions, and spaced learning significantly reduced repetition suppression. These results suggest that spaced learning enhances recognition memory by reducing neural repetition suppression. PMID:20617892
2D DOST based local phase pattern for face recognition
NASA Astrophysics Data System (ADS)
Moniruzzaman, Md.; Alam, Mohammad S.
2017-05-01
A new two-dimensional (2-D) Discrete Orthogonal Stockwell Transform (DOST) based Local Phase Pattern (LPP) technique is proposed for efficient face recognition. The proposed technique uses the 2-D DOST for preliminary preprocessing and the local phase pattern to form a robust feature signature which can effectively accommodate various 3D facial distortions and illumination variations. The S-transform, an extension of the continuous wavelet transform (CWT), is known for its local spectral phase properties in time-frequency representation (TFR). It provides a frequency-dependent resolution of the time-frequency space and absolutely referenced local phase information while maintaining a direct relationship with the Fourier spectrum, which is unique among TFRs. Using the 2-D S-transform as preprocessing and building the local phase pattern from the extracted phase information yields a fast and efficient technique for face recognition. The proposed technique shows better correlation discrimination than alternative pattern recognition techniques such as wavelet- or Gabor-based face recognition. The performance of the proposed method has been tested on the Yale and Extended Yale facial databases under different conditions such as illumination variation and 3D changes in facial expression. Test results show that the proposed technique yields better performance than alternative time-frequency representation (TFR) based face recognition techniques.
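The phase-pattern idea can be sketched as below. The plain 2-D FFT stands in for the 2-D DOST (the DOST's frequency-dependent partitioning is omitted); the sketch only shows extracting local phase, quantizing it into a small number of bins, and forming a normalized histogram signature.

```python
import numpy as np

# toy 8x8 "face" image
img = np.outer(np.sin(np.linspace(0, np.pi, 8)), np.ones(8))

# 2-D spectrum; the paper uses the 2-D DOST, the plain FFT stands in here
spec = np.fft.fft2(img)
phase = np.angle(spec)

# quantize each local phase into one of 4 bins -> a compact phase pattern
bins = np.floor((phase + np.pi) / (np.pi / 2)).astype(int) % 4
signature, _ = np.histogram(bins, bins=4, range=(0, 4))
signature = signature / signature.sum()
```

Because phase is largely insensitive to global illumination changes, a phase-derived signature of this kind tends to be more robust to lighting variation than magnitude-based features.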
Facial Asymmetry-Based Age Group Estimation: Role in Recognizing Age-Separated Face Images.
Sajid, Muhammad; Taj, Imtiaz Ahmad; Bajwa, Usama Ijaz; Ratyal, Naeem Iqbal
2018-04-23
Face recognition aims to establish the identity of a person based on facial characteristics. On the other hand, age group estimation is the automatic calculation of an individual's age range based on facial features. Recognizing age-separated face images is still a challenging research problem due to complex aging processes involving different types of facial tissues, skin, fat, muscles, and bones. Certain holistic and local facial features are used to recognize age-separated face images. However, most of the existing methods recognize face images without incorporating the knowledge learned from age group estimation. In this paper, we propose an age-assisted face recognition approach to handle aging variations. Inspired by the observation that facial asymmetry is an age-dependent intrinsic facial feature, we first use asymmetric facial dimensions to estimate the age group of a given face image. Deeply learned asymmetric facial features are then extracted for face recognition using a deep convolutional neural network (dCNN). Finally, we integrate the knowledge learned from the age group estimation into the face recognition algorithm using the same dCNN. This integration results in a significant improvement in the overall performance compared to using the face recognition algorithm alone. The experimental results on two large facial aging datasets, the MORPH and FERET sets, show that the proposed age-group-estimation-assisted face recognition approach yields superior performance compared to some existing state-of-the-art methods. © 2018 American Academy of Forensic Sciences.
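The first stage, mapping asymmetric facial dimensions to an age group, can be sketched as follows. The asymmetry index and the thresholds here are illustrative stand-ins, not the paper's learned measurements or boundaries.

```python
import numpy as np

def asymmetry_index(left, right):
    """Mean absolute difference between left-side facial measurements
    and their mirrored right-side counterparts."""
    return float(np.mean(np.abs(np.asarray(left) - np.asarray(right))))

def age_group(asym, cuts=(0.2, 0.5)):
    """Map the asymmetry index to a coarse age group; the thresholds
    are illustrative, not the paper's learned boundaries."""
    if asym < cuts[0]:
        return 'young'
    if asym < cuts[1]:
        return 'middle'
    return 'old'

young_face = age_group(asymmetry_index([1.0, 2.0], [1.05, 2.05]))
old_face = age_group(asymmetry_index([1.0, 2.0], [1.6, 2.7]))
```

The estimated group then conditions the recognition stage, narrowing the aging variation the dCNN matcher has to absorb.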
Antiferromagnetism and phase diagram in ammoniated alkali fulleride salts
Takenobu; Muro; Iwasa; Mitani
2000-07-10
Intercalation of neutral ammonia molecules into trivalent face-centered-cubic (fcc) fulleride superconductors induces a dramatic change in electronic states. Monoammoniated alkali fulleride salts (NH3)K3-xRbxC60, forming an isostructural orthorhombic series, undergo an antiferromagnetic transition, which was found by electron spin resonance experiments. The Neel temperature first increases with the interfullerene spacing and then decreases for (NH3)Rb3C60, forming a maximum at 76 K. This feature is explained by the generalized phase diagram of the Mott-Hubbard transition with an antiferromagnetic ground state.
Wang, Dayong; Otto, Charles; Jain, Anil K
2017-06-01
Given the prevalence of social media websites, one challenge facing computer vision researchers is to devise methods to search for persons of interest among the billions of shared photos on these websites. Despite significant progress in face recognition, searching a large collection of unconstrained face images remains a difficult problem. To address this challenge, we propose a face search system which combines a fast search procedure, coupled with a state-of-the-art commercial off-the-shelf (COTS) matcher, in a cascaded framework. Given a probe face, we first filter the large gallery of photos to find the top-k most similar faces using features learned by a convolutional neural network. The k retrieved candidates are re-ranked by combining similarities based on deep features and those output by the COTS matcher. We evaluate the proposed face search system on a gallery containing 80 million web-downloaded face images. Experimental results demonstrate that while the deep features perform worse than the COTS matcher on a mugshot dataset (93.7 percent versus 98.6 percent TAR@FAR of 0.01 percent), fusing the deep features with the COTS matcher improves the overall performance (99.5 percent TAR@FAR of 0.01 percent). This shows that the learned deep features provide complementary information over representations used in state-of-the-art face matchers. On the unconstrained face image benchmarks, the performance of the learned deep features is competitive with reported accuracies. LFW database: 98.20 percent accuracy under the standard protocol and 88.03 percent TAR@FAR of 0.1 percent under the BLUFR protocol; IJB-A benchmark: 51.0 percent TAR@FAR of 0.1 percent (verification), rank 1 retrieval of 82.2 percent (closed-set search), 61.5 percent FNIR@FAR of 1 percent (open-set search). The proposed face search system offers an excellent trade-off between accuracy and scalability on galleries with millions of images.
Additionally, in a face search experiment involving photos of the Tsarnaev brothers, convicted of the Boston Marathon bombing, the proposed cascaded face search system could find the younger brother's (Dzhokhar Tsarnaev) photo at rank 1 in 1 second on a 5-million-image gallery and at rank 8 in 7 seconds on an 80-million-image gallery.
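The two-stage cascade described above (deep-feature filtering of the full gallery, then re-ranking the surviving candidates with fused scores) can be sketched as follows. The feature vectors, the `cots_score_fn` callable, and the equal-weight score fusion are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cascade_search(probe_feat, gallery_feats, cots_score_fn, k=100, alpha=0.5):
    """Two-stage cascade: fast deep-feature filtering, then fusion re-ranking.

    probe_feat: (d,) deep feature of the probe face.
    gallery_feats: (N, d) deep features of the gallery.
    cots_score_fn: callable returning a COTS matcher score for a gallery
        index (a hypothetical stand-in for the commercial matcher).
    """
    # Stage 1: cosine similarity against the whole gallery, keep the top-k.
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    p = probe_feat / np.linalg.norm(probe_feat)
    deep_sim = g @ p
    top_k = np.argsort(deep_sim)[::-1][:k]

    # Stage 2: run the (slower) COTS matcher on the k candidates only,
    # fuse both scores, and re-rank.
    fused = [(idx, alpha * deep_sim[idx] + (1 - alpha) * cots_score_fn(idx))
             for idx in top_k]
    fused.sort(key=lambda t: -t[1])
    return [idx for idx, _ in fused]
```

The design point of the cascade is that the expensive matcher is invoked only k times per probe rather than N times, which is what makes an 80M-image gallery tractable.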
Recovering Faces from Memory: The Distracting Influence of External Facial Features
ERIC Educational Resources Information Center
Frowd, Charlie D.; Skelton, Faye; Atherton, Chris; Pitchford, Melanie; Hepton, Gemma; Holden, Laura; McIntyre, Alex H.; Hancock, Peter J. B.
2012-01-01
Recognition memory for unfamiliar faces is facilitated when contextual cues (e.g., head pose, background environment, hair and clothing) are consistent between study and test. By contrast, inconsistencies in external features, especially hair, promote errors in unfamiliar face-matching tasks. For the construction of facial composites, as carried…
Caharel, Stéphanie; Leleu, Arnaud; Bernard, Christian; Viggiano, Maria-Pia; Lalonde, Robert; Rebaï, Mohamed
2013-11-01
The properties of the face-sensitive N170 component of the event-related brain potential (ERP) were explored through an orientation discrimination task using natural faces, objects, and Arcimboldo paintings presented upright or inverted. Because Arcimboldo paintings are composed of non-face objects but have a global face configuration, they provide great control to disentangle high-level face-like or object-like visual processes at the level of the N170, and may help to examine the implication of each hemisphere in the global/holistic processing of face formats. For upright position, N170 amplitudes in the right occipito-temporal region did not differ between natural faces and Arcimboldo paintings but were larger for both of these categories than for objects, supporting the view that as early as the N170 time-window, the right hemisphere is involved in holistic perceptual processing of face-like configurations irrespective of their features. Conversely, in the left hemisphere, N170 amplitudes differed between Arcimboldo portraits and natural faces, suggesting that this hemisphere processes local facial features. For upside-down orientation in both hemispheres, N170 amplitudes did not differ between Arcimboldo paintings and objects, but were reduced for both categories compared to natural faces, indicating that the disruption of holistic processing with inversion leads to an object-like processing of Arcimboldo paintings due to the lack of local facial features. Overall, these results provide evidence that global/holistic perceptual processing of faces and face-like formats involves the right hemisphere as early as the N170 time-window, and that the local processing of face features is rather implemented in the left hemisphere. © 2013.
Kurosumi, M; Mizukoshi, K
2018-05-01
The types of shape feature that constitute a face have not been comprehensively established, and most previous studies of age-related changes in facial shape have focused on individual characteristics, such as wrinkles or sagging skin. In this study, we quantitatively measured differences in face shape between individuals and investigated how shape features changed with age. We analyzed the faces of 280 Japanese women aged 20-69 years three-dimensionally and used principal component analysis to establish the shape features that characterized individual differences. We also evaluated the relationships between each feature and age, clarifying the shape features characteristic of different age groups. Changes in facial shape in middle age were a decreased volume of the upper face and an increased volume of the whole cheeks and around the chin. Changes in older people were an increased volume of the lower cheeks and around the chin, sagging skin, and jaw distortion. Principal component analysis was effective for identifying facial shape features that represent individual and age-related differences. This method allowed straightforward measurements, such as the increase or decrease in cheek volume caused by soft tissue changes or skeletal changes to the forehead or jaw, simply by acquiring three-dimensional facial images. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
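The principal component analysis of 3D face shapes described above can be sketched roughly as follows; the `face_shape_pca` helper and the flattened-landmark representation are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

def face_shape_pca(coords, n_components=2):
    """PCA of 3D face shapes.

    `coords` is assumed to be an (n_faces, n_landmarks*3) array of aligned,
    flattened landmark coordinates.  Returns the mean shape, the principal
    shape components, and each face's scores on those components.
    """
    mean = coords.mean(axis=0)
    centered = coords - mean
    # SVD of the centered data: rows of Vt are the principal shape components.
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    components = Vt[:n_components]
    scores = centered @ components.T
    return mean, components, scores
```

Each component is itself a deformation of the mean shape, so individual or age-group differences can be read off as scores along a few interpretable axes.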
Familiarity effects in the construction of facial-composite images using modern software systems.
Frowd, Charlie D; Skelton, Faye C; Butt, Neelam; Hassan, Amal; Fields, Stephen; Hancock, Peter J B
2011-12-01
We investigate the effect of target familiarity on the construction of facial composites, as used by law enforcement to locate criminal suspects. Two popular software construction methods were investigated. Participants were shown a target face that was either familiar or unfamiliar to them and constructed a composite of it from memory using a typical 'feature' system, involving selection of individual facial features, or one of the newer 'holistic' types, involving repeated selection and breeding from arrays of whole faces. This study found that composites constructed of a familiar face were named more successfully than composites of an unfamiliar face; also, naming of composites of internal and external features was equivalent for construction of unfamiliar targets, but internal features were better named than the external features for familiar targets. These findings applied to both systems, although benefit emerged for the holistic type due to more accurate construction of internal features and evidence for a whole-face advantage. STATEMENT OF RELEVANCE: This work is of relevance to practitioners who construct facial composites with witnesses to and victims of crime, as well as for software designers to help them improve the effectiveness of their composite systems.
Sex differences in face gender recognition: an event-related potential study.
Sun, Yueting; Gao, Xiaochao; Han, Shihui
2010-04-23
Multiple levels of neurocognitive processes are involved in face processing in humans. The present study examined whether early face processing, such as structural encoding, is modulated by task demands that manipulate attention to perceptual or social features of faces, and whether any such effect differs between men and women. Event-related brain potentials were recorded from male and female adults while they identified a low-level perceptual feature of faces (i.e., face orientation) or a high-level social feature of faces (i.e., gender). We found that task demands requiring the processing of face orientation or face gender modulated both the early occipital/temporal negativity (N170) and the late central/parietal positivity (P3). The N170 amplitude was smaller in the gender identification task relative to the orientation identification task, whereas the P3 amplitude was larger in the gender identification task relative to the orientation identification task. In addition, these effects were much stronger in women than in men. Our findings suggest that attention to social information in faces, such as gender, modulates both the early encoding of facial structures and the late evaluative processing of faces to a greater degree in women than in men.
2002-05-31
Thesis Title: Determining if Space is an Applicable Component to Intelligence Preparation of the Battlefield for Ranger Operations When Facing Non-Nation-State... Candidate: Major Michael Bruce Johnson
Collins, Heather R; Zhu, Xun; Bhatt, Ramesh S; Clark, Jonathan D; Joseph, Jane E
2012-12-01
The degree to which face-specific brain regions are specialized for different kinds of perceptual processing is debated. This study parametrically varied demands on featural, first-order configural, or second-order configural processing of faces and houses in a perceptual matching task to determine the extent to which the process of perceptual differentiation was selective for faces regardless of processing type (domain-specific account), specialized for specific types of perceptual processing regardless of category (process-specific account), engaged in category-optimized processing (i.e., configural face processing or featural house processing), or reflected generalized perceptual differentiation (i.e., differentiation that crosses category and processing type boundaries). ROIs were identified in a separate localizer run or with a similarity regressor in the face-matching runs. The predominant principle accounting for fMRI signal modulation in most regions was generalized perceptual differentiation. Nearly all regions showed perceptual differentiation for both faces and houses for more than one processing type, even if the region was identified as face-preferential in the localizer run. Consistent with process specificity, some regions showed perceptual differentiation for first-order processing of faces and houses (right fusiform face area and occipito-temporal cortex and right lateral occipital complex), but not for featural or second-order processing. Somewhat consistent with domain specificity, the right inferior frontal gyrus showed perceptual differentiation only for faces in the featural matching task. The present findings demonstrate that the majority of regions involved in perceptual differentiation of faces are also involved in differentiation of other visually homogenous categories.
NASA Astrophysics Data System (ADS)
Zhao, Yiqun; Wang, Zhihui
2015-12-01
The Internet of Things (IoT) is a kind of intelligent network that can be used to locate, track, identify, and supervise people and objects. One of the important core technologies of the intelligent visual Internet of Things (IVIOT) is the intelligent visual tag system. In this paper, we investigate visual feature extraction and the establishment of visual tags for human faces based on the ORL face database. First, we use the principal component analysis (PCA) algorithm for face feature extraction; then we adopt the support vector machine (SVM) for classification and face recognition; finally, we establish a visual tag for each classified face. We conducted an experiment on a group of face images, and the results show that the proposed algorithm performs well and can conveniently display the visual tags of objects.
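The PCA-plus-SVM pipeline the abstract describes might look roughly like this in scikit-learn; the function names and the synthetic data in the test below are hypothetical stand-ins for flattened ORL face images.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def train_face_tagger(images, labels, n_components=2):
    """PCA feature extraction followed by an SVM classifier, as in the
    abstract.  `images` is an (n_samples, n_pixels) array of flattened
    face images; returns the fitted (pca, svm) pair."""
    pca = PCA(n_components=n_components)
    feats = pca.fit_transform(images)
    svm = SVC(kernel="linear")
    svm.fit(feats, labels)
    return pca, svm

def tag_face(pca, svm, image):
    """Project one flattened face image into PCA space and return its
    predicted visual tag (class label)."""
    return svm.predict(pca.transform(image.reshape(1, -1)))[0]
```

In a real IVIOT setting the predicted label would then be bound to the tracked object as its visual tag.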
Emotion-independent face recognition
NASA Astrophysics Data System (ADS)
De Silva, Liyanage C.; Esther, Kho G. P.
2000-12-01
Current face recognition techniques tend to work well when recognizing faces under small variations in lighting, facial expression and pose, but deteriorate under more extreme conditions. In this paper, a face recognition system to recognize faces of known individuals, despite variations in facial expression due to different emotions, is developed. The eigenface approach is used for feature extraction. Classification methods include Euclidean distance, a back-propagation neural network and a generalized regression neural network. These methods yield 100% recognition accuracy when the training database is representative, containing one image of the peak expression for each emotion of each person in addition to the neutral expression. The feature vectors used for comparison in the Euclidean distance method, and for training the neural networks, comprise all the feature vectors of the training set. These results are obtained for a face database consisting of only four persons.
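The eigenface feature extraction with Euclidean-distance classification could be sketched as below; the helper names and the tiny synthetic "faces" in the test are illustrative assumptions, not the paper's data.

```python
import numpy as np

def eigenface_basis(train_imgs, n_eigenfaces=3):
    """Compute the mean face and the top eigenfaces from flattened
    training images (one image per row), via SVD of the centered data."""
    mean = train_imgs.mean(axis=0)
    _, _, Vt = np.linalg.svd(train_imgs - mean, full_matrices=False)
    return mean, Vt[:n_eigenfaces]

def recognize(mean, eigenfaces, train_imgs, train_ids, probe):
    """Euclidean-distance matching in eigenface space: the probe is
    assigned the identity of the nearest training feature vector."""
    train_feats = (train_imgs - mean) @ eigenfaces.T
    probe_feat = (probe - mean) @ eigenfaces.T
    dists = np.linalg.norm(train_feats - probe_feat, axis=1)
    return train_ids[int(np.argmin(dists))]
```

The neural-network classifiers in the abstract would consume the same eigenface feature vectors; only the distance computation is replaced by a trained network.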
Mapping attractor fields in face space: the atypicality bias in face recognition.
Tanaka, J; Giles, M; Kremen, S; Simon, V
1998-09-01
A familiar face can be recognized across many changes in the stimulus input. In this research, the many-to-one mapping of face stimuli to a single face memory is referred to as a face memory's 'attractor field'. According to the attractor field approach, a face memory will be activated by any stimulus falling within the boundaries of its attractor field. It was predicted that, by virtue of its location in a multi-dimensional face space, the attractor field of an atypical face will be larger than the attractor field of a typical face. To test this prediction, subjects made likeness judgments to morphed faces that contained a 50/50 contribution from an atypical and a typical parent face. The main result of four experiments was that the morph face was judged to bear a stronger resemblance to the atypical parent than to the typical parent. The computational basis of the atypicality bias was demonstrated in a neural network simulation in which morph inputs of atypical and typical representations elicited stronger activation of atypical output units than of typical output units. Together, the behavioral and simulation evidence supports the view that the attractor fields of atypical faces span a broader region of face space than the attractor fields of typical faces.
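The attractor-field idea can be illustrated with a toy one-dimensional face space (an assumption of this sketch, not the paper's simulation): if each face memory's field width scales with its distance to the nearest neighboring face, a 50/50 morph that is equidistant from both parents nevertheless activates the isolated, atypical parent more strongly.

```python
import numpy as np

def likeness(morph, parent, sigma):
    """Gaussian activation of a face memory by a morph stimulus; sigma is
    the width of that memory's attractor field."""
    return np.exp(-(morph - parent) ** 2 / (2 * sigma ** 2))

# Toy face space: a typical parent sits in a crowded region, so its field
# is narrow; the atypical parent is isolated, so its field is broad.
# Field width is set to the distance to the nearest neighboring face.
faces = np.array([-1.0, 0.0, 1.0, 5.0])   # last face is atypical
typical, atypical = 0.0, 5.0
sigma_typ = 1.0    # nearest neighbor of the typical face is 1 unit away
sigma_atyp = 4.0   # nearest neighbor of the atypical face is 4 units away

morph = (typical + atypical) / 2           # 50/50 morph, equidistant
a_typ = likeness(morph, typical, sigma_typ)
a_atyp = likeness(morph, atypical, sigma_atyp)
# a_atyp exceeds a_typ: the morph is judged more like the atypical parent.
```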
Instrument and method for focusing x rays, gamma rays, and neutrons
Smither, R.K.
1981-04-20
A crystal diffraction instrument is described which has an improved crystalline structure having a face for receiving a beam of photons or neutrons, with the diffraction planar spacing increasing progressively along that face to provide a decreasing Bragg angle and thereby an increased usable area and acceptance angle. The increased planar spacing is provided by the use of a temperature differential across the crystalline structure, by assembling a plurality of crystalline structures with different compositions, by an individual crystalline structure with a varying composition and thereby a changing planar spacing along its face, or by combinations of these techniques.
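The behavior the patent relies on (larger planar spacing gives a smaller diffraction angle) follows directly from Bragg's law, n·λ = 2·d·sin(θ); a quick numerical check with illustrative values:

```python
import math

def bragg_angle(wavelength, d_spacing, order=1):
    """Bragg angle (radians) from n * lambda = 2 * d * sin(theta)."""
    return math.asin(order * wavelength / (2 * d_spacing))

# Illustrative numbers only: a 1.0 angstrom photon diffracting off planes
# whose spacing increases along the crystal face, as in the abstract.
for d in (1.0, 1.5, 2.0):   # spacing in angstroms
    theta = math.degrees(bragg_angle(1.0, d))
    print(f"d = {d:.1f} A -> Bragg angle = {theta:.1f} deg")
```

As d grows along the face, θ shrinks, which is what lets a graded-spacing crystal present a larger usable area and acceptance angle to the incoming beam.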
Hosoya, Haruo; Hyvärinen, Aapo
2017-07-01
Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models.
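A heavily simplified sketch of the mixture-of-sparse-coding idea (not the authors' model): each submodel scores an input by its sparse reconstruction cost, and the winning category gates which units respond, a crude analogue of the top-down explaining-away effect. The one-step soft-thresholding inference assumes near-orthonormal dictionary columns.

```python
import numpy as np

def soft_threshold(z, lam):
    """Elementwise shrinkage operator used for sparse coding."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def submodel_evidence(x, D, lam=0.1):
    """Approximate sparse code for input x under dictionary D (one
    soft-thresholding step) and its cost: reconstruction error plus an
    L1 sparsity penalty, standing in for negative log evidence."""
    code = soft_threshold(D.T @ x, lam)
    cost = 0.5 * np.sum((x - D @ code) ** 2) + lam * np.sum(np.abs(code))
    return code, cost

def classify(x, D_face, D_obj, lam=0.1):
    """Pick the submodel (face vs. object) with the lower cost; units in
    the losing submodel are 'explained away' (their responses gated off)."""
    code_f, cost_f = submodel_evidence(x, D_face, lam)
    code_o, cost_o = submodel_evidence(x, D_obj, lam)
    return ("face", code_f) if cost_f < cost_o else ("object", code_o)
```

The gating is what makes a part unit's response depend on the category of the whole input, the property the paper ties to face-selective responses in IT.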
von Braunmühl, T; Hartmann, D; Tietze, J K; Cekovic, D; Kunte, C; Ruzicka, T; Berking, C; Sattler, E C
2016-11-01
Optical coherence tomography (OCT) has become a valuable non-invasive tool in the in vivo diagnosis of non-melanoma skin cancer, especially of basal cell carcinoma (BCC). Owing to an updated software-supported algorithm, a new en-face mode - similar to the horizontal en-face mode in high-definition OCT and reflectance confocal microscopy - makes surface-parallel imaging possible, which, in combination with the established slice mode of frequency-domain (FD-)OCT, may offer additional information in the diagnosis of BCC. Our aim was to define characteristic morphologic features of BCC using the new en-face mode in addition to the conventional cross-sectional imaging mode for three-dimensional imaging of BCC in FD-OCT. A total of 33 BCC were examined preoperatively by imaging in the en-face mode as well as the cross-sectional mode in FD-OCT. Characteristic features were evaluated and correlated with histopathology findings. Features established in the cross-sectional imaging mode as well as additional features were present in the en-face mode of FD-OCT: lobulated structures (100%), dark peritumoral rim (75%), bright peritumoral stroma (96%), branching vessels (90%), compressed fibrous bundles between lobulated nests ('star shaped') (78%), and intranodular small bright dots (51%). These features were also evaluated according to the histopathological subtype. In the en-face mode, the lobulated structures with compressed fibrous bundles of the BCC were more distinct than in the slice mode. FD-OCT with the combination of horizontal and vertical imaging modes offers additional information in the diagnosis of BCC, especially in nodular BCC, and enhances the possibility of evaluating morphologic tumour features. © 2016 European Academy of Dermatology and Venereology.
Faces in-between: evaluations reflect the interplay of facial features and task-dependent fluency.
Winkielman, Piotr; Olszanowski, Michal; Gola, Mateusz
2015-04-01
Facial features influence social evaluations. For example, faces are rated as more attractive and trustworthy when they have more smiling features and also more female features. However, the influence of facial features on evaluations should be qualified by the affective consequences of fluency (cognitive ease) with which such features are processed. Further, fluency (along with its affective consequences) should depend on whether the current task highlights conflict between specific features. Four experiments are presented. In 3 experiments, participants saw faces varying in expressions ranging from pure anger, through mixed expression, to pure happiness. Perceivers first categorized faces either on a control dimension, or an emotional dimension (angry/happy). Thus, the emotional categorization task made "pure" expressions fluent and "mixed" expressions disfluent. Next, participants made social evaluations. Results show that after emotional categorization, but not control categorization, targets with mixed expressions are relatively devalued. Further, this effect is mediated by categorization disfluency. Additional data from facial electromyography reveal that on a basic physiological level, affective devaluation of mixed expressions is driven by their objective ambiguity. The fourth experiment shows that the relative devaluation of mixed faces that vary in gender ambiguity requires a gender categorization task. Overall, these studies highlight that the impact of facial features on evaluation is qualified by their fluency, and that the fluency of features is a function of the current task. The discussion highlights the implications of these findings for research on emotional reactions to ambiguity. (c) 2015 APA, all rights reserved.
Leading the Public Face of Space
NASA Technical Reports Server (NTRS)
Dumbacher, Daniel L.
2010-01-01
The National Aeronautics and Space Administration (NASA) is fully committed to sharing the excitement of America's international space missions with its stakeholders, particularly the general public. In 2009, the Space Shuttle delivered astronauts to the Hubble Space Telescope to service that great observatory and to the International Space Station to install the observation platform on the Japanese Kibo laboratory. The Lunar Reconnaissance Orbiter is showing an unprecedented view of the Moon, confirming the presence of hardware left behind during the Apollo missions decades ago and helping scientists better understand Earth's natural satellite. These and numerous other exciting missions are fertile subjects for public education and outreach. NASA's core mission includes engaging the public face of space in many forms and forums. Agency goals include communicating with people across the United States and through international opportunities. NASA has created a culture where communication opportunities are valued avenues to deliver information about scientific findings and exploration possibilities. As this presentation will show, NASA's leaders act as ambassadors in the public arena and set expectations for involvement across their organizations. This presentation will focus on the qualities that NASA leaders cultivate to achieve challenging missions, to expand horizons and question "why". Leaders act with integrity and recognize the power of the team multiplier effect on delivering technical performance within budget and schedule, as well as through participation in education and outreach opportunities. Leaders are responsible for budgeting the resources needed to reach target audiences with compelling, relevant information and serve as role models, delivering key messages to various audiences. 
Examples that will be featured in this presentation include the Student Launch Projects and Great Moonbuggy race, which reach hundreds of students who are a promising pipeline for new scientists and engineers for a new generation of discovery. The popular Exploration Experience trailer is an interactive-exhibit environment that travels across the United States, conveying the innovation necessary for space travel and the wonder of discovery that comes from viewing our planet as part of the larger space-scape.
Multiple Representations-Based Face Sketch-Photo Synthesis.
Peng, Chunlei; Gao, Xinbo; Wang, Nannan; Tao, Dacheng; Li, Xuelong; Li, Jie
2016-11-01
Face sketch-photo synthesis plays an important role in law enforcement and digital entertainment. Most existing methods use only pixel intensities as the feature. Since face images can be described using features from multiple aspects, this paper presents a novel multiple representations-based face sketch-photo synthesis method that adaptively combines multiple representations to represent an image patch. In particular, it combines multiple features from face images processed using multiple filters and deploys Markov networks to exploit the interacting relationships between neighboring image patches. The proposed framework can be solved using an alternating optimization strategy, and it normally converges in only five outer iterations in our experiments. Experimental results on the Chinese University of Hong Kong (CUHK) face sketch database, celebrity photos, the CUHK Face Sketch FERET database, the IIIT-D Viewed Sketch database, and forensic sketches demonstrate the effectiveness of our method for face sketch-photo synthesis. In addition, cross-database and database-dependent style-synthesis evaluations demonstrate the generalizability of this novel method and suggest promising solutions for face identification in forensic science.
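A minimal patch-based sketch of the synthesis idea, assuming paired training photo/sketch patches: each probe photo patch is replaced by the sketch patch paired with its nearest training photo patch. The Markov-network compatibility between neighboring patches, central to the paper, is omitted here for brevity.

```python
import numpy as np

def synthesize_sketch(photo_patches, train_photo, train_sketch):
    """Greedy patch-wise sketch synthesis.

    photo_patches: (m, p) flattened patches from the probe photo.
    train_photo:   (n, p) flattened training photo patches.
    train_sketch:  (n, p) the sketch patches paired with train_photo.
    Returns the (m, p) synthesized sketch patches.
    """
    out = []
    for patch in photo_patches:
        # Nearest training photo patch by Euclidean distance ...
        dists = np.linalg.norm(train_photo - patch, axis=1)
        # ... contributes its paired sketch patch to the output.
        out.append(train_sketch[int(np.argmin(dists))])
    return np.array(out)
```

The paper's method differs in combining several filtered representations per patch and letting Markov-network messages between neighboring patches override purely local matches.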
Taubert, Jessica; Parr, Lisa A
2011-01-01
All primates can recognize faces and do so by analyzing the subtle variation that exists between faces. Through a series of three experiments, we attempted to clarify the nature of second-order information processing in nonhuman primates. Experiment one showed that both chimpanzees (Pan troglodytes) and rhesus monkeys (Macaca mulatta) tolerate geometric distortions along the vertical axis, suggesting that information about absolute position of features does not contribute to accurate face recognition. Chimpanzees differed from monkeys, however, in that they were more sensitive to distortions along the horizontal axis, suggesting that when building a global representation of facial identity, horizontal relations between features are more diagnostic of identity than vertical relations. Two further experiments were performed to determine whether the monkeys were simply less sensitive to horizontal relations compared to chimpanzees or were instead relying on local features. The results of these experiments confirm that monkeys can utilize a holistic strategy when discriminating between faces regardless of familiarity. In contrast, our data show that chimpanzees, like humans, use a combination of holistic and local features when the faces are unfamiliar, but primarily holistic information when the faces become familiar. We argue that our comparative approach to the study of face recognition reveals the impact that individual experience and social organization has on visual cognition.
The role of skin colour in face recognition.
Bar-Haim, Yair; Saidel, Talia; Yovel, Galit
2009-01-01
People have better memory for faces from their own racial group than for faces from other races. It has been suggested that this own-race recognition advantage depends on an initial categorisation of faces into own and other race based on racial markers, resulting in poorer encoding of individual variations in other-race faces. Here, we used a study-test recognition task with stimuli in which the skin colour of African and Caucasian faces was manipulated to produce four categories representing the crossing of skin colour and facial features. We show that, despite the notion that skin colour plays a major role in categorising faces into own- and other-race faces, its effect on face recognition is minor relative to differences across races in facial features.
Bayesian Face Recognition and Perceptual Narrowing in Face-Space
Balas, Benjamin
2012-01-01
During the first year of life, infants’ face recognition abilities are subject to “perceptual narrowing,” the end result of which is that observers lose the ability to distinguish previously discriminable faces (e.g. other-race faces) from one another. Perceptual narrowing has been reported for faces of different species and different races, in developing humans and primates. Though the phenomenon is highly robust and replicable, there have been few efforts to model the emergence of perceptual narrowing as a function of the accumulation of experience with faces during infancy. The goal of the current study is to examine how perceptual narrowing might manifest as statistical estimation in “face space,” a geometric framework for describing face recognition that has been successfully applied to adult face perception. Here, I use a computer vision algorithm for Bayesian face recognition to study how the acquisition of experience in face space and the presence of race categories affect performance for own and other-race faces. Perceptual narrowing follows from the establishment of distinct race categories, suggesting that the acquisition of category boundaries for race is a key computational mechanism in developing face expertise. PMID:22709406
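The statistical-estimation idea in this abstract can be illustrated with a toy one-dimensional face space (my own minimal sketch, not the paper's computer-vision algorithm; the cluster means, sample sizes, and prior strength are arbitrary). Discriminability for a face pair is modeled as their distance scaled by the standard deviation of the category they are assigned to, so forming a separate, poorly estimated other-race category reduces other-race discriminability:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "face space": own-race faces cluster near 0, other-race near 3,
# and the developing observer has far more own-race experience.
own = rng.normal(0.0, 1.0, 500)
other = rng.normal(3.0, 1.0, 20)

def discriminability(pair_distance, category_sd):
    # Two faces are easier to tell apart when the variance of the
    # category they fall into is small relative to their separation.
    return pair_distance / category_sd

# Before narrowing: a single undifferentiated category pools all faces.
pooled_sd = np.concatenate([own, other]).std()

# After narrowing: separate race categories. The sparse other-race
# category is shrunk toward a broad prior (few samples -> wide estimate).
prior_sd, n0 = 2.0, 10
own_sd = own.std()
other_sd = np.sqrt((other.var() * len(other) + prior_sd**2 * n0) / (len(other) + n0))

d_before = discriminability(1.0, pooled_sd)  # same for all face pairs
d_own = discriminability(1.0, own_sd)        # own-race pairs after narrowing
d_other = discriminability(1.0, other_sd)    # other-race pairs after narrowing
```

With these (arbitrary) settings, establishing race categories sharpens own-race discrimination while degrading other-race discrimination, mirroring the narrowing pattern the abstract describes.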
2. VIEW OF THE HATCH ADIT (FEATURE B-28), FACING NORTH. ADIT ROAD IS VISIBLE IN THE FOREGROUND AND OFFICE (FEATURE B-1) IN THE BACKGROUND. - Nevada Lucky Tiger Mill & Mine, Hatch Adit, East slope of Buckskin Mountain, Paradise Valley, Humboldt County, NV
Newborns' Mooney-Face Perception
ERIC Educational Resources Information Center
Leo, Irene; Simion, Francesca
2009-01-01
The aim of this study is to investigate whether newborns detect a face on the basis of a Gestalt representation based on first-order relational information (i.e., the basic arrangement of face features) by using Mooney stimuli. The incomplete 2-tone Mooney stimuli were used because they preclude focusing both on the local features (i.e., the fine…
ERIC Educational Resources Information Center
Faja, Susan; Webb, Sara Jane; Merkle, Kristen; Aylward, Elizabeth; Dawson, Geraldine
2009-01-01
The present study investigates the accuracy and speed of face processing employed by high-functioning adults with autism spectrum disorders (ASDs). Two behavioral experiments measured sensitivity to distances between features and face recognition when performance depended on holistic versus featural information. Results suggest adults with ASD…
Developmental Change in Infant Categorization: The Perception of Correlations among Facial Features.
ERIC Educational Resources Information Center
Younger, Barbara
1992-01-01
Tested 7 and 10 month olds for perception of correlations among facial features. After habituation to faces displaying a pattern of correlation, 10 month olds generalized to a novel face that preserved the pattern of correlation but showed increased attention to a novel face that violated the pattern. (BC)
Person Authentication Using Learned Parameters of Lifting Wavelet Filters
NASA Astrophysics Data System (ADS)
Niijima, Koichi
2006-10-01
This paper proposes a method for identifying persons using lifting wavelet parameters learned by kurtosis minimization. Our learning method uses desirable properties of kurtosis and wavelet coefficients of a facial image. Exploiting these properties, the lifting parameters are trained so as to minimize the kurtosis of lifting wavelet coefficients computed for the facial image. Since this minimization problem is ill-posed, it is solved with the aid of Tikhonov's regularization method. Our learning algorithm is applied to each of the faces to be identified to generate its feature vector, whose components consist of the learned parameters. The constructed feature vectors are memorized together with the corresponding faces in a feature vectors database. Person authentication is performed by comparing the feature vector of a query face with those stored in the database. In numerical experiments, the lifting parameters are trained for each of the neutral faces of 132 persons (74 males and 58 females) in the AR face database. Person authentication is executed by using the smile and anger faces of the same persons in the database.
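The core idea can be sketched with one predict step of a lifting scheme and a single learnable parameter, minimizing the kurtosis of the detail coefficients under a Tikhonov penalty. This is only an illustration of the objective: the signal, the regularization weight `lam`, and the prior value `p0` are assumptions, and grid search stands in for whatever solver the paper uses:

```python
import numpy as np

def kurtosis(x):
    # Fourth standardized moment of the coefficient distribution.
    x = x - x.mean()
    return (x**4).mean() / (x.var()**2 + 1e-12)

def lifting_detail(signal, p):
    # One predict step of a lifting scheme: detail = odd - p * even.
    even, odd = signal[0::2], signal[1::2]
    return odd - p * even

def learn_parameter(signal, lam=0.1, p0=0.5):
    # Tikhonov-regularized kurtosis minimization, solved here by a
    # simple grid search over the lifting parameter.
    grid = np.linspace(-2, 2, 401)
    costs = [kurtosis(lifting_detail(signal, p)) + lam * (p - p0)**2 for p in grid]
    return grid[int(np.argmin(costs))]

rng = np.random.default_rng(1)
face_row = np.cumsum(rng.normal(size=256))  # smooth stand-in for an image row
p_star = learn_parameter(face_row)
```

In the paper, a vector of such learned parameters serves as the identity's feature vector, and authentication compares a query face's parameters against the stored database.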
Surface charge features of kaolinite particles and their interactions
NASA Astrophysics Data System (ADS)
Gupta, Vishal
Kaolinite is both a blessing and a curse. As an important industrial mineral commodity, kaolinite clays are extensively used in the paper, ceramic, paint, plastic and rubber industries. In all these applications the wettability, aggregation, dispersion, flotation and thickening of kaolinite particles are affected by its crystal structure and surface properties. It is therefore the objective of this research to investigate selected physical and surface chemical properties of kaolinite, specifically the surface charge of kaolinite particles. A pool of advanced analytical techniques such as XRD, XRF, SEM, AFM, FTIR and ISS were utilized to investigate the morphological and surface chemistry features of kaolinite. Surface force measurements revealed that the silica tetrahedral face of kaolinite is negatively charged at pH>4, whereas the alumina octahedral face of kaolinite is positively charged at pH<6, and negatively charged at pH>8. Based on electrophoresis measurements, the apparent iso-electric point for kaolinite particles was determined to be less than pH 3. In contrast, the point of zero charge was determined to be pH 4.5 by titration techniques, which corresponds to the iso-electric point of between pH 4 and 5 as determined by surface force measurements. Results from kaolinite particle interactions indicate that the silica face--alumina face interaction is dominant for kaolinite particle aggregation at low and intermediate pH values, which explains the maximum shear yield stress at pH 5-5.5. Lattice resolution images reveal the hexagonal lattice structure of these two face surfaces of kaolinite. Analysis of the silica face of kaolinite showed that the center of the hexagonal ring of oxygen atoms is vacant, whereas the alumina face showed that the hexagonal surface lattice ring of hydroxyls surround another hydroxyl in the center of the ring. 
High resolution transmission electron microscopy investigation of kaolinite has indicated that kaolinite is indeed composed of silica/alumina bilayers with a c-spacing of 7.2 A. The surface charge densities of the silica face, the alumina face and the edge surface of kaolinite all influence particle interactions, and thereby affect the mechanical properties of kaolinite suspensions. The improved knowledge of kaolinite surface chemistry from this dissertation research provides a foundation for the development of improved process strategies for both the use and disposal of clay particles such as kaolinite.
Rous, Andrew M.; McLean, Adrienne R.; Barber, Jessica; Bravener, Gale; Castro-Santos, Theodore; Holbrook, Christopher M.; Imre, Istvan; Pratt, Thomas C.; McLaughlin, Robert L.
2017-01-01
Crucial to the management of invasive species is understanding space use and the environmental features affecting space use. Improved understanding of space use by invasive sea lamprey (Petromyzon marinus) could help researchers discern why trap success in large rivers is lower than needed for effective control. We tested whether manipulating discharge nightly could increase trap success at a hydroelectric generating station on the St. Marys River. We quantified numbers of acoustically tagged sea lampreys migrating up to, and their space use at, the hydroelectric generating station. In 2011 and 2012, 78% and 68%, respectively, of tagged sea lampreys reached the generating station. Sea lampreys were active along the face, but more likely to occur at the bottom and away from the traps near the surface, especially when discharge was high. Our findings suggest that a low probability of encountering traps was due to spatial (vertical) mismatch between space use by sea lamprey and trap locations and that increasing discharge did not alter space use in ways that increased trap encounter. Understanding space use by invasive species can help managers assess the efficacy of trapping and ways of improving trapping success.
Early sensitivity for eyes within faces: a new neuronal account of holistic and featural processing
Nemrodov, Dan; Anderson, Thomas; Preston, Frank F.; Itier, Roxane J.
2017-01-01
Eyes are central to face processing; however, their role in early face encoding, as reflected by the N170 ERP component, is unclear. Using eye tracking to enforce fixation on specific facial features, we found that the N170 was larger for fixation on the eyes compared to fixation on the forehead, nasion, nose or mouth, which all yielded similar amplitudes. This eye sensitivity was seen in both upright and inverted faces and was lost in eyeless faces, demonstrating it was due to the presence of eyes at fovea. Upright eyeless faces elicited the largest N170 at nose fixation. Importantly, the N170 face inversion effect (FIE) was strongly attenuated in eyeless faces when fixation was on the eyes but was less attenuated for nose fixation and was normal when fixation was on the mouth. These results suggest the impact of eye removal on the N170 FIE is a function of the angular distance between the fixated feature and the eye location. We propose the Lateral Inhibition, Face Template and Eye Detector based (LIFTED) model which accounts for all the present N170 results including the FIE and its interaction with eye removal. Although eyes elicit the largest N170 response, reflecting the activity of an eye detector, the processing of upright faces is holistic and entails an inhibitory mechanism from neurons coding parafoveal information onto neurons coding foveal information. The LIFTED model provides a neuronal account of holistic and featural processing involved in upright and inverted faces and offers precise predictions for further testing. PMID:24768932
1988-03-01
[Extraction fragment from the report's list of figures: plans of sites 39ST282, 39ST283, 39DW64, and 39DW65; detailed plans of Features 1 and 2; site views facing E, NE, and NW.]
Place recognition and heading retrieval are mediated by dissociable cognitive systems in mice.
Julian, Joshua B; Keinath, Alexander T; Muzzio, Isabel A; Epstein, Russell A
2015-05-19
A lost navigator must identify its current location and recover its facing direction to restore its bearings. We tested the idea that these two tasks, place recognition and heading retrieval, might be mediated by distinct cognitive systems in mice. Previous work has shown that numerous species, including young children and rodents, use the geometric shape of local space to regain their sense of direction after disorientation, often ignoring nongeometric cues even when they are informative. Notably, these experiments have almost always been performed in single-chamber environments in which there is no ambiguity about place identity. We examined the navigational behavior of mice in a two-chamber paradigm in which animals had to both recognize the chamber in which they were located (place recognition) and recover their facing direction within that chamber (heading retrieval). In two experiments, we found that mice used nongeometric features for place recognition, but simultaneously failed to use these same features for heading retrieval, instead relying exclusively on spatial geometry. These results suggest the existence of separate systems for place recognition and heading retrieval in mice that are differentially sensitive to geometric and nongeometric cues. We speculate that a similar cognitive architecture may underlie human navigational behavior.
Poirier, Frédéric J A M; Faubert, Jocelyn
2012-06-22
Facial expressions are important for human communications. Face perception studies often measure the impact of major degradation (e.g., noise, inversion, short presentations, masking, alterations) on natural expression recognition performance. Here, we introduce a novel face perception technique using rich and undegraded stimuli. Participants modified faces to create optimal representations of given expressions. Using sliders, participants adjusted 53 face components (including 37 dynamic) including head, eye, eyebrows, mouth, and nose shape and position. Data was collected from six participants and 10 conditions (six emotions + pain + gender + neutral). Some expressions had unique features (e.g., frown for anger, upward-curved mouth for happiness), whereas others had shared features (e.g., open eyes and mouth for surprise and fear). Happiness was different from other emotions. Surprise was different from other emotions except fear. Weighted sum morphing provides acceptable stimuli for gender-neutral and dynamic stimuli. Many features were correlated, including (1) head size with internal feature sizes as related to gender, (2) internal feature scaling, and (3) eyebrow height and eye openness as related to surprise and fear. These findings demonstrate the method's validity for measuring the optimal facial expressions, which we argue is a more direct measure of their internal representations.
NASA Astrophysics Data System (ADS)
Scott, Michael 'Adrir'
2012-12-01
Massively multiplayer online role-playing games (MMORPGs) produce dynamic socio-ludic worlds that nurture both culture and gameplay to shape experiences. Despite the persistent nature of these games, however, the virtual spaces that anchor these worlds may not always be able to exist in perpetuity. Encouraging a community to migrate from one space to another is a challenge now facing some game developers. This paper examines the case of Guild Wars® and its "Hall of Monuments", a feature that bridges the accomplishments of players from the current game to the forthcoming sequel. Two factor analyses describe the perspectives of 105 and 187 self-selected participants. The results reveal four factors affecting attitudes towards the feature, but they do not strongly correlate with existing motivational frameworks, and significant differences were found between different cultures within the game. This informs a discussion about the implications and facilitation of such transitions, investigating themes of capital, value perception and assumptive worlds. It is concluded that the way subcultures produce meaning needs to be considered when attempting to preserve the socio-cultural landscape.
Anderson, Jordan M.; Kier, Brandon; Jurban, Brice; Byrne, Aimee; Shu, Irene; Eidenschink, Lisa A.; Shcherbakov, Alexander A.; Hudson, Mike; Fesinmeyer, R. M.; Andersen, Niels H.
2017-01-01
We have extended our studies of Trp/Trp to other Aryl/Aryl through-space interactions that stabilize hairpins and other small polypeptide folds. Herein we detail the NMR and CD spectroscopic features of these types of interactions. NMR data remains the best diagnostic for characterizing the common T-shape orientation. These interactions, designated edge-to-face (EtF or FtE), produce large ring current shifts at the edge aryl ring hydrogens and, in most cases, large exciton couplets in the far UV circular dichroic (CD) spectrum. The preference for the face aryl in FtE clusters is W≫Y≥F (there are some exceptions in the Y/F order); this sequence corresponds to the order of fold stability enhancement and always predicts the amplitude of the lower energy feature of the exciton couplet in the CD spectrum. The CD spectra for FtE W/W, W/Y, Y/W, and Y/Y pairs all include an intense feature at 225–232 nm. An additional couplet feature, seen for W/Y, W/F, Y/Y and F/Y clusters, is a negative feature at 197–200 nm. Tyr/Tyr (as well as F/Y and F/F) interactions produce much smaller exciton couplet amplitudes. The Trp-cage fold was employed to search for the CD effects of other Trp/Trp and Trp/Tyr cluster geometries: several were identified. In this account, we provide additional examples of the application of cross-strand aryl/aryl clusters for the design of stable β-sheet models and a scale of fold stability increments associated with all possible FtE Ar/Ar clusters in several structural contexts. PMID:26850220
NASA Astrophysics Data System (ADS)
Cui, Chen; Asari, Vijayan K.
2014-03-01
Biometric features such as fingerprints, iris patterns, and face features help to identify people and restrict access to secure areas by performing advanced pattern analysis and matching. Face recognition is one of the most promising biometric methodologies for human identification in a non-cooperative security environment. However, the recognition results obtained by face recognition systems are affected by several variations that may happen to the patterns in an unrestricted environment. As a result, several algorithms have been developed for extracting different facial features for face recognition. Due to the various possible challenges of data captured at different lighting conditions, viewing angles, facial expressions, and partial occlusions in natural environmental conditions, automatic facial recognition still remains a difficult issue that needs to be resolved. In this paper, we propose a novel approach to tackling some of these issues by analyzing the local textural descriptions for facial feature representation. The textural information is extracted by an enhanced local binary pattern (ELBP) description of all the local regions of the face. The relationship of each pixel with respect to its neighborhood is extracted and employed to calculate the new representation. ELBP reconstructs a much better textural feature extraction vector from an original gray level image in different lighting conditions. The dimensionality of the texture image is reduced by principal component analysis performed on each local face region. Each low dimensional vector representing a local region is now weighted based on the significance of the sub-region. The weight of each sub-region is determined by employing the local variance estimate of the respective region, which represents the significance of the region. The final facial textural feature vector is obtained by concatenating the reduced dimensional weight sets of all the modules (sub-regions) of the face image.
Experiments conducted on various popular face databases show promising performance of the proposed algorithm in varying lighting, expression, and partial occlusion conditions. Four databases were used for testing the performance of the proposed system: Yale Face database, Extended Yale Face database B, Japanese Female Facial Expression database, and CMU AMP Facial Expression database. The experimental results in all four databases show the effectiveness of the proposed system. Also, the computation cost is lower because of the simplified calculation steps. Research work is progressing to investigate the effectiveness of the proposed face recognition method on pose-varying conditions as well. It is envisaged that a multi-lane approach of trained frameworks at different pose bins and an appropriate voting strategy would lead to a good recognition rate in such situations.
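The modular pipeline above can be sketched as follows. This is a simplified stand-in: plain 8-neighbour LBP replaces the paper's enhanced ELBP descriptor, per-region histograms replace per-region PCA (which requires a training set), and the grid size and bin count are arbitrary; only the variance-based region weighting follows the description directly:

```python
import numpy as np

def lbp_image(img):
    # Basic 8-neighbour local binary pattern: each pixel's code encodes
    # which neighbours are at least as bright as the centre pixel.
    padded = np.pad(img, 1, mode='edge')
    center = img
    codes = np.zeros_like(img, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = padded[1 + dy:1 + dy + img.shape[0],
                       1 + dx:1 + dx + img.shape[1]]
        codes |= ((neigh >= center).astype(np.uint8) << bit)
    return codes

def modular_features(img, grid=(4, 4), bins=16):
    # Split the texture image into sub-regions, summarize each region,
    # and weight it by its local variance as a significance estimate.
    codes = lbp_image(img)
    h, w = codes.shape
    gh, gw = h // grid[0], w // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            region = codes[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw]
            hist, _ = np.histogram(region, bins=bins, range=(0, 256), density=True)
            feats.append(region.var() * hist)  # weight by local variance
    return np.concatenate(feats)
```

The final feature vector is the concatenation of all weighted region summaries, matching the "concatenate the weighted sub-region sets" step described in the abstract.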
Numerical Simulations of Silverpit Crater Collapse
NASA Technical Reports Server (NTRS)
Collins, G. S.; Ivanov, B. A.; Turtle, E. P.; Melosh, H. J.
2003-01-01
The Silverpit crater is a recently discovered, 60-65 Myr old complex crater, which lies buried beneath the North Sea, about 150 km east of Britain. High-resolution images of Silverpit's subsurface structure, provided by three-dimensional seismic reflection data, reveal an inner-crater morphology similar to that expected for a 5-8 km diameter terrestrial crater. The crater walls show evidence of terrace-style slumping and there is a distinct central uplift, which may have produced a central peak in the pristine crater morphology. However, Silverpit is not a typical 5-km diameter terrestrial crater, because it exhibits multiple, concentric rings outside the main cavity. External concentric rings are normally associated with much larger impact structures, for example Chicxulub on Earth, or Orientale on the Moon. Furthermore, external rings associated with large impacts on the terrestrial planets and moons are widely-spaced, predominantly inwardly-facing, asymmetric scarps. However, the seismic data show that the external rings at Silverpit represent closely-spaced, concentric fault-bound graben, with both inwardly and outwardly facing fault-scarps. This type of multi-ring structure is directly analogous to the Valhalla-type multi-ring basins found on the icy satellites. Thus, the presence and style of the multiple rings at Silverpit is surprising given both the size of the crater and its planetary setting. A further curiosity of the Silverpit structure is that the external concentric rings appear to be extensional features on the West side of the crater and compressional features on the East side. The crater also lies in a local depression, thought to be created by post-impact movement of a salt layer buried beneath the crater.
Feature level fusion of hand and face biometrics
NASA Astrophysics Data System (ADS)
Ross, Arun A.; Govindarajan, Rohin
2005-03-01
Multibiometric systems utilize the evidence presented by multiple biometric sources (e.g., face and fingerprint, multiple fingers of a user, multiple matchers, etc.) in order to determine or verify the identity of an individual. Information from multiple sources can be consolidated in several distinct levels, including the feature extraction level, match score level and decision level. While fusion at the match score and decision levels have been extensively studied in the literature, fusion at the feature level is a relatively understudied problem. In this paper we discuss fusion at the feature level in 3 different scenarios: (i) fusion of PCA and LDA coefficients of face; (ii) fusion of LDA coefficients corresponding to the R,G,B channels of a face image; (iii) fusion of face and hand modalities. Preliminary results are encouraging and help in highlighting the pros and cons of performing fusion at this level. The primary motivation of this work is to demonstrate the viability of such a fusion and to underscore the importance of pursuing further research in this direction.
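Scenario (i), feature-level fusion of coefficient sets, can be sketched as follows. The z-score normalization before concatenation and the dimensions used are my assumptions for illustration, not necessarily the paper's exact choices:

```python
import numpy as np

def pca_basis(X, k):
    # Top-k principal components of the training matrix X (rows = faces).
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def fuse(feat_a, feat_b):
    # Feature-level fusion: normalize each coefficient set so neither
    # dominates the distance metric, then concatenate.
    norm = lambda v: (v - v.mean()) / (v.std() + 1e-12)
    return np.concatenate([norm(feat_a), norm(feat_b)])

rng = np.random.default_rng(3)
faces = rng.normal(size=(40, 64))      # 40 training faces, 64-dim (toy data)
mu, V = pca_basis(faces, k=10)
probe = rng.normal(size=64)
pca_coeffs = (probe - mu) @ V.T        # 10 PCA coefficients for the probe
other_coeffs = rng.normal(size=15)     # stand-in for LDA coefficients
fused = fuse(pca_coeffs, other_coeffs)
```

Matching then proceeds on the fused vector, e.g., by nearest-neighbour distance to enrolled templates.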
Face detection in color images using skin color, Laplacian of Gaussian, and Euler number
NASA Astrophysics Data System (ADS)
Saligrama Sundara Raman, Shylaja; Kannanedhi Narasimha Sastry, Balasubramanya Murthy; Subramanyam, Natarajan; Senkutuvan, Ramya; Srikanth, Radhika; John, Nikita; Rao, Prateek
2010-02-01
In this paper, a feature-based approach to face detection has been proposed using an ensemble of algorithms. The method uses chrominance values and edge features to classify the image into skin and nonskin regions. The edge detector used for this purpose is the Laplacian of Gaussian (LoG), which is found to be appropriate for images containing multiple faces and noise. Eight-connectivity analysis of these regions segregates them as probable face or nonface. The procedure is made more robust by identifying local features within these skin regions, which include the number of holes, the percentage of skin, and the golden ratio. The method proposed has been tested on color face images of various races obtained from different sources, and its performance is found to be encouraging, as the color segmentation cleans up almost all the complex facial features. The result obtained has a calculated accuracy of 86.5% on a test set of 230 images.
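The two front-end cues can be sketched as below. The YCbCr chrominance thresholds are common rule-of-thumb values rather than the paper's exact ranges, a plain 3x3 Laplacian stands in for a true Laplacian of Gaussian, and the connectivity, hole-count, and golden-ratio checks are omitted:

```python
import numpy as np

def skin_mask(rgb):
    # Classify pixels as skin by thresholding Cb/Cr chrominance values
    # (illustrative thresholds; the paper's ranges may differ).
    r, g, b = [rgb[..., i].astype(float) for i in range(3)]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

def log_edges(gray, thresh=0.1):
    # 3x3 Laplacian as a cheap stand-in for the Laplacian of Gaussian.
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)
    p = np.pad(gray, 1, mode='edge')
    out = sum(k[i, j] * p[i:i + gray.shape[0], j:j + gray.shape[1]]
              for i in range(3) for j in range(3))
    return np.abs(out) > thresh
```

A full pipeline would then run connected-component analysis on the skin mask and apply the local-feature checks to each candidate region.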
ERIC Educational Resources Information Center
Short, Lindsey A.; Hatry, Alexandra J.; Mondloch, Catherine J.
2011-01-01
The current research investigated the organization of children's face space by examining whether 5- and 8-year-olds show race-contingent aftereffects. Participants read a storybook in which Caucasian and Chinese children's faces were distorted in opposite directions. Before and after adaptation, participants judged the normality/attractiveness of…
Spaced-retrieval effects on name-face recognition in older adults with probable Alzheimer's disease.
Hawley, Karri S; Cherry, Katie E
2004-03-01
Six older adults with probable Alzheimer's disease (AD) were trained to recall a name-face association using the spaced-retrieval method. We administered six training sessions over a 2-week period. On each trial, participants selected the target photograph from among eight photographs and stated the target's name, at increasingly longer retention intervals. Results yielded a positive effect of spaced-retrieval training for name-face recognition. All participants were able to select the target photograph and state the target's name for longer periods of time within and across training sessions. A live-person transfer task was administered to determine whether the name-face association, trained by spaced-retrieval, would transfer to a live person. Half of the participants were able to call the live person by the correct name. These data provide initial evidence that spaced-retrieval training can aid older adults with probable AD in recall of a name-face association and in transfer of that association to an actual person.
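The expanding-interval logic of spaced retrieval can be sketched as a simple schedule generator. The starting interval, doubling factor, and fallback rule below are illustrative; the study does not report these exact parameters:

```python
def spaced_retrieval_schedule(recall_results, start=30, factor=2, floor=30):
    # Expanding-interval schedule (in seconds): lengthen the retention
    # interval after each successful recall, fall back after a failure.
    interval, schedule = start, []
    for success in recall_results:
        schedule.append(interval)
        if success:
            interval = interval * factor
        else:
            interval = max(interval // factor, floor)
    return schedule
```

For example, two successes, a failure, then a success would be tested at 30 s, 60 s, 120 s, and 60 s under these assumed parameters.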
Jansari, Ashok; Miller, Scott; Pearce, Laura; Cobb, Stephanie; Sagiv, Noam; Williams, Adrian L.; Tree, Jeremy J.; Hanley, J. Richard
2015-01-01
We report the case of an individual with acquired prosopagnosia who experiences extreme difficulties in recognizing familiar faces in everyday life despite excellent object recognition skills. Formal testing indicates that he is also severely impaired at remembering pre-experimentally unfamiliar faces and that he takes an extremely long time to identify famous faces and to match unfamiliar faces. Nevertheless, he performs as accurately and quickly as controls at identifying inverted familiar and unfamiliar faces and can recognize famous faces from their external features. He also performs as accurately as controls at recognizing famous faces when fracturing conceals the configural information in the face. He shows evidence of impaired global processing but normal local processing of Navon figures. This case appears to reflect the clearest example yet of an acquired prosopagnosic patient whose familiar face recognition deficit is caused by a severe configural processing deficit in the absence of any problems in featural processing. These preserved featural skills together with apparently intact visual imagery for faces allow him to identify a surprisingly large number of famous faces when unlimited time is available. The theoretical implications of this pattern of performance for understanding the nature of acquired prosopagnosia are discussed. PMID:26236212
Heterogeneous Face Attribute Estimation: A Deep Multi-Task Learning Approach.
Han, Hu; Jain, Anil K.; Shan, Shiguang; Chen, Xilin
2017-08-10
Face attribute estimation has many potential applications in video surveillance, face retrieval, and social media. While a number of methods have been proposed for face attribute estimation, most of them did not explicitly consider the attribute correlation and heterogeneity (e.g., ordinal vs. nominal and holistic vs. local) during feature representation learning. In this paper, we present a Deep Multi-Task Learning (DMTL) approach to jointly estimate multiple heterogeneous attributes from a single face image. In DMTL, we tackle attribute correlation and heterogeneity with convolutional neural networks (CNNs) consisting of shared feature learning for all the attributes, and category-specific feature learning for heterogeneous attributes. We also introduce an unconstrained face database (LFW+), an extension of the public-domain LFW, with heterogeneous demographic attributes (age, gender, and race) obtained via crowdsourcing. Experimental results on benchmarks with multiple face attributes (MORPH II, LFW+, CelebA, LFWA, and FotW) show that the proposed approach has superior performance compared to the state of the art. Finally, evaluations on a public-domain face database (LAP) with a single attribute show that the proposed approach has excellent generalization ability.
Face aging effect simulation model based on multilayer representation and shearlet transform
NASA Astrophysics Data System (ADS)
Li, Yuancheng; Li, Yan
2017-09-01
In order to extract detailed facial features, we build a face aging effect simulation model based on multilayer representation and shearlet transform. The face is divided into three layers: the global layer of the face, the local features layer, and the texture layer, and an aging model is established separately for each. First, the training samples are classified according to different age groups, and we use an active appearance model (AAM) at the global level to obtain facial features. The regression equations of shape and texture with age are obtained by fitting support vector machine regression based on the radial basis function. We use the AAM to simulate the aging of facial organs. Then, for the texture detail layer, we acquire the significant high-frequency characteristic components of the face by using the multiscale shearlet transform. Finally, we obtain the final simulated aging images of the face through a fusion algorithm. Experiments are carried out on the FG-NET dataset, and the results show that the simulated face images differ little from the original images, demonstrating a good face aging simulation effect.
Detection of Emotional Faces: Salient Physical Features Guide Effective Visual Search
ERIC Educational Resources Information Center
Calvo, Manuel G.; Nummenmaa, Lauri
2008-01-01
In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent,…
FEATURE 2, OPEN SIDE OF SHELTER, VIEW FACING NORTHEAST. - Naval Air Station Barbers Point, Anti-Aircraft Battery Complex-Shelter, East of Coral Sea Road, northwest of Hamilton Road, Ewa, Honolulu County, HI
FEATURE 4, ARMCO HUT, INTERIOR, VIEW FACING NORTHWEST. - Naval Air Station Barbers Point, Anti-Aircraft Battery Complex-ARMCO Hut, East of Coral Sea Road, northwest of Hamilton Road, Ewa, Honolulu County, HI
Good match exploration for infrared face recognition
NASA Astrophysics Data System (ADS)
Yang, Changcai; Zhou, Huabing; Sun, Sheng; Liu, Renfeng; Zhao, Ji; Ma, Jiayi
2014-11-01
Establishing good feature correspondence is a critical prerequisite and a challenging task for infrared (IR) face recognition. Recent studies revealed that the scale invariant feature transform (SIFT) descriptor outperforms other local descriptors for feature matching. However, it only uses local appearance information for matching, and hence inevitably leads to a number of false matches. To address this issue, this paper explores global structure information (GSI) among SIFT correspondences, and proposes a new method SIFT-GSI for good match exploration. This is achieved by fitting a smooth mapping function for the underlying correct matches, which involves softassign and deterministic annealing. Quantitative comparisons with state-of-the-art methods on a publicly available IR human face database demonstrate that SIFT-GSI significantly outperforms other methods for feature matching, and hence it is able to improve the reliability of IR face recognition systems.
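The match-filtering idea can be sketched by replacing the paper's smooth non-rigid mapping (fit with softassign and deterministic annealing) with a simple least-squares affine fit: correspondences with large residuals under the fitted global mapping are rejected as false SIFT matches. The affine model and residual threshold here are simplifying assumptions:

```python
import numpy as np

def filter_matches(src, dst, thresh=3.0):
    # src, dst: (n, 2) arrays of matched keypoint locations.
    # Fit a global affine mapping dst ~ A(src) by least squares, then
    # keep only matches whose residual under the mapping is small.
    A = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coordinates
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)   # (3, 2) affine parameters
    residuals = np.linalg.norm(A @ M - dst, axis=1)
    return residuals < thresh
```

Called on two arrays of putative SIFT keypoint matches, it returns a boolean inlier mask; the surviving matches carry the global structure information that local appearance alone cannot provide.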
Prism adaptation does not alter configural processing of faces
Bultitude, Janet H.; Downing, Paul E.; Rafal, Robert D.
2013-01-01
Patients with hemispatial neglect (‘neglect’) following a brain lesion show difficulty responding or orienting to objects and events on the left side of space. Substantial evidence supports the use of a sensorimotor training technique called prism adaptation as a treatment for neglect. Reaching for visual targets viewed through prismatic lenses that induce a rightward shift in the visual image results in a leftward recalibration of reaching movements that is accompanied by a reduction of symptoms in patients with neglect. The understanding of prism adaptation has also been advanced through studies of healthy participants, in whom adaptation to leftward prismatic shifts results in temporary neglect-like performance. Interestingly, prism adaptation can also alter aspects of non-lateralised spatial attention. We previously demonstrated that prism adaptation alters the extent to which neglect patients and healthy participants process local features versus global configurations of visual stimuli. Since deficits in non-lateralised spatial attention are thought to contribute to the severity of neglect symptoms, it is possible that the effect of prism adaptation on these deficits contributes to its efficacy. This study examines the pervasiveness of the effects of prism adaptation on perception by examining the effect of prism adaptation on configural face processing using a composite face task. The composite face task is a persuasive demonstration of the automatic global-level processing of faces: the top and bottom halves of two familiar faces form a seemingly new, unknown face when viewed together. Participants identified the top or bottom halves of composite faces before and after prism adaptation. Sensorimotor adaptation was confirmed by a significant pointing aftereffect; however, there was no significant change in the extent to which the irrelevant face half interfered with processing.
The results support the proposal that the therapeutic effects of prism adaptation are limited to dorsal stream processing. PMID:25110574
Recognizing Action Units for Facial Expression Analysis
Tian, Ying-li; Kanade, Takeo; Cohn, Jeffrey F.
2010-01-01
Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system recognizes fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper face AUs and 10 lower face AUs) are recognized whether they occur alone or in combinations. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower face AUs. The generalizability of the system has been tested by using independent image databases collected and FACS-coded for ground-truth by different research teams. PMID:25210210
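The pipeline above extracts parametric descriptions of facial features and maps them to FACS action units. The following is a hypothetical toy sketch of that final "parameters in, AUs out" step; all parameter names and thresholds are invented here and are not from the AFA system, which uses trained classifiers over multistate component models:

```python
# Hypothetical toy sketch (invented parameter names and thresholds), not
# the Tian-Kanade-Cohn AFA system: map a few tracked feature measurements
# to FACS action units with simple threshold rules.

def recognize_aus(params):
    """params: dict of normalized feature measurements (names invented)."""
    aus = []
    if params.get("brow_raise", 0) > 0.3:
        aus.append("AU1+AU2")      # inner + outer brow raiser
    if params.get("lip_corner_pull", 0) > 0.3:
        aus.append("AU12")         # lip corner puller (smile)
    if params.get("lip_press", 0) > 0.3:
        aus.append("AU24")         # lip pressor
    return aus or ["neutral"]

print(recognize_aus({"lip_corner_pull": 0.7}))  # → ['AU12']
```

The real system recognizes AU combinations as well as single AUs, which simple independent thresholds like these cannot capture.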
Task-irrelevant emotion facilitates face discrimination learning.
Lorenzino, Martina; Caudek, Corrado
2015-03-01
We understand poorly how the ability to discriminate faces from one another is shaped by visual experience. The purpose of the present study is to determine whether face discrimination learning can be facilitated by facial emotions. To answer this question, we used a task-irrelevant perceptual learning paradigm because it closely mimics the learning processes that, in daily life, occur without a conscious intention to learn and without an attentional focus on specific facial features. We measured face discrimination thresholds before and after training. During the training phase (4 days), participants performed a contrast discrimination task on face images. They were not informed that we introduced (task-irrelevant) subtle variations in the face images from trial to trial. For the Identity group, the task-irrelevant features were variations along a morphing continuum of facial identity. For the Emotion group, the task-irrelevant features were variations along an emotional expression morphing continuum. The Control group did not undergo contrast discrimination learning and only performed the pre-training and post-training tests, with the same temporal gap between them as the other two groups. Results indicate that face discrimination improved, but only for the Emotion group. Participants in the Emotion group, moreover, showed face discrimination improvements also for stimulus variations along the facial identity dimension, even if these (task-irrelevant) stimulus features had not been presented during training. The present results highlight the importance of emotions for face discrimination learning. Copyright © 2015 Elsevier Ltd. All rights reserved.
Understanding Digital Learning and Its Variable Effects
NASA Astrophysics Data System (ADS)
Means, B.
2016-12-01
An increasing proportion of undergraduate courses use an online or blended learning format. This trend signals major changes in the kind of instruction students receive in their STEM courses, yet evidence about the effectiveness of these new approaches is sparse. Existing syntheses and meta-analyses summarize outcomes from experimental or quasi-experimental studies of online and blended courses and document how few studies incorporate proper controls for differences in student characteristics, instructor behaviors, and other course conditions. The evidence that is available suggests that on average blended courses are equal to or better than traditional face-to-face courses and that online courses are equivalent in terms of learning outcomes. But these averages conceal a tremendous underlying variability. Results vary markedly from course to course, even when the same technology is used in both. Some research suggests that online instruction puts lower-achieving students at a disadvantage. It is clear that introducing digital learning per se is no guarantee that student engagement and learning will be enhanced. Getting more consistently positive impacts out of learning technologies is going to require systematic characterization of the features of learning technologies and associated instructional practices as well as attention to context and student characteristics. This presentation will present a framework for characterizing essential features of digital learning resources, implementation practices, and conditions. It will also summarize the research evidence with respect to the learning impacts of specific technology features including spaced practice, immediate feedback, mastery learning based pacing, visualizations and simulations, gaming features, prompts for explanations and reflection, and tools for online collaboration.
Research on facial expression simulation based on depth image
NASA Astrophysics Data System (ADS)
Ding, Sha-sha; Duan, Jin; Zhao, Yi-wu; Xiao, Bo; Wang, Hao
2017-11-01
Face expression simulation is now widely used in film and television special effects, human-computer interaction, and many other fields. In this work, facial expressions are captured with a Kinect depth camera. An AAM algorithm based on statistical information is employed to detect and track faces, and a 2D regression algorithm is applied to align the feature points; the facial feature points are detected automatically, while the feature points of the 3D cartoon model are marked manually. The aligned feature points are mapped using keyframe techniques. To improve the animation, non-feature points are interpolated based on empirical models, with the mapping and interpolation constrained by Bézier curves. The feature points on the cartoon face model can thus be driven as the facial expression varies, achieving real-time cartoon facial expression simulation. Experimental results show that the proposed method accurately simulates facial expressions, and comparison with the previous method shows that implementation efficiency is greatly improved.
Van Rheenen, Tamsyn E; Joshua, Nicole; Castle, David J; Rossell, Susan L
2017-03-01
Emotion recognition impairments have been demonstrated in schizophrenia (Sz), but are less consistent and lesser in magnitude in bipolar disorder (BD). This may be related to the extent to which different face processing strategies are engaged during emotion recognition in each of these disorders. We recently showed that Sz patients had impairments in the use of both featural and configural face processing strategies, whereas BD patients were impaired only in the use of the latter. Here we examine the influence that these impairments have on facial emotion recognition in these cohorts. Twenty-eight individuals with Sz, 28 individuals with BD, and 28 healthy controls completed a facial emotion labeling task with two conditions designed to separate the use of featural and configural face processing strategies; part-based and whole-face emotion recognition. Sz patients performed worse than controls on both conditions, and worse than BD patients on the whole-face condition. BD patients performed worse than controls on the whole-face condition only. Configural processing deficits appear to influence the recognition of facial emotions in BD, whereas both configural and featural processing abnormalities impair emotion recognition in Sz. This may explain discrepancies in the profiles of emotion recognition between the disorders. (JINS, 2017, 23, 287-291).
Main interior space facing south toward the ocean. Original scissor trusses and deck roof are visible at the top. Octagonal window with large picture windows face the ocean. - San Luis Yacht Club, Avila Pier, South of Front Street, Avila Beach, San Luis Obispo County, CA
What's in a "face file"? Feature binding with facial identity, emotion, and gaze direction.
Fitousi, Daniel
2017-07-01
A series of four experiments investigated the binding of facial (i.e., facial identity, emotion, and gaze direction) and non-facial (i.e., spatial location and response location) attributes. Evidence for the creation and retrieval of temporary memory face structures across perception and action has been adduced. These episodic structures-dubbed herein "face files"-consisted of both visuo-visuo and visuo-motor bindings. Feature binding was indicated by partial-repetition costs. That is, repeating a combination of facial features, or altering them altogether, led to faster responses than repeating or alternating only one of the features. Taken together, the results indicate that: (a) "face files" affect both action and perception mechanisms, (b) binding can take place with facial dimensions and is not restricted to low-level features (Hommel, Visual Cognition 5:183-216, 1998), and (c) the binding of facial and non-facial attributes is facilitated if the dimensions share common spatial or motor codes. The theoretical contributions of these results to "person construal" theories (Freeman, & Ambady, Psychological Science, 20(10), 1183-1188, 2011), as well as to face recognition models (Haxby, Hoffman, & Gobbini, Biological Psychiatry, 51(1), 59-67, 2000) are discussed.
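The partial-repetition cost mentioned above is a simple reaction-time contrast. A minimal sketch of its computation (all reaction times below are invented for illustration):

```python
# Sketch of computing a partial-repetition cost from mean reaction times:
# responses are fastest when features fully repeat or fully change, and
# slower when only one feature of a bound pair repeats. Numbers invented.

def partial_repetition_cost(rt_full_repeat, rt_full_change, rt_partial):
    """Cost (ms) = RT on partial repetitions minus the mean RT on
    complete repetitions and complete alternations."""
    return rt_partial - (rt_full_repeat + rt_full_change) / 2.0

print(partial_repetition_cost(520.0, 530.0, 585.0))  # → 60.0
```

A positive cost is taken as evidence that the two features were bound into one episodic structure.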
Infrared and visible fusion face recognition based on NSCT domain
NASA Astrophysics Data System (ADS)
Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan
2018-01-01
Visible face recognition systems, being vulnerable to illumination, expression, and pose, can not achieve robust performance in unconstrained situations. Meanwhile, near infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but its main challenges are low resolution and signal noise ratio (SNR). Therefore, near infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In this paper, a novel fusion algorithm in non-subsampled contourlet transform (NSCT) domain is proposed for infrared and visible face fusion recognition. Firstly, NSCT is used respectively to process the infrared and visible face images, which exploits the image information at multiple scales, orientations, and frequency bands. Then, to exploit the effective discriminant feature and balance the power of high-low frequency band of NSCT coefficients, the local Gabor binary pattern (LGBP) and Local Binary Pattern (LBP) are applied respectively in different frequency parts to obtain the robust representation of infrared and visible face images. Finally, score-level fusion is used to combine all the features for final classification. The visible and near infrared face recognition is tested on the HITSZ Lab2 visible and near infrared face database. Experimental results show that the proposed method extracts the complementary features of near-infrared and visible-light images and improves the robustness of unconstrained face recognition.
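The score-level fusion named as the final step above can be sketched generically: normalize each modality's match scores, then combine them with a weighted sum. This is a minimal illustration, not the paper's exact scheme; the weights and scores below are invented:

```python
# Minimal sketch of score-level fusion: min-max normalize each modality's
# match scores, then take a weighted sum. Weights and scores invented.

def minmax(scores):
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(ir_scores, vis_scores, w_ir=0.5):
    ir_n, vis_n = minmax(ir_scores), minmax(vis_scores)
    return [w_ir * a + (1 - w_ir) * b for a, b in zip(ir_n, vis_n)]

# Match scores of one probe against three gallery identities:
ir = [0.2, 0.9, 0.4]
vis = [10.0, 80.0, 30.0]
fused = fuse(ir, vis)
print(fused.index(max(fused)))  # → 1  (identity 1 wins in both modalities)
```

Min-max normalization is needed because the two modalities produce scores on different scales, as the raw values above illustrate.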
Royer, Jessica; Blais, Caroline; Barnabé-Lortie, Vincent; Carré, Mélissa; Leclerc, Josiane; Fiset, Daniel
2016-06-01
Faces are encountered in highly diverse angles in real-world settings. Despite this considerable diversity, most individuals are able to easily recognize familiar faces. The vast majority of studies in the field of face recognition have nonetheless focused almost exclusively on frontal views of faces. Indeed, a number of authors have investigated the diagnostic facial features for the recognition of frontal views of faces previously encoded in this same view. However, the nature of the information useful for identity matching when the encoded face and test face differ in viewing angle remains mostly unexplored. The present study addresses this issue using individual differences and bubbles, a method that pinpoints the facial features effectively used in a visual categorization task. Our results indicate that the use of features located in the center of the face, the lower left portion of the nose area and the center of the mouth, are significantly associated with individual efficiency to generalize a face's identity across different viewpoints. However, as faces become more familiar, the reliance on this area decreases, while the diagnosticity of the eye region increases. This suggests that a certain distinction can be made between the visual mechanisms subtending viewpoint invariance and face recognition in the case of unfamiliar face identification. Our results further support the idea that the eye area may only come into play when the face stimulus is particularly familiar to the observer. Copyright © 2016 Elsevier Ltd. All rights reserved.
Rigid Facial Motion Influences Featural, But Not Holistic, Face Processing
Xiao, Naiqi; Quinn, Paul C.; Ge, Liezhong; Lee, Kang
2012-01-01
We report three experiments in which we investigated the effect of rigid facial motion on face processing. Specifically, we used the face composite effect to examine whether rigid facial motion influences primarily featural or holistic processing of faces. In Experiments 1, 2, and 3, participants were first familiarized with dynamic displays in which a target face turned from one side to another; then at test, participants judged whether the top half of a composite face (the top half of the target face aligned or misaligned with the bottom half of a foil face) belonged to the target face. We compared performance in the dynamic condition to various static control conditions in Experiments 1, 2, and 3, which differed from each other in terms of the display order of the multiple static images or the inter-stimulus interval (ISI) between the images. We found that the size of the face composite effect in the dynamic condition was significantly smaller than that in the static conditions. In other words, the dynamic face display influenced participants to process the target faces in a part-based manner, and consequently their recognition of the upper portion of the composite face at test was less subject to interference from the aligned lower part of the foil face. The findings from the present experiments provide the strongest evidence to date that rigid facial motion mainly influences featural, but not holistic, face processing. PMID:22342561
Davies, Oliver
2016-01-01
New data is emerging from evolutionary anthropology and the neuroscience of social cognition on our species-specific hyper-cooperation (HC). This paper attempts an integration of third-person archaeological and second-person, neuroscientific perspectives on the structure of HC, through a post-Ricoeurian development in hermeneutical phenomenology. We argue for the relatively late evolution of advanced linguistic consciousness (ALC) (Hiscock in Biological Theory 9:27-41, 2014), as a reflexive system based on the 'in-between' or 'cognitive system' as reported by Vogeley et al. (in: Interdisziplinäre anthropologie, Heidelberg, Springer, 2014) of face-to-face social cognition, as well as tool use. The possibility of a positive or negative tension between the more recent ALC and the more ancient, pre-thematic, self-organizing 'in-between' frames an 'internal' niche construction. This indexes the internal structure of HC as 'convergence', where complex, engaged, social reasoning in ALC mirrors the cognitive structure of the pre-thematic 'in-between', extending the bio-energy of our social cognition, through reflexive amplification, in the production of 'social place' as 'humanized space'. If individual word/phrase acquisition, in contextual actuality, is the distinctive feature of human language (Hurford in European Reviews 12:551-565, 2004), then human language is a hyperbolic, species-wide training in particularized co-location, developing consciousness of a shared world. The humanization of space and production of HC, through co-location, requires the 'disarming' of language as a medium of control, and a foregrounding of the materiality of the sign. The production of 'hyper-place' as solidarity beyond the face-to-face, typical of world religions, becomes possible where internal niche construction as convergence with the 'in-between' (world in us) combines with religious cosmologies reflecting an external 'cosmic' niche construction (world outside us).
FEATURE 1, SMALL GUN POSITION, VIEW FACING NORTH. - Naval Air Station Barbers Point, Anti-Aircraft Battery Complex-Small Gun Position, East of Coral Sea Road, northwest of Hamilton Road, Ewa, Honolulu County, HI
FEATURE 4, ARMCO HUT, ENTRANCE FACADE, VIEW FACING EAST-SOUTHEAST. - Naval Air Station Barbers Point, Anti-Aircraft Battery Complex-ARMCO Hut, East of Coral Sea Road, northwest of Hamilton Road, Ewa, Honolulu County, HI
FEATURE 2, SHELTER, NORTH-NORTHEAST SIDE, VIEW FACING SOUTH-SOUTHWEST. - Naval Air Station Barbers Point, Anti-Aircraft Battery Complex-Shelter, East of Coral Sea Road, northwest of Hamilton Road, Ewa, Honolulu County, HI
FEATURE C, TYPE 1 PILLBOX, EAST SIDE, VIEW FACING WEST-NORTHWEST. - Naval Air Station Barbers Point, Shore Pillbox Complex-Type 1 Pillbox, Along shoreline, seaward of Coral Sea Road, Ewa, Honolulu County, HI
FEATURE D, TYPE 1 PILLBOX, SOUTH SIDE, VIEW FACING NORTH. - Naval Air Station Barbers Point, Shore Pillbox Complex-Type 1 Pillbox, Along shoreline, seaward of Coral Sea Road, Ewa, Honolulu County, HI
Face Pareidolia in the Rhesus Monkey.
Taubert, Jessica; Wardle, Susan G; Flessert, Molly; Leopold, David A; Ungerleider, Leslie G
2017-08-21
Face perception in humans and nonhuman primates is rapid and accurate [1-4]. In the human brain, a network of visual-processing regions is specialized for faces [5-7]. Although face processing is a priority of the primate visual system, face detection is not infallible. Face pareidolia is the compelling illusion of perceiving facial features on inanimate objects, such as the illusory face on the surface of the moon. Although face pareidolia is commonly experienced by humans, its presence in other species is unknown. Here we provide evidence for face pareidolia in a species known to possess a complex face-processing system [8-10]: the rhesus monkey (Macaca mulatta). In a visual preference task [11, 12], monkeys looked longer at photographs of objects that elicited face pareidolia in human observers than at photographs of similar objects that did not elicit illusory faces. Examination of eye movements revealed that monkeys fixated the illusory internal facial features in a pattern consistent with how they view photographs of faces [13]. Although the specialized response to faces observed in humans [1, 3, 5-7, 14] is often argued to be continuous across primates [4, 15], it was previously unclear whether face pareidolia arose from a uniquely human capacity. For example, pareidolia could be a product of the human aptitude for perceptual abstraction or result from frequent exposure to cartoons and illustrations that anthropomorphize inanimate objects. Instead, our results indicate that the perception of illusory facial features on inanimate objects is driven by a broadly tuned face-detection mechanism that we share with other species. Published by Elsevier Ltd.
Effects of configural processing on the perceptual spatial resolution for face features.
Namdar, Gal; Avidan, Galia; Ganel, Tzvi
2015-11-01
Configural processing governs human perception across various domains, including face perception. An established marker of configural face perception is the face inversion effect, in which performance is typically better for upright compared to inverted faces. In two experiments, we tested whether configural processing could influence basic visual abilities such as perceptual spatial resolution (i.e., the ability to detect spatial visual changes). Face-related perceptual spatial resolution was assessed by measuring the just noticeable difference (JND) to subtle positional changes between specific features in upright and inverted faces. The results revealed a robust inversion effect for spatial sensitivity to configural-based changes, such as the distance between the mouth and the nose, or the distance between the eyes and the nose. Critically, spatial resolution for face features within the region of the eyes (e.g., the interocular distance between the eyes) was not affected by inversion, suggesting that the eye region operates as a separate 'gestalt' unit which is relatively immune to manipulations that would normally hamper configural processing. Together, these findings suggest that face orientation modulates fundamental psychophysical abilities including spatial resolution. Furthermore, they indicate that classic psychophysical methods can be used as a valid measure of configural face processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
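The JND measure used above is a standard psychophysical quantity. A common (though not necessarily the authors') way to estimate it is to interpolate the offset at which a psychometric curve crosses a fixed criterion; all data below are invented:

```python
# Illustrative sketch (invented data, not the paper's method): estimate a
# just noticeable difference (JND) as the positional offset at which
# detection reaches 75% correct, by linear interpolation between the two
# measured points that straddle the criterion.

def jnd_at(offsets, p_correct, criterion=0.75):
    """offsets: tested displacements; p_correct: proportion correct at each."""
    for (x0, p0), (x1, p1) in zip(zip(offsets, p_correct),
                                  zip(offsets[1:], p_correct[1:])):
        if p0 <= criterion <= p1:
            return x0 + (criterion - p0) * (x1 - x0) / (p1 - p0)
    raise ValueError("criterion not spanned by the data")

offsets = [1, 2, 3, 4]          # feature displacement (arbitrary units)
p = [0.50, 0.60, 0.80, 0.95]    # proportion of correct 'different' judgments
print(jnd_at(offsets, p))       # → 2.75
```

A larger JND for inverted than upright faces at configural changes would constitute the inversion effect the abstract reports.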
Face liveness detection using shearlet-based feature descriptors
NASA Astrophysics Data System (ADS)
Feng, Litong; Po, Lai-Man; Li, Yuming; Yuan, Fang
2016-07-01
Face recognition is a widely used biometric technology due to its convenience but it is vulnerable to spoofing attacks made by nonreal faces such as photographs or videos of valid users. The antispoof problem must be well resolved before widely applying face recognition in our daily life. Face liveness detection is a core technology to make sure that the input face is a live person. However, this is still very challenging using conventional liveness detection approaches of texture analysis and motion detection. The aim of this paper is to propose a feature descriptor and an efficient framework that can be used to effectively deal with the face liveness detection problem. In this framework, new feature descriptors are defined using a multiscale directional transform (shearlet transform). Then, stacked autoencoders and a softmax classifier are concatenated to detect face liveness. We evaluated this approach using the CASIA Face antispoofing database and replay-attack database. The experimental results show that our approach performs better than the state-of-the-art techniques following the provided protocols of these databases, and it is possible to significantly enhance the security of the face recognition biometric system. In addition, the experimental results also demonstrate that this framework can be easily extended to classify different spoofing attacks.
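The classification head described above ends in a softmax over class scores. A minimal sketch of that final live-versus-spoof decision (the logits below are invented; in the paper they come from stacked autoencoders applied to shearlet-domain descriptors):

```python
# Minimal sketch of a softmax decision head for liveness detection.
# Logits are invented; the paper derives them from stacked autoencoders
# over shearlet-transform feature descriptors.
import math

def softmax(logits):
    m = max(logits)                       # subtract max for stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, -1.0])              # [p(live), p(spoof)]
label = "live" if probs[0] > 0.5 else "spoof"
print(label)  # → live
```

Subtracting the maximum logit before exponentiating avoids overflow without changing the resulting probabilities.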
The Morphometrics of “Masculinity” in Human Faces
Mitteroecker, Philipp; Windhager, Sonja; Müller, Gerd B.; Schaefer, Katrin
2015-01-01
In studies of social inference and human mate preference, a wide but inconsistent array of tools for computing facial masculinity has been devised. Several of these approaches implicitly assumed that the individual expression of sexually dimorphic shape features, which we refer to as maleness, resembles facial shape features perceived as masculine. We outline a morphometric strategy for estimating separately the face shape patterns that underlie perceived masculinity and maleness, and for computing individual scores for these shape patterns. We further show how faces with different degrees of masculinity or maleness can be constructed in a geometric morphometric framework. In an application of these methods to a set of human facial photographs, we found that shape features typically perceived as masculine are wide faces with a wide inter-orbital distance, a wide nose, thin lips, and a large and massive lower face. The individual expressions of this combination of shape features—the masculinity shape scores—were the best predictor of rated masculinity among the compared methods (r = 0.5). The shape features perceived as masculine only partly resembled the average face shape difference between males and females (sexual dimorphism). Discriminant functions and Procrustes distances to the female mean shape were poor predictors of perceived masculinity. PMID:25671667
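The individual "maleness" scores discussed above are, in common geometric-morphometric practice, projections of a face's shape coordinates onto a male-female difference axis. A hedged sketch of that computation, with invented two-dimensional coordinates standing in for full Procrustes shape coordinates:

```python
# Hedged sketch (invented coordinates, not the paper's data): score an
# individual's shape by its signed projection onto the normalized
# male-female mean-difference vector, centered at the grand midpoint.

def score(x, female_mean, male_mean):
    """Signed projection of shape x onto the normalized male-female axis."""
    d = [m - f for m, f in zip(male_mean, female_mean)]
    norm = sum(v * v for v in d) ** 0.5
    mid = [(m + f) / 2 for m, f in zip(male_mean, female_mean)]
    return sum((xi - ci) * (di / norm) for xi, ci, di in zip(x, mid, d))

f_mean = [0.0, 0.0]
m_mean = [1.0, 0.0]
print(round(score([0.9, 0.2], f_mean, m_mean), 2))  # → 0.4
```

Note that the paper's key finding is that this maleness axis only partly coincides with the axis predicting perceived masculinity, so the two scores are computed from different difference vectors.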
Generating One Biometric Feature from Another: Faces from Fingerprints
Ozkaya, Necla; Sagiroglu, Seref
2010-01-01
This study presents a new approach based on artificial neural networks for generating one biometric feature (faces) from another (only fingerprints). An automatic and intelligent system was designed and developed to analyze the relationships between fingerprints and faces, and to model and exploit those relationships. The proposed system is the first study that generates all parts of the face, including eyebrows, eyes, nose, mouth, ears, and face border, from only fingerprints. It is also unique and different from similar studies recently presented in the literature, with some superior features. The parameter settings of the system were achieved with the help of the Taguchi experimental design technique. The performance and accuracy of the system have been evaluated with the 10-fold cross validation technique, using qualitative evaluation metrics in addition to the expanded quantitative evaluation metrics. Consequently, the results were presented on the basis of the combination of these objective and subjective metrics, illustrating the qualitative properties of the proposed methods as well as a quantitative evaluation of their performances. Experimental results have shown that one biometric feature can be determined from another. These results have once more indicated that there is a strong relationship between fingerprints and faces. PMID:22399877
Tolerance of geometric distortions in infant's face recognition.
Yamashita, Wakayo; Kanazawa, So; Yamaguchi, Masami K
2014-02-01
The aim of the current study is to reveal the effect of global linear transformations (shearing, horizontal stretching, and vertical stretching) on the recognition of familiar faces (e.g., a mother's face) in 6- to 7-month-old infants. In this experiment, we applied the global linear transformations to both the infants' own mother's face and to a stranger's face, and we tested infants' preference between these faces. We found that only 7-month-old infants maintained preference for their own mother's face during the presentation of vertical stretching, while the preference for the mother's face disappeared during the presentation of shearing or horizontal stretching. These findings suggest that 7-month-old infants might not recognize faces based on calculating the absolute distance between facial features, and that the vertical dimension of facial features might be more related to infants' face recognition rather than the horizontal dimension. Copyright © 2013 Elsevier Inc. All rights reserved.
Mount Everest (Chomolungma, Goddess Mother of the World)
NASA Technical Reports Server (NTRS)
2002-01-01
Mt. Everest is the highest (29,035 feet, 8850 meters) mountain in the world. This detailed look at Mt. Everest and Lhotse is part of a more extensive photograph of the central Himalaya taken in October 1993 that is one of the best views of the mountain captured by astronauts to date. It shows the North and South Faces of Everest in shadow with the Kangshung Face in morning light. Other major peaks in the immediate area are Nuptse and Bei Peak (Changtse). The picture was taken looking slightly obliquely when the spacecraft was north of Everest. Everest holds a powerful fascination for climbers and trekkers from around the world. The paths for typical North and South climbing routes are sketched on this image. Much of the regional context can be seen in the complete photograph, which shows Mt. Everest and other large peaks to the northwest. More information on the photograph STS058-101-12 can be found at the Gateway to Astronaut Photography of Earth. An unannotated version can also be downloaded. The digital images shown have been reduced to a spatial resolution equivalent to 48 m / pixel; a high-resolution digital image of the same photograph would be at 12 meters per pixel. A new interactive tutorial, Find Mt. Everest From Space, is now available on the Web. The presentation was created by the Earth Sciences and Image Analysis Laboratory, Johnson Space Center, from astronaut training materials developed by William R. Muehlberger (University of Texas, Austin), who has instructed astronauts in geology since the Apollo missions. While circling the globe every 90 minutes, astronauts have only seconds to find key peaks in the Himalayas. These photographs are used to train their eyes so they can rapidly find and photograph Everest when they pass over. The tutorial features astronaut photographs of the Himalayas, interactive graphics that illustrate key geographic features for locating Mt. Everest, and information on the geology of the region.
The lesson concludes with a test of your ability to identify Everest in different photographs taken from the Space Shuttle. Earth Sciences and Image Analysis Laboratory, Johnson Space Center
Meinhardt, Günter; Kurbel, David; Meinhardt-Injac, Bozana; Persike, Malte
2018-03-22
Some years ago an asymmetry was reported for the inversion effect for horizontal (H) and vertical (V) relational face manipulations (Goffaux & Rossion, 2007). Subsequent research examined whether a specific disruption of long-range relations underlies the H/V inversion asymmetry (Sekunova & Barton, 2008). Here, we tested how detection of changes in interocular distance (H) and eye height (V) depends on cardinal internal features and external feature surround. Results replicated the H/V inversion asymmetry. Moreover, we found very different face cue dependencies for both change types. Performance and inversion effects did not depend on the presence of other face cues for detecting H changes. In contrast, accuracy for detecting V changes strongly depended on internal and external features, showing cumulative improvement when more cues were added. Inversion effects were generally large, and larger with external feature surround. The cue independence in detecting H relational changes indicates specialized local processing tightly tuned to the eyes region, while the strong cue dependency in detecting V relational changes indicates a global mechanism of cue integration across different face regions. These findings suggest that the H/V asymmetry of the inversion effect rests on an H/V anisotropy of face cue dependency, since only the global V mechanism suffers from disruption of cue integration as the major effect of face inversion. Copyright © 2018. Published by Elsevier Ltd.
Extracted facial feature of racial closely related faces
NASA Astrophysics Data System (ADS)
Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu
2010-02-01
Human faces contain a great deal of demographic information, such as identity, gender, age, race, and emotion. Human beings can perceive these pieces of information and use them as important cues in social interaction with other people. Race perception is considered one of the most delicate and sensitive aspects of face perception. There is much research concerning image-based race recognition, but most of it focuses on the major race groups, such as Caucasoid, Negroid, and Mongoloid. This paper focuses on how people classify the races of closely related racial groups. As a sample of such a group, we chose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. The results of the psychological experiments suggest that race perception is an ability that can be learned. The eyes and eyebrows attract the most attention, and the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract the facial features of the sample race groups. The extracted texture and shape race features were used to synthesize faces. The results suggest that racial features rely on detailed texture rather than on shape. This research is indispensable fundamental work on race perception, which is essential for establishing a human-like race recognition system.
View of EPA Farm cattle shelter (featuring horse trailer), facing ...
View of EPA Farm cattle shelter (featuring horse trailer), facing northwest - Nevada Test Site, Environmental Protection Agency Farm, Shelter Unit Type, Area 15, Yucca Flat, 10-2 Road near Circle Road, Mercury, Nye County, NV
1. VIEW OF THE HOIST (FEATURE B26), FACING NORTHEAST. IT ...
1. VIEW OF THE HOIST (FEATURE B-26), FACING NORTHEAST. IT IS SITUATED ADJACENT TO THE HATCH ADIT. - Nevada Lucky Tiger Mill & Mine, Hoist, East slope of Buckskin Mountain, Paradise Valley, Humboldt County, NV
FEATURE A. CONCRETE ANTIAIRCRAFT GUN POSITION, VIEW FACING NORTHNORTHEAST. ...
FEATURE A. CONCRETE ANTI-AIRCRAFT GUN POSITION, VIEW FACING NORTH-NORTHEAST. - Naval Air Station Barbers Point, Battery-Anti-Aircraft Gun Position, South of Point Cruz Road & west of Coral Sea Road, Ewa, Honolulu County, HI
FEATURE B. MACHINE GUN POSITION WITH LEWIS MOUNT, VIEW FACING ...
FEATURE B. MACHINE GUN POSITION WITH LEWIS MOUNT, VIEW FACING NORTHWEST. - Naval Air Station Barbers Point, Battery-Machine Gun Positions, South of Point Cruz Road & west of Coral Sea Road, Ewa, Honolulu County, HI
FEATURE D, TYPE 1 PILLBOX, WEST SIDE, VIEW FACING EAST ...
FEATURE D, TYPE 1 PILLBOX, WEST SIDE, VIEW FACING EAST (with scale stick). - Naval Air Station Barbers Point, Shore Pillbox Complex-Type 1 Pillbox, Along shoreline, seaward of Coral Sea Road, Ewa, Honolulu County, HI
Short, Lindsey A; Hatry, Alexandra J; Mondloch, Catherine J
2011-02-01
The current research investigated the organization of children's face space by examining whether 5- and 8-year-olds show race-contingent aftereffects. Participants read a storybook in which Caucasian and Chinese children's faces were distorted in opposite directions. Before and after adaptation, participants judged the normality/attractiveness of expanded, compressed, and undistorted Caucasian and Chinese faces. The method was validated with adults and then refined to test 8- and 5-year-olds. The 5-year-olds were also tested in a simple aftereffects paradigm. The current research provides the first evidence for simple attractiveness aftereffects in 5-year-olds and for race-contingent aftereffects in both 5- and 8-year-olds. Evidence that adults and 5-year-olds may possess only a weak prototype for Chinese children's faces suggests that Caucasian adults' prototype for Chinese adult faces does not generalize to child faces and that children's face space undergoes a period of increasing differentiation between 5 and 8 years of age. Copyright © 2010 Elsevier Inc. All rights reserved.
Face recognition algorithm based on Gabor wavelet and locality preserving projections
NASA Astrophysics Data System (ADS)
Liu, Xiaojie; Shen, Lin; Fan, Honghui
2017-07-01
In order to reduce the effects of illumination changes and differences in personal features on the face recognition rate, this paper presents a new face recognition algorithm based on Gabor wavelets and Locality Preserving Projections (LPP). The problem of the high dimensionality of Gabor filter banks was solved effectively, and the weakness of LPP under illumination changes was also overcome. First, global image features were obtained by exploiting the good spatial locality and orientation selectivity of Gabor wavelet filters. The dimensionality was then reduced using LPP, which preserves the local information of the image well. The experimental results show that this algorithm can effectively extract features relating to facial expression, pose, and other information. It also reduces the influence of illumination changes and of differences in personal features, improving the face recognition rate to 99.2%.
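The Gabor stage above can be sketched as a small filter bank. The parameter values below (kernel size, sigma, wavelength, aspect ratio) are illustrative defaults, not the paper's settings:

```python
import numpy as np


def gabor_kernel(ksize=31, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5):
    """Real part of a 2-D Gabor filter: a Gaussian envelope times a cosine
    carrier, oriented at angle `theta`."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / lambd)
    return envelope * carrier


# A bank over 8 orientations, the usual way Gabor features are collected;
# convolving an image with each kernel yields the per-orientation responses.
bank = [gabor_kernel(theta=np.pi * k / 8) for k in range(8)]
```

The orientation selectivity mentioned in the abstract comes from the rotated carrier: each kernel responds most strongly to edges perpendicular to its `theta`.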
Orientation-sensitivity to facial features explains the Thatcher illusion.
Psalta, Lilia; Young, Andrew W; Thompson, Peter; Andrews, Timothy J
2014-10-09
The Thatcher illusion provides a compelling example of the perceptual cost of face inversion. The Thatcher illusion is often thought to result from a disruption to the processing of spatial relations between face features. Here, we show the limitations of this account and instead demonstrate that the effect of inversion in the Thatcher illusion is better explained by a disruption to the processing of purely local facial features. Using a matching task, we found that participants were able to discriminate normal and Thatcherized versions of the same face when they were presented in an upright orientation, but not when the images were inverted. Next, we showed that the effect of inversion was also apparent when only the eye region or only the mouth region was visible. These results demonstrate that a key component of the Thatcher illusion is to be found in orientation-specific encoding of the expressive features (eyes and mouth) of the face. © 2014 ARVO.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-06
... be active during all dynamic tests conducted to show compliance with Sec. 25.562. (2) The design and... novel or unusual design feature(s) associated with multiple place and single place side- facing seats... not contain adequate or appropriate safety standards for this design feature. These proposed special...
A multi-view face recognition system based on cascade face detector and improved Dlib
NASA Astrophysics Data System (ADS)
Zhou, Hongjun; Chen, Pei; Shen, Wei
2018-03-01
In this research, we present a framework for a multi-view face detection and recognition system based on a cascade face detector and an improved Dlib. The method aims to solve the problems of low efficiency and low accuracy in multi-view face recognition, to build a multi-view face recognition system, and to identify a suitable monitoring scheme. For face detection, the cascade face detector extracts Haar-like features from the training samples, and these features are used to train a cascade classifier with the AdaBoost algorithm. For face recognition, we propose an improved distance model based on Dlib to increase the accuracy of multi-view face recognition. Furthermore, we applied the proposed method to recognizing face images taken from different viewing directions, including the horizontal, overlooking, and looking-up views, and investigated a suitable monitoring scheme. The method works well for multi-view face recognition; simulations and tests show satisfactory experimental results.
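The Haar-like features that cascade detectors build on are rectangle-sum differences, computed in constant time from an integral image. A minimal sketch (one of several Haar-like feature types):

```python
import numpy as np


def integral_image(img):
    """Summed-area table: entry (y, x) holds the sum of img[:y+1, :x+1]."""
    return np.cumsum(np.cumsum(img, axis=0), axis=1)


def rect_sum(ii, y, x, h, w):
    """Sum of the h x w rectangle at (y, x) via four table lookups."""
    padded = np.pad(ii, ((1, 0), (1, 0)))  # zero row/column for the borders
    return (padded[y + h, x + w] - padded[y, x + w]
            - padded[y + h, x] + padded[y, x])


def haar_two_rect(ii, y, x, h, w):
    """Two-rectangle Haar-like feature: left half minus right half."""
    return rect_sum(ii, y, x, h, w // 2) - rect_sum(ii, y, x + w // 2, h, w // 2)
```

AdaBoost then selects the most discriminative of these features and chains the resulting weak classifiers into the cascade; that training loop is omitted here.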
Holistic processing, contact, and the other-race effect in face recognition.
Zhao, Mintao; Hayward, William G; Bülthoff, Isabelle
2014-12-01
Face recognition, holistic processing, and processing of configural and featural facial information are known to be influenced by face race, with better performance for own- than other-race faces. However, whether these various other-race effects (OREs) arise from the same underlying mechanisms or from different processes remains unclear. The present study addressed this question by measuring the OREs in a set of face recognition tasks, and testing whether these OREs are correlated with each other. Participants performed different tasks probing (1) face recognition, (2) holistic processing, (3) processing of configural information, and (4) processing of featural information for both own- and other-race faces. Their contact with other-race people was also assessed with a questionnaire. The results show significant OREs in tasks testing face memory and processing of configural information, but not in tasks testing either holistic processing or processing of featural information. Importantly, there was no cross-task correlation between any of the measured OREs. Moreover, the level of other-race contact predicted only the OREs obtained in tasks testing face memory and processing of configural information. These results indicate that these various cross-race differences originate from different aspects of face processing, contrary to the view that the ORE in face recognition is due to cross-race differences in terms of holistic processing. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Feature saliency in judging the sex and familiarity of faces.
Roberts, T; Bruce, V
1988-01-01
Two experiments are reported on the effect of feature masking on judgements of the sex and familiarity of faces. In experiment 1 the effect of masking the eyes, nose, or mouth of famous and nonfamous, male and female faces on response times in two tasks was investigated. In the first, recognition, task only masking of the eyes had a significant effect on response times. In the second, sex-judgement, task masking of the nose gave rise to a significant and large increase in response times. In experiment 2 it was found that when facial features were presented in isolation in a sex-judgement task, responses to noses were at chance level, unlike those for eyes or mouths. It appears that visual information available from the nose in isolation from the rest of the face is not sufficient for sex judgement, yet masking of the nose may disrupt the extraction of information about the overall topography of the face, information that may be more useful for sex judgement than for identification of a face.
Face Alignment via Regressing Local Binary Features.
Ren, Shaoqing; Cao, Xudong; Wei, Yichen; Sun, Jian
2016-03-01
This paper presents a highly efficient and accurate regression approach for face alignment. Our approach has two novel components: 1) a set of local binary features and 2) a locality principle for learning those features. The locality principle guides us to learn a set of highly discriminative local binary features for each facial landmark independently. The obtained local binary features are used to jointly learn a linear regression for the final output. This approach achieves state-of-the-art results when tested on the most challenging benchmarks to date. Furthermore, because extracting and regressing local binary features are computationally very cheap, our system is much faster than previous methods. It achieves over 3000 frames per second (FPS) on a desktop or 300 FPS on a mobile phone for locating a few dozen landmarks. We also study a key issue that is important but has received little attention in previous research: the face detector used to initialize alignment. We investigate several face detectors and perform quantitative evaluation on how they affect alignment accuracy. We find that an alignment-friendly detector can further greatly boost the accuracy of our alignment method, reducing the error by up to 16% in relative terms. To facilitate practical usage of face detection/alignment methods, we also propose a convenient metric to measure how good a detector is for alignment initialization.
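The local binary features above are learned, but they generalize the classic local binary pattern (LBP), which thresholds a pixel's neighbours at its centre value. A minimal sketch of that underlying idea (not the paper's learned features):

```python
import numpy as np


def lbp_code(patch):
    """Basic 3x3 local binary pattern: threshold the 8 neighbours at the
    centre value and pack the bits into one byte (clockwise from top-left)."""
    c = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum((1 << i) for i, v in enumerate(neighbours) if v >= c)


def lbp_image(img):
    """LBP code for every interior pixel of a 2-D intensity image."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = lbp_code(img[y:y + 3, x:x + 3])
    return out
```

Histograms of such codes over landmark-centred patches are the kind of binary descriptor the regression in the paper consumes.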
Impaired holistic processing of unfamiliar individual faces in acquired prosopagnosia.
Ramon, Meike; Busigny, Thomas; Rossion, Bruno
2010-03-01
Prosopagnosia is an impairment at individualizing faces that classically follows brain damage. Several studies have reported observations supporting an impairment of holistic/configural face processing in acquired prosopagnosia. However, this issue may require more compelling evidence as the cases reported were generally patients suffering from integrative visual agnosia, and the sensitivity of the paradigms used to measure holistic/configural face processing in normal individuals remains unclear. Here we tested a well-characterized case of acquired prosopagnosia (PS) with no object recognition impairment, in five behavioral experiments (whole/part and composite face paradigms with unfamiliar faces). In all experiments, for normal observers we found that processing of a given facial feature was affected by the location and identity of the other features in a whole face configuration. In contrast, the patient's results over these experiments indicate that she encodes local facial information independently of the other features embedded in the whole facial context. These observations and a survey of the literature indicate that abnormal holistic processing of the individual face may be a characteristic hallmark of prosopagnosia following brain damage, perhaps with various degrees of severity. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
Sandford, Adam; Burton, A Mike
2014-09-01
Face recognition is widely held to rely on 'configural processing', an analysis of spatial relations between facial features. We present three experiments in which viewers were shown distorted faces, and asked to resize these to their correct shape. Based on configural theories appealing to metric distances between features, we reason that this should be an easier task for familiar than unfamiliar faces (whose subtle arrangements of features are unknown). In fact, participants were inaccurate at this task, making between 8% and 13% errors across experiments. Importantly, we observed no advantage for familiar faces: in one experiment participants were more accurate with unfamiliars, and in two experiments there was no difference. These findings were not due to general task difficulty - participants were able to resize blocks of colour to target shapes (squares) more accurately. We also found an advantage of familiarity for resizing other stimuli (brand logos). If configural processing does underlie face recognition, these results place constraints on the definition of 'configural'. Alternatively, familiar face recognition might rely on more complex criteria - based on tolerance to within-person variation rather than highly specific measurement. Copyright © 2014 Elsevier B.V. All rights reserved.
A model of face selection in viewing video stories.
Suda, Yuki; Kitazawa, Shigeru
2015-01-19
When typical adults watch TV programs, they show surprisingly stereotyped gaze behaviours, as indicated by the almost simultaneous shifts of their gazes from one face to another. However, a standard saliency model based on low-level physical features alone failed to explain such typical gaze behaviours. To find rules that explain the typical gaze behaviours, we examined temporo-spatial gaze patterns in adults while they viewed video clips with human characters that were played with or without sound, and in the forward or reverse direction. We here show the following: 1) the "peak" face scanpath, which followed the face that attracted the largest number of views but ignored other objects in the scene, still retained the key features of actual scanpaths, 2) gaze behaviours remained unchanged whether the sound was provided or not, 3) the gaze behaviours were sensitive to time reversal, and 4) nearly 60% of the variance of gaze behaviours was explained by the face saliency that was defined as a function of its size, novelty, head movements, and mouth movements. These results suggest that humans share a face-oriented network that integrates several visual features of multiple faces, and directs our eyes to the most salient face at each moment.
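The face-saliency model described, a function of size, novelty, head movement, and mouth movement, can be sketched as a weighted combination. The weights below are illustrative placeholders, not the values fitted in the study:

```python
# Hedged sketch of a linear face-saliency score over the four cues the
# study names; cue values are assumed normalized to comparable ranges.
def face_saliency(size, novelty, head_movement, mouth_movement,
                  weights=(0.4, 0.2, 0.2, 0.2)):
    w_size, w_nov, w_head, w_mouth = weights
    return (w_size * size + w_nov * novelty
            + w_head * head_movement + w_mouth * mouth_movement)


def most_salient_face(faces):
    """Pick the face (a dict of cue values) predicted to attract gaze."""
    return max(faces, key=lambda f: face_saliency(
        f["size"], f["novelty"], f["head_movement"], f["mouth_movement"]))
```

Evaluating `most_salient_face` frame by frame yields a predicted scanpath that can be compared against the observed "peak" face scanpath.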
Blood perfusion construction for infrared face recognition based on bio-heat transfer.
Xie, Zhihua; Liu, Guodong
2014-01-01
To improve the performance of infrared face recognition on time-lapse data, a new construction of blood perfusion is proposed based on bio-heat transfer. First, by quantifying the blood perfusion based on the Pennes equation, the thermal information is converted into a blood perfusion rate, which is a stable biological feature of the face image. Then, the separability discriminant criterion in the Discrete Cosine Transform (DCT) domain is applied to extract the discriminative features of the blood perfusion information. Experimental results demonstrate that the blood perfusion features are more concentrated and discriminative for recognition than the raw thermal information. Infrared face recognition based on the proposed blood perfusion construction is robust and achieves better recognition performance than other state-of-the-art approaches.
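The DCT-domain step can be sketched with an orthonormal 2-D DCT, keeping low-frequency coefficients as features. This is a generic sketch: the paper selects coefficients with a separability discriminant criterion, which is not reproduced here:

```python
import numpy as np


def dct_matrix(n):
    """Orthonormal DCT-II basis matrix C, so the 1-D transform is C @ x."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2 / n)
    c[0, :] = np.sqrt(1 / n)   # DC row has its own normalization
    return c


def dct2(block):
    """Separable 2-D DCT: apply the 1-D transform along rows and columns."""
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T


def low_freq_features(block, keep=8):
    """Keep the top-left (low-frequency) coefficients as a feature vector."""
    return dct2(block)[:keep, :keep].ravel()
```

For a constant input block only the DC coefficient survives, which is the sanity check used below.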
ERIC Educational Resources Information Center
Anzures, Gizelle; Kelly, David J.; Pascalis, Olivier; Quinn, Paul C.; Slater, Alan M.; de Viviés, Xavier; Lee, Kang
2014-01-01
We used a matching-to-sample task and manipulated facial pose and feature composition to examine the other-race effect (ORE) in face identity recognition between 5 and 10 years of age. Overall, the present findings provide a genuine measure of own- and other-race face identity recognition in children that is independent of photographic and image…
FEATURE 1, SMALL GUN POSITION, VIEW FACING NORTH, (with scale ...
FEATURE 1, SMALL GUN POSITION, VIEW FACING NORTH, (with scale stick). - Naval Air Station Barbers Point, Anti-Aircraft Battery Complex-Small Gun Position, East of Coral Sea Road, northwest of Hamilton Road, Ewa, Honolulu County, HI
FEATURE 3, LARGE GUN POSITION, SHOWING MULTIPLE COMPARTMENTS, VIEW FACING ...
FEATURE 3, LARGE GUN POSITION, SHOWING MULTIPLE COMPARTMENTS, VIEW FACING SOUTH. - Naval Air Station Barbers Point, Anti-Aircraft Battery Complex-Large Gun Position, East of Coral Sea Road, northwest of Hamilton Road, Ewa, Honolulu County, HI
FEATURE B. MACHINE GUN POSITION WITH LEWIS MOUNT, VIEW FACING ...
FEATURE B. MACHINE GUN POSITION WITH LEWIS MOUNT, VIEW FACING NORTHWEST (with scale stick). - Naval Air Station Barbers Point, Battery-Machine Gun Positions, South of Point Cruz Road & west of Coral Sea Road, Ewa, Honolulu County, HI
FEATURE A. CONCRETE ANTIAIRCRAFT GUN POSITION, VIEW FACING NORTH ...
FEATURE A. CONCRETE ANTI-AIRCRAFT GUN POSITION, VIEW FACING NORTH - NORTHEAST (with scale stick). - Naval Air Station Barbers Point, Battery-Anti-Aircraft Gun Position, South of Point Cruz Road & west of Coral Sea Road, Ewa, Honolulu County, HI
FEATURE 2, SHELTER, NORTHNORTHEAST SIDE, VIEW FACING SOUTHSOUTHWEST (with scale ...
FEATURE 2, SHELTER, NORTH-NORTHEAST SIDE, VIEW FACING SOUTH-SOUTHWEST (with scale stick). - Naval Air Station Barbers Point, Anti-Aircraft Battery Complex-Shelter, East of Coral Sea Road, northwest of Hamilton Road, Ewa, Honolulu County, HI
FEATURE 4, ARMCO HUT, REAR AND SOUTHWEST SIDE, VIEW FACING ...
FEATURE 4, ARMCO HUT, REAR AND SOUTHWEST SIDE, VIEW FACING NORTH-NORTHWEST. - Naval Air Station Barbers Point, Anti-Aircraft Battery Complex-ARMCO Hut, East of Coral Sea Road, northwest of Hamilton Road, Ewa, Honolulu County, HI
FEATURE 4, ARMCO HUT, ENTRANCE FACADE, VIEW FACING EASTSOUTHEAST (with ...
FEATURE 4, ARMCO HUT, ENTRANCE FACADE, VIEW FACING EAST-SOUTHEAST (with scale stick). - Naval Air Station Barbers Point, Anti-Aircraft Battery Complex-ARMCO Hut, East of Coral Sea Road, northwest of Hamilton Road, Ewa, Honolulu County, HI
Deep neural network features for horses identity recognition using multiview horses' face pattern
NASA Astrophysics Data System (ADS)
Jarraya, Islem; Ouarda, Wael; Alimi, Adel M.
2017-03-01
To monitor the state of horses in the barn, breeders need a monitoring system with a surveillance camera that can identify and distinguish between horses. We proposed in [5] a method of horse identification at a distance using the frontal facial biometric modality. Because of changes in viewpoint, face recognition becomes more difficult. In this paper, the number of images in our THoDBRL'2015 database (Tunisian Horses DataBase of Regim Lab) is augmented by adding images of other views, so that front, right-profile, and left-profile views of the face are used. We then propose an approach for multi-view face recognition. First, the Gabor filter is used for face characterization. Next, given the increased number of images and the large number of Gabor features, we use a Deep Neural Network with an auto-encoder to obtain more pertinent features and to reduce the size of the feature vector. Finally, we evaluated the proposed approach on our THoDBRL'2015 database, using a linear SVM for classification.
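The auto-encoder's role here is feature compression. A toy linear auto-encoder illustrates that step; the paper uses a deep network on Gabor features, so this is only a minimal stand-in trained by gradient descent:

```python
import numpy as np


def train_linear_autoencoder(x, hidden_dim, lr=0.05, epochs=500, seed=0):
    """Tiny linear auto-encoder minimizing reconstruction error; rows of x
    are feature vectors, compressed to `hidden_dim`-dimensional codes."""
    rng = np.random.default_rng(seed)
    n, d = x.shape
    w_enc = rng.standard_normal((d, hidden_dim)) * 0.1
    w_dec = rng.standard_normal((hidden_dim, d)) * 0.1
    for _ in range(epochs):
        code = x @ w_enc              # encode
        err = code @ w_dec - x        # reconstruction error
        w_dec -= lr * (code.T @ err) / n
        w_enc -= lr * (x.T @ (err @ w_dec.T)) / n
    return w_enc, w_dec


# Toy usage on random 5-D data compressed to 3 dimensions; in the paper's
# pipeline the codes (x @ w_enc) would feed a linear SVM.
rng = np.random.default_rng(1)
x = rng.standard_normal((50, 5))
w_enc, w_dec = train_linear_autoencoder(x, hidden_dim=3)
mse = float(np.mean((x @ w_enc @ w_dec - x) ** 2))
```

A linear auto-encoder like this can at best span the top principal subspace; the nonlinear deep version in the paper is strictly more expressive.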
3D face analysis by using Mesh-LBP feature
NASA Astrophysics Data System (ADS)
Wang, Haoyu; Yang, Fumeng; Zhang, Yuming; Wu, Congzhong
2017-11-01
Objective: Face recognition is one of the most widespread applications of image processing. The limitations of two-dimensional approaches, such as sensitivity to pose and illumination changes, restrict its accuracy and further development to a certain extent. How to overcome pose and illumination changes and the effects of self-occlusion is a research hotspot and a difficulty, attracting more and more experts and scholars at home and abroad. 3D face recognition that fuses shape and texture descriptors has become a very promising research direction. Method: Our paper presents a mesh local binary pattern (Mesh-LBP) representation of a 3D point cloud, followed by feature extraction for 3D face recognition that fuses shape and texture descriptors. 3D Mesh-LBP not only retains the integrity of the 3D geometry, it also reduces the need for normalization steps in the recognition process, because the triangular Mesh-LBP descriptor is calculated directly on the 3D mesh. Moreover, given the advantage of multi-modal consistency in face recognition, the LBP construction can fuse shape and texture information on the triangular mesh. In this paper, several operators are used to extract Mesh-LBP features, such as the normal vectors of each triangular face and vertex, the Gaussian curvature, the mean curvature, and the Laplace operator. Conclusion: A Kinect device captures a 3D point cloud of the face; after preprocessing and normalization, the cloud is transformed into a triangular mesh, and mesh local binary pattern features are extracted from the key salient parts of the face. For each local face region, its Mesh-LBP feature is calculated with the Gaussian curvature, mean curvature, Laplace operator, and so on. Experiments on our research database show that the method is robust and achieves high recognition accuracy.
NASA Technical Reports Server (NTRS)
Johnson, Teresa A.
2006-01-01
Knowledge Management is a proactive pursuit for the future success of any large organization faced with the imminent possibility that its senior managers and engineers, with their accumulated experience and lessons learned, plan to retire in the near term. Safety and Mission Assurance (S&MA) is proactively pursuing unique mechanisms to ensure that knowledge is retained and lessons learned are captured and documented. Knowledge Capture events, activities, and management help to provide a gateway between future retirees and our next generation of managers and engineers. S&MA hosted two Knowledge Capture Events during 2005 featuring three of its retiring fellows (Axel Larsen, Dave Whittle, and Gary Johnson). The first Knowledge Capture Event, on February 24, 2005, focused on two Safety and Mission Assurance safety panels (the Space Shuttle System Safety Review Panel (SSRP) and the Payload Safety Review Panel (PSRP)); the latter event, on December 15, 2005, featured lessons learned during Apollo, Skylab, and the Space Shuttle that could be applicable to the newly created Crew Exploration Vehicle (CEV)/Constellation development program. Gemini, Apollo, Skylab, and the Space Shuttle promised and delivered exciting human advances in space and benefits of space in people's everyday lives on Earth. Johnson Space Center's Safety & Mission Assurance work over the last 20 years has been mostly focused on operations; we are now beginning the Exploration development program. S&MA will promote an atmosphere of knowledge sharing in its formal and informal cultures and work processes, and reward the open dissemination and sharing of information; we are asking, "Why embrace relearning the lessons learned in the past?" On the Exploration program the focus will be on Design, Development, Test, & Evaluation (DDT&E); therefore, it is critical to understand the lessons from these past programs during the DDT&E phase.
Surgical anatomy of the middle premasseter space and its application in sub-SMAS face lift surgery.
Mendelson, Bryan C; Wong, Chin-Ho
2013-07-01
The premasseter space is a recognized, sub-superficial musculoaponeurotic system (SMAS) soft-tissue space overlying the lower masseter immediately anterior to the parotid. The performance, safety, and effectiveness of composite face lifts are enhanced when the space is used. This has drawn attention to the need for better understanding of the premasseter anatomy above the space. The anatomy of the upper premasseter region was investigated in 20 fresh cadaver dissections as well as intraoperatively in hundreds of composite face lifts. A small, transverse, rectangular soft-tissue space overlies the upper masseter and was named the middle premasseter space. The space (transverse width, 25 to 28 mm; vertical width, 10 mm) is separated from the originally described (lower) premasseter space by a double membrane. It is a safe space between the upper and lower buccal trunks of the facial nerve, which are immediately outside the space and separated from it by the respective upper and lower boundary membranes. The parotid duct immediately beneath the floor of the space usually underlies the upper boundary membrane. The middle premasseter space is significant, as it is the center of the key anatomy immediately cephalad to the lower premasseter space. When used in composite face lifts, the space provides predictable sub-SMAS dissection between the buccal trunks of the facial nerve to the mobile area beyond the anterior border of the masseter where the SMAS overlies the buccal fat pad.
Finessing filter scarcity problem in face recognition via multi-fold filter convolution
NASA Astrophysics Data System (ADS)
Low, Cheng-Yaw; Teoh, Andrew Beng-Jin
2017-06-01
The deep convolutional neural networks for face recognition, from DeepFace to the recent FaceNet, demand a sufficiently large volume of filters for feature extraction, in addition to being deep. The shallow filter-bank approaches, e.g., principal component analysis network (PCANet), binarized statistical image features (BSIF), and other analogous variants, suffer from a filter scarcity problem: not all of the available PCA and ICA filters are discriminative enough to abstract noise-free features. This paper extends our previous work on multi-fold filter convolution (ℳ-FFC), where the pre-learned PCA and ICA filter sets are exponentially diversified by ℳ folds to instantiate PCA, ICA, and PCA-ICA offspring. The experimental results show that the 2-FFC operation alleviates the filter scarcity problem. The 2-FFC descriptors also prove superior to those of PCANet, BSIF, and other face descriptors in terms of rank-1 identification rate (%).
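The PCA filter sets that ℳ-FFC diversifies are learned PCANet-style: the leading eigenvectors of the patch covariance, reshaped into convolution filters. A minimal sketch of that starting point (patch extraction and the ICA branch are omitted):

```python
import numpy as np


def learn_pca_filters(patches, num_filters):
    """PCANet-style filter learning: leading eigenvectors of the
    mean-removed patch covariance, reshaped into 2-D filters.
    `patches` has shape (n_patches, k*k)."""
    x = patches - patches.mean(axis=1, keepdims=True)  # remove per-patch mean
    cov = x.T @ x
    eigvals, eigvecs = np.linalg.eigh(cov)             # ascending eigenvalues
    top = eigvecs[:, ::-1][:, :num_filters]            # leading components
    k = int(np.sqrt(patches.shape[1]))
    return top.T.reshape(num_filters, k, k)


# Toy usage: 8 filters of size 7x7 learned from 500 random patches.
rng = np.random.default_rng(0)
filters = learn_pca_filters(rng.standard_normal((500, 49)), num_filters=8)
```

ℳ-FFC would then convolve such pre-learned PCA and ICA filters with one another to spawn the diversified offspring filter sets.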
Unaware person recognition from the body when face identification fails.
Rice, Allyson; Phillips, P Jonathon; Natu, Vaidehi; An, Xiaobo; O'Toole, Alice J
2013-11-01
How does one recognize a person when face identification fails? Here, we show that people rely on the body but are unaware of doing so. State-of-the-art face-recognition algorithms were used to select images of people with almost no useful identity information in the face. Recognition of the face alone in these cases was near chance level, but recognition of the person was accurate. Accuracy in identifying the person without the face was identical to that in identifying the whole person. Paradoxically, people reported relying heavily on facial features over noninternal face and body features in making their identity decisions. Eye movements indicated otherwise, with gaze duration and fixations shifting adaptively toward the body and away from the face when the body was a better indicator of identity than the face. This shift occurred with no cost to accuracy or response time. Human identity processing may be partially inaccessible to conscious awareness.
van den Hurk, J; Gentile, F; Jansma, B M
2011-12-01
The identification of a face comprises processing of both visual features and conceptual knowledge. Studies showing that the fusiform face area (FFA) is sensitive to face identity generally neglect this dissociation. The present study is the first that isolates conceptual face processing by using words presented in a person context instead of faces. The design consisted of 2 different conditions. In one condition, participants were presented with blocks of words related to each other at the categorical level (e.g., brands of cars, European cities). The second condition consisted of blocks of words linked to the personality features of a specific face. Both conditions were created from the same 8 × 8 word matrix, thereby controlling for visual input across conditions. Univariate statistical contrasts did not yield any significant differences between the 2 conditions in FFA. However, a machine learning classification algorithm was able to successfully learn the functional relationship between the 2 contexts and their underlying response patterns in FFA, suggesting that these activation patterns can code for different semantic contexts. These results suggest that the level of processing in FFA goes beyond facial features. This has strong implications for the debate about the role of FFA in face identification.
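The classification analysis described, learning to separate two semantic contexts from multivoxel response patterns in the FFA, can be illustrated with a nearest-centroid decoder on simulated ROI data. The voxel count, noise level, and classifier choice here are all illustrative assumptions; the study's actual algorithm is not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox = 50  # simulated ROI voxels

# Two conditions (category-level words vs. person-linked words) with a
# small, distributed difference in their mean activation patterns.
mu_a = rng.standard_normal(n_vox)
mu_b = rng.standard_normal(n_vox)
train_a = mu_a + 0.5 * rng.standard_normal((20, n_vox))
train_b = mu_b + 0.5 * rng.standard_normal((20, n_vox))
test_a  = mu_a + 0.5 * rng.standard_normal((10, n_vox))
test_b  = mu_b + 0.5 * rng.standard_normal((10, n_vox))

# Nearest-centroid decoding: label a held-out pattern by the closer
# training-set centroid.
c_a, c_b = train_a.mean(axis=0), train_b.mean(axis=0)

def decode(pattern):
    return 'A' if np.linalg.norm(pattern - c_a) < np.linalg.norm(pattern - c_b) else 'B'

hits = sum(decode(p) == 'A' for p in test_a) + sum(decode(p) == 'B' for p in test_b)
accuracy = hits / 20
print(accuracy)
```

The point mirrors the abstract's finding: a multivariate decoder can separate conditions even when the univariate (mean) difference per voxel is small.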
Isomura, Tomoko; Ogawa, Shino; Yamada, Satoko; Shibasaki, Masahiro; Masataka, Nobuo
2014-01-01
Previous studies have demonstrated that angry faces capture humans' attention more rapidly than emotionally positive faces. This phenomenon is referred to as the anger superiority effect (ASE). Despite atypical emotional processing, adults and children with Autism Spectrum Disorders (ASD) have been reported to show the ASE, as have typically developing (TD) individuals. So far, however, few studies have clarified whether or not the mechanisms underlying the ASE are the same for TD and ASD individuals. Here, we tested how TD and ASD children process schematic emotional faces during detection by employing a recognition task in combination with a face-in-the-crowd task. Results of the face-in-the-crowd task revealed the prevalence of the ASE in both TD and ASD children. However, the results of the recognition task revealed group differences: in TD children, detection of angry faces required more configural face processing and disrupted the processing of local features; in ASD children, on the other hand, it required more feature-based processing rather than configural processing. Despite the small sample sizes, these findings provide preliminary evidence that children with ASD, in contrast to TD children, show quick detection of angry faces by extracting local features in faces. PMID:24904477
Tanaka, James W; Wolf, Julie M; Klaiman, Cheryl; Koenig, Kathleen; Cockburn, Jeffrey; Herlihy, Lauren; Brown, Carla; Stahl, Sherin; Kaiser, Martha D; Schultz, Robert T
2010-08-01
An emerging body of evidence indicates that relative to typically developing children, children with autism are selectively impaired in their ability to recognize facial identity. A critical question is whether face recognition skills can be enhanced through a direct training intervention. In a randomized clinical trial, children diagnosed with autism spectrum disorder were pre-screened with a battery of subtests (the Let's Face It! Skills battery) examining face and object processing abilities. Participants who were significantly impaired in their face processing abilities were assigned to either a treatment or a waitlist group. Children in the treatment group (N = 42) received 20 hours of face training with the Let's Face It! (LFI!) computer-based intervention. The LFI! program is comprised of seven interactive computer games that target the specific face impairments associated with autism, including the recognition of identity across image changes in expression, viewpoint and features, analytic and holistic face processing strategies and attention to information in the eye region. Time 1 and Time 2 performance for the treatment and waitlist groups was assessed with the Let's Face It! Skills battery. The main finding was that relative to the control group (N = 37), children in the face training group demonstrated reliable improvements in their analytic recognition of mouth features and holistic recognition of a face based on its eyes features. These results indicate that a relatively short-term intervention program can produce measurable improvements in the face recognition skills of children with autism. As a treatment for face processing deficits, the Let's Face It! program has advantages of being cost-free, adaptable to the specific learning needs of the individual child and suitable for home and school applications.
Akechi, Hironori; Kikuchi, Yukiko; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu
2014-01-27
Numerous studies have revealed atypical face processing in autism spectrum disorders (ASD) characterized by social interaction and communication difficulties. This study investigated sensitivity to face-likeness in ASD. In Experiment 1, we found a strong positive correlation between the face-likeness ratings of non-face objects in the ASD (11-19 years old) and the typically developing (TD) group (9-21 years old). In Experiment 2 (the scalp-recorded event-related potential experiment), the participants of both groups (ASD, 12-19 years old; TD, 12-18 years old) exhibited an enhanced face-sensitive N170 amplitude to a face-like object. Whereas the TD adolescents showed an enhanced N170 during the face-likeness judgements, adolescents with ASD did not. Thus, both individuals with ASD and TD individuals have a perceptual and neural sensitivity to face-like features in objects. When required to process face-like features, a face-related brain system reacts more strongly in TD individuals but not in individuals with ASD.
Palcu, Johanna; Sudkamp, Jennifer; Florack, Arnd
2017-01-01
Banner advertising is a popular means of promoting products and brands online. Although banner advertisements are often designed to be particularly attention grabbing, they frequently go unnoticed. Applying an eye-tracking procedure, the present research aimed (a) to determine whether presenting human faces (static or animated) in banner advertisements is an adequate tool for capturing consumers' attention and thus overcoming the frequently observed phenomenon of banner blindness, (b) to examine whether the gaze of a featured face possesses the ability to direct consumers' attention toward specific elements (i.e., the product) in an advertisement, and (c) to establish whether the gaze direction of an advertised face influences consumers' subsequent evaluation of the advertised product. We recorded participants' eye gaze while they viewed a fictional online shopping page displaying banner advertisements that featured either no human face or a human face that was either static or animated and involved different gaze directions (toward or away from the advertised product). Moreover, we asked participants to subsequently evaluate a set of products, one of which was the product previously featured in the banner advertisement. Results showed that, when advertisements included a human face, participants' attention was more attracted by, and they looked longer at, animated compared with static banner advertisements. Moreover, when a face gazed toward the product region, participants' likelihood of looking at the advertised product increased regardless of whether the face was animated or not. Most important, gaze direction influenced subsequent product evaluations; that is, consumers indicated a higher intention to buy a product when it was previously presented in a banner advertisement that featured a face gazing toward the product.
The results suggest that while animation in banner advertising constitutes a salient feature that captures consumers' visual attention, gaze cuing can be an effective tool for driving viewers' attention toward specific elements in the advertisement and even shaping consumers' intentions to purchase the advertised product.
Meaux, Emilie; Vuilleumier, Patrik
2016-11-01
The ability to decode facial emotions is of primary importance for human social interactions; yet, it is still debated how we analyze faces to determine their expression. Here we compared the processing of emotional face expressions through holistic integration and/or local analysis of visual features, and determined which brain systems mediate these distinct processes. Behavioral, physiological, and brain responses to happy and angry faces were assessed by presenting congruent global configurations of expressions (e.g., happy top + happy bottom), incongruent composite configurations (e.g., angry top + happy bottom), and isolated features (e.g., happy top only). Top and bottom parts were always from the same individual. Twenty-six healthy volunteers were scanned using fMRI while they classified the expression in either the top or the bottom face part while ignoring information in the other, non-target part. Results indicate that the recognition of happy and anger expressions is neither strictly holistic nor analytic. Both routes were involved, but with a different role for analytic and holistic information depending on the emotion type, and different weights of local features between happy and anger expressions. Dissociable neural pathways were engaged depending on emotional face configurations. In particular, regions within the face processing network differed in their sensitivity to holistic expression information, which predominantly activated fusiform and inferior occipital areas and the amygdala when internal features were congruent (i.e., template matching), whereas more local analysis of independent features preferentially engaged the STS and prefrontal areas (IFG/OFC) in the context of full face configurations, but early visual areas and the pulvinar when features were seen in isolated parts.
Collectively, these findings suggest that facial emotion recognition recruits separate, but interactive dorsal and ventral routes within the face processing networks, whose engagement may be shaped by reciprocal interactions and modulated by task demands. Copyright © 2016 Elsevier Inc. All rights reserved.
Interior detail of tower space; camera facing southwest. Mare ...
Interior detail of tower space; camera facing southwest. - Mare Island Naval Shipyard, Defense Electronics Equipment Operating Center, I Street, terminus west of Cedar Avenue, Vallejo, Solano County, CA
Mars Orbiter Camera Views the 'Face on Mars' - Calibrated, contrast enhanced, filtered,
NASA Technical Reports Server (NTRS)
1998-01-01
Shortly after midnight on Sunday morning (5 April 1998, 12:39 AM PST), the Mars Orbiter Camera (MOC) on the Mars Global Surveyor (MGS) spacecraft successfully acquired a high-resolution image of the 'Face on Mars' feature in the Cydonia region. The image was transmitted to Earth on Sunday and retrieved from the mission computer database on Monday morning (6 April 1998). The image was processed at the Malin Space Science Systems (MSSS) facility by 9:15 AM, and the raw image was immediately transferred to the Jet Propulsion Laboratory (JPL) for release to the Internet. The images shown here were subsequently processed at MSSS.
The picture was acquired 375 seconds after the spacecraft's 220th close approach to Mars. At that time, the 'Face', located at approximately 40.8° N, 9.6° W, was 275 miles (444 km) from the spacecraft. The 'morning' sun was 25° above the horizon. The picture has a resolution of 14.1 feet (4.3 meters) per pixel, making it ten times higher in resolution than the best previous image of the feature, which was taken by the Viking Mission in the mid-1970s. The full image covers an area 2.7 miles (4.4 km) wide and 25.7 miles (41.5 km) long. Processing: Image processing has been applied to the images in order to improve the visibility of features. This processing included the following steps: The image was processed to remove the sensitivity differences between adjacent picture elements (calibrated); this removes the vertical streaking. The contrast and brightness of the image were adjusted, and 'filters' were applied to enhance detail at several scales. The image was then geometrically warped to match the computed position information for a Mercator-type map. This corrected for the left-right flip and the non-vertical viewing angle (about 45° from vertical), but also introduced some vertical 'elongation' of the image, for the same reason that Greenland looks larger than Africa on a Mercator map of the Earth. A section of the image, containing the 'Face' and a couple of nearby impact craters and hills, was 'cut' out of the full image and reproduced separately. See PIA01440-1442 for additional processing steps. Also see PIA01236 for the raw image. Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA.
The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.
Mars Orbiter Camera Views the 'Face on Mars' - Calibrated, contrast enhanced, filtered
NASA Technical Reports Server (NTRS)
1998-01-01
Shortly after midnight on Sunday morning (5 April 1998, 12:39 AM PST), the Mars Orbiter Camera (MOC) on the Mars Global Surveyor (MGS) spacecraft successfully acquired a high-resolution image of the 'Face on Mars' feature in the Cydonia region. The image was transmitted to Earth on Sunday and retrieved from the mission computer database on Monday morning (6 April 1998). The image was processed at the Malin Space Science Systems (MSSS) facility by 9:15 AM, and the raw image was immediately transferred to the Jet Propulsion Laboratory (JPL) for release to the Internet. The images shown here were subsequently processed at MSSS.
The picture was acquired 375 seconds after the spacecraft's 220th close approach to Mars. At that time, the 'Face', located at approximately 40.8° N, 9.6° W, was 275 miles (444 km) from the spacecraft. The 'morning' sun was 25° above the horizon. The picture has a resolution of 14.1 feet (4.3 meters) per pixel, making it ten times higher in resolution than the best previous image of the feature, which was taken by the Viking Mission in the mid-1970s. The full image covers an area 2.7 miles (4.4 km) wide and 25.7 miles (41.5 km) long. Processing: Image processing has been applied to the images in order to improve the visibility of features. This processing included the following steps: The image was processed to remove the sensitivity differences between adjacent picture elements (calibrated); this removes the vertical streaking. The contrast and brightness of the image were adjusted, and 'filters' were applied to enhance detail at several scales. The image was then geometrically warped to match the computed position information for a Mercator-type map. This corrected for the left-right flip and the non-vertical viewing angle (about 45° from vertical), but also introduced some vertical 'elongation' of the image, for the same reason that Greenland looks larger than Africa on a Mercator map of the Earth. A section of the image, containing the 'Face' and a couple of nearby impact craters and hills, was 'cut' out of the full image and reproduced separately. See PIA01441-1442 for additional processing steps. Also see PIA01236 for the raw image. Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA.
The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.
A Multi-Modal Face Recognition Method Using Complete Local Derivative Patterns and Depth Maps
Yin, Shouyi; Dai, Xu; Ouyang, Peng; Liu, Leibo; Wei, Shaojun
2014-01-01
In this paper, we propose a multi-modal 2D + 3D face recognition method for a smart city application based on a Wireless Sensor Network (WSN) and various kinds of sensors. Depth maps are exploited for the 3D face representation. For feature extraction, we propose a new feature called the Complete Local Derivative Pattern (CLDP). It adopts the idea of layering and has four layers. In the whole system, we apply CLDP separately to Gabor features extracted from a 2D image and a depth map. We thus obtain two features: CLDP-Gabor and CLDP-Depth. The two features, weighted by the corresponding coefficients, are combined at the decision level to compute the total classification distance. Finally, the probe face is assigned the identity with the smallest classification distance. Extensive experiments are conducted on three different databases. The results demonstrate the robustness and superiority of the new approach. The experimental results also prove that the proposed multi-modal 2D + 3D method is superior to other multi-modal ones and that CLDP performs better than other Local Binary Pattern (LBP) based features. PMID:25333290
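The decision-level fusion step, a weighted sum of the per-modality classification distances followed by a minimum-distance identity assignment, can be sketched as below. The weights and distance values are made up for illustration; the paper's actual coefficients are not given in the abstract.

```python
import numpy as np

def fuse_and_identify(d_2d, d_3d, w_2d=0.6, w_3d=0.4):
    """Decision-level fusion: weighted sum of the per-gallery classification
    distances from the two modalities (e.g., CLDP-Gabor and CLDP-Depth);
    the probe is assigned the identity with the smallest total distance.
    The weights here are illustrative, not the paper's values."""
    total = w_2d * np.asarray(d_2d) + w_3d * np.asarray(d_3d)
    return int(np.argmin(total)), total

# Distances of one probe to a gallery of 4 enrolled identities (made-up numbers).
identity, total = fuse_and_identify([0.9, 0.2, 0.7, 0.8], [0.8, 0.3, 0.1, 0.9])
print(identity)  # -> 1
```

Note that the fused decision need not agree with the best single modality: here the 3D distances alone favor identity 2, but the weighted combination selects identity 1.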
Instrument and method for focusing X-rays, gamma rays and neutrons
Smither, Robert K.
1984-01-01
A crystal diffraction instrument or diffraction grating instrument with an improved crystalline structure or grating spacing structure having a face for receiving a beam of photons or neutrons and diffraction planar spacing or grating spacing along that face, with the spacing increasing progressively along the face to provide a decreasing Bragg diffraction angle for monochromatic radiation and thereby increasing the usable area and acceptance angle. The increased planar spacing for the diffraction crystal is provided by the use of a temperature differential across the crystalline structure, by assembling a plurality of crystalline structures with different compositions, by an individual crystalline structure with a varying composition and thereby a changing planar spacing along its face, and by combinations of these techniques. The increased diffraction grating element spacing is generated during the fabrication of the diffraction grating by controlling the cutting tool that is cutting the grooves or controlling the laser beam, electron beam or ion beam that is exposing the resist layer, etc. It is also possible to adjust this variation in grating spacing by applying a thermal gradient to the diffraction grating, in much the same manner as in the crystal diffraction case.
Instrument and method for focusing x rays, gamma rays, and neutrons
Smither, R.K.
1982-03-25
A crystal-diffraction instrument or diffraction-grating instrument is described with an improved crystalline structure or grating spacing structure having a face for receiving a beam of photons or neutrons and diffraction planar spacing or grating spacing along that face, with the spacing increasing progressively along the face to provide a decreasing Bragg diffraction angle for monochromatic radiation and thereby increasing the usable area and acceptance angle. The increased planar spacing for the diffraction crystal is provided by the use of a temperature differential across the crystalline structure, by assembling a plurality of crystalline structures with different compositions, by an individual crystalline structure with a varying composition and thereby a changing planar spacing along its face, and by combinations of these techniques. The increased diffraction-grating element spacing is generated during the fabrication of the diffraction grating by controlling the cutting tool that is cutting the grooves or controlling the laser beam, electron beam, or ion beam that is exposing the resist layer, etc. It is also possible to adjust this variation in grating spacing by applying a thermal gradient to the diffraction grating, in much the same manner as in the crystal-diffraction case.
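The central relationship in both records, that a progressively increasing planar spacing d yields a decreasing Bragg angle for fixed-wavelength radiation, follows directly from Bragg's law, nλ = 2d sin θ. A small numeric sketch (the wavelength and spacings are chosen for illustration only):

```python
import math

def bragg_angle_deg(wavelength_nm, d_nm, order=1):
    """Bragg angle theta (degrees) from n*lambda = 2*d*sin(theta)."""
    return math.degrees(math.asin(order * wavelength_nm / (2.0 * d_nm)))

lam = 0.154  # nm, roughly Cu K-alpha X-rays (illustrative choice)
for d in (0.20, 0.25, 0.30, 0.35):  # planar spacing increasing along the crystal face
    print(f"d = {d:.2f} nm -> theta = {bragg_angle_deg(lam, d):.1f} deg")
```

As d increases along the face, θ falls monotonically, which is exactly why a graded spacing enlarges the acceptance angle for a monochromatic beam.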
FEATURE 3, LARGE GUN POSITION, SHOWING MULTIPLE COMPARTMENTS, VIEW FACING ...
FEATURE 3, LARGE GUN POSITION, SHOWING MULTIPLE COMPARTMENTS, VIEW FACING SOUTH (with scale stick). - Naval Air Station Barbers Point, Anti-Aircraft Battery Complex-Large Gun Position, East of Coral Sea Road, northwest of Hamilton Road, Ewa, Honolulu County, HI
Undercut feature recognition for core and cavity generation
NASA Astrophysics Data System (ADS)
Yusof, Mursyidah Md; Salman Abu Mansor, Mohd
2018-01-01
The core and cavity are among the most important components in an injection mould, and the quality of the final product depends largely on them. In industry, mould designers with years of experience and skill commonly use commercial CAD software to design the core and cavity, which is time consuming. This paper proposes an algorithm that detects possible undercut features and generates the core and cavity. Two approaches are presented: edge convexity and face connectivity. The edge convexity approach is used to recognize undercut features, while face connectivity is used to divide the faces into top and bottom regions.
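The edge-convexity idea can be illustrated with a standard geometric test: an edge shared by two faces is convex when the neighbouring face lies on the negative side of the first face's plane (the surface "folds away" from the normal). This is a generic formulation, not the paper's specific algorithm; the function and variable names are assumptions.

```python
import numpy as np

def edge_is_convex(n1, p2, edge_point):
    """Convexity test for an edge shared by two faces.

    n1         : unit normal of the first face
    p2         : any point on the second face, off the shared edge
    edge_point : any point on the shared edge

    The edge is convex if p2 lies below the first face's plane,
    i.e. the neighbouring face folds away from n1 (as on the outer
    edge of a box); otherwise the edge is concave (as in a groove,
    the kind of geometry that signals a potential undercut).
    """
    return float(np.dot(n1, p2 - edge_point)) < 0.0

# Outer edge of a box: top face meets a side face -> convex.
top_normal = np.array([0.0, 0.0, 1.0])
edge_pt    = np.array([1.0, 0.0, 1.0])
side_pt    = np.array([1.0, 0.0, 0.0])   # point on the vertical side face
print(edge_is_convex(top_normal, side_pt, edge_pt))   # True

# Floor of a step meeting a rising wall -> concave.
floor_normal = np.array([0.0, 0.0, 1.0])
edge_pt2     = np.array([0.0, 0.0, 0.0])
wall_pt      = np.array([0.0, 1.0, 0.5])  # wall rises above the floor plane
print(edge_is_convex(floor_normal, wall_pt, edge_pt2))  # False
```

In a full system this test would be run over every edge of the B-rep model, with concave edges flagged as candidate undercut boundaries relative to the mould opening direction.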
Face processing in Williams syndrome is already atypical in infancy.
D'Souza, Dean; Cole, Victoria; Farran, Emily K; Brown, Janice H; Humphreys, Kate; Howard, John; Rodic, Maja; Dekker, Tessa M; D'Souza, Hana; Karmiloff-Smith, Annette
2015-01-01
Face processing is a crucial socio-cognitive ability. Is it acquired progressively or does it constitute an innately-specified, face-processing module? The latter would be supported if some individuals with seriously impaired intelligence nonetheless showed intact face-processing abilities. Some theorists claim that Williams syndrome (WS) provides such evidence since, despite IQs in the 50s, adolescents/adults with WS score in the normal range on standardized face-processing tests. Others argue that atypical neural and cognitive processes underlie WS face-processing proficiencies. But what about infants with WS? Do they start with typical face-processing abilities, with atypicality developing later, or are atypicalities already evident in infancy? We used an infant familiarization/novelty design and compared infants with WS to typically developing controls as well as to a group of infants with Down syndrome matched on both mental and chronological age. Participants were familiarized with a schematic face, after which they saw a novel face in which either the features (eye shape) were changed or just the configuration of the original features. Configural changes were processed successfully by controls, but not by infants with WS who were only sensitive to featural changes and who showed syndrome-specific profiles different from infants with the other neurodevelopmental disorder. Our findings indicate that theorists can no longer use the case of WS to support claims that evolution has endowed the human brain with an independent face-processing module.
Interpretation of Appearance: The Effect of Facial Features on First Impressions and Personality
Wolffhechel, Karin; Fagertun, Jens; Jacobsen, Ulrik Plesner; Majewski, Wiktor; Hemmingsen, Astrid Sofie; Larsen, Catrine Lohmann; Lorentzen, Sofie Katrine; Jarmer, Hanne
2014-01-01
Appearance is known to influence social interactions, which in turn could potentially influence personality development. In this study we focus on discovering the relationship between self-reported personality traits, first impressions and facial characteristics. The results reveal that several personality traits can be read above chance from a face, and that facial features influence first impressions. Despite the former, our prediction model fails to reliably infer personality traits from either facial features or first impressions. First impressions, however, could be inferred more reliably from facial features. We have generated artificial, extreme faces visualising the characteristics having an effect on first impressions for several traits. Conclusively, we find a relationship between first impressions, some personality traits and facial features and consolidate that people on average assess a given face in a highly similar manner. PMID:25233221
Spaced education activates students in a theoretical radiological science course: a pilot study.
Nkenke, Emeka; Vairaktaris, Elefterios; Bauersachs, Anne; Eitner, Stephan; Budach, Alexander; Knipfer, Christian; Stelzle, Florian
2012-05-23
The present study aimed at determining whether the addition of spaced education to traditional face-to-face lectures increased the time students spent engaging with the learning content of a theoretical radiological science course. The study comprised two groups of 21 third-year dental students. The students were randomly assigned to a "traditional group" and a "spaced education group". Both groups followed a traditional face-to-face course. The intervention in the spaced education group was performed in such a way that these students received e-mails 14 days after each face-to-face lecture. These e-mails contained multiple-choice questions on the learning content of the lectures. The students returned their answers to the questions, also by e-mail. On return, they received an additional e-mail that included the correct answers and additional explanatory material. All students of both groups documented the time they worked on the learning content of the different lectures before a multiple-choice exam was held after completion of the course. All students of both groups completed the TRIL questionnaire (Trierer Inventar zur Lehrevaluation) for the evaluation of university courses after completion of the course. The results for the time invested in the learning content and the results of the questionnaire were compared between the two groups using the Mann-Whitney U test. The spaced education group spent significantly more time (216.2 ± 123.9 min) engaging with the learning content than the traditional group (58.4 ± 94.8 min, p < .0005). The spaced education group rated the didactics of the course significantly better than the traditional group (p = .034). The students of the spaced education group also felt that their needs were fulfilled significantly better than did those of the traditional group as far as communication with the teacher was concerned (p = .022).
Adding spaced education to a face-to-face theoretical radiological science course activates students, leading them to spend significantly more time engaging with the learning content.
Adjudicating between face-coding models with individual-face fMRI responses
Kriegeskorte, Nikolaus
2017-01-01
The perceptual representation of individual faces is often explained with reference to a norm-based face space. In such spaces, individuals are encoded as vectors where identity is primarily conveyed by direction and distinctiveness by eccentricity. Here we measured human fMRI responses and psychophysical similarity judgments of individual face exemplars, which were generated as realistic 3D animations using a computer-graphics model. We developed and evaluated multiple neurobiologically plausible computational models, each of which predicts a representational distance matrix and a regional-mean activation profile for 24 face stimuli. In the fusiform face area, a face-space coding model with sigmoidal ramp tuning provided a better account of the data than one based on exemplar tuning. However, an image-processing model with weighted banks of Gabor filters performed similarly. Accounting for the data required the inclusion of a measurement-level population averaging mechanism that approximates how fMRI voxels locally average distinct neuronal tunings. Our study demonstrates the importance of comparing multiple models and of modeling the measurement process in computational neuroimaging. PMID:28746335
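The model-comparison machinery the abstract relies on, predicting a representational distance matrix (RDM) for the 24 face stimuli and scoring it against the measured one, can be sketched with simulated patterns. The data here are random stand-ins, and the comparison statistic (Pearson correlation of RDM upper triangles) is one common choice, not necessarily the paper's exact procedure.

```python
import numpy as np

def rdm(patterns):
    """Representational distance matrix: pairwise Euclidean distances
    between the response patterns evoked by each stimulus."""
    n = len(patterns)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d[i, j] = np.linalg.norm(patterns[i] - patterns[j])
    return d

def rdm_correlation(rdm_a, rdm_b):
    """Score a model RDM against a measured RDM by correlating
    their upper triangles (the diagonal is trivially zero)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

rng = np.random.default_rng(2)
measured = rng.standard_normal((24, 100))                    # 24 stimuli x 100 voxels (simulated)
model    = measured + 0.3 * rng.standard_normal((24, 100))   # a model that tracks the data
r = rdm_correlation(rdm(measured), rdm(model))
print(round(r, 2))
```

Each candidate model (exemplar tuning, ramp tuning, Gabor filter bank) would produce its own predicted RDM, and the one correlating best with the measured RDM is preferred, which is the comparison logic the abstract describes.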
Information Theory for Gabor Feature Selection for Face Recognition
NASA Astrophysics Data System (ADS)
Shen, Linlin; Bai, Li
2006-12-01
A discriminative and robust feature, the kernel-enhanced informative Gabor feature, is proposed in this paper for face recognition. Mutual information is applied to select a set of informative and nonredundant Gabor features, which are then further enhanced by kernel methods for recognition. Compared with one of the top performing methods in the 2004 Face Verification Competition (FVC2004), our method demonstrates a clear advantage in accuracy, computational efficiency, and memory cost. The proposed method has been fully tested on the FERET database using the FERET evaluation protocol, and significant improvements on three of the test data sets are observed. Compared with classical Gabor wavelet-based approaches that use a huge number of features, our method requires less than 4 milliseconds to retrieve a few hundred features. Due to the substantially reduced feature dimension, only 4 seconds are required to recognize 200 face images. The paper also unifies different Gabor filter definitions and proposes a training sample generation algorithm to reduce the effects caused by the unbalanced number of samples available in different classes.
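The information-theoretic selection step can be sketched as follows (a generic sketch: a histogram estimate of mutual information between each feature and the class label, followed by a plain top-k ranking; the paper additionally penalizes redundancy between the selected features):

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of I(X;Y) in nats, for a continuous
    feature x and a non-negative integer class label y."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    joint = np.zeros((bins, int(y.max()) + 1))
    for xi, yi in zip(xd, y):
        joint[xi, yi] += 1
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal of the feature
    py = p.sum(axis=0, keepdims=True)   # marginal of the label
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def select_informative(features, labels, k):
    """Rank columns of `features` by MI with the labels, keep top k."""
    scores = [mutual_information(features[:, j], labels)
              for j in range(features.shape[1])]
    return np.argsort(scores)[::-1][:k]
```

In the paper's setting the columns would be Gabor filter responses at different positions, scales, and orientations, and the labels would be face identities.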
The relative importance of external and internal features of facial composites.
Frowd, Charlie; Bruce, Vicki; McIntyre, Alex; Hancock, Peter
2007-02-01
Three experiments are reported that compare the quality of external and internal regions within a set of facial composites using two matching-type tasks. Composites are constructed with the aim of triggering recognition from people familiar with the targets, and past research suggests internal face features dominate representations of familiar faces in memory. However, the experiments reported here show that the internal regions of composites are very poorly matched against the faces they purport to represent, while external feature regions alone were matched almost as well as complete composites. In Experiments 1 and 2, the composites used were constructed by participant-witnesses who were unfamiliar with the targets and therefore were predicted to demonstrate a bias towards the external parts of a face. In Experiment 3, we compared witnesses who were familiar or unfamiliar with the target items, but for both groups the external features were much better reproduced in the composites, suggesting it is the process of composite construction itself that is responsible for the poverty of the internal features. Practical implications of these results are discussed.
False match elimination for face recognition based on SIFT algorithm
NASA Astrophysics Data System (ADS)
Gu, Xuyuan; Shi, Ping; Shao, Meide
2011-06-01
The SIFT (Scale Invariant Feature Transform) is a well-known algorithm used to detect and describe local features in images. It is invariant to image scale and rotation, and robust to noise and illumination changes. In this paper, a novel method for face recognition based on SIFT is proposed, which combines an optimization of SIFT, mutual matching, and Progressive Sample Consensus (PROSAC), and can effectively eliminate the false matches of face recognition. Experiments on the ORL face database show that many false matches can be eliminated and a better recognition rate is achieved.
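The mutual-matching stage of such a pipeline can be sketched on raw descriptor arrays (a generic numpy sketch: a pair is kept only if each descriptor is the other's nearest neighbour; a real pipeline would obtain the descriptors from a SIFT implementation and then prune the survivors geometrically with PROSAC):

```python
import numpy as np

def mutual_matches(desc_a, desc_b):
    """Keep only descriptor pairs (i, j) that are each other's
    nearest neighbour under Euclidean distance."""
    # all pairwise distances between the two descriptor sets
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    a_to_b = d.argmin(axis=1)   # best match in B for each row of A
    b_to_a = d.argmin(axis=0)   # best match in A for each row of B
    return [(i, int(j)) for i, j in enumerate(a_to_b) if b_to_a[j] == i]
```

One-sided nearest-neighbour matching accepts many spurious pairs; requiring agreement in both directions is a cheap first filter before the geometric consensus step.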
Interior view of second floor space; camera facing southwest. ...
Interior view of second floor space; camera facing southwest. - Mare Island Naval Shipyard, Hospital Ward, Johnson Lane, west side at intersection of Johnson Lane & Cossey Street, Vallejo, Solano County, CA
Shaded Relief of Rio Sao Francisco, Brazil
NASA Technical Reports Server (NTRS)
2000-01-01
This topographic image acquired by SRTM shows an area south of the Sao Francisco River in Brazil. The scrub forest terrain shows relief of about 400 meters (1300 feet). Areas such as these are difficult to map by traditional methods because of frequent cloud cover and local inaccessibility. This region has little topographic relief, but even subtle changes in topography have far-reaching effects on regional ecosystems. The image covers an area of 57 km x 79 km and represents one quarter of the 225 km SRTM swath. Colors range from dark blue at water level to white and brown at hill tops. The terrain features that are clearly visible in this image include tributaries of the Sao Francisco, the dark-blue branch-like features visible from top right to bottom left, and on the left edge of the image, and hills rising up from the valley floor. The San Francisco River is a major source of water for irrigation and hydroelectric power. Mapping such regions will allow scientists to better understand the relationships between flooding cycles, forestation and human influences on ecosystems.
This shaded relief image was generated using topographic data from the Shuttle Radar Topography Mission. A computer-generated artificial light source illuminates the elevation data to produce a pattern of light and shadows. Slopes facing the light appear bright, while those facing away are shaded. On flatter surfaces, the pattern of light and shadows can reveal subtle features in the terrain. Shaded relief maps are commonly used in applications such as geologic mapping and land use planning. The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission is designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, DC.
ERIC Educational Resources Information Center
Campbell, Ruth; And Others
1995-01-01
Studied 4- to 10-year-olds' familiarity judgments of peers. Found that, contrary to adults, external facial features were key. Also found that the switch to adult recognition pattern takes place after the ninth year. (ETB)
Fusion of LBP and SWLD using spatio-spectral information for hyperspectral face recognition
NASA Astrophysics Data System (ADS)
Xie, Zhihua; Jiang, Peng; Zhang, Shuai; Xiong, Jinquan
2018-01-01
Hyperspectral imaging, which records intrinsic spectral information of the skin across different spectral bands, has become an important approach for robust face recognition. However, the main challenges for hyperspectral face recognition are high data dimensionality, low signal-to-noise ratio and inter-band misalignment. In this paper, hyperspectral face recognition based on LBP (Local Binary Pattern) and SWLD (Simplified Weber Local Descriptor) is proposed to extract discriminative local features from spatio-spectral fusion information. Firstly, a spatio-spectral fusion strategy based on statistical information is used to obtain discriminative features of hyperspectral face images. Secondly, LBP is applied to extract the orientation of the fused face edges. Thirdly, SWLD is proposed to encode the intensity information in hyperspectral images. Finally, we adopt a symmetric Kullback-Leibler distance to compare the encoded face images. The approach is tested on the Hong Kong Polytechnic University Hyperspectral Face database (PolyUHSFD). Experimental results show that the proposed method achieves a higher recognition rate (92.8%) than state-of-the-art hyperspectral face recognition algorithms.
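The LBP stage of such a descriptor can be sketched as follows (a minimal 3x3 LBP on a single-band image; the paper applies it to the fused hyperspectral representation and then histograms the codes):

```python
import numpy as np

def lbp(image):
    """Basic 3x3 local binary pattern: compare each interior pixel
    with its eight neighbours and pack the comparisons into an
    8-bit code per pixel."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    center = img[1:-1, 1:-1]
    # neighbour offsets, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(center.shape, dtype=int)
    for bit, (di, dj) in enumerate(offsets):
        neigh = img[1 + di:h - 1 + di, 1 + dj:w - 1 + dj]
        code += (neigh >= center).astype(int) << bit
    return code
```

Recognition then compares histograms of these codes between face images, e.g. with the symmetric Kullback-Leibler distance the abstract mentions.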
6. INTERIOR MAIN SPACE DETAIL VIEW, FACING EAST. BUILDING NO ...
6. INTERIOR MAIN SPACE DETAIL VIEW, FACING EAST. BUILDING NO 42 GARAGE & TRANSPORTATION MAINTENANCE FACILITY - NASA Industrial Plant, Garage & Transportation Maintenance Facility, 12214 Lakewood Boulevard, Downey, Los Angeles County, CA
5. INTERIOR MAIN SPACE DETAIL VIEW, FACING WEST. BUILDING NO ...
5. INTERIOR MAIN SPACE DETAIL VIEW, FACING WEST. BUILDING NO 42 GARAGE & TRANSPORTATION MAINTENANCE FACILITY - NASA Industrial Plant, Garage & Transportation Maintenance Facility, 12214 Lakewood Boulevard, Downey, Los Angeles County, CA
Hubble Observes Surface of Titan
NASA Technical Reports Server (NTRS)
1994-01-01
Scientists for the first time have made images of the surface of Saturn's giant, haze-shrouded moon, Titan. They mapped light and dark features over the surface of the satellite during nearly a complete 16-day rotation. One prominent bright area they discovered is a surface feature 2,500 miles across, about the size of the continent of Australia.
Titan, larger than Mercury and slightly smaller than Mars, is the only body in the solar system, other than Earth, that may have oceans and rainfall on its surface, albeit oceans and rain of ethane-methane rather than water. Scientists suspect that Titan's present environment -- although colder than minus 289 degrees Fahrenheit, so cold that water ice would be as hard as granite -- might be similar to that on Earth billions of years ago, before life began pumping oxygen into the atmosphere. Peter H. Smith of the University of Arizona Lunar and Planetary Laboratory and his team took the images with the Hubble Space Telescope during 14 observing runs between Oct. 4 and 18. Smith announced the team's first results last week at the 26th annual meeting of the American Astronomical Society Division for Planetary Sciences in Bethesda, Md. Co-investigators on the team are Mark Lemmon, a doctoral candidate with the UA Lunar and Planetary Laboratory; John Caldwell of York University, Canada; Larry Sromovsky of the University of Wisconsin; and Michael Allison of the Goddard Institute for Space Studies, New York City. Titan's atmosphere, about four times as dense as Earth's atmosphere, is primarily nitrogen laced with such poisonous substances as methane and ethane. This thick, orange, hydrocarbon haze was impenetrable to cameras aboard the Pioneer and Voyager spacecraft that flew by the Saturn system in the late 1970s and early 1980s. The haze is formed as methane in the atmosphere is destroyed by sunlight. The hydrocarbons produced by this methane destruction form a smog similar to that found over large cities, but much thicker. Smith's group used the Hubble Space Telescope's Wide Field/Planetary Camera 2 at near-infrared wavelengths (between .85 and 1.05 microns). Titan's haze is transparent enough in this wavelength range to allow mapping of surface features according to their reflectivity. 
Only Titan's polar regions could not be mapped this way, due to the telescope's viewing angle of the poles and the thick haze near the edge of the disk. Their image resolution (that is, the smallest distance seen in detail) with the WFPC2 at the near-infrared wavelength is 360 miles. The 14 images processed and compiled into the Titan surface map were as 'noise' free, or as free of signal interference, as the space telescope allows, Smith said. Titan makes one complete orbit around Saturn in 16 days, roughly the duration of the imaging project. Scientists have suspected that Titan's rotation also takes 16 days, so that the same hemisphere of Titan always faces Saturn, just as the same hemisphere of the Earth's moon always faces the Earth. Recent observations by Lemmon and colleagues at the University of Arizona confirm this is true. It's too soon to conclude much about what the dark and bright areas in the Hubble Space Telescope images are -- continents, oceans, impact craters or other features, Smith said. Scientists have long suspected that Titan's surface was covered with a global ethane-methane ocean. The new images show that there is at least some solid surface. Smith's team made a total of 50 images of Titan last month in their program, a project to search for small-scale features in Titan's lower atmosphere and surface. They have yet to analyze the images for information about Titan's clouds and winds. That analysis could help explain whether the bright areas are major impact craters in the frozen water ice-and-rock or higher-altitude features. The images are important information for the Cassini mission, which is to launch a robotic spacecraft on a 7-year journey to Saturn in October 1997. About three weeks before Cassini's first flyby of Titan, the spacecraft is to release the European Space Agency's Huygens Probe to parachute to Titan's surface. 
Images like those Smith's team has taken of Titan can be used to identify choice landing spots -- and help engineers and scientists understand how Titan's winds will blow the parachute through the satellite's atmosphere. UA scientists play major roles in the Cassini mission: Carolyn C. Porco, an associate professor at the Lunar and Planetary Laboratory, leads the 14-member Cassini Imaging Team. Jonathan I. Lunine, also an associate professor at the lab, is the only American selected by the European Space Agency to be on the three-member Huygens Probe interdisciplinary science team. Smith is a member of research professor Martin G. Tomasko's international team of scientists who will image the surface of Titan in visible light and in color with the Descent Imager/Spectral Radiometer, one of five instruments in the Huygens Probe's French, German, Italian and U.S. experiment payload. Senior research associate Lyn R. Doose is also on Tomasko's team. Lunine and LPL professor Donald M. Hunten are members of the science team for another U.S. instrument on that payload, the gas chromatograph mass spectrometer. Hunten was on the original Cassini mission science definition team back in 1983. PHOTO CAPTION: Four global projections of the HST Titan data, separated in longitude by 90 degrees. Upper left: hemisphere facing Saturn. Upper right: leading hemisphere (brightest region). Lower left: the hemisphere which never faces Saturn. Lower right: trailing hemisphere. Note that these assignments assume that the rotation is synchronous. The imaging team says its data strongly support this assumption -- a longer time baseline is needed for proof. 
The surface near the poles is never visible to an observer in Titan's equatorial plane because of the large optical path. The Wide Field/Planetary Camera 2 was developed by the Jet Propulsion Laboratory and managed by the Goddard Space Flight Center for NASA's Office of Space Science. This image and other images and data received from the Hubble Space Telescope are posted on the World Wide Web on the Space Telescope Science Institute home page at URL http://oposite.stsci.edu/pubinfo/
NASA Astrophysics Data System (ADS)
Zhang, Yi-Qing; Cui, Jing; Zhang, Shu-Min; Zhang, Qi; Li, Xiang
2016-02-01
Modelling temporal networks of human face-to-face contacts is vital both for understanding the spread of airborne pathogens and the word-of-mouth spread of information. Although many efforts have been devoted to modelling these temporal networks, two important social features, public activity and individual reachability, have still been ignored in these models. Here we present a simple model that captures these two features as well as other typical properties of empirical face-to-face contact networks. The model describes agents which are characterized by an attractiveness that slows down the motion of nearby people, have an event-triggered activation probability, and perform an activity-dependent biased random walk in a square box with periodic boundaries. The model quantitatively reproduces two empirical temporal networks of human face-to-face contacts, as verified by their network properties and the epidemic spreading dynamics on them.
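The core of such an agent-based contact model can be sketched minimally as follows (a simplified illustration in the spirit of attractiveness-based mobility models: agents slow down near attractive neighbours and contacts are logged within a distance threshold; the event-triggered activation and activity-dependent bias the abstract describes are omitted, and all parameters are invented):

```python
import numpy as np

def simulate_contacts(n=20, steps=100, box=10.0, radius=1.0, seed=0):
    """Agents random-walk in a periodic box; an agent with neighbours
    moves with probability 1 - (max attractiveness of its neighbours),
    so attractive agents hold people around them. Each contact is
    logged as a (time, i, j) triple with i < j."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, box, size=(n, 2))
    attract = rng.uniform(0.0, 1.0, size=n)
    contacts = []
    for t in range(steps):
        # pairwise distances with periodic (wrap-around) boundaries
        diff = (pos[:, None] - pos[None, :] + box / 2) % box - box / 2
        d = np.linalg.norm(diff, axis=2)
        near = (d < radius) & ~np.eye(n, dtype=bool)
        for i, j in zip(*np.where(np.triu(near))):
            contacts.append((t, int(i), int(j)))
        p_move = np.ones(n)
        for i in range(n):
            if near[i].any():
                p_move[i] = 1.0 - attract[near[i]].max()
        move = rng.random(n) < p_move
        pos[move] = (pos[move] + rng.normal(0.0, 0.5, (n, 2))[move]) % box
    return contacts
```

The resulting `(t, i, j)` event list is exactly the format of empirical face-to-face datasets, so the same network statistics and epidemic simulations can be run on both.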
Influence of skin ageing features on Chinese women's perception of facial age and attractiveness.
Porcheron, A; Latreille, J; Jdid, R; Tschachler, E; Morizot, F
2014-08-01
Ageing leads to characteristic changes in the appearance of facial skin. Among these changes, we can distinguish the skin topographic cues (skin sagging and wrinkles), the dark spots and the dark circles around the eyes. Although skin changes are similar in Caucasian and Chinese faces, the age of occurrence and the severity of age-related features differ between the two populations. Little is known about how the ageing of skin influences the perception of female faces in Chinese women. The aim of this study is to evaluate the contribution of the different age-related skin features to the perception of age and attractiveness in Chinese women. Facial images of Caucasian women and Chinese women in their 60s were manipulated separately to reduce the following skin features: (i) skin sagging and wrinkles, (ii) dark spots and (iii) dark circles. Finally, all signs were reduced simultaneously (iv). Female Chinese participants were asked to estimate the age difference between the modified and original images and evaluate the attractiveness of modified and original faces. Chinese women perceived the Chinese faces as younger after the manipulation of dark spots than after the reduction in wrinkles/sagging, whereas they perceived the Caucasian faces as the youngest after the manipulation of wrinkles/sagging. Interestingly, Chinese women evaluated faces with reduced dark spots as being the most attractive whatever the origin of the face. The manipulation of dark circles contributed to making Caucasian and Chinese faces being perceived younger and more attractive than the original faces, although the effect was less pronounced than for the two other types of manipulation. This is the first study to have examined the influence of various age-related skin features on the facial age and attractiveness perception of Chinese women. The results highlight different contributions of dark spots, sagging/wrinkles and dark circles to their perception of Chinese and Caucasian faces. 
© 2014 The Authors. International Journal of Cosmetic Science published by John Wiley & Sons Ltd on behalf of Society of Cosmetic Scientists and Societe Francaise de Cosmetologie.
Diagnostic Features of Emotional Expressions Are Processed Preferentially
Scheller, Elisa; Büchel, Christian; Gamer, Matthias
2012-01-01
Diagnostic features of emotional expressions are differentially distributed across the face. The current study examined whether these diagnostic features are preferentially attended to even when they are irrelevant for the task at hand or when faces appear at different locations in the visual field. To this aim, fearful, happy and neutral faces were presented to healthy individuals in two experiments while measuring eye movements. In Experiment 1, participants had to accomplish an emotion classification, a gender discrimination or a passive viewing task. To differentiate fast, potentially reflexive, eye movements from a more elaborate scanning of faces, stimuli were either presented for 150 or 2000 ms. In Experiment 2, similar faces were presented at different spatial positions to rule out the possibility that eye movements only reflect a general bias for certain visual field locations. In both experiments, participants fixated the eye region much longer than any other region in the face. Furthermore, the eye region was attended to more pronouncedly when fearful or neutral faces were shown whereas more attention was directed toward the mouth of happy facial expressions. Since these results were similar across the other experimental manipulations, they indicate that diagnostic features of emotional expressions are preferentially processed irrespective of task demands and spatial locations. Saliency analyses revealed that a computational model of bottom-up visual attention could not explain these results. Furthermore, as these gaze preferences were evident very early after stimulus onset and occurred even when saccades did not allow for extracting further information from these stimuli, they may reflect a preattentive mechanism that automatically detects relevant facial features in the visual field and facilitates the orientation of attention towards them. 
This mechanism might crucially depend on amygdala functioning and it is potentially impaired in a number of clinical conditions such as autism or social anxiety disorders. PMID:22848607
Measuring sexual dimorphism with a race-gender face space.
Hopper, William J; Finklea, Kristin M; Winkielman, Piotr; Huber, David E
2014-10-01
Faces are complex visual objects, and faces chosen to vary in one regard may unintentionally vary in other ways, particularly if the correlation is a property of the population of faces. Here, we present an example of a correlation that arises from differences in the degree of sexual dimorphism. In Experiment 1, paired similarity ratings were collected for a set of 40 real face images chosen to vary in terms of gender and race (Asian vs. White). Multidimensional scaling (MDS) placed these stimuli in a "face space," with different attributes corresponding to different dimensions. Gender was found to vary more for White faces, resulting in a negative or positive correlation between gender and race when considering only male or only female faces. This increased sexual dimorphism for White faces may provide an alternative explanation for differences in face processing between White and Asian faces (e.g., the own-race bias, face attractiveness biases, etc.). Studies of face processing that are unconfounded by this difference in the degree of sexual dimorphism require stimuli that are decorrelated in terms of race and gender. Decorrelated faces were created using a morphing technique, spacing the morphs uniformly around a ring in the 2-dimensional (2D) race-gender plane. In Experiment 2, paired similarity ratings confirmed the 2D positions of the morph faces. In Experiment 3, race and gender category judgments varied uniformly for these decorrelated stimuli. Our results and stimuli should prove useful for studying sexual dimorphism and for the study of face processing more generally.
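The MDS step, which turns pairwise dissimilarities into coordinates in a face space, can be sketched with classical (Torgerson) scaling (a generic numpy sketch, not the authors' specific MDS procedure):

```python
import numpy as np

def classical_mds(dissim, dims=2):
    """Classical (Torgerson) multidimensional scaling: embed items so
    that Euclidean distances approximate the given dissimilarities."""
    d2 = np.asarray(dissim, dtype=float) ** 2
    n = d2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centring matrix
    B = -0.5 * J @ d2 @ J                     # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dims]     # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.clip(vals[order], 0, None))
```

In the study's setting, the dissimilarity matrix would come from the paired similarity ratings, and the recovered dimensions are then interpreted post hoc (e.g., as race and gender axes).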
Collaborative Spaces for GIS-Based Multimedia Cartography in Blended Environments
ERIC Educational Resources Information Center
Balram, Shivanand; Dragicevic, Suzana
2008-01-01
The interaction spaces between instructors and learners in the traditional face-to-face classroom environment are being changed by the diffusion and adoption of many forms of computer-based pedagogy. An integrated understanding of these evolving interaction spaces, together with how they interconnect and leverage learning, is needed to develop…
Social Software and The Future of Conferences Right Now
ERIC Educational Resources Information Center
Suter, Vicki; Alexander, Bryan; Kaplan, Pascal
2005-01-01
Until recently, the models for conceptualizing activities in physical space and in Internet space have been limited by the thought that one or the other has to be chosen. An initial integration of these apparently disparate spaces emerged when participants in face-to-face meetings (e.g., annual professional society meetings) supplemented their…
The Difficulties That the Undergraduate Students Face about Inner Product Space
ERIC Educational Resources Information Center
Burhanzade, Hülya; Aygör, Nilgün
2016-01-01
In this qualitative research, we studied difficulties that undergraduate students face while learning the concept of inner product space. Participants were 35 first-year mathematics students from Yildiz Technical University in the 2011 and 2012 academic years. We asked participants to solve 5 inner product space questions. Data were jointly…
Hills, Peter J; Pake, J Michael
2013-12-01
Own-race faces are recognised more accurately than other-race faces and may even be viewed differently as measured by an eye-tracker (Goldinger, Papesh, & He, 2009). Alternatively, observer race might direct eye-movements (Blais, Jack, Scheepers, Fiset, & Caldara, 2008). Observer differences in eye-movements are likely to be based on experience of the physiognomic characteristics that are differentially discriminating for Black and White faces. Two experiments are reported that employed standard old/new recognition paradigms in which Black and White observers viewed Black and White faces with their eye-movements recorded. Experiment 1 showed that there were observer race differences in terms of the features scanned but observers employed the same strategy across different types of faces. Experiment 2 demonstrated that other-race faces could be recognised more accurately if participants had their first fixation directed to more diagnostic features using fixation crosses. These results are entirely consistent with those presented by Blais et al. (2008) and with the perceptual interpretation that the own-race bias is due to inappropriate attention allocated to the facial features (Hills & Lewis, 2006, 2011). Copyright © 2013 Elsevier B.V. All rights reserved.
A model of face selection in viewing video stories
Suda, Yuki; Kitazawa, Shigeru
2015-01-01
When typical adults watch TV programs, they show surprisingly stereotyped gaze behaviours, as indicated by the almost simultaneous shifts of their gazes from one face to another. However, a standard saliency model based on low-level physical features alone failed to explain such typical gaze behaviours. To find rules that explain the typical gaze behaviours, we examined temporo-spatial gaze patterns in adults while they viewed video clips with human characters that were played with or without sound, and in the forward or reverse direction. We here show the following: 1) the “peak” face scanpath, which followed the face that attracted the largest number of views but ignored other objects in the scene, still retained the key features of actual scanpaths, 2) gaze behaviours remained unchanged whether the sound was provided or not, 3) the gaze behaviours were sensitive to time reversal, and 4) nearly 60% of the variance of gaze behaviours was explained by the face saliency that was defined as a function of its size, novelty, head movements, and mouth movements. These results suggest that humans share a face-oriented network that integrates several visual features of multiple faces, and directs our eyes to the most salient face at each moment. PMID:25597621
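The face saliency function described in point 4 can be caricatured as a weighted sum over the four cues (all weights here are hypothetical placeholders; the study fitted its model to gaze data):

```python
def face_saliency(size, novelty, head_motion, mouth_motion,
                  weights=(0.4, 0.2, 0.2, 0.2)):
    """Hypothetical weighted-sum saliency for one face; each cue is
    assumed to be normalised to [0, 1]."""
    w = weights
    return (w[0] * size + w[1] * novelty
            + w[2] * head_motion + w[3] * mouth_motion)

def most_salient(faces):
    """faces: list of (size, novelty, head_motion, mouth_motion)
    tuples; returns the index of the predicted gaze target."""
    return max(range(len(faces)), key=lambda i: face_saliency(*faces[i]))
```

Evaluating `most_salient` frame by frame yields a predicted scanpath that can be compared with the measured "peak" face scanpath.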
Dead space variability of face masks for valved holding chambers.
Amirav, Israel; Newhouse, Michael T
2008-03-01
Valved holding chambers (VHCs) with masks are commonly used to deliver inhaled medications to young children with asthma. Optimal mask properties, such as dead space volume (DSV), have received little attention. The smaller the mask, the more likely it is that a greater proportion of the dose in the VHC will be inhaled with each breath, thus speeding VHC emptying and improving overall aerosol delivery efficiency and dose. Masks may have different DSVs and thus different performance. To compare both the physical dead space and the functional dead space of different face masks under various applied pressures. The DSV of three commonly used VHC face masks was measured by water displacement, both under various pressures (to simulate real-life application; dynamic DSV) and under no pressure (static DSV). There was great variability in both static and dynamic dead space among the various VHC face masks, which is probably related to their flexibility. Different masks have different DSV characteristics. This variability should be taken into account when comparing the clinical efficacy of various VHCs.
The complex duration perception of emotional faces: effects of face direction.
Kliegl, Katrin M; Limbrecht-Ecklundt, Kerstin; Dürr, Lea; Traue, Harald C; Huckauf, Anke
2015-01-01
The perceived duration of emotional face stimuli strongly depends on the expressed emotion. But emotional faces also differ in a number of other features, such as gaze, face direction, or sex. Usually, these features have been controlled by using only pictures of female models with straight gaze and face direction. Doi and Shinohara (2009) reported that an overestimation of angry faces could only be found when the model's gaze was oriented toward the observer. We aimed to replicate this effect for face direction. Moreover, we explored the effect of face direction on the duration perception of sad faces. Controlling for the sex of the face model and the participant, female and male participants rated the duration of neutral, angry, and sad face stimuli of both sexes photographed from different perspectives in a bisection task. In line with current findings, we report a significant overestimation of angry compared to neutral face stimuli that was modulated by face direction. Moreover, the perceived duration of sad face stimuli did not differ from that of neutral faces and was not influenced by face direction. Furthermore, we found that faces of the opposite sex appeared to last longer than those of the same sex. This outcome is discussed with regard to stimulus parameters such as induced arousal, social relevance, and an evolutionary context.
NASA Astrophysics Data System (ADS)
Uzbaş, Betül; Arslan, Ahmet
2018-04-01
Gender classification is an important step for human-computer interaction and identification processes. The human face image is one of the most important sources for determining gender. In the present study, gender classification is performed automatically from facial images. In order to classify gender, we propose a combination of features extracted from the face, eye and lip regions using a hybrid method of Local Binary Pattern and Gray-Level Co-Occurrence Matrix. The features are extracted from automatically obtained face, eye and lip regions. All of the extracted features are combined and given as input parameters to classification methods (Support Vector Machine, Artificial Neural Networks, Naive Bayes and k-Nearest Neighbor) for gender classification. The Nottingham Scan face database, consisting of frontal face images of 100 people (50 male and 50 female), is used for this purpose. As a result of the experimental studies, the highest success rate, 98%, was achieved using a Support Vector Machine. The experimental results illustrate the efficacy of the proposed method.
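The GLCM half of such a hybrid descriptor can be sketched as follows (a minimal sketch assuming a pre-quantized integer image and a single pixel offset; real descriptors pool several offsets and derive multiple Haralick statistics):

```python
import numpy as np

def glcm(image, levels, offset=(0, 1)):
    """Grey-level co-occurrence matrix for one pixel offset,
    normalised to a joint probability table p(i, j)."""
    img = np.asarray(image)
    di, dj = offset
    h, w = img.shape
    # aligned views of each pixel and its offset neighbour
    src = img[max(-di, 0):h - max(di, 0), max(-dj, 0):w - max(dj, 0)]
    dst = img[max(di, 0):h - max(-di, 0), max(dj, 0):w - max(-dj, 0)]
    m = np.zeros((levels, levels))
    np.add.at(m, (src.ravel(), dst.ravel()), 1)
    return m / m.sum()

def glcm_contrast(p):
    """Haralick contrast: sum over (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())
```

Statistics such as contrast, computed per region (face, eyes, lips), are then concatenated with the LBP histograms to form the classifier input.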
Semantic and visual determinants of face recognition in a prosopagnosic patient.
Dixon, M J; Bub, D N; Arguin, M
1998-05-01
Prosopagnosia is the neuropathological inability to recognize familiar people by their faces. It can occur in isolation or can coincide with recognition deficits for other nonface objects. Often, patients whose prosopagnosia is accompanied by object recognition difficulties have more trouble identifying certain categories of objects relative to others. In previous research, we demonstrated that objects that shared multiple visual features and were semantically close posed severe recognition difficulties for a patient with temporal lobe damage. We now demonstrate that this patient's face recognition is constrained by these same parameters. The prosopagnosic patient ELM had difficulties pairing faces to names when the faces shared visual features and the names were semantically related (e.g., Tonya Harding, Nancy Kerrigan, and Josee Chouinard, three ice skaters). He made tenfold fewer errors when the exact same faces were associated with semantically unrelated people (e.g., singer Celine Dion, actress Betty Grable, and First Lady Hillary Clinton). We conclude that prosopagnosia and co-occurring category-specific recognition problems both stem from difficulties disambiguating the stored representations of objects that share multiple visual features and refer to semantically close identities or concepts.
Ding, Liya; Martinez, Aleix M
2010-11-01
The appearance-based approach to face detection has seen great advances in the last several years. In this approach, we learn the image statistics describing the texture pattern (appearance) of the object class we want to detect, e.g., the face. However, this approach has had limited success in providing an accurate and detailed description of the internal facial features, i.e., eyes, brows, nose, and mouth. In general, this is due to the limited information carried by the learned statistical model. While the face template is relatively rich in texture, facial features (e.g., eyes, nose, and mouth) do not carry enough discriminative information to tell them apart from all possible background images. We resolve this problem by adding the context information of each facial feature in the design of the statistical model. In the proposed approach, the context information defines the image statistics most correlated with the surroundings of each facial component. This means that when we search for a face or facial feature, we look for those locations which most resemble the feature yet are most dissimilar to its context. This dissimilarity with the context features forces the detector to gravitate toward an accurate estimate of the position of the facial feature. Learning to discriminate between feature and context templates is difficult, however, because the context and the texture of the facial features vary widely under changing expression, pose, and illumination, and may even resemble one another. We address this problem with the use of subclass divisions. We derive two algorithms to automatically divide the training samples of each facial feature into a set of subclasses, each representing a distinct construction of the same facial component (e.g., closed versus open eyes) or its context (e.g., different hairstyles). The first algorithm is based on a discriminant analysis formulation. The second algorithm is an extension of the AdaBoost approach. 
We provide extensive experimental results using still images and video sequences for a total of 3,930 images. We show that the results are almost as good as those obtained with manual detection.
Face-infringement space: the frame of reference of the ventral intraparietal area.
McCollum, Gin; Klam, François; Graf, Werner
2012-07-01
Experimental studies have shown that responses of ventral intraparietal area (VIP) neurons specialize in head movements and the environment near the head. VIP neurons respond to visual, auditory, and tactile stimuli, smooth pursuit eye movements, and passive and active movements of the head. This study demonstrates mathematical structure on a higher organizational level created within VIP by the integration of a complete set of variables covering face-infringement. Rather than positing dynamics in an a priori defined coordinate system such as those of physical space, we assemble neuronal receptive fields to find out what space of variables VIP neurons together cover. Section 1 presents a view of neurons as multidimensional mathematical objects. Each VIP neuron occupies or is responsive to a region in a sensorimotor phase space, thus unifying variables relevant to the disparate sensory modalities and movements. Convergence on one neuron joins variables functionally, as space and time are joined in relativistic physics to form a unified spacetime. The space of position and motion together forms a neuronal phase space, bridging neurophysiology and the physics of face-infringement. After a brief review of the experimental literature, the neuronal phase space natural to VIP is sequentially characterized, based on experimental data. Responses of neurons indicate variables that may serve as axes of neural reference frames, and neuronal responses have been so used in this study. The space of sensory and movement variables covered by VIP receptive fields joins visual and auditory space to body-bound sensory modalities: somatosensation and the inertial senses. This joining of allocentric and egocentric modalities is in keeping with the known relationship of the parietal lobe to the sense of self in space and to hemineglect, in both humans and monkeys. 
Following this inductive step, variables are formalized in terms of the mathematics of graph theory to deduce which combinations are complete as a multidimensional neural structure that provides the organism with a complete set of options regarding objects impacting the face, such as acceptance, pursuit, and avoidance. We consider four basic variable types: position and motion of the face and of an external object. Formalizing the four types of variables allows us to generalize to any sensory system and to determine the necessary and sufficient conditions for a neural center (for example, a cortical region) to provide a face-infringement space. We demonstrate that VIP includes at least one such face-infringement space.
Designing Clinical Space for the Delivery of Integrated Behavioral Health and Primary Care.
Gunn, Rose; Davis, Melinda M; Hall, Jennifer; Heintzman, John; Muench, John; Smeds, Brianna; Miller, Benjamin F; Miller, William L; Gilchrist, Emma; Brown Levey, Shandra; Brown, Jacqueline; Wise Romero, Pam; Cohen, Deborah J
2015-01-01
This study sought to describe features of the physical space in which practices integrating primary care and behavioral health care work and to identify the arrangements that enable integration of care. We conducted an observational study of 19 diverse practices located across the United States. Practice-level data included field notes from 2-4-day site visits, transcripts from semistructured interviews with clinicians and clinical staff, online implementation diary posts, and facility photographs. A multidisciplinary team used a 4-stage, systematic approach to analyze data and identify how physical layout enabled the work of integrated care teams. Two dominant spatial layouts emerged across practices: type-1 layouts were characterized by having primary care clinicians (PCCs) and behavioral health clinicians (BHCs) located in separate work areas, and type-2 layouts had BHCs and PCCs sharing work space. We describe these layouts and the influence they have on situational awareness, interprofessional "bumpability," and opportunities for on-the-fly communication. We observed BHCs and PCCs engaging in more face-to-face methods for coordinating integrated care for patients in type 2 layouts (41.5% of observed encounters vs 11.7%; P < .05). We show that practices needed to strike a balance between professional proximity and private work areas to accomplish job tasks. Private workspace was needed for focused work, to see patients, and for consults between clinicians and clinical staff. We describe the ways practices modified and built new space and provide 2 recommended layouts for practices integrating care based on study findings. Physical layout and positioning of professionals' workspace is an important consideration in practices implementing integrated care. 
Clinicians, researchers, and health-care administrators are encouraged to consider the role of professional proximity and private working space when creating new facilities or redesigning existing space to foster delivery of integrated behavioral health and primary care. © Copyright 2015 by the American Board of Family Medicine.
Eye movement identification based on accumulated time feature
NASA Astrophysics Data System (ADS)
Guo, Baobao; Wu, Qiang; Sun, Jiande; Yan, Hua
2017-06-01
Eye movement is a new kind of biometric feature with many advantages over features such as fingerprint, face, and iris. It is not only a static characteristic but also a combination of brain activity and muscle behavior, which makes it effective for preventing spoofing attacks. In addition, eye movements can be combined with face, iris, and other features recorded from the face region in multimodal systems. In this paper, we conduct an exploratory study of eye movement identification based on the eye movement datasets provided by Komogortsev et al. in 2011, using different classification methods. The durations of saccades and fixations are extracted from the eye movement data as features. A performance analysis was then conducted for different classifiers (BP, RBF, Elman, and SVM) to provide a reference for future research in this field.
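Extracting fixation and saccade durations from gaze data is commonly done with a velocity-threshold (I-VT) scheme. The sketch below assumes a fixed sampling interval and a single velocity threshold; it is a simplification of the preprocessing such a study would require, and the function name is illustrative:

```python
def ivt_durations(velocities, dt, threshold):
    """Velocity-threshold (I-VT) segmentation: samples whose
    angular velocity exceeds `threshold` (deg/s) are labeled
    saccade, the rest fixation. Returns the total fixation and
    saccade durations in seconds, given the sample interval dt."""
    fixation = saccade = 0.0
    for v in velocities:
        if v > threshold:
            saccade += dt
        else:
            fixation += dt
    return fixation, saccade
```

The two totals (or per-event statistics derived the same way) could then serve as the identification features fed to the classifiers compared in the paper.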
Scherer, Max-Adam
2016-12-01
Over the last decade, cosmetologists have faced a steadily increasing number of male patients. The necessity of a gender-adjusted approach to treating this patient category is obvious. Adequate correction requires consideration of the anatomic and physiologic features of male faces, together with a whole set of interrelated aspects: the psychological perception of male facial esthetics, socially formed understandings of masculine features and appropriate emotional expressions, and the motivations and expectations of men who come to a cosmetologist. Drawing on his own vast experience, the author explains in detail methods of complex male face correction that use this gender-specific approach to create a natural-looking and harmonious facial expression and appearance. The botulinum therapy specifics presented concern the injection point locations and toxin doses for every point. As a result, a rather distinct smoothing of the skin profile is achieved without detriment to facial expressiveness or gender-related features. The importance of, and methods for, an extremely delicate approach to volumetric plasty with stabilized hyaluronic acid-based fillers in men, avoiding hypercorrection and retaining gender-specific features, are also discussed. © 2016 Wiley Periodicals, Inc.
Pedrana, Alisa E; Stoove, Mark A; Chang, Shanton; Howard, Steve; Asselin, Jason; Ilic, Olivia; Batrouney, Colin; Hellard, Margaret E
2012-01-01
Online social networking sites offer a novel setting for the delivery of health promotion interventions due to their potential to reach a large population and the possibility for two-way engagement. However, few have attempted to host interventions on these sites, or to use the range of interactive functions available to enhance the delivery of health-related messages. This paper presents lessons learnt from “The FaceSpace Project”, a sexual health promotion intervention using social networking sites targeting two key at-risk groups. Based on our experience, we make recommendations for developing and implementing health promotion interventions on these sites. Elements crucial for developing interventions include establishing a multidisciplinary team, allowing adequate time for obtaining approvals, securing sufficient resources for building and maintaining an online presence, and developing an integrated process and impact evaluation framework. With two-way interaction an important and novel feature of health promotion interventions in this medium, we also present strategies trialled to generate interest and engagement in our intervention. Social networking sites are now an established part of the online environment; our experience in developing and implementing a health promotion intervention using this medium are of direct relevance and utility for all health organizations creating a presence in this new environment. PMID:22374589
The contribution of local features to familiarity judgments in music.
Bigand, Emmanuel; Gérard, Yannick; Molin, Paul
2009-07-01
The contributions of local and global features to object identification depend upon the context. For example, while local features play an essential role in identification of words and objects, the global features are more influential in face recognition. In order to evaluate the respective strengths of local and global features for face recognition, researchers usually ask participants to recognize human faces (famous or learned) in normal and scrambled pictures. In this paper, we address a similar issue in music. We present the results of an experiment in which musically untrained participants were asked to differentiate famous from unknown musical excerpts that were presented in normal or scrambled ways. Manipulating the size of the temporal window on which the scrambling procedure was applied allowed us to evaluate the minimal length of time necessary for participants to make a familiarity judgment. Quite surprisingly, the minimum duration for differentiation of famous from unknown pieces is extremely short. This finding highlights the contribution of very local features to music memory.
Building Face Composites Can Harm Lineup Identification Performance
ERIC Educational Resources Information Center
Wells, Gary L.; Charman, Steve D.; Olson, Elizabeth A.
2005-01-01
Face composite programs permit eyewitnesses to build likenesses of target faces by selecting facial features and combining them into an intact face. Research has shown that these composites are generally poor likenesses of the target face. Two experiments tested the proposition that this composite-building process could harm the builder's memory…
Gelbard-Sagiv, Hagar; Faivre, Nathan; Mudrik, Liad; Koch, Christof
2016-01-01
The scope and limits of unconscious processing are a matter of ongoing debate. Lately, continuous flash suppression (CFS), a technique for suppressing visual stimuli, has been widely used to demonstrate surprisingly high-level processing of invisible stimuli. Yet, recent studies showed that CFS might actually allow low-level features of the stimulus to escape suppression and be consciously perceived. The influence of such low-level awareness on high-level processing might easily go unnoticed, as studies usually only probe the visibility of the feature of interest, and not that of lower-level features. For instance, face identity is held to be processed unconsciously since subjects who fail to judge the identity of suppressed faces still show identity priming effects. Here we challenge these results, showing that such high-level priming effects are indeed induced by faces whose identity is invisible, but critically, only when a lower-level feature, such as color or location, is visible. No evidence for identity processing was found when subjects had no conscious access to any feature of the suppressed face. These results suggest that high-level processing of an image might be enabled by-or co-occur with-conscious access to some of its low-level features, even when these features are not relevant to the processed dimension. Accordingly, they call for further investigation of lower-level awareness during CFS, and reevaluation of other unconscious high-level processing findings.
Akechi, Hironori; Kikuchi, Yukiko; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu
2014-01-01
Numerous studies have revealed atypical face processing in autism spectrum disorders (ASD) characterized by social interaction and communication difficulties. This study investigated sensitivity to face-likeness in ASD. In Experiment 1, we found a strong positive correlation between the face-likeness ratings of non-face objects in the ASD (11–19 years old) and the typically developing (TD) group (9–21 years old). In Experiment 2 (the scalp-recorded event-related potential experiment), the participants of both groups (ASD, 12–19 years old; TD, 12–18 years old) exhibited an enhanced face-sensitive N170 amplitude to a face-like object. Whereas the TD adolescents showed an enhanced N170 during the face-likeness judgements, adolescents with ASD did not. Thus, both individuals with ASD and TD individuals have a perceptual and neural sensitivity to face-like features in objects. When required to process face-like features, a face-related brain system reacts more strongly in TD individuals but not in individuals with ASD. PMID:24464152
Implicit Binding of Facial Features During Change Blindness
Lyyra, Pessi; Mäkelä, Hanna; Hietanen, Jari K.; Astikainen, Piia
2014-01-01
Change blindness refers to the inability to detect visual changes if introduced together with an eye-movement, blink, flash of light, or with distracting stimuli. Evidence of implicit detection of changed visual features during change blindness has been reported in a number of studies using both behavioral and neurophysiological measurements. However, it is not known whether implicit detection occurs only at the level of single features or whether complex organizations of features can be implicitly detected as well. We tested this in adult humans using intact and scrambled versions of schematic faces as stimuli in a change blindness paradigm while recording event-related potentials (ERPs). An enlargement of the face-sensitive N170 ERP component was observed at the right temporal electrode site to changes from scrambled to intact faces, even if the participants were not consciously able to report such changes (change blindness). Similarly, the disintegration of an intact face to scrambled features resulted in attenuated N170 responses during change blindness. Other ERP deflections were modulated by changes, but unlike the N170 component, they were indifferent to the direction of the change. The bidirectional modulation of the N170 component during change blindness suggests that implicit change detection can also occur at the level of complex features in the case of facial stimuli. PMID:24498165
An effective method on pornographic images realtime recognition
NASA Astrophysics Data System (ADS)
Wang, Baosong; Lv, Xueqiang; Wang, Tao; Wang, Chengrui
2013-03-01
In this paper, skin detection, texture filtering, and face detection are used to extract features from an image library; a decision tree is then trained on these features and used as a classifier to distinguish unknown images. In experiments based on more than twenty thousand images, a precision of 76.21% was achieved when testing on 13,025 pornographic images, with an elapsed time of less than 0.2 s per image, demonstrating good generality. Among the steps above, we propose a new skin detection model, called the irregular polygon region skin detection model, based on the YCbCr color space; it lowers the false detection rate of skin detection. A new method, called sequence region labeling of binary connected areas, computes features on connected areas; it is faster and needs less memory than recursive methods.
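The YCbCr color space underlying the proposed skin detection model can be sketched as follows. The rectangular Cb/Cr bounds used here are a classic baseline rule, not the paper's irregular-polygon region, which refines such a box with more vertices:

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion for
    8-bit channel values."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b):
    """Baseline rectangular Cb/Cr skin rule: a pixel counts as
    skin when its chroma falls inside a fixed box. The paper's
    model replaces this box with an irregular polygon to cut
    false detections."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173
```

Luminance is deliberately ignored by the rule, which is what makes chroma-box skin detectors fairly robust to lighting changes.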
Formal implementation of a performance evaluation model for the face recognition system.
Shin, Yong-Nyuo; Kim, Jason; Lee, Yong-Jun; Shin, Woochang; Choi, Jin-Young
2008-01-01
Due to its usability, practical applications, and lack of intrusiveness, face recognition technology, based on information derived from individuals' facial features, has been attracting considerable attention recently. Reported recognition rates of commercialized face recognition systems cannot be accepted as official recognition rates, as they are based on assumptions that favor the specific system and face database. Therefore, performance evaluation methods and tools are necessary to objectively measure the accuracy and performance of any face recognition system. In this paper, we propose and formalize a performance evaluation model for biometric recognition systems, implementing an evaluation tool for face recognition systems based on the proposed model. Furthermore, we performed objective evaluations by providing guidelines for the design and implementation of a performance evaluation system and by formalizing the performance test process.
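A core ingredient of any such evaluation tool is computing error rates from match scores. This sketch shows false accept and false reject rates under an assumed decision threshold; the function name and higher-score-means-more-similar convention are illustrative, not taken from the paper:

```python
def recognition_metrics(genuine, impostor, threshold):
    """False accept rate (FAR) and false reject rate (FRR) from
    similarity scores: `genuine` scores come from same-identity
    pairs, `impostor` scores from different-identity pairs; a
    score >= threshold is an accept decision."""
    frr = sum(s < threshold for s in genuine) / len(genuine)
    far = sum(s >= threshold for s in impostor) / len(impostor)
    return far, frr
```

Sweeping the threshold and plotting FAR against FRR yields the ROC/DET curves on which objective comparisons between face recognition systems are usually based.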
NASA Astrophysics Data System (ADS)
Luo, Yuan; Wang, Bo-yu; Zhang, Yi; Zhao, Li-ming
2018-03-01
In this paper, addressing the defect that the local texture features of a face image cannot be completely described under different illuminations and random noise because the threshold of the local ternary pattern (LTP) cannot be calculated adaptively, a local three-value model called the improved adaptive local ternary pattern (IALTP) is proposed. Firstly, a difference function between the center pixel and the neighborhood pixel weights is established to obtain the statistical characteristics of the center pixel and its neighborhood. Secondly, an adaptive gradient-descent iterative function is established to calculate the difference coefficient, which is defined as the threshold of the IALTP operator. Finally, the mean and standard deviation of the pixel weights of the local region are used as the coding mode of IALTP. To reflect the overall properties of the face and reduce the feature dimension, two-directional two-dimensional PCA ((2D)2PCA) is adopted. IALTP is used to extract local texture features of the eye and mouth areas. After combining the global and local features, fusion features (IALTP+) are obtained. Experimental results on the Extended Yale B and AR standard face databases indicate that, under different illuminations and random noise, the proposed algorithm is more robust than others and its feature dimension is smaller. The shortest running time reaches 0.3296 s, and the highest recognition rate reaches 97.39%.
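The fixed-threshold LTP that the paper improves upon can be sketched as follows. The IALTP itself derives this threshold adaptively via gradient descent, which is not reproduced here; the neighbor ordering is a conventional assumption:

```python
def ltp_code(patch, t):
    """Local ternary pattern for the center of a 3x3 patch with a
    fixed threshold t: neighbor differences within +/-t map to 0,
    above to +1, below to -1. Returns the upper/lower binary
    codes into which the ternary code is conventionally split."""
    center = patch[1][1]
    # clockwise neighbor order starting at the top-left corner
    neighbors = [patch[0][0], patch[0][1], patch[0][2],
                 patch[1][2], patch[2][2], patch[2][1],
                 patch[2][0], patch[1][0]]
    upper = lower = 0
    for bit, v in enumerate(neighbors):
        if v >= center + t:
            upper |= 1 << bit
        elif v <= center - t:
            lower |= 1 << bit
    return upper, lower
```

Because small differences fall into the zero band, LTP is less sensitive to noise than LBP; the paper's contribution is choosing t per region instead of fixing it globally.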
Main interior space facing the bar. The more recent kitchen ...
Main interior space facing the bar. The more recent kitchen and restroom additions are behind the rear wall. - San Luis Yacht Club, Avila Pier, South of Front Street, Avila Beach, San Luis Obispo County, CA
NASA Astrophysics Data System (ADS)
Iqtait, M.; Mohamad, F. S.; Mamat, M.
2018-03-01
Biometrics is a pattern recognition approach used for the automatic recognition of persons based on the characteristics and features of an individual. Face recognition with a high recognition rate is still a challenging task, usually accomplished in three phases: face detection, feature extraction, and expression classification. Precise and robust localization of feature points is a complicated and difficult issue in face recognition. Cootes proposed the multi-resolution Active Shape Model (ASM) algorithm, which can extract a specified shape accurately and efficiently. Furthermore, as an improvement on ASM, the Active Appearance Model (AAM) algorithm was proposed, which extracts both the shape and texture of a specified object simultaneously. In this paper we describe the two algorithms in more detail and report experiments testing their performance on one dataset of faces. We found that ASM is faster and locates feature points more accurately than AAM, but AAM achieves a better match to the texture.
Face processing in autism: Reduced integration of cross-feature dynamics.
Shah, Punit; Bird, Geoffrey; Cook, Richard
2016-02-01
Characteristic problems with social interaction have prompted considerable interest in the face processing of individuals with Autism Spectrum Disorder (ASD). Studies suggest that reduced integration of information from disparate facial regions likely contributes to difficulties recognizing static faces in this population. Recent work also indicates that observers with ASD have problems using patterns of facial motion to judge identity and gender, and may be less able to derive global motion percepts. These findings raise the possibility that feature integration deficits also impact the perception of moving faces. To test this hypothesis, we examined whether observers with ASD exhibit susceptibility to a new dynamic face illusion, thought to index integration of moving facial features. When typical observers view eye-opening and -closing in the presence of asynchronous mouth-opening and -closing, the concurrent mouth movements induce a strong illusory slowing of the eye transitions. However, we find that observers with ASD are not susceptible to this illusion, suggestive of weaker integration of cross-feature dynamics. Nevertheless, observers with ASD and typical controls were equally able to detect the physical differences between comparison eye transitions. Importantly, this confirms that observers with ASD were able to fixate the eye-region, indicating that the striking group difference has a perceptual, not attentional origin. The clarity of the present results contrasts starkly with the modest effect sizes and equivocal findings seen throughout the literature on static face perception in ASD. We speculate that differences in the perception of facial motion may be a more reliable feature of this condition. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
Parallel approaches to composite production: interfaces that behave contrary to expectation.
Frowd, Charlie D; Bruce, Vicki; Ness, Hayley; Bowie, Leslie; Paterson, Jenny; Thomson-Bogner, Claire; McIntyre, Alexander; Hancock, Peter J B
2007-04-01
This paper examines two facial composite systems that present multiple faces during construction to more closely resemble natural face processing. A 'parallel' version of PRO-fit was evaluated, which presents facial features in sets of six or twelve, and EvoFIT, a system in development, which contains a holistic face model and an evolutionary interface. The PRO-fit parallel interface turned out not to be quite as good as the 'serial' version as it appeared to interfere with holistic face processing. Composites from EvoFIT were named almost three times better than PRO-fit, but a benefit emerged under feature encoding, suggesting that recall has a greater role for EvoFIT than was previously thought. In general, an advantage was found for feature encoding, replicating a previous finding in this area, and also for a novel 'holistic' interview.
Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition
NASA Astrophysics Data System (ADS)
Rouabhia, C.; Tebbikh, H.
2008-06-01
Face recognition is a specialized image processing task that has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system, operating on video sequence images, dedicated to identifying persons whose faces are partly occluded. The system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We performed feature extraction on the eye and nose images separately, then used a Multi-Layer Perceptron classifier. Compared to the whole face, the simulation results favor the facial parts in terms of memory capacity and recognition rate (99.41% for the eyes, 98.16% for the nose, and 97.25% for the whole face).
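The two-dimensional PCA component of ACPDL2D can be illustrated in isolation. This is a generic 2DPCA sketch following Yang et al.'s formulation, not the authors' combined ACPDL2D implementation:

```python
import numpy as np

def twod_pca(images, k):
    """Two-dimensional PCA: form the image covariance matrix
    G = mean((A - M)^T (A - M)) over training images A with mean
    image M, then project each image onto the k leading
    eigenvectors of G, preserving the image's row structure."""
    mean = np.mean(images, axis=0)
    n_cols = images.shape[2]
    G = np.zeros((n_cols, n_cols))
    for a in images:
        d = a - mean
        G += d.T @ d
    G /= len(images)
    # eigh returns eigenvalues in ascending order; take largest k
    _, vecs = np.linalg.eigh(G)
    proj = vecs[:, ::-1][:, :k]
    return np.array([a @ proj for a in images]), proj
```

Unlike classical PCA, no image vectorization is needed, so the covariance matrix stays small (columns x columns) even for large face crops such as the eye and nose regions used in the paper.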
Heinen, Klaartje; Feredoes, Eva; Weiskopf, Nikolaus; Ruff, Christian C; Driver, Jon
2014-11-01
Voluntary selective attention can prioritize different features in a visual scene. The frontal eye-fields (FEF) are one potential source of such feature-specific top-down signals, but causal evidence for influences on visual cortex (as was shown for "spatial" attention) has remained elusive. Here, we show that transcranial magnetic stimulation (TMS) applied to right FEF increased the blood oxygen level-dependent (BOLD) signals in visual areas processing "target feature" but not in "distracter feature"-processing regions. TMS-induced BOLD signals increase in motion-responsive visual cortex (MT+) when motion was attended in a display with moving dots superimposed on face stimuli, but in face-responsive fusiform area (FFA) when faces were attended to. These TMS effects on BOLD signal in both regions were negatively related to performance (on the motion task), supporting the behavioral relevance of this pathway. Our findings provide new causal evidence for the human FEF in the control of nonspatial "feature"-based attention, mediated by dynamic influences on feature-specific visual cortex that vary with the currently attended property. © The Author 2013. Published by Oxford University Press.
Neural Correlate of the Thatcher Face Illusion in a Monkey Face-Selective Patch.
Taubert, Jessica; Van Belle, Goedele; Vanduffel, Wim; Rossion, Bruno; Vogels, Rufin
2015-07-08
Compelling evidence that our sensitivity to facial structure is conserved across the primate order comes from studies of the "Thatcher face illusion": humans and monkeys notice changes in the orientation of facial features (e.g., the eyes) only when faces are upright, not when faces are upside down. Although it is presumed that face perception in primates depends on face-selective neurons in the inferior temporal (IT) cortex, it is not known whether these neurons respond differentially to upright faces with inverted features. Using microelectrodes guided by functional MRI mapping, we recorded cell responses in three regions of monkey IT cortex. We report an interaction in the middle lateral face patch (ML) between the global orientation of a face and the local orientation of its eyes, a response profile consistent with the perception of the Thatcher illusion. This increased sensitivity to eye orientation in upright faces resisted changes in screen location and was not found among face-selective neurons in other areas of IT cortex, including neurons in another face-selective region, the anterior lateral face patch. We conclude that the Thatcher face illusion is correlated with a pattern of activity in the ML that encodes faces according to a flexible holistic template. Copyright © 2015 the authors 0270-6474/15/359872-07$15.00/0.
Identification of Novel Desiccation-Tolerant S. cerevisiae Strains for Deep Space Biosensors
NASA Technical Reports Server (NTRS)
Tieze, Sofia Massaro; Santa Maria, Sergio R.; Liddell, Lauren C.; Bhattacharya, Sharmila
2017-01-01
NASA's BioSentinel mission, a secondary payload that will fly on the Space Launch System's first Exploration Mission (EM-1), utilizes the budding yeast S. cerevisiae to study the biological response to the deep space radiation environment. Yeast samples are desiccated prior to launch to suspend growth and metabolism while the spacecraft travels to its target heliocentric orbit beyond Low Earth Orbit. Each sample is then rehydrated at the desired time points to reactivate the cells. A major risk in this mission is the loss of cell viability that occurs in the recovery period following the desiccation and rehydration process. Cell survival is essential for the detection of the biological response to features in the deep space environment, including ionizing radiation. The aim of this study is to mitigate viable cell loss in future biosensors by identifying mutations and genes that confer tolerance to desiccation stress in rad51, a radiation-sensitive yeast strain. We initiated a screen for desiccation-tolerance after rehydrating cells that were desiccated for three years, and selected various clones exhibiting robust growth. To verify retention of radiation sensitivity in the isolated clones - a crucial feature for a successful biosensor - we exposed them to ionizing radiation. Finally, to elucidate the genetic and molecular bases for observed desiccation-tolerance, we will perform whole-genome sequencing of those rad51 clones that exhibit both robust growth and radiation sensitivity following desiccation. The identification and characterization of desiccation-tolerant strains will allow us to engineer a biological model that will be resilient in the face of the challenges of the deep space environment, and will thus ensure the experimental success of future biosensor missions.
Rajaei, Karim; Khaligh-Razavi, Seyed-Mahdi; Ghodrati, Masoud; Ebrahimpour, Reza; Shiri Ahmad Abadi, Mohammad Ebrahim
2012-01-01
The brain mechanism of extracting visual features for recognizing various objects has consistently been a controversial issue in computational models of object recognition. To extract visual features, we introduce a new, biologically motivated model for facial categorization, which is an extension of the Hubel and Wiesel simple-to-complex cell hierarchy. To address the synaptic stability versus plasticity dilemma, we apply the Adaptive Resonance Theory (ART) for extracting informative intermediate level visual features during the learning process, which also makes this model stable against the destruction of previously learned information while learning new information. Such a mechanism has been suggested to be embedded within known laminar microcircuits of the cerebral cortex. To reveal the strength of the proposed visual feature learning mechanism, we show that when we use this mechanism in the training process of a well-known biologically motivated object recognition model (the HMAX model), it performs better than the HMAX model in face/non-face classification tasks. Furthermore, we demonstrate that our proposed mechanism is capable of following similar trends in performance as humans in a psychophysical experiment using a face versus non-face rapid categorization task.
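The Hubel-and-Wiesel simple-to-complex hierarchy that the model extends can be illustrated with a minimal numpy sketch of an HMAX-style S1 stage (oriented Gabor filtering, "simple cells") followed by a C1 stage (local max pooling, "complex cells"). The filter size, orientations, and pooling width below are illustrative choices, not the parameters of the proposed model or of HMAX:

```python
import numpy as np

def gabor_kernel(size=11, wavelength=5.0, theta=0.0, sigma=4.0):
    """Oriented Gabor filter, the classic model of a V1 simple cell."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    g = env * np.cos(2 * np.pi * xr / wavelength)
    return g - g.mean()

def conv2d_valid(img, k):
    """Plain 'valid' 2-D correlation (loops kept for clarity, not speed)."""
    kh, kw = k.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def s1_layer(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """S1 stage: one oriented filter response map per orientation."""
    return [conv2d_valid(img, gabor_kernel(theta=t)) for t in thetas]

def c1_layer(s1_maps, pool=4):
    """C1 stage: local max pooling gives tolerance to small shifts."""
    out = []
    for m in s1_maps:
        h = (m.shape[0] // pool) * pool
        w = (m.shape[1] // pool) * pool
        blocks = np.abs(m[:h, :w]).reshape(h // pool, pool, w // pool, pool)
        out.append(blocks.max(axis=(1, 3)))
    return out

img = np.random.default_rng(0).normal(size=(40, 40))
c1 = c1_layer(s1_layer(img))
print(len(c1), c1[0].shape)
```

The max pooling in C1 is what buys tolerance to small shifts in feature position; the ART-based learning of intermediate features described in the abstract would operate on top of such C1 outputs.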
Usability issues concerning child restraint system harness design.
Rudin-Brown, Christina M; Kumagai, Jason K; Angel, Harry A; Iwasa-Madge, Kim M; Noy, Y Ian
2003-05-01
A study was conducted to assess usability issues relating to child restraint system (CRS) harness design. Four convertible child restraint systems representing a wide variety of design features were used. Forty-two participants installed two child test dummies in both forward- and rear-facing configurations, either inside or outside a test vehicle. Observer-scored checklists determined the degree to which each harness was installed correctly. Participant-scored questionnaires evaluated the 'ease-of-use' of various design features. While the percentage of correct installations exceeded 83% for all designs in the forward-facing configuration, in the rear-facing configuration (intended for children under 9-10 kg) a substantial percentage of installations (between 65 and 89%) were incorrect for all models. This finding is of particular interest and may indicate a more generalized problem with 'convertible' CRS designs when they are used in the rear-facing configuration. Furthermore, the design features that users perceived as providing significantly better protection in the event of a collision also tended to be the features that were misused most often. The benefits and costs of various design features are discussed, and a method to test harness design usability is presented.
Full-Featured Web Conferencing Systems
ERIC Educational Resources Information Center
Foreman, Joel; Jenkins, Roy
2005-01-01
In order to match the customary strengths of the still dominant face-to-face instructional mode, a high-performance online learning system must employ synchronous as well as asynchronous communications; buttress graphics, animation, and text with live audio and video; and provide many of the features and processes associated with course management…
Nimbalkar, Smita; Oh, Yih Y; Mok, Reei Y; Tioh, Jing Y; Yew, Kai J; Patil, Pravinkumar G
2018-03-16
Buccal corridor space and its variations greatly influence smile attractiveness. Facial types differ across ethnic populations, and so does the perception of smile attractiveness. The subjective perception of smile attractiveness may vary across populations with respect to different buccal corridor spaces and facial patterns. The purpose of this study was to determine esthetic perceptions of the Malaysian population regarding the width of buccal corridor spaces and their effect on smile esthetics in individuals with short, normal, and long faces. The image of a smiling individual with a mesofacial face was modified to create 2 different facial types (brachyfacial and dolichofacial). Each face form was further modified into 5 different buccal corridors (2%, 10%, 15%, 22%, and 28%). The images were submitted to 3 different ethnic groups of evaluators (Chinese, Malay, Indian; 100 each), ranging between 17 and 21 years of age. A visual analog scale (50 mm in length) was used for assessment. The scores given to each image were compared with the Kruskal-Wallis test, and pairwise comparison was performed using the Mann-Whitney U test (α=.05). All 3 groups of evaluators could distinguish gradations of dark space in the buccal corridor at 2%, 10%, and 28%. Statistically significant differences in esthetic perception were observed among the 3 groups of evaluators in pairwise comparisons. A 15% buccal corridor scored esthetically equally across the 3 face types for all 3 groups of evaluators. The Indian population was more critical in its evaluations than the Chinese or Malay populations. In pairwise comparisons, the most pronounced differences emerged when the normal face was compared separately with the long and short faces. The width of the buccal corridor space influences smile attractiveness in different facial types. A medium buccal corridor (15%) is the esthetic characteristic preferred by all groups of evaluators in short, normal, and long face types. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
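The group comparison described above used the Kruskal-Wallis test on VAS scores. As a stdlib-only illustration of that statistic (tie correction omitted for brevity, and with made-up scores standing in for the study's VAS data):

```python
from itertools import chain

def average_ranks(values):
    """1-based ranks, with tied values sharing their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def kruskal_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction)."""
    data = list(chain.from_iterable(groups))
    n = len(data)
    ranks = average_ranks(data)
    h, idx = 0.0, 0
    for g in groups:
        r_sum = sum(ranks[idx:idx + len(g)])
        idx += len(g)
        h += r_sum ** 2 / len(g)
    return 12.0 / (n * (n + 1)) * h - 3.0 * (n + 1)

# Hypothetical VAS scores (mm) from three evaluator groups for one image:
chinese = [32, 40, 35, 38]
malay = [30, 36, 34, 33]
indian = [22, 25, 28, 24]
print(round(kruskal_h(chinese, malay, indian), 2))  # -> 8.0
```

A large H (compared against a chi-squared distribution with k-1 degrees of freedom) signals that at least one group's ratings differ, which is when pairwise Mann-Whitney follow-ups such as those in the study become relevant.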
Learning toward practical head pose estimation
NASA Astrophysics Data System (ADS)
Sang, Gaoli; He, Feixiang; Zhu, Rong; Xuan, Shibin
2017-08-01
Head pose is useful information for many face-related tasks, such as face recognition, behavior analysis, human-computer interfaces, etc. Existing head pose estimation methods usually assume that the face images have been well aligned or that sufficient and precise training data are available. In practical applications, however, these assumptions are very likely to be invalid. This paper first investigates the impact of the failure of these assumptions, i.e., misalignment of face images, uncertainty and undersampling of training data, on head pose estimation accuracy of state-of-the-art methods. A learning-based approach is then designed to enhance the robustness of head pose estimation to these factors. To cope with misalignment, instead of using hand-crafted features, it seeks suitable features by learning from a set of training data with a deep convolutional neural network (DCNN), such that the training data can be best classified into the correct head pose categories. To handle uncertainty and undersampling, it employs multivariate labeling distributions (MLDs) with dense sampling intervals to represent the head pose attributes of face images. The correlation between the features and the dense MLD representations of face images is approximated by a maximum entropy model, whose parameters are optimized on the given training data. To estimate the head pose of a face image, its MLD representation is first computed according to the model based on the features extracted from the image by the trained DCNN, and its head pose is then assumed to be the one corresponding to the peak in its MLD. Evaluation experiments on the Pointing'04, FacePix, Multi-PIE, and CASIA-PEAL databases prove the effectiveness and efficiency of the proposed method.
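The multivariate labeling distribution (MLD) idea, reduced to a single yaw axis, can be sketched as follows: the pose label becomes a discrete Gaussian distribution over densely sampled bins, and the estimate is read off at the distribution's peak. The 3-degree bin spacing and Gaussian width are assumptions for illustration, not values from the paper:

```python
import numpy as np

yaw_bins = np.arange(-90, 91, 3)  # dense sampling: one bin every 3 degrees

def label_distribution(pose_deg, bins=yaw_bins, sigma=5.0):
    """Discrete Gaussian label distribution over pose bins (one MLD axis)."""
    d = np.exp(-0.5 * ((bins - pose_deg) / sigma) ** 2)
    return d / d.sum()  # normalize so the bins form a distribution

def decode_pose(dist, bins=yaw_bins):
    """Estimated pose = bin under the peak of the (predicted) distribution."""
    return int(bins[int(np.argmax(dist))])

d = label_distribution(31.0)  # true yaw of 31 deg falls between two bins
print(decode_pose(d))         # peak lands on the nearest sampled bin: 30
```

Spreading each training label over neighboring bins this way lets sparse or uncertain pose annotations still supervise all nearby pose categories, which is how MLDs address undersampling.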
NASA Technical Reports Server (NTRS)
Vyhnal, Richard F.
1993-01-01
Long Duration Exposure Facility (LDEF) Experiment A0175 involved the non-instrumented exposure of seven carbon-fiber-reinforced resin-matrix advanced composite panels contained in two trays - A7 and A1. These two trays were located, respectively, on the leading and trailing faces of LDEF, obliquely oriented to the RAM (Row 9) and WAKE (Row 3) directions. The identity and location of the seven panels, which consisted of six flat laminates of the following material systems, are shown: carbon/epoxy (T300/934), carbon/bismaleimide (T300/F178), and carbon/polyimide (C6000/LARC-160 and C6000/PMR-15), plus one bonded honeycomb sandwich panel (T300/934 face sheets and Nomex core) patterned after the Space Shuttle payload bay door construction. These material systems were selected to represent a range of then-available matrix resins which, by virtue of their differing polymer chemistry, could conceivably exhibit differing susceptibility to the low-Earth-orbit (LEO) environment. The principal exposure conditions of the LDEF environment at these tray locations are shown. Noteworthy to some of the observations discussed is the four-orders-of-magnitude difference in atomic oxygen (AO) fluence: AO arrived at a shallow incidence angle (approximately 22 deg) to Tray A7, while Tray A1 on the trailing face was essentially shielded from AO exposure. This evaluation focused on determining the individual and relative suitability of a variety of resin-matrix composite systems for long-term space structural applications. This was accomplished primarily by measuring and comparing a range of engineering mechanical properties on over 300 test coupons sectioned from the flight panels and from identical control panels, tested at ambient and elevated temperatures. This testing was supported by limited physical characterization, involving visual examination of flight panel surface features, measurements of weight loss and warpage, and examination for changes in internal integrity (microcracking, delamination) by ultrasonic C-scan and polished cross-sections.
A Face Inversion Effect without a Face
ERIC Educational Resources Information Center
Brandman, Talia; Yovel, Galit
2012-01-01
Numerous studies have attributed the face inversion effect (FIE) to configural processing of internal facial features in upright but not inverted faces. Recent findings suggest that face mechanisms can be activated by faceless stimuli presented in the context of a body. Here we asked whether faceless stimuli with or without body context may induce…
ERIC Educational Resources Information Center
Greenberg, Seth N.; Goshen-Gottstein, Yonatan
2009-01-01
The present work considers the mental imaging of faces, with a focus in own-face imaging. Experiments 1 and 3 demonstrated an own-face disadvantage, with slower generation of mental images of one's own face than of other familiar faces. In contrast, Experiment 2 demonstrated that mental images of facial parts are generated more quickly for one's…
NASA Astrophysics Data System (ADS)
Yousfi, Ammar; Mechergui, Mohammed
2016-04-01
The seepage face is an important feature of the drainage process when recharge occurs to a permeable region with lateral outlets. Examples of the formation of a seepage face above the downstream water level include agricultural land drained by ditches. The flow problem for these drains has been investigated extensively by many researchers (e.g. Rubin, 1968; Hornberger et al., 1969; Verma and Brutsaert, 1970; Gureghian and Youngs, 1975; Vauclin et al., 1975; Skaggs and Tang, 1976; Youngs, 1990; Gureghian, 1981; Dere, 2000; Rushton and Youngs, 2010; Youngs, 2012; Castro-Orgaz et al., 2012) and may be tackled using variably saturated flow models, the complete 2-D solution of the Laplace equation, or the Dupuit-Forchheimer (DF) approximation, the most widely accepted method for obtaining analytical solutions to unconfined drainage problems. However, the investigation reported by Clement et al. (1996) suggests that accounting for the seepage face alone, as in the fully saturated flow model, does not improve the discharge estimate, because it disregards the unsaturated zone flow contribution. This assumption can induce errors in the location of the water table surface and results in an underestimation of the seepage face and the net discharge (e.g. Skaggs and Tang, 1976; Vauclin et al., 1979; Clement et al., 1996). The importance of flow in the unsaturated zone has been highlighted by many authors on the basis of laboratory experiments and/or numerical experimentation (e.g. Rubin, 1968; Verma and Brutsaert, 1970; Todsen, 1973; Vauclin et al., 1979; Ahmad et al., 1993; Anguela, 2004; Luthin and Day, 1955; Shamsai and Narasimhan, 1991; Wise et al., 1994; Clement et al., 1996; Boufadel et al., 1999; Romano et al., 1999; Kao et al., 2001; Kao, 2002). These studies demonstrate the failure of fully saturated flow models and suggest that the error made when using these models depends not only on soil properties but also on the infiltration rate, as reported by Kao et al. (2001). In this work, a novel solution based on a theoretical approach will be adapted to incorporate both the seepage face and the unsaturated zone flow contribution when solving ditch-drained aquifer problems. The problem will be tackled on the basis of the approximate 2-D solution given by Castro-Orgaz et al. (2012). This solution yields the generalized water table profile function with a suitable boundary condition to be determined, and provides a modified DF theory that permits the analytical determination of the seepage face. To assess the ability of the developed equation for water-table estimation, the obtained results were compared with numerical solutions to the 2-D problem under different conditions. The results are in fair agreement, and the resulting model can thus be used for designing ditch drainage systems. With respect to drainage design, the spacings calculated with the newly derived equation are compared with those computed from DF theory. It is shown that the effect of the unsaturated zone flow contribution is limited to sandy soils, and the calculated maximum increase in drain spacing is about 30%. Keywords: subsurface ditch drainage; unsaturated zone; seepage face; water table; ditch spacing equation
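For context, the classical Dupuit-Forchheimer spacing relation that the modified theory extends can be written down directly. This is the textbook DF result for steady recharge to parallel ditches resting on a horizontal impermeable base, not the seepage-face-aware solution of Castro-Orgaz et al. (2012), and the numbers are hypothetical design values:

```python
import math

def df_spacing(K, R, h_max, h0):
    """Ditch spacing L from the classical DF 'ellipse' water-table profile
    h(x)^2 = h0^2 + (R/K) * x * (L - x), whose maximum sits midway between
    the ditches, giving L = sqrt(4 * K * (h_max^2 - h0^2) / R)."""
    return math.sqrt(4.0 * K * (h_max ** 2 - h0 ** 2) / R)

# Hypothetical design values: K = 1.0 m/day, steady recharge R = 5 mm/day,
# ditch water level h0 = 1.0 m, allowable midpoint water table h_max = 1.5 m.
L = df_spacing(K=1.0, R=0.005, h_max=1.5, h0=1.0)
print(round(L, 1))  # spacing in metres -> 31.6
```

The abstract's point is that a spacing computed this way ignores both the seepage face and unsaturated-zone flow; in sandy soils the corrected theory can increase the admissible spacing by up to about 30%.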
Looking like a criminal: stereotypical black facial features promote face source memory error.
Kleider, Heather M; Cavrak, Sarah E; Knuycky, Leslie R
2012-11-01
The present studies tested whether African American face type (stereotypical or nonstereotypical) facilitated stereotype-consistent categorization, and whether that categorization influenced memory accuracy and errors. Previous studies have shown that stereotypically Black features are associated with crime and violence (e.g., Blair, Judd, & Chapleau Psychological Science 15:674-679, 2004; Blair, Judd, & Fallman Journal of Personality and Social Psychology 87:763-778, 2004; Blair, Judd, Sadler, & Jenkins Journal of Personality and Social Psychology 83:5-25, 2002); here, we extended this finding to investigate whether there is a bias toward remembering and recategorizing stereotypical faces as criminals. Using category labels, consistent (or inconsistent) with race-based expectations, we tested whether face recognition and recategorization were driven by the similarity between a target's facial features and a stereotyped category (i.e., stereotypical Black faces associated with crime/violence). The results revealed that stereotypical faces were associated more often with a stereotype-consistent label (Study 1), were remembered and correctly recategorized as criminals (Studies 2-4), and were miscategorized as criminals when memory failed. These effects occurred regardless of race or gender. Together, these findings suggest that face types have strong category associations that can promote stereotype-motivated recognition errors. Implications for eyewitness accuracy are discussed.
Interfering with memory for faces: The cost of doing two things at once.
Wammes, Jeffrey D; Fernandes, Myra A
2016-01-01
We inferred the processes critical for episodic retrieval of faces by measuring susceptibility to memory interference from different distracting tasks. Experiment 1 examined recognition of studied faces under full attention (FA) or each of two divided attention (DA) conditions requiring concurrent decisions to auditorily presented letters. Memory was disrupted in both DA relative to FA conditions, a result contrary to a material-specific account of interference effects. Experiment 2 investigated whether the magnitude of interference depended on competition between concurrent tasks for common processing resources. Studied faces were presented either upright (configurally processed) or inverted (featurally processed). Recognition was completed under FA, or DA with one of two face-based distracting tasks requiring either featural or configural processing. We found an interaction: memory for upright faces was lower under DA when the distracting task required configural than featural processing, while the reverse was true for memory of inverted faces. Across experiments, the magnitude of memory interference was similar (a 19% or 20% decline from FA) regardless of whether the materials in the distracting task overlapped with the to-be-remembered information. Importantly, interference was significantly larger (42%) when the processing demands of the distracting and target retrieval task overlapped, suggesting a processing-specific account of memory interference.
Dwarfism with gloomy face: a new syndrome with features of 3-M syndrome.
Le Merrer, M; Brauner, R; Maroteaux, P
1991-01-01
Nine children with primordial dwarfism are described and a new syndrome is delineated. The significant features of this syndrome include facial dysmorphism with gloomy face and very short stature, but no radiological abnormality or hormone deficiency. Mental development is normal. The mode of inheritance seems to be autosomal recessive because of consanguinity in three of the four sibships. Some overlap with the 3-M syndrome is discussed, but the autonomy of the gloomy face syndrome seems to be real. PMID: 2051454
Factors contributing to the adaptation aftereffects of facial expression.
Butler, Andrea; Oruc, Ipek; Fox, Christopher J; Barton, Jason J S
2008-01-29
Previous studies have demonstrated the existence of adaptation aftereffects for facial expressions. Here we investigated which aspects of facial stimuli contribute to these aftereffects. In Experiment 1, we examined the role of local adaptation to image elements such as curvature, shape and orientation, independent of expression, by using hybrid faces constructed from either the same or opposing expressions. While hybrid faces made with consistent expressions generated aftereffects as large as those with normal faces, there were no aftereffects from hybrid faces made from different expressions, despite the fact that these contained the same local image elements. In Experiment 2, we examined the role of facial features independent of the normal face configuration by contrasting adaptation with whole faces to adaptation with scrambled faces. We found that scrambled faces also generated significant aftereffects, indicating that expressive features without a normal facial configuration could generate expression aftereffects. In Experiment 3, we examined the role of facial configuration by using schematic faces made from line elements that in isolation do not carry expression-related information (e.g. curved segments and straight lines) but that convey an expression when arranged in a normal facial configuration. We obtained a significant aftereffect for facial configurations but not scrambled configurations of these line elements. We conclude that facial expression aftereffects are not due to local adaptation to image elements but due to high-level adaptation of neural representations that involve both facial features and facial configuration.
Visual selective attention in body dysmorphic disorder, bulimia nervosa and healthy controls.
Kollei, Ines; Horndasch, Stefanie; Erim, Yesim; Martin, Alexandra
2017-01-01
Cognitive behavioral models postulate that selective attention plays an important role in the maintenance of body dysmorphic disorder (BDD). It is suggested that individuals with BDD overfocus on perceived defects in their appearance, which may contribute to the excessive preoccupation with their appearance. The present study used eye tracking to examine visual selective attention in individuals with BDD (n=19), as compared to individuals with bulimia nervosa (BN) (n=21) and healthy controls (HCs) (n=21). Participants completed interviews, questionnaires, rating scales and an eye tracking task: Eye movements were recorded while participants viewed photographs of their own face and attractive as well as unattractive other faces. Eye tracking data showed that BDD and BN participants focused less on their self-rated most attractive facial part than HCs. Scanning patterns in own and other faces showed that BDD and BN participants paid as much attention to attractive as to unattractive features in their own face, whereas they focused more on attractive features in attractive other faces. HCs paid more attention to attractive features in their own face and did the same in attractive other faces. Results indicate an attentional bias in BDD and BN participants manifesting itself in a neglect of positive features compared to HCs. Perceptual retraining may be an important aspect to focus on in therapy in order to overcome the neglect of positive facial aspects. Future research should aim to disentangle attentional processes in BDD by examining the time course of attentional processing. Copyright © 2016 Elsevier Inc. All rights reserved.
Face recognition with the Karhunen-Loeve transform
NASA Astrophysics Data System (ADS)
Suarez, Pedro F.
1991-12-01
The major goal of this research was to investigate machine recognition of faces. The approach taken to achieve this goal was to investigate the use of the Karhunen-Loève Transform (KLT) by implementing flexible and practical code. The KLT utilizes the eigenvectors of the covariance matrix as a basis set. Faces were projected onto the eigenvectors, called eigenfaces, and the resulting projection coefficients were used as features. Face recognition accuracies for the KLT coefficients were superior to Fourier-based techniques. Additionally, this thesis demonstrated the image compression and reconstruction capabilities of the KLT. This thesis also developed the use of the KLT as a facial feature detector. The ability to differentiate between facial features provides a computer communications interface for non-vocal people with cerebral palsy. Lastly, this thesis developed a KLT-based axis system for laser scanner data of human heads. The scanner data axis system provides the anthropometric community a more precise method of fitting custom helmets.
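The eigenface pipeline the thesis describes (project faces onto the eigenvectors of the covariance matrix, use the coefficients as features) can be sketched in a few lines of numpy. Random vectors stand in for real face images here, and the choice of 8 retained eigenfaces is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.normal(size=(20, 64))  # 20 training "faces", flattened pixels

mean = faces.mean(axis=0)
X = faces - mean  # center the data
# Eigenvectors of the covariance matrix via SVD; rows of Vt are "eigenfaces"
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 8
eigenfaces = Vt[:k]

def project(face):
    """KLT features: projection coefficients onto the k eigenfaces."""
    return eigenfaces @ (face - mean)

def reconstruct(coeffs):
    """Compression/reconstruction: rebuild a face from k coefficients."""
    return mean + eigenfaces.T @ coeffs

train_coeffs = np.array([project(f) for f in faces])

def identify(probe):
    """Recognition: nearest neighbour in coefficient space."""
    d = np.linalg.norm(train_coeffs - project(probe), axis=1)
    return int(np.argmin(d))

print(identify(faces[7]))  # a training face matches itself -> 7
```

`reconstruct(project(face))` is the compression capability mentioned in the abstract: the face is approximated from only k numbers instead of the full pixel vector.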
9. WEST FACE OF OLD THEODOLITE BUILDING; WEST FACE OF ...
9. WEST FACE OF OLD THEODOLITE BUILDING; WEST FACE OF EAST PHOTO TOWER IN BACKGROUND - Vandenberg Air Force Base, Space Launch Complex 3, Launch Pad 3 East, Napa & Alden Roads, Lompoc, Santa Barbara County, CA
FaceTOON: a unified platform for feature-based cartoon expression generation
NASA Astrophysics Data System (ADS)
Zaharia, Titus; Marre, Olivier; Prêteux, Françoise; Monjaux, Perrine
2008-02-01
This paper presents the FaceTOON system, a semi-automatic platform dedicated to the creation of verbal and emotional facial expressions, within the applicative framework of 2D cartoon production. The proposed FaceTOON platform makes it possible to rapidly create 3D facial animations with a minimum amount of user interaction. In contrast with existing commercial 3D modeling software, which usually requires advanced 3D graphics skills and competences from users, the FaceTOON system is based exclusively on 2D interaction mechanisms, the 3D modeling stage being completely transparent to the user. The system takes as input a neutral 3D face model, free of any facial features, and a set of 2D drawings representing the desired facial features. A 2D/3D virtual mapping procedure makes it possible to obtain a ready-for-animation model which can be directly manipulated and deformed to generate expressions. The platform includes a complete set of dedicated tools for 2D/3D interactive deformation, pose management, key-frame interpolation, and MPEG-4 compliant animation and rendering. The proposed FaceTOON system is currently being considered for industrial evaluation and commercialization by the Quadraxis company.
Bayesian Face Recognition and Perceptual Narrowing in Face-Space
ERIC Educational Resources Information Center
Balas, Benjamin
2012-01-01
During the first year of life, infants' face recognition abilities are subject to "perceptual narrowing", the end result of which is that observers lose the ability to distinguish previously discriminable faces (e.g. other-race faces) from one another. Perceptual narrowing has been reported for faces of different species and different races, in…
Developmental Changes in Perceptions of Attractiveness: A Role of Experience?
ERIC Educational Resources Information Center
Cooper, Philip A.; Geldart, Sybil S.; Mondloch, Catherine J.; Maurer, Daphne
2006-01-01
In three experiments, we traced the development of the adult pattern of judgments of attractiveness for faces that have been altered to have internal features in low, average, or high positions. Twelve-year-olds and adults demonstrated identical patterns of results: they rated faces with features in an average location as significantly more…
Facial Expression Recognition with Fusion Features Extracted from Salient Facial Areas.
Liu, Yanpeng; Li, Yibin; Ma, Xin; Song, Rui
2017-03-29
In the pattern recognition domain, deep architectures are currently widely used and have achieved good results. However, these deep architectures make particular demands, especially in terms of their requirements for big datasets and GPUs. Aiming to gain better results without deep networks, we propose a simplified algorithm framework using fusion features extracted from the salient areas of faces; the proposed algorithm has achieved better results than some deep architectures. To extract more effective features, this paper first defines the salient areas on faces. It normalizes the salient areas of the same location in different faces to the same size, so that more similar features can be extracted from different subjects. LBP and HOG features are extracted from the salient areas, the dimensionality of the fusion features is reduced by Principal Component Analysis (PCA), and several classifiers are applied to classify the six basic expressions at once. This paper proposes a method for determining salient areas that compares peak expression frames with neutral faces. It also proposes and applies the idea of normalizing the salient areas to align the specific areas that express the different expressions; as a result, the salient areas found from different subjects are the same size. In addition, gamma correction is applied to the LBP features in our algorithm framework, which improves our recognition rates significantly. By applying this algorithm framework, our research has gained state-of-the-art performance on the CK+ and JAFFE databases.
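A minimal numpy sketch of one slice of this pipeline is given below: LBP histograms from two fixed "salient areas", concatenated into a fusion vector and reduced by PCA. It omits the HOG features and gamma correction from the full framework, and the patch locations (upper and lower image halves) are placeholders rather than the paper's learned salient areas:

```python
import numpy as np

def lbp_map(img):
    """Basic 8-neighbour LBP code at each interior pixel."""
    c = img[1:-1, 1:-1]
    H, W = img.shape
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.int64)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
        code |= (n >= c).astype(np.int64) << bit  # one bit per neighbour
    return code

def lbp_histogram(img):
    """Normalised 256-bin LBP histogram of one salient area."""
    h = np.bincount(lbp_map(img).ravel(), minlength=256)
    return h / h.sum()

def pca_reduce(features, k):
    """SVD-based PCA on the fused (concatenated) feature vectors."""
    X = features - features.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T

rng = np.random.default_rng(1)
faces = rng.normal(size=(12, 32, 32))  # 12 stand-in face images
# "Fusion": concatenate histograms of two fixed areas (e.g. eyes, mouth)
fused = np.array([np.concatenate([lbp_histogram(f[:16]),
                                  lbp_histogram(f[16:])])
                  for f in faces])
reduced = pca_reduce(fused, k=5)
print(fused.shape, reduced.shape)
```

In the full framework the reduced vectors would then feed a conventional classifier (e.g. an SVM) over the six basic expressions.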
Wind Drifts at Viking 1 Landing Site
NASA Technical Reports Server (NTRS)
1997-01-01
This image is of so-called wind drifts seen at the Viking 1 landing site. These are somewhat different from the features seen at the Pathfinder site in two important ways. 1) These landforms have no apparent slip- or avalanche-face, as do both terrestrial dunes and the Pathfinder features, and may represent deposits of sediment falling from the air, as opposed to dune sand, which 'hops' or saltates along the ground; 2) these features may indicate erosion on one side, because of the layering and apparent scouring on their right sides. They may, therefore, have been deposited by a wind moving left to right, partly or weakly cemented or solidified by surface processes at some later time, then eroded by a second wind (right to left), exposing their internal structure.
Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is a division of the California Institute of Technology (Caltech).
Further Investigations of the Passive Optical Sample Assembly (POSA)-I Flight Experiment
NASA Technical Reports Server (NTRS)
Finckenor, Miria M.; Kamenetzky, Rachel R.; Vaughn, Jason A.; Mell, Richard; Deshpande, M. S.
2001-01-01
The Passive Optical Sample Assembly-I (POSA-I), part of the Mir Environmental Effects Payload (MEEP), was designed to study the combined effects of contamination, atomic oxygen, ultraviolet radiation, vacuum, thermal cycling, and other constituents of the space environment on spacecraft materials. The MEEP program is a Phase I International Space Station Risk Mitigation Experiment. Candidate materials for the International Space Station (ISS) were exposed in a specially designed "suitcase" carrier, with identical specimens facing either Mir or space. The payload was attached by EVA to the exterior of the Mir docking module during the STS-76 mission (fig. 1). It was removed during the STS-86 mission after an 18-month exposure. During the mission, it received approximately 7 x 10^19 atoms/cm^2 atomic oxygen, as calculated by polymer mass loss, and 413 ESH of solar ultraviolet radiation on the Mir-facing side. The side facing away from Mir received significant contaminant deposition, so atomic oxygen fluence has not been reliably determined. The side facing away from Mir received 571 ESH of solar UV. Contamination was observed on both the Mir-facing and space-facing sides of the POSA-I experiment, with a greater amount of deposition on the space-facing side than the Mir-facing side. The contamination has been determined to be outgassed silicone photofixed by ultraviolet radiation and converted to silicate by atomic oxygen interaction. Electron spectroscopy for chemical analysis (ESCA) with depth profiling indicated the presence of 26 - 31 nm silicate on the Mir-facing side and 500 - 1000 nm silicate on the space-facing side. The depth profiling also showed that the contaminant layer was uniform, with a small amount of carbon present on the surface and trace amounts of nitrogen, phosphorus, sulfur, and tin. The surface carbon layer is likely due to post-flight exposure in the laboratory and is similar to carbonaceous deposits on control samples.
EDAX and FTIR analysis concurred with ESCA on the presence of silicon, oxygen, and carbon. Nearly 400 samples were exposed on POSA-I, including materials such as thermal control coatings, polymeric films, optical materials, and multi-layer insulation blankets. A previous paper discussed the effects of the space environment exposure and contaminant deposition on candidate materials for ISS, including Z93P inorganic thermal control coating, various anodizes, and multi-layer insulation blankets. This paper details the investigation of environmental effects on the remainder of the POSA-I samples, particularly the innovative conductive thermal control coatings developed by AZ Technology of Huntsville, AL, and IIT Research Institute of Chicago, IL. The silicone/silicate contamination had a significant impact on the solar absorptance of white inorganic thermal control coatings on the space-facing side of POSA-I. The effect of contamination on electrical conductivity is discussed. Samples of conductive anodized aluminum developed by Boundary Technologies of Buffalo Grove, IL were also exposed on POSA-I. The effects of the space environment and contaminant deposition on the optical and electrical properties of the conductive anodized aluminum are discussed.
NASA Technical Reports Server (NTRS)
deGroh, Kim, K.; Dever, Joyce A.; Snyder, Aaron; Kaminski, Sharon; McCarthy, Catherine E.; Rapoport, Alison L.; Rucker, Rochelle N.
2006-01-01
A section of the retrieved Hubble Space Telescope (HST) solar array drive arm (SADA) multilayer insulation (MLI), which experienced 8.25 years of space exposure, was analyzed for environmental durability of the top layer of silver-Teflon (DuPont) fluorinated ethylene propylene (Ag-FEP). Because the SADA MLI had solar and anti-solar facing surfaces and was exposed to the space environment for a long duration, it provided a unique opportunity to study solar effects on the environmental degradation of Ag-FEP, a commonly used spacecraft thermal control material. Data obtained included tensile properties, solar absorptance, surface morphology and chemistry. The solar facing surface was found to be extremely embrittled and contained numerous through-thickness cracks. Tensile testing indicated that the solar facing surface lost 60% of its mechanical strength and 90% of its elasticity while the anti-solar facing surface had ductility similar to pristine FEP. The solar absorptance of both the solar facing surface (0.155 plus or minus 0.032) and the anti-solar facing surface (0.208 plus or minus 0.012) were found to be greater than pristine Ag-FEP (0.074). Solar facing and anti-solar facing surfaces were microscopically textured, and locations of isolated contamination were present on the anti-solar surface resulting in increased localized texturing. Yet, the overall texture was significantly more pronounced on the solar facing surface indicating a synergistic effect of combined solar exposure and increased heating with atomic oxygen erosion. The results indicate a very strong dependence of degradation, particularly embrittlement, upon solar exposure with orbital thermal cycling having a significant effect.
Precedence of the eye region in neural processing of faces
Issa, Elias; DiCarlo, James
2012-01-01
SUMMARY Functional magnetic resonance imaging (fMRI) has revealed multiple subregions in monkey inferior temporal cortex (IT) that are selective for images of faces over other objects. The earliest of these subregions, the posterior lateral face patch (PL), has not been studied previously at the neurophysiological level. Perhaps not surprisingly, we found that PL contains a high concentration of ‘face selective’ cells when tested with standard image sets comparable to those used previously to define the region at the level of fMRI. However, we here report that several different image sets and analytical approaches converge to show that nearly all face selective PL cells are driven by the presence of a single eye in the context of a face outline. Most strikingly, images containing only an eye, even when incorrectly positioned in an outline, drove neurons nearly as well as full face images, and face images lacking only this feature led to longer latency responses. Thus, bottom-up face processing is relatively local and linearly integrates features -- consistent with parts-based models -- grounding investigation of how the presence of a face is first inferred in the IT face processing hierarchy. PMID:23175821
Attention Misplaced: The Role of Diagnostic Features in the Face-Inversion Effect
ERIC Educational Resources Information Center
Hills, Peter J.; Ross, David A.; Lewis, Michael B.
2011-01-01
Inversion disproportionately impairs recognition of face stimuli compared to nonface stimuli arguably due to the holistic manner in which faces are processed. A qualification is put forward in which the first point fixated on is different for upright and inverted faces and this carries some of the face-inversion effect. Three experiments explored…
Facing Diabetes: What You Need to Know
Feature: Diabetes. Diabetes strikes millions of Americans, young and old, ...
Marinkovic, Ksenija; Courtney, Maureen G.; Witzel, Thomas; Dale, Anders M.; Halgren, Eric
2014-01-01
Although a crucial role of the fusiform gyrus (FG) in face processing has been demonstrated with a variety of methods, converging evidence suggests that face processing involves an interactive and overlapping processing cascade in distributed brain areas. Here we examine the spatio-temporal stages and their functional tuning to face inversion, presence and configuration of inner features, and face contour in healthy subjects during passive viewing. Anatomically-constrained magnetoencephalography (aMEG) combines high-density whole-head MEG recordings and distributed source modeling with high-resolution structural MRI. Each person's reconstructed cortical surface served to constrain noise-normalized minimum norm inverse source estimates. The earliest activity was estimated to the occipital cortex at ~100 ms after stimulus onset and was sensitive to an initial coarse level visual analysis. Activity in the right-lateralized ventral temporal area (inclusive of the FG) peaked at ~160 ms and was largest to inverted faces. Images containing facial features in the veridical and rearranged configuration irrespective of the facial outline elicited intermediate level activity. The M160 stage may provide structural representations necessary for downstream distributed areas to process identity and emotional expression. However, inverted faces additionally engaged the left ventral temporal area at ~180 ms and were uniquely subserved by bilateral processing. This observation is consistent with the dual route model and spared processing of inverted faces in prosopagnosia. The subsequent deflection, peaking at ~240 ms in the anterior temporal areas bilaterally, was largest to normal, upright faces. It may reflect initial engagement of the distributed network subserving individuation and familiarity. These results support dynamic models suggesting that processing of unfamiliar faces in the absence of a cognitive task is subserved by a distributed and interactive neural circuit. 
PMID:25426044
Exploring the perceptual spaces of faces, cars and birds in children and adults
Tanaka, James W.; Meixner, Tamara L.; Kantner, Justin
2011-01-01
While much developmental research has focused on the strategies that children employ to recognize faces, less is known about the principles governing the organization of face exemplars in perceptual memory. In this study, we tested a novel, child-friendly paradigm for investigating the organization of face, bird and car exemplars. Children ages 3–4, 5–6, 7–8, 9–10, 11–12 and adults were presented with 50/50 morphs of typical and atypical face, bird and car parent images. Participants were asked to judge whether the 50/50 morph more strongly resembled the typical or the atypical parent image. Young and older children and adults showed a systematic bias to the atypical faces and birds, but no bias toward the atypical cars. Collectively, these findings argue that by the age of 3, children encode and organize faces, birds and cars in a perceptual space that is strikingly similar to that of adults. Category organization for both children and adults follows Krumhansl’s (1978) distance-density principle in which the similarity between two exemplars is jointly determined by their physical appearance and the density of neighboring exemplars in the perceptual space. PMID:21676096
Du, Jing; Wang, Jian
2015-11-01
Bessel beams carrying orbital angular momentum (OAM) with helical phase fronts exp(ilφ)(l=0;±1;±2;…), where φ is the azimuthal angle and l corresponds to the topological number, are orthogonal with each other. This feature of Bessel beams provides a new dimension to code/decode data information on the OAM state of light, and the theoretical infinity of topological number enables possible high-dimensional structured light coding/decoding for free-space optical communications. Moreover, Bessel beams are nondiffracting beams having the ability to recover by themselves in the face of obstructions, which is important for free-space optical communications relying on line-of-sight operation. By utilizing the OAM and nondiffracting characteristics of Bessel beams, we experimentally demonstrate 12 m distance obstruction-free optical m-ary coding/decoding using visible Bessel beams in a free-space optical communication system. We also study the bit error rate (BER) performance of hexadecimal and 32-ary coding/decoding based on Bessel beams with different topological numbers. After receiving 500 symbols at the receiver side, a zero BER of hexadecimal coding/decoding is observed when the obstruction is placed along the propagation path of light.
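In standard textbook form (not specific to this experiment), an ideal Bessel beam carrying OAM combines a radial Bessel profile with the helical phase exp(ilφ); the orthogonality of those helical phases over the azimuth is what makes the topological number usable as a coding dimension:

```latex
% Ideal Bessel beam with topological charge l (textbook form):
E_l(r,\varphi,z) = A \, J_l(k_r r) \, e^{i l \varphi} \, e^{i k_z z},
\qquad k_r^2 + k_z^2 = k^2 .

% Orthogonality of the helical phase fronts over the azimuth,
% the basis of OAM m-ary coding/decoding:
\int_0^{2\pi} e^{i l \varphi} \, e^{-i l' \varphi} \, d\varphi
  = 2\pi \, \delta_{l l'} .
```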
Histopathologic Distinguishing Features Between Lupus and Lichenoid Keratosis on the Face.
Marsch, Amanda F; Dacso, Mara; High, Whitney A; Junkins-Hopkins, Jacqueline M
2015-12-01
The occurrence of lichenoid keratosis (LK) on the face is not well characterized, and the histopathologic distinction between LK and lupus erythematosus (LE) occurring on the face is often indeterminate. The authors aimed to describe differences between LE and LK occurring on the face by hematoxylin and eosin alone. Cases of LK and LE were obtained using computer-driven queries. Clinical correlation was obtained for each lupus case. Other diagnoses were excluded for the LK cases. Hematoxylin and eosin-stained sections were reviewed. Forty-five cases of LK and 30 cases of LE occurring on the face were identified. Shared features included follicular involvement, epidermal atrophy, pigment incontinence, paucity of eosinophils, and basket-weave orthokeratosis. Major differences between LK and LE, respectively, included perivascular inflammation (11%, 90%), high Civatte bodies (44%, 7%), solar elastosis (84%, 33%), a predominant pattern of cell-poor vacuolar interface dermatitis (7%, 73%), compact follicular plugging (11%, 50%), hemorrhage (22%, 70%), mucin (0%, 77%), hypergranulosis (44%, 17%), and edema (7%, 60%). A predominant pattern of band-like lichenoid interface was seen more commonly in LK as compared with LE (93% vs. 27%). The authors established the occurrence of LK on the face and identified features to help distinguish LK from LE. Follicular involvement, basket-weave orthokeratosis, pigment incontinence, paucity of eosinophils, and epidermal atrophy were not reliable distinguishing features. Perivascular inflammation, cell-poor vacuolar interface, compact follicular plugging, mucin, hemorrhage, and edema favored LE. High Civatte bodies, band-like lichenoid interface, and solar elastosis favored LK.
ERIC Educational Resources Information Center
Bahrick, Lorraine E.; Krogh-Jespersen, Sheila; Argumosa, Melissa A.; Lopez, Hassel
2014-01-01
Although infants and children show impressive face-processing skills, little research has focused on the conditions that facilitate versus impair face perception. According to the intersensory redundancy hypothesis (IRH), face discrimination, which relies on detection of visual featural information, should be impaired in the context of…
Audio-video feature correlation: faces and speech
NASA Astrophysics Data System (ADS)
Durand, Gwenael; Montacie, Claude; Caraty, Marie-Jose; Faudemay, Pascal
1999-08-01
This paper presents a study of the correlation of features automatically extracted from the audio stream and the video stream of audiovisual documents. In particular, we were interested in finding out whether speech analysis tools could be combined with face detection methods, and to what extent they should be combined. A generic audio signal partitioning algorithm was first used to detect Silence/Noise/Music/Speech segments in a full-length movie. A generic object detection method was applied to the keyframes extracted from the movie in order to detect the presence or absence of faces. The correlation between the presence of a face in the keyframes and of the corresponding voice in the audio stream was studied. A third stream, the script of the movie, is warped onto the speech channel in order to automatically label faces appearing in the keyframes with the name of the corresponding character. We naturally found that the extracted audio and video features were related in many cases, and that significant benefits can be obtained from the joint use of audio and video analysis methods.
NASA Astrophysics Data System (ADS)
Narici, Livio; Berger, Thomas; Burmeister, Sönke; Di Fino, Luca; Rizzo, Alessandro; Matthiä, Daniel; Reitz, Günther
2017-08-01
The solar system exploration by humans requires successfully dealing with the issue of radiation exposure. The scientific aspect of this issue is twofold: knowing the radiation environment the astronauts are going to face, and linking radiation exposure to health risks. Here we focus on the first issue. It is generally agreed that the final tool to describe the radiation environment in a space habitat will be a model featuring the amount of detail needed to perform a meaningful risk assessment. The model should also take into account the shield changes due to the movement of materials inside the habitat, which in turn produce changes in the radiation environment. This model will have to undergo a final validation with a radiation field of similar complexity. The International Space Station (ISS) is a space habitat whose internal radiation environment is similar to what will be found in habitats in deep space, if we use measurements acquired only during high-latitude passages (where the effects of the Earth's magnetic field are reduced). Active detectors, providing time information, which can easily select data from different orbital sections, are the ones best fulfilling the requirements for these kinds of measurements. The exploitation of the radiation measurements performed in the ISS by all the available instruments is therefore mandatory to provide the largest possible database to the scientific community, to be merged with detailed Computer Aided Design (CAD) models, in the quest for a full model validation. While some efforts at comparing results from multiple active detectors have been attempted, a thorough study of a procedure to merge data into a single data matrix, in order to provide the best validation set for radiation environment models, has never been attempted.
The aim of this paper is to provide such a procedure, to apply it to two of the best-performing active detector systems in the ISS: the Anomalous Long Term Effects in Astronauts (ALTEA) instrument and the DOSimetry TELescope (DOSTEL) detectors, operated in the frame of the DOSIS and DOSIS 3D projects onboard the ISS, and to present combined results exploiting the features of each of the two apparatuses.
Cheng, Xue Jun; McCarthy, Callum J; Wang, Tony S L; Palmeri, Thomas J; Little, Daniel R
2018-06-01
Upright faces are thought to be processed more holistically than inverted faces. In the widely used composite face paradigm, holistic processing is inferred from interference in recognition performance from a to-be-ignored face half for upright and aligned faces compared with inverted or misaligned faces. We sought to characterize the nature of holistic processing in composite faces in computational terms. We use logical-rule models (Fifić, Little, & Nosofsky, 2010) and Systems Factorial Technology (Townsend & Nozawa, 1995) to examine whether composite faces are processed through pooling top and bottom face halves into a single processing channel-coactive processing-which is one common mechanistic definition of holistic processing. By specifically operationalizing holistic processing as the pooling of features into a single decision process in our task, we are able to distinguish it from other processing models that may underlie composite face processing. For instance, a failure of selective attention might result even when top and bottom components of composite faces are processed in serial or in parallel without processing the entire face coactively. Our results show that performance is best explained by a mixture of serial and parallel processing architectures across all 4 upright and inverted, aligned and misaligned face conditions. The results indicate multichannel, featural processing of composite faces in a manner inconsistent with the notion of coactivity. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Kernel-aligned multi-view canonical correlation analysis for image recognition
NASA Astrophysics Data System (ADS)
Su, Shuzhi; Ge, Hongwei; Yuan, Yun-Hao
2016-09-01
Existing kernel-based correlation analysis methods mainly adopt a single kernel in each view. However, a single kernel is usually insufficient to characterize the nonlinear distribution information of a view. To solve this problem, we transform each original feature vector into a 2-dimensional feature matrix by means of kernel alignment, and then propose a novel kernel-aligned multi-view canonical correlation analysis (KAMCCA) method on the basis of the feature matrices. Our proposed method can simultaneously employ multiple kernels to better capture the nonlinear distribution information of each view, so that the correlation features learned by KAMCCA have strong discriminating power in real-world image recognition. Extensive experiments are designed on five real-world image datasets, including NIR face images, thermal face images, visible face images, handwritten digit images, and object images. Promising experimental results on these datasets have demonstrated the effectiveness of our proposed method.
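As a point of reference, the single-view-pair linear baseline that kernel CCA methods such as KAMCCA generalize is classical two-view CCA. A compact numpy sketch on synthetic two-view data (dimensions, the regularizer, and the toy "views" are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(9)

def cca_correlations(X, Y, reg=1e-6):
    """Classical two-view CCA: the canonical correlations are the singular
    values of the whitened cross-covariance matrix."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    M = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    return np.linalg.svd(M, compute_uv=False)

# Two toy "views" (e.g. NIR and visible features) sharing one latent signal.
z = rng.normal(size=(500, 1))
X = z @ rng.normal(size=(1, 5)) + 0.1 * rng.normal(size=(500, 5))
Y = z @ rng.normal(size=(1, 4)) + 0.1 * rng.normal(size=(500, 4))

corrs = cca_correlations(X, Y)
print(corrs.round(2))   # leading canonical correlation is close to 1
```

KAMCCA replaces each raw feature vector with a kernel-aligned feature matrix built from multiple kernels before the correlation analysis; the linear machinery above is only the starting point.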
Deep Convolutional Neural Networks for Classifying Body Constitution Based on Face Image.
Huan, Er-Yang; Wen, Gui-Hua; Zhang, Shi-Jun; Li, Dan-Yang; Hu, Yang; Chang, Tian-Yuan; Wang, Qing; Huang, Bing-Lin
2017-01-01
Body constitution classification is the basis and core content of traditional Chinese medicine constitution research. Its aim is to extract the relevant laws from complex constitution phenomena and ultimately build a constitution classification system. Traditional identification methods, such as questionnaires, have the disadvantages of inefficiency and low accuracy. This paper proposes a body constitution recognition algorithm based on a deep convolutional neural network, which can classify individual constitution types from face images. The proposed model first uses the convolutional neural network to extract features from the face image and then combines the extracted features with color features. Finally, the fusion features are input to a Softmax classifier to obtain the classification result. Comparison experiments show that the proposed algorithm achieves an accuracy of 65.29% for constitution classification, and its performance was accepted by Chinese medicine practitioners.
Recognition Memory for Realistic Synthetic Faces
Yotsumoto, Yuko; Kahana, Michael J.; Wilson, Hugh R.; Sekuler, Robert
2006-01-01
A series of experiments examined short-term recognition memory for trios of briefly-presented, synthetic human faces derived from three real human faces. The stimuli were graded series of faces, which differed by varying known amounts from the face of the average female. Faces based on each of the three real faces were transformed so as to lie along orthogonal axes in a 3-D face space. Experiment 1 showed that the synthetic faces' perceptual similarity structure strongly influenced recognition memory. Results were fit by NEMo, a noisy exemplar model of perceptual recognition memory. The fits revealed that recognition memory was influenced both by the similarity of the probe to series items, and by the similarities among the series items themselves. Non-metric multi-dimensional scaling (MDS) showed that faces' perceptual representations largely preserved the 3-D space in which the face stimuli were arrayed. NEMo gave a better account of the results when similarity was defined as perceptual, MDS similarity rather than physical proximity of one face to another. Experiment 2 confirmed the importance of within-list homogeneity directly, without mediation of a model. We discuss the affinities and differences between visual memory for synthetic faces and memory for simpler stimuli. PMID:17948069
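The summed-similarity decision rule at the heart of noisy exemplar models like NEMo can be sketched generically as follows. The toy 3-D face space, parameter values, and function name are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(1)

def says_old(probe, study_items, tau=2.0, noise=0.05, criterion=0.5):
    """Generic noisy-exemplar rule: respond 'old' when the summed
    exponential similarity of the probe to the noisy memory traces of
    the study items exceeds a fixed criterion."""
    traces = study_items + rng.normal(0, noise, study_items.shape)
    d = np.linalg.norm(traces - probe, axis=1)    # distances in face space
    return np.exp(-tau * d).sum() > criterion     # summed similarity

# Three studied faces as points in a toy 3-D face space.
study = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])

old_says_yes = says_old(np.array([1.0, 0.0, 0.0]), study)  # matches an item
new_says_yes = says_old(np.array([3.0, 3.0, 3.0]), study)  # far from all items
print(bool(old_says_yes), bool(new_says_yes))
```

The abstract's point that within-list similarity matters falls out of the summed form: a probe near a tight cluster of study items accumulates similarity from all of them, not just its nearest neighbor.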
ERIC Educational Resources Information Center
Lauer, Patricia A.; Christopher, Debra E.; Firpo-Triplett, Regina; Buchting, Francisco
2014-01-01
A narrative literature review was conducted to identify the design features of effective short-term face-to-face professional development (PD) events. The 23 reviewed studies described PD with durations of 30 hours or less and involved participants in education or human service-related professions. Design features associated with positive impacts…
An Inner Face Advantage in Children's Recognition of Familiar Peers
ERIC Educational Resources Information Center
Ge, Liezhong; Anzures, Gizelle; Wang, Zhe; Kelly, David J.; Pascalis, Olivier; Quinn, Paul C.; Slater, Alan M.; Yang, Zhiliang; Lee, Kang
2008-01-01
Children's recognition of familiar own-age peers was investigated. Chinese children (4-, 8-, and 14-year-olds) were asked to identify their classmates from photographs showing the entire face, the internal facial features only, the external facial features only, or the eyes, nose, or mouth only. Participants from all age groups were familiar with…
Investigating Relationships between Features of Learning Designs and Student Learning Outcomes
ERIC Educational Resources Information Center
McNaught, Carmel; Lam, Paul; Cheng, Kin Fai
2012-01-01
This article reports a study of eLearning in 21 courses in Hong Kong universities that had a blended design of face-to-face classes combined with online learning. The main focus of the study was to examine possible relationships between features of online learning designs and student learning outcomes. Data-collection strategies included expert…
Sorted Index Numbers for Privacy Preserving Face Recognition
NASA Astrophysics Data System (ADS)
Wang, Yongjin; Hatzinakos, Dimitrios
2009-12-01
This paper presents a novel approach for changeable and privacy preserving face recognition. We first introduce a new method of biometric matching using the sorted index numbers (SINs) of feature vectors. Since it is impossible to recover any of the exact values of the original features, the transformation from original features to the SIN vectors is noninvertible. To address the irrevocable nature of biometric signals whilst obtaining stronger privacy protection, a random projection-based method is employed in conjunction with the SIN approach to generate changeable and privacy preserving biometric templates. The effectiveness of the proposed method is demonstrated on a large generic data set, which contains images from several well-known face databases. Extensive experimentation shows that the proposed solution may improve the recognition accuracy.
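A toy illustration of the SIN idea as the abstract describes it: project a feature vector with a user-specific random matrix, keep only the sort order of the projected coefficients (which cannot be inverted back to the original values), and match templates by positional agreement. Dimensions, noise level, and the similarity measure are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def sin_template(feature_vec, projection):
    """Sorted-index-number template: project the feature vector, then keep
    only the sort order of the projected coefficients. The original values
    cannot be recovered from the order alone (non-invertible)."""
    return np.argsort(projection @ feature_vec)

def sin_similarity(t1, t2):
    """Fraction of positions whose index number agrees between templates."""
    return (t1 == t2).mean()

d, k = 64, 32
P = rng.normal(size=(k, d))                      # user-specific random projection

face_a = rng.normal(size=d)
face_a_noisy = face_a + rng.normal(0, 0.02, d)   # same face, new image
face_b = rng.normal(size=d)                      # a different face

ta, ta2, tb = (sin_template(v, P) for v in (face_a, face_a_noisy, face_b))
s_same, s_diff = sin_similarity(ta, ta2), sin_similarity(ta, tb)
print(round(float(s_same), 2), round(float(s_diff), 2))
```

Changeability comes from the projection: issuing a new random matrix P yields an unrelated template from the same face, so a compromised template can be revoked.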
Toward Optimal Manifold Hashing via Discrete Locally Linear Embedding.
Rongrong Ji; Hong Liu; Liujuan Cao; Di Liu; Yongjian Wu; Feiyue Huang
2017-11-01
Binary code learning, also known as hashing, has received increasing attention in large-scale visual search. By transforming high-dimensional features into binary codes, the original Euclidean distance is approximated via the Hamming distance. More recently, it has been advocated that it is the manifold distance, rather than the Euclidean distance, that should be preserved in the Hamming space. However, directly preserving the manifold structure by hashing remains an open problem. In particular, one first needs to build the local linear embedding in the original feature space and then quantize this embedding into binary codes. Such two-step coding is problematic and suboptimal. Besides, the off-line learning is extremely time- and memory-consuming, since it needs to calculate the similarity matrix of the original data. In this paper, we propose a novel hashing algorithm, termed discrete locally linear embedding hashing (DLLH), which addresses the above challenges. DLLH directly reconstructs the manifold structure in the Hamming space, learning optimal hash codes that maintain the local linear relationships of data points. To learn discrete locally linear embedding codes, we further propose a discrete optimization algorithm with an iterative parameter-updating scheme. Moreover, an anchor-based acceleration scheme, termed Anchor-DLLH, is introduced, which approximates the large similarity matrix by the product of two low-rank matrices. Experimental results on three widely used benchmark datasets, i.e., CIFAR10, NUS-WIDE, and YouTube Face, show the superior performance of the proposed DLLH over state-of-the-art approaches.
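For intuition about how Hamming distance can stand in for a geometric distance (the premise DLLH builds on), here is the classic random-hyperplane sign-hashing baseline, not DLLH itself: for such codes, the expected normalized Hamming distance between two vectors equals the angle between them divided by pi. All sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def sign_hash(X, planes):
    """Random-hyperplane sign hashing: each bit records which side of a
    random hyperplane a point falls on."""
    return (X @ planes.T > 0).astype(np.uint8)

n, d, bits = 200, 32, 256
X = rng.normal(size=(n, d))
planes = rng.normal(size=(bits, d))
H = sign_hash(X, planes)

# For these bits, E[Hamming distance / bits] = angle(x, y) / pi, so the
# Hamming space approximately preserves angular geometry.
cos = X[0] @ X[1] / (np.linalg.norm(X[0]) * np.linalg.norm(X[1]))
angle_frac = np.arccos(np.clip(cos, -1.0, 1.0)) / np.pi
hamming_frac = (H[0] != H[1]).mean()
print(round(float(angle_frac), 2), round(float(hamming_frac), 2))
```

DLLH's argument is that this kind of global geometric preservation is the wrong target when the data lie on a manifold; it optimizes the codes to preserve local linear reconstruction relationships instead.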
Holistic Face Processing Is Mature at 4 Years of Age: Evidence from the Composite Face Effect
ERIC Educational Resources Information Center
de Heering, Adelaide; Houthuys, Sarah; Rossion, Bruno
2007-01-01
Although it is acknowledged that adults integrate features into a representation of the whole face, there is still some disagreement about the onset and developmental course of holistic face processing. We tested adults and children from 4 to 6 years of age with the same paradigm measuring holistic face processing through an adaptation of the…
Robotic End Effectors for Hard-Rock Climbing
NASA Technical Reports Server (NTRS)
Kennedy, Brett; Leger, Patrick
2004-01-01
Special-purpose robot hands (end effectors) now under development are intended to enable robots to traverse cliffs much as human climbers do. Potential applications for robots having this capability include scientific exploration (both on Earth and other rocky bodies in space), military reconnaissance, and outdoor search and rescue operations. Until now, enabling robots to traverse cliffs has been considered too difficult a task because of the perceived need of prohibitively sophisticated planning algorithms as well as end effectors as dexterous as human hands. The present end effectors are being designed to enable robots to attach themselves to typical rock-face features with less planning and simpler end effectors. This advance is based on the emulation of the equipment used by human climbers rather than the emulation of the human hand. Climbing-aid equipment, specifically cams, aid hooks, and cam hooks, are used by sport climbers when a quick ascent of a cliff is desired (see Figure 1). Currently two different end-effector designs have been created. The first, denoted the simple hook emulator, consists of three "fingers" arranged around a central "palm." Each finger emulates the function of a particular type of climbing hook (aid hook, wide cam hook, and a narrow cam hook). These fingers are connected to the palm via a mechanical linkage actuated with a leadscrew/nut. This mechanism allows the fingers to be extended or retracted. The second design, denoted the advanced hook emulator (see Figure 2), shares these features, but it incorporates an aid hook and a cam hook into each finger. The spring-loading of the aid hook allows the passive selection of the type of hook used. The end effectors can be used in several different modes. In the aid-hook mode, the aid hook on one of the fingers locks onto a horizontal ledge while the other two fingers act to stabilize the end effector against the cliff face. 
In the cam-hook mode, the broad, flat tip of the cam hook is inserted into a non-horizontal crack in the cliff face. A subsequent transfer of weight onto the end effector causes the tip to rotate within the crack, creating a passive, self-locking action of the hook relative to the crack. In the advanced hook emulator, the aid hook is pushed into its retracted position by contact with the cliff face as the cam hook tip is inserted into the crack. When a cliff face contains relatively large pockets or cracks, another type of passive self-locking can be used. Emulating the function of the piece of climbing equipment called a "cam" (note: not the same as a "cam hook"; see Figure 1), the fingers can be fully retracted and the entire end effector inserted into the feature. The fingers are then extended as far as the feature allows. Any weight then transferred to the end effector will tend to extend the fingers further due to frictional force, passively increasing the grip on the feature. In addition to the climbing modes, these end effectors can be used to walk on (either on the palm or the fingertips) and to grasp objects by fully extending the fingers.
The functional basis of face evaluation
Oosterhof, Nikolaas N.; Todorov, Alexander
2008-01-01
People automatically evaluate faces on multiple trait dimensions, and these evaluations predict important social outcomes, ranging from electoral success to sentencing decisions. Based on behavioral studies and computer modeling, we develop a 2D model of face evaluation. First, using a principal components analysis of trait judgments of emotionally neutral faces, we identify two orthogonal dimensions, valence and dominance, that are sufficient to describe face evaluation and show that these dimensions can be approximated by judgments of trustworthiness and dominance. Second, using a data-driven statistical model for face representation, we build and validate models for representing face trustworthiness and face dominance. Third, using these models, we show that, whereas valence evaluation is more sensitive to features resembling expressions signaling whether the person should be avoided or approached, dominance evaluation is more sensitive to features signaling physical strength/weakness. Fourth, we show that important social judgments, such as threat, can be reproduced as a function of the two orthogonal dimensions of valence and dominance. The findings suggest that face evaluation involves an overgeneralization of adaptive mechanisms for inferring harmful intentions and the ability to cause harm and can account for rapid, yet not necessarily accurate, judgments from faces. PMID:18685089
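The first step in the abstract, recovering two orthogonal evaluative dimensions from trait judgments, is ordinary principal components analysis. A minimal sketch with synthetic ratings (the data, face count, and trait count here are stand-ins, not the authors' materials):

```python
import numpy as np

# Hypothetical data: 60 faces rated on 10 trait scales (e.g., trustworthy,
# dominant, threatening, ...); rows = faces, columns = traits.
rng = np.random.default_rng(0)
ratings = rng.normal(size=(60, 10))

# Center each trait dimension, then take the principal components via SVD.
centered = ratings - ratings.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)

# The first two components play the role of the valence and dominance
# axes in the abstract's 2D model; their loading vectors are orthogonal.
pc_loadings = vt[:2]                    # shape (2, 10)
face_coords = centered @ pc_loadings.T  # each face as a point in 2D

# Fraction of rating variance captured by the 2D model.
explained = (s[:2] ** 2).sum() / (s ** 2).sum()
print(face_coords.shape, round(float(explained), 2))
```

With real judgment data, the two components would then be interpreted by correlating them with the trustworthiness and dominance scales, as the abstract describes.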
Cascaded K-means convolutional feature learner and its application to face recognition
NASA Astrophysics Data System (ADS)
Zhou, Daoxiang; Yang, Dan; Zhang, Xiaohong; Huang, Sheng; Feng, Shu
2017-09-01
Considerable effort has been devoted to devising image representations. However, handcrafted methods require strong domain knowledge and show low generalization ability, while conventional feature learning methods require enormous training data and extensive parameter-tuning experience. We present a lightweight feature learner that addresses these problems, with application to face recognition; it shares a topology similar to that of a convolutional neural network. Our model is divided into three components: a cascaded convolution filter bank learning layer, a nonlinear processing layer, and a feature pooling layer. Specifically, in the filter learning layer, we use K-means to learn convolution filters. Features are extracted by convolving images with the learned filters. Afterward, in the nonlinear processing layer, the hyperbolic tangent is employed to capture nonlinearity in the features. In the feature pooling layer, to remove redundant information and incorporate the spatial layout, we exploit a multilevel spatial pyramid second-order pooling technique to pool the features in subregions and concatenate them together as the final representation. Extensive experiments on four representative datasets demonstrate the effectiveness and robustness of our model to various variations, yielding competitive recognition results on extended Yale B and FERET. In addition, our method achieves the best identification performance on the AR and Labeled Faces in the Wild datasets among the comparative methods.
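The three-layer pipeline the abstract describes can be sketched end to end in a few dozen lines. This toy version follows the same structure (K-means-learned filters, tanh nonlinearity, pooling) but substitutes plain spatial mean pooling for the paper's multilevel second-order pooling, and runs on random stand-in "faces":

```python
import numpy as np

def learn_filters(images, k=8, patch=5, n_patches=2000, iters=10, seed=0):
    """Learn k convolution filters with plain K-means on random patches."""
    rng = np.random.default_rng(seed)
    ps = []
    for _ in range(n_patches):
        img = images[rng.integers(len(images))]
        y = rng.integers(img.shape[0] - patch + 1)
        x = rng.integers(img.shape[1] - patch + 1)
        p = img[y:y + patch, x:x + patch].ravel()
        ps.append(p - p.mean())              # remove the DC component
    X = np.array(ps)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):                   # standard Lloyd iterations
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        lab = d.argmin(1)
        for j in range(k):
            if np.any(lab == j):
                centers[j] = X[lab == j].mean(0)
    return centers.reshape(k, patch, patch)

def convolve_valid(img, f):
    """Valid-mode 2D correlation (enough for a feature-map sketch)."""
    h, w = f.shape
    out = np.zeros((img.shape[0] - h + 1, img.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + h, j:j + w] * f).sum()
    return out

def represent(img, filters):
    """Filter bank -> tanh nonlinearity -> 2x2 spatial mean pooling."""
    feats = []
    for f in filters:
        m = np.tanh(convolve_valid(img, f))
        hh, hw = m.shape[0] // 2, m.shape[1] // 2
        for cell in (m[:hh, :hw], m[:hh, hw:], m[hh:, :hw], m[hh:, hw:]):
            feats.append(cell.mean())
    return np.array(feats)

rng = np.random.default_rng(1)
faces = [rng.normal(size=(20, 20)) for _ in range(5)]
filters = learn_filters(faces, k=4)
vec = represent(faces[0], filters)
print(vec.shape)   # one 16-dim descriptor (4 filters x 4 pooling cells)
```

The resulting descriptors would then feed an ordinary classifier; in the paper, the pooling stage additionally captures second-order (covariance) statistics per subregion.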
Event-related potentials to structural familiar face incongruity processing.
Jemel, B; George, N; Olivares, E; Fiori, N; Renault, B
1999-07-01
Thirty scalp sites were used to investigate the specific topography of the event-related potentials (ERPs) related to face associative priming when the masked eyes of familiar faces were completed with either the proper features or incongruent ones. The enhanced negativity of the N210 and N350, due to structural incongruity of the faces, has a "category-specific" inferotemporal localization on the scalp. Additional analyses support the existence of multiple ERP features within the temporal interval typically associated with the N400 (N350 and N380), involving occipitotemporal and centroparietal areas. Seven reliable dipole locations were evidenced using the brain electrical source analysis algorithm. Some of these localizations (fusiform, parahippocampal) are already known to be involved in face recognition, the others relating to general cognitive processes tied to the task's demands. Because of their specific topography, the observed effects suggest that the face structural congruency process might involve early specialized neocortical areas in parallel with cortical memory circuits in the integration of perceptual and cognitive face processing.
Piepers, Daniel W.; Robbins, Rachel A.
2012-01-01
It is widely agreed that the human face is processed differently from other objects. However, there is a lack of consensus on what is meant by the wide array of terms used to describe this “special” face processing (e.g., holistic and configural) and the perceptually relevant information within a face (e.g., relational properties and configuration). This paper will review existing models of holistic/configural processing, discuss how they differ from one another conceptually, and review the wide variety of measures used to tap into these concepts. In general we favor a model where holistic processing of a face includes some or all of the interrelations between features and has separate coding for features. However, some aspects of the model remain unclear. We propose the use of moving faces as a way of clarifying what types of information are included in the holistic representation of a face. PMID:23413184
Cross-Category Adaptation: Objects Produce Gender Adaptation in the Perception of Faces
Javadi, Amir Homayoun; Wee, Natalie
2012-01-01
Adaptation aftereffects have been found for low-level visual features such as colour, motion, and shape perception, as well as for higher-level features such as gender, race, and identity in domains such as faces and biological motion. It is not yet clear whether adaptation effects in humans extend beyond this set of higher-order features. The aim of this study was to investigate whether objects highly associated with one gender, e.g., high heels for females or electric shavers for males, can modulate gender perception of a face. In two separate experiments, we adapted subjects to a series of objects highly associated with one gender and subsequently asked them to judge the gender of an ambiguous face. Results showed that participants are more likely to perceive an ambiguous face as male after being exposed to objects highly associated with females, and vice versa. A gender adaptation aftereffect was obtained despite the adaptor and test stimuli being from different global categories (objects and faces, respectively). These findings show that our perception of gender from faces is strongly affected by our environment and recent experience. This suggests two possible mechanisms: (a) perception of the gender associated with an object shares at least some brain areas with those responsible for gender perception of faces, and (b) adaptation to gender, a high-level concept, can modulate brain areas involved in facial gender perception through top-down processes. PMID:23049942
NASA Astrophysics Data System (ADS)
Miwa, Shotaro; Kage, Hiroshi; Hirai, Takashi; Sumi, Kazuhiko
We propose a probabilistic face recognition algorithm for Access Control Systems (ACSs). Compared with existing ACSs that use low-cost IC cards, face recognition has advantages in usability and security: it does not require people to hold cards over scanners, and it does not accept impostors carrying authorized cards. Face recognition therefore attracts more interest in security markets than IC cards do. But in security markets where low-cost ACSs exist, price competition matters, and there is a limit on the quality of available cameras and image control. ACSs using face recognition are therefore required to handle much lower-quality images, such as defocused and poorly gain-controlled images, than high-security systems such as immigration control. To tackle such image-quality problems, we developed a face recognition algorithm based on a probabilistic model that combines a variety of image-difference features, trained by Real AdaBoost, with their prior probability distributions. This makes it possible to evaluate and use only the reliable features among those trained during each authentication, achieving high recognition performance. A field evaluation using a pseudo access control system installed in our office shows that the proposed system achieves a constantly high recognition rate independent of face image quality: its EER (Equal Error Rate) is about four times lower, under a variety of image conditions, than that of a system without prior probability distributions. By contrast, using image-difference features without priors is sensitive to image quality. We also evaluated PCA; it has worse but constant performance because of its general optimization over all the data. Compared with PCA, Real AdaBoost without prior distributions performs twice as well under good image conditions, but degrades to PCA-level performance under poor image conditions.
How the brain assigns a neural tag to arbitrary points in a high-dimensional space
NASA Astrophysics Data System (ADS)
Stevens, Charles
Brains in almost all organisms need to deal with very complex stimuli. For example, most mammals are very good at face recognition, and faces are very complex objects indeed: modern face recognition software represents a face as a point in a 10,000-dimensional space. Every human must be able to learn to recognize any of the 7 billion faces in the world, and can recognize familiar faces after a display of the face is viewed for only a few hundred milliseconds. Because we do not understand how faces are assigned locations in a high-dimensional space by the brain, attacking the problem of how face recognition is accomplished is very difficult. But a much easier problem of the same sort can be studied for odor recognition. For the mouse, each odor is assigned a point in a 1000-dimensional space, and the fruit fly assigns any odor a location in only a 50-dimensional space. A fly has about 50 distinct types of odorant receptor neurons (ORNs), each of which produces nerve impulses at a specific rate for each different odor. The pattern of firing produced across the 50 ORNs is called a `combinatorial odor code', and this code assigns every odor a point in a 50-dimensional space that is used to identify the odor. In order to learn the odor, the brain must alter the strength of synapses. The combinatorial code cannot itself be used to change synaptic strength because all odors use the same neurons to form the code, so all synapses would be changed for any odor and the odors could not be distinguished. In order to learn an odor, the brain must assign a set of neurons, the odor tag, with two properties: (1) the tag should make use of all of the information available about the odor, and (2) any two tags should overlap as little as possible (so that one odor does not modify synapses used by other odors). In this talk, I will explain how the olfactory systems of both the fruit fly and the mouse produce a tag for each odor that has these two properties.
Supported by NSF.
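The two tag properties, using all available information while keeping tags nearly non-overlapping, are captured by a sparse random projection followed by a winner-take-all step, the scheme usually attributed to the fly mushroom body. A toy sketch (the dimensions, sparsity, and connection probability are illustrative, not taken from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
N_ORN, N_TAG, ACTIVE = 50, 2000, 100   # ~5% of tag neurons stay active

# Sparse random projection: each tag neuron samples a few ORNs,
# loosely modeled on Kenyon cells in the fly mushroom body.
proj = (rng.random((N_TAG, N_ORN)) < 0.12).astype(float)

def tag(odor):
    """Binary tag: project, then keep only the top-ACTIVE responders."""
    act = proj @ (odor - odor.mean())   # centering removes degree bias
    thresh = np.sort(act)[-ACTIVE]      # winner-take-all threshold
    return act >= thresh

odor_a = rng.random(N_ORN)             # combinatorial ORN firing rates
odor_b = rng.random(N_ORN)

ta, tb = tag(odor_a), tag(odor_b)
overlap = (ta & tb).sum() / ACTIVE
print(ta.sum(), overlap)               # sparse tags, small overlap
```

Because each odor keeps only a tiny, odor-specific set of tag neurons active, synaptic changes made while learning one odor rarely touch the synapses used by another.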
Face recognition via edge-based Gabor feature representation for plastic surgery-altered images
NASA Astrophysics Data System (ADS)
Chude-Olisah, Chollette C.; Sulong, Ghazali; Chude-Okonkwo, Uche A. K.; Hashim, Siti Z. M.
2014-12-01
Plastic surgery procedures on the face introduce skin texture variations between images of the same person (intra-subject), thereby making the task of face recognition more difficult than in the normal scenario. Usually, in contemporary face recognition systems, the original gray-level face image is used as input to the Gabor descriptor, which translates to encoding some texture properties of the face image. The texture-encoding process significantly degrades the performance of such systems in the case of plastic surgery due to the presence of surgically induced intra-subject variations. Based on the proposition that the shape of significant facial components such as eyes, nose, eyebrow, and mouth remains unchanged after plastic surgery, this paper employs an edge-based Gabor feature representation approach for the recognition of surgically altered face images. We use the edge information, which depends on the shapes of the significant facial components, to address the plastic surgery-induced texture variation problems. To ensure that the significant facial components represent useful edge information with little or no false edges, a simple illumination normalization technique is proposed for preprocessing. A Gabor wavelet is applied to the edge image to accentuate the uniqueness of the significant facial components for discriminating among different subjects. The performance of the proposed method is evaluated on the Georgia Tech (GT) and the Labeled Faces in the Wild (LFW) databases with illumination and expression problems, and on the plastic surgery database with texture changes. Results show that the proposed edge-based Gabor feature representation approach is robust against plastic surgery-induced face variations amidst expression and illumination problems and outperforms the existing plastic surgery face recognition methods reported in the literature.
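The core of such a representation, a Gabor filter bank applied to an edge map rather than to gray levels, can be sketched as follows. The Sobel edge detector and the filter parameters here are illustrative stand-ins for the paper's preprocessing, and the input is a random placeholder image:

```python
import numpy as np

def conv2(img, k):
    """Valid-mode 2D correlation."""
    h, w = k.shape
    out = np.zeros((img.shape[0] - h + 1, img.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + h, j:j + w] * k).sum()
    return out

def gabor_kernel(size=9, wavelength=4.0, theta=0.0, sigma=2.5):
    """Real part of a Gabor filter: Gaussian envelope x cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * xr / wavelength)

def sobel_edges(img):
    """Gradient-magnitude edge map (stand-in for the paper's edge input)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    return np.hypot(conv2(img, kx), conv2(img, kx.T))

rng = np.random.default_rng(0)
face = rng.random((32, 32))
edges = sobel_edges(face)                       # 30x30 edge map
feats = [np.abs(conv2(edges, gabor_kernel(theta=t)))
         for t in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
descriptor = np.concatenate([f.ravel() for f in feats])
print(edges.shape, descriptor.shape)
```

Because the Gabor bank sees only edge geometry, surgically induced changes in skin texture contribute much less to the descriptor than they would with gray-level input.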
Lemieux, Chantal L; Collin, Charles A; Nelson, Elizabeth A
2015-02-01
In two experiments, we examined the effects of varying the spatial frequency (SF) content of face images on eye movements during the learning and testing phases of an old/new recognition task. At both learning and testing, participants were presented with face stimuli band-pass filtered to 11 different SF bands, as well as an unfiltered baseline condition. We found that eye movements varied significantly as a function of SF. Specifically, the frequency of transitions between facial features showed a band-pass pattern, with more transitions for middle-band faces (≈5-20 cycles/face) than for low-band (≈<5 cpf) or high-band (≈>20 cpf) ones. These findings were similar for the learning and testing phases. The distributions of transitions across facial features were similar for the middle-band, high-band, and unfiltered faces, showing a concentration on the eyes and mouth; conversely, low-band faces elicited mostly transitions involving the nose and nasion. The eye movement patterns elicited by low, middle, and high bands are similar to those previous researchers have suggested reflect holistic, configural, and featural processing, respectively. More generally, our results are compatible with the hypotheses that eye movements are functional, and that the visual system makes flexible use of visuospatial information in face processing. Finally, our finding that only middle spatial frequencies yielded the same number and distribution of fixations as unfiltered faces adds more evidence to the idea that these frequencies are especially important for face recognition, and reveals a possible mediator for the superior performance that they elicit.
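Band-pass filtering a face image to a range of cycles per face (cpf), as in the stimuli described above, amounts to masking an annulus of the 2-D Fourier spectrum. A minimal sketch, with cutoffs following the bands reported in the abstract and a random array standing in for a face image:

```python
import numpy as np

def bandpass(img, low_cpf, high_cpf):
    """Keep spatial frequencies with low_cpf <= r < high_cpf,
    where r is radial frequency in cycles per image (= cycles/face)."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h)) * h   # cycles per image
    fx = np.fft.fftshift(np.fft.fftfreq(w)) * w
    r = np.hypot(*np.meshgrid(fx, fy))            # radial frequency
    mask = (r >= low_cpf) & (r < high_cpf)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

rng = np.random.default_rng(0)
face = rng.random((64, 64))
low = bandpass(face, 0, 5)      # <5 cpf: the study's low band
mid = bandpass(face, 5, 20)     # 5-20 cpf: the critical middle band
high = bandpass(face, 20, 100)  # >20 cpf: the high band
recon = low + mid + high        # the three bands tile the spectrum
print(np.allclose(recon, face))
```

Because the three masks partition the spectrum, summing the filtered images reconstructs the original, which is a handy sanity check when generating band-limited stimuli.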
NASA Astrophysics Data System (ADS)
Zhu, Yue; Gao, Wanrong; Zhou, Yuan; Guo, Yingcheng; Guo, Feng; He, Yong
2015-11-01
We report rapid and high-resolution tomographic en face imaging of human liver specimens by full-field optical coherence tomography (FF-OCT). First, the arrangement of the FF-OCT system is described and the performance of the system was measured. The measured axial and lateral resolutions of the system are 0.8 and 0.9 μm, respectively. The system has a sensitivity of ˜60 dB and can achieve an imaging rate of 7 fps and a penetration depth of ˜80 μm. The histological structures of normal liver can be seen clearly in the en face tomographic images, including central veins, cords of hepatocytes separated by sinusoidal spaces, and the portal area (portal vein, hepatic arteriole, and bile duct). A wide variety of histological subtypes of hepatocellular carcinoma was observed in en face tomographic images, revealing notable cancerous features, including nuclear atypia (enlarged, convoluted nuclei) and polygonal tumor cells with an obvious resemblance to hepatocytes but with enlarged nuclei. In addition, thicker fibrous bands, which make the cytoplasmic plump vesicular nuclei indistinct, were also seen in the images. Finally, a comparison was made between the portal vein in a normal specimen and that seen in the rare cholangiocarcinoma type. The results show that cholangiocarcinoma presents with a blurred pattern of the portal vein in the lateral direction and an aggregated distribution in the axial direction; the surrounding sinusoidal spaces and nuclei of cholangiocarcinoma are absent. The findings in this work may be used as additional signs of liver cancer or cholangiocarcinoma, demonstrating the capacity of the FF-OCT device for early cancer diagnosis and many other tumor-related studies in biopsy.
The Role of Early Visual Attention in Social Development
ERIC Educational Resources Information Center
Wagner, Jennifer B.; Luyster, Rhiannon J.; Yim, Jung Yeon; Tager-Flusberg, Helen; Nelson, Charles A.
2013-01-01
Faces convey important information about the social environment, and even very young infants are preferentially attentive to face-like over non-face stimuli. Eye-tracking studies have allowed researchers to examine which features of faces infants find most salient across development, and the present study examined scanning of familiar (i.e.,…
Community College Student Success in Online versus Equivalent Face-to-Face Courses
ERIC Educational Resources Information Center
Gregory, Cheri B.; Lampley, James H.
2016-01-01
As part of a nationwide effort to increase the postsecondary educational attainment levels of citizens, community colleges have expanded offerings of courses and programs to more effectively meet the needs of students. Online courses offer convenience and flexibility that traditional face-to-face classes do not. These features appeal to students…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kenney, Jeffrey D. P.; Abramson, Anne; Bravo-Alfaro, Hector, E-mail: jeff.kenney@yale.edu
Remarkable dust extinction features in the deep Hubble Space Telescope (HST) V and I images of the face-on Coma cluster spiral galaxy NGC 4921 show in unprecedented ways how ram pressure strips the ISM from the disk of a spiral galaxy. New VLA HI maps show a truncated and highly asymmetric HI disk with a compressed HI distribution in the NW, providing evidence for ram pressure acting from the NW. Where the HI distribution is truncated in the NW region, HST images show a well-defined, continuous front of dust that extends over 90° and 20 kpc. This dust front separates the dusty from dust-free regions of the galaxy, and we interpret it as galaxy ISM swept up near the leading side of the ICM–ISM interaction. We identify and characterize 100 pc–1 kpc scale substructure within this dust front caused by ram pressure, including head–tail filaments, C-shaped filaments, and long smooth dust fronts. The morphology of these features strongly suggests that dense gas clouds partially decouple from surrounding lower density gas during stripping, but decoupling is inhibited, possibly by magnetic fields that link and bind distant parts of the ISM.
Oculomotor guidance and capture by irrelevant faces.
Devue, Christel; Belopolsky, Artem V; Theeuwes, Jan
2012-01-01
Even though it is generally agreed that face stimuli constitute a special class of stimuli, which are treated preferentially by our visual system, it remains unclear whether faces can capture attention in a stimulus-driven manner. Moreover, there is a long-standing debate regarding the mechanism underlying the preferential bias of selecting faces. Some claim that faces constitute a set of special low-level features to which our visual system is tuned; others claim that the visual system is capable of extracting the meaning of faces very rapidly, driving attentional selection. Those debates continue because many studies contain methodological peculiarities and manipulations that prevent a definitive conclusion. Here, we present a new visual search task in which observers had to make a saccade to a uniquely colored circle while completely irrelevant objects were also present in the visual field. The results indicate that faces capture and guide the eyes more than other animated objects and that our visual system is not only tuned to the low-level features that make up a face but also to its meaning.
Real-Time X-Ray Microscopy of Al-Cu Eutectic Solidification
NASA Technical Reports Server (NTRS)
Kaukler, William F.; Curreri, Peter A.; Sen, Subhayu
1998-01-01
Recent improvements in the resolution of the X-ray Transmission Microscope (XTM) for Solidification Studies provide microstructure feature detectability down to 5 micrometers during solidification. This presentation will show recent results from real-time observations of the solid-liquid interfacial morphologies of the Al-CuAl2 eutectic alloy. Lamellar dimensions and spacings, transitions of morphology caused by growth-rate changes, and eutectic grain structures are open to measurement. A unique vantage point, viewing the face of the interface isotherm, is possible for the first time with the XTM because of its infinite depth of field. A video of the solid-liquid interfaces seen in situ and in real time will be shown.
Carvajal, Gonzalo; Figueroa, Miguel
2014-07-01
Typical image recognition systems operate in two stages: feature extraction to reduce the dimensionality of the input space, and classification based on the extracted features. Analog Very Large Scale Integration (VLSI) is an attractive technology for achieving compact and low-power implementations of these computationally intensive tasks in portable embedded devices. However, device mismatch limits the resolution of circuits fabricated with this technology. Traditional layout techniques to reduce mismatch aim to increase resolution at the transistor level, without considering the intended application. Relating mismatch parameters to specific effects at the application level would allow designers to apply focused mismatch compensation techniques according to predefined performance/cost tradeoffs. This paper models, analyzes, and evaluates the effects of mismatched analog arithmetic in both feature extraction and classification circuits. For feature extraction, we propose analog adaptive linear combiners with on-chip learning for both the Least Mean Square (LMS) and Generalized Hebbian Algorithm (GHA). Using mathematical abstractions of analog circuits, we identify mismatch parameters that are naturally compensated during the learning process, and propose cost-effective guidelines to reduce the effect of the rest. For classification, we derive analog models for the circuits necessary to implement the Nearest Neighbor (NN) approach and Radial Basis Function (RBF) networks, and use them to emulate analog classifiers with standard databases of faces and handwritten digits. Formal analysis and experiments show how we can exploit adaptive structures and properties of the input space to compensate for the effects of device mismatch at the application level, thus reducing the design overhead of traditional layout techniques. Results are also directly extensible to multiple application domains using linear subspace methods. Copyright © 2014 Elsevier Ltd.
All rights reserved.
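The adaptive linear combiner with LMS learning mentioned in the feature-extraction stage reduces to a two-line update rule. This sketch recovers a known weight vector from a noiseless teacher signal; it omits the analog-mismatch modeling that is the paper's actual subject, and all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([0.5, -1.0, 2.0, 0.25])   # hypothetical target weights

def lms(samples, targets, mu=0.01, n_epochs=50):
    """LMS rule: w <- w + mu * error * x (stochastic gradient on MSE)."""
    w = np.zeros(samples.shape[1])
    for _ in range(n_epochs):
        for x, d in zip(samples, targets):
            e = d - w @ x                    # output error for this sample
            w += mu * e * x                  # weight update
    return w

X = rng.normal(size=(200, 4))
d = X @ true_w                               # noiseless teacher signal
w = lms(X, d)
print(np.round(w, 2))
```

In the analog setting, gain and offset mismatch perturb both the multiply and the update; part of the paper's point is that the same feedback loop absorbs some of those perturbations automatically.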
The secrets of highly active older adults.
Franke, Thea; Tong, Catherine; Ashe, Maureen C; McKay, Heather; Sims-Gould, Joanie
2013-12-01
Although physical activity is a recognized component in the management of many chronic diseases associated with aging, activity levels tend to decline progressively with increasing age (Manini & Pahor, 2009; Schutzer & Graves, 2004). In this article we examine the key factors that facilitate physical activity in highly active community-dwelling older adults, using a strengths-based approach. Twenty-seven older adults participated in face-to-face interviews; we extracted a subsample of 10 highly active older adults for inclusion in the analyses. Based on a framework analysis of our transcripts, we identified three factors that facilitate physical activity in our sample: 1) resourcefulness: engagement in self-help strategies such as self-efficacy, self-control, and adaptability; 2) social connections: the presence of relationships (friends, neighborhood, institutions) and social activities that support or facilitate high levels of physical activity; and 3) the role of the built and natural environments: features of places and spaces that support and facilitate high levels of physical activity. Findings provide insight into the factors that facilitate older adults' physical activity. We discuss implications for programs (e.g., accessible community centers with appropriate programming throughout the lifecourse) and policies geared towards the promotion of physical activity (e.g., the development of spaces that facilitate both physical and social activities). © 2013.
Schäfer, Andreas; Golz, Christopher; Preut, Hans; Strohmann, Carsten; Hiersemann, Martin
2015-01-01
The title hydrate, C17H28O2·H2O, was synthesized in order to determine the relative configuration of the tetracyclic framework. The fused 5,6,7-tricarbocyclic core exhibits an entire cis-annulation, featuring a 1,4-cis-relation of the angular methyl groups in the six-membered ring. The oxa bridge of the epoxycycloheptane moiety is oriented towards the concave face of the boat-shaped molecule, whereas the angular methyl groups are directed towards the convex face. The asymmetric unit of the crystal contains two nearly identical formula units, which are related via a pseudo-centre of symmetry. The structure could be solved in the space groups I-4 and I41/a. The refinement in the acentric space group, however, gave significantly better results and these are used in this paper. O—H⋯O hydrogen bonds are observed between the organic molecules, between the organic molecules and the water molecules, and between the water molecules, forming a chain along the c-axis direction. PMID:26396907
Automatic 2.5-D Facial Landmarking and Emotion Annotation for Social Interaction Assistance.
Zhao, Xi; Zou, Jianhua; Li, Huibin; Dellandrea, Emmanuel; Kakadiaris, Ioannis A; Chen, Liming
2016-09-01
People with low vision, Alzheimer's disease, and autism spectrum disorder experience difficulties in perceiving or interpreting facial expressions of emotion in their social lives. Though automatic facial expression recognition (FER) methods on 2-D videos have been extensively investigated, their performance is constrained by challenges in head pose and lighting conditions. The shape information in 3-D facial data can reduce or even overcome these challenges. However, the high expense of 3-D cameras prevents their widespread use. Fortunately, 2.5-D facial data from emerging portable RGB-D cameras provide a good balance for this dilemma. In this paper, we propose an automatic emotion annotation solution on 2.5-D facial data collected from RGB-D cameras. The solution consists of a facial landmarking method and a FER method. Specifically, we propose building a deformable partial face model and fitting it to a 2.5-D face to localize facial landmarks automatically. For FER, a novel action unit (AU) space-based method is proposed. Facial features are extracted using the landmarks and further represented as coordinates in the AU space, which are classified into facial expressions. Evaluated on three publicly accessible facial databases, namely the EURECOM, FRGC, and Bosphorus databases, the proposed facial landmarking and expression recognition methods have achieved satisfactory results. Possible real-world applications using our algorithms are also discussed.
2012-01-01
Background A crucial issue for the sustainability of societies is how to maintain health and functioning in older people. With increasing age, losses in vision, hearing, balance, mobility, and cognitive capacity render older people particularly exposed to environmental barriers. A central building block of human functioning is walking. Walking difficulties may start to develop in midlife and become increasingly prevalent with age. Life-space mobility reflects actual mobility performance by taking into account the balance between older adults' internal physiologic capacity and the external challenges they encounter in daily life. The aim of the Life-Space Mobility in Old Age (LISPE) project is to examine how home and neighborhood characteristics influence people's health, functioning, disability, quality of life, and life-space mobility in the context of aging. In addition, we examine whether a person's health and function influence life-space mobility. Design This paper describes the study protocol of the LISPE project, which is a 2-year prospective cohort study of community-dwelling older people aged 75 to 90 (n = 848). The data consist of a baseline survey including face-to-face interviews, objective observation of the home environment, and a physical performance test in the participant's home. All the baseline participants will be interviewed over the phone one and two years after baseline to collect data on life-space mobility, disability, and participation restriction. Additional home interviews and environmental evaluations will be conducted for those who relocate during the study period. Data on mortality and health service use will be collected from national registers. In a substudy on walking activity and life space, 358 participants kept a 7-day diary and, in addition, 176 participants also wore an accelerometer.
Discussion Our study, which includes extensive data collection with a large sample, provides a unique opportunity to study topics of importance for aging societies. A novel approach is employed which enables us to study the interactions of environmental features and individual characteristics underlying the life-space of older people. Potentially, the results of this study will contribute to improvements in strategies to postpone or prevent progression to disability and loss of independence. PMID:23170987
Perceived Animacy Influences the Processing of Human-Like Surface Features in the Fusiform Gyrus
Shultz, Sarah; McCarthy, Gregory
2014-01-01
While decades of research have demonstrated that a region of the right fusiform gyrus (FG) responds selectively to faces, a second line of research suggests that the FG responds to a range of animacy cues, including biological motion and goal-directed actions, even in the absence of faces or other human-like surface features. These findings raise the question of whether the FG is indeed sensitive to faces or to the more abstract category of animate agents. The current study uses fMRI to examine whether the FG responds to all faces in a category-specific way or whether the FG is especially sensitive to the faces of animate agents. Animate agents are defined here as intentional agents with the capacity for rational goal-directed actions. Specifically, we examine how the FG responds to an entity that looks like an animate agent but that lacks the capacity for goal-directed, rational action. Region-of-interest analyses reveal that the FG activates more strongly to the animate compared with the inanimate entity, even though the surface features of both animate and inanimate entities were identical. These results suggest that the FG does not respond to all faces in a category-specific way, and is instead especially sensitive to whether an entity is animate. PMID:24905285
Shaded Relief of South Africa, Northern Cape Province
NASA Technical Reports Server (NTRS)
2000-01-01
Located north of the Swartberg Mountains in South Africa's Northern Cape Province, this topographic image shows a portion of the Great Karoo region. Karoo is an indigenous word for 'dry thirst land.' The semi-arid area is known for its unique variety of flora and fauna. The topography of the area, with a total relief of 200 meters (650 feet), reveals much about the geologic history of the area. The linear features seen in the image are near-vertical walls of once-molten rock, or dikes, that have intruded the bedrock. The dikes are more resistant to weathering and, therefore, form the linear wall-like features seen in the image. In relatively flat arid areas such as this, small changes in the topography can have large impacts on the water resources and the local ecosystem. These data can be used by biologists to study the distribution and range of the different plants and animals. Geologists can also use the data to study the geologic history of this area in more detail.
This shaded relief image was generated using topographic data from the Shuttle Radar Topography Mission. A computer-generated artificial light source illuminates the elevation data to produce a pattern of light and shadows. Slopes facing the light appear bright, while those facing away are shaded. On flatter surfaces, the pattern of light and shadows can reveal subtle features in the terrain. Colors show the elevation as measured by SRTM. Colors range from green at the lowest elevations to reddish at the highest elevations. Shaded relief maps are commonly used in applications such as geologic mapping and land use planning. The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission is designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) and the German (DLR) and Italian (ASI) space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, DC.
ERIC Educational Resources Information Center
Tronick, E. Z.; Messinger, D. S.; Weinberg, M. K.; Lester, B. M.; LaGasse, L.; Seifer, R.; Bauer, C. R.; Shankaran, S.; Bada, H.; Wright, L. L.; Poole, K.; Liu, J.
2005-01-01
Prenatal cocaine and opiate exposure are thought to subtly compromise social and emotional development. The authors observed a large sample of 236 cocaine-exposed and 459 nonexposed infants (49 were opiate exposed and 646 nonexposed) with their mothers in the face-to-face still-face paradigm. Infant and maternal behaviors were microanalytically…
Kret, Mariska E; Tomonaga, Masaki
2016-01-01
For social species such as primates, the recognition of conspecifics is crucial for survival. As demonstrated by the 'face inversion effect', humans are experts in recognizing faces and, unlike objects, recognize their identity by processing them configurally. The human face, with its distinct features such as eye-whites, eyebrows, red lips and cheeks, signals emotions, intentions, health and sexual attraction and, as we will show here, shares important features with the primate behind. Chimpanzee females show a swelling and reddening of the anogenital region around the time of ovulation. This provides an important socio-sexual signal for group members, who can identify individuals by their behinds. We hypothesized that chimpanzees process behinds configurally in the way humans process faces. In four different delayed matching-to-sample tasks with upright and inverted body parts, we show that humans demonstrate a face, but not a behind, inversion effect and that chimpanzees show a behind, but no clear face, inversion effect. The findings suggest an evolutionary shift in socio-sexual signalling function from behinds to faces, two hairless, symmetrical and attractive body parts, which might have attuned the human brain to process faces, and the human face to become more behind-like.
Passing the Baton: An Experimental Study of Shift Handover
NASA Technical Reports Server (NTRS)
Parke, Bonny; Hobbs, Alan; Kanki, Barbara
2010-01-01
Shift handovers occur in many safety-critical environments, including aviation maintenance, medicine, air traffic control, and mission control for space shuttle and space station operations. Shift handovers are associated with increased risk of communication failures and human error. In dynamic industries, errors and accidents occur disproportionately after shift handover. Typical shift handovers involve transferring information from an outgoing shift to an incoming shift via written logs or, in some cases, face-to-face briefings. The current study explores the possibility of improving written communication with the support modalities of audio and video recordings, as well as face-to-face briefings. Fifty participants completed an experimental task which mimicked some of the critical challenges involved in transferring information between shifts in industrial settings. All three support modalities, face-to-face, video, and audio recordings, reduced task errors significantly over written communication alone. The support modality most preferred by participants was face-to-face communication; the least preferred was written communication alone.
ERIC Educational Resources Information Center
Nakabayashi, Kazuyo; Lloyd-Jones, Toby J.; Butcher, Natalie; Liu, Chang Hong
2012-01-01
Describing a face in words can either hinder or help subsequent face recognition. Here, the authors examined the relationship between the benefit from verbally describing a series of faces and the same-race advantage (SRA) whereby people are better at recognizing unfamiliar faces from their own race as compared with those from other races.…
Design of aerosol face masks for children using computerized 3D face analysis.
Amirav, Israel; Luder, Anthony S; Halamish, Asaf; Raviv, Dan; Kimmel, Ron; Waisman, Dan; Newhouse, Michael T
2014-08-01
Aerosol masks were originally developed for adults and downsized for children. Overall fit to minimize dead space and a tight seal are problematic, because children's faces undergo rapid and marked topographic and internal anthropometric changes in their first few months/years of life. Facial three-dimensional (3D) anthropometric data were used to design an optimized pediatric mask. Children's faces (n=271, aged 1 month to 4 years) were scanned with 3D technology. Data for the distance from the bridge of the nose to the tip of the chin (H) and the width of the mouth opening (W) were used to categorize the scans into "small," "medium," and "large" "clusters." "Average" masks were developed from each cluster to provide an optimal seal with minimal dead space. The resulting computerized contour, W and H, were used to develop the SootherMask® that enables children, "suckling" on their own pacifier, to keep the mask on their face, mainly by means of subatmospheric pressure. The relatively wide and flexible rim of the mask accommodates variations in facial size within and between clusters. Unique pediatric face masks were developed based on anthropometric data obtained through computerized 3D face analysis. These masks follow facial contours and gently seal to the child's face, and thus may minimize aerosol leakage and dead space.
Influence of Surface Features for Increased Heat Dissipation on Tool Wear
Beno, Tomas; Hoier, Philipp; Wretland, Anders
2018-01-01
The critical problems faced when machining heat resistant superalloys (HRSA) are the concentration of heat in the cutting zone and the difficulty of dissipating it. The concentrated heat in the cutting zone has a negative influence on tool life and on the quality of the machined surface, which in turn contributes to higher manufacturing costs. This paper investigates the effect on tool wear of improved heat dissipation from the cutting zone, achieved through surface features on the cutting tools. Firstly, the objective was to increase the available surface area in high-temperature regions of the cutting tool. Secondly, multiple surface features were fabricated to act as channels in the rake face, creating better coolant access to the proximity of the cutting edge. The purpose was thereby to improve the cooling of the cutting edge itself, which exhibits the highest temperature during machining. These modified inserts were experimentally investigated in face turning of Alloy 718 with high-pressure coolant. Overall, the results showed that surface-featured inserts decreased flank wear, abrasion of the flank face, cutting edge deterioration and crater wear, probably due to better heat dissipation from the cutting zone. PMID:29693579
Log-Gabor Weber descriptor for face recognition
NASA Astrophysics Data System (ADS)
Li, Jing; Sang, Nong; Gao, Changxin
2015-09-01
The Log-Gabor transform, which is suitable for analyzing gradually changing data such as iris and face images, has been widely used in image processing, pattern recognition, and computer vision. In most cases, only the magnitude or phase information of the Log-Gabor transform is considered. However, the complementary effect of combining magnitude and phase information simultaneously for image-feature extraction has not been systematically explored in existing work. We propose a local image descriptor for face recognition, called the Log-Gabor Weber descriptor (LGWD). The novelty of our LGWD is twofold: (1) To fully utilize the information from the magnitude and phase features of the multiscale, multi-orientation Log-Gabor transform, we apply the Weber local binary pattern operator to each transform response. (2) The encoded Log-Gabor magnitude and phase information are fused at the feature level using a kernel canonical correlation analysis strategy, considering that feature-level information fusion is effective when the modalities are correlated. Experimental results on the AR, Extended Yale B, and UMIST face databases, compared with those available from recent experiments reported in the literature, show that our descriptor yields better performance than state-of-the-art methods.
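The abstract does not give implementation details, but the first stage of a descriptor like LGWD, obtaining the magnitude and phase of a log-Gabor response, can be sketched in a few lines of numpy. The sketch below computes a single-scale, isotropic log-Gabor filter in the frequency domain; the function name, center frequency, and bandwidth parameter are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def log_gabor_response(image, f0=0.1, sigma_ratio=0.65):
    """Filter a 2-D image with an isotropic log-Gabor filter (constructed in
    the frequency domain) and return the magnitude and phase of the complex
    spatial response. f0 is the center frequency in cycles/pixel;
    sigma_ratio controls the bandwidth on the log-frequency axis."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    radius[0, 0] = 1.0  # avoid log(0) at the DC component
    # Log-Gabor radial transfer function: a Gaussian on a log frequency axis
    lg = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    lg[0, 0] = 0.0      # a log-Gabor filter has no DC response by design
    response = np.fft.ifft2(np.fft.fft2(image) * lg)
    return np.abs(response), np.angle(response)
```

A full LGWD pipeline, as described in the abstract, would repeat this at several scales and orientations, apply the Weber local binary pattern operator to each magnitude and phase map, and fuse the two encodings with kernel canonical correlation analysis.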
Controllable Edge Feature Sharpening for Dental Applications
Fan, Ran; Jin, Xiaogang
2014-01-01
This paper presents a new approach to sharpen blurred edge features in scanned tooth preparation surfaces generated by structured-light scanners. It aims to efficiently enhance the edge features so that the embedded feature lines can be easily identified in dental CAD systems, and to avoid unnatural oversharpening geometry. We first separate the feature regions using graph-cut segmentation, which does not require a user-defined threshold. Then, we filter the face normal vectors to propagate the geometry from the smooth region to the feature region. In order to control the degree of the sharpness, we propose a feature distance measure which is based on normal tensor voting. Finally, the vertex positions are updated according to the modified face normal vectors. We have applied the approach to scanned tooth preparation models. The results show that the blurred edge features are enhanced without unnatural oversharpening geometry. PMID:24741376
Lewandowski, Zdzisław
2015-09-01
The project aimed to answer two questions: To what extent does a change in the size, height or width of selected facial features influence the assessment of likeness between an original female composite portrait and a modified one? And does the sex of the person judging the images affect the perception of likeness of facial features? The first stage of the project consisted of creating an averaged female face image. The basic facial features (eyes, nose and mouth) were then cut out of the averaged face, and each feature was transformed in three ways: its size was changed by reduction or enlargement, its height was modified by reduction or enlargement, and its width was altered by widening or narrowing. In each of the six feature-alteration methods, the intensity of modification reached up to 20% of the original size, in steps of 2%. The altered features were then pasted back onto the original faces and retouched. The third stage consisted of judges of both sexes assessing the extent of likeness between the unmodified averaged composite portrait and the modified portraits. The results indicate significant differences in the assessed likeness of portraits with modified features relative to the originals. Images with changes in the size and height of the nose received the lowest likeness scores, indicating that these changes were perceived as the most important. Photos with changes in lip vermilion thickness (lip height), lip width, and the height and width of the eye slit received high likeness scores despite large changes, indicating that these modifications were perceived as less important than the other features investigated.
Wang, Yuanye; Luo, Huan
2017-01-01
To deal with the external world efficiently, the brain constantly generates predictions about incoming sensory inputs, a process known as "predictive coding." Our recent studies, employing visual priming paradigms in combination with a time-resolved behavioral measurement, reveal that perceptual predictions about simple features (e.g., left or right orientation) return to low sensory areas not continuously but recurrently, in a theta-band (3-4 Hz) rhythm. However, it remains unknown whether high-level object processing is also mediated by this oscillatory mechanism and, if so, at what rhythm it operates. In the present study, we employed a morph-face priming paradigm and time-resolved behavioral measurements to examine the fine temporal dynamics of face identity priming performance. First, we reveal classical priming effects and a rhythmic trend within the prime-to-probe SOA of 600 ms (Experiment 1). Next, we densely sampled face priming behavioral performance within this SOA range (Experiment 2). Our results demonstrate a significant ~5 Hz oscillatory component in face priming performance, suggesting that a rhythmic process also coordinates object-level prediction (i.e., face identity here). In comparison with our previous studies, the results suggest that the rhythm for high-level objects is faster than that for simple features. We propose that these seemingly distinct priming rhythms may arise because object-level and simple-feature-level predictions return to different stages along the visual pathway (e.g., the FFA for face priming and V1 for simple-feature priming). In summary, the findings support a general theta-band (3-6 Hz) temporal organization mechanism in predictive coding; such a waxing-and-waning pattern may help the brain remain readily updated by new inputs.
The depth estimation of 3D face from single 2D picture based on manifold learning constraints
NASA Astrophysics Data System (ADS)
Li, Xia; Yang, Yang; Xiong, Hailiang; Liu, Yunxia
2018-04-01
The estimation of depth is vitally important in 3D face reconstruction. In this paper, we propose a t-SNE approach based on manifold learning constraints and introduce the K-means method to divide the original database into several subsets; using only the selected optimal subset to reconstruct the 3D face depth information greatly reduces the computational complexity. Firstly, we apply t-SNE to reduce the key feature points in each 3D face model from 1×249 to 1×2 dimensions. Secondly, the K-means method is applied to divide the training 3D database into several subsets. Thirdly, the Euclidean distance is calculated between the 83 feature points of the image to be estimated and the pre-dimension-reduction feature-point information of each cluster center. The category of the image to be estimated is judged according to the minimum Euclidean distance. Finally, the method of Kong D is applied only within the optimal subset to estimate the depth values of the 83 feature points of the 2D face image. This yields the final depth estimates while greatly reducing the computational complexity. Compared with the traditional traversal search estimation method, the proposed method reduces the error rate by 0.49, and the number of searches decreases with the choice of category. To validate our approach, we use a public database to mimic the task of estimating the depth of face images from 2D images. The average number of searches decreased by 83.19%.
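The subset-selection idea in the abstract (embed the training faces in 2-D, cluster them, then search only the cluster nearest the query) can be sketched compactly. In the sketch below the t-SNE step is replaced by a PCA projection via SVD purely for self-containment, K-means uses a simple deterministic farthest-point initialization, and all names, dimensions, and parameters are illustrative assumptions rather than the authors' pipeline.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain Lloyd's K-means with deterministic farthest-point initialization.
    Returns (centers, labels)."""
    centers = [X[0]]
    for _ in range(k - 1):
        # next center = point farthest from all chosen centers
        d = np.min([((X - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def select_subset(train_feats, query_feat, k=3):
    """Embed training feature vectors in 2-D (SVD/PCA stands in for t-SNE
    here), cluster the embedding with K-means, and return the indices of the
    cluster whose center is nearest (Euclidean) to the projected query,
    i.e. the 'optimal subset' to search instead of the whole database."""
    mean = train_feats.mean(axis=0)
    Xc = train_feats - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    emb = Xc @ Vt[:2].T                    # 2-D embedding of training faces
    centers, labels = kmeans(emb, k)
    q = (query_feat - mean) @ Vt[:2].T     # project the query the same way
    nearest = np.argmin(((centers - q) ** 2).sum(-1))
    return np.where(labels == nearest)[0]
```

Depth estimation would then proceed only over the returned indices, which is what drives the reported reduction in the number of searches.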
Internal curvature signal and noise in low- and high-level vision
Grabowecky, Marcia; Kim, Yee Joon; Suzuki, Satoru
2011-01-01
How does internal processing contribute to visual pattern perception? By modeling visual search performance, we estimated internal signal and noise relevant to perception of curvature, a basic feature important for encoding of three-dimensional surfaces and objects. We used isolated, sparse, crowded, and face contexts to determine how internal curvature signal and noise depended on image crowding, lateral feature interactions, and level of pattern processing. Observers reported the curvature of a briefly flashed segment, which was presented alone (without lateral interaction) or among multiple straight segments (with lateral interaction). Each segment was presented with no context (engaging low-to-intermediate-level curvature processing), embedded within a face context as the mouth (engaging high-level face processing), or embedded within an inverted-scrambled-face context as a control for crowding. Using a simple, biologically plausible model of curvature perception, we estimated internal curvature signal and noise as the mean and standard deviation, respectively, of the Gaussian-distributed population activity of local curvature-tuned channels that best simulated behavioral curvature responses. Internal noise was increased by crowding but not by face context (irrespective of lateral interactions), suggesting prevention of noise accumulation in high-level pattern processing. In contrast, internal curvature signal was unaffected by crowding but modulated by lateral interactions. Lateral interactions (with straight segments) increased curvature signal when no contextual elements were added, but equivalent interactions reduced curvature signal when each segment was presented within a face. These opposing effects of lateral interactions are consistent with the phenomena of local-feature contrast in low-level processing and global-feature averaging in high-level processing. PMID:21209356
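The modeling approach above, reading internal signal and noise off the Gaussian-distributed activity of curvature-tuned channels, can be illustrated with a small simulation. The tuning widths, population-vector read-out, and parameter values below are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

def simulate_curvature_reports(true_curv, signal_gain, internal_noise_sd,
                               n_trials=2000, seed=0):
    """Simulate behavioral curvature reports from a bank of curvature-tuned
    channels. Each trial, channel activity = Gaussian tuning drive + internal
    Gaussian noise (rectified), and the report is a population-vector read-out.
    Returns the mean and SD of the reports, the analogues of internal signal
    and internal noise in such a model."""
    rng = np.random.default_rng(seed)
    prefs = np.linspace(-1.0, 1.0, 21)   # preferred curvatures of the channels
    tuning_sd = 0.3
    reports = np.empty(n_trials)
    for t in range(n_trials):
        drive = signal_gain * np.exp(-(prefs - true_curv) ** 2
                                     / (2 * tuning_sd ** 2))
        act = drive + rng.normal(0.0, internal_noise_sd, prefs.shape)
        act = np.clip(act, 0.0, None)    # firing rates cannot be negative
        reports[t] = (prefs * act).sum() / max(act.sum(), 1e-9)
    return reports.mean(), reports.std()
```

Fitting a model of this kind to observed response distributions is one way the internal signal (report mean) and internal noise (report SD) can be estimated separately for each viewing context.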
Toward End-to-End Face Recognition Through Alignment Learning
NASA Astrophysics Data System (ADS)
Zhong, Yuanyi; Chen, Jiansheng; Huang, Bo
2017-08-01
Plenty of effective methods have been proposed for face recognition during the past decade. Although these methods differ essentially in many aspects, a common practice among them is to explicitly align the facial area, based on prior knowledge of human face structure, before feature extraction. In most systems, the face alignment module is implemented independently. This has actually caused difficulties in the design and training of end-to-end face recognition models. In this paper we study the possibility of alignment learning in end-to-end face recognition, in which neither prior knowledge of facial landmarks nor artificially defined geometric transformations are required. Specifically, spatial transformer layers are inserted in front of the feature extraction layers in a Convolutional Neural Network (CNN) for face recognition. Only human identity clues are used to drive the neural network to automatically learn the most suitable geometric transformation and the most appropriate facial area for the recognition task. To ensure reproducibility, our model is trained purely on the publicly available CASIA-WebFace dataset, and is tested on the Labeled Faces in the Wild (LFW) dataset. We have achieved a verification accuracy of 99.08%, which is comparable to state-of-the-art single-model-based methods.
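The core operation of a spatial transformer layer is differentiable warping: a normalized sampling grid is mapped through an affine transform and the input is sampled bilinearly at the resulting coordinates. A minimal numpy sketch of that sampling step for a single-channel image follows; in the paper's setting the 2x3 matrix would be predicted by a localization subnetwork rather than supplied by hand, and the function name is illustrative.

```python
import numpy as np

def spatial_transform(image, theta):
    """Warp a 2-D image (H x W) with a 2x3 affine matrix `theta`, as a spatial
    transformer layer does before feature extraction. Coordinates are
    normalized to [-1, 1] and the source image is sampled bilinearly."""
    h, w = image.shape
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w),
                         indexing="ij")
    grid = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # 3 x HW
    sx, sy = theta @ grid               # source coords for each output pixel
    # map normalized coords back to pixel indices
    px = (sx + 1) * (w - 1) / 2
    py = (sy + 1) * (h - 1) / 2
    x0 = np.clip(np.floor(px).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(py).astype(int), 0, h - 2)
    dx, dy = px - x0, py - y0
    # bilinear interpolation over the four neighboring pixels
    out = (image[y0, x0] * (1 - dx) * (1 - dy)
           + image[y0, x0 + 1] * dx * (1 - dy)
           + image[y0 + 1, x0] * (1 - dx) * dy
           + image[y0 + 1, x0 + 1] * dx * dy)
    return out.reshape(h, w)
```

Because every step is differentiable in `theta`, gradients from the identity loss can flow back into the parameters that produce the transform, which is what lets alignment be learned end to end from identity labels alone.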
Optimal Eye-Gaze Fixation Position for Face-Related Neural Responses
Zerouali, Younes; Lina, Jean-Marc; Jemel, Boutheina
2013-01-01
It is generally agreed that some features of a face, namely the eyes, are more salient than others as indexed by behavioral diagnosticity, gaze-fixation patterns and evoked-neural responses. However, because previous studies used unnatural stimuli, there is no evidence so far that the early encoding of a whole face in the human brain is based on the eyes or other facial features. To address this issue, scalp electroencephalogram (EEG) and eye gaze-fixations were recorded simultaneously in a gaze-contingent paradigm while observers viewed faces. We found that the N170, indexing the earliest face-sensitive response in the human brain, was largest when the fixation position was located around the nasion. Interestingly, for inverted faces, this optimal fixation position was more variable, but mainly clustered in the upper part of the visual field (around the mouth). These observations extend the findings of recent behavioral studies, suggesting that the early encoding of a face, as indexed by the N170, is not driven by the eyes per se, but rather arises from a general perceptual setting (upper-visual field advantage) coupled with the alignment of a face stimulus to a stored face template. PMID:23762224
Sad Facial Expressions Increase Choice Blindness
Wang, Yajie; Zhao, Song; Zhang, Zhijie; Feng, Wenfeng
2018-01-01
Previous studies have discovered a fascinating phenomenon known as choice blindness—individuals fail to detect mismatches between the face they choose and the face replaced by the experimenter. Although previous studies have reported a couple of factors that can modulate the magnitude of choice blindness, the potential effect of facial expression on choice blindness has not yet been explored. Using faces with sad and neutral expressions (Experiment 1) and faces with happy and neutral expressions (Experiment 2) in the classic choice blindness paradigm, the present study investigated the effects of facial expressions on choice blindness. The results showed that the detection rate was significantly lower on sad faces than neutral faces, whereas no significant difference was observed between happy faces and neutral faces. The exploratory analysis of verbal reports found that participants who reported less facial features for sad (as compared to neutral) expressions also tended to show a lower detection rate of sad (as compared to neutral) faces. These findings indicated that sad facial expressions increased choice blindness, which might have resulted from inhibition of further processing of the detailed facial features by the less attractive sad expressions (as compared to neutral expressions). PMID:29358926
McComb, Sara; Kennedy, Deanna; Perryman, Rebecca; Warner, Norman; Letsky, Michael
2010-04-01
Our objective is to capture temporal patterns in mental model convergence processes and differences in these patterns between distributed teams using an electronic collaboration space and face-to-face teams with no interface. Distributed teams, as sociotechnical systems, collaborate via technology to work on their task. The way in which they process information to inform their mental models may be examined via team communication and may unfold differently than it does in face-to-face teams. We conducted our analysis on 32 three-member teams working on a planning task. Half of the teams worked as distributed teams in an electronic collaboration space, and the other half worked face-to-face without an interface. Using event history analysis, we found temporal interdependencies among the initial convergence points of the multiple mental models we examined. Furthermore, the timing of mental model convergence and the onset of task work discussions were related to team performance. Differences existed in the temporal patterns of convergence and task work discussions across conditions. Distributed teams interacting via an electronic interface and face-to-face teams with no interface converged on multiple mental models, but their communication patterns differed. In particular, distributed teams with an electronic interface required less overall communication, converged on all mental models later in their life cycles, and exhibited more linear cognitive processes than did face-to-face teams interacting verbally. Managers need unique strategies for facilitating communication and mental model convergence depending on teams' degrees of collocation and access to an interface, which in turn will enhance team performance.
ERIC Educational Resources Information Center
Geber, Beverly
1995-01-01
Virtual work teams scattered around the globe are becoming a feature of corporate workplaces. Although most people prefer face-to-face meetings and interactions, reality often requires telecommuting. (JOW)