A causal relationship between face-patch activity and face-detection behavior.
Sadagopan, Srivatsun; Zarco, Wilbert; Freiwald, Winrich A
2017-04-04
The primate brain contains distinct areas densely populated by face-selective neurons. One of these, face-patch ML, contains neurons selective for contrast relationships between face parts. Such contrast-relationships can serve as powerful heuristics for face detection. However, it is unknown whether neurons with such selectivity actually support face-detection behavior. Here, we devised a naturalistic face-detection task and combined it with fMRI-guided pharmacological inactivation of ML to test whether ML is of critical importance for real-world face detection. We found that inactivation of ML impairs face detection. The effect was anatomically specific, as inactivation of areas outside ML did not affect face detection, and it was categorically specific, as inactivation of ML impaired face detection while sparing body and object detection. These results establish that ML function is crucial for detection of faces in natural scenes, performing a critical first step on which other face processing operations can build.
Greater sensitivity of the cortical face processing system to perceptually-equated face detection
Maher, S.; Ekstrom, T.; Tong, Y.; Nickerson, L.D.; Frederick, B.; Chen, Y.
2015-01-01
Face detection, the perceptual capacity to identify a visual stimulus as a face before probing deeper into specific attributes (such as its identity or emotion), is essential for social functioning. Despite the importance of this functional capacity, face detection and its underlying brain mechanisms are not well understood. This study evaluated the role that the cortical face processing system, which has been identified largely through studying other aspects of face perception, plays in face detection. Specifically, we used functional magnetic resonance imaging (fMRI) to examine the activations of the fusiform face area (FFA), occipital face area (OFA), and superior temporal sulcus (STS) when face detection was isolated from other aspects of face perception and when face detection was perceptually equated across individual human participants (n=20). During face detection, FFA and OFA were significantly activated, even for stimuli presented at perceptual-threshold levels, whereas STS was not. During tree detection, however, FFA and OFA were responsive only for highly salient (i.e., high contrast) stimuli. Moreover, activation of FFA during face detection predicted a significant portion of the perceptual performance levels that were determined psychophysically for each participant. This pattern of results indicates that FFA and OFA have a greater sensitivity to face detection signals and selectively support the initial process of face vs. non-face object perception. PMID:26592952
The wide window of face detection.
Hershler, Orit; Golan, Tal; Bentin, Shlomo; Hochstein, Shaul
2010-08-20
Faces are detected more rapidly than other objects in visual scenes and search arrays, but the cause for this face advantage has been contested. In the present study, we found that under conditions of spatial uncertainty, faces were easier to detect than control targets (dog faces, clocks and cars) even in the absence of surrounding stimuli, making an explanation based only on low-level differences unlikely. This advantage improved with eccentricity in the visual field, enabling face detection in wider visual windows, and pointing to selective sparing of face detection at greater eccentricities. This face advantage might be due to perceptual factors favoring face detection. In addition, the relative face advantage is greater under flanked than non-flanked conditions, suggesting an additional, possibly attention-related benefit enabling face detection in groups of distracters.
Impaired face detection may explain some but not all cases of developmental prosopagnosia.
Dalrymple, Kirsten A; Duchaine, Brad
2016-05-01
Developmental prosopagnosia (DP) is defined by severe face recognition difficulties due to the failure to develop the visual mechanisms for processing faces. The two-process theory of face recognition (Morton & Johnson, 1991) implies that DP could result from a failure of an innate face detection system; this failure could prevent an individual from then tuning higher-level processes for face recognition (Johnson, 2005). Work with adults indicates that some individuals with DP have normal face detection whereas others are impaired. However, face detection has not been addressed in children with DP, even though their results may be especially informative because they have had less opportunity to develop strategies that could mask detection deficits. We tested the face detection abilities of seven children with DP. Four were impaired at face detection to some degree (i.e. abnormally slow, or failed to find faces) while the remaining three children had normal face detection. Hence, the cases with impaired detection are consistent with the two-process account suggesting that DP could result from a failure of face detection. However, the cases with normal detection implicate a higher-level origin. The dissociation between normal face detection and impaired identity perception also indicates that these abilities depend on different neurocognitive processes. © 2015 John Wiley & Sons Ltd.
Familiarity facilitates feature-based face processing.
Visconti di Oleggio Castello, Matteo; Wheeler, Kelsey G; Cipolli, Carlo; Gobbini, M Ida
2017-01-01
Recognition of personally familiar faces is remarkably efficient, effortless and robust. We asked if feature-based face processing facilitates detection of familiar faces by testing the effect of face inversion on a visual search task for familiar and unfamiliar faces. Because face inversion disrupts configural and holistic face processing, we hypothesized that inversion would diminish the familiarity advantage to the extent that it is mediated by such processing. Subjects detected personally familiar and stranger target faces in arrays of two, four, or six face images. Subjects showed significant facilitation of personally familiar face detection for both upright and inverted faces. The effect of familiarity on target absent trials, which involved only rejection of unfamiliar face distractors, suggests that familiarity facilitates rejection of unfamiliar distractors as well as detection of familiar targets. The preserved familiarity effect for inverted faces suggests that facilitation of face detection afforded by familiarity reflects mostly feature-based processes.
Pongakkasira, Kaewmart; Bindemann, Markus
2015-04-01
Human face detection might be driven by skin-coloured face-shaped templates. To explore this idea, this study compared the detection of faces for which the natural height-to-width ratios were preserved with distorted faces that were stretched vertically or horizontally. The impact of stretching on detection performance was not obvious when faces were equated to their unstretched counterparts in terms of their height or width dimension (Experiment 1). However, stretching impaired detection when the original and distorted faces were matched for their surface area (Experiment 2), and this was found with both vertically and horizontally stretched faces (Experiment 3). This effect was evident in accuracy, response times, and also observers' eye movements to faces. These findings demonstrate that height-to-width ratios are an important component of the cognitive template for face detection. The results also highlight important differences between face detection and face recognition. Copyright © 2015 Elsevier Ltd. All rights reserved.
Efficient search for a face by chimpanzees (Pan troglodytes).
Tomonaga, Masaki; Imura, Tomoko
2015-07-16
The face is quite an important stimulus category for human and nonhuman primates in their social lives. Recent advances in comparative-cognitive research clearly indicate that chimpanzees and humans process faces in a special manner; that is, using holistic or configural processing. Both species exhibit the face-inversion effect in which the inverted presentation of a face deteriorates their perception and recognition. Furthermore, recent studies have shown that humans detect human faces among non-facial objects rapidly. We report that chimpanzees detected chimpanzee faces among non-facial objects quite efficiently. This efficient search was not limited to own-species faces. They also found human adult and baby faces--but not monkey faces--efficiently. Additional testing showed that a front-view face was more readily detected than a profile, suggesting the important role of eye-to-eye contact. Chimpanzees also detected a photograph of a banana as efficiently as a face, but a further examination clearly indicated that the banana was detected mainly due to a low-level feature (i.e., color). Efficient face detection was hampered by an inverted presentation, suggesting that configural processing of faces is a critical element of efficient face detection in both species. This conclusion was supported by a simple simulation experiment using the saliency model.
Face detection assisted auto exposure: supporting evidence from a psychophysical study
NASA Astrophysics Data System (ADS)
Jin, Elaine W.; Lin, Sheng; Dharumalingam, Dhandapani
2010-01-01
Face detection has been implemented in many digital still cameras and camera phones with the promise of enhancing existing camera functions (e.g., auto exposure) and adding new features (e.g., blink detection). In this study we examined the use of face detection algorithms in assisting auto exposure (AE). The set of 706 images used in this study was captured with Canon digital single-lens reflex cameras and subsequently processed with an image processing pipeline. A psychophysical study was performed to obtain the optimal exposure, along with the upper and lower bounds of exposure, for all 706 images. Three methods of marking faces were utilized: manual marking, face detection algorithm A (FD-A), and face detection algorithm B (FD-B). The manual marking method found 751 faces in 426 images, which served as the ground truth for face regions of interest; the remaining images either contained no faces or contained faces too small to be considered detectable. The two face detection algorithms differ in resource requirements and in performance: FD-A uses less memory and fewer gate counts than FD-B, but FD-B detects more faces and produces fewer false positives. A face detection assisted auto exposure algorithm was developed and tested against the evaluation results from the psychophysical study. The AE test results showed noticeable improvement when faces were detected and used in auto exposure. However, the presence of false positives would negatively impact the added benefit.
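The core idea of the study above, weighting a detected face region more heavily than the background when metering exposure, can be sketched as follows. This is an illustrative toy in pure Python, not the authors' algorithm; the target luminance, face weight, and helper names are assumptions.

```python
def mean_luma(image, box=None):
    """Mean luminance of a grayscale image (list of rows), optionally in a box."""
    if box is None:
        box = (0, 0, len(image[0]), len(image))
    x, y, w, h = box
    vals = [image[r][c] for r in range(y, y + h) for c in range(x, x + w)]
    return sum(vals) / len(vals)

def face_weighted_exposure(image, face_box, target=118.0, face_weight=0.7):
    """Exposure gain pulling a face-weighted mean luminance toward `target`.

    With no face detected, metering falls back to the global mean."""
    global_mean = mean_luma(image)
    if face_box is None:
        metered = global_mean
    else:
        metered = (face_weight * mean_luma(image, face_box)
                   + (1 - face_weight) * global_mean)
    return target / metered

# A dark (backlit) face on a bright background pushes the gain up.
img = [[200] * 8 for _ in range(8)]
for r in range(2, 6):
    for c in range(2, 6):
        img[r][c] = 60  # face region
gain_global = face_weighted_exposure(img, None)
gain_face = face_weighted_exposure(img, (2, 2, 4, 4))
```

In this toy frame the face-weighted gain comes out higher than the globally metered gain, mirroring the motivation of the paper: backlit faces are underexposed by plain global metering.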
A robust human face detection algorithm
NASA Astrophysics Data System (ADS)
Raviteja, Thaluru; Karanam, Srikrishna; Yeduguru, Dinesh Reddy V.
2012-01-01
Human face detection plays a vital role in many applications, such as video surveillance, face image database management, and human-computer interfaces. This paper proposes a robust algorithm for face detection in still color images that works well even in a crowded environment. The algorithm uses a conjunction of skin color histograms, morphological processing, and geometrical analysis to detect human faces. To reinforce the accuracy of face detection, we further identify mouth and eye regions to establish the presence or absence of a face in a particular region of interest.
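The three stages named in the abstract (skin color test, morphological cleanup, geometric analysis) can be sketched roughly as below. The explicit RGB skin rule and the aspect-ratio bounds are commonly cited illustrative values, not the paper's histogram model.

```python
def is_skin(r, g, b):
    """Simple explicit RGB skin rule (daylight variant, illustrative)."""
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and r > g and r > b)

def erode(mask):
    """3x3 binary erosion to remove isolated skin-colored pixels."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def plausible_face_region(mask, lo=0.8, hi=2.0):
    """Geometric check: bounding-box height/width ratio of the skin pixels."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    if not pts:
        return False
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    w, h = max(xs) - min(xs) + 1, max(ys) - min(ys) + 1
    return lo <= h / w <= hi
```

A real pipeline in this style would follow the geometric gate with the mouth/eye verification the abstract mentions; that step is omitted here.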
Face detection and eyeglasses detection for thermal face recognition
NASA Astrophysics Data System (ADS)
Zheng, Yufeng
2012-01-01
Thermal face recognition has become an active research direction in human identification because it does not rely on illumination conditions. Face detection and eyeglasses detection are necessary steps prior to face recognition using thermal images. Infrared light cannot pass through glasses, so glasses appear as dark areas in a thermal image. One possible solution is to detect eyeglasses and exclude the eyeglasses areas before face matching. For thermal face detection, a projection profile analysis algorithm is proposed: region growing and morphology operations are used to segment the body of a subject, and the derivatives of two projections (horizontal and vertical) are then calculated and analyzed to locate a minimal rectangle containing the face area. The search region for a pair of eyeglasses is restricted to the detected face area. The eyeglasses detection algorithm produces either a binary mask if eyeglasses are present, or an empty set if there are no eyeglasses. The proposed eyeglasses detection algorithm employs block processing, region growing, and a priori knowledge (i.e., low mean and variance within eyeglass areas, and the typical shapes and locations of eyeglasses). The results of face detection and eyeglasses detection are quantitatively measured and analyzed using manually defined ground truths (for both faces and eyeglasses). Our experimental results show that the proposed face detection and eyeglasses detection algorithms perform very well against the predefined ground truths.
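The projection profile analysis described above can be illustrated with a minimal sketch: sum a binary body mask along rows and columns, then treat the largest jump in the row profile's derivative as the shoulder line bounding the head. The smoothing and thresholds of the actual algorithm are omitted, and the helper names are assumptions.

```python
def projections(mask):
    """Row sums and column sums of a binary body mask."""
    horiz = [sum(row) for row in mask]        # one value per row
    vert = [sum(col) for col in zip(*mask)]   # one value per column
    return horiz, vert

def derivative(profile):
    return [b - a for a, b in zip(profile, profile[1:])]

def face_box(mask):
    """Head bounding box: rows above the sharpest widening (shoulders)."""
    horiz, _ = projections(mask)
    top = next(i for i, v in enumerate(horiz) if v > 0)
    d = derivative(horiz)
    cut = max(range(len(d)), key=lambda i: d[i]) + 1   # shoulder row
    head_cols = [sum(col) for col in zip(*mask[top:cut])]
    left = next(i for i, v in enumerate(head_cols) if v > 0)
    right = max(i for i, v in enumerate(head_cols) if v > 0)
    return (left, top, right - left + 1, cut - top)    # x, y, w, h

# Toy silhouette: a narrow head (rows 1-3) over wide shoulders (rows 4-6).
mask = [[0] * 9 for _ in range(8)]
for r in range(1, 4):
    for c in range(3, 6):
        mask[r][c] = 1
for r in range(4, 7):
    for c in range(1, 8):
        mask[r][c] = 1
```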
Searching for differences in race: is there evidence for preferential detection of other-race faces?
Lipp, Ottmar V; Terry, Deborah J; Smith, Joanne R; Tellegen, Cassandra L; Kuebbeler, Jennifer; Newey, Mareka
2009-06-01
Previous research has suggested that like animal and social fear-relevant stimuli, other-race faces (African American) are detected preferentially in visual search. Three experiments using Chinese or Indonesian faces as other-race faces yielded the opposite pattern of results: faster detection of same-race faces among other-race faces. This apparently inconsistent pattern of results was resolved by showing that Asian and African American faces are detected preferentially in tasks that have small stimulus sets and employ fixed target searches. Asian and African American other-race faces are found more slowly among Caucasian face backgrounds if larger stimulus sets are used in tasks with a variable mapping of stimulus to background or target. Thus, preferential detection of other-race faces was not found under task conditions in which preferential detection of animal and social fear-relevant stimuli is evident. Although consistent with the view that same-race faces are processed in more detail than other-race faces, the current findings suggest that other-race faces do not draw attention preferentially.
LoBue, Vanessa; Matthews, Kaleigh; Harvey, Teresa; Thrasher, Cat
2014-02-01
For decades, researchers have documented a bias for the rapid detection of angry faces in adult, child, and even infant participants. However, despite the age of the participant, the facial stimuli used in all of these experiments were schematic drawings or photographs of adult faces. The current research is the first to examine the detection of both child and adult emotional facial expressions. In our study, 3- to 5-year-old children and adults detected angry, sad, and happy faces among neutral distracters. The depicted faces were of adults or of other children. As in previous work, children detected angry faces more quickly than happy and neutral faces overall, and they tended to detect the faces of other children more quickly than the faces of adults. Adults also detected angry faces more quickly than happy and sad faces even when the faces depicted child models. The results are discussed in terms of theoretical implications for the development of a bias for threat in detection. Copyright © 2013 Elsevier Inc. All rights reserved.
Energy conservation using face detection
NASA Astrophysics Data System (ADS)
Deotale, Nilesh T.; Kalbande, Dhananjay R.; Mishra, Akassh A.
2011-10-01
Computerized face detection is concerned with the difficult task of locating faces in a video signal of a person. It has several applications, such as face recognition, simultaneous multiple-face processing, biometrics, security, video surveillance, human-computer interfaces, image database management, autofocus in digital cameras, and selecting regions of interest in pan-and-scale photo slideshows. The present paper deals with energy conservation using face detection. Automating the process requires the use of various image processing techniques. Several methods can be used for face detection, including contour tracking, template matching, controlled background, model-based, motion-based, and color-based approaches. The video of the subject is converted into images, which are then selected for processing. However, several factors, such as poor illumination, movement of the face, viewpoint-dependent physical appearance, acquisition geometry, imaging conditions, and compression artifacts, make face detection difficult. This paper reports an algorithm for conserving energy using face detection on various devices: the brightness of the complete image is reduced, and the brightness of the particular area of the image where the face is located is then adjusted using histogram equalization.
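The final sentence describes a concrete two-step procedure: dim the whole frame, then restore contrast in the detected face region with histogram equalization. A minimal grayscale sketch, with an assumed dimming factor:

```python
def dim(image, factor=0.6):
    """Reduce brightness of the whole grayscale frame (values 0-255)."""
    return [[int(v * factor) for v in row] for row in image]

def equalize(image, box):
    """Histogram-equalize the pixels inside box = (x, y, w, h), in place."""
    x, y, w, h = box
    region = [image[r][c] for r in range(y, y + h) for c in range(x, x + w)]
    hist = [0] * 256
    for v in region:
        hist[v] += 1
    cdf, total = [], 0                 # cumulative histogram
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = min(c for c in cdf if c > 0)
    n = len(region)
    # standard histogram-equalization lookup table
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * 255) for c in cdf]
    for r in range(y, y + h):
        for c in range(x, x + w):
            image[r][c] = lut[image[r][c]]
    return image
```

An energy-saving display loop would call `dim` on each frame and `equalize` only on the face box returned by the detector.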
Real-time detection with AdaBoost-svm combination in various face orientation
NASA Astrophysics Data System (ADS)
Fhonna, R. P.; Nasution, M. K. M.; Tulus
2018-03-01
Much previous research has used the AdaBoost-SVM algorithm for face detection. However, to our knowledge, no research has performed face detection on real-time data in various orientations using the combination of AdaBoost and Support Vector Machine (SVM). The complex and diverse variation of faces, real-time data in various orientations, and a very complex application slow down the performance of a face detection system; this is the challenge addressed in this research. Face orientations of 90°, 45°, 0°, -45°, and -90° were used in the detection system. This combined method is expected to be an effective and efficient solution for various face orientations. The results showed that the highest average detection rate was for faces oriented at 0° and the lowest was for faces oriented at 90°.
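The AdaBoost-SVM combination referred to above is commonly organized as a fast boosted pre-filter whose positive windows are re-checked by a slower SVM. Below is a minimal decision-stump AdaBoost trainer with a pluggable stand-in for the SVM stage; everything here is an illustrative assumption, not the paper's implementation.

```python
import math

def train_adaboost(X, y, rounds=10):
    """X: list of feature vectors; y: labels in {-1, +1}."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        best = None
        # exhaustive search for the best weighted decision stump
        for f in range(len(X[0])):
            for thr in sorted({x[f] for x in X}):
                for sign in (1, -1):
                    pred = [sign if x[f] >= thr else -sign for x in X]
                    err = sum(wi for wi, p, yi in zip(w, pred, y) if p != yi)
                    if best is None or err < best[0]:
                        best = (err, f, thr, sign, pred)
        err, f, thr, sign, pred = best
        err = max(err, 1e-10)                       # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)     # stump weight
        ensemble.append((alpha, f, thr, sign))
        # re-weight examples toward the ones this stump got wrong
        w = [wi * math.exp(-alpha * yi * p) for wi, yi, p in zip(w, y, pred)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def boost_score(ensemble, x):
    return sum(a * (s if x[f] >= t else -s) for a, f, t, s in ensemble)

def detect(ensemble, x, verify=lambda x: True):
    """Cheap boosted filter first; the SVM stand-in confirms positives."""
    return boost_score(ensemble, x) > 0 and verify(x)
```

In a full system, `verify` would wrap a trained SVM evaluated only on the windows the boosted stage passes, which is what makes the combination fast.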
Seeing Objects as Faces Enhances Object Detection.
Takahashi, Kohske; Watanabe, Katsumi
2015-10-01
The face is a special visual stimulus. Both bottom-up processes for low-level facial features and top-down modulation by face expectations contribute to the advantages of face perception. However, it is hard to dissociate the top-down factors from the bottom-up processes, since facial stimuli mandatorily lead to face awareness. In the present study, using the face pareidolia phenomenon, we demonstrated that face awareness, namely seeing an object as a face, enhances object detection performance. In face pareidolia, some people see a visual stimulus, for example, three dots arranged in a V shape, as a face, while others do not. This phenomenon allows us to investigate the effect of face awareness while leaving the stimulus per se unchanged. Participants were asked to detect a face target or a triangle target. While the target per se was identical between the two tasks, detection sensitivity was higher when the participants recognized the target as a face. This was the case irrespective of the stimulus eccentricity or the vertical orientation of the stimulus. These results demonstrate that seeing an object as a face facilitates object detection via top-down modulation. The advantages of face perception are, therefore, at least partly due to face awareness.
Efficient human face detection in infancy.
Jakobsen, Krisztina V; Umstead, Lindsey; Simpson, Elizabeth A
2016-01-01
Adults detect conspecific faces more efficiently than heterospecific faces; however, the development of this own-species bias (OSB) remains unexplored. We tested whether 6- and 11-month-olds exhibit OSB in their attention to human and animal faces in complex visual displays with high perceptual load (25 images competing for attention). Infants (n = 48) and adults (n = 43) passively viewed arrays containing a face among 24 non-face distractors while we measured their gaze with remote eye tracking. While OSB is typically not observed until about 9 months, we found that, already by 6 months, human faces were more likely to be detected, were detected more quickly (attention capture), and received longer looks (attention holding) than animal faces. These data suggest that 6-month-olds already exhibit OSB in face detection efficiency, consistent with perceptual attunement. This specialization may reflect the biological importance of detecting conspecific faces, a foundational ability for early social interactions. © 2015 Wiley Periodicals, Inc.
Adaboost multi-view face detection based on YCgCr skin color model
NASA Astrophysics Data System (ADS)
Lan, Qi; Xu, Zhiyong
2016-09-01
The traditional Adaboost face detection algorithm uses Haar-like features to train face classifiers, whose detection error rate is low in face regions. Under complex backgrounds, however, the classifiers easily produce false detections in background regions whose gray-level distribution resembles that of faces, so the error rate of the traditional Adaboost algorithm is high. As one of the most important features of a face, skin color clusters well in the YCgCr color space, so non-face areas can be quickly excluded with a skin color model. Combining the advantages of the Adaboost algorithm and skin color detection, this paper therefore proposes an Adaboost face detection method based on the YCgCr skin color model. Experiments show that, compared with the traditional algorithm, the proposed method improves significantly in detection accuracy and error rate.
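The YCgCr skin gate used to prune non-face regions before the boosted classifier runs can be sketched as follows. The conversion follows the published YCgCr definition (a YCbCr variant with a green chroma channel in place of Cb); the skin thresholds below are illustrative assumptions, not the paper's fitted values.

```python
def rgb_to_ycgcr(r, g, b):
    """Convert 8-bit RGB to the YCgCr color space (digital 8-bit ranges)."""
    y  = 16  + ( 65.481 * r + 128.553 * g + 24.966 * b) / 255
    cg = 128 + (-81.085 * r + 112.0   * g - 30.915 * b) / 255
    cr = 128 + (112.0   * r -  93.786 * g - 18.214 * b) / 255
    return y, cg, cr

def is_skin(r, g, b, cg_range=(85, 135), cr_range=(130, 165)):
    """Skin test: chroma inside an assumed Cg/Cr box, luma ignored."""
    _, cg, cr = rgb_to_ycgcr(r, g, b)
    return cg_range[0] <= cg <= cg_range[1] and cr_range[0] <= cr <= cr_range[1]

def skin_mask(image):
    """image: rows of (r, g, b) tuples -> binary mask handed to the classifier."""
    return [[int(is_skin(*px)) for px in row] for row in image]
```

Windows whose mask is mostly zero would be skipped entirely, which is where the speed and false-positive gains over plain Adaboost come from.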
Novel face-detection method under various environments
NASA Astrophysics Data System (ADS)
Jing, Min-Quan; Chen, Ling-Hwei
2009-06-01
We propose a method to detect a face with different poses under various environments. On the basis of skin color information, skin regions are first extracted from an input image. Next, the shoulder part is cut out by using shape information and the head part is then identified as a face candidate. For a face candidate, a set of geometric features is applied to determine if it is a profile face. If not, then a set of eyelike rectangles extracted from the face candidate and the lighting distribution are used to determine if the face candidate is a nonprofile face. Experimental results show that the proposed method is robust under a wide range of lighting conditions, different poses, and races. The detection rate for the HHI face database is 93.68%. For the Champion face database, the detection rate is 95.15%.
A Method of Face Detection with Bayesian Probability
NASA Astrophysics Data System (ADS)
Sarker, Goutam
2010-10-01
The objective of face detection is to identify all images which contain a face, irrespective of orientation, illumination conditions, etc. This is a hard problem because faces are highly variable in size, shape, lighting conditions, etc. Many methods have been designed and developed to detect faces in a single image. The present paper is based on an `appearance-based method', which relies on learning facial and non-facial features from image examples. This, in turn, is based on statistical analysis of examples and counter-examples of facial images and employs a Bayesian conditional classification rule to estimate the probability that a face (or non-face) is present within an image frame. The detection rate of the present system is very high, and the numbers of false positive and false negative detections are therefore substantially low.
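The Bayesian conditional classification rule described above reduces, in its simplest form, to comparing class posteriors built from feature likelihoods learned on examples and counter-examples. A toy version over binary features with Laplace smoothing (the binary feature encoding is an assumption):

```python
import math

def train(examples, labels):
    """examples: binary feature vectors; labels: True for face windows."""
    n_feat = len(examples[0])
    counts = {True: [1] * n_feat, False: [1] * n_feat}   # Laplace smoothing
    totals = {True: 2, False: 2}
    for x, lab in zip(examples, labels):
        totals[lab] += 1
        for i, v in enumerate(x):
            counts[lab][i] += v
    prior = {lab: totals[lab] / (totals[True] + totals[False])
             for lab in (True, False)}
    return counts, totals, prior

def log_posterior(model, x, lab):
    """log P(lab) + sum_i log P(x_i | lab), i.e. an unnormalized posterior."""
    counts, totals, prior = model
    lp = math.log(prior[lab])
    for i, v in enumerate(x):
        p = counts[lab][i] / totals[lab]
        lp += math.log(p if v else 1 - p)
    return lp

def is_face(model, x):
    """Bayes rule: declare a face when the face posterior dominates."""
    return log_posterior(model, x, True) > log_posterior(model, x, False)
```

A real appearance-based system would use richer features and density models, but the decision rule keeps this shape.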
Li, Jun; Liu, Jiangang; Liang, Jimin; Zhang, Hongchuan; Zhao, Jizheng; Rieth, Cory A.; Huber, David E.; Li, Wu; Shi, Guangming; Ai, Lin; Tian, Jie; Lee, Kang
2013-01-01
To study top-down face processing, the present study used an experimental paradigm in which participants detected non-existent faces in pure noise images. Conventional BOLD signal analysis identified three regions involved in this illusory face detection. These regions included the left orbitofrontal cortex (OFC) in addition to the right fusiform face area (FFA) and right occipital face area (OFA), both of which were previously known to be involved in both top-down and bottom-up processing of faces. We used Dynamic Causal Modeling (DCM) and Bayesian model selection to further analyze the data, revealing both intrinsic and modulatory effective connectivities among these three cortical regions. Specifically, our results support the claim that the orbitofrontal cortex plays a crucial role in the top-down processing of faces by regulating the activities of the occipital face area, and the occipital face area in turn detects the illusory face features in the visual stimuli and then provides this information to the fusiform face area for further analysis. PMID:20423709
Chen, Wenfeng; Liu, Chang Hong; Nakabayashi, Kazuyo
2012-01-01
Recent research has shown that the presence of a task-irrelevant attractive face can induce a transient diversion of attention from a perceptual task that requires covert deployment of attention to one of the two locations. However, it is not known whether this spontaneous appraisal for facial beauty also modulates attention in change detection among multiple locations, where a slower, and more controlled search process is simultaneously affected by the magnitude of a change and the facial distinctiveness. Using the flicker paradigm, this study examines how spontaneous appraisal for facial beauty affects the detection of identity change among multiple faces. Participants viewed a display consisting of two alternating frames of four faces separated by a blank frame. In half of the trials, one of the faces (target face) changed to a different person. The task of the participant was to indicate whether a change of face identity had occurred. The results showed that (1) observers were less efficient at detecting identity change among multiple attractive faces relative to unattractive faces when the target and distractor faces were not highly distinctive from one another; and (2) it is difficult to detect a change if the new face is similar to the old. The findings suggest that attractive faces may interfere with the attention-switch process in change detection. The results also show that attention in change detection was strongly modulated by physical similarity between the alternating faces. Although facial beauty is a powerful stimulus that has well-demonstrated priority, its influence on change detection is easily superseded by low-level image similarity. The visual system appears to take a different approach to facial beauty when a task requires resource-demanding feature comparisons.
Looser, Christine E; Guntupalli, Jyothi S; Wheatley, Thalia
2013-10-01
More than a decade of research has demonstrated that faces evoke prioritized processing in a 'core face network' of three brain regions. However, whether these regions prioritize the detection of global facial form (shared by humans and mannequins) or the detection of life in a face has remained unclear. Here, we dissociate form-based and animacy-based encoding of faces by using animate and inanimate faces with human form (humans, mannequins) and dog form (real dogs, toy dogs). We used multivariate pattern analysis of BOLD responses to uncover the representational similarity space for each area in the core face network. Here, we show that only responses in the inferior occipital gyrus are organized by global facial form alone (human vs dog) while animacy becomes an additional organizational priority in later face-processing regions: the lateral fusiform gyri (latFG) and right superior temporal sulcus. Additionally, patterns evoked by human faces were maximally distinct from all other face categories in the latFG and parts of the extended face perception system. These results suggest that once a face configuration is perceived, faces are further scrutinized for whether the face is alive and worthy of social cognitive resources.
Face liveness detection for face recognition based on cardiac features of skin color image
NASA Astrophysics Data System (ADS)
Suh, Kun Ha; Lee, Eui Chul
2016-07-01
With the growth of biometric technology, spoofing attacks have emerged as a threat to the security of such systems. The main spoofing scenarios in face recognition systems include printing attacks, replay attacks, and 3D mask attacks. To prevent such attacks, techniques that evaluate the liveness of the biometric data can be considered as a solution. In this paper, a novel face liveness detection method based on a cardiac signal extracted from the face is presented. The key point of the proposed method is that the cardiac characteristic is detected in live faces but not in non-live faces. Experimental results showed that the proposed method can be an effective way to detect printing attacks or 3D mask attacks.
Face liveness detection using shearlet-based feature descriptors
NASA Astrophysics Data System (ADS)
Feng, Litong; Po, Lai-Man; Li, Yuming; Yuan, Fang
2016-07-01
Face recognition is a widely used biometric technology due to its convenience, but it is vulnerable to spoofing attacks made with non-real faces such as photographs or videos of valid users. The anti-spoofing problem must be resolved before face recognition can be widely applied in daily life. Face liveness detection is a core technology for ensuring that the input face belongs to a live person. However, this remains very challenging for conventional liveness detection approaches based on texture analysis and motion detection. The aim of this paper is to propose a feature descriptor and an efficient framework that can be used to effectively deal with the face liveness detection problem. In this framework, new feature descriptors are defined using a multiscale directional transform (the shearlet transform). Then, stacked autoencoders and a softmax classifier are concatenated to detect face liveness. We evaluated this approach using the CASIA face anti-spoofing database and the replay-attack database. The experimental results show that our approach performs better than the state-of-the-art techniques under the provided protocols of these databases, and that it is possible to significantly enhance the security of face recognition biometric systems. In addition, the experimental results also demonstrate that this framework can be easily extended to classify different spoofing attacks.
Golan, Tal; Bentin, Shlomo; DeGutis, Joseph M; Robertson, Lynn C; Harel, Assaf
2014-02-01
Expertise in face recognition is characterized by high proficiency in distinguishing between individual faces. However, faces also enjoy an advantage at the early stage of basic-level detection, as demonstrated by efficient visual search for faces among nonface objects. In the present study, we asked (1) whether the face advantage in detection is a unique signature of face expertise, or whether it generalizes to other objects of expertise, and (2) whether expertise in face detection is intrinsically linked to expertise in face individuation. We compared how groups with varying degrees of object and face expertise (typical adults, developmental prosopagnosics [DP], and car experts) search for objects within and outside their domains of expertise (faces, cars, airplanes, and butterflies) among a variable set of object distractors. Across all three groups, search efficiency (indexed by reaction time slopes) was higher for faces and airplanes than for cars and butterflies. Notably, the search slope for car targets was considerably shallower in the car experts than in nonexperts. Although the mean face slope was slightly steeper among the DPs than in the other two groups, most of the DPs' search slopes were well within the normative range. This pattern of results suggests that expertise in object detection is indeed associated with expertise at the subordinate level, that it is not specific to faces, and that the two types of expertise are distinct facilities. We discuss the potential role of experience in bridging between low-level discriminative features and high-level naturalistic categories.
Multiview face detection based on position estimation over multicamera surveillance system
NASA Astrophysics Data System (ADS)
Huang, Ching-chun; Chou, Jay; Shiu, Jia-Hou; Wang, Sheng-Jyh
2012-02-01
In this paper, we propose a multi-view face detection system that locates head positions and indicates the direction of each face in 3-D space over a multi-camera surveillance system. To locate 3-D head positions, conventional methods rely on face detection in 2-D images and project the face regions back to 3-D space for correspondence. However, the inevitable false detections and rejections usually degrade system performance. Instead, our system searches for heads and face directions over the 3-D space using a sliding cube. Each searched 3-D cube is projected onto the 2-D camera views to determine the existence and direction of human faces. Moreover, a pre-processing step that estimates the locations of candidate targets is introduced to speed up the search over the 3-D space. In summary, our proposed method can efficiently fuse multi-camera information and suppress the ambiguity caused by detection errors. Our evaluation shows that the proposed approach can efficiently indicate the head position and face direction in real video sequences, even under serious occlusion.
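The core geometric step, projecting a candidate 3-D search cube into each camera view, can be sketched with a standard pinhole model. The intrinsics, pose, and cube size below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project Nx3 world points into pixel coordinates with a pinhole camera."""
    cam = R @ points_3d.T + t.reshape(3, 1)   # world frame -> camera frame
    uv = K @ cam                              # apply intrinsics
    return (uv[:2] / uv[2]).T                 # perspective divide -> Nx2 pixels

def cube_corners(center, size):
    """The 8 corners of an axis-aligned cube used as a 3-D search window."""
    offsets = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)])
    return center + 0.5 * size * offsets

# Assumed camera: 800 px focal length, principal point at image center.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 0.0])
corners = cube_corners(np.array([0.0, 0.0, 4.0]), 0.3)  # head-sized cube, 4 m away
pix = project_points(corners, K, R, t)
```

In the paper's scheme, the projected footprint in each view would then be classified for face presence and direction, and the per-view evidence fused across cameras.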
Automated facial attendance logger for students
NASA Astrophysics Data System (ADS)
Krithika, L. B.; Kshitish, S.; Kishore, M. R.
2017-11-01
Over the past two decades, face recognition has become an essential tool in various spheres of activity. The face recognition pipeline is composed of three stages: face detection, feature extraction, and recognition. In this paper, we put forth a new application of face detection and recognition in education. The proposed system scans the classroom, detects the faces of the students in class, matches each detected face against the templates available in the database, and updates the attendance of the respective students.
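The matching-and-logging step can be sketched as a nearest-template lookup over face embeddings. The embedding vectors, student IDs, and distance threshold below are hypothetical; the paper does not specify its matching algorithm:

```python
import numpy as np

def mark_attendance(face_embedding, templates, threshold=0.6):
    """Return the student ID whose stored template is nearest to the probe
    embedding, or None if no template is close enough to count as a match."""
    best_id, best_dist = None, np.inf
    for student_id, tmpl in templates.items():
        d = np.linalg.norm(face_embedding - tmpl)
        if d < best_dist:
            best_id, best_dist = student_id, d
    return best_id if best_dist <= threshold else None

# Toy 2-D "embeddings" standing in for real face feature vectors.
templates = {"s001": np.array([0.9, 0.1]), "s002": np.array([0.1, 0.9])}
```

In practice the embeddings would come from a trained face recognizer, and each returned ID would increment that student's attendance record.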
Deficient cortical face-sensitive N170 responses and basic visual processing in schizophrenia.
Maher, S; Mashhoon, Y; Ekstrom, T; Lukas, S; Chen, Y
2016-01-01
Face detection, an ability to identify a visual stimulus as a face, is impaired in patients with schizophrenia. It is unclear whether impaired face processing in this psychiatric disorder results from face-specific domains or stems from more basic visual domains. In this study, we examined cortical face-sensitive N170 response in schizophrenia, taking into account deficient basic visual contrast processing. We equalized visual contrast signals among patients (n=20) and controls (n=20) and between face and tree images, based on their individual perceptual capacities (determined using psychophysical methods). We measured N170, a putative temporal marker of face processing, during face detection and tree detection. In controls, N170 amplitudes were significantly greater for faces than trees across all three visual contrast levels tested (perceptual threshold, two times perceptual threshold and 100%). In patients, however, N170 amplitudes did not differ between faces and trees, indicating diminished face selectivity (indexed by the differential responses to face vs. tree). These results indicate a lack of face-selectivity in temporal responses of brain machinery putatively responsible for face processing in schizophrenia. This neuroimaging finding suggests that face-specific processing is compromised in this psychiatric disorder. Copyright © 2015 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polese, Luigi Gentile; Brackney, Larry
An image-based occupancy sensor includes a motion detection module that receives and processes an image signal to generate a motion detection signal, a people detection module that receives the image signal and processes the image signal to generate a people detection signal, a face detection module that receives the image signal and processes the image signal to generate a face detection signal, and a sensor integration module that receives the motion detection signal from the motion detection module, receives the people detection signal from the people detection module, receives the face detection signal from the face detection module, and generates an occupancy signal using the motion detection signal, the people detection signal, and the face detection signal, with the occupancy signal indicating vacancy or occupancy, with an occupancy indication specifying that one or more people are detected within the monitored volume.
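The sensor integration module's fusion rule is left open by the abstract; a minimal sketch, assuming a simple voting scheme over the three detector outputs (the threshold is an assumption, not the patent's method):

```python
from dataclasses import dataclass

@dataclass
class DetectionSignals:
    """Outputs of the motion, people, and face detection modules for one frame."""
    motion: bool
    people: bool
    face: bool

def integrate(signals, votes_required=1):
    """Fuse the three detection signals into a single occupancy signal.
    With votes_required=1, any firing detector marks the volume occupied;
    a higher threshold trades sensitivity for robustness to false alarms."""
    votes = sum([signals.motion, signals.people, signals.face])
    return "occupied" if votes >= votes_required else "vacant"
```

A deployed sensor would also debounce the signal over time so a single missed detection does not switch lighting or HVAC off.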
Toward automated face detection in thermal and polarimetric thermal imagery
NASA Astrophysics Data System (ADS)
Gordon, Christopher; Acosta, Mark; Short, Nathan; Hu, Shuowen; Chan, Alex L.
2016-05-01
Visible-spectrum face detection algorithms perform reliably under controlled lighting conditions. However, variations in illumination and the application of cosmetics can distort the features used by common face detectors, thereby degrading their detection performance. Thermal and polarimetric thermal facial imaging are relatively invariant to illumination and robust to the application of makeup, because they measure emitted radiation instead of reflected light. The objective of this work is to evaluate a government off-the-shelf wavelet-based naïve-Bayes face detection algorithm and a commercial off-the-shelf Viola-Jones cascade face detection algorithm on face imagery acquired in different spectral bands. New classifiers were trained using the Viola-Jones cascade object detection framework with preprocessed facial imagery. Preprocessing with Difference of Gaussians (DoG) filtering reduces the modality gap between facial signatures across the different spectral bands, enabling more correlated histogram of oriented gradients (HOG) features to be extracted from the preprocessed thermal and visible face images. Since the availability of training data is much more limited in the thermal spectrum than in the visible spectrum, it is not feasible to train a robust multi-modal face detector using thermal imagery alone. A large training dataset was therefore constituted from DoG-filtered visible and thermal imagery and used to generate a custom-trained Viola-Jones detector. A 40% increase in face detection rate was achieved on a testing dataset, as compared to the performance of a pre-trained baseline face detector. Insights gained in this research are valuable for the development of more robust multi-modal face detectors.
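DoG filtering subtracts two Gaussian blurs of different widths, acting as a band-pass filter that suppresses the low-frequency intensity differences between spectral bands. A minimal numpy sketch of the operation (the sigmas are illustrative; the paper's parameters are not given in the abstract):

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    """Normalized 1-D Gaussian kernel with support of ~3 sigma per side."""
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur: convolve rows, then columns."""
    k = gaussian_kernel1d(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def dog(img, sigma1=1.0, sigma2=2.0):
    """Difference of Gaussians: keeps mid-frequency structure (edges),
    discards the smooth illumination component shared by neither band."""
    return blur(img, sigma1) - blur(img, sigma2)
```

Applying `dog` to both thermal and visible images before HOG extraction is what narrows the modality gap in the paper's pipeline.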
Nihei, Yuji; Minami, Tetsuto; Nakauchi, Shigeki
2018-01-01
Faces represent important information for social communication, because social information, such as face-color, expression, and gender, is obtained from faces. Therefore, individuals tend to find faces unconsciously, even in objects. Why is face-likeness perceived in non-face objects? Previous event-related potential (ERP) studies showed that the P1 component (early visual processing), the N170 component (face detection), and the N250 component (personal detection) reflect the neural processing of faces. Inverted faces were reported to enhance the amplitude and delay the latency of P1 and N170. To investigate face-likeness processing in the brain, we explored the face-related components of the ERP through a face-like evaluation task using natural faces, cars, insects, and Arcimboldo paintings presented upright or inverted. We found a significant correlation between the inversion effect index and face-like scores in P1 in both hemispheres and in N170 in the right hemisphere. These results suggest that judgment of face-likeness occurs in a relatively early stage of face processing. PMID:29503612
Minami, T; Goto, K; Kitazaki, M; Nakauchi, S
2011-03-10
In humans, face configuration, contour and color may affect face perception, which is important for social interactions. This study aimed to determine the effect of color information on face perception by measuring event-related potentials (ERPs) during the presentation of natural- and bluish-colored faces. Our results demonstrated that the amplitude of the N170 event-related potential, which correlates strongly with face processing, was higher in response to a bluish-colored face than to a natural-colored face. However, gamma-band activity was insensitive to the deviation from a natural face color. These results indicated that color information affects the N170 associated with a face detection mechanism, which suggests that face color is important for face detection. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Becker, D. Vaughn; Anderson, Uriah S.; Mortensen, Chad R.; Neufeld, Samantha L.; Neel, Rebecca
2011-01-01
Is it easier to detect angry or happy facial expressions in crowds of faces? The present studies used several variations of the visual search task to assess whether people selectively attend to expressive faces. Contrary to widely cited studies (e.g., Ohman, Lundqvist, & Esteves, 2001) that suggest angry faces "pop out" of crowds, our review of…
High precision automated face localization in thermal images: oral cancer dataset as test case
NASA Astrophysics Data System (ADS)
Chakraborty, M.; Raman, S. K.; Mukhopadhyay, S.; Patsa, S.; Anjum, N.; Ray, J. G.
2017-02-01
Automated face detection is the pivotal step in computer-vision-aided facial medical diagnosis and biometrics. This paper presents an automatic, subject-adaptive framework for accurate face detection in the long-wave infrared spectrum, evaluated on our database for oral cancer detection consisting of malignant, precancerous, and normal subjects of varied age groups. Previous work on oral cancer detection using Digital Infrared Thermal Imaging (DITI) reveals that patients and normal subjects differ significantly in their facial thermal distribution. It is therefore a challenging task to formulate a completely adaptive framework that can accurately localize faces in such a subject-specific modality. Our model first extracts the most probable facial regions by minimum-error thresholding, followed by adaptive methods that leverage the horizontal and vertical projections of the segmented thermal image. Additionally, the model incorporates domain knowledge by exploiting temperature differences between strategic locations of the face. To the best of our knowledge, this is the first work on detecting faces in thermal facial images comprising both patients and normal subjects. Previous work on face detection has not specifically targeted automated medical diagnosis; the face bounding boxes returned by those algorithms are thus loose and not apt for further medical automation. Our algorithm significantly outperforms contemporary face detection algorithms in terms of commonly used metrics for evaluating face detection accuracy. Since our method has been tested on a challenging dataset consisting of both patients and normal subjects of diverse age groups, it can be seamlessly adopted in any DITI-guided facial healthcare or biometric application.
Live face detection based on the analysis of Fourier spectra
NASA Astrophysics Data System (ADS)
Li, Jiangwei; Wang, Yunhong; Tan, Tieniu; Jain, Anil K.
2004-08-01
Biometrics is a rapidly developing technology that identifies a person based on his or her physiological or behavioral characteristics. To ensure correct authentication, a biometric system must be able to detect and reject the use of a copy of a biometric trait instead of the live trait. This function is usually termed "liveness detection". This paper describes a new method for live face detection. Using structure and movement information of the live face, an effective live face detection algorithm is presented. Compared to existing approaches, which concentrate on the measurement of 3D depth information, this method is based on the analysis of Fourier spectra of a single face image or face image sequences. Experimental results show that the proposed method has an encouraging performance.
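The spectral intuition behind this line of work is that a recaptured photograph loses high-frequency detail relative to a live face. A minimal sketch of one such descriptor, the fraction of spectral energy outside a low-frequency disc (the cutoff and synthetic images are assumptions, not the paper's exact statistic):

```python
import numpy as np

def high_frequency_descriptor(img, radius_frac=0.25):
    """Fraction of (mean-removed) 2-D spectral energy outside a central
    low-frequency disc; sharp live-face crops score higher than recaptures."""
    img = img - img.mean()                    # drop the DC component
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)      # distance from the DC bin
    total = power.sum()
    return power[r > radius_frac * min(h, w)].sum() / total if total > 0 else 0.0

rng = np.random.RandomState(0)
sharp = rng.rand(64, 64)                      # stand-in for a sharp live-face crop
blurred = sharp.copy()
for _ in range(3):                            # crude low-pass: repeated cross-shaped average
    blurred = (np.roll(blurred, 1, 0) + np.roll(blurred, -1, 0) +
               np.roll(blurred, 1, 1) + np.roll(blurred, -1, 1) + blurred) / 5
```

The paper additionally exploits temporal variation across an image sequence; this sketch covers only the single-image spectral cue.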
Fast hierarchical knowledge-based approach for human face detection in color images
NASA Astrophysics Data System (ADS)
Jiang, Jun; Gong, Jie; Zhang, Guilin; Hu, Ruolan
2001-09-01
This paper presents a fast hierarchical knowledge-based approach for automatically detecting multi-scale upright faces in still color images. The approach consists of three levels. At the highest level, skin-like regions are determined by a skin model based on the hue and saturation attributes in HSV color space, as well as the red and green attributes in normalized color space. In level 2, a new eye model is devised to select face candidates within the segmented skin-like regions. An important feature of the eye model is that it is independent of face scale, so faces of different scales can be found by scanning the image only once, which greatly reduces the computation time of face detection. In level 3, a face mosaic image model, which corresponds well to the physical structure of the human face, is applied to judge whether faces are present in the candidate regions. This model includes edge and gray-level rules. Experimental results show that the approach is highly robust and fast, with broad application prospects in human-computer interaction, video telephony, and related areas.
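The first level's skin segmentation amounts to thresholding hue and saturation. A minimal sketch over an HSV image; the threshold values are illustrative assumptions, not the paper's calibrated skin model:

```python
import numpy as np

def skin_mask_hsv(hsv):
    """Boolean mask of skin-like pixels from an HSV image
    (H in degrees 0-360, S and V in 0-1)."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    hue_ok = (h < 50) | (h > 340)        # reddish hues wrap around 0 degrees
    sat_ok = (s > 0.10) & (s < 0.70)     # exclude gray and oversaturated pixels
    val_ok = v > 0.20                    # discard very dark pixels
    return hue_ok & sat_ok & val_ok
```

Connected components of this mask would then feed the level-2 eye model, which prunes skin blobs that cannot contain a face.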
Efficient live face detection to counter spoof attack in face recognition systems
NASA Astrophysics Data System (ADS)
Biswas, Bikram Kumar; Alam, Mohammad S.
2015-03-01
Face recognition is a critical tool used in almost all major biometrics-based security systems. But recognition, authentication, and liveness detection of the face of an actual user are a major challenge, because an imposter or a non-live face of the actual user can be used to spoof the security system. In this research, a robust technique is proposed that detects the liveness of faces in order to counter spoof attacks. The proposed technique uses a three-dimensional (3D) fast Fourier transform to compare the spectral energies of a live face and a fake face in a mathematically selective manner. The mathematical model involves evaluating the energies of selected high-frequency bands of the average power spectra of both live and non-live faces. It also carries out proper recognition and authentication of the face of the actual user using the fringe-adjusted joint transform correlation technique, which has been found to yield the highest correlation output for a match. Experimental tests show that the proposed technique yields excellent results for identifying live faces.
Detection of emotional faces: salient physical features guide effective visual search.
Calvo, Manuel G; Nummenmaa, Lauri
2008-08-01
In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features--especially the smiling mouth--is responsible for facilitated initial orienting, which thus shortens detection. (PsycINFO Database Record (c) 2008 APA, all rights reserved).
The Effect of Early Visual Deprivation on the Development of Face Detection
ERIC Educational Resources Information Center
Mondloch, Catherine J.; Segalowitz, Sidney J.; Lewis, Terri L.; Dywan, Jane; Le Grand, Richard; Maurer, Daphne
2013-01-01
The expertise of adults in face perception is facilitated by their ability to rapidly detect that a stimulus is a face. In two experiments, we examined the role of early visual input in the development of face detection by testing patients who had been treated as infants for bilateral congenital cataract. Experiment 1 indicated that, at age 9 to…
Face pose tracking using the four-point algorithm
NASA Astrophysics Data System (ADS)
Fung, Ho Yin; Wong, Kin Hong; Yu, Ying Kin; Tsui, Kwan Pang; Kam, Ho Chuen
2017-06-01
In this paper, we have developed an algorithm to track the pose of a human face robustly and efficiently. Face pose estimation is very useful in many applications such as building virtual reality systems and creating an alternative input method for the disabled. Firstly, we have modified a face detection toolbox called DLib for the detection of a face in front of a camera. The detected face features are passed to a pose estimation method, known as the four-point algorithm, for pose computation. The theory applied and the technical problems encountered during system development are discussed in the paper. It is demonstrated that the system is able to track the pose of a face in real time using a consumer grade laptop computer.
Nestor, Adrian; Vettel, Jean M; Tarr, Michael J
2013-11-01
What basic visual structures underlie human face detection and how can we extract such structures directly from the amplitude of neural responses elicited by face processing? Here, we address these issues by investigating an extension of noise-based image classification to BOLD responses recorded in high-level visual areas. First, we assess the applicability of this classification method to such data and, second, we explore its results in connection with the neural processing of faces. To this end, we construct luminance templates from white noise fields based on the response of face-selective areas in the human ventral cortex. Using behaviorally and neurally-derived classification images, our results reveal a family of simple but robust image structures subserving face representation and detection. Thus, we confirm the role played by classical face selective regions in face detection and we help clarify the representational basis of this perceptual function. From a theory standpoint, our findings support the idea of simple but highly diagnostic neurally-coded features for face detection. At the same time, from a methodological perspective, our work demonstrates the ability of noise-based image classification in conjunction with fMRI to help uncover the structure of high-level perceptual representations. Copyright © 2012 Wiley Periodicals, Inc.
Thompson, Laura A.; Malloy, Daniel M.; Cone, John M.; Hendrickson, David L.
2009-01-01
We introduce a novel paradigm for studying the cognitive processes used by listeners within interactive settings. This paradigm places the talker and the listener in the same physical space, creating opportunities for investigations of attention and comprehension processes taking place during interactive discourse situations. An experiment was conducted to compare results from previous research using videotaped stimuli to those obtained within the live face-to-face task paradigm. A headworn apparatus is used to briefly display LEDs on the talker’s face in four locations as the talker communicates with the participant. In addition to the primary task of comprehending speeches, participants make a secondary task light detection response. In the present experiment, the talker gave non-emotionally-expressive speeches that were used in past research with videotaped stimuli. Signal detection analysis was employed to determine which areas of the face received the greatest focus of attention. Results replicate previous findings using videotaped methods. PMID:21113354
Face detection on distorted images using perceptual quality-aware features
NASA Astrophysics Data System (ADS)
Gunasekar, Suriya; Ghosh, Joydeep; Bovik, Alan C.
2014-02-01
We quantify the degradation in performance of a popular and effective face detector when human-perceived image quality is degraded by distortions due to additive white gaussian noise, gaussian blur or JPEG compression. It is observed that, within a certain range of perceived image quality, a modest increase in image quality can drastically improve face detection performance. These results can be used to guide resource or bandwidth allocation in a communication/delivery system that is associated with face detection tasks. A new face detector based on QualHOG features is also proposed that augments face-indicative HOG features with perceptual quality-aware spatial Natural Scene Statistics (NSS) features, yielding improved tolerance against image distortions. The new detector provides statistically significant improvements over a strong baseline on a large database of face images representing a wide range of distortions. To facilitate this study, we created a new Distorted Face Database, containing face and non-face patches from images impaired by a variety of common distortion types and levels. This new dataset is available for download and further experimentation at www.ideal.ece.utexas.edu/~suriya/DFD/.
The relationship between visual search and categorization of own- and other-age faces.
Craig, Belinda M; Lipp, Ottmar V
2018-03-13
Young adult participants are faster to detect young adult faces in crowds of infant and child faces than vice versa. These findings have been interpreted as evidence for more efficient attentional capture by own-age than other-age faces, but could alternatively reflect faster rejection of other-age than own-age distractors, consistent with the previously reported other-age categorization advantage: faster categorization of other-age than own-age faces. Participants searched for own-age faces in other-age backgrounds or vice versa. Extending the finding to different other-age groups, young adult participants were faster to detect young adult faces in both early adolescent (Experiment 1) and older adult backgrounds (Experiment 2). To investigate whether the own-age detection advantage could be explained by faster categorization and rejection of other-age background faces, participants in Experiments 3 and 4 also completed an age categorization task. Relatively faster categorization of other-age faces was related to relatively faster search through other-age backgrounds on target-absent trials but not target-present trials. These results confirm that other-age faces are more quickly categorized and searched through and that categorization and search processes are related; however, this correlational approach could not confirm or reject the contribution of background face processing to the own-age detection advantage. © 2018 The British Psychological Society.
Directional templates for real-time detection of coronal axis rotated faces
NASA Astrophysics Data System (ADS)
Perez, Claudio A.; Estevez, Pablo A.; Garate, Patricio
2004-10-01
Real-time face and iris detection in video images has gained renewed attention because of multiple possible applications in studying eye function, drowsiness detection, virtual keyboard interfaces, face recognition, video processing, and multimedia retrieval. In this paper, a study is presented on using directional templates for the detection of faces rotated about the coronal axis. The templates are built by extracting directional image information from the regions of the eyes, nose, and mouth. The face position is determined by computing a line integral of the template over the face directional image; the line integral reaches a maximum when the template coincides with the face position. An improvement in localization selectivity is shown through the increased value of the line integral computed with the directional template. In addition, improvements in the line integral value across face sizes and face rotation angles were also found. Based on these results, the new templates should improve selectivity and hence make it possible to restrict computation to fewer templates and to narrow the region of search during face and eye tracking. The proposed method runs in real time, is completely non-invasive, and was applied without background limitations under normal indoor illumination.
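The scoring step can be sketched as summing, over a template's sample points, the agreement between the image's local gradient orientation and the orientation the template expects there. The cosine similarity and the toy data are assumptions for illustration, not the paper's exact integral:

```python
import numpy as np

def template_line_integral(direction_img, template_pts, template_dirs):
    """Average orientation agreement between a directional image (gradient
    orientation in radians per pixel) and a template's expected orientations
    at its sample points, evaluated at one candidate face position."""
    score = 0.0
    for (y, x), expected in zip(template_pts, template_dirs):
        score += np.cos(direction_img[y, x] - expected)  # 1 when aligned, -1 when opposed
    return score / len(template_pts)

# Toy directional image: every pixel's gradient points at 0 rad.
direction_img = np.zeros((10, 10))
pts = [(2, 3), (5, 5), (7, 1)]
```

Sliding the template over candidate positions and keeping the maximum of this score is what localizes the face in the paper's scheme.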
Zachariou, Valentinos; Nikas, Christine V; Safiullah, Zaid N; Gotts, Stephen J; Ungerleider, Leslie G
2017-08-01
Human face recognition is often attributed to configural processing; namely, processing the spatial relationships among the features of a face. If configural processing depends on fine-grained spatial information, do visuospatial mechanisms within the dorsal visual pathway contribute to this process? We explored this question in human adults using functional magnetic resonance imaging and transcranial magnetic stimulation (TMS) in a same-different face detection task. Within localized, spatial-processing regions of the posterior parietal cortex, configural face differences led to significantly stronger activation compared to featural face differences, and the magnitude of this activation correlated with behavioral performance. In addition, detection of configural relative to featural face differences led to significantly stronger functional connectivity between the right FFA and the spatial processing regions of the dorsal stream, whereas detection of featural relative to configural face differences led to stronger functional connectivity between the right FFA and left FFA. Critically, TMS centered on these parietal regions impaired performance on configural but not featural face difference detections. We conclude that spatial mechanisms within the dorsal visual pathway contribute to the configural processing of facial features and, more broadly, that the dorsal stream may contribute to the veridical perception of faces. Published by Oxford University Press 2016.
Dissociation of face-selective cortical responses by attention.
Furey, Maura L; Tanskanen, Topi; Beauchamp, Michael S; Avikainen, Sari; Uutela, Kimmo; Hari, Riitta; Haxby, James V
2006-01-24
We studied attentional modulation of cortical processing of faces and houses with functional MRI and magnetoencephalography (MEG). MEG detected an early, transient face-selective response. Directing attention to houses in "double-exposure" pictures of superimposed faces and houses strongly suppressed the characteristic, face-selective functional MRI response in the fusiform gyrus. By contrast, attention had no effect on the M170, the early, face-selective response detected with MEG. Late (>190 ms) category-related MEG responses elicited by faces and houses, however, were strongly modulated by attention. These results indicate that hemodynamic and electrophysiological measures of face-selective cortical processing complement each other. The hemodynamic signals reflect primarily late responses that can be modulated by feedback connections. By contrast, the early, face-specific M170 that was not modulated by attention likely reflects a rapid, feed-forward phase of face-selective processing.
Lo, L Y; Cheng, M Y
2017-06-01
Detection of angry and happy faces is generally found to be easier and faster than detection of faces expressing other emotions. This can be explained by the threatening account and the feature account. Few empirical studies have explored the interaction between these two accounts, which are seemingly, but not necessarily, mutually exclusive. The present studies hypothesised that prominent facial features are important in facilitating the detection of both angry and happy expressions, yet that detection of happy faces is facilitated more by prominent features than detection of angry faces. Results confirmed the hypotheses: participants reacted faster to emotional expressions with prominent features (Study 1), and detection of happy faces was facilitated more by the prominent feature than detection of angry faces (Study 2). The findings are compatible with evolutionary speculation that the angry expression is an alarming signal of potential threats to survival. Compared to angry faces, happy faces need more salient physical features to reach a similar level of processing efficiency. © 2015 International Union of Psychological Science.
Enhancing the performance of cooperative face detector by NFGS
NASA Astrophysics Data System (ADS)
Yesugade, Snehal; Dave, Palak; Srivastava, Srinkhala; Das, Apurba
2015-07-01
Computerized human face detection is an important task of deformable pattern recognition in today's world. Especially in cooperative authentication scenarios like ATM fraud detection, attendance recording, video tracking and video surveillance, the performance of the face detection engine in terms of accuracy, memory utilization and speed has been an active area of research for the last decade. Haar-based face detection and SIFT- and EBGM-based face recognition systems are fairly reliable in this regard, but their features are extracted from gray-level textures. When the input is a high-resolution online video with a fairly large viewing area, a Haar detector must search for faces everywhere (say, 352×250 pixels) and all the time (e.g., 30 FPS capture). In the current paper we propose to address both of the aforementioned concerns with a neuro-visually inspired method of figure-ground segregation (NFGS) [5] that produces a two-dimensional binary array from the gray face image. The NFGS identifies a reference video frame at a low sampling rate and updates it upon significant changes of environment, such as illumination. The proposed algorithm triggers the face detector only when a new entity enters the viewing area. To preserve detection accuracy, the classical face detector is enabled only in a narrowed-down region of interest (RoI) fed by the NFGS. The RoI is updated online in each frame with respect to the moving entity, which in turn improves both the FR (False Rejection) and FA (False Acceptance) rates of the face detection system.
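The trigger logic can be sketched with plain frame differencing standing in for the NFGS (an assumption made for brevity; the actual NFGS is a neuro-visually inspired figure-ground method). The expensive detector runs only when a significant change appears, and only inside the changed RoI:

```python
import numpy as np

def change_roi(reference, frame, thresh=25, min_frac=0.01):
    """Return a region of interest (r0, r1, c0, c1) when a new entity
    appears relative to the reference frame, else None.
    `thresh` and `min_frac` are illustrative tuning parameters."""
    diff = np.abs(frame.astype(int) - reference.astype(int)) > thresh
    if diff.mean() < min_frac:        # no significant change: skip the detector
        return None
    rows = np.where(diff.any(axis=1))[0]
    cols = np.where(diff.any(axis=0))[0]
    return rows[0], rows[-1] + 1, cols[0], cols[-1] + 1
```

A caller would invoke the Haar detector only on `frame[r0:r1, c0:c1]` when the function returns an RoI, instead of scanning every frame in full.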
Ales, Justin M.; Farzin, Faraz; Rossion, Bruno; Norcia, Anthony M.
2012-01-01
We introduce a sensitive method for measuring face detection thresholds rapidly, objectively, and independently of low-level visual cues. The method is based on the swept parameter steady-state visual evoked potential (ssVEP), in which a stimulus is presented at a specific temporal frequency while parametrically varying (“sweeping”) the detectability of the stimulus. Here, the visibility of a face image was increased by progressive derandomization of the phase spectra of the image in a series of equally spaced steps. Alternations between face and fully randomized images at a constant rate (3/s) elicit a robust first harmonic response at 3 Hz specific to the structure of the face. High-density EEG was recorded from 10 human adult participants, who were asked to respond with a button-press as soon as they detected a face. The majority of participants produced an evoked response at the first harmonic (3 Hz) that emerged abruptly between 30% and 35% phase-coherence of the face, which was most prominent on right occipito-temporal sites. Thresholds for face detection were estimated reliably in single participants from 15 trials, or on each of the 15 individual face trials. The ssVEP-derived thresholds correlated with the concurrently measured perceptual face detection thresholds. This first application of the sweep VEP approach to high-level vision provides a sensitive and objective method that could be used to measure and compare visual perception thresholds for various object shapes and levels of categorization in different human populations, including infants and individuals with developmental delay. PMID:23024355
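A simple way to derive a detection threshold from such a swept run is to find the coherence level at which the first-harmonic amplitude first exceeds a noise criterion. The criterion (a multiple of the noise floor) and the linear interpolation below are illustrative assumptions, not the paper's exact estimator:

```python
import numpy as np

def sweep_threshold(coherence, amplitude, noise_floor, k=2.0):
    """Estimate the face-detection threshold from a swept ssVEP run:
    the phase-coherence level at which the first-harmonic amplitude
    first exceeds k times the noise floor."""
    criterion = k * noise_floor
    above = np.asarray(amplitude) >= criterion
    if not above.any():
        return None                      # no detectable response in this sweep
    i = int(np.argmax(above))            # first step above criterion
    if i == 0:
        return float(coherence[0])
    # linear interpolation between the last sub- and first supra-criterion step
    x0, x1 = coherence[i - 1], coherence[i]
    y0, y1 = amplitude[i - 1], amplitude[i]
    return float(x0 + (criterion - y0) * (x1 - x0) / (y1 - y0))
```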
Facial detection using deep learning
NASA Astrophysics Data System (ADS)
Sharma, Manik; Anuradha, J.; Manne, H. K.; Kashyap, G. S. C.
2017-11-01
In the recent past, we have observed that Facebook has developed an uncanny ability to recognize people in photographs. Previously, we had to tag people in photos by clicking on them and typing their name; now, as soon as we upload a photo, Facebook tags everyone on its own. Facebook can recognize faces with 98% accuracy, which is nearly as good as humans can do. This technology is called face detection, a popular topic in biometrics, and we have surveillance cameras in public places for video capture as well as security purposes. The main advantages of this algorithm over others are uniqueness and approval, and identification demands both speed and accuracy. Face detection is really a series of several related problems: first, look at a picture and find all the faces in it; second, focus on each face and understand that even if a face is turned in an odd direction or in bad lighting, it is still the same person; third, select features that can be used to identify each face uniquely, such as the size of the eyes or face; finally, compare these features to the data we have to find the person's name. As a human, your brain is wired to do all of this automatically and instantly; in fact, humans are exceptionally good at recognizing faces. Computers are not capable of this kind of high-level generalization, so we must teach them each step of the process separately. The growth of face detection is largely driven by growing applications such as credit card verification, surveillance video images, and authentication for banking and security system access.
A multi-view face recognition system based on cascade face detector and improved Dlib
NASA Astrophysics Data System (ADS)
Zhou, Hongjun; Chen, Pei; Shen, Wei
2018-03-01
In this research, we present a framework for a multi-view face detection and recognition system based on a cascade face detector and improved Dlib. The method aims to solve the problems of low efficiency and low accuracy in multi-view face recognition, to build a multi-view face recognition system, and to discover a suitable monitoring scheme. For face detection, the cascade face detector extracts Haar-like features from the training samples, and these features are used to train a cascade classifier with the AdaBoost algorithm. For face recognition, we propose an improved distance model based on Dlib to improve the accuracy of multi-view face recognition. Furthermore, we applied the proposed method to face images taken from different viewing directions, including horizontal, overhead, and looking-up views, and investigated a suitable monitoring scheme. The method works well for multi-view face recognition; it was also simulated and tested, showing satisfactory experimental results.
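The boosting step that builds a strong classifier from weak Haar-feature classifiers can be sketched as a toy AdaBoost over threshold stumps on precomputed scalar feature values (without the stage-wise rejection structure of a full cascade, which is omitted here):

```python
import numpy as np

def train_adaboost_stumps(X, y, rounds=10):
    """Minimal AdaBoost: X is (n_samples, n_features) of feature values,
    y is +/-1 labels. Each round picks the weighted-error-minimizing
    threshold stump and reweights the samples it misclassifies."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    learners = []
    for _ in range(rounds):
        best = None
        for j in range(d):
            for t in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, j] - t) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, pol, pred)
        err, j, t, pol, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)     # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)            # upweight mistakes
        w /= w.sum()
        learners.append((alpha, j, t, pol))
    return learners

def predict(learners, X):
    """Sign of the alpha-weighted vote of all stumps."""
    score = np.zeros(len(X))
    for alpha, j, t, pol in learners:
        score += alpha * np.where(pol * (X[:, j] - t) >= 0, 1, -1)
    return np.sign(score)
```

In a real cascade, several such boosted classifiers are chained so that easy negatives are rejected by the early, cheap stages.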
NASA Astrophysics Data System (ADS)
Guidang, Excel Philip B.; Llanda, Christopher John R.; Palaoag, Thelma D.
2018-03-01
This study implemented a face detection technique as a strategy for controlling a multimedia instructional material. Specifically, it achieved the following objectives: 1) developed a face detection application in Python that controls an embedded mother-tongue-based instructional material using a face-recognition configuration; 2) determined the perceptions of the students using Mutt Susan's student app review rubric. The study concludes that the face detection technique is effective in controlling an electronic instructional material and can change the way students interact with an instructional material. 90% of the students perceived the application to be a great app and 10% rated it as good.
Hirai, Masahiro; Muramatsu, Yukako; Mizuno, Seiji; Kurahashi, Naoko; Kurahashi, Hirokazu; Nakamura, Miho
2017-01-01
Individuals with Williams syndrome (WS) exhibit an atypical social phenotype termed hypersociability. One theory accounting for hypersociability presumes an atypical function of the amygdala, which processes fear-related information. However, evidence is lacking regarding the mechanisms by which individuals with WS detect fearful faces. Here, we introduce a visual search paradigm to elucidate these mechanisms by evaluating search asymmetry, i.e., whether reaction times are asymmetrical when target and distractors are swapped. Eye movements can reveal subtle atypical attentional properties, whereas manual responses are unable to capture atypical attentional profiles toward faces in individuals with WS. Therefore, we measured both eye movements and manual responses of individuals with WS and typically developed children and adults in visual searches for a fearful face among neutral faces or a neutral face among fearful faces. Two task measures, reaction time and performance accuracy, were analyzed for each stimulus, as well as gaze behavior and the initial fixation onset latency. Overall, reaction times in the WS group and the mentally age-matched control group were significantly longer than those in the chronologically age-matched group. We observed a search asymmetry effect in all groups: when a neutral target facial expression was presented among fearful faces, reaction times were significantly prolonged compared with when a fearful target facial expression was displayed among neutral distractor faces. Furthermore, the first fixation onset latency of eye movements toward a target facial expression showed a tendency similar to the manual responses. Although overall responses in detecting fearful faces are slower for individuals with WS than for control groups, search asymmetry was observed. Therefore, the cognitive mechanisms underlying the detection of fearful faces seem to be typical in individuals with WS.
This finding is discussed with reference to the amygdala account explaining hypersociability in individuals with WS.
Kikuchi, Yukiko; Senju, Atsushi; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu
2009-01-01
Two experiments investigated attention of children with autism spectrum disorder (ASD) to faces and objects. In both experiments, children (7- to 15-year-olds) detected the difference between 2 visual scenes. Results in Experiment 1 revealed that typically developing children (n = 16) detected the change in faces faster than in objects, whereas children with ASD (n = 16) were equally fast in detecting changes in faces and objects. These results were replicated in Experiment 2 (n = 16 in children with ASD and 22 in typically developing children), which does not require face recognition skill. Results suggest that children with ASD lack an attentional bias toward others' faces, which could contribute to their atypical social orienting.
Adapting Local Features for Face Detection in Thermal Image.
Ma, Chao; Trung, Ngo Thanh; Uchiyama, Hideaki; Nagahara, Hajime; Shimada, Atsushi; Taniguchi, Rin-Ichiro
2017-11-27
A thermal camera captures the temperature distribution of a scene as a thermal image. In thermal images, the facial appearances of different people under different lighting conditions are similar, because facial temperature distribution is generally constant and not affected by lighting. This similarity in facial appearance is advantageous for face detection. To detect faces in thermal images, cascade classifiers with Haar-like features are generally used, but few studies have explored local features for face detection in thermal images. In this paper, we introduce two approaches relying on local features for face detection in thermal images. First, we create new feature types by extending Multi-Block LBP, considering a margin around the reference value and the generally constant distribution of facial temperature; this makes the features more robust to image noise and more effective for face detection in thermal images. Second, we propose an AdaBoost-based training method that produces cascade classifiers with multiple types of local features; because these feature types have different advantages, combining them enhances the descriptive power of the local features. We conducted a hold-out validation experiment and a field experiment. In the hold-out validation experiment, we captured a dataset from 20 participants (14 males and 6 females); for each participant, we captured 420 images with 10 variations in camera distance, 21 poses, and 2 appearances (with/without glasses). We compared the performance of cascade classifiers trained with different sets of features, and the results showed that the proposed approaches effectively improve face detection in thermal images. In the field experiment, we compared face detection performance in realistic scenes using thermal and RGB images and discussed the results.
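The first extension, a tolerance margin around the reference value, can be sketched for a 3x3-block Multi-Block LBP; the exact encoding details below are assumptions, not the paper's specification:

```python
import numpy as np

def mb_lbp_margin(patch, margin=0.0):
    """Multi-Block LBP code over a 3x3 grid of blocks, with a tolerance
    margin around the central block's mean. Neighbouring blocks whose
    means fall within +margin of the centre contribute 0, making the
    code robust to small sensor-noise fluctuations."""
    h, w = patch.shape
    bh, bw = h // 3, w // 3
    # mean of each of the 9 blocks
    means = patch[:3 * bh, :3 * bw].reshape(3, bh, 3, bw).mean(axis=(1, 3))
    centre = means[1, 1]
    # clockwise neighbour order starting at top-left
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(order):
        if means[r, c] > centre + margin:   # only clearly brighter blocks set a bit
            code |= 1 << bit
    return code
```

With `margin=0` this reduces to ordinary MB-LBP; increasing the margin suppresses bits caused by noise-level differences, which matters in low-contrast thermal imagery.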
Hole Feature on Conical Face Recognition for Turning Part Model
NASA Astrophysics Data System (ADS)
Zubair, A. F.; Abu Mansor, M. S.
2018-03-01
Computer Aided Process Planning (CAPP) is the bridge between CAD and CAM, and pre-processing of the CAD data in the CAPP system is essential. For CNC turning parts, the conical faces of the part model must be recognised in addition to cylindrical and planar faces. As the sine-cosine structure of the cone radius differs between models, face identification in automatic feature recognition of the part model needs special attention. This paper focuses on hole features on conical faces that can be detected by the CAD solid modeller ACIS via SAT files. Detection algorithms for face topology were generated and compared. The study shows different face setups for similar conical part models with different hole-type features. Three types of holes were compared, and the differences between merged and unmerged faces were studied.
Visual search for faces by race: a cross-race study.
Sun, Gang; Song, Luping; Bentin, Shlomo; Yang, Yanjie; Zhao, Lun
2013-08-30
Using a single averaged face of each race, a previous study indicated that detection of one other-race face among an own-race background was faster than vice versa (Levin, 1996, 2000). However, employing a variable mapping of face pictures, one recent report found preferential detection of own-race over other-race faces (Lipp et al., 2009). Using a well-controlled design and a heterogeneous set of real face images, in the present study we explored visual search for own- and other-race faces in Chinese and Caucasian participants. Across both groups, the search for a face of one race among other-race faces was serial and self-terminating. In Chinese participants, the search was consistently faster for other-race than own-race faces, irrespective of upright or upside-down presentation; however, this search asymmetry was not evident in Caucasian participants. These characteristics suggest that the race of a face is not a basic visual feature, and that in Chinese participants the faster search for other-race than own-race faces also reflects perceptual factors. Possible mechanisms underlying the other-race search effect are discussed. Copyright © 2013 Elsevier Ltd. All rights reserved.
A Viola-Jones based hybrid face detection framework
NASA Astrophysics Data System (ADS)
Murphy, Thomas M.; Broussard, Randy; Schultz, Robert; Rakvic, Ryan; Ngo, Hau
2013-12-01
Improvements in face detection performance would benefit many applications. The OpenCV library implements a standard solution, the Viola-Jones detector, with a statistically boosted rejection cascade of binary classifiers. Empirical evidence has shown that Viola-Jones underdetects in some instances. This research shows that a truncated cascade augmented by a neural network could recover these undetected faces. A hybrid framework is constructed, with a truncated Viola-Jones cascade followed by an artificial neural network, used to refine the face decision. Optimally, a truncation stage that captured all faces and allowed the neural network to remove the false alarms is selected. A feedforward backpropagation network with one hidden layer is trained to discriminate faces based upon the thresholding (detection) values of intermediate stages of the full rejection cascade. A clustering algorithm is used as a precursor to the neural network, to group significant overlappings. Evaluated on the CMU/VASC Image Database, comparison with an unmodified OpenCV approach shows: (1) a 37% increase in detection rates if constrained by the requirement of no increase in false alarms, (2) a 48% increase in detection rates if some additional false alarms are tolerated, and (3) an 82% reduction in false alarms with no reduction in detection rates. These results demonstrate improved face detection and could address the need for such improvement in various applications.
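The refinement stage can be sketched as a one-hidden-layer feedforward network over the truncated cascade's intermediate stage scores. The weights here are placeholders for trained parameters, not values from the paper, and the clustering precursor is omitted:

```python
import numpy as np

def mlp_forward(stage_scores, W1, b1, W2, b2):
    """One-hidden-layer forward pass over cascade stage scores.
    In the hybrid framework these weights would come from
    backpropagation training on labeled windows."""
    h = np.tanh(W1 @ stage_scores + b1)
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))   # face probability, shape (1,)

def hybrid_decision(stage_scores, passed_truncated, W1, b1, W2, b2, p=0.5):
    """Hybrid rule: the truncated cascade keeps recall high, and the
    network removes the false alarms it lets through."""
    if not passed_truncated:          # window already rejected by the cascade
        return False
    return bool(mlp_forward(stage_scores, W1, b1, W2, b2)[0] > p)
```

The truncation point trades off recall against the number of candidate windows the network must re-score.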
NASA Astrophysics Data System (ADS)
Muneyasu, Mitsuji; Odani, Shuhei; Kitaura, Yoshihiro; Namba, Hitoshi
When surveillance cameras are used, there are cases in which privacy protection must be considered. This paper proposes a new privacy protection method that automatically degrades the face region in surveillance images. The proposed method consists of ROI coding in JPEG2000 and a face detection method based on template matching. Experimental results show that the face region can be detected and hidden correctly.
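The detect-then-degrade pipeline can be sketched with normalized cross-correlation for the template matching and a crude region flattening standing in for the JPEG2000 ROI quality reduction (the codec itself is outside this sketch):

```python
import numpy as np

def match_template(img, tmpl):
    """Normalized cross-correlation search; returns the top-left corner
    of the best-matching window."""
    th, tw = tmpl.shape
    t = tmpl - tmpl.mean()
    best, pos = -np.inf, (0, 0)
    for r in range(img.shape[0] - th + 1):
        for c in range(img.shape[1] - tw + 1):
            win = img[r:r + th, c:c + tw]
            win = win - win.mean()
            denom = np.sqrt((win ** 2).sum() * (t ** 2).sum())
            score = (win * t).sum() / denom if denom > 0 else -1.0
            if score > best:
                best, pos = score, (r, c)
    return pos

def degrade_region(img, pos, size):
    """Flatten the detected face region to its mean intensity, a crude
    stand-in for ROI-based quality reduction in the codec."""
    out = img.astype(float).copy()
    r, c = pos
    h, w = size
    out[r:r + h, c:c + w] = out[r:r + h, c:c + w].mean()
    return out
```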
What makes a cell face-selective: the importance of contrast
Ohayon, Shay; Freiwald, Winrich A; Tsao, Doris Y
2012-01-01
Faces are robustly detected by computer vision algorithms that search for characteristic coarse contrast features. Here, we investigated whether face-selective cells in the primate brain exploit contrast features as well. We recorded from face-selective neurons in macaque inferotemporal cortex, while presenting a face-like collage of regions whose luminances were changed randomly. Modulating contrast combinations between regions induced activity changes ranging from no response to a response greater than that to a real face in 50% of cells. The critical stimulus factor determining response magnitude was contrast polarity, e.g., nose region brighter than left eye. Contrast polarity preferences were consistent across cells, suggesting a common computational strategy across the population, and matched features used by computer vision algorithms for face detection. Furthermore, most cells were tuned both for contrast polarity and for the geometry of facial features, suggesting cells encode information useful both for detection and recognition. PMID:22578507
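A contrast-polarity feature of the kind described here (is region A brighter than region B?) can be sketched as follows; the region boxes and the canonical face signature are illustrative, not the recorded stimuli:

```python
import numpy as np

def region_mean(img, box):
    """Mean luminance of a rectangular region (r0, r1, c0, c1)."""
    r0, r1, c0, c1 = box
    return img[r0:r1, c0:c1].mean()

def polarity_signature(img, pairs):
    """For each (region_a, region_b) pair, record whether a is brighter
    than b, i.e. the image's contrast-polarity pattern."""
    return [region_mean(img, a) > region_mean(img, b) for a, b in pairs]

def polarity_match(img, pairs, face_signature):
    """Fraction of polarity features agreeing with a canonical face
    signature; higher values mean more face-like contrast structure."""
    sig = polarity_signature(img, pairs)
    return sum(s == f for s, f in zip(sig, face_signature)) / len(pairs)
```

Detectors in the Viola-Jones family effectively threshold weighted combinations of such region-contrast features.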
Category search speeds up face-selective fMRI responses in a non-hierarchical cortical face network.
Jiang, Fang; Badler, Jeremy B; Righi, Giulia; Rossion, Bruno
2015-05-01
The human brain is extremely efficient at detecting faces in complex visual scenes, but the spatio-temporal dynamics of this remarkable ability, and how it is influenced by category-search, remain largely unknown. In the present study, human subjects were shown gradually-emerging images of faces or cars in visual scenes, while neural activity was recorded using functional magnetic resonance imaging (fMRI). Category search was manipulated by the instruction to indicate the presence of either a face or a car, in different blocks, as soon as an exemplar of the target category was detected in the visual scene. The category selectivity of most face-selective areas was enhanced when participants were instructed to report the presence of faces in gradually decreasing noise stimuli. Conversely, the same regions showed much less selectivity when participants were instructed instead to detect cars. When "face" was the target category, the fusiform face area (FFA) showed consistently earlier differentiation of face versus car stimuli than did the "occipital face area" (OFA). When "car" was the target category, only the FFA showed differentiation of face versus car stimuli. These observations provide further challenges for hierarchical models of cortical face processing and show that during gradual revealing of information, selective category-search may decrease the required amount of information, enhancing and speeding up category-selective responses in the human brain. Copyright © 2015 Elsevier Ltd. All rights reserved.
Right wing authoritarianism is associated with race bias in face detection
Bret, Amélie; Beffara, Brice; McFadyen, Jessica; Mermillod, Martial
2017-01-01
Racial discrimination can be observed in a wide range of psychological processes, including even the earliest phases of face detection. It remains unclear, however, whether racially-biased low-level face processing is influenced by ideologies, such as right wing authoritarianism or social dominance orientation. In the current study, we hypothesized that socio-political ideologies such as these can substantially predict perceptual racial bias during early perception. To test this hypothesis, 67 participants detected faces within arrays of neutral objects. The faces were either Caucasian (in-group) or North African (out-group) and had either a neutral or an angry expression. Results showed that participants with higher self-reported right-wing authoritarianism were more likely to show slower response times for detecting out-group vs. in-group faces. We interpreted our results according to the Dual Process Motivational Model and suggest that socio-political ideologies may foster early racial bias via attentional disengagement. PMID:28692705
Observed touch on a non-human face is not remapped onto the human observer's own face.
Beck, Brianna; Bertini, Caterina; Scarpazza, Cristina; Làdavas, Elisabetta
2013-01-01
Visual remapping of touch (VRT) is a phenomenon in which seeing a human face being touched enhances detection of tactile stimuli on the observer's own face, especially when the observed face expresses fear. This study tested whether VRT would occur when seeing touch on monkey faces and whether it would be similarly modulated by facial expressions. Human participants detected near-threshold tactile stimulation on their own cheeks while watching fearful, happy, and neutral human or monkey faces being concurrently touched or merely approached by fingers. We predicted minimal VRT for neutral and happy monkey faces but greater VRT for fearful monkey faces. The results with human faces replicated previous findings, demonstrating stronger VRT for fearful expressions than for happy or neutral expressions. However, there was no VRT (i.e. no difference between accuracy in touch and no-touch trials) for any of the monkey faces, regardless of facial expression, suggesting that touch on a non-human face is not remapped onto the somatosensory system of the human observer.
Sad Facial Expressions Increase Choice Blindness
Wang, Yajie; Zhao, Song; Zhang, Zhijie; Feng, Wenfeng
2018-01-01
Previous studies have discovered a fascinating phenomenon known as choice blindness—individuals fail to detect mismatches between the face they choose and the face replaced by the experimenter. Although previous studies have reported a couple of factors that can modulate the magnitude of choice blindness, the potential effect of facial expression on choice blindness has not yet been explored. Using faces with sad and neutral expressions (Experiment 1) and faces with happy and neutral expressions (Experiment 2) in the classic choice blindness paradigm, the present study investigated the effects of facial expressions on choice blindness. The results showed that the detection rate was significantly lower on sad faces than neutral faces, whereas no significant difference was observed between happy faces and neutral faces. The exploratory analysis of verbal reports found that participants who reported less facial features for sad (as compared to neutral) expressions also tended to show a lower detection rate of sad (as compared to neutral) faces. These findings indicated that sad facial expressions increased choice blindness, which might have resulted from inhibition of further processing of the detailed facial features by the less attractive sad expressions (as compared to neutral expressions). PMID:29358926
Facial recognition in education system
NASA Astrophysics Data System (ADS)
Krithika, L. B.; Venkatesh, K.; Rathore, S.; Kumar, M. Harish
2017-11-01
Human beings make extensive use of emotions to convey messages and to resolve them. Emotion detection and face recognition can provide an interface between individuals and technologies, and face recognition is among the most successful applications of recognition analysis. Many different techniques have been used to recognize facial expressions and to detect emotion under varying poses. In this paper, we present an efficient method for recognizing facial expressions by tracking face points and distances. The method automatically identifies the observed face movements and facial expression in an image, capturing different aspects of emotion and facial expression.
Detection of Emotional Faces: Salient Physical Features Guide Effective Visual Search
ERIC Educational Resources Information Center
Calvo, Manuel G.; Nummenmaa, Lauri
2008-01-01
In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent,…
NASA Astrophysics Data System (ADS)
Karam, Lina J.; Zhu, Tong
2015-03-01
The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.
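Two of the distortion types used to build the database (additive white noise and contrast change) can be sketched as follows; the parameter levels are illustrative, not the database's actual settings:

```python
import numpy as np

def add_white_noise(img, sigma, seed=0):
    """Additive white Gaussian noise at a controlled level, clipped to
    the valid 8-bit intensity range."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(float) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255)

def contrast_change(img, factor):
    """Scale contrast about the image mean: factor < 1 reduces contrast,
    factor > 1 increases it."""
    mean = img.mean()
    return np.clip(mean + factor * (img.astype(float) - mean), 0, 255)
```

Sweeping `sigma` or `factor` over a range of levels yields the graded impairments needed for both subjective quality experiments and automated robustness evaluation.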
A special purpose knowledge-based face localization method
NASA Astrophysics Data System (ADS)
Hassanat, Ahmad; Jassim, Sabah
2008-04-01
This paper is concerned with face localization for a visual speech recognition (VSR) system. Face detection and localization have received a great deal of attention in the last few years, because localization is an essential pre-processing step in many techniques that deal with faces (e.g., age, face, gender, race and visual speech recognition). We present an efficient method for localizing human faces in video images captured on constrained mobile devices under wide variations in lighting conditions. We use a multiphase method that may include all or some of the following steps: image pre-processing, followed by a special-purpose edge detection, and then an image refinement step. The output image is passed through a discrete wavelet decomposition, and the computed LL sub-band at a certain level is transformed into a binary image that is scanned with a special template to select a number of candidate locations. Finally, we fuse the scores from the wavelet step with scores determined by color information for each candidate location, and employ a form of fuzzy logic to distinguish face from non-face locations. We present results from a large number of experiments to demonstrate that the proposed face localization method is efficient and achieves a level of accuracy that outperforms existing general-purpose face detection methods.
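The wavelet phase of the pipeline, computing the LL sub-band and binarizing it before the template scan, can be sketched as follows (the mean-based threshold is an assumption; the paper fuses wavelet and colour scores afterwards):

```python
import numpy as np

def haar_ll(img, levels=1):
    """LL sub-band of a Haar wavelet decomposition: average over 2x2
    blocks, repeated `levels` times."""
    out = img.astype(float)
    for _ in range(levels):
        h, w = out.shape
        out = out[:h - h % 2, :w - w % 2]       # trim odd edges
        out = (out[0::2, 0::2] + out[0::2, 1::2]
               + out[1::2, 0::2] + out[1::2, 1::2]) / 4.0
    return out

def binarize(ll, thresh=None):
    """Binary map fed to the template scan; defaults to the sub-band mean."""
    if thresh is None:
        thresh = ll.mean()
    return (ll > thresh).astype(np.uint8)
```

Scanning the much smaller binary LL map, rather than the full-resolution frame, is what keeps the method feasible on constrained mobile devices.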
Sensitivity and Specificity of OCT Angiography to Detect Choroidal Neovascularization.
Faridi, Ambar; Jia, Yali; Gao, Simon S; Huang, David; Bhavsar, Kavita V; Wilson, David J; Sill, Andrew; Flaxel, Christina J; Hwang, Thomas S; Lauer, Andreas K; Bailey, Steven T
2017-01-01
To determine the sensitivity and specificity of optical coherence tomography angiography (OCTA) in the detection of choroidal neovascularization (CNV) in age-related macular degeneration (AMD). Prospective case series. A prospective series of seventy-two eyes was studied, including eyes with treatment-naive CNV due to AMD, non-neovascular AMD, and normal controls. All eyes underwent OCTA with a spectral domain (SD) OCT (Optovue, Inc.). The 3D angiogram was segmented into separate en face views: the inner retinal angiogram, outer retinal angiogram, and choriocapillaris angiogram. Detection of abnormal flow in the outer retina served as candidate CNV with OCTA. Masked graders reviewed structural OCT alone, en face OCTA alone, and en face OCTA combined with cross-sectional OCTA for the presence of CNV. The sensitivity and specificity of CNV detection, compared with the gold standard of fluorescein angiography (FA) with OCT, were determined for structural SD-OCT alone, en face OCTA alone, and en face OCTA combined with cross-sectional OCTA. Of 32 eyes with CNV, both graders identified 26 true positives with en face OCTA alone, giving a sensitivity of 81.3%. Four of the 6 false negatives had large subretinal hemorrhage (SRH), and sensitivity improved to 94% for both graders when eyes with SRH were excluded. Adding cross-sectional OCTA to en face OCTA improved the sensitivity to 100% for both graders. Structural OCT alone also had a sensitivity of 100%. The specificity of en face OCTA alone was 92.5% for grader A and 97.5% for grader B. The specificity of structural OCT alone was 97.5% for grader A and 85% for grader B. Cross-sectional OCTA combined with en face OCTA had a specificity of 97.5% for grader A and 100% for grader B. The sensitivity and specificity of CNV detection with en face OCTA combined with cross-sectional OCTA approach those of the gold standard of FA with OCT, and are better than those of en face OCTA alone.
Structural OCT alone has excellent sensitivity for CNV detection. False positives from structural OCT can be mitigated with the addition of flow information with OCTA.
Dissociation between recognition and detection advantage for facial expressions: a meta-analysis.
Nummenmaa, Lauri; Calvo, Manuel G
2015-04-01
Happy facial expressions are recognized faster and more accurately than other expressions in categorization tasks, whereas detection in visual search tasks is widely believed to be faster for angry than happy faces. We used meta-analytic techniques for resolving this categorization versus detection advantage discrepancy for positive versus negative facial expressions. Effect sizes were computed on the basis of the r statistic for a total of 34 recognition studies with 3,561 participants and 37 visual search studies with 2,455 participants, yielding a total of 41 effect sizes for recognition accuracy, 25 for recognition speed, and 125 for visual search speed. Random effects meta-analysis was conducted to estimate effect sizes at population level. For recognition tasks, an advantage in recognition accuracy and speed for happy expressions was found for all stimulus types. In contrast, for visual search tasks, moderator analysis revealed that a happy face detection advantage was restricted to photographic faces, whereas a clear angry face advantage was found for schematic and "smiley" faces. A robust detection advantage for nonhappy faces was observed even when stimulus emotionality was distorted by inversion or rearrangement of the facial features, suggesting that visual features primarily drive the search. We conclude that the recognition advantage for happy faces is a genuine phenomenon related to processing of facial expression category and affective valence. In contrast, detection advantages toward either happy (photographic stimuli) or nonhappy (schematic) faces are contingent on visual stimulus features rather than facial expression, and may not involve categorical or affective processing. (c) 2015 APA, all rights reserved.
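Population-level pooling of r-based effect sizes can be sketched with a DerSimonian-Laird random-effects model on Fisher-z-transformed correlations. This is a generic sketch of the technique, not the authors' exact analysis code:

```python
import numpy as np

def random_effects_mean_r(rs, ns):
    """DerSimonian-Laird random-effects pooling of correlations via Fisher z."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)                   # Fisher z transform of each r
    v = 1.0 / (ns - 3.0)                 # within-study variance of z
    w = 1.0 / v
    z_fixed = np.sum(w * z) / np.sum(w)  # fixed-effect estimate
    q = np.sum(w * (z - z_fixed) ** 2)   # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(rs) - 1)) / c)   # between-study variance
    w_star = 1.0 / (v + tau2)            # random-effects weights
    return np.tanh(np.sum(w_star * z) / np.sum(w_star))  # back to r scale
```

Moderator analyses then amount to running this pooling separately per stimulus type (photographic vs. schematic faces).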
Isomura, Tomoko; Ogawa, Shino; Yamada, Satoko; Shibasaki, Masahiro; Masataka, Nobuo
2014-01-01
Previous studies have demonstrated that angry faces capture humans' attention more rapidly than emotionally positive faces. This phenomenon is referred to as the anger superiority effect (ASE). Despite atypical emotional processing, adults and children with Autism Spectrum Disorders (ASD) have been reported to show ASE as well as typically developed (TD) individuals. So far, however, few studies have clarified whether or not the mechanisms underlying ASE are the same for both TD and ASD individuals. Here, we tested how TD and ASD children process schematic emotional faces during detection by employing a recognition task in combination with a face-in-the-crowd task. Results of the face-in-the-crowd task revealed the prevalence of ASE both in TD and ASD children. However, the results of the recognition task revealed group differences: In TD children, detection of angry faces required more configural face processing and disrupted the processing of local features. In ASD children, on the other hand, it required more feature-based processing rather than configural processing. Despite the small sample sizes, these findings provide preliminary evidence that children with ASD, in contrast to TD children, show quick detection of angry faces by extracting local features in faces. PMID:24904477
The biometric-based module of smart grid system
NASA Astrophysics Data System (ADS)
Engel, E.; Kovalev, I. V.; Ermoshkina, A.
2015-10-01
Within the Smart Grid concept, a flexible biometric-based module based on Principal Component Analysis (PCA) and a selective neural network is developed. To form the selective neural network, the biometric-based module uses a method with three main stages: preliminary processing of the image, face localization, and face recognition. Experiments on the Yale face database show that (i) the selective neural network exhibits promising classification capability for face detection and recognition problems; and (ii) the proposed biometric-based module achieves near real-time face detection and recognition speed and competitive performance compared with some existing subspace-based methods.
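The PCA stage of such a module can be sketched with an SVD-based eigenface decomposition. The selective neural network itself is omitted here, and the data are random stand-ins for the Yale face images:

```python
import numpy as np

def pca_subspace(faces, k):
    """Top-k eigenfaces of an (n_samples, n_pixels) matrix of flattened faces."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Rows of vt are principal axes (eigenfaces), ordered by explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(face, mean, components):
    """Low-dimensional PCA feature vector fed to the downstream classifier."""
    return components @ (face - mean)

rng = np.random.default_rng(0)
faces = rng.normal(size=(10, 64))        # stand-in for flattened face images
mean, comps = pca_subspace(faces, k=3)
features = project(faces[0], mean, comps)
```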
Interactive display system having a matrix optical detector
Veligdan, James T.; DeSanto, Leonard
2007-01-23
A display system includes a waveguide optical panel having an inlet face and an opposite outlet face. An image beam is projected across the inlet face laterally and transversely for display on the outlet face. An optical detector including a matrix of detector elements is optically aligned with the inlet face for detecting a corresponding lateral and transverse position of an inbound light spot on the outlet face.
An ERP study of famous face incongruity detection in middle age.
Chaby, L; Jemel, B; George, N; Renault, B; Fiori, N
2001-04-01
Age-related changes in famous face incongruity detection were examined in middle-aged (mean = 50.6) and young (mean = 24.8) subjects. Behavioral and ERP responses were recorded while subjects, after a presentation of a "prime face" (a famous person with the eyes masked), had to decide whether the following "test face" was completed with its authentic eyes (congruent) or with other eyes (incongruent). The principal effects of advancing age were (1) behavioral difficulties in discriminating between incongruent and congruent faces; (2) a reduced N400 effect due to N400 enhancement for both congruent and incongruent faces; (3) a latency increase of both N400 and P600 components. ERPs to primes (face encoding) were not affected by aging. These results are interpreted in terms of early signs of aging. Copyright 2001 Academic Press.
Face Pareidolia in the Rhesus Monkey.
Taubert, Jessica; Wardle, Susan G; Flessert, Molly; Leopold, David A; Ungerleider, Leslie G
2017-08-21
Face perception in humans and nonhuman primates is rapid and accurate [1-4]. In the human brain, a network of visual-processing regions is specialized for faces [5-7]. Although face processing is a priority of the primate visual system, face detection is not infallible. Face pareidolia is the compelling illusion of perceiving facial features on inanimate objects, such as the illusory face on the surface of the moon. Although face pareidolia is commonly experienced by humans, its presence in other species is unknown. Here we provide evidence for face pareidolia in a species known to possess a complex face-processing system [8-10]: the rhesus monkey (Macaca mulatta). In a visual preference task [11, 12], monkeys looked longer at photographs of objects that elicited face pareidolia in human observers than at photographs of similar objects that did not elicit illusory faces. Examination of eye movements revealed that monkeys fixated the illusory internal facial features in a pattern consistent with how they view photographs of faces [13]. Although the specialized response to faces observed in humans [1, 3, 5-7, 14] is often argued to be continuous across primates [4, 15], it was previously unclear whether face pareidolia arose from a uniquely human capacity. For example, pareidolia could be a product of the human aptitude for perceptual abstraction or result from frequent exposure to cartoons and illustrations that anthropomorphize inanimate objects. Instead, our results indicate that the perception of illusory facial features on inanimate objects is driven by a broadly tuned face-detection mechanism that we share with other species. Published by Elsevier Ltd.
De Winter, François-Laurent; Timmers, Dorien; de Gelder, Beatrice; Van Orshoven, Marc; Vieren, Marleen; Bouckaert, Miriam; Cypers, Gert; Caekebeke, Jo; Van de Vliet, Laura; Goffin, Karolien; Van Laere, Koen; Sunaert, Stefan; Vandenberghe, Rik; Vandenbulcke, Mathieu; Van den Stock, Jan
2016-01-01
Deficits in face processing have been described in the behavioral variant of fronto-temporal dementia (bvFTD), primarily regarding the recognition of facial expressions. Less is known about face shape and face identity processing. Here we used a hierarchical strategy targeting face shape and face identity recognition in bvFTD and matched healthy controls. Participants performed 3 psychophysical experiments targeting face shape detection (Experiment 1), unfamiliar face identity matching (Experiment 2), and familiarity categorization and famous face-name matching (Experiment 3). The results revealed group differences only in Experiment 3, with a deficit in the bvFTD group for both familiarity categorization and famous face-name matching. Voxel-based morphometry regression analyses in the bvFTD group revealed an association between grey matter volume of the left ventral anterior temporal lobe and familiarity recognition, while face-name matching correlated with grey matter volume of the bilateral ventral anterior temporal lobes. Subsequently, we quantified familiarity-specific and name-specific recognition deficits as the number of celebrities for which, respectively, only the name or only the familiarity was accurately recognized. Both indices were associated with grey matter volume of the bilateral anterior temporal cortices. These findings extend previous results by documenting the involvement of the left anterior temporal lobe (ATL) in familiarity detection and the right ATL in name recognition deficits in fronto-temporal lobar degeneration.
Enhancement of Fast Face Detection Algorithm Based on a Cascade of Decision Trees
NASA Astrophysics Data System (ADS)
Khryashchev, V. V.; Lebedev, A. A.; Priorov, A. L.
2017-05-01
A face detection algorithm based on a cascade of ensembles of decision trees (CEDT) is presented. The new approach detects faces beyond the frontal position through the use of multiple classifiers, each trained for a specific range of head rotation angles. The results show high performance for CEDT on images of standard size. The algorithm increases the area under the ROC curve by 13% compared with the standard Viola-Jones face detection algorithm. The final realization of the algorithm consists of 5 different cascades for frontal and non-frontal faces. The simulation results also show the low computational complexity of the CEDT algorithm in comparison with the standard Viola-Jones approach. This could prove important in the embedded-system and mobile-device industries because it can reduce hardware cost and extend battery life.
NASA Astrophysics Data System (ADS)
Soetedjo, Aryuanto; Yamada, Koichi
This paper describes a new color segmentation method based on a normalized RGB chromaticity diagram for face detection. Face skin is extracted from color images using a coarse skin region with fixed boundaries followed by a fine skin region with variable boundaries. Two newly developed histograms with prominent peaks for skin and non-skin colors are employed to adjust the boundaries of the skin region. The proposed approach needs no skin color model, which would depend on specific camera parameters and is usually limited to particular environmental conditions, and no sample images are required. Experimental results using color face images of various races under varying lighting conditions and complex backgrounds, obtained from four different resources on the Internet, show a high detection rate of 87%. The detection rate and computation time are comparable to those of the well-known real-time face detection method proposed by Viola and Jones [11], [12].
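Normalized RGB chromaticity and a fixed-boundary coarse skin region can be sketched as follows. The boundary values here are illustrative assumptions, not the paper's histogram-adjusted values:

```python
import numpy as np

def chromaticity(rgb):
    """Normalized r, g coordinates; brightness is factored out."""
    rgb = np.asarray(rgb, float)
    s = rgb.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0                        # avoid division by zero on black
    norm = rgb / s
    return norm[..., 0], norm[..., 1]      # r and g (b = 1 - r - g)

def coarse_skin_mask(rgb, r_lo=0.36, r_hi=0.46, g_lo=0.28, g_hi=0.35):
    """Fixed-boundary coarse skin region (bounds are illustrative only)."""
    r, g = chromaticity(rgb)
    return (r_lo <= r) & (r <= r_hi) & (g_lo <= g) & (g <= g_hi)
```

The fine-region stage would then shift these boundaries per image using the skin/non-skin histogram peaks.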
Automatic Processing of Changes in Facial Emotions in Dysphoria: A Magnetoencephalography Study.
Xu, Qianru; Ruohonen, Elisa M; Ye, Chaoxiong; Li, Xueqiao; Kreegipuu, Kairi; Stefanics, Gabor; Luo, Wenbo; Astikainen, Piia
2018-01-01
It is not known to what extent the automatic encoding and change detection of peripherally presented facial emotion is altered in dysphoria. The negative bias in automatic face processing in particular has rarely been studied. We used magnetoencephalography (MEG) to record automatic brain responses to happy and sad faces in dysphoric (Beck's Depression Inventory ≥ 13) and control participants. Stimuli were presented in a passive oddball condition, which allowed potential negative bias in dysphoria at different stages of face processing (M100, M170, and M300) and alterations of change detection (visual mismatch negativity, vMMN) to be investigated. The magnetic counterpart of the vMMN was elicited at all stages of face processing, indexing automatic deviance detection in facial emotions. The M170 amplitude was modulated by emotion, response amplitudes being larger for sad faces than happy faces. Group differences were found for the M300, and they were indexed by two different interaction effects. At the left occipital region of interest, the dysphoric group had larger amplitudes for sad than happy deviant faces, reflecting negative bias in deviance detection, which was not found in the control group. On the other hand, the dysphoric group showed no vMMN to changes in facial emotions, while the vMMN was observed in the control group at the right occipital region of interest. Our results indicate that there is a negative bias in automatic visual deviance detection, but also a general change detection deficit in dysphoria.
Brain Signals of Face Processing as Revealed by Event-Related Potentials
Olivares, Ela I.; Iglesias, Jaime; Saavedra, Cristina; Trujillo-Barreto, Nelson J.; Valdés-Sosa, Mitchell
2015-01-01
We analyze the functional significance of different event-related potentials (ERPs) as electrophysiological indices of face perception and face recognition, according to cognitive and neurofunctional models of face processing. Initially, the processing of faces seems to be supported by early extrastriate occipital cortices and revealed by modulations of the occipital P1. This early response is thought to reflect the detection of certain primary structural aspects indicating the presence grosso modo of a face within the visual field. The posterior-temporal N170 is more sensitive to the detection of faces as complex-structured stimuli and, therefore, to the presence of its distinctive organizational characteristics prior to within-category identification. In turn, the relatively late and probably more rostrally generated N250r and N400-like responses might respectively indicate processes of access and retrieval of face-related information, which is stored in long-term memory (LTM). New methods of analysis of electrophysiological and neuroanatomical data, namely, dynamic causal modeling, single-trial and time-frequency analyses, are highly recommended to advance in the knowledge of those brain mechanisms concerning face processing. PMID:26160999
Wang, Yamin; Zhou, Lu
2016-10-01
Most young Chinese people now learn about Caucasian individuals via media, especially American and European movies and television series (AEMT). The current study aimed to explore whether long-term exposure to AEMT facilitates Caucasian face perception in young Chinese watchers. Before the experiment, we created Chinese, Caucasian, and generic average faces (generic average face was created from both Chinese and Caucasian faces) and tested participants' ability to identify them. In the experiment, we asked AEMT watchers and Chinese movie and television series (CMT) watchers to complete a facial norm detection task. This task was developed recently to detect norms used in facial perception. The results indicated that AEMT watchers coded Caucasian faces relative to a Caucasian face norm better than they did to a generic face norm, whereas no such difference was found among CMT watchers. All watchers coded Chinese faces by referencing a Chinese norm better than they did relative to a generic norm. The results suggested that long-term exposure to AEMT has the same effect as daily other-race face contact in shaping facial perception. © The Author(s) 2016.
A Comparative Survey of Methods for Remote Heart Rate Detection From Frontal Face Videos
Wang, Chen; Pun, Thierry; Chanel, Guillaume
2018-01-01
Remotely measuring physiological activity can provide substantial benefits for both medical and affective computing applications. Recent research has proposed different methodologies for the unobtrusive detection of heart rate (HR) using human face recordings. These methods are based on subtle color changes or motions of the face due to cardiovascular activity, which are invisible to human eyes but can be captured by digital cameras. Several approaches have been proposed, based on signal processing and machine learning. However, these methods have been evaluated on different datasets, so there is no consensus on their relative performance. In this article, we describe and evaluate several methods defined in the literature, from 2008 to the present, for the remote detection of HR from human face recordings. The general HR processing pipeline is divided into three stages: face video processing, face blood volume pulse (BVP) signal extraction, and HR computation. Approaches presented in the paper are classified and grouped according to each stage. At each stage, algorithms are analyzed and compared based on their performance using the public MAHNOB-HCI database, so the results reported here are limited to that dataset. The results show that the extracted face skin area contains more BVP information, and that blind source separation and peak detection methods are more robust to head motions when estimating HR. PMID:29765940
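The final HR-computation stage can be sketched as a spectral-peak estimate on the extracted BVP signal. This is a generic sketch; the 0.7-4.0 Hz cardiac band is a common assumption, not necessarily what each surveyed method uses:

```python
import numpy as np

def estimate_hr(bvp, fs, lo=0.7, hi=4.0):
    """Heart rate (bpm) from a BVP trace: dominant spectral peak in the
    plausible cardiac band (lo-hi Hz, i.e. 42-240 bpm)."""
    bvp = np.asarray(bvp, float) - np.mean(bvp)       # remove DC offset
    freqs = np.fft.rfftfreq(len(bvp), d=1.0 / fs)
    power = np.abs(np.fft.rfft(bvp)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic 1.2 Hz pulse (72 bpm) sampled at 30 fps for 10 s.
t = np.arange(0, 10, 1.0 / 30)
hr = estimate_hr(np.sin(2 * np.pi * 1.2 * t), fs=30)
```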
Kume, Yuko; Maekawa, Toshihiko; Urakawa, Tomokazu; Hironaga, Naruhito; Ogata, Katsuya; Shigyo, Maki; Tobimatsu, Shozo
2016-08-01
When and where the awareness of faces is consciously initiated is unclear. We used magnetoencephalography to probe the brain responses associated with face awareness under intermittent pseudo-rivalry (PR) and binocular rivalry (BR) conditions. The stimuli comprised three pictures: a human face, a monkey face and a house. In the PR condition, we detected the M130 component, which has been minimally characterized in previous research. We obtained a clear recording of the M170 component in the fusiform face area (FFA), and found that this component had an earlier response time to faces compared with other objects. The M170 occurred predominantly in the right hemisphere in both conditions. In the BR condition, the amplitude of the M130 significantly increased in the right hemisphere irrespective of the physical characteristics of the visual stimuli. Conversely, we did not detect the M170 when the face image was suppressed in the BR condition, although this component was clearly present when awareness for the face was initiated. We also found a significant difference in the latency of the M170 (human
Van Giang, Nguyen; Chiu, Hsiao-Yean; Thai, Duong Hong; Kuo, Shu-Yu; Tsai, Pei-Shan
2015-10-01
Pain is common in patients after orthopedic surgery. The 11-face Faces Pain Scale has not been validated for use in adult patients with postoperative pain. To assess the validity of the 11-face Faces Pain Scale and its ability to detect responses to pain medications, and to determine whether the sensitivity of the 11-face Faces Pain Scale for detecting changes in pain intensity over time is associated with gender differences in adult postorthopedic surgery patients. The 11-face Faces Pain Scale was translated into Vietnamese using forward and back translation. Postoperative pain was assessed using an 11-point numerical rating scale and the 11-face Faces Pain Scale on the day of surgery, and before (Time 1) and every 30 minutes after (Times 2-5) the patients had taken pain medications on the first postoperative day. The 11-face Faces Pain Scale highly correlated with the numerical rating scale (r = 0.78, p < .001). When the scores from each follow-up test (Times 2-5) were compared with those from the baseline test (Time 1), the effect sizes were -0.70, -1.05, -1.20, and -1.31, and the standardized response means were -1.17, -1.59, -1.66, and -1.82, respectively. The mean change in pain intensity, but not gender-time interaction effect, over the five time points was significant (F = 182.03, p < .001). Our results support that the 11-face Faces Pain Scale is appropriate for measuring acute postoperative pain in adults. Copyright © 2015 American Society for Pain Management Nursing. Published by Elsevier Inc. All rights reserved.
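The effect size and standardized response mean reported above differ only in their denominator: the former divides the mean change by the baseline SD, the latter by the SD of the change scores. A minimal sketch, with made-up pain scores rather than the study's data:

```python
import numpy as np

def effect_size(baseline, follow_up):
    """Mean change divided by the SD of baseline scores."""
    change = np.asarray(follow_up, float) - np.asarray(baseline, float)
    return change.mean() / np.std(baseline, ddof=1)

def standardized_response_mean(baseline, follow_up):
    """SRM: mean change divided by the SD of the change scores."""
    change = np.asarray(follow_up, float) - np.asarray(baseline, float)
    return change.mean() / np.std(change, ddof=1)

# Hypothetical pain ratings before and after medication (illustrative only).
before = [8, 7, 9, 8]
after = [5, 5, 6, 5]
```

Negative values indicate pain reduction, matching the sign convention of the reported -0.70 to -1.82 range.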
From face processing to face recognition: Comparing three different processing levels.
Besson, G; Barragan-Jason, G; Thorpe, S J; Fabre-Thorpe, M; Puma, S; Ceccaldi, M; Barbeau, E J
2017-01-01
Verifying that a face is from a target person (e.g. finding someone in the crowd) is a critical ability of the human face processing system. Yet how fast this can be performed is unknown. The 'entry-level shift due to expertise' hypothesis suggests that - since humans are face experts - processing faces should be as fast - or even faster - at the individual than at superordinate levels. In contrast, the 'superordinate advantage' hypothesis suggests that faces are processed from coarse to fine, so that the opposite pattern should be observed. To clarify this debate, three different face processing levels were compared: (1) a superordinate face categorization level (i.e. detecting human faces among animal faces), (2) a face familiarity level (i.e. recognizing famous faces among unfamiliar ones) and (3) verifying that a face is from a target person, our condition of interest. The minimal speed at which faces can be categorized (∼260ms) or recognized as familiar (∼360ms) has largely been documented in previous studies, and thus provides boundaries to compare our condition of interest to. Twenty-seven participants were included. The recent Speed and Accuracy Boosting procedure paradigm (SAB) was used since it constrains participants to use their fastest strategy. Stimuli were presented either upright or inverted. Results revealed that verifying that a face is from a target person (minimal RT at ∼260ms) was remarkably fast but longer than the face categorization level (∼240ms) and was more sensitive to face inversion. In contrast, it was much faster than recognizing a face as familiar (∼380ms), a level severely affected by face inversion. Face recognition corresponding to finding a specific person in a crowd thus appears achievable in only a quarter of a second. 
In favor of the 'superordinate advantage' hypothesis, or coarse-to-fine account of the face visual hierarchy, these results suggest a graded engagement of the face processing system across processing levels, as reflected by the face inversion effects. Furthermore, they underline how verifying that a face is from a target person and detecting a face as familiar - both often referred to as "Face Recognition" - in fact differ. Copyright © 2016 Elsevier B.V. All rights reserved.
Gender classification system in uncontrolled environments
NASA Astrophysics Data System (ADS)
Zeng, Pingping; Zhang, Yu-Jin; Duan, Fei
2011-01-01
Most face analysis systems available today operate mainly on restricted image databases in terms of size, age, and illumination, and it is frequently assumed that all images are frontal and unconcealed. In practice, in non-guided real-time surveillance, the face pictures taken may be partially covered and show varying degrees of head rotation. In this paper, a system intended for real-time surveillance with an un-calibrated camera and non-guided photography is described. It consists of five parts: face detection, non-face filtering, best-angle face selection, texture normalization, and gender classification. Emphasis is placed on the non-face filtering and best-angle face selection parts, as well as texture normalization. Best-angle faces are identified by PCA reconstruction, which amounts to an implicit face alignment and yields a large increase in gender-classification accuracy. A dynamic skin model and a masked PCA reconstruction algorithm are applied to filter out faces detected in error. To capture both facial-texture and shape-outline features, a hybrid feature combining Gabor wavelets and PHoG (pyramid histogram of gradients) is proposed to balance inner texture against outer contour. A comparative study of the effects of different non-face filtering and texture masking methods on gender classification by SVM is reported, through experiments on a set of UT (a company name) face images, a large number of Internet images, and the CAS (Chinese Academy of Sciences) face database. Encouraging results are obtained.
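The PCA-reconstruction idea behind non-face filtering rests on reconstruction error: a window far from the learned face subspace reconstructs poorly and can be rejected. A minimal, unmasked sketch (the paper's masking and threshold selection are omitted, and the subspace here is a toy example):

```python
import numpy as np

def reconstruction_error(x, mean, components):
    """Distance between a window and its PCA reconstruction; a large error
    suggests the detected window is not actually a face."""
    coeffs = components @ (x - mean)             # project onto face subspace
    recon = mean + components.T @ coeffs         # reconstruct from subspace
    return float(np.linalg.norm(x - recon))

# Toy subspace spanned by the first two coordinate axes of a 4-pixel "image".
mean = np.zeros(4)
components = np.eye(4)[:2]
err = reconstruction_error(np.array([1.0, 2.0, 3.0, 4.0]), mean, components)
```

Thresholding `err` then separates face-like windows from false detections.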
Detecting 'infant-directedness' in face and voice.
Kim, Hojin I; Johnson, Scott P
2014-07-01
Five- and 3-month-old infants' perception of infant-directed (ID) faces and the role of speech in perceiving faces were examined. Infants' eye movements were recorded as they viewed a series of two side-by-side talking faces, one infant-directed and one adult-directed (AD), while listening to ID speech, AD speech, or in silence. Infants showed consistently greater dwell time on ID faces vs. AD faces, and this ID face preference was consistent across all three sound conditions. ID speech resulted in higher looking overall, but it did not increase looking at the ID face per se. Together, these findings demonstrate that infants' preferences for ID speech extend to ID faces. © 2014 John Wiley & Sons Ltd.
ERIC Educational Resources Information Center
Bahrick, Lorraine E.; Lickliter, Robert; Castellanos, Irina
2013-01-01
Although research has demonstrated impressive face perception skills of young infants, little attention has focused on conditions that enhance versus impair infant face perception. The present studies tested the prediction, generated from the intersensory redundancy hypothesis (IRH), that face discrimination, which relies on detection of visual…
ERIC Educational Resources Information Center
Bahrick, Lorraine E.; Krogh-Jespersen, Sheila; Argumosa, Melissa A.; Lopez, Hassel
2014-01-01
Although infants and children show impressive face-processing skills, little research has focused on the conditions that facilitate versus impair face perception. According to the intersensory redundancy hypothesis (IRH), face discrimination, which relies on detection of visual featural information, should be impaired in the context of…
Atypical face shape and genomic structural variants in epilepsy
Chinthapalli, Krishna; Bartolini, Emanuele; Novy, Jan; Suttie, Michael; Marini, Carla; Falchi, Melania; Fox, Zoe; Clayton, Lisa M. S.; Sander, Josemir W.; Guerrini, Renzo; Depondt, Chantal; Hennekam, Raoul; Hammond, Peter
2012-01-01
Many pathogenic structural variants of the human genome are known to cause facial dysmorphism. During the past decade, pathogenic structural variants have also been found to be an important class of genetic risk factor for epilepsy. In other fields, face shape has been assessed objectively using 3D stereophotogrammetry and dense surface models. We hypothesized that computer-based analysis of 3D face images would detect subtle facial abnormality in people with epilepsy who carry pathogenic structural variants as determined by chromosome microarray. In 118 children and adults attending three European epilepsy clinics, we used an objective measure called Face Shape Difference to show that those with pathogenic structural variants have a significantly more atypical face shape than those without such variants. This is true when analysing the whole face, or the periorbital region or the perinasal region alone. We then tested the predictive accuracy of our measure in a second group of 63 patients. Using a minimum threshold to detect face shape abnormalities with pathogenic structural variants, we found high sensitivity (4/5, 80% for whole face; 3/5, 60% for periorbital and perinasal regions) and specificity (45/58, 78% for whole face and perinasal regions; 40/58, 69% for periorbital region). We show that the results do not seem to be affected by facial injury, facial expression, intellectual disability, drug history or demographic differences. Finally, we use bioinformatics tools to explore relationships between facial shape and gene expression within the developing forebrain. Stereophotogrammetry and dense surface models are powerful, objective, non-contact methods of detecting relevant face shape abnormalities. We demonstrate that they are useful in identifying atypical face shape in adults or children with structural variants, and they may give insights into the molecular genetics of facial development. PMID:22975390
The feasibility test of state-of-the-art face detection algorithms for vehicle occupant detection
NASA Astrophysics Data System (ADS)
Makrushin, Andrey; Dittmann, Jana; Vielhauer, Claus; Langnickel, Mirko; Kraetzer, Christian
2010-01-01
Vehicle seat occupancy detection systems are designed to prevent the deployment of airbags at unoccupied seats, thus avoiding the considerable cost of replacing deployed airbags. Occupancy detection can also improve passenger comfort, e.g. by activating air-conditioning systems. The most promising development perspectives are seen in optical sensing systems, which have become cheaper and smaller in recent years. The most plausible way to check seat occupancy is the detection of the presence and location of heads, or more precisely, faces. This paper compares the detection performance of the three most commonly used and widely available face detection algorithms: Viola-Jones, Kienzle et al. and Nilsson et al. The main objective of this work is to identify whether one of these systems is suitable for use in a vehicle environment with variable and mostly non-uniform illumination conditions, and whether any one face detection system is sufficient for seat occupancy detection. The evaluation of detection performance is based on a large database comprising 53,928 video frames containing proprietary data collected from 39 persons of both sexes, of different ages and body heights, as well as different objects such as bags and rearward/forward-facing child restraint systems.
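Detectors in the Viola-Jones family rely on integral images (summed-area tables) so that any rectangular feature sum costs four array lookups regardless of rectangle size. A minimal sketch of that core data structure:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y+1, :x+1]."""
    return np.cumsum(np.cumsum(np.asarray(img, float), axis=0), axis=1)

def box_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom+1, left:right+1] in O(1) via the table."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total
```

Haar-like features are then differences of such box sums, which is what keeps per-window evaluation cheap enough for in-vehicle use.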
Interactive display system having a scaled virtual target zone
Veligdan, James T.; DeSanto, Leonard
2006-06-13
A display system includes a waveguide optical panel having an inlet face and an opposite outlet face. A projector and imaging device cooperate with the panel for projecting a video image thereon. An optical detector bridges at least a portion of the waveguides for detecting a location on the outlet face within a target zone of an inbound light spot. A controller is operatively coupled to the imaging device and detector for displaying a cursor on the outlet face corresponding with the detected location of the spot within the target zone.
Friends with Faces: How Social Networks Can Enhance Face Recognition and Vice Versa
NASA Astrophysics Data System (ADS)
Mavridis, Nikolaos; Kazmi, Wajahat; Toulis, Panos
The "friendship" relation, a social relation among individuals, is one of the primary relations modeled in some of the world's largest online social networking sites, such as Facebook. On the other hand, the "co-occurrence" relation, a relation among faces appearing in pictures, is easily detectable using modern face detection techniques. These two relations, though belonging to different realms (social vs. visual-sensory), are strongly correlated: faces that co-occur in photos often belong to individuals who are friends. We use real-world data gathered from Facebook as part of the "FaceBots" project, which built the world's first physical face-recognizing and conversing robot that can utilize and publish information on Facebook. We present here methods and results for exploiting this correlation in both directions: algorithms that use knowledge of the social context for faster and better face recognition, as well as algorithms that estimate the friendship network of a number of individuals from photos containing their faces. The results are quite encouraging: in the primary example, we demonstrate a doubling of recognition accuracy together with a sixfold improvement in speed. We discuss various improvements, interesting statistics, and an empirical investigation leading to predictions of scalability to much bigger data sets.
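One direction of this correlation can be sketched as a social-context prior: candidate identities whose friends are already identified in the same photo get a score boost before the final decision. The names, the additive scoring scheme, and the weight below are illustrative assumptions, not the paper's algorithm:

```python
def rerank(face_scores, identified_in_photo, friendships, weight=0.2):
    """Boost each candidate identity's face-matching score by the number
    of its friends already identified in the same photo, then pick the
    highest-scoring candidate."""
    adjusted = {}
    for candidate, score in face_scores.items():
        friends_present = sum(
            1 for person in identified_in_photo
            if frozenset((candidate, person)) in friendships
        )
        adjusted[candidate] = score + weight * friends_present
    return max(adjusted, key=adjusted.get)

friendships = {frozenset(p) for p in [("alice", "bob"), ("alice", "carol")]}
# The face matcher alone slightly prefers "dave", but "alice" has two
# friends already identified in this photo, so context flips the decision.
best = rerank({"alice": 0.50, "dave": 0.55}, ["bob", "carol"], friendships)
```

The reverse direction, estimating the friendship graph from co-occurrence counts, would accumulate the same `frozenset` pair keys over many photos instead of reading them from the social network.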
Urgesi, Cosimo; Bricolo, Emanuela; Aglioti, Salvatore M
2005-08-01
Cerebral dominance and hemispheric metacontrol were investigated by testing the ability of healthy participants to match chimeric, entire, or half faces presented tachistoscopically. The two hemi-faces compounding chimeric or entire stimuli were presented simultaneously or asynchronously at different exposure times. Participants did not consciously detect chimeric faces for simultaneous presentations lasting up to 40 ms. Interestingly, a 20 ms separation between each half-chimera was sufficient to induce detection of conflicts at a conscious level. Although the presence of chimeric faces was not consciously perceived, performance on chimeric faces was poorer than on entire- and half-faces stimuli, thus indicating an implicit processing of perceptual conflicts. Moreover, the precedence of hemispheric stimulation over-ruled the right hemisphere dominance for face processing, insofar as the hemisphere stimulated last appeared to influence the response. This dynamic reversal of cerebral dominance, however, was not caused by a shift in hemispheric specialization, since the level of performance always reflected the right hemisphere specialization for face recognition. Thus, the dissociation between hemispheric dominance and specialization found in the present study hints at the existence of hemispheric metacontrol in healthy individuals.
A novel BCI based on ERP components sensitive to configural processing of human faces
NASA Astrophysics Data System (ADS)
Zhang, Yu; Zhao, Qibin; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej
2012-04-01
This study introduces a novel brain-computer interface (BCI) based on an oddball paradigm that uses facial images with degraded configural information (e.g., inverted faces) as stimuli. To the best of our knowledge, the configural processing of human faces has not previously been applied to BCI, although it is widely studied in cognitive neuroscience. Our experiments confirm that the face-sensitive event-related potential (ERP) components N170 and vertex positive potential (VPP) reflect early structural encoding of faces and can be modulated by configural face processing. With the proposed paradigm, we investigate the effects of the ERP components N170, VPP and P300 on target detection for BCI. An eight-class BCI platform is developed to analyze ERPs and evaluate target detection performance using linear discriminant analysis without complicated feature extraction. An online classification accuracy of 88.7% and an information transfer rate of 38.7 bits min-1, obtained with single-trial stimuli of inverted faces, suggest that the proposed paradigm based on configural face processing is very promising for visual stimulus-driven BCI applications.
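The reported information transfer rate follows from the standard Wolpaw formula, which converts classification accuracy over N classes into bits per selection. A sketch; the selections-per-minute figure in the comment is back-calculated from the abstract's numbers, not stated in it:

```python
import math

def bits_per_selection(n_classes, accuracy):
    """Wolpaw ITR per selection:
    log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
    p, n = accuracy, n_classes
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits

per_sel = bits_per_selection(8, 0.887)  # ~2.17 bits per selection
# 38.7 bits/min at ~2.17 bits/selection implies roughly 17.8 selections/min,
# i.e. a selection every ~3.4 s.
```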
ERIC Educational Resources Information Center
Kikuchi, Yukiko; Senju, Atsushi; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu
2009-01-01
Two experiments investigated attention of children with autism spectrum disorder (ASD) to faces and objects. In both experiments, children (7- to 15-year-olds) detected the difference between 2 visual scenes. Results in Experiment 1 revealed that typically developing children (n = 16) detected the change in faces faster than in objects, whereas…
A Smart Spoofing Face Detector by Display Features Analysis.
Lai, ChinLun; Tai, ChiuYuan
2016-07-21
In this paper, a smart face liveness detector is proposed to prevent a biometric system from being "deceived" by a video or picture of a valid user taken by a counterfeiter with a high-definition handheld device (e.g., an iPad with Retina display). By analyzing the characteristics of the display platform and using an expert decision-making core, we can effectively detect whether a spoofing attempt comes from a fake face shown on a high-definition display, by verifying the chromaticity regions in the captured face. That is, a live face and a spoofed face can be distinguished precisely by the designed optical image sensor. In short, with the proposed method/system, a normal optical image sensor can be upgraded to a powerful version that detects spoofing actions. The experimental results show that the proposed detection system achieves a very high detection rate compared to existing methods and is thus practical to implement directly in authentication systems.
Face recognition system for set-top box-based intelligent TV.
Lee, Won Oh; Kim, Yeong Gon; Hong, Hyung Gil; Park, Kang Ryoung
2014-11-18
Despite the prevalence of smart TVs, many consumers continue to use conventional TVs with supplementary set-top boxes (STBs) because of the high cost of smart TVs. However, because the processing power of an STB is quite low, the smart TV functionalities that can be implemented in an STB are very limited. For this reason, little research has been conducted on face recognition for conventional TVs with supplementary STBs, even though many such studies exist for smart TVs. In terms of camera sensors, previous face recognition systems have used high-resolution cameras, cameras with high-magnification zoom lenses, or camera systems with panning and tilting devices that allow face recognition from various positions. However, these cameras and devices cannot be used in intelligent TV environments because of size and cost limitations; only small, low-cost web-cameras can be used, and the resulting face recognition performance is degraded by the limited resolution and quality of the images. Therefore, we propose a new face recognition system for intelligent TVs that overcomes the limitations of low-resource set-top boxes and low-cost web-cameras. We implement the face recognition system as a software algorithm that does not require special devices or cameras.
Our research has the following four novelties. First, candidate face regions of a viewer are detected in an image captured by a camera connected to the STB, using low-complexity background subtraction and face-color filtering. Second, the detected candidate regions are transmitted to a server with high processing power, which detects the face regions accurately. Third, in-plane rotations of the face regions are compensated based on the similarity between the left and right half sub-regions of each face region. Fourth, various poses of the viewer's face are identified using five templates obtained during the initial user registration stage and multi-level local binary pattern matching. Experimental results indicate that the recall, precision, and genuine acceptance rate were about 95.7%, 96.2%, and 90.2%, respectively.
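The rotation-compensation idea above rests on a left/right half-similarity measure: an upright face is roughly mirror-symmetric, so a search over candidate in-plane angles can keep the angle with the best symmetry score. A minimal sketch of the score alone, assuming the face region is a small grayscale 2D array; the authors' exact similarity metric is not specified here:

```python
def symmetry_score(img):
    """Mean absolute difference between the left half of a face region and
    the mirrored right half; lower means more symmetric (more upright)."""
    h, w = len(img), len(img[0])
    half = w // 2
    total = count = 0
    for y in range(h):
        for x in range(half):
            total += abs(img[y][x] - img[y][w - 1 - x])
            count += 1
    return total / count

# A perfectly mirror-symmetric patch scores 0; an asymmetric one is higher.
symmetric = [[10, 20, 20, 10],
             [30, 40, 40, 30]]
skewed    = [[10, 20, 90, 10],
             [30, 40, 40, 95]]
```

A compensation loop would rotate the region by each candidate angle, evaluate `symmetry_score`, and keep the minimizing angle.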
NASA Astrophysics Data System (ADS)
Kaewkasi, Pitchaya; Widjaja, Joewono; Uozumi, Jun
2007-03-01
Effects of threshold value on detection performance of the modified amplitude-modulated joint transform correlator are quantitatively studied using computer simulation. Fingerprint and human face images are used as test scenes in the presence of noise and a contrast difference. Simulation results demonstrate that this correlator improves detection performance for both types of image used, but more so for human face images. Optimal detection of low-contrast human face images obscured by strong noise can be obtained by selecting an appropriate threshold value.
Face Liveness Detection Using Defocus
Kim, Sooyeon; Ban, Yuseok; Lee, Sangyoun
2015-01-01
In order to develop security systems for identity authentication, face recognition (FR) technology has been applied. One of the main problems of applying FR technology is that the systems are especially vulnerable to attacks with spoofing faces (e.g., 2D pictures). To defend from these attacks and to enhance the reliability of FR systems, many anti-spoofing approaches have been recently developed. In this paper, we propose a method for face liveness detection using the effect of defocus. From two images sequentially taken at different focuses, three features, focus, power histogram and gradient location and orientation histogram (GLOH), are extracted. Afterwards, we detect forged faces through the feature-level fusion approach. For reliable performance verification, we develop two databases with a handheld digital camera and a webcam. The proposed method achieves a 3.29% half total error rate (HTER) at a given depth of field (DoF) and can be extended to camera-equipped devices, like smartphones. PMID:25594594
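The half total error rate quoted above is simply the mean of the false rejection rate (live faces rejected) and the false acceptance rate (forged faces accepted) at a fixed decision threshold. A minimal sketch; the liveness scores and threshold are illustrative:

```python
def hter(genuine_scores, spoof_scores, threshold):
    """Half Total Error Rate: mean of the false-rejection rate (genuine
    faces scoring below threshold) and the false-acceptance rate (spoof
    faces scoring at or above it)."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in spoof_scores) / len(spoof_scores)
    return (far + frr) / 2

# Illustrative liveness scores from fused focus/power-histogram/GLOH features
genuine = [0.9, 0.8, 0.85, 0.4]   # one live face falls below the threshold
spoof = [0.1, 0.2, 0.7, 0.15]     # one forged face sneaks above it
rate = hter(genuine, spoof, threshold=0.5)
```

Here one miss on each side of the threshold gives FRR = FAR = 0.25, hence an HTER of 0.25; the paper's 3.29% corresponds to far cleaner score separation.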
Enhanced attention amplifies face adaptation.
Rhodes, Gillian; Jeffery, Linda; Evangelista, Emma; Ewing, Louise; Peters, Marianne; Taylor, Libby
2011-08-15
Perceptual adaptation not only produces striking perceptual aftereffects, but also enhances coding efficiency and discrimination by calibrating coding mechanisms to prevailing inputs. Attention to simple stimuli increases adaptation, potentially enhancing its functional benefits. Here we show that attention also increases adaptation to faces. In Experiment 1, face identity aftereffects increased when attention to adapting faces was increased using a change detection task. In Experiment 2, figural (distortion) face aftereffects increased when attention was increased using a snap game (detecting immediate repeats) during adaptation. Both were large effects. Contributions of low-level adaptation were reduced using free viewing (both experiments) and a size change between adapt and test faces (Experiment 2). We suggest that attention may enhance adaptation throughout the entire cortical visual pathway, with functional benefits well beyond the immediate advantages of selective processing of potentially important stimuli. These results highlight the potential to facilitate adaptive updating of face-coding mechanisms by strategic deployment of attentional resources. Copyright © 2011 Elsevier Ltd. All rights reserved.
More efficient rejection of happy than of angry face distractors in visual search.
Horstmann, Gernot; Scharlau, Ingrid; Ansorge, Ulrich
2006-12-01
In the present study, we examined whether the detection advantage for negative-face targets in crowds of positive-face distractors over positive-face targets in crowds of negative faces can be explained by differentially efficient distractor rejection. Search Condition A demonstrated more efficient distractor rejection with negative-face targets in positive-face crowds than vice versa. Search Condition B showed that target identity alone is not sufficient to account for this effect, because there was no difference in processing efficiency for positive- and negative-face targets within neutral crowds. Search Condition C showed differentially efficient processing with neutral-face targets among positive- or negative-face distractors. These results were obtained with both a within-participants (Experiment 1) and a between-participants (Experiment 2) design. The pattern of results is consistent with the assumption that efficient rejection of positive (more homogeneous) distractors is an important determinant of performance in search among (face) distractors.
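Search efficiency in such experiments is conventionally summarized by the slope of reaction time against set size (ms per item): efficient distractor rejection gives a shallow slope, inefficient rejection a steep one. A minimal least-squares sketch; the RT values are illustrative, not the experiment's data:

```python
def search_slope(set_sizes, rts):
    """Least-squares slope (ms per item) of reaction time vs. set size."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(rts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, rts))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

# Efficient rejection: shallow slope; inefficient rejection: steep slope.
shallow = search_slope([4, 8, 12], [620, 660, 700])   # 10 ms/item
steep = search_slope([4, 8, 12], [620, 780, 940])     # 40 ms/item
```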
The processing of social stimuli in early infancy: from faces to biological motion perception.
Simion, Francesca; Di Giorgio, Elisa; Leo, Irene; Bardi, Lara
2011-01-01
Several lines of evidence suggest that, from birth, the human system detects social agents on the basis of at least two properties: the presence of a face and the way they move. This chapter reviews the infant research on the origin of brain specialization for social stimuli and on the role of innate mechanisms and perceptual experience in shaping the development of the social brain. Two convergent lines of evidence, on face detection and on biological motion detection, are presented to demonstrate the innate predisposition of the human system to detect social stimuli at birth. For face detection, experiments demonstrate that, by virtue of nonspecific attentional biases, a very coarse template of faces is active at birth. For biological motion detection, studies demonstrate that, from birth, the human system is able to detect social stimuli on the basis of properties such as the presence of the semi-rigid motion known as biological motion. Overall, the empirical evidence converges in supporting the notion that the human system begins life broadly tuned to detect social stimuli and that progressive specialization narrows the system for social stimuli as a function of experience. Copyright © 2011 Elsevier B.V. All rights reserved.
Detecting "Infant-Directedness" in Face and Voice
ERIC Educational Resources Information Center
Kim, Hojin I.; Johnson, Scott P.
2014-01-01
Five- and 3-month-old infants' perception of infant-directed (ID) faces and the role of speech in perceiving faces were examined. Infants' eye movements were recorded as they viewed a series of two side-by-side talking faces, one infant-directed and one adult-directed (AD), while listening to ID speech, AD speech, or in silence. Infants…
Yamaguchi, Satoshi; Yamada, Yuya; Yoshida, Yoshinori; Noborio, Hiroshi; Imazato, Satoshi
2012-01-01
The virtual reality (VR) simulator is a useful tool for developing dental hand skill. However, VR simulations that include patient reactions have limited computational time in which to reproduce a face model. Our aim was to develop a patient face model that enables real-time collision detection and cutting operations by using stereolithography (STL) and deterministic finite automaton (DFA) data files. We evaluated the dependence of computational cost on how the STL and DFA data files are combined, constructed the patient face model using the optimum combination, and assessed the computational costs of the do-nothing, collision, cutting, and combined collision-and-cutting operations. The face model was successfully constructed with low computational costs of 11.3, 18.3, 30.3, and 33.5 ms for do-nothing, collision, cutting, and collision-and-cutting, respectively. The patient face model could be useful for developing dental hand skill with VR.
Levy, Boaz
2006-10-01
Empirical studies have questioned the validity of the Faces subtest from the WMS-III for detecting impairment in visual memory, particularly among the elderly. A recent examination of the test norms revealed a significant age-related floor effect emerging already on Faces I (immediate recall), implying excessive difficulty in the acquisition phase among unimpaired older adults. The current study compared the concurrent validity of the Faces subtest and an alternative measure in 16 Alzheimer's patients and 16 controls. The alternative measure was designed to facilitate acquisition by shortening the sequence of item presentation. Other changes aimed at increasing the retrieval challenge, decreasing error due to guessing, and standardizing the administration. Analyses converged to indicate that the alternative measure differentiated Alzheimer's patients from controls considerably better than the Faces subtest. Steps for revising the Faces subtest are discussed.
Tracking the truth: the effect of face familiarity on eye fixations during deception.
Millen, Ailsa E; Hope, Lorraine; Hillstrom, Anne P; Vrij, Aldert
2017-05-01
In forensic investigations, suspects sometimes conceal recognition of a familiar person to protect co-conspirators or hide knowledge of a victim. The current experiment sought to determine whether eye fixations could be used to identify memory of known persons when lying about recognition of faces. Participants' eye movements were monitored whilst they lied and told the truth about recognition of faces that varied in familiarity (newly learned, famous celebrities, personally known). Memory detection by eye movements during recognition of personally familiar and famous celebrity faces was negligibly affected by lying, thereby demonstrating that detection of memory during lies is influenced by the prior learning of the face. By contrast, eye movements did not reveal lies robustly for newly learned faces. These findings support the use of eye movements as markers of memory during concealed recognition but also suggest caution when familiarity is only a consequence of one brief exposure.
Bona, Silvia; Cattaneo, Zaira; Silvanto, Juha
2016-01-01
The right occipital face area (rOFA) is known to be involved in face discrimination based on local featural information. Whether this region is also involved in global, holistic stimulus processing is not known. We used fMRI-guided transcranial magnetic stimulation (TMS) to investigate whether rOFA is causally implicated in stimulus detection based on holistic processing, using Mooney stimuli. Two studies were carried out. In Experiment 1, participants performed a detection task involving Mooney faces and Mooney objects; Mooney stimuli lack distinguishable local features and can be detected solely via holistic processing (i.e., at a global level) with top-down guidance from previously stored representations. Experiment 2 required participants to detect shapes that are recognized via bottom-up integration of local (collinear) Gabor elements, and was performed to test the specificity of rOFA's implication in holistic detection. In Experiment 1, TMS over rOFA and over the right lateral occipital region (rLO) impaired detection of all stimulus categories, with no category-specific effect. In Experiment 2, shape detection was impaired when TMS was applied over rLO but not over rOFA. Our results demonstrate that rOFA is causally implicated in the type of top-down holistic detection required by Mooney stimuli and that this role is not face-selective. In contrast, rOFA does not appear to play a causal role in the detection of shapes based on bottom-up integration of local components, demonstrating that its involvement in processing non-face stimuli is specific to holistic processing. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Automatic Detection of Frontal Face Midline by Chain-coded Merlin-Farber Hough Transform
NASA Astrophysics Data System (ADS)
Okamoto, Daichi; Ohyama, Wataru; Wakabayashi, Tetsushi; Kimura, Fumitaka
We propose a novel approach for detecting the facial midline (facial symmetry axis) in a frontal face image. The facial midline has several applications, for instance reducing the computational cost of facial feature extraction (FFE) and supporting postoperative assessment for cosmetic or dental surgery. The proposed method detects the facial midline of a frontal face in an edge image as the symmetry axis, using the Merlin-Farber Hough transform (MFHT). In addition, a new performance-improvement scheme for midline detection by MFHT is presented. The main concept of the proposed scheme is suppression of redundant votes in the Hough parameter space by introducing a chain-code representation of the binary edge image. Experimental results on a dataset of 2409 images from the FERET database indicate that the proposed algorithm improves the accuracy of midline detection from 89.9% to 95.1% for face images with different scales and rotations.
Hyper-realistic face masks: a new challenge in person identification.
Sanders, Jet Gabrielle; Ueda, Yoshiyuki; Minemoto, Kazusa; Noyes, Eilidh; Yoshikawa, Sakiko; Jenkins, Rob
2017-01-01
We often identify people using face images. This is true in occupational settings such as passport control as well as in everyday social environments. Mapping between images and identities assumes that facial appearance is stable within certain bounds. For example, a person's apparent age, gender and ethnicity change slowly, if at all. It also assumes that deliberate changes beyond these bounds (i.e., disguises) would be easy to spot. Hyper-realistic face masks overturn these assumptions by allowing the wearer to look like an entirely different person. If unnoticed, these masks break the link between facial appearance and personal identity, with clear implications for applied face recognition. However, to date, no one has assessed the realism of these masks, or specified conditions under which they may be accepted as real faces. Herein, we examined incidental detection of unexpected but attended hyper-realistic masks in both photographic and live presentations. Experiment 1 (UK; n = 60) revealed no evidence for overt detection of hyper-realistic masks among real face photos, and little evidence of covert detection. Experiment 2 (Japan; n = 60) extended these findings to different masks, mask-wearers and participant pools. In Experiment 3 (UK and Japan; n = 407), passers-by failed to notice that a live confederate was wearing a hyper-realistic mask and showed limited evidence of covert detection, even at close viewing distance (5 vs. 20 m). Across all of these studies, viewers accepted hyper-realistic masks as real faces. Specific countermeasures will be required if detection rates are to be improved.
Park, Hyung-Bum; Han, Ji-Eun; Hyun, Joo-Seok
2015-05-01
An expressionless face is often perceived as rude, whereas a smiling face is considered hospitable. Repeated exposure to such perceptions may have built a stereotype that categorizes an expressionless face as expressing negative emotion. To test this idea, we displayed a search array in which the target was an expressionless face and the distractors were either smiling or frowning faces, and manipulated the set size. Search reaction times were delayed with frowning distractors, and the delays became more evident as the set size increased. We also devised a short-term comparison task in which participants compared two sequential sets of expressionless, smiling, and frowning faces. Detection of an expression change across the sets was highly inaccurate when the change was between a frowning and an expressionless face. These results indicate that participants confused the emotions expressed by frowning and expressionless faces, suggesting that expressionless faces are difficult to distinguish from frowning faces. Copyright © 2015 Elsevier B.V. All rights reserved.
Image preprocessing study on KPCA-based face recognition
NASA Astrophysics Data System (ADS)
Li, Xuan; Li, Dehua
2015-12-01
Face recognition, an important biometric identification method with friendly, natural and convenient advantages, has attracted more and more attention. This paper investigates a face recognition system comprising face detection, feature extraction and recognition, focusing on the theory and key technology of the various preprocessing methods used in the face detection stage and, using the KPCA method, on how different preprocessing methods affect the recognition results. We choose the YCbCr color space for skin segmentation and integral projection for face location. We preprocess face images with erosion and dilation (the opening and closing operations) and an illumination-compensation method, and then apply a face recognition method based on kernel principal component analysis (KPCA); experiments were carried out on a typical face database, with the algorithms implemented on the MATLAB platform. Experimental results show that, under certain conditions, the kernel method based on the PCA algorithm makes the extracted features represent the original image information better, because it uses nonlinear feature extraction, and thereby achieves a higher recognition rate. In the image preprocessing stage, we found that different operations on the images can yield different results, and hence different recognition rates in the recognition stage. At the same time, in kernel principal component analysis, the degree of the polynomial kernel function can affect the recognition result.
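The YCbCr skin-segmentation step mentioned above amounts to converting each RGB pixel and thresholding its chroma components. A sketch using the ITU-R BT.601 conversion and a frequently cited Cb/Cr skin box; the exact bounds vary across papers, so the range below is an assumption, not this paper's values:

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b):
    """Commonly used chroma box for skin: 77 <= Cb <= 127, 133 <= Cr <= 173."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173

skin = is_skin(200, 140, 120)     # a typical skin tone
not_skin = is_skin(40, 90, 200)   # a blue pixel
```

Thresholding in the chroma plane is what makes the mask relatively robust to brightness changes, since luma (Y) is ignored; the integral-projection step would then locate the face inside the resulting binary mask.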
Unconstrained face detection and recognition based on RGB-D camera for the visually impaired
NASA Astrophysics Data System (ADS)
Zhao, Xiangdong; Wang, Kaiwei; Yang, Kailun; Hu, Weijian
2017-02-01
It is highly important for visually impaired people (VIP) to be aware of the human beings around them, so correctly recognizing people in a VIP-assisting apparatus provides great convenience. However, in classical face recognition technology, the faces used in training and prediction are usually frontal, and acquiring face images requires subjects to come close to the camera so that a frontal pose and adequate illumination are guaranteed. Meanwhile, face labels are defined manually rather than automatically; most of the time, labels belonging to different classes must be input one by one. These constraints prevent practical assistive applications for VIP. In this article, a face recognition system for unconstrained environments is proposed. Specifically, it requires neither the frontal pose nor the uniform illumination demanded by previous algorithms. The contributions of this work lie in three aspects. First, real-time frontal-face synthesis is implemented as an enhancement, and the synthesized frontal faces help to increase the recognition rate, as the experimental results show. Second, an RGB-D camera plays a significant role in our system: both color and depth information are utilized to achieve real-time face tracking, which not only raises the detection rate but also makes it possible to label faces automatically. Finally, we propose using neural networks to train the face recognition system, with Principal Component Analysis (PCA) applied to pre-refine the input data. This system is expected to help VIP become familiar with others and enable them to recognize people once the system is sufficiently trained.
Meinhardt, Günter; Kurbel, David; Meinhardt-Injac, Bozana; Persike, Malte
2018-03-22
Some years ago an asymmetry was reported for the inversion effect for horizontal (H) and vertical (V) relational face manipulations (Goffaux & Rossion, 2007). Subsequent research examined whether a specific disruption of long-range relations underlies the H/V inversion asymmetry (Sekunova & Barton, 2008). Here, we tested how detection of changes in interocular distance (H) and eye height (V) depends on cardinal internal features and external feature surround. Results replicated the H/V inversion asymmetry. Moreover, we found very different face cue dependencies for both change types. Performance and inversion effects did not depend on the presence of other face cues for detecting H changes. In contrast, accuracy for detecting V changes strongly depended on internal and external features, showing cumulative improvement when more cues were added. Inversion effects were generally large, and larger with external feature surround. The cue independence in detecting H relational changes indicates specialized local processing tightly tuned to the eyes region, while the strong cue dependency in detecting V relational changes indicates a global mechanism of cue integration across different face regions. These findings suggest that the H/V asymmetry of the inversion effect rests on an H/V anisotropy of face cue dependency, since only the global V mechanism suffers from disruption of cue integration as the major effect of face inversion. Copyright © 2018. Published by Elsevier Ltd.
Improving face image extraction by using deep learning technique
NASA Astrophysics Data System (ADS)
Xue, Zhiyun; Antani, Sameer; Long, L. R.; Demner-Fushman, Dina; Thoma, George R.
2016-03-01
The National Library of Medicine (NLM) has made a collection of over 1.2 million research articles, containing 3.2 million figure images, searchable using the Open-i multimodal (text+image) search engine. Many images are visible-light photographs, some of which contain faces ("face images"). Some of these face images are acquired in unconstrained settings, while others are studio photos. To extract the face regions in the images, we first applied one of the most widely used face detectors, a pre-trained Viola-Jones detector implemented in Matlab and OpenCV. The Viola-Jones detector was trained for unconstrained face image detection, but on the NLM database it produced many false positives, resulting in very low precision. To improve this performance, we applied a deep learning technique, which reduced the number of false positives and thereby improved detection precision significantly. (For example, the classification accuracy for identifying whether the face regions output by the Viola-Jones detector are true positives is about 96% on a test set.) By combining the two techniques (Viola-Jones and deep learning) we were able to increase the system's precision considerably, while avoiding the need to construct a large training set by manual delineation of face regions.
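The two-stage pipeline (a fast proposal detector followed by a deep-learning verifier that culls false positives) can be sketched abstractly. The verifier below is a stand-in predicate, and the toy regions and precision arithmetic are illustrative; they only demonstrate why filtering proposals raises precision:

```python
def two_stage_detect(image_regions, fast_detector, verifier):
    """Stage 1: a cheap detector proposes candidate face regions.
    Stage 2: a (e.g. CNN-based) verifier keeps only confirmed faces."""
    candidates = [r for r in image_regions if fast_detector(r)]
    return [r for r in candidates if verifier(r)]

def precision(detections, is_face):
    true_pos = sum(1 for r in detections if is_face(r))
    return true_pos / len(detections) if detections else 0.0

# Toy regions labeled "face" / "clutter"; the fast detector over-fires on
# clutter, and the verifier (assumed perfectly accurate here) removes those.
regions = ["face"] * 4 + ["clutter"] * 6
fast = lambda r: True               # fires on everything
verify = lambda r: r == "face"
stage1 = [r for r in regions if fast(r)]
stage2 = two_stage_detect(regions, fast, verify)
p1 = precision(stage1, verify)      # proposals alone: 0.4
p2 = precision(stage2, verify)      # after verification: 1.0
```

In practice the verifier is imperfect, so the gain trades a small recall loss for a large precision improvement, which matches the behavior described in the abstract.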
Monkeys and Humans Share a Common Computation for Face/Voice Integration
Chandrasekaran, Chandramouli; Lemus, Luis; Trubanova, Andrea; Gondan, Matthias; Ghazanfar, Asif A.
2011-01-01
Speech production involves the movement of the mouth and other regions of the face resulting in visual motion cues. These visual cues enhance intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, it should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share with humans vocal production biomechanics and communicate face-to-face with vocalizations. It is unknown, however, if they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations or models such as the principle of inverse effectiveness and a “race” model failed to account for their behavior patterns. Conversely, a “superposition model”, positing the linear summation of activity patterns in response to visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates. PMID:21998576
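The contrast between the two candidate mechanisms can be illustrated with a toy deterministic accumulator. The rates and threshold below are invented for illustration; the paper's actual models are fitted to reaction-time distributions. Under superposition, visual and auditory signals sum, so the combined process reaches threshold sooner than the faster unisensory process that wins a race.

```python
THRESHOLD = 1.0  # arbitrary activation level required for detection

def unisensory_rt(rate: float) -> float:
    """Time for a single accumulator rising at `rate` to reach threshold."""
    return THRESHOLD / rate

def race_rt(rate_v: float, rate_a: float) -> float:
    """Race model: the faster unisensory process alone determines detection."""
    return min(unisensory_rt(rate_v), unisensory_rt(rate_a))

def superposition_rt(rate_v: float, rate_a: float) -> float:
    """Superposition model: visual and auditory activity sum linearly,
    so the combined accumulator reaches threshold sooner."""
    return THRESHOLD / (rate_v + rate_a)

rt_race = race_rt(0.5, 0.4)            # faster channel alone: 2.0 time units
rt_super = superposition_rt(0.5, 0.4)  # summed channels: ~1.11 time units
```

The point of the sketch is the qualitative prediction: linear summation yields multisensory detection times faster than the race bound, which is the pattern the paper reports in both species.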
Manipulation Detection and Preference Alterations in a Choice Blindness Paradigm
Taya, Fumihiko; Gupta, Swati; Farber, Ilya; Mullette-Gillman, O'Dhaniel A.
2014-01-01
Objectives It is commonly believed that individuals make choices based upon their preferences and have access to the reasons for their choices. Recent studies in several areas suggest that this is not always the case. In choice blindness paradigms, two-alternative forced-choice tasks in which the chosen option is later replaced by the unselected option, individuals often fail to notice the replacement of their chosen option, confabulate explanations for why they chose the unselected option, and even show increased preferences for the unselected-but-replaced options immediately after choice (within seconds). Although choice blindness has been replicated across a variety of domains, numerous questions remain outstanding. Firstly, we sought to investigate how individual- or trial-level factors modulated detection of the manipulations. Secondly, we examined the nature and temporal duration (minutes vs. days) of the preference alterations induced by these manipulations. Methods Participants performed a computerized choice blindness task, selecting the more attractive face between presented pairs of female faces, and providing a typewritten explanation for their choice on half of the trials. Chosen-face cue manipulations were produced on a subset of trials by presenting the unselected face during the choice explanation as if it had been selected. Following all choice trials, participants rated the attractiveness of each face individually, and rated the similarity of each face pair. After approximately two weeks, participants re-rated the attractiveness of each individual face online. Results Participants detected manipulations on only a small proportion of trials, and fewer than half of participants detected any manipulation. Detection rates increased with the number of prior detections, and detection rates subsequent to the first detection were modulated by choice certainty.
We show clear short-term modulation of preferences in both manipulated and non-manipulated explanation trials compared to choice-only trials (with opposite directions of effect). Preferences were altered in the direction that subjects were led to believe they selected. PMID:25247886
Tsao, Doris Y.
2009-01-01
Faces are among the most informative stimuli we ever perceive: Even a split-second glimpse of a person's face tells us their identity, sex, mood, age, race, and direction of attention. The specialness of face processing is acknowledged in the artificial vision community, where contests for face recognition algorithms abound. Neurological evidence strongly implicates a dedicated machinery for face processing in the human brain, to explain the double dissociability of face and object recognition deficits. Furthermore, it has recently become clear that macaques too have specialized neural machinery for processing faces. Here we propose a unifying hypothesis, deduced from computational, neurological, fMRI, and single-unit experiments: that what makes face processing special is that it is gated by an obligatory detection process. We will clarify this idea in concrete algorithmic terms, and show how it can explain a variety of phenomena associated with face processing. PMID:18558862
Kashihara, Koji
2014-01-01
Unlike assistive technology for verbal communication, the brain-machine or brain-computer interface (BMI/BCI) has not been established as a non-verbal communication tool for amyotrophic lateral sclerosis (ALS) patients. Face-to-face communication enables access to rich emotional information, but individuals suffering from neurological disorders, such as ALS and autism, may not express their emotions or communicate their negative feelings. Although emotions may be inferred by looking at facial expressions, emotional prediction for neutral faces necessitates advanced judgment. The process that underlies brain neuronal responses to neutral faces and causes emotional changes remains unknown. To address this problem, therefore, this study attempted to decode conditioned emotional reactions to neutral face stimuli. This direction was motivated by the assumption that if electroencephalogram (EEG) signals can be used to detect patients' emotional responses to specific inexpressive faces, the results could be incorporated into the design and development of BMI/BCI-based non-verbal communication tools. To these ends, this study investigated how a neutral face associated with a negative emotion modulates rapid central responses in face processing and then identified cortical activities. The conditioned neutral face-triggered event-related potentials that originated from the posterior temporal lobe statistically significantly changed during late face processing (600–700 ms) after stimulus, rather than in early face processing activities, such as P1 and N170 responses. Source localization revealed that the conditioned neutral faces increased activity in the right fusiform gyrus (FG). This study also developed an efficient method for detecting implicit negative emotional responses to specific faces by using EEG signals. A classification method based on a support vector machine enables the easy classification of neutral faces that trigger specific individual emotions. 
In accordance with this classification, a face on a computer morphs into a sad or displeased countenance. The proposed method could be incorporated as a part of non-verbal communication tools to enable emotional expression. PMID:25206321
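As a rough sketch of the decoding step, here is a minimal linear SVM (sub-gradient descent on the regularized hinge loss) applied to synthetic late-window ERP amplitudes. The electrode count, the amplitude shift for conditioned faces, and the training settings are invented for illustration and do not reflect the study's data or its specific classifier configuration.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=300):
    """Minimal linear SVM via sub-gradient descent on the regularized
    hinge loss; labels y must be in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                     # margin-violating trials
        grad_w = lam * w - (y[viol][:, None] * X[viol]).sum(axis=0) / n
        grad_b = -y[viol].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(X, w, b):
    return np.sign(X @ w + b)

rng = np.random.default_rng(0)
n_trials, n_electrodes = 80, 6
# Synthetic late-window (600-700 ms) amplitudes per posterior electrode:
# conditioned-face trials are shifted toward more negative values.
neutral = rng.normal(0.0, 1.0, (n_trials, n_electrodes))
conditioned = rng.normal(-2.0, 1.0, (n_trials, n_electrodes))
X = np.vstack([neutral, conditioned])
y = np.array([-1] * n_trials + [1] * n_trials)

held_out = np.arange(len(y)) % 4 == 0          # every fourth trial for testing
w, b = train_linear_svm(X[~held_out], y[~held_out])
accuracy = float(np.mean(predict(X[held_out], w, b) == y[held_out]))
```

With real EEG one would extract per-electrode mean amplitudes in the late window as features and cross-validate; the synthetic shift here is deliberately large so the toy example separates cleanly.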
A novel thermal face recognition approach using face pattern words
NASA Astrophysics Data System (ADS)
Zheng, Yufeng
2010-04-01
A reliable thermal face recognition system can enhance national security applications such as prevention against terrorism, surveillance, monitoring and tracking, especially at nighttime. The system can be applied at airports, customs or high-alert facilities (e.g., a nuclear power plant) 24 hours a day. In this paper, we propose a novel face recognition approach utilizing thermal (long wave infrared) face images that can automatically identify a subject at both daytime and nighttime. With a properly acquired thermal image (as a query image) in the monitoring zone, the following processes are employed: normalization and denoising, face detection, face alignment, face masking, Gabor wavelet transform, face pattern word (FPW) creation, and face identification by similarity measure (Hamming distance). If eyeglasses are present on a subject's face, an eyeglasses mask is automatically extracted from the query face image and then applied to all FPWs being compared (no further transforms are needed). A high identification rate (97.44% with Top-1 match) has been achieved on our preliminary face dataset (39 subjects) with the proposed approach, regardless of operating time and glasses-wearing condition.
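A minimal sketch of the face-pattern-word idea: filter the face at several orientations with Gabor kernels, binarize the responses into one bit-string, and compare bit-strings by Hamming distance. The Gabor parameters below are invented, and the normalization, alignment, and eyeglasses-masking stages described above are omitted.

```python
import numpy as np

def gabor_kernel(size=9, theta=0.0, wavelength=4.0, sigma=2.0):
    """Real part of a Gabor filter at orientation `theta` (parameters invented)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / wavelength)

def face_pattern_word(face, thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Binarize Gabor responses at several orientations into one bit-string."""
    h, w = face.shape
    bits = []
    for theta in thetas:
        k = gabor_kernel(theta=theta)
        kh, kw = k.shape
        # valid-mode 2-D correlation via an explicit sliding window
        resp = np.array([[np.sum(face[i:i + kh, j:j + kw] * k)
                          for j in range(w - kw + 1)]
                         for i in range(h - kh + 1)])
        bits.append((resp > 0).ravel())
    return np.concatenate(bits)

def hamming_distance(fpw_a, fpw_b):
    """Identification score: fraction of disagreeing bits (0 = identical)."""
    return float(np.mean(fpw_a != fpw_b))

rng = np.random.default_rng(1)
face_a = rng.random((16, 16))   # stand-in "thermal face" crops
face_b = rng.random((16, 16))
same = hamming_distance(face_pattern_word(face_a), face_pattern_word(face_a))
diff = hamming_distance(face_pattern_word(face_a), face_pattern_word(face_b))
```

The appeal of the bit-string representation is that the eyeglasses mask can simply zero out the affected bits in every stored FPW before comparison, with no re-filtering required.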
Audio-video feature correlation: faces and speech
NASA Astrophysics Data System (ADS)
Durand, Gwenael; Montacie, Claude; Caraty, Marie-Jose; Faudemay, Pascal
1999-08-01
This paper presents a study of the correlation of features automatically extracted from the audio stream and the video stream of audiovisual documents. In particular, we were interested in finding out whether speech analysis tools could be combined with face detection methods, and to what extent they should be combined. A generic audio signal partitioning algorithm was first used to detect Silence/Noise/Music/Speech segments in a full-length movie. A generic object detection method was applied to the keyframes extracted from the movie in order to detect the presence or absence of faces. The correlation between the presence of a face in the keyframes and of the corresponding voice in the audio stream was studied. A third stream, the script of the movie, was warped onto the speech channel in order to automatically label faces appearing in the keyframes with the name of the corresponding character. As expected, we found that the extracted audio and video features were related in many cases, and that significant benefits can be obtained from the joint use of audio and video analysis methods.
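The face/voice correlation studied here can be approximated, in a toy form, by comparing two binary label streams, face present per keyframe versus speech present in the corresponding audio segment, using simple agreement and the phi coefficient. The labels below are invented; the paper's actual segmentation and detection methods are not reproduced.

```python
import numpy as np

def face_voice_agreement(face_present, speech_present):
    """Agreement rate and phi coefficient between two binary label streams:
    face detected in the keyframe vs. speech detected in the audio segment."""
    f = np.asarray(face_present, dtype=bool)
    s = np.asarray(speech_present, dtype=bool)
    agree = float(np.mean(f == s))
    tp = int(np.sum(f & s)); tn = int(np.sum(~f & ~s))
    fp = int(np.sum(f & ~s)); fn = int(np.sum(~f & s))
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    phi = (tp * tn - fp * fn) / denom if denom else 0.0
    return agree, phi

# Invented labels for five keyframes and their aligned audio segments
agree, phi = face_voice_agreement([1, 1, 0, 0, 1], [1, 1, 0, 1, 1])
```

A positive phi on a long label sequence would indicate, as the paper reports, that on-screen faces and speech co-occur beyond chance.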
Face Processing: Models For Recognition
NASA Astrophysics Data System (ADS)
Turk, Matthew A.; Pentland, Alexander P.
1990-03-01
The human ability to process faces is remarkable. We can identify perhaps thousands of faces learned throughout our lifetime and read facial expression to understand such subtle qualities as emotion. These skills are quite robust, despite sometimes large changes in the visual stimulus due to expression, aging, and distractions such as glasses or changes in hairstyle or facial hair. Computers which model and recognize faces will be useful in a variety of applications, including criminal identification, human-computer interface, and animation. We discuss models for representing faces and their applicability to the task of recognition, and present techniques for identifying faces and detecting eye blinks.
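A standard way to model faces for recognition, in the spirit of this work, is principal component analysis of face vectors ("eigenfaces"): project each face onto the leading principal components and identify by nearest neighbour in that low-dimensional space. The sketch below uses random vectors in place of real face images, so the dimensions and gallery size are purely illustrative.

```python
import numpy as np

def eigenfaces(faces, k=4):
    """PCA on mean-centred face vectors: the top-k right singular vectors
    of the centred data matrix are the 'eigenfaces'."""
    mean = faces.mean(axis=0)
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[:k]

def project(face, mean, components):
    """Coordinates of a face in eigenface space."""
    return components @ (face - mean)

def nearest_face(probe, gallery, mean, components):
    """Identify a probe as the gallery face nearest in eigenface space."""
    pw = project(probe, mean, components)
    dists = [np.linalg.norm(pw - project(g, mean, components)) for g in gallery]
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
gallery = rng.normal(size=(5, 64))               # five stand-in "face" vectors
mean, comps = eigenfaces(gallery, k=4)
probe = gallery[2] + 0.01 * rng.normal(size=64)  # noisy view of face 2
match = nearest_face(probe, gallery, mean, comps)
```

Using the SVD of the centred data avoids forming the full covariance matrix, which matters when face vectors have many more pixels than there are training images.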
Automated face detection for occurrence and occupancy estimation in chimpanzees.
Crunchant, Anne-Sophie; Egerer, Monika; Loos, Alexander; Burghardt, Tilo; Zuberbühler, Klaus; Corogenes, Katherine; Leinert, Vera; Kulik, Lars; Kühl, Hjalmar S
2017-03-01
Surveying endangered species is necessary to evaluate conservation effectiveness. Camera trapping and biometric computer vision are recent technological advances. They have impacted on the methods applicable to field surveys and these methods have gained significant momentum over the last decade. Yet, most researchers inspect footage manually and few studies have used automated semantic processing of video trap data from the field. The particular aim of this study is to evaluate methods that incorporate automated face detection technology as an aid to estimate site use of two chimpanzee communities based on camera trapping. As a comparative baseline we employ traditional manual inspection of footage. Our analysis focuses specifically on the basic parameter of occurrence where we assess the performance and practical value of chimpanzee face detection software. We found that the semi-automated data processing required only 2-4% of the time compared to the purely manual analysis. This is a non-negligible increase in efficiency that is critical when assessing the feasibility of camera trap occupancy surveys. Our evaluations suggest that our methodology estimates the proportion of sites used relatively reliably. Chimpanzees are mostly detected when they are present and when videos are filmed in high-resolution: the highest recall rate was 77%, for a false alarm rate of 2.8% for videos containing only chimpanzee frontal face views. Certainly, our study is only a first step for transferring face detection software from the lab into field application. Our results are promising and indicate that the current limitation of detecting chimpanzees in camera trap footage due to lack of suitable face views can be easily overcome on the level of field data collection, that is, by the combined placement of multiple high-resolution cameras facing reverse directions. 
This will enable researchers to routinely conduct chimpanzee occupancy surveys based on camera trapping and semi-automated processing of footage. Using semi-automated ape face detection technology for processing camera trap footage requires only 2-4% of the time needed for manual analysis and allows site use by chimpanzees to be estimated relatively reliably. © 2017 Wiley Periodicals, Inc.
Finding a face in the crowd: testing the anger superiority effect in Asperger Syndrome.
Ashwin, Chris; Wheelwright, Sally; Baron-Cohen, Simon
2006-06-01
Social threat captures attention and is processed rapidly and efficiently, with many lines of research showing involvement of the amygdala. Visual search paradigms looking at social threat have shown angry faces 'pop-out' in a crowd, compared to happy faces. Autism and Asperger Syndrome (AS) are neurodevelopmental conditions characterised by social deficits, abnormal face processing, and amygdala dysfunction. We tested adults with high-functioning autism (HFA) and AS using a facial visual search paradigm with schematic neutral and emotional faces. We found, contrary to predictions, that people with HFA/AS performed similarly to controls in many conditions. However, the effect was reduced in the HFA/AS group when using widely varying crowd sizes and when faces were inverted, suggesting a difference in face-processing style may be evident even with simple schematic faces. We conclude there are intact threat detection mechanisms in AS, under simple and predictable conditions, but that like other face-perception tasks, the visual search of threat faces task reveals atypical face-processing in HFA/AS.
Fraudulent ID using face morphs: Experiments on human and automatic recognition
Robertson, David J.; Kramer, Robin S. S.; Burton, A. Mike
2017-01-01
Matching unfamiliar faces is known to be difficult, and this can give an opportunity to those engaged in identity fraud. Here we examine a relatively new form of fraud, the use of photo-ID containing a graphical morph between two faces. Such a document may look sufficiently like two people to serve as ID for both. We present two experiments with human viewers, and a third with a smartphone face recognition system. In Experiment 1, viewers were asked to match pairs of faces, without being warned that one of the pair could be a morph. They very commonly accepted a morphed face as a match. However, in Experiment 2, following very short training on morph detection, their acceptance rate fell considerably. Nevertheless, there remained large individual differences in people’s ability to detect a morph. In Experiment 3 we show that a smartphone makes errors at a similar rate to ‘trained’ human viewers—i.e. accepting a small number of morphs as genuine ID. We discuss these results in reference to the use of face photos for security. PMID:28328928
A Fuzzy Approach For Facial Emotion Recognition
NASA Astrophysics Data System (ADS)
Gîlcă, Gheorghe; Bîzdoacă, Nicu-George
2015-09-01
This article presents an emotion recognition system based on fuzzy sets. Human faces are detected in images with the Viola-Jones algorithm, and the Camshift algorithm is used to track them in video sequences. The detected human faces are passed to the decisional fuzzy system, which is based on fuzzified measurements of facial variables: the eyebrows, eyelids and mouth. The system can easily determine the emotional state of a person.
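The fuzzified-measurement idea can be sketched with triangular membership functions over a single invented variable (mouth curvature). The rule base, variable ranges, and emotion labels below are illustrative assumptions, not the paper's actual system, which fuzzifies several facial measurements.

```python
def triangular(x, a, b, c):
    """Triangular membership function rising over [a, b], falling over [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer_emotion(mouth_curvature):
    """Toy rule base on one fuzzified measurement: upward mouth curvature
    suggests happiness, downward sadness, near-flat a neutral state."""
    memberships = {
        "sad":     triangular(mouth_curvature, -1.5, -1.0, 0.0),
        "neutral": triangular(mouth_curvature, -0.5,  0.0, 0.5),
        "happy":   triangular(mouth_curvature,  0.0,  1.0, 1.5),
    }
    # Decision: the emotion with the highest membership degree wins
    return max(memberships, key=memberships.get), memberships

label, degrees = infer_emotion(0.8)   # a clearly upturned mouth
```

A full system would combine memberships from several variables (eyebrows, eyelids, mouth) through fuzzy rules before defuzzifying to a single emotional state.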
Almeida, Inês; van Asselen, Marieke; Castelo-Branco, Miguel
2013-09-01
In human cognition, most relevant stimuli, such as faces, are processed in central vision. However, it is widely believed that recognition of relevant stimuli (e.g. threatening animal faces) at peripheral locations is also important due to their survival value. Moreover, task instructions have been shown to modulate brain regions involved in threat recognition (e.g. the amygdala). In this respect it is also controversial whether tasks requiring explicit focus on stimulus threat content vs. implicit processing differently engage primitive subcortical structures involved in emotional appraisal. Here we have addressed the role of central vs. peripheral processing in the human amygdala using animal threatening vs. non-threatening face stimuli. First, a simple animal face recognition task with threatening and non-threatening animal faces, as well as non-face control stimuli, was employed in naïve subjects (implicit task). A subsequent task was then performed with the same stimulus categories (but different stimuli) in which subjects were told to explicitly detect threat signals. We found lateralized amygdala responses both to the spatial location of stimuli and to the threatening content of faces depending on the task performed: the right amygdala showed increased responses to centrally presented compared to left-presented stimuli specifically during the threat detection task, while the left amygdala was better able to discriminate threatening faces from non-facial displays during the animal face recognition task. Additionally, the right amygdala responded to faces during the threat detection task but only when they were centrally presented. Moreover, we have found no evidence for superior responses of the amygdala to peripheral stimuli. Importantly, we have found that striatal regions activate differentially depending on peripheral vs. central processing of threatening faces.
Accordingly, peripheral processing of these stimuli activated more strongly the putaminal region, while central processing engaged mainly the caudate nucleus. We conclude that the human amygdala has a central bias for face stimuli, and that visual processing recruits different striatal regions, putaminal or caudate based, depending on the task and on whether peripheral or central visual processing is involved. © 2013 Elsevier Ltd. All rights reserved.
de Carlo, Talisa E; Kokame, Gregg T; Kaneko, Kyle N; Lian, Rebecca; Lai, James C; Wee, Raymond
2018-03-20
To determine the sensitivity and specificity of polypoidal choroidal vasculopathy (PCV) diagnosis with structural en face optical coherence tomography (OCT) and OCT angiography (OCTA), we retrospectively reviewed the medical records of eyes diagnosed with PCV by indocyanine green angiography, with the structural en face OCT and OCTA diagnostic testing assessed by a trained reader. Structural en face OCT, cross-sectional OCT angiograms alone, and OCTA in its entirety were reviewed blinded to the findings of indocyanine green angiography and to each other, to determine whether they could demonstrate the PCV complex. Sensitivity and specificity of PCV diagnosis were determined for each imaging technique using indocyanine green angiography as the ground truth. Sensitivity and specificity were 30.0% and 85.7% for structural en face OCT, 26.8% and 96.8% for OCT angiograms alone, and 43.9% and 87.1% for the entire OCTA, respectively. Sensitivity and specificity were improved for OCT angiograms and OCTA when looking at images taken within 1 month of PCV diagnosis. Sensitivity of detecting PCV was low using structural en face OCT and OCTA, but specificity was high. Indocyanine green angiography remains the gold standard for PCV detection.
Is Beauty in the Face of the Beholder?
Laeng, Bruno; Vermeer, Oddrun; Sulutvedt, Unni
2013-01-01
Opposing forces influence assortative mating so that one seeks a similar mate while at the same time avoiding inbreeding with close relatives. Thus, mate choice may be a balancing of phenotypic similarity and dissimilarity between partners. In the present study, we assessed the role of resemblance to Self’s facial traits in judgments of physical attractiveness. Participants chose the most attractive face image of their romantic partner among several variants, where the faces were morphed so as to include only 22% of another face. Participants distinctly preferred a “Self-based morph” (i.e., their partner’s face with a small amount of Self’s face blended into it) to other morphed images. The Self-based morph was also preferred to the morph of their partner’s face blended with the partner’s same-sex “prototype”, although the latter face was (“objectively”) judged more attractive by other individuals. When ranking morphs differing in level of amalgamation (i.e., 11% vs. 22% vs. 33%) of another face, the 22% was chosen consistently as the preferred morph and, in particular, when Self was blended in the partner’s face. A forced-choice signal-detection paradigm showed that the effect of self-resemblance operated at an unconscious level, since the same participants were unable to detect the presence of their own faces in the above morphs. We concluded that individuals, if given the opportunity, seek to promote “positive assortment” for Self’s phenotype, especially when the level of similarity approaches an optimal point that is similar to Self without causing a conscious acknowledgment of the similarity. PMID:23874608
[Neural basis of self-face recognition: social aspects].
Sugiura, Motoaki
2012-07-01
Considering the importance of the face in social survival and evidence from evolutionary psychology of visual self-recognition, it is reasonable to expect neural mechanisms for higher social-cognitive processes to underlie self-face recognition. A decade of neuroimaging studies has, however, not provided an encouraging finding in this respect. Self-face-specific activation has typically been reported in the areas for sensory-motor integration in the right lateral cortices. This observation appears to reflect the physical nature of the self-face, whose representation is developed via the detection of contingency between one's own action and sensory feedback. We have recently revealed that the medial prefrontal cortex, implicated in socially nuanced self-referential processing, is activated during self-face recognition under a rich social context where multiple other faces are available for reference. The posterior cingulate cortex has also exhibited this activation modulation, and in a separate experiment it responded to an attractively manipulated self-face, suggesting its relevance to positive self-value. Furthermore, the regions in the right lateral cortices typically showing self-face-specific activation have responded also to the face of one's close friend under the rich social context. This observation is potentially explained by the fact that the contingency detection for physical self-recognition also plays a role in physical social interaction, which characterizes the representation of personally familiar people. These findings demonstrate that neuroscientific exploration reveals multiple facets of the relationship between self-face recognition and social-cognitive processes, and that, technically, the manipulation of social context is key to its success.
Folgerø, Per O; Hodne, Lasse; Johansson, Christer; Andresen, Alf E; Sætren, Lill C; Specht, Karsten; Skaar, Øystein O; Reber, Rolf
2016-01-01
This article explores the possibility of testing hypotheses about art production in the past by collecting data in the present. We call this enterprise "experimental art history". Why did medieval artists prefer to paint Christ with his face directed towards the beholder, while profane faces were noticeably more often painted in different degrees of profile? Is a preference for frontal faces motivated by deeper evolutionary and biological considerations? Head and gaze direction is a significant factor for detecting the intentions of others, and accurate detection of gaze direction depends on strong contrast between a dark iris and a bright sclera, a combination that is only found in humans among the primates. One uniquely human capacity is language acquisition, where the detection of shared or joint attention, for example through detection of gaze direction, contributes significantly to the ease of acquisition. The perceived face and gaze direction is also related to fundamental emotional reactions such as fear, aggression, empathy and sympathy. The fast-track modulator model presents a related fast and unconscious subcortical route that involves many central brain areas. Activity in this pathway mediates the affective valence of the stimulus. In particular, different sub-regions of the amygdala show specific activation as response to gaze direction, head orientation and the valence of facial expression. We present three experiments on the effects of face orientation and gaze direction on the judgments of social attributes. We observed that frontal faces with direct gaze were more highly associated with positive adjectives. Does this help to associate positive values to the Holy Face in a Western context? The formal result indicates that the Holy Face is perceived more positively than profiles with both direct and averted gaze. 
Two control studies, using a Brazilian and a Dutch database of photographs, showed a similar but weaker effect with a larger contrast between the gaze directions for profiles. Our findings indicate that many factors affect the impression of a face, and that eye contact in combination with face direction reinforce the general impression of portraits, rather than determine it.
Neural evidence for the subliminal processing of facial trustworthiness in infancy.
Jessen, Sarah; Grossmann, Tobias
2017-04-22
Face evaluation is thought to play a vital role in human social interactions. One prominent aspect is the evaluation of facial signs of trustworthiness, which has been shown to occur reliably, rapidly, and without conscious awareness in adults. Recent developmental work indicates that the sensitivity to facial trustworthiness has early ontogenetic origins as it can already be observed in infancy. However, it is unclear whether infants' sensitivity to facial signs of trustworthiness relies upon conscious processing of a face or, similar to adults, occurs also in response to subliminal faces. To investigate this question, we conducted an event-related brain potential (ERP) study, in which we presented 7-month-old infants with faces varying in trustworthiness. Facial stimuli were presented subliminally (below infants' face visibility threshold) for only 50 ms and then masked by presenting a scrambled face image. Our data revealed that infants' ERP responses to subliminally presented faces differed as a function of trustworthiness. Specifically, untrustworthy faces elicited an enhanced negative slow wave (800-1000ms) at frontal and central electrodes. The current findings critically extend prior work by showing that, similar to adults, infants' neural detection of facial signs of trustworthiness occurs also in response to subliminal faces. This supports the view that detecting facial trustworthiness is an early developing and automatic process in humans. Copyright © 2017 Elsevier Ltd. All rights reserved.
The Caledonian face test: A new test of face discrimination.
Logan, Andrew J; Wilkinson, Frances; Wilson, Hugh R; Gordon, Gael E; Loffler, Gunter
2016-02-01
This study aimed to develop a clinical test of face perception which is applicable to a wide range of patients and can capture normal variability. The Caledonian face test utilises synthetic faces which combine simplicity with sufficient realism to permit individual identification. Face discrimination thresholds (i.e. minimum difference between faces required for accurate discrimination) were determined in an "odd-one-out" task. The difference between faces was controlled by an adaptive QUEST procedure. A broad range of face discrimination sensitivity was determined from a group (N=52) of young adults (mean 5.75%; SD 1.18; range 3.33-8.84%). The test is fast (3-4 min), repeatable (test-re-test r(2)=0.795) and demonstrates a significant inversion effect. The potential to identify impairments of face discrimination was evaluated by testing LM who reported a lifelong difficulty with face perception. While LM's impairment for two established face tests was close to the criterion for significance (Z-scores of -2.20 and -2.27) for the Caledonian face test, her Z-score was -7.26, implying a more than threefold higher sensitivity. The new face test provides a quantifiable and repeatable assessment of face discrimination ability. The enhanced sensitivity suggests that the Caledonian face test may be capable of detecting more subtle impairments of face perception than available tests. Copyright © 2015 Elsevier Ltd. All rights reserved.
A smart technique for attendance system to recognize faces through parallelism
NASA Astrophysics Data System (ADS)
Prabhavathi, B.; Tanuja, V.; Madhu Viswanatham, V.; Rajashekhara Babu, M.
2017-11-01
A major part of recognising a person is the face, and with the help of image processing techniques we can exploit a person's physical features. In the old approach used in schools and colleges, the professor calls each student's name and then marks attendance for the students. In this paper we deviate from the old approach and adopt a new one based on image processing techniques, presenting a system for marking student attendance in the classroom automatically. At first, an image of the classroom is captured and stored in the data record. To the images stored in the database we apply an algorithm comprising steps such as histogram classification, noise removal, face detection, and face recognition. Through these steps we detect the faces and then compare them with the database. Attendance is marked automatically if the system recognizes the faces.
Hardware-software face detection system based on multi-block local binary patterns
NASA Astrophysics Data System (ADS)
Acasandrei, Laurentiu; Barriga, Angel
2015-03-01
Face detection is an important aspect of biometrics, video surveillance and human-computer interaction. Due to the complexity of detection algorithms, any face detection system requires a huge amount of computational and memory resources. In this communication, an accelerated implementation of the MB-LBP (multi-block local binary pattern) face detection algorithm targeting low-frequency, low-memory and low-power embedded systems is presented. The resulting implementation is time-deterministic and uses a customizable AMBA IP hardware accelerator. The IP implements the kernel operations of the MB-LBP algorithm and can be used as a universal accelerator for MB-LBP-based applications. The IP employs 8 parallel MB-LBP feature-evaluator cores, uses a deterministic bandwidth, has a low area profile, and its power consumption is ~95 mW on a Virtex5 XC5VLX50T. The overall acceleration gain of the implementation is between 5 and 8 times, while the hardware MB-LBP feature evaluation gain is between 69 and 139 times.
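The kernel operation that the AMBA IP accelerates, the multi-block LBP code, can be sketched in software as follows. The 3x3 block geometry is standard MB-LBP; the clockwise neighbour bit ordering here is an illustrative choice, since the abstract does not specify the scan order:

```python
import numpy as np

def mb_lbp(img, x, y, bw, bh):
    """Multi-block LBP code for the 3x3 block grid whose top-left corner
    is (x, y). Each of the 9 blocks is bw x bh pixels; the 8 outer block
    means are compared against the central block mean to form an 8-bit code.
    """
    means = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            block = img[y + i * bh:y + (i + 1) * bh,
                        x + j * bw:x + (j + 1) * bw]
            means[i, j] = block.mean()
    center = means[1, 1]
    # Clockwise neighbour order starting at the top-left block.
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (i, j) in enumerate(order):
        if means[i, j] >= center:
            code |= 1 << bit
    return code
```

Because each block mean can be computed with four integral-image lookups, this kernel maps naturally onto the parallel feature-evaluator cores the abstract describes.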
A system for tracking and recognizing pedestrian faces using a network of loosely coupled cameras
NASA Astrophysics Data System (ADS)
Gagnon, L.; Laliberté, F.; Foucher, S.; Branzan Albu, A.; Laurendeau, D.
2006-05-01
A face recognition module has been developed for an intelligent multi-camera video surveillance system. The module can recognize a pedestrian face in terms of six basic emotions and the neutral state. Face and facial feature detection (eyes, nasal root, nose and mouth) is first performed using cascades of boosted classifiers. These features are used to normalize the pose and dimensions of the face image. Gabor filters are then sampled on a regular grid covering the face image to build a facial feature vector that feeds a nearest-neighbor classifier with a cosine-distance similarity measure for facial expression interpretation and face model construction. A graphical user interface allows the user to adjust the module parameters.
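The final classification stage, a nearest-neighbour classifier with a cosine-distance similarity measure, can be sketched as follows; the `models` dictionary of per-expression reference vectors is a hypothetical stand-in for the module's stored face models built from grid-sampled Gabor responses:

```python
import numpy as np

def cosine_distance(u, v):
    """1 - cosine similarity between two feature vectors."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def classify_expression(feature, models):
    """Assign a face feature vector to the expression model (e.g. one of
    six basic emotions plus neutral) with the smallest cosine distance.
    `models` maps label -> reference feature vector.
    """
    return min(models, key=lambda label: cosine_distance(feature, models[label]))
```

Cosine distance ignores overall vector magnitude, which makes it a common choice for Gabor-magnitude features whose scale varies with image contrast.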
Dynamic Encoding of Face Information in the Human Fusiform Gyrus
Ghuman, Avniel Singh; Brunet, Nicolas M.; Li, Yuanning; Konecky, Roma O.; Pyles, John A.; Walls, Shawn A.; Destefino, Vincent; Wang, Wei; Richardson, R. Mark
2014-01-01
Humans’ ability to rapidly and accurately detect, identify, and classify faces under variable conditions derives from a network of brain regions highly tuned to face information. The fusiform face area (FFA) is thought to be a computational hub for face processing, however temporal dynamics of face information processing in FFA remains unclear. Here we use multivariate pattern classification to decode the temporal dynamics of expression-invariant face information processing using electrodes placed directly upon FFA in humans. Early FFA activity (50-75 ms) contained information regarding whether participants were viewing a face. Activity between 200-500 ms contained expression-invariant information about which of 70 faces participants were viewing along with the individual differences in facial features and their configurations. Long-lasting (500+ ms) broadband gamma frequency activity predicted task performance. These results elucidate the dynamic computational role FFA plays in multiple face processing stages and indicate what information is used in performing these visual analyses. PMID:25482825
Robust Point Set Matching for Partial Face Recognition.
Weng, Renliang; Lu, Jiwen; Tan, Yap-Peng
2016-03-01
Over the past three decades, a number of face recognition methods have been proposed in computer vision, and most of them use holistic face images for person identification. In many real-world scenarios, especially in unconstrained environments, human faces might be occluded by other objects, and it is difficult to obtain fully holistic face images for recognition. To address this, we propose a new partial face recognition approach to recognize persons of interest from their partial faces. Given a gallery image and a probe face patch, we first detect keypoints and extract their local textural features. Then, we propose a robust point set matching method to discriminatively match these two extracted local feature sets, where both the textural information and geometrical information of local features are explicitly used for matching simultaneously. Finally, the similarity of two faces is computed as the distance between these two aligned feature sets. Experimental results on four public face data sets show the effectiveness of the proposed approach.
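The core idea of matching two local feature sets using textural and geometrical information jointly can be illustrated with the following minimal sketch. The linear blending weight `alpha` and the greedy one-directional matching are simplifications for illustration, not the paper's discriminative point set matching method:

```python
import numpy as np

def match_cost(desc_a, desc_b, pt_a, pt_b, alpha=0.7):
    """Combined cost between two local features: textural (descriptor)
    distance blended with geometric (keypoint location) distance.
    alpha and the implied unit scales are illustrative choices."""
    tex = np.linalg.norm(desc_a - desc_b)
    geo = np.linalg.norm(pt_a - pt_b)
    return alpha * tex + (1 - alpha) * geo

def greedy_match(descs_a, pts_a, descs_b, pts_b):
    """Greedily pair each feature in set A with its lowest-cost feature
    in set B; the mean cost serves as a dissimilarity score between the
    gallery face and the partial probe face."""
    costs = []
    for da, pa in zip(descs_a, pts_a):
        best = min(match_cost(da, db, pa, pb)
                   for db, pb in zip(descs_b, pts_b))
        costs.append(best)
    return float(np.mean(costs))
```

Using geometry as well as texture penalises keypoint pairs that match in appearance but sit in implausible relative positions, which is what makes partial-to-holistic matching feasible.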
Bennetts, Rachel J; Mole, Joseph; Bate, Sarah
2017-09-01
Face recognition abilities vary widely. While face recognition deficits have been reported in children, it is unclear whether superior face recognition skills can be encountered during development. This paper presents O.B., a 14-year-old female with extraordinary face recognition skills: a "super-recognizer" (SR). O.B. demonstrated exceptional face-processing skills across multiple tasks, with a level of performance that is comparable to adult SRs. Her superior abilities appear to be specific to face identity: She showed an exaggerated face inversion effect and her superior abilities did not extend to object processing or non-identity aspects of face recognition. Finally, an eye-movement task demonstrated that O.B. spent more time than controls examining the nose - a pattern previously reported in adult SRs. O.B. is therefore particularly skilled at extracting and using identity-specific facial cues, indicating that face and object recognition are dissociable during development, and that super recognition can be detected in adolescence.
NASA Astrophysics Data System (ADS)
Lemoff, Brian E.; Martin, Robert B.; Sluch, Mikhail; Kafka, Kristopher M.; McCormick, William; Ice, Robert
2013-06-01
The capability to positively and covertly identify people at a safe distance, 24 hours per day, could provide a valuable advantage in protecting installations, both domestically and in an asymmetric warfare environment. This capability would enable installation security officers to identify known bad actors from a safe distance, even if they are approaching under cover of darkness. We will describe an active-SWIR imaging system being developed to automatically detect, track, and identify people at long range using computer face recognition. The system illuminates the target with an eye-safe and invisible SWIR laser beam, to provide consistent high-resolution imagery night and day. SWIR facial imagery produced by the system is matched against a watch-list of mug shots using computer face recognition algorithms. The current system relies on an operator to point the camera and to review and interpret the face recognition results. Automation software is being developed that will allow the system to be cued to a location by an external system, automatically detect a person, track the person as they move, zoom in on the face, select good facial images, and process the face recognition results, producing alarms and sharing data with other systems when people are detected and identified. Progress on the automation of this system will be presented along with experimental night-time face recognition results at a distance.
ERIC Educational Resources Information Center
Mermillod, Martial; Vermeulen, Nicolas; Lundqvist, Daniel; Niedenthal, Paula M.
2009-01-01
Research findings in social and cognitive psychology imply that it is easier to detect angry faces than happy faces in a crowd of neutral faces [Hansen, C. H., & Hansen, R. D. (1988). Finding the face in the crowd--An anger superiority effect. "Journal of Personality and Social Psychology," 54(6), 917-924]. This phenomenon has been held to have…
Expectations about person identity modulate the face-sensitive N170.
Johnston, Patrick; Overell, Anne; Kaufman, Jordy; Robinson, Jonathan; Young, Andrew W
2016-12-01
Identifying familiar faces is a fundamentally important aspect of social perception that requires the ability to assign very different (ambient) images of a face to a common identity. The current consensus is that the brain processes face identity at approximately 250-300 msec following stimulus onset, as indexed by the N250 event related potential. However, using two experiments we show compelling evidence that where experimental paradigms induce expectations about person identity, changes in famous face identity are in fact detected at an earlier latency corresponding to the face-sensitive N170. In Experiment 1, using a rapid periodic stimulation paradigm presenting highly variable ambient images, we demonstrate robust effects of low frequency, periodic face-identity changes in N170 amplitude. In Experiment 2, we added infrequent aperiodic identity changes to show that the N170 was larger to both infrequent periodic and infrequent aperiodic identity changes than to high frequency identities. Our use of ambient stimulus images makes it unlikely that these effects are due to adaptation of low-level stimulus features. In line with current ideas about predictive coding, we therefore suggest that when expectations about the identity of a face exist, the visual system is capable of detecting identity mismatches at a latency consistent with the N170. Copyright © 2016 Elsevier Ltd. All rights reserved.
On the flexibility of social source memory: a test of the emotional incongruity hypothesis.
Bell, Raoul; Buchner, Axel; Kroneisen, Meike; Giang, Trang
2012-11-01
A popular hypothesis in evolutionary psychology posits that reciprocal altruism is supported by a cognitive module that helps cooperative individuals to detect and remember cheaters. Consistent with this hypothesis, a source memory advantage for faces of cheaters (better memory for the cheating context in which these faces were encountered) was observed in previous studies. Here, we examined whether positive or negative expectancies would influence source memory for cheaters and cooperators. A cooperation task with virtual opponents was used in Experiments 1 and 2. Source memory for the emotionally incongruent information was enhanced relative to the congruent information: In Experiment 1, source memory was best for cheaters with likable faces and for cooperators with unlikable faces; in Experiment 2, source memory was better for smiling cheater faces than for smiling cooperator faces, and descriptively better for angry cooperator faces than for angry cheater faces. Experiments 3 and 4 showed that the emotional incongruity effect generalizes to 3rd-party reputational information (descriptions of cheating and trustworthy behavior). The results are inconsistent with the assumption of a highly specific cheater detection module. Focusing on expectancy-incongruent information may represent a more efficient, general, and hence more adaptive memory strategy for remembering exchange-relevant information than focusing only on cheaters.
Kilfedder, Catherine; Power, Kevin; Karatzias, Thanos; McCafferty, Aileen; Niven, Karen; Chouliara, Zoë; Galloway, Lisa; Sharp, Stephen
2010-09-01
The aim of the present study was to compare the effectiveness and acceptability of three interventions for occupational stress. A total of 90 National Health Service employees were randomized to face-to-face counselling, telephone counselling, or bibliotherapy. Outcomes were assessed at post-intervention and 4-month follow-up. Clinical Outcomes in Routine Evaluation (CORE), General Health Questionnaire (GHQ-12), and Perceived Stress Scale (PSS-10) were used to evaluate intervention outcomes. An intention-to-treat analysis was performed. Repeated measures analysis revealed significant time effects on all measures with the exception of CORE Risk. No significant group effects were detected on any of the outcome measures. No significant time-by-group interaction effects were detected on any of the outcome measures with the exception of CORE Functioning and GHQ total. With regard to acceptability of interventions, participants expressed a preference for face-to-face counselling over the other two modalities. Overall, it was concluded that the three intervention groups are equally effective. Given that bibliotherapy is the least costly of the three, results from the present study might be considered in relation to a stepped care approach to occupational stress management, with bibliotherapy as the first line of intervention, followed by telephone and face-to-face counselling as required.
Expectation and Surprise Determine Neural Population Responses in the Ventral Visual Stream
Egner, Tobias; Monti, Jim M.; Summerfield, Christopher
2014-01-01
Visual cortex is traditionally viewed as a hierarchy of neural feature detectors, with neural population responses being driven by bottom-up stimulus features. Conversely, “predictive coding” models propose that each stage of the visual hierarchy harbors two computationally distinct classes of processing unit: representational units that encode the conditional probability of a stimulus and provide predictions to the next lower level; and error units that encode the mismatch between predictions and bottom-up evidence, and forward prediction error to the next higher level. Predictive coding therefore suggests that neural population responses in category-selective visual regions, like the fusiform face area (FFA), reflect a summation of activity related to prediction (“face expectation”) and prediction error (“face surprise”), rather than a homogenous feature detection response. We tested the rival hypotheses of the feature detection and predictive coding models by collecting functional magnetic resonance imaging data from the FFA while independently varying both stimulus features (faces vs houses) and subjects’ perceptual expectations regarding those features (low vs medium vs high face expectation). The effects of stimulus and expectation factors interacted, whereby FFA activity elicited by face and house stimuli was indistinguishable under high face expectation and maximally differentiated under low face expectation. Using computational modeling, we show that these data can be explained by predictive coding but not by feature detection models, even when the latter are augmented with attentional mechanisms. Thus, population responses in the ventral visual stream appear to be determined by feature expectation and surprise rather than by stimulus features per se. PMID:21147999
Short, Lindsey A; Mondloch, Catherine J; Hackland, Anne T
2015-01-01
Adults are more accurate in detecting deviations from normality in young adult faces than in older adult faces despite exhibiting comparable accuracy in discriminating both face ages. This deficit in judging the normality of older faces may be due to reliance on a face space optimized for the dimensions of young adult faces, perhaps because of early and continuous experience with young adult faces. Here we examined the emergence of this young adult face bias by testing 3- and 7-year-old children on a child-friendly version of the task used to test adults. In an attractiveness judgment task, children viewed young and older adult face pairs; each pair consisted of an unaltered face and a distorted face of the same identity. Children pointed to the prettiest face, which served as a measure of their sensitivity to the dimensions on which faces vary relative to a norm. To examine whether biases in the attractiveness task were specific to deficits in referencing a norm or extended to impaired discrimination, we tested children on a simultaneous match-to-sample task with the same stimuli. Both age groups were more accurate in judging the attractiveness of young faces relative to older faces; however, unlike adults, the young adult face bias extended to the match-to-sample task. These results suggest that by 3 years of age, children's perceptual system is more finely tuned for young adult faces than for older adult faces, which may support past findings of superior recognition for young adult faces. Copyright © 2014 Elsevier Inc. All rights reserved.
Human face processing is tuned to sexual age preferences
Ponseti, J.; Granert, O.; van Eimeren, T.; Jansen, O.; Wolff, S.; Beier, K.; Deuschl, G.; Bosinski, H.; Siebner, H.
2014-01-01
Human faces can motivate nurturing behaviour or sexual behaviour when adults see a child or an adult face, respectively. This suggests that face processing is tuned to detecting age cues of sexual maturity to stimulate the appropriate reproductive behaviour: either caretaking or mating. In paedophilia, sexual attraction is directed to sexually immature children. Therefore, we hypothesized that brain networks that normally are tuned to mature faces of the preferred gender show an abnormal tuning to sexually immature faces in paedophilia. Here, we use functional magnetic resonance imaging (fMRI) to test directly for the existence of a network which is tuned to face cues of sexual maturity. During fMRI, participants sexually attracted to either adults or children were exposed to various face images. In individuals attracted to adults, adult faces activated several brain regions significantly more than child faces. These brain regions comprised areas known to be implicated in face processing and sexual processing, including occipital areas, the ventrolateral prefrontal cortex and, subcortically, the putamen and nucleus caudatus. The same regions were activated in paedophiles, but with a reversed preferential response pattern. PMID:24850896
Who is who: areas of the brain associated with recognizing and naming famous faces.
Giussani, Carlo; Roux, Franck-Emmanuel; Bello, Lorenzo; Lauwers-Cances, Valérie; Papagno, Costanza; Gaini, Sergio M; Puel, Michelle; Démonet, Jean-François
2009-02-01
It has been hypothesized that specific brain regions involved in face naming may exist in the brain. To spare these areas and to gain a better understanding of their organization, the authors studied patients who underwent surgery using direct electrical stimulation mapping for brain tumors, and they compared an object-naming task to a famous face-naming task. Fifty-six patients with brain tumors (39 and 17 in the left and right hemispheres, respectively) and with no significant preoperative overall language deficit were prospectively studied over a 2-year period. Four patients who had a partially selective famous face anomia and 2 with prosopagnosia were not included in the final analysis. Face-naming interferences were exclusively localized in small cortical areas (<1 cm²). Among 35 patients whose dominant left hemisphere was studied, 26 face-naming-specific areas (that is, sites of interference in face naming only and not in object naming) were found. These face-naming-specific sites were significantly detected in 2 regions: in the left frontal areas of the superior, middle, and inferior frontal gyri (p < 0.001) and in the anterior part of the superior and middle temporal gyri (p < 0.01). Variable patterns of interference were observed (speech arrest, anomia, phonemic, or semantic paraphasia), probably related to the different stages in famous face processing. Only 4 famous face-naming interferences were found in the right hemisphere. Relative anatomical segregation of naming categories within language areas was detected. This study showed that famous face naming was preferentially processed in the left frontal and anterior temporal gyri. The authors think it is necessary to adapt naming tasks in neurosurgical patients to the brain region studied.
A study on facial expressions recognition
NASA Astrophysics Data System (ADS)
Xu, Jingjing
2017-09-01
In communication, postures and facial expressions of feelings such as happiness, anger and sadness play important roles in conveying information. With the development of technology, a number of algorithms dealing with face alignment, face landmark detection, classification, facial landmark localization and pose estimation have recently been put forward. However, many challenges and problems remain to be addressed. In this paper, several techniques are summarized and analyzed, all relating to facial expression recognition and pose handling: the pose-indexed multi-view method for face alignment, robust facial landmark detection under significant head pose and occlusion, partitioning the input domain for classification, and robust-statistics face frontalization.
Familiarity Enhances Visual Working Memory for Faces
ERIC Educational Resources Information Center
Jackson, Margaret C.; Raymond, Jane E.
2008-01-01
Although it is intuitive that familiarity with complex visual objects should aid their preservation in visual working memory (WM), empirical evidence for this is lacking. This study used a conventional change-detection procedure to assess visual WM for unfamiliar and famous faces in healthy adults. Across experiments, faces were upright or…
Event-Related Brain Potential Correlates of Emotional Face Processing
ERIC Educational Resources Information Center
Eimer, Martin; Holmes, Amanda
2007-01-01
Results from recent event-related brain potential (ERP) studies investigating brain processes involved in the detection and analysis of emotional facial expression are reviewed. In all experiments, emotional faces were found to trigger an increased ERP positivity relative to neutral faces. The onset of this emotional expression effect was…
Attention and memory bias to facial emotions underlying negative symptoms of schizophrenia.
Jang, Seon-Kyeong; Park, Seon-Cheol; Lee, Seung-Hwan; Cho, Yang Seok; Choi, Kee-Hong
2016-01-01
This study assessed bias in selective attention to facial emotions in negative symptoms of schizophrenia and its influence on subsequent memory for facial emotions. Thirty people with schizophrenia who had high and low levels of negative symptoms (n = 15 each) and 21 healthy controls completed a visual probe detection task investigating selective attention bias (happy, sad, and angry faces randomly presented for 50, 500, or 1000 ms). A yes/no incidental facial memory task was then completed. Attention bias scores and recognition errors were calculated. Those with high negative symptoms exhibited reduced attention to emotional faces relative to neutral faces; those with low negative symptoms showed the opposite pattern when faces were presented for 500 ms, regardless of valence. Compared to healthy controls, those with high negative symptoms made more errors for happy faces in the memory task. Reduced attention to emotional faces in the probe detection task was significantly associated with less pleasure and motivation and more recognition errors for happy faces in the schizophrenia group only. Attention bias away from emotional information relatively early in the attentional process, and the associated diminished positive memory, may relate to pathological mechanisms for negative symptoms.
Applying face identification to detecting hijacking of airplane
NASA Astrophysics Data System (ADS)
Luo, Xuanwen; Cheng, Qiang
2004-09-01
The hijacking of airplanes that were crashed into the World Trade Center by terrorists was a disaster for civilization, and preventing hijackings is critical to homeland security. Reporting a hijacking in time, limiting the terrorists' ability to operate the plane if one occurs, and landing the plane at the nearest airport could be an efficient way to avoid such a tragedy. Image processing techniques for human face recognition or identification could be used for this task. Before the plane takes off, the face images of the pilots are input into a face identification system installed in the airplane. A camera in front of the pilot's seat keeps taking images of the pilot's face during the flight and comparing them with the pre-input pilot face images. If a different face is detected, a warning signal is sent to the ground automatically. At the same time, the automatic cruise system is started or the plane is controlled from the ground. The terrorists will have no control over the plane, which will be landed at the nearest or most appropriate airport under the control of the ground or the cruise system. This technique could also be used in the automobile industry as an image key to prevent car theft.
iFER: facial expression recognition using automatically selected geometric eye and eyebrow features
NASA Astrophysics Data System (ADS)
Oztel, Ismail; Yolcu, Gozde; Oz, Cemil; Kazan, Serap; Bunyak, Filiz
2018-03-01
Facial expressions have an important role in interpersonal communication and in estimating emotional states or intentions. Automatic recognition of facial expressions has led to many practical applications and has become one of the important topics in computer vision. We present a facial expression recognition system that relies on geometry-based features extracted from the eye and eyebrow regions of the face. The proposed system detects keypoints on frontal face images and forms a feature set using geometric relationships among groups of detected keypoints. The obtained feature set is refined and reduced using the sequential forward selection (SFS) algorithm and fed to a support vector machine classifier to recognize five facial expression classes. The proposed system, iFER (eye-eyebrow only facial expression recognition), is robust to lower-face occlusions that may be caused by beards, mustaches, scarves, etc., and to lower-face motion during speech production. Preliminary experiments on benchmark datasets produced promising results, outperforming previous facial expression recognition studies using partial face features, and comparable to studies using whole-face information, only slightly lower (by ~2.5%) than the best whole-face system while using only ~1/3 of the facial region.
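The sequential forward selection (SFS) step used to refine the feature set can be sketched generically as follows; `score_fn` stands in for whatever wrapper criterion (e.g. cross-validated SVM accuracy, as in the iFER system) drives the selection:

```python
def sfs(features, score_fn, k):
    """Sequential forward selection: greedily grow a feature subset, at
    each step adding the candidate feature that most improves score_fn.
    Stops at k features or when no candidate improves the score.
    `features` is a list of candidate feature indices."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score_fn(selected + [f]))
        if score_fn(selected + [best]) <= score_fn(selected):
            break  # no candidate improves the score; stop early
        selected.append(best)
        remaining.remove(best)
    return selected
```

SFS is a wrapper method: it evaluates the actual classifier on each candidate subset, so it captures feature interactions that per-feature filter rankings miss, at the cost of many classifier evaluations.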
Sunglass detection method for automation of video surveillance system
NASA Astrophysics Data System (ADS)
Sikandar, Tasriva; Samsudin, Wan Nur Azhani W.; Hawari Ghazali, Kamarul; Mohd, Izzeldin I.; Fazle Rabbi, Mohammad
2018-04-01
Wearing sunglasses to hide the face from surveillance cameras is a common activity in criminal incidents. Therefore, sunglass detection from surveillance video has become a demanding issue in the automation of security systems. In this paper we propose an image processing method to detect sunglasses in surveillance images. Specifically, a unique feature based on facial height and width is employed to identify the covered region of the face, and the presence of an area covered by sunglasses is evaluated using the facial height-width ratio. A threshold on the covered-area percentage is used to classify a glasses-wearing face. Two different types of glasses are considered, i.e. eyeglasses and sunglasses. The results of this study demonstrate that the proposed method is able to detect sunglasses under two different illumination conditions: room illumination and in the presence of sunlight. In addition, due to the multi-level checking in the facial region, the method achieves 100% accuracy in detecting sunglasses. However, in an exceptional case where fabric surrounding the face has a similar color to skin, the correct detection rate was found to be 93.33% for eyeglasses.
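The covered-area thresholding idea can be sketched as follows; the two threshold values are hypothetical placeholders, since the paper derives its classification threshold empirically from the facial height-width ratio:

```python
def classify_eyewear(covered_area, face_height, face_width,
                     t_eyeglass=0.05, t_sunglass=0.15):
    """Classify eyewear from the fraction of the facial bounding box
    occupied by the detected covered (occluded) region. The thresholds
    are illustrative placeholder values, not the paper's.
    """
    coverage = covered_area / (face_height * face_width)
    if coverage >= t_sunglass:
        return "sunglass"   # large dark region spanning both eyes
    if coverage >= t_eyeglass:
        return "eyeglass"   # thin frame region only
    return "none"
```

A real pipeline would first localize the face and segment the dark/occluded region in the upper facial half; only the final ratio-threshold decision is shown here.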
Global Binary Continuity for Color Face Detection With Complex Background
NASA Astrophysics Data System (ADS)
Belavadi, Bhaskar; Mahendra Prashanth, K. V.; Joshi, Sujay S.; Suprathik, N.
2017-08-01
In this paper, we propose a method to detect human faces in color images with complex backgrounds. The proposed algorithm makes use of two color-space models, HSV and YCgCr. The color-segmented image is filled uniformly with a single color (binary), and then all unwanted discontinuous lines are removed to obtain the final image. Experimental results on the Caltech database show that the proposed model is able to accomplish far better segmentation for faces of varying orientation, skin color and background environment.
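Combining HSV and YCgCr constraints into a binary skin mask can be sketched as below; the numeric ranges are common skin-color heuristics from the literature, not the exact thresholds of the proposed algorithm:

```python
import numpy as np

def skin_mask(h, s, v, cg, cr):
    """Binary skin mask combining HSV and YCgCr constraints.

    All inputs are same-shape arrays of per-pixel channel values:
    H in degrees [0, 360), S and V in [0, 1], Cg and Cr in [0, 255].
    A pixel is kept only if BOTH color spaces call it skin-like;
    the ranges below are illustrative heuristics.
    """
    hsv_ok = (h < 50) & (s >= 0.1) & (s <= 0.7) & (v >= 0.3)
    ycgcr_ok = (cg >= 85) & (cg <= 135) & (cr >= 130) & (cr <= 180)
    return hsv_ok & ycgcr_ok
```

Requiring agreement between two color spaces suppresses false positives from background regions that happen to be skin-like in only one space; the binary filling and discontinuity removal the abstract describes would then operate on this mask.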
Reverse engineering the face space: Discovering the critical features for face identification.
Abudarham, Naphtali; Yovel, Galit
2016-01-01
How do we identify people? What are the critical facial features that define an identity and determine whether two faces belong to the same person or different people? To answer these questions, we applied the face space framework, according to which faces are represented as points in a multidimensional feature space, such that face space distances are correlated with perceptual similarities between faces. In particular, we developed a novel method that allowed us to reveal the critical dimensions (i.e., critical features) of the face space. To that end, we constructed a concrete face space, which included 20 facial features of natural face images, and asked human observers to evaluate feature values (e.g., how thick are the lips). Next, we systematically and quantitatively changed facial features, and measured the perceptual effects of these manipulations. We found that critical features were those for which participants have high perceptual sensitivity (PS) for detecting differences across identities (e.g., which of two faces has thicker lips). Furthermore, these high PS features vary minimally across different views of the same identity, suggesting high PS features support face recognition across different images of the same face. The methods described here set an infrastructure for discovering the critical features of other face categories not studied here (e.g., Asians, familiar) as well as other aspects of face processing, such as attractiveness or trait inferences.
Typical and atypical neurodevelopment for face specialization: An fMRI study
Joseph, Jane E.; Zhu, Xun; Gundran, Andrew; Davies, Faraday; Clark, Jonathan D.; Ruble, Lisa; Glaser, Paul; Bhatt, Ramesh S.
2014-01-01
Individuals with Autism Spectrum Disorder (ASD) and their relatives process faces differently from typically developed (TD) individuals. In an fMRI face-viewing task, TD and undiagnosed sibling (SIB) children (5–18 years) showed face specialization in the right amygdala and ventromedial prefrontal cortex (vmPFC), with left fusiform and right amygdala face specialization increasing with age in TD subjects. SIBs showed extensive antero-medial temporal lobe activation for faces that was not present in any other group, suggesting a potential compensatory mechanism. In ASD, face specialization was minimal but increased with age in the right fusiform and decreased with age in the left amygdala, suggesting atypical development of a frontal-amygdala-fusiform system which is strongly linked to detecting salience and processing facial information. PMID:25479816
Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions
NASA Astrophysics Data System (ADS)
Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.
2005-03-01
The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system can also focus its attention on the faces of detected pedestrians, collecting snapshot frames of face images by segmenting and tracking them over time at different resolutions. The system employs two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects moving objects using change detection techniques. The detected objects are tracked over time and their positions are indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor, which points its zooming optics towards the moving object. The second camera tracks the objects at high resolution. Like the client camera, this sensor is calibrated, and the position of an object detected in the image plane reference system is translated into coordinates on the same area map. In this common map reference system, data fusion techniques are applied to achieve a more precise and robust estimate of the objects' tracks and to perform face detection and tracking. The novelty and strength of the work reside in the cooperative multi-sensor approach, in high-resolution long-distance tracking, and in the automatic collection of biometric data, such as a clip of a person's face, for recognition purposes.
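The client camera's change-detection and localization steps can be sketched as follows. This is a generic frame-differencing version with an assumed threshold and a single-object bounding box, not the system's actual implementation:

```python
def change_mask(frame, background, thresh=25):
    # Flag pixels whose grey-level difference from the background model
    # exceeds a threshold -- the basic change-detection step the client
    # camera uses to find moving objects. `thresh` is an assumed value.
    return [[1 if abs(p - b) > thresh else 0 for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

def bounding_box(mask):
    # Coarse localization of one detected object from the binary mask.
    # Its coordinates would then be mapped onto the shared area map and
    # sent to the server (zoom) camera. Returns (x0, y0, x1, y1) or None.
    ys = [y for y, row in enumerate(mask) for v in row if v]
    xs = [x for row in mask for x, v in enumerate(row) if v]
    if not xs:
        return None
    return min(xs), min(ys), max(xs), max(ys)
```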
Li, Tian-Tian; Lu, Yong
2014-11-07
This study used event-related potentials (ERPs) to examine subliminal affective priming effects of faces displaying various levels of arousal. The participants were asked to rate the arousal of ambiguous medium-arousing faces that were preceded by high- or low-arousing priming faces presented subliminally. The results revealed that the participants exhibited arousal-consistent variation in their arousal ratings of the probe faces exclusively in the negative prime condition. Compared with high-arousing faces, the low-arousing faces tended to elicit a greater late positive component (LPC, 450-660 ms) and a greater N400 (330-450 ms). These findings support the following conclusions: (1) the effect of subliminal affective priming of faces can be detected in the affective arousal dimension; (2) valence may influence the subliminal affective priming effect of the arousal dimension of emotional stimuli; and (3) the subliminal affective priming effect of face arousal occurs when the prime stimulus affects late-stage processing of the probe. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Bahrick, Lorraine E.; Lickliter, Robert; Castellanos, Irina
2014-01-01
Although research has demonstrated impressive face perception skills of young infants, little attention has focused on conditions that enhance versus impair infant face perception. The present studies tested the prediction, generated from the Intersensory Redundancy Hypothesis (IRH), that face discrimination, which relies on detection of visual featural information, would be impaired in the context of intersensory redundancy provided by audiovisual speech, and enhanced in the absence of intersensory redundancy (unimodal visual and asynchronous audiovisual speech) in early development. Later in development, following improvements in attention, faces should be discriminated in both redundant audiovisual and nonredundant stimulation. Results supported these predictions. Two-month-old infants discriminated a novel face in unimodal visual and asynchronous audiovisual speech but not in synchronous audiovisual speech. By 3 months, face discrimination was evident even during synchronous audiovisual speech. These findings indicate that infant face perception is enhanced and emerges developmentally earlier following unimodal visual than synchronous audiovisual exposure and that intersensory redundancy generated by naturalistic audiovisual speech can interfere with face processing. PMID:23244407
Inoue, Maiko; Jung, Jesse J; Balaratnasingam, Chandrakumar; Dansingani, Kunal K; Dhrami-Gavazi, Elona; Suzuki, Mihoko; de Carlo, Talisa E; Shahlaee, Abtin; Klufas, Michael A; El Maftouhi, Adil; Duker, Jay S; Ho, Allen C; Maftouhi, Maddalena Quaranta-El; Sarraf, David; Freund, K Bailey
2016-07-01
To determine the sensitivity of the combination of optical coherence tomography angiography (OCTA) and structural optical coherence tomography (OCT) for detecting type 1 neovascularization (NV) and to determine significant factors that preclude visualization of type 1 NV using OCTA. Multicenter, retrospective cohort study of 115 eyes from 100 patients with type 1 NV. A retrospective review of fluorescein angiography (FA), OCT, and OCTA imaging was performed on a consecutive series of eyes with type 1 NV from five institutions. Unmasked graders utilized FA and structural OCT data to determine the diagnosis of type 1 NV. Masked graders evaluated FA data alone, en face OCTA data alone, and combined en face OCTA and structural OCT data to determine the presence of type 1 NV. Sensitivity analyses were performed using combined FA and OCT data as the reference standard. A total of 105 eyes were diagnosed with type 1 NV using this reference standard. Of these, 90 (85.7%) could be detected using en face OCTA and structural OCT. The sensitivities of FA data alone and en face OCTA data alone for visualizing type 1 NV were the same (66.7%). Significant factors that precluded visualization of NV using en face OCTA included the height of the pigment epithelial detachment, low signal strength, and treatment-naïve disease (P < 0.05 for each). En face OCTA combined with structural OCT showed better detection of type 1 NV than either FA alone or en face OCTA alone. Combining en face OCTA and structural OCT information may therefore be a useful way to noninvasively diagnose and monitor the treatment of type 1 NV.
Newborns' Mooney-Face Perception
ERIC Educational Resources Information Center
Leo, Irene; Simion, Francesca
2009-01-01
The aim of this study is to investigate whether newborns detect a face on the basis of a Gestalt representation based on first-order relational information (i.e., the basic arrangement of face features) by using Mooney stimuli. The incomplete 2-tone Mooney stimuli were used because they preclude focusing both on the local features (i.e., the fine…
Infant Face Preferences after Binocular Visual Deprivation
ERIC Educational Resources Information Center
Mondloch, Catherine J.; Lewis, Terri L.; Levin, Alex V.; Maurer, Daphne
2013-01-01
Early visual deprivation impairs some, but not all, aspects of face perception. We investigated the possible developmental roots of later abnormalities by using a face detection task to test infants treated for bilateral congenital cataract within 1 hour of their first focused visual input. The seven patients were between 5 and 12 weeks old…
Burra, Nicolas; Coll, Sélim Yahia; Barras, Caroline; Kerzel, Dirk
2017-01-10
Recently, research on lateralized event-related potentials (ERPs) in response to irrelevant distractors has revealed that angry but not happy schematic distractors capture spatial attention. Whether this effect occurs in the context of natural emotional expressions is unknown. To fill this gap, observers were asked to judge the gender of a natural face surrounded by a color singleton among five other face identities. In contrast to previous studies, the similarity between the task-relevant feature (color) and the distractor features was low. On some trials, the target was displayed concurrently with an irrelevant angry or happy face. The lateralized ERPs to these distractors were measured as a marker of spatial attention. Our results revealed that angry face distractors, but not happy face distractors, triggered a PD (distractor positivity), which is a marker of distractor suppression. Subsequent to the PD, angry distractors elicited a larger N450 component, which is associated with conflict detection. We conclude that threatening expressions have high attentional priority because of their emotional value, resulting in early suppression and late conflict detection. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Real-time camera-based face detection using a modified LAMSTAR neural network system
NASA Astrophysics Data System (ADS)
Girado, Javier I.; Sandin, Daniel J.; DeFanti, Thomas A.; Wolf, Laura K.
2003-03-01
This paper describes a cost-effective, real-time (640x480 at 30 Hz) upright frontal face detector as part of an ongoing project to develop a video-based, tetherless 3D head position and orientation tracking system. The work is specifically targeted at auto-stereoscopic displays and projection-based virtual reality systems. The proposed face detector is based on a modified LAMSTAR neural network system. At the input stage, after image normalization and equalization, a sub-window analyzes facial features using a neural network. The sub-window is segmented, and each part is fed to a neural network layer consisting of a Kohonen Self-Organizing Map (SOM). The outputs of the SOM neural networks are interconnected and related by correlation links, and can hence determine the presence of a face with enough redundancy to provide a high detection rate. To avoid tracking multiple faces simultaneously, the system is initially trained to track only the face centered in a box superimposed on the display. The system is also rotationally and size invariant to a certain degree.
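The Kohonen SOM layers at the core of such a system can be illustrated with a minimal best-matching-unit lookup and winner update. This is a generic SOM sketch under simplifying assumptions (no neighborhood function, no correlation links), not the modified LAMSTAR network itself:

```python
import math

def bmu(weights, x):
    # Best-matching unit of a Kohonen SOM layer: the index of the
    # weight vector closest (Euclidean) to the input sub-window part.
    dists = [math.dist(w, x) for w in weights]
    return dists.index(min(dists))

def train_step(weights, x, lr=0.5):
    # One simplified SOM update: pull the winning unit toward the input.
    # A full LAMSTAR system would also update neighbours and the
    # correlation links between layers; both are omitted here.
    i = bmu(weights, x)
    weights[i] = [w + lr * (xi - w) for w, xi in zip(weights[i], x)]
    return i
```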
Is the Thatcher Illusion Modulated by Face Familiarity? Evidence from an Eye Tracking Study
2016-01-01
Thompson (1980) first detected and described the Thatcher Illusion, in which participants instantly perceive an upright face with inverted eyes and mouth as grotesque, but fail to do so when the same face is inverted. One prominent but controversial explanation is that the processing of configural information is disrupted in inverted faces. Studies investigating the Thatcher Illusion have used either famous or non-famous faces. Highly familiar faces are often thought to be processed in a pronounced configural mode, so they seem ideal candidates to be tested against unfamiliar faces in a single Thatcher study, but this has never been addressed. In our study, participants evaluated 16 famous and 16 non-famous faces for their grotesqueness. We tested whether familiarity (famous/non-famous faces) modulates reaction times, correctness of grotesqueness assessments (accuracy), and eye movement patterns for the factors orientation (upright/inverted) and Thatcherisation (Thatcherised/non-Thatcherised). On a behavioural level, familiarity effects were only observable via face inversion (higher accuracy and sensitivity for famous compared to non-famous faces) but not via Thatcherisation. Regarding eye movements, however, Thatcherisation influenced the scanning of famous and non-famous faces, for instance in the scanning of the mouth region of the presented faces (higher number, duration, and dwell time of fixations for famous compared to non-famous faces if Thatcherised). Altogether, famous faces seem to be processed in a more elaborate, more expertise-based way than non-famous faces, whereas non-famous, inverted faces seem to cause difficulties in accurate and sensitive processing. Results are further discussed in the light of existing studies of familiar vs. unfamiliar face processing. PMID:27776145
Prevalence of face recognition deficits in middle childhood.
Bennetts, Rachel J; Murray, Ebony; Boyce, Tian; Bate, Sarah
2017-02-01
Approximately 2-2.5% of the adult population is believed to show severe difficulties with face recognition in the absence of any neurological injury, a condition known as developmental prosopagnosia (DP). However, to date no research has attempted to estimate the prevalence of face recognition deficits in children, possibly because there are very few child-friendly, well-validated tests of face recognition. In the current study, we examined face and object recognition in a group of primary school children (aged 5-11 years), to establish whether our tests were suitable for children and to provide an estimate of face recognition difficulties in children. In Experiment 1 (n = 184), children completed a pre-existing test of child face memory, the Cambridge Face Memory Test-Kids (CFMT-K), and a bicycle test with the same format. In Experiment 2 (n = 413), children completed three-alternative forced-choice matching tasks with faces and bicycles. All tests showed good psychometric properties. The face and bicycle tests were well matched for difficulty and showed a similar developmental trajectory. Neither the memory nor the matching tests were suitable to detect impairments in the youngest groups of children, but both tests appear suitable to screen for face recognition problems in middle childhood. In the current sample, 1.2-5.2% of children showed difficulties with face recognition; 1.2-4% showed face-specific difficulties, that is, poor face recognition with typical object recognition abilities. This is somewhat higher than previous adult estimates: it is possible that face matching tests overestimate the prevalence of face recognition difficulties in children; alternatively, some children may "outgrow" face recognition difficulties.
Face verification system for Android mobile devices using histogram based features
NASA Astrophysics Data System (ADS)
Sato, Sho; Kobayashi, Kazuhiro; Chen, Qiu
2016-07-01
This paper proposes a face verification system that runs on Android mobile devices. In this system, a facial image is first captured by the device's built-in camera, and face detection is then implemented using Haar-like features and the AdaBoost learning algorithm. The proposed system verifies the detected face using histogram-based features, which are generated by a binary Vector Quantization (VQ) histogram using DCT coefficients in low frequency domains, as well as an Improved Local Binary Pattern (Improved LBP) histogram in the spatial domain. Verification results with the different types of histogram-based features are first obtained separately and then combined by weighted averaging. We evaluate our proposed algorithm using the publicly available ORL database and facial images captured by an Android tablet.
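The histogram features used for verification can be sketched as follows. This is the plain 8-neighbor LBP standing in for the paper's Improved LBP variant, on a greyscale image given as a list of rows:

```python
def lbp_code(img, y, x):
    # 8-neighbour Local Binary Pattern code for pixel (y, x): each
    # neighbour whose value is >= the centre contributes one bit.
    c = img[y][x]
    nbrs = [img[y-1][x-1], img[y-1][x], img[y-1][x+1], img[y][x+1],
            img[y+1][x+1], img[y+1][x], img[y+1][x-1], img[y][x-1]]
    return sum((1 << i) for i, n in enumerate(nbrs) if n >= c)

def lbp_histogram(img):
    # 256-bin LBP histogram over the interior pixels. Verification
    # would compare such histograms (e.g., by histogram intersection)
    # between the probe face and the enrolled face.
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist
```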
Support vector machine for automatic pain recognition
NASA Astrophysics Data System (ADS)
Monwar, Md Maruf; Rezaei, Siamak
2009-02-01
Facial expressions are a key index of emotion, and the interpretation of such expressions is critical to everyday social functioning. In this paper, we present an efficient video analysis technique for recognizing a specific expression, pain, from human faces. We employ an automatic face detector that detects faces in stored video frames using a skin color modeling technique. For pain recognition, location and shape features of the detected faces are computed. These features are then used as inputs to a support vector machine (SVM) for classification. We compare the results with neural network based and eigenimage based automatic pain recognition systems. The experimental results indicate that using a support vector machine as the classifier improves the performance of the automatic pain recognition system.
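The SVM classification stage can be illustrated with a minimal linear SVM trained by stochastic sub-gradient descent on the hinge loss (Pegasos-style). This generic sketch stands in for the paper's classifier; the feature vectors and labels (+1 = pain, -1 = no pain) are placeholders:

```python
import random

def train_linear_svm(X, y, epochs=200, lam=0.01, seed=0):
    # Minimal linear SVM: stochastic sub-gradient descent on the hinge
    # loss with L2 regularisation. X holds feature vectors (e.g., face
    # location/shape features), y holds labels in {-1, +1}.
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):
            t += 1
            eta = 1.0 / (lam * t)  # decaying learning rate
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            w = [(1 - eta * lam) * wj for wj in w]  # regularisation shrink
            if margin < 1:  # hinge-loss sub-gradient step
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1
```

A real system would use a kernelised SVM with a bias term and cross-validated hyperparameters; this sketch only shows the core decision rule.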
Learned face-voice pairings facilitate visual search.
Zweig, L Jacob; Suzuki, Satoru; Grabowecky, Marcia
2015-04-01
Voices provide a rich source of information that is important for identifying individuals and for social interaction. During search for a face in a crowd, voices often accompany visual information, and they facilitate localization of the sought-after individual. However, it is unclear whether this facilitation occurs primarily because the voice cues the location of the face or because it also increases the salience of the associated face. Here we demonstrate that a voice that provides no location information nonetheless facilitates visual search for an associated face. We trained novel face-voice associations and verified learning using a two-alternative forced choice task in which participants had to correctly match a presented voice to the associated face. Following training, participants searched for a previously learned target face among other faces while hearing one of the following sounds (localized at the center of the display): a congruent learned voice, an incongruent but familiar voice, an unlearned and unfamiliar voice, or a time-reversed voice. Only the congruent learned voice speeded visual search for the associated face. This result suggests that voices facilitate the visual detection of associated faces, potentially by increasing their visual salience, and that the underlying crossmodal associations can be established through brief training.
Human face processing is tuned to sexual age preferences.
Ponseti, J; Granert, O; van Eimeren, T; Jansen, O; Wolff, S; Beier, K; Deuschl, G; Bosinski, H; Siebner, H
2014-05-01
Human faces can motivate nurturing behaviour or sexual behaviour when adults see a child or an adult face, respectively. This suggests that face processing is tuned to detecting age cues of sexual maturity to stimulate the appropriate reproductive behaviour: either caretaking or mating. In paedophilia, sexual attraction is directed to sexually immature children. Therefore, we hypothesized that brain networks that normally are tuned to mature faces of the preferred gender show an abnormal tuning to sexual immature faces in paedophilia. Here, we use functional magnetic resonance imaging (fMRI) to test directly for the existence of a network which is tuned to face cues of sexual maturity. During fMRI, participants sexually attracted to either adults or children were exposed to various face images. In individuals attracted to adults, adult faces activated several brain regions significantly more than child faces. These brain regions comprised areas known to be implicated in face processing, and sexual processing, including occipital areas, the ventrolateral prefrontal cortex and, subcortically, the putamen and nucleus caudatus. The same regions were activated in paedophiles, but with a reversed preferential response pattern. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Croydon, Abigail; Pimperton, Hannah; Ewing, Louise; Duchaine, Brad C; Pellicano, Elizabeth
2014-09-01
Face recognition ability follows a lengthy developmental course, not reaching maturity until well into adulthood. Valid and reliable assessments of face recognition memory ability are necessary to examine patterns of ability and disability in face processing, yet there is a dearth of such assessments for children. We modified a well-known test of face memory in adults, the Cambridge Face Memory Test (Duchaine & Nakayama, 2006, Neuropsychologia, 44, 576-585), to make it developmentally appropriate for children. To establish its utility, we administered either the upright or inverted versions of the computerised Cambridge Face Memory Test - Children (CFMT-C) to 401 children aged between 5 and 12 years. Our results show that the CFMT-C is sufficiently sensitive to demonstrate age-related gains in the recognition of unfamiliar upright and inverted faces, does not suffer from ceiling or floor effects, generates robust inversion effects, and is capable of detecting difficulties in face memory in children diagnosed with autism. Together, these findings indicate that the CFMT-C constitutes a new valid assessment tool for children's face recognition skills. Copyright © 2014 Elsevier Ltd. All rights reserved.
Robust Selectivity for Faces in the Human Amygdala in the Absence of Expressions
Mende-Siedlecki, Peter; Verosky, Sara C.; Turk-Browne, Nicholas B.; Todorov, Alexander
2014-01-01
There is a well-established posterior network of cortical regions that plays a central role in face processing and that has been investigated extensively. In contrast, although responsive to faces, the amygdala is not considered a core face-selective region, and its face selectivity has never been a topic of systematic research in human neuroimaging studies. Here, we conducted a large-scale group analysis of fMRI data from 215 participants. We replicated the posterior network observed in prior studies but found equally robust and reliable responses to faces in the amygdala. These responses were detectable in most individual participants, but they were also highly sensitive to the initial statistical threshold and habituated more rapidly than the responses in posterior face-selective regions. A multivariate analysis showed that the pattern of responses to faces across voxels in the amygdala had high reliability over time. Finally, functional connectivity analyses showed stronger coupling between the amygdala and posterior face-selective regions during the perception of faces than during the perception of control visual categories. These findings suggest that the amygdala should be considered a core face-selective region. PMID:23984945
Daini, Roberta; Comparetti, Chiara M.; Ricciardelli, Paola
2014-01-01
Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus, process this facial dimension independently from features (which are impaired in CP), and basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typical development individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expression (task 1). Interestingly and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition. PMID:25520643
Luvizutto, Gustavo José; Fogaroli, Marcelo Ortolani; Theotonio, Rodolfo Mazeto; Nunes, Hélio Rubens de Carvalho; Resende, Luiz Antônio de Lima; Bazan, Rodrigo
2016-12-01
The face-hand test is a simple, practical, and rapid test for detecting neurological syndromes. However, it has not previously been assessed in a Brazilian sample; the objective of the present study was therefore to standardize the face-hand test for use in the multi-cultural population of Brazil and to identify the sociodemographic factors affecting the results. This was a cross-sectional study of 150 individuals. The sociodemographic variables collected included age, gender, race, body mass index, and years of education. The face-hand test was standardized over 2 rounds of 10 sensory stimuli applied simultaneously to the face and hand, with the participant seated with trunk support and vision obstructed in a sound-controlled environment. The associations between the face-hand test and sociodemographic variables were analyzed using Mann-Whitney tests and Spearman correlations. Binomial models were adjusted for the number of face-hand test variations, and ROC curves evaluated the sensitivity and specificity of sensory extinction. There was no significant relationship between the sociodemographic variables and the number of stimuli perceived in the face-hand test. There was a high relative frequency of detecting 8 out of 10 stimuli in this population. Sensory extinction occurred in 25.3% of participants, increased with increasing age (OR=1.4 [1.01-1.07]; p=0.006), and decreased significantly with increasing education (OR=0.82 [0.71-0.94]; p=0.005). In the Brazilian population, a normal face-hand test score ranges between 8 and 10 stimuli, and the results indicate that sensory extinction is associated with increased age and lower levels of education.
Detecting gear tooth fracture in a high contact ratio face gear mesh
NASA Technical Reports Server (NTRS)
Zakrajsek, James J.; Handschuh, Robert F.; Lewicki, David G.; Decker, Harry J.
1995-01-01
This paper summarizes the results of a study in which three different vibration diagnostic methods were used to detect gear tooth fracture in a high contact ratio face gear mesh. The NASA spiral bevel gear fatigue test rig was used to produce unseeded-fault, natural failures of four face gear specimens. During the fatigue tests, which were run to determine load capacity and primary failure mechanisms for face gears, vibration signals were monitored and recorded for gear diagnostic purposes. Gear tooth bending fatigue and surface pitting were the primary failure modes found in the tests. The damage ranged from partial tooth fracture on a single tooth in one test to heavy wear, severe pitting, and complete tooth fracture of several teeth in another test. Three gear fault detection techniques, FM4, NA4*, and NB4, were applied to the experimental data. These methods use the signal average in both the time and frequency domains. Method NA4* was able to conclusively detect the gear tooth fractures in three out of the four fatigue tests, along with gear tooth surface pitting and heavy wear. For multiple tooth fractures, all of the methods gave a clear indication of the damage. It was also found that, due to the high contact ratio of the face gear mesh, single tooth fractures did not significantly affect the vibration signal, making this type of failure difficult to detect.
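Of the three metrics, FM4 has a simple closed form: the normalized kurtosis of the "difference signal" (the time-synchronous average with the regular mesh components removed). A sketch, assuming the difference signal has already been computed:

```python
def fm4(diff_signal):
    # FM4 gear-fault metric: fourth central moment of the difference
    # signal divided by the squared variance (normalized kurtosis).
    # A healthy, Gaussian-like difference signal gives a value near 3;
    # localized tooth damage produces outlier peaks that raise it.
    n = len(diff_signal)
    mean = sum(diff_signal) / n
    m2 = sum((d - mean) ** 2 for d in diff_signal) / n
    m4 = sum((d - mean) ** 4 for d in diff_signal) / n
    return m4 / (m2 ** 2)
```

For example, a difference signal dominated by a single sharp spike (as from one cracked tooth) yields a much larger FM4 than a uniform oscillation.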
Discrimination between smiling faces: Human observers vs. automated face analysis.
Del Líbano, Mario; Calvo, Manuel G; Fernández-Martín, Andrés; Recio, Guillermo
2018-05-11
This study investigated (a) how prototypical happy faces (with happy eyes and a smile) can be discriminated from blended expressions with a smile but non-happy eyes, depending on type and intensity of the eye expression; and (b) how smile discrimination differs for human perceivers versus automated face analysis, depending on affective valence and morphological facial features. Human observers categorized faces as happy or non-happy, or rated their valence. Automated analysis (FACET software) computed seven expressions (including joy/happiness) and 20 facial action units (AUs). Physical properties (low-level image statistics and visual saliency) of the face stimuli were controlled. Results revealed, first, that some blended expressions (especially, with angry eyes) had lower discrimination thresholds (i.e., they were identified as "non-happy" at lower non-happy eye intensities) than others (especially, with neutral eyes). Second, discrimination sensitivity was better for human perceivers than for automated FACET analysis. As an additional finding, affective valence predicted human discrimination performance, whereas morphological AUs predicted FACET discrimination. FACET can be a valid tool for categorizing prototypical expressions, but is currently more limited than human observers for discrimination of blended expressions. Configural processing facilitates detection of in/congruence(s) across regions, and thus detection of non-genuine smiling faces (due to non-happy eyes). Copyright © 2018 Elsevier B.V. All rights reserved.
Hacker, Catrina M; Meschke, Emily X; Biederman, Irving
2018-03-20
Familiar objects, specified by name, can be identified with high accuracy when embedded in a rapidly presented sequence of images at rates exceeding 10 images/s. Not only can target objects be detected at such brief presentation rates, they can also be detected under high uncertainty, where their classification is defined negatively, e.g., "Not a Tool." The identification of a familiar speaker's voice declines precipitously when uncertainty is increased from one to a mere handful of possible speakers. Is the limitation imposed by uncertainty, i.e., the number of possible individuals, a general characteristic of processes for person individuation such that the identifiability of a familiar face would undergo a similar decline with uncertainty? Specifically, could the presence of an unnamed celebrity, thus any celebrity, be detected when presented in a rapid sequence of unfamiliar faces? If so, could the celebrity be identified? Despite the markedly greater physical similarity of faces compared to objects that are, say, not tools, the presence of a celebrity could be detected with moderately high accuracy (∼75%) at rates exceeding 7 faces/s. False alarms were exceedingly rare as almost all the errors were misses. Detection accuracy by moderate congenital prosopagnosics was lower than controls, but still well above chance. Given the detection of the presence of a celebrity, all subjects were almost always able to identify that celebrity, providing no role for a covert familiarity signal outside of awareness. Copyright © 2018 Elsevier Ltd. All rights reserved.
Lucas, Nadia; Vuilleumier, Patrik
2008-04-01
In normal observers, visual search is facilitated for targets with salient attributes. We compared how two different types of cue (expression and colour) may influence search for face targets, in healthy subjects (n=27) and right brain-damaged patients with left spatial neglect (n=13). The target faces were defined by their identity (singleton among a crowd of neutral faces) but could either be neutral (like other faces), or have a different emotional expression (fearful or happy), or a different colour (red-tinted). Healthy subjects were the fastest for detecting the colour-cued targets, but also showed a significant facilitation for emotionally cued targets, relative to neutral faces differing from other distracter faces by identity only. Healthy subjects were also faster overall for target faces located on the left, as compared to the right side of the display. In contrast, neglect patients were slower to detect targets on the left (contralesional) relative to the right (ipsilesional) side. However, they showed the same pattern of cueing effects as healthy subjects on both sides of space; while their best performance was also found for faces cued by colour, they showed a significant advantage for faces cued by expression, relative to the neutral condition. These results indicate that despite impaired attention towards the left hemispace, neglect patients may still show an intact influence of both low-level colour cues and emotional expression cues on attention, suggesting that neural mechanisms responsible for these effects are partly separate from fronto-parietal brain systems controlling spatial attention during search.
Brief Report: Reduced Prioritization of Facial Threat in Adults with Autism
ERIC Educational Resources Information Center
Sasson, Noah J.; Shasteen, Jonathon R.; Pinkham, Amy E.
2016-01-01
Typically-developing (TD) adults detect angry faces more efficiently within a crowd than non-threatening faces. Prior studies of this social threat superiority effect (TSE) in ASD using tasks consisting of schematic faces and homogeneous crowds have produced mixed results. Here, we employ a more ecologically-valid test of the social TSE and find…
Face, Body, and Center of Gravity Mediate Person Detection in Natural Scenes
ERIC Educational Resources Information Center
Bindemann, Markus; Scheepers, Christoph; Ferguson, Heather J.; Burton, A. Mike
2010-01-01
Person detection is an important prerequisite of social interaction, but is not well understood. Following suggestions that people in the visual field can capture a viewer's attention, this study examines the role of the face and the body for person detection in natural scenes. We observed that viewers tend first to look at the center of a scene,…
Automatic Detection of Acromegaly From Facial Photographs Using Machine Learning Methods.
Kong, Xiangyi; Gong, Shun; Su, Lijuan; Howard, Newton; Kong, Yanguo
2018-01-01
Automatic early detection of acromegaly is theoretically possible from facial photographs, which could reduce disease burden and increase the probability of cure. In this study, several popular machine learning algorithms were used to train on a retrospective development dataset consisting of 527 acromegaly patients and 596 normal subjects. We first used OpenCV to detect the face bounding box, then cropped and resized each face to the same pixel dimensions. From the detected faces, locations of facial landmarks, which are potential clinical indicators, were extracted. Frontalization was then adopted to synthesize frontal-facing views to improve performance. Several popular machine learning methods, including LM, KNN, SVM, RT, CNN, and EM, were used to automatically identify acromegaly from the detected facial photographs, extracted facial landmarks, and synthesized frontal faces. The trained models were evaluated using a separate dataset, half of which was diagnosed as acromegaly by growth hormone suppression test. The best of the proposed methods showed a PPV of 96%, an NPV of 95%, a sensitivity of 96%, and a specificity of 96%. Artificial intelligence can thus detect acromegaly automatically and early, with high sensitivity and specificity. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
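The PPV, NPV, sensitivity, and specificity reported above are all functions of the test set's confusion matrix. As a minimal illustration of how such screening metrics are derived (the counts below are hypothetical, not taken from the study):

```python
def screening_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate among patients
        "specificity": tn / (tn + fp),  # true-negative rate among controls
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts for a balanced evaluation set
m = screening_metrics(tp=96, fp=4, tn=96, fn=4)
print(m)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of acromegaly in the evaluation set, which is why the study's use of a roughly balanced dataset matters when interpreting those two numbers.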
Jessen, Sarah; Grossmann, Tobias
2017-01-01
Enhanced attention to fear expressions in adults is primarily driven by information from low as opposed to high spatial frequencies contained in faces. However, little is known about the role of spatial frequency information in emotion processing during infancy. In the present study, we examined the role of low compared to high spatial frequencies in the processing of happy and fearful facial expressions by using filtered face stimuli and measuring event-related brain potentials (ERPs) in 7-month-old infants ( N = 26). Our results revealed that infants' brains discriminated between emotional facial expressions containing high but not between expressions containing low spatial frequencies. Specifically, happy faces containing high spatial frequencies elicited a smaller Nc amplitude than fearful faces containing high spatial frequencies and happy and fearful faces containing low spatial frequencies. Our results demonstrate that already in infancy spatial frequency content influences the processing of facial emotions. Furthermore, we observed that fearful facial expressions elicited a comparable Nc response for high and low spatial frequencies, suggesting a robust detection of fearful faces irrespective of spatial frequency content, whereas the detection of happy facial expressions was contingent upon frequency content. In summary, these data provide new insights into the neural processing of facial emotions in early development by highlighting the differential role played by spatial frequencies in the detection of fear and happiness.
Veligdan, J.T.
1995-10-03
An interactive optical panel assembly includes an optical panel having a plurality of ribbon optical waveguides stacked together with opposite ends thereof defining panel first and second faces. A light source provides an image beam to the panel first face for being channeled through the waveguides and emitted from the panel second face in the form of a viewable light image. A remote device produces a response beam over a discrete selection area of the panel second face for being channeled through at least one of the waveguides toward the panel first face. A light sensor is disposed across a plurality of the waveguides for detecting the response beam therein for providing interactive capability. 10 figs.
Standoff imaging of a masked human face using a 670 GHz high resolution radar
NASA Astrophysics Data System (ADS)
Kjellgren, Jan; Svedin, Jan; Cooper, Ken B.
2011-11-01
This paper presents an exploratory attempt to use high-resolution radar measurements for face identification in forensic applications. An imaging radar system developed by JPL was used to measure a human face at 670 GHz. Frontal views of the face were measured both with and without a ski mask at a range of 25 m. The realized spatial resolution was roughly 1 cm in all three dimensions. The surfaces of the ski mask and the face were detected using the two dominant reflections in the amplitude data. Various methods for visualizing these surfaces are presented. The possibility of using radar data to determine face distance measures between well-defined facial landmarks, of the kind typically used for anthropometric statistics, was explored. The measures used here were face length, frontal breadth, and interpupillary distance. In many cases the radar system seems to provide sufficient information to exclude an innocent subject from suspicion. For accurate identification, it is believed that a system must provide significantly more information.
Valence modulates source memory for faces.
Bell, Raoul; Buchner, Axel
2010-01-01
Previous studies in which the effects of emotional valence on old-new discrimination and source memory have been examined have yielded highly inconsistent results. Here, we present two experiments showing that old-new face discrimination was not affected by whether a face was associated with disgusting, pleasant, or neutral behavior. In contrast, source memory for faces associated with disgusting behavior (i.e., memory for the disgusting context in which the face was encountered) was consistently better than source memory for other types of faces. This data pattern replicates the findings of studies in which descriptions of cheating, neutral, and trustworthy behavior were used, which findings were previously ascribed to a highly specific cheater detection module. The present results suggest that the enhanced source memory for faces of cheaters is due to a more general source memory advantage for faces associated with negative or threatening contexts that may be instrumental in avoiding the negative consequences of encounters with persons associated with negative or threatening behaviors.
Covert face recognition in congenital prosopagnosia: a group study.
Rivolta, Davide; Palermo, Romina; Schmalzl, Laura; Coltheart, Max
2012-03-01
Even though people with congenital prosopagnosia (CP) never develop a normal ability to "overtly" recognize faces, some individuals show indices of "covert" (or implicit) face recognition. The aim of this study was to demonstrate covert face recognition in CP when participants could not overtly recognize the faces. Eleven people with CP completed three tasks assessing their overt face recognition ability, and three tasks assessing their "covert" face recognition: a Forced choice familiarity task, a Forced choice cued task, and a Priming task. Evidence of covert recognition was observed with the Forced choice familiarity task, but not the Priming task. In addition, we propose that the Forced choice cued task does not measure covert processing as such, but instead "provoked-overt" recognition. Our study clearly shows that people with CP demonstrate covert recognition for faces that they cannot overtly recognize, and that behavioural tasks vary in their sensitivity to detect covert recognition in CP. Copyright © 2011 Elsevier Srl. All rights reserved.
Neuronal integration in visual cortex elevates face category tuning to conscious face perception
Fahrenfort, Johannes J.; Snijders, Tineke M.; Heinen, Klaartje; van Gaal, Simon; Scholte, H. Steven; Lamme, Victor A. F.
2012-01-01
The human brain has the extraordinary capability to transform cluttered sensory input into distinct object representations. For example, it is able to rapidly and seemingly without effort detect object categories in complex natural scenes. Surprisingly, category tuning is not sufficient to achieve conscious recognition of objects. What neural process beyond category extraction might elevate neural representations to the level where objects are consciously perceived? Here we show that visible and invisible faces produce similar category-selective responses in the ventral visual cortex. The pattern of neural activity evoked by visible faces could be used to decode the presence of invisible faces and vice versa. However, only visible faces caused extensive response enhancements and changes in neural oscillatory synchronization, as well as increased functional connectivity between higher and lower visual areas. We conclude that conscious face perception is more tightly linked to neural processes of sustained information integration and binding than to processes accommodating face category tuning. PMID:23236162
Yu, Fu-Yun; Han, Chialing; Chan, Tak-Wai
2008-08-01
This study investigates the impact of anonymous, computerized, synchronized team competition on students' motivation, satisfaction, and interpersonal relationships. Sixty-eight fourth-graders participated in this study. A synchronous gaming learning system was developed to have dyads compete against each other in answering multiple-choice questions set in accordance with the school curriculum in two conditions (face-to-face and anonymous). The results showed that students who were exposed to the anonymous team competition condition responded significantly more positively than those in the face-to-face condition in terms of motivation and satisfaction at the 0.050 and 0.056 levels respectively. Although further studies regarding the effects of anonymous interaction in a networked gaming learning environment are imperative, the positive effects detected in this preliminary study indicate that anonymity is a viable feature for mitigating the negative effects that competition may inflict on motivation and satisfaction as reported in traditional face-to-face environments.
Cloutier, Jasmin; Li, Tianyi; Mišic, Bratislav; Correll, Joshua; Berman, Marc G
2017-09-01
An extended distributed network of brain regions supports face perception. Face familiarity influences activity in brain regions involved in this network, but the impact of perceptual familiarity on this network has never been directly assessed with the use of partial least squares analysis. In the present work, we use this multivariate statistical analysis to examine how face-processing systems are differentially recruited by characteristics of the targets (i.e. perceptual familiarity and race) and of the perceivers (i.e. childhood interracial contact). Novel faces were found to preferentially recruit a large distributed face-processing network compared with perceptually familiar faces. Additionally, increased interracial contact during childhood led to decreased recruitment of distributed brain networks previously implicated in face perception, salience detection, and social cognition. Current results provide a novel perspective on the impact of cross-race exposure, suggesting that interracial contact early in life may dramatically shape the neural substrates of face perception generally. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
How Fast is Famous Face Recognition?
Barragan-Jason, Gladys; Lachat, Fanny; Barbeau, Emmanuel J.
2012-01-01
The rapid recognition of familiar faces is crucial for social interactions. However the actual speed with which recognition can be achieved remains largely unknown as most studies have been carried out without any speed constraints. Different paradigms have been used, leading to conflicting results, and although many authors suggest that face recognition is fast, the speed of face recognition has not been directly compared to “fast” visual tasks. In this study, we sought to overcome these limitations. Subjects performed three tasks, a familiarity categorization task (famous faces among unknown faces), a superordinate categorization task (human faces among animal ones), and a gender categorization task. All tasks were performed under speed constraints. The results show that, despite the use of speed constraints, subjects were slow when they had to categorize famous faces: minimum reaction time was 467 ms, which is 180 ms more than during superordinate categorization and 160 ms more than in the gender condition. Our results are compatible with a hierarchy of face processing from the superordinate level to the familiarity level. The processes taking place between detection and recognition need to be investigated in detail. PMID:23162503
Afraz, Arash; Boyden, Edward S; DiCarlo, James J
2015-05-26
Neurons that respond more to images of faces over nonface objects were identified in the inferior temporal (IT) cortex of primates three decades ago. Although it is hypothesized that perceptual discrimination between faces depends on the neural activity of IT subregions enriched with "face neurons," such a causal link has not been directly established. Here, using optogenetic and pharmacological methods, we reversibly suppressed the neural activity in small subregions of IT cortex of macaque monkeys performing a facial gender-discrimination task. Each type of intervention independently demonstrated that suppression of IT subregions enriched in face neurons induced a contralateral deficit in face gender-discrimination behavior. The same neural suppression of other IT subregions produced no detectable change in behavior. These results establish a causal link between the neural activity in IT face neuron subregions and face gender-discrimination behavior. Also, the demonstration that brief neural suppression of specific spatial subregions of IT induces behavioral effects opens the door for applying the technical advantages of optogenetics to a systematic attack on the causal relationship between IT cortex and high-level visual perception.
Differential involvement of episodic and face representations in ERP repetition effects.
Jemel, Boutheina; Calabria, Marco; Delvenne, Jean-François; Crommelinck, Marc; Bruyer, Raymond
2003-03-03
The purpose of this study was to disentangle the contribution of episodic-perceptual representations of faces to repetition effects from that of pre-existing memory representations. ERPs were recorded to first and second presentations of same and different photos of famous and unfamiliar faces, in an incidental task where occasional non-targets had to be detected. Repetition of both same and different photos of famous faces resulted in an N400 amplitude decrement. No such repetition-induced N400 attenuation was observed for unfamiliar faces. In addition, repetition of same photos of faces, but not different ones, gave rise to an early ERP repetition effect (starting at approximately 350 ms) with an occipito-temporal scalp distribution. Together, these results suggest that repetition effects depend on two temporally, and possibly neuro-functionally, distinct loci: episode-based representations and face recognition units stored in long-term memory.
The own-age face recognition bias is task dependent.
Proietti, Valentina; Macchi Cassia, Viola; Mondloch, Catherine J
2015-08-01
The own-age bias (OAB) in face recognition (more accurate recognition of own-age than other-age faces) is robust among young adults but not older adults. We investigated the OAB under two different task conditions. In Experiment 1 young and older adults (who reported more recent experience with own than other-age faces) completed a match-to-sample task with young and older adult faces; only young adults showed an OAB. In Experiment 2 young and older adults completed an identity detection task in which we manipulated the identity strength of target and distracter identities by morphing each face with an average face in 20% steps. Accuracy increased with identity strength and facial age influenced older adults' (but not younger adults') strategy, but there was no evidence of an OAB. Collectively, these results suggest that the OAB depends on task demands and may be absent when searching for one identity. © 2014 The British Psychological Society.
Segmentation of human face using gradient-based approach
NASA Astrophysics Data System (ADS)
Baskan, Selin; Bulut, M. Mete; Atalay, Volkan
2001-04-01
This paper describes a method for automatic segmentation of facial features such as eyebrows, eyes, nose, mouth, and ears in color images. This work is an initial step for a wide range of feature-based applications, such as face recognition, lip-reading, gender estimation, and facial expression analysis. The human face can be characterized by its skin color and nearly elliptical shape; accordingly, face detection is performed using color and shape information. Uniform illumination is assumed, and no restrictions on glasses, make-up, beard, etc. are imposed. Facial features are extracted using vertically and horizontally oriented gradient projections. The gradient of a minimum with respect to its neighboring maxima gives the boundaries of a facial feature. Each facial feature has a distinct horizontal characteristic, derived through extensive experimentation with many face images. Using fuzzy set theory, the similarity between a candidate and the feature characteristic under consideration is calculated. The gradient-based method is supplemented with anthropometric information for robustness. Ear detection is performed using contour-based shape descriptors. The method detects the facial features and circumscribes each with the smallest possible rectangle. The AR database is used for testing. The developed method is also suitable for real-time systems.
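The gradient-projection idea above can be sketched in a few lines, assuming a grayscale image as a NumPy array (smoothing, the fuzzy matching step, and anthropometric constraints are omitted; this is an illustration of the projection principle, not the paper's implementation):

```python
import numpy as np

def gradient_projections(img):
    """Row and column sums of gradient magnitude. Peaks in the row
    (horizontal) projection mark horizontally oriented features such
    as eyes and mouth; minima between neighboring maxima bound them."""
    gy, gx = np.gradient(img.astype(float))  # gradients along rows, columns
    mag = np.hypot(gx, gy)
    horizontal = mag.sum(axis=1)  # one value per row
    vertical = mag.sum(axis=0)    # one value per column
    return horizontal, vertical

# Toy example: a dark horizontal band on a bright background produces
# projection peaks at the band's upper and lower edges.
img = np.full((9, 9), 200.0)
img[4, :] = 0.0
h, v = gradient_projections(img)
```

In the toy image, the row projection `h` peaks at rows 3 and 5 (the band's edges), illustrating how a feature's vertical extent falls between projection maxima.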
Norton, Daniel; McBain, Ryan; Holt, Daphne J; Ongur, Dost; Chen, Yue
2009-06-15
Impaired emotion recognition has been reported in schizophrenia, yet the nature of this impairment is not completely understood. Recognition of facial emotion depends on processing affective and nonaffective facial signals, as well as basic visual attributes. We examined whether and how poor facial emotion recognition in schizophrenia is related to basic visual processing and nonaffective face recognition. Schizophrenia patients (n = 32) and healthy control subjects (n = 29) performed emotion discrimination, identity discrimination, and visual contrast detection tasks, where the emotionality, distinctiveness of identity, or visual contrast was systematically manipulated. Subjects determined which of two presentations in a trial contained the target: the emotional face for emotion discrimination, a specific individual for identity discrimination, and a sinusoidal grating for contrast detection. Patients had significantly higher thresholds (worse performance) than control subjects for discriminating both fearful and happy faces. Furthermore, patients' poor performance in fear discrimination was predicted by performance in visual detection and face identity discrimination. Schizophrenia patients require greater emotional signal strength to discriminate fearful or happy face images from neutral ones. Deficient emotion recognition in schizophrenia does not appear to be determined solely by affective processing but is also linked to the processing of basic visual and facial information.
Koda, Hiroki; Sato, Anna; Kato, Akemi
2013-09-01
Humans innately perceive infantile features as cute. The ethologist Konrad Lorenz proposed that the infantile features of mammals and birds, known as the baby schema (kindchenschema), motivate caretaking behaviour. As biologically relevant stimuli, newborns are likely to be processed specially in terms of visual attention, perception, and cognition. Recent demonstrations with human participants have shown visual attentional prioritisation of newborn faces (i.e., newborn faces capture visual attention). Although characteristics equivalent to those found in the faces of human infants are also found in nonhuman primates, attentional capture by newborn faces has not been tested in nonhuman primates. We examined whether conspecific newborn faces captured the visual attention of two Japanese monkeys using a target-detection task based on the dot-probe tasks commonly used in human visual attention studies. Although visual cues enhanced target detection in the subject monkeys, our results, unlike those for humans, showed no evidence of attentional prioritisation of newborn faces by monkeys. Our demonstrations showed the validity of the dot-probe task for visual attention studies in monkeys and propose a novel approach to bridge the gap between human and nonhuman primate social cognition research. This suggests that attentional capture by newborn faces is not common to macaques, but it is unclear whether nursing experience influences their perception and recognition of infantile stimuli. Additional comparative studies are needed to reveal the evolutionary origins of baby-schema perception and recognition. Copyright © 2013 Elsevier B.V. All rights reserved.
Technology survey on video face tracking
NASA Astrophysics Data System (ADS)
Zhang, Tong; Gomes, Herman Martins
2014-03-01
With the pervasiveness of monitoring cameras installed in public areas, schools, hospitals, work places and homes, video analytics technologies for interpreting these video contents are becoming increasingly relevant to people's lives. Among such technologies, human face detection and tracking (and face identification in many cases) are particularly useful in various application scenarios. While plenty of research has been conducted on face tracking and many promising approaches have been proposed, there are still significant challenges in recognizing and tracking people in videos with uncontrolled capturing conditions, largely due to pose and illumination variations, as well as occlusions and cluttered backgrounds. It is especially complex to track and identify multiple people simultaneously in real time due to the large amount of computation involved. In this paper, we present a survey of literature and software published or developed in recent years on the face tracking topic. The survey covers the following topics: 1) mainstream and state-of-the-art face tracking methods, including features used to model the targets and metrics used for tracking; 2) face identification and face clustering from face sequences; and 3) software packages or demonstrations that are available for algorithm development or trial. A number of publicly available databases for face tracking are also introduced.
Face detection in color images using skin color, Laplacian of Gaussian, and Euler number
NASA Astrophysics Data System (ADS)
Saligrama Sundara Raman, Shylaja; Kannanedhi Narasimha Sastry, Balasubramanya Murthy; Subramanyam, Natarajan; Senkutuvan, Ramya; Srikanth, Radhika; John, Nikita; Rao, Prateek
2010-02-01
In this paper, a feature-based approach to face detection is proposed using an ensemble of algorithms. The method uses chrominance values and edge features to classify image regions as skin or non-skin. The edge detector used is the Laplacian of Gaussian (LoG), which is found to be appropriate for images containing multiple faces with noise. Eight-connectivity analysis of these regions segregates them into probable face or non-face. The procedure is made more robust by identifying local features within the skin regions, including the number of holes, the percentage of skin, and the golden ratio. The proposed method has been tested on color face images of various races obtained from different sources, and its performance is found to be encouraging, as the color segmentation cleans up almost all of the complex facial features. The method achieves an accuracy of 86.5% on a test set of 230 images.
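The chrominance rule at the heart of such skin-color detectors can be sketched as follows. The Cb/Cr threshold ranges below are commonly used defaults, not the values from this paper, and the subsequent LoG edge, connectivity, and hole/Euler-number checks are only indicated in comments:

```python
import numpy as np

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Classify pixels as skin by their Cb/Cr chrominance.
    Edge detection (LoG), eight-connectivity labeling, and
    hole counting would then run on the resulting regions."""
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # ITU-R BT.601 RGB -> CbCr conversion (luma Y is not needed here)
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return ((cb_range[0] <= cb) & (cb <= cb_range[1])
            & (cr_range[0] <= cr) & (cr <= cr_range[1]))

# A skin-toned pixel passes the chrominance test; saturated blue does not.
px = np.array([[[200, 140, 120]], [[0, 0, 255]]], dtype=np.uint8)
mask = skin_mask(px).ravel()
```

Working in CbCr rather than RGB makes the rule largely independent of brightness, which is why such detectors tolerate moderate illumination changes.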
Face processing pattern under top-down perception: a functional MRI study
NASA Astrophysics Data System (ADS)
Li, Jun; Liang, Jimin; Tian, Jie; Liu, Jiangang; Zhao, Jizheng; Zhang, Hui; Shi, Guangming
2009-02-01
Although the top-down perceptual process plays an important role in face processing, its neural substrate remains puzzling because the top-down stream is difficult to extract from activation patterns contaminated by bottom-up face perception input. In the present study, a novel paradigm is employed in which participants are instructed to detect faces in pure noise images, which efficiently eliminates the interference of bottom-up face perception in top-down face processing. By analyzing the map of functional connectivity with the right FFA, computed with conventional Pearson correlation, a possible face processing pattern induced by top-down perception can be obtained. Apart from the bilateral fusiform gyrus (FG), left inferior occipital gyrus (IOG), and left superior temporal sulcus (STS), which are consistent with the core system in the distributed cortical network for face perception, activation induced by top-down face processing is also found in regions including the anterior cingulate cortex (ACC), right orbitofrontal cortex (OFC), left precuneus, right parahippocampal cortex, left dorsolateral prefrontal cortex (DLPFC), right frontal pole, bilateral premotor cortex, left inferior parietal cortex, and bilateral thalamus. The results indicate that decision-making, attention, episodic memory retrieval, and contextual associative processing networks cooperate with general face processing regions to process face information under top-down perception.
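Seed-based connectivity of the kind described above reduces to correlating one seed time course against every voxel's time course. A minimal sketch with synthetic data (this illustrates the Pearson computation only, not the study's fMRI preprocessing; the "right-FFA seed" label is an assumption for the example):

```python
import numpy as np

def seed_connectivity(seed_ts, voxel_ts):
    """Pearson correlation between a seed time course (T,) and each
    voxel time course (V, T), returning a connectivity map (V,)."""
    s = (seed_ts - seed_ts.mean()) / seed_ts.std()
    v = (voxel_ts - voxel_ts.mean(axis=1, keepdims=True)) \
        / voxel_ts.std(axis=1, keepdims=True)
    return (v @ s) / len(seed_ts)

rng = np.random.default_rng(0)
seed = rng.standard_normal(100)  # stand-in for a right-FFA seed time course
vox = np.vstack([
    seed + 0.1 * rng.standard_normal(100),  # voxel strongly coupled to the seed
    rng.standard_normal(100),               # unrelated voxel
])
r = seed_connectivity(seed, vox)
```

Thresholding the resulting map `r` across all voxels is what yields the connectivity pattern attributed to the seed region.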
Innovation in weight loss programs: a 3-dimensional virtual-world approach.
Johnston, Jeanne D; Massey, Anne P; Devaneaux, Celeste A
2012-09-20
The rising trend in obesity calls for innovative weight loss programs. While behavioral-based face-to-face programs have proven to be the most effective, they are expensive and often inaccessible. Internet or Web-based weight loss programs have expanded reach but may lack qualities critical to weight loss and maintenance such as human interaction, social support, and engagement. In contrast to Web technologies, virtual reality technologies offer unique affordances as a behavioral intervention by directly supporting engagement and active learning. To explore the effectiveness of a virtual-world weight loss program relative to weight loss and behavior change. We collected data from overweight people (N = 54) participating in a face-to-face or a virtual-world weight loss program. Weight, body mass index (BMI), percentage weight change, and health behaviors (ie, weight loss self-efficacy, physical activity self-efficacy, self-reported physical activity, and fruit and vegetable consumption) were assessed before and after the 12-week program. Repeated measures analysis was used to detect differences between groups and across time. A total of 54 participants with a BMI of 32 (SD 6.05) kg/m(2)enrolled in the study, with a 13% dropout rate for each group (virtual world group: 5/38; face-to-face group: 3/24). Both groups lost a significant amount of weight (virtual world: 3.9 kg, P < .001; face-to-face: 2.8 kg, P = .002); however, no significant differences between groups were detected (P = .29). Compared with baseline, the virtual-world group lost an average of 4.2%, with 33% (11/33) of the participants losing a clinically significant (≥5%) amount of baseline weight. The face-to-face group lost an average of 3.0% of their baseline weight, with 29% (6/21) losing a clinically significant amount. 
We detected a significant group × time interaction for moderate (P = .006) and vigorous physical activity (P = .008), physical activity self-efficacy (P = .04), fruit and vegetable consumption (P = .007), and weight loss self-efficacy (P < .001). Post hoc paired t tests indicated significant improvements across all of the variables for the virtual-world group. Overall, these results offer positive early evidence that a virtual-world-based weight loss program can be as effective as a face-to-face one relative to biometric changes. In addition, our results suggest that a virtual world may be a more effective platform to influence meaningful behavioral changes and improve self-efficacy.
Key, Alexandra P; Dykens, Elisabeth M
2016-12-01
The present study examined possible neural mechanisms underlying increased social interest in persons with Williams syndrome (WS). Visual event-related potentials (ERPs) during passive viewing were used to compare incidental memory traces for repeated vs. single presentations of previously unfamiliar social (faces) and nonsocial (houses) images in 26 adults with WS and 26 typical adults. Results indicated that participants with WS developed familiarity with the repeated faces and houses (frontal N400 response), but only typical adults evidenced the parietal old/new effect (previously associated with stimulus recollection) for the repeated faces. There was also no evidence of exceptional salience of social information in WS, as ERP markers of memory for repeated faces vs. houses were not significantly different. Thus, while persons with WS exhibit behavioral evidence of increased social interest, their processing of social information in the absence of specific instructions may be relatively superficial. The ERP evidence of face repetition detection in WS was independent of IQ and the earlier perceptual differentiation of social vs. nonsocial stimuli. Large individual differences in ERPs of participants with WS may provide valuable information for understanding the WS phenotype and have relevance for educational and treatment purposes.
NASA Astrophysics Data System (ADS)
Chen, Hai-Wen; McGurr, Mike
2016-05-01
We have developed a new way for detection and tracking of human full-body and body-parts with color (intensity) patch morphological segmentation and adaptive thresholding for security surveillance cameras. An adaptive threshold scheme has been developed for dealing with body size changes, illumination condition changes, and cross camera parameter changes. Tests with the PETS 2009 and 2014 datasets show that we can obtain high probability of detection and low probability of false alarm for full-body. Test results indicate that our human full-body detection method can considerably outperform the current state-of-the-art methods in both detection performance and computational complexity. Furthermore, in this paper, we have developed several methods using color features for detection and tracking of human body-parts (arms, legs, torso, and head, etc.). For example, we have developed a human skin color sub-patch segmentation algorithm by first conducting a RGB to YIQ transformation and then applying a Subtractive I/Q image Fusion with morphological operations. With this method, we can reliably detect and track human skin color related body-parts such as face, neck, arms, and legs. Reliable body-parts (e.g. head) detection allows us to continuously track the individual person even in the case that multiple closely spaced persons are merged. Accordingly, we have developed a new algorithm to split a merged detection blob back to individual detections based on the detected head positions. Detected body-parts also allow us to extract important local constellation features of the body-parts positions and angles related to the full-body. These features are useful for human walking gait pattern recognition and human pose (e.g. standing or falling down) estimation for potential abnormal behavior and accidental event detection, as evidenced with our experimental tests. 
Furthermore, based on reliable head (face) tracking, we have applied a super-resolution algorithm to enhance face resolution for improved human face recognition performance.
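The RGB-to-YIQ step described in this entry is straightforward to sketch. Below is a minimal illustration of the idea, assuming the standard NTSC transform matrix and an arbitrary threshold on the I − Q difference image; the paper's actual subtractive fusion, morphological operations, and adaptive thresholding are not reproduced.

```python
import numpy as np

# Standard NTSC RGB -> YIQ transform matrix.
RGB2YIQ = np.array([[0.299, 0.587, 0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523, 0.312]])

def skin_mask(rgb, thresh=0.05):
    """Crude skin segmentation: convert RGB (floats in [0, 1]) to YIQ,
    then threshold the I - Q difference (skin tones have a strong
    positive I component)."""
    yiq = rgb.reshape(-1, 3) @ RGB2YIQ.T
    i, q = yiq[:, 1], yiq[:, 2]
    return ((i - q) > thresh).reshape(rgb.shape[:2])

# One skin-like (reddish) pixel and one blue pixel.
img = np.array([[[0.9, 0.6, 0.5], [0.1, 0.2, 0.9]]])
print(skin_mask(img))   # -> [[ True False]]
```

The 0.05 threshold is a placeholder; the paper's point is precisely that such thresholds must adapt to illumination and camera changes.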
Searching for emotion or race: task-irrelevant facial cues have asymmetrical effects.
Lipp, Ottmar V; Craig, Belinda M; Frost, Mareka J; Terry, Deborah J; Smith, Joanne R
2014-01-01
Facial cues of threat such as anger and other race membership are detected preferentially in visual search tasks. However, it remains unclear whether these facial cues interact in visual search. If both cues equally facilitate search, a symmetrical interaction would be predicted; anger cues should facilitate detection of other race faces and cues of other race membership should facilitate detection of anger. Past research investigating this race by emotional expression interaction in categorisation tasks revealed an asymmetrical interaction. This suggests that cues of other race membership may facilitate the detection of angry faces but not vice versa. Utilising the same stimuli and procedures across two search tasks, participants were asked to search for targets defined by either race or emotional expression. Contrary to the results revealed in the categorisation paradigm, cues of anger facilitated detection of other race faces whereas differences in race did not differentially influence detection of emotion targets.
Being BOLD: The neural dynamics of face perception.
Gentile, Francesco; Ales, Justin; Rossion, Bruno
2017-01-01
According to a non-hierarchical view of human cortical face processing, selective responses to faces may emerge in a higher-order area of the hierarchy, the lateral part of the middle fusiform gyrus (fusiform face area [FFA]), independently of face-selective responses in the lateral inferior occipital gyrus (occipital face area [OFA]), a lower-order area. Here we provide a stringent test of this hypothesis by gradually revealing segmented face stimuli through strict linear descrambling of phase information [Ales et al., 2012]. Using a short sampling rate (500 ms) of fMRI acquisition and single-subject statistical analysis, we show face-selective responses emerging earlier, that is, at a lower level of structural (i.e., phase) information, in the FFA compared with the OFA. In both regions, the face detection response emerged at a lower level of structural information for upright than for inverted faces, in line with behavioral responses and with previous findings of delayed responses to inverted faces in direct recordings of neural activity. Overall, these results support the non-hierarchical view of human cortical face processing and open new perspectives for time-resolved analysis, at the single-subject level, of fMRI data obtained during continuously evolving visual stimulation. Hum Brain Mapp 38:120-139, 2017. © 2016 Wiley Periodicals, Inc.
Komes, Jessica; Schweinberger, Stefan R.; Wiese, Holger
2015-01-01
Previous event-related potential (ERP) research revealed that older relative to younger adults show reduced inversion effects in the N170 (with more negative amplitudes for inverted than upright faces), suggestive of impairments in face perception. However, as these studies used young to middle-aged faces only, this finding may reflect preferential processing of own- relative to other-age faces rather than age-related decline. We conducted an ERP study in which young and older participants categorized young and old upright or inverted faces by age. Stimuli were presented either unfiltered or low-pass filtered at 30, 20, or 10 cycles per image (CPI). Response times revealed larger inversion effects, with slower responses for inverted faces, for young faces in young participants. Older participants did not show a corresponding effect. ERPs yielded a trend toward reduced N170 inversion effects in older relative to younger adults independent of face age. Moreover, larger inversion effects for young relative to old faces were detected, and filtering resulted in smaller N170 amplitudes. The reduced N170 inversion effect in older adults may reflect age-related changes in neural correlates of face perception. A smaller N170 inversion effect for old faces may indicate that facial changes with age hamper early face perception stages. PMID:26441790
Johann N. Bruhn; James J. Wetteroff; Jeanne D. Mihail; Susan Burks
1997-01-01
Armillaria root rot contributes to oak decline in the Ozarks. Three Armillaria species were detected in Ecological Landtypes (ELT's) representing south- to west-facing side slopes (ELT 17), north- to east-facing side slopes (ELT 18), and ridge tops (ELT 11). Armillaria mellea was detected in 91 percent...
Hidden Covariation Detection Produces Faster, Not Slower, Social Judgments
ERIC Educational Resources Information Center
Barker, Lynne A.; Andrade, Jackie
2006-01-01
In P. Lewicki's (1986b) demonstration of hidden covariation detection (HCD), responses of participants were slower to faces that corresponded with a covariation encountered previously than to faces with novel covariations. This slowing contrasts with the typical finding that priming leads to faster responding and suggests that HCD is a unique type…
Neural markers of opposite-sex bias in face processing.
Proverbio, Alice Mado; Riva, Federica; Martin, Eleonora; Zani, Alberto
2010-01-01
Some behavioral and neuroimaging studies suggest that adults prefer to view attractive faces of the opposite sex more than attractive faces of the same sex. However, unlike the other-race face effect (Caldara et al., 2004), little is known regarding the existence of an opposite-/same-sex bias in face processing. In this study, the faces of 130 attractive male and female adults were foveally presented to 40 heterosexual university students (20 men and 20 women) who were engaged in a secondary perceptual task (landscape detection). The automatic processing of face gender was investigated by recording ERPs from 128 scalp sites. Neural markers of opposite- vs. same-sex bias in face processing included larger and earlier centro-parietal N400s in response to faces of the opposite sex and a larger late positivity (LP) to same-sex faces. Analysis of intra-cortical neural generators (swLORETA) showed that facial processing-related (FG, BA37, BA20/21) and emotion-related brain areas (the right parahippocampal gyrus, BA35; uncus, BA36/38; and the cingulate gyrus, BA24) had higher activations in response to opposite- than same-sex faces. The results of this analysis, along with data obtained from ERP recordings, support the hypothesis that both genders process opposite-sex faces differently than same-sex faces. The data also suggest a hemispheric asymmetry in the processing of opposite-/same-sex faces, with the right hemisphere involved in processing same-sex faces and the left hemisphere involved in processing faces of the opposite sex. The data support previous literature suggesting a right lateralization for the representation of self-image and body awareness.
Associative (prosop)agnosia without (apparent) perceptual deficits: a case-study.
Anaki, David; Kaufman, Yakir; Freedman, Morris; Moscovitch, Morris
2007-04-09
In associative agnosia, early perceptual processing of faces or objects is considered to be intact, while the ability to access stored semantic information about the individual face or object is impaired. Recent claims, however, have asserted that associative agnosia is also characterized by deficits at the perceptual level, which are too subtle to be detected by current neuropsychological tests. Thus, the impaired identification of famous faces or common objects in associative agnosia stems from difficulties in extracting the minute perceptual details required to identify a face or an object. In the present study, we report the case of patient DBO, who has a left occipital infarct and shows impaired object and famous-face recognition. Despite his disability, he exhibits a face inversion effect and is able to select a famous face from among non-famous distractors. In addition, his performance is normal on immediate and delayed recognition memory for faces whose external features were deleted. His deficits in face recognition are apparent only when he is required to name a famous face, or to select two faces from among a triad of famous figures based on their semantic relationships (a task which does not require access to names). The nature of his deficits in object perception and recognition is similar to his impairments in the face domain. This pattern of behavior supports the notion that apperceptive and associative agnosia reflect distinct and dissociated deficits, which result from damage to different stages of the face and object recognition process.
False match elimination for face recognition based on SIFT algorithm
NASA Astrophysics Data System (ADS)
Gu, Xuyuan; Shi, Ping; Shao, Meide
2011-06-01
The SIFT (Scale Invariant Feature Transform) is a well-known algorithm used to detect and describe local features in images. It is invariant to image scale and rotation, and robust to noise and illumination changes. In this paper, a novel method for face recognition based on SIFT is proposed, which combines optimization of SIFT, mutual matching, and Progressive Sample Consensus (PROSAC), and can effectively eliminate the false matches of face recognition. Experiments on the ORL face database show that many false matches can be eliminated and a better recognition rate is achieved.
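Of the components this entry combines, the mutual-matching step is the simplest to illustrate. The sketch below implements a generic mutual (cross-check) nearest-neighbour filter over descriptor arrays, assuming Euclidean descriptor distance; it is not the paper's full pipeline, and the PROSAC stage is omitted.

```python
import numpy as np

def mutual_matches(desc_a, desc_b):
    """Keep only matches where a's nearest neighbour in b also has a
    as its nearest neighbour in a (mutual / cross-check matching).
    One-sided matches, a common source of false matches, are dropped."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    a2b = d.argmin(axis=1)          # best b-index for each a
    b2a = d.argmin(axis=0)          # best a-index for each b
    return [(i, j) for i, j in enumerate(a2b) if b2a[j] == i]

# Two descriptors in b each agree with one descriptor in a;
# the third a-descriptor has no mutual partner and is rejected.
a = np.array([[0., 0.], [10., 0.], [5., 5.]])
b = np.array([[0.1, 0.], [9.8, 0.2]])
print(mutual_matches(a, b))   # -> [(0, 0), (1, 1)]
```

In practice the descriptors would be 128-dimensional SIFT vectors; the logic is identical.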
NASA Astrophysics Data System (ADS)
Abdullah, Nurul Azma; Saidi, Md. Jamri; Rahman, Nurul Hidayah Ab; Wen, Chuah Chai; Hamid, Isredza Rahmi A.
2017-10-01
In practice, identification of criminals in Malaysia is done through thumbprint identification. However, this type of identification is constrained, as most criminals nowadays are careful not to leave their thumbprints at the scene. With the advent of security technology, cameras, especially CCTV, have been installed in many public and private areas to provide surveillance. CCTV footage can be used to identify suspects at the scene. However, because little software has been developed to automatically detect the similarity between a photo in the footage and a recorded photo of a criminal, law enforcement still relies on thumbprint identification. In this paper, an automated facial recognition system for a criminal database is proposed using the well-known Principal Component Analysis approach. This system is able to detect and recognize faces automatically, which will help law enforcement detect or recognize a suspect when no thumbprint is present at the scene. The results show that about 80% of input photos can be matched with the template data.
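The Principal Component Analysis approach mentioned here is the classic eigenfaces scheme. A minimal sketch, assuming flattened grayscale images and nearest-neighbour matching in the projected space (the paper's actual preprocessing, database, and match threshold are not specified):

```python
import numpy as np

def eigenfaces(X, k):
    """PCA over flattened face images X (n_samples x n_pixels):
    returns the mean face and the top-k principal components."""
    mu = X.mean(axis=0)
    # SVD of the centred data avoids forming the full pixel covariance.
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:k]

def project(x, mu, comps):
    """Project image(s) onto the eigenface basis."""
    return (x - mu) @ comps.T

def nearest(gallery_weights, probe_weights):
    """Identify a probe by the nearest gallery face in eigenspace."""
    return int(np.linalg.norm(gallery_weights - probe_weights, axis=1).argmin())

rng = np.random.default_rng(0)
gallery = rng.random((6, 64))                # six tiny "faces", 64 pixels each
mu, comps = eigenfaces(gallery, k=3)
weights = project(gallery, mu, comps)
probe = gallery[2] + 0.001 * rng.random(64)  # slightly noisy copy of face 2
print(nearest(weights, project(probe, mu, comps)))   # -> 2
```

A real system would add face detection and alignment before projection, and reject probes whose eigenspace distance exceeds a threshold.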
Correlation based efficient face recognition and color change detection
NASA Astrophysics Data System (ADS)
Elbouz, M.; Alfalou, A.; Brosseau, C.; Alam, M. S.; Qasmi, S.
2013-01-01
Identifying the human face via correlation is a topic attracting widespread interest. At the heart of this technique lies the comparison of an unknown target image to a known reference database of images. However, the color information in the target image remains notoriously difficult to interpret. In this paper, we report a new technique which: (i) is robust against illumination change, (ii) offers discrimination ability to detect color change between faces having similar shape, and (iii) is specifically designed to detect red colored stains (i.e. facial bleeding). We adopt the Vanderlugt correlator (VLC) architecture with a segmented phase filter and we decompose the color target image using normalized red, green, and blue (RGB), and hue, saturation, and value (HSV) scales. We propose a new strategy to effectively utilize color information in signatures for further increasing the discrimination ability. The proposed algorithm has been found to be very efficient for discriminating face subjects with different skin colors, and those having color stains in different areas of the facial image.
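The VLC architecture performs correlation optically; its digital cousin, phase-only correlation, can be sketched in a few lines. This is an illustrative stand-in, not the paper's segmented phase filter or its RGB/HSV colour signatures.

```python
import numpy as np

def phase_correlation(target, reference):
    """Phase-only correlation: the peak location gives the shift of
    the reference pattern inside the target, and the peak height
    measures how well the two patterns match."""
    cross = np.fft.fft2(target) * np.conj(np.fft.fft2(reference))
    cross /= np.abs(cross) + 1e-12        # keep phase information only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    return peak, corr.max()

rng = np.random.default_rng(1)
ref = rng.random((32, 32))
tgt = np.roll(ref, (5, 7), axis=(0, 1))   # reference shifted by (5, 7)
(dy, dx), score = phase_correlation(tgt, ref)
print(dy, dx)   # -> 5 7
```

Discarding the magnitude spectrum is what gives correlators of this family their robustness to illumination change, which is the property (i) claimed in the abstract.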
Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung
2018-01-01
Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases. PMID:29495417
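The local binary pattern operator underlying the handcrafted features can be sketched as follows. This is the basic single-level 3x3 LBP with a histogram feature vector, not the paper's multi-level MLBP variant or its CNN branch.

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 local binary pattern: each pixel's 8 neighbours are
    thresholded against the centre and packed into an 8-bit code."""
    h, w = img.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= ((neigh >= centre).astype(np.uint8) << bit)
    return codes

def lbp_histogram(img):
    """Feature vector: normalised histogram of the LBP codes."""
    h = np.bincount(lbp_codes(img).ravel(), minlength=256)
    return h / h.sum()

img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]], dtype=float)
print(lbp_codes(img))   # single centre pixel -> [[120]]
```

The histogram (here over one pixel, in practice over image blocks) is the kind of vector that would be concatenated with CNN features and fed to the SVM.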
Framework for objective evaluation of privacy filters
NASA Astrophysics Data System (ADS)
Korshunov, Pavel; Melle, Andrea; Dugelay, Jean-Luc; Ebrahimi, Touradj
2013-09-01
Extensive adoption of video surveillance, affecting many aspects of our daily lives, alarms the public about the increasing invasion of personal privacy. To address these concerns, many tools have been proposed for the protection of personal privacy in image and video. However, little is understood regarding the effectiveness of such tools, and especially their impact on the underlying surveillance tasks, leading to a tradeoff between the preservation of privacy offered by these tools and the intelligibility of activities under video surveillance. In this paper, we investigate this privacy-intelligibility tradeoff by proposing an objective framework for the evaluation of privacy filters. We apply the proposed framework to a use case where the privacy of people is protected by obscuring faces, assuming an automated video surveillance system. We used several popular privacy protection filters, such as blurring, pixelization, and masking, and applied them with varying strengths to people's faces from different public datasets of video surveillance footage. Accuracy of a face detection algorithm was used as a measure of intelligibility (a face should be detected to perform a surveillance task), and accuracy of a face recognition algorithm as a measure of privacy (a specific person should not be identified). Under these conditions, after application of an ideal privacy protection tool, an obfuscated face would be visible as a face but would not be correctly identified by the recognition algorithm. The experiments demonstrate that, in general, an increase in the strength of the privacy filters under consideration leads to an increase in privacy (i.e., reduction in recognition accuracy) and to a decrease in intelligibility (i.e., reduction in detection accuracy). Masking also proves to be the most favorable filter across all tested datasets.
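Of the filters evaluated, pixelization is the easiest to sketch. A minimal version, assuming a grayscale image, where the block size plays the role of the filter strength varied in the study:

```python
import numpy as np

def pixelate(img, block):
    """Pixelization privacy filter: replace each block x block tile
    with its mean value (larger block = stronger filter)."""
    h, w = img.shape
    out = img.astype(float).copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = out[y:y + block, x:x + block]
            tile[:] = tile.mean()          # write the mean back in place
    return out

face = np.arange(16, dtype=float).reshape(4, 4)
out = pixelate(face, 2)
print(out[0, 0], out[3, 3])   # -> 2.5 12.5
```

In the paper's framework, the output would then be scored twice: once by a face detector (intelligibility) and once by a face recognizer (privacy).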
The safety helmet detection technology and its application to the surveillance system.
Wen, Che-Yen
2004-07-01
The Automatic Teller Machine (ATM) plays an important role in the modern economy. It provides a fast and convenient way to process transactions between banks and their customers. Unfortunately, it also provides a convenient way for criminals to get illegal money or use stolen ATM cards to extract money from their victims' accounts. For safety reasons, each ATM has a surveillance system to record customers' face information. However, when criminals use an ATM to withdraw money illegally, they usually hide their faces with something (in Taiwan, criminals usually use safety helmets to block their faces) to prevent the surveillance system from recording their face information, which decreases the efficiency of the surveillance system. In this paper, we propose a circle/circular arc detection method based upon the modified Hough transform, and apply it to the detection of safety helmets for the surveillance systems of ATMs. Since the safety helmet location will be within the set of the obtainable circles/circular arcs (if any exist), we use geometric features to verify whether any safety helmet exists in the set. The proposed method can be used to help surveillance systems record a customer's face information more precisely. If customers wear safety helmets to block their faces, the system can send a message to remind them to take off their helmets. Besides this, the method can be applied to the surveillance systems of banks by providing an early warning safeguard when any "customer" or "intruder" uses a safety helmet to prevent his/her face information from being recorded by the surveillance system. This will make the surveillance system more useful. Real images are used to analyze the performance of the proposed method.
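The circle detection step can be illustrated with basic Hough voting for a known radius; the paper's modified Hough transform and geometric helmet verification are not reproduced here.

```python
import numpy as np

THETAS = np.linspace(0, 2 * np.pi, 64, endpoint=False)

def hough_circles(edge_points, radius, shape):
    """Each edge point votes for all candidate circle centres at the
    given radius; the accumulator peak is the most likely centre."""
    acc = np.zeros(shape, dtype=int)
    for (y, x) in edge_points:
        cy = np.round(y - radius * np.sin(THETAS)).astype(int)
        cx = np.round(x - radius * np.cos(THETAS)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(acc.argmax(), acc.shape)

# Synthetic edge points on a circle of radius 10 centred at (20, 20),
# standing in for the edge map of a helmet outline.
pts = [(20 + 10 * np.sin(a), 20 + 10 * np.cos(a)) for a in THETAS]
cy0, cx0 = hough_circles(pts, radius=10, shape=(40, 40))
print(cy0, cx0)   # -> 20 20
```

A full implementation would also sweep over a range of radii (a 3D accumulator) and then apply the paper's geometric checks to the detected circles.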
Powell, Jane; Letson, Susan; Davidoff, Jules; Valentine, Tim; Greenwood, Richard
2008-04-01
Twenty patients with impairments of face recognition, in the context of a broader pattern of cognitive deficits, were administered three new training procedures derived from contemporary theories of face processing to enhance their learning of new faces: semantic association (being given additional verbal information about the to-be-learned faces); caricaturing (presentation of caricatured versions of the faces during training and veridical versions at recognition testing); and part recognition (focusing patients on distinctive features during the training phase). Using a within-subjects design, each training procedure was applied to a different set of 10 previously unfamiliar faces and entailed six presentations of each face. In a "simple exposure" control procedure (SE), participants were given six presentations of another set of faces using the same basic protocol but with no further elaboration. Order of the four procedures was counterbalanced, and each condition was administered on a different day. A control group of 12 patients with similar levels of face recognition impairment were trained on all four sets of faces under SE conditions. Compared to the SE condition, all three training procedures resulted in more accurate discrimination between the 10 studied faces and 10 distractor faces in a post-training recognition test. This did not reflect any intrinsic lesser memorability of the faces used in the SE condition, as evidenced by the comparable performance across face sets by the control group. At the group level, the three experimental procedures were of similar efficacy, and associated cognitive deficits did not predict which technique would be most beneficial to individual patients; however, there was limited power to detect such associations. Interestingly, a pure prosopagnosic patient who was tested separately showed benefit only from the part recognition technique. 
Possible mechanisms for the observed effects, and implications for rehabilitation, are discussed.
Automated Detection of Actinic Keratoses in Clinical Photographs
Hames, Samuel C.; Sinnya, Sudipta; Tan, Jean-Marie; Morze, Conrad; Sahebian, Azadeh; Soyer, H. Peter; Prow, Tarl W.
2015-01-01
Background Clinical diagnosis of actinic keratosis is known to have intra- and inter-observer variability, and there is currently no non-invasive and objective measure to diagnose these lesions. Objective The aim of this pilot study was to determine if automatically detecting and circumscribing actinic keratoses in clinical photographs is feasible. Methods Photographs of the face and dorsal forearms were acquired in 20 volunteers from two groups: the first with at least one actinic keratosis present on the face and each arm, the second with no actinic keratoses. The photographs were automatically analysed using colour space transforms and morphological features to detect erythema. The automated output was compared with a senior consultant dermatologist's assessment of the photographs, including the intra-observer variability. Performance was assessed by the correlation between the total lesions detected by the automated method and by the dermatologist, and by whether the individual lesions detected were in the same location as the dermatologist-identified lesions. Additionally, the ability to limit false positives was assessed by automatic assessment of the photographs from the no-actinic-keratosis group in comparison with the high-actinic-keratosis group. Results The correlation between the automatic and dermatologist counts was 0.62 on the face and 0.51 on the arms, compared to the dermatologist's intra-observer variation of 0.83 and 0.93 for the same. Sensitivity of automatic detection was 39.5% on the face and 53.1% on the arms. Positive predictive values were 13.9% on the face and 39.8% on the arms. Significantly more lesions (p<0.0001) were detected in the high actinic keratosis group compared to the no actinic keratosis group. Conclusions The proposed method was inferior to assessment by the dermatologist in terms of sensitivity and positive predictive value.
However, this pilot study used only a single simple feature and was still able to achieve a detection sensitivity of 53.1% on the arms. This suggests that image analysis is a feasible avenue of investigation for overcoming variability in clinical assessment. Future studies should focus on more sophisticated features to improve sensitivity for actinic keratoses without erythema and to limit false positives associated with the anatomical structures of the face. PMID:25615930
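The erythema-detection idea, flag pixels where red dominates and then group them into lesions, can be sketched as below. The threshold and the row-run lesion counter are simplifying assumptions, not the study's actual colour space transforms or morphological features.

```python
import numpy as np

def erythema_mask(rgb, thresh=0.15):
    """Flag pixels whose red channel clearly dominates the mean of
    green and blue (a crude redness / erythema detector)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (r - (g + b) / 2) > thresh

def count_lesions(mask):
    """Count connected runs of flagged pixels along each row -- a
    stand-in for full morphological connected-component labelling."""
    runs = 0
    for row in mask:
        padded = np.concatenate(([False], row))
        runs += int(np.sum(~padded[:-1] & row))   # count False -> True edges
    return runs

skin = np.full((1, 7, 3), 0.6)       # uniform skin-toned strip
skin[0, 1] = [0.9, 0.5, 0.5]         # first red lesion
skin[0, 4] = [0.9, 0.5, 0.5]         # second red lesion
print(count_lesions(erythema_mask(skin)))   # -> 2
```

Comparing such counts against a dermatologist's counts per photograph is essentially how the study's correlations of 0.62 (face) and 0.51 (arms) were computed.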
Knife blade as a facial foreign body.
Gardner, P A; Righi, P; Shahbahrami, P B
1997-08-01
This case demonstrates the unpredictability of foreign bodies in the face. The retained knife blade eluded detection on two separate examinations. The essential components to making a correct diagnosis of a foreign body following a stabbing to the face include a thorough review of the mechanism of injury, a complete head and neck examination, a high index of suspicion, and plain radiographs of the face.
Romani, Maria; Vigliante, Miriam; Faedda, Noemi; Rossetti, Serena; Pezzuti, Lina; Guidetti, Vincenzo; Cardona, Francesco
2018-06-01
This review focuses on facial recognition abilities in children and adolescents with attention deficit hyperactivity disorder (ADHD). A systematic review, using PRISMA guidelines, was conducted to identify original articles published prior to May 2017 pertaining to memory, face recognition, affect recognition, facial expression recognition and recall of faces in children and adolescents with ADHD. The qualitative synthesis shows a particular focus of the research on facial affect recognition, without similar attention being paid to the structural encoding of faces. In this review, we further investigate facial recognition abilities in children and adolescents with ADHD, providing a synthesis of the results observed in the literature, examining the face recognition tasks used to assess face processing abilities in ADHD, and identifying aspects not yet explored. Copyright © 2018 Elsevier Ltd. All rights reserved.
Interactive display system having a digital micromirror imaging device
Veligdan, James T.; DeSanto, Leonard; Kaull, Lisa; Brewster, Calvin
2006-04-11
A display system includes a waveguide optical panel having an inlet face and an opposite outlet face. A projector cooperates with a digital imaging device, e.g. a digital micromirror imaging device, for projecting an image through the panel for display on the outlet face. The imaging device includes an array of mirrors tiltable between opposite display and divert positions. The display positions reflect an image light beam from the projector through the panel for display on the outlet face. The divert positions divert the image light beam away from the panel, and are additionally used for reflecting a probe light beam through the panel toward the outlet face. Covering a spot on the panel, e.g. with a finger, reflects the probe light beam back through the panel toward the inlet face for detection thereat and providing interactive capability.
Detecting Emotional Expression in Face-to-Face and Online Breast Cancer Support Groups
ERIC Educational Resources Information Center
Liess, Anna; Simon, Wendy; Yutsis, Maya; Owen, Jason E.; Piemme, Karen Altree; Golant, Mitch; Giese-Davis, Janine
2008-01-01
Accurately detecting emotional expression in women with primary breast cancer participating in support groups may be important for therapists and researchers. In 2 small studies (N = 20 and N = 16), the authors examined whether video coding, human text coding, and automated text analysis provided consistent estimates of the level of emotional…
Disordered high-frequency oscillation in face processing in schizophrenia patients
Liu, Miaomiao; Pei, Guangying; Peng, Yinuo; Wang, Changming; Yan, Tianyi; Wu, Jinglong
2018-01-01
Schizophrenia is a complex disorder characterized by marked social dysfunctions, but the neural mechanism underlying this deficit is unknown. To investigate whether face-specific perceptual processes are influenced in schizophrenia patients, both face detection and configural analysis were assessed in normal individuals and schizophrenia patients by recording electroencephalogram (EEG) data. Here, a face processing model was built based on frequency oscillations, and the evoked power (theta, alpha, and beta bands) and the induced power (gamma band) were recorded while the subjects passively viewed face and nonface images presented in upright and inverted orientations. The healthy adults showed a significant face-specific effect in the alpha, beta, and gamma bands, and an inversion effect was observed in the gamma band in the occipital lobe and right temporal lobe. Importantly, the schizophrenia patients showed face-specific deficits in the low-frequency beta and gamma bands, and the face inversion effect in the gamma band was absent from the occipital lobe. Together, these results reveal that face-specific processing is disrupted in patients through disordered high-frequency EEG oscillations, providing additional evidence for future studies of the underlying neural mechanisms and a potential diagnostic marker. PMID:29419668
Afraz, Arash; Boyden, Edward S.; DiCarlo, James J.
2015-01-01
Neurons that respond more to images of faces over nonface objects were identified in the inferior temporal (IT) cortex of primates three decades ago. Although it is hypothesized that perceptual discrimination between faces depends on the neural activity of IT subregions enriched with “face neurons,” such a causal link has not been directly established. Here, using optogenetic and pharmacological methods, we reversibly suppressed the neural activity in small subregions of IT cortex of macaque monkeys performing a facial gender-discrimination task. Each type of intervention independently demonstrated that suppression of IT subregions enriched in face neurons induced a contralateral deficit in face gender-discrimination behavior. The same neural suppression of other IT subregions produced no detectable change in behavior. These results establish a causal link between the neural activity in IT face neuron subregions and face gender-discrimination behavior. Also, the demonstration that brief neural suppression of specific spatial subregions of IT induces behavioral effects opens the door for applying the technical advantages of optogenetics to a systematic attack on the causal relationship between IT cortex and high-level visual perception. PMID:25953336
Early detection of tooth wear by en-face optical coherence tomography
NASA Astrophysics Data System (ADS)
Mărcăuteanu, Corina; Negrutiu, Meda; Sinescu, Cosmin; Demjan, Eniko; Hughes, Mike; Bradu, Adrian; Dobre, George; Podoleanu, Adrian G.
2009-02-01
Excessive dental wear (pathological attrition and/or abfractions) is a frequent complication in bruxing patients. The parafunction causes heavy occlusal loads. The aim of this study is the early detection and monitoring of occlusal overload in bruxing patients. En-face optical coherence tomography was used for investigating and imaging several extracted teeth with normal morphology, derived from patients with active bruxism and from subjects without the parafunction. We found a characteristic pattern of enamel cracks in patients with first-degree bruxism and normal tooth morphology. We conclude that en-face optical coherence tomography is a promising non-invasive alternative technique for the early detection of occlusal overload, before it becomes clinically evident as tooth wear.
Dimitriou, D; Leonard, H C; Karmiloff-Smith, A; Johnson, M H; Thomas, M S C
2015-05-01
Configural processing in face recognition is a sensitivity to the spacing between facial features. It has been argued both that its presence represents a high level of expertise in face recognition, and also that it is a developmentally vulnerable process. We report a cross-syndrome investigation of the development of configural face recognition in school-aged children with autism, Down syndrome and Williams syndrome compared with a typically developing comparison group. Cross-sectional trajectory analyses were used to compare configural and featural face recognition utilising the 'Jane faces' task. Trajectories were constructed linking featural and configural performance either to chronological age or to different measures of mental age (receptive vocabulary, visuospatial construction), as well as the Benton face recognition task. An emergent inversion effect across age for detecting configural but not featural changes in faces was established as the marker of typical development. Children from clinical groups displayed atypical profiles that differed across all groups. We discuss the implications for the nature of face processing within the respective developmental disorders, and how the cross-sectional syndrome comparison informs the constraints that shape the typical development of face recognition. © 2014 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.
Effects of configural processing on the perceptual spatial resolution for face features.
Namdar, Gal; Avidan, Galia; Ganel, Tzvi
2015-11-01
Configural processing governs human perception across various domains, including face perception. An established marker of configural face perception is the face inversion effect, in which performance is typically better for upright compared to inverted faces. In two experiments, we tested whether configural processing could influence basic visual abilities such as perceptual spatial resolution (i.e., the ability to detect spatial visual changes). Face-related perceptual spatial resolution was assessed by measuring the just noticeable difference (JND) to subtle positional changes between specific features in upright and inverted faces. The results revealed a robust inversion effect for spatial sensitivity to configural-based changes, such as the distance between the mouth and the nose, or the distance between the eyes and the nose. Critically, spatial resolution for face features within the region of the eyes (e.g., the interocular distance between the eyes) was not affected by inversion, suggesting that the eye region operates as a separate 'gestalt' unit which is relatively immune to manipulations that would normally hamper configural processing. Together these findings suggest that face orientation modulates fundamental psychophysical abilities, including spatial resolution. Furthermore, they indicate that classic psychophysical methods can be used as a valid measure of configural face processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
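JND measurements of this kind are typically obtained with adaptive procedures; below is a minimal 1-up/2-down staircase sketch (converging near 70.7% correct), with a deterministic toy observer standing in for a participant. The starting level, step size, and observer threshold are illustrative assumptions, not details from the study.

```python
def staircase(respond, start=10.0, step=1.0, trials=60):
    """1-up/2-down staircase: two consecutive correct responses lower the
    positional offset (harder), one error raises it (easier)."""
    level, correct_run, levels = start, 0, []
    for _ in range(trials):
        levels.append(level)
        if respond(level):          # True = change detected correctly
            correct_run += 1
            if correct_run == 2:    # two in a row -> smaller offset
                level = max(step, level - step)
                correct_run = 0
        else:                       # miss -> larger offset
            level += step
            correct_run = 0
    return levels

# Deterministic toy observer with a true threshold of 4 offset units
levels = staircase(lambda offset: offset >= 4)
```

After the initial descent, the track oscillates around the simulated threshold; in practice the threshold estimate is taken from the mean of the last reversals.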
Development of Preferences for Differently Aged Faces of Different Races.
Heron-Delaney, Michelle; Quinn, Paul C; Damon, Fabrice; Lee, Kang; Pascalis, Olivier
2018-02-01
Children's experiences with differently aged faces change in the course of development. During infancy, most faces encountered are adult; however, as children mature, exposure to child faces becomes more extensive. Does this change in experience influence preference for differently aged faces? The preferences of children for adult versus child, and adult versus infant faces were investigated. Caucasian 3- to 6-year-olds and adults were presented with adult/child and adult/infant face pairs which were either Caucasian or Asian (race consistent within pairs). Younger children (3 to 4 years) preferred adults over children, whereas older children (5 to 6 years) preferred children over adults. This preference was only detected for Caucasian faces. These data support a "here and now" model of the development of face age processing from infancy to childhood. In particular, the findings suggest that growing experience with peers influences age preferences and that race impacts on these preferences. In contrast, adults preferred infants and children over adults when the faces were Caucasian or Asian, suggesting an increasing influence of a baby schema, and a decreasing influence of race. The different preferences of younger children, older children, and adults also suggest discontinuity and the possibility of different mechanisms at work during different developmental periods.
Hill, Kendra M; Brözel, Volker S; Heiberger, Greg A
2014-05-01
Current research supports the role of metacognitive strategies to enhance reading comprehension. This study measured the effectiveness of online versus face-to-face metacognitive and active reading skills lessons introduced by Biology faculty to college students in a nonmajors introductory biology course. These lessons were delivered in two lectures either online (Group 1: N = 154) or face to face (Group 2: N = 152). Previously validated pre- and post- surveys were used to collect and compare data by paired and independent t-test analysis (α = 0.05). Pre- and post- survey data showed a statistically significant improvement in both groups in metacognitive awareness (p = 0.001, p = 0.003, respectively) and reading comprehension (p < 0.001 for both groups). When comparing the delivery mode of these lessons, no difference was detected between the online and face-to-face instruction for metacognitive awareness (pre- p = 0.619, post- p = 0.885). For reading comprehension, no difference in gains was demonstrated between online and face-to-face (p = 0.381); however, differences in pre- and post- test scores were measured (pre- p = 0.005, post- p = 0.038). This study suggests that biology instructors can easily introduce effective metacognitive awareness and active reading lessons into their course, either through online or face-to-face instruction.
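The pre/post comparisons above rest on paired t-tests (α = 0.05); a stdlib-only sketch of the paired t statistic, using hypothetical survey scores rather than the study's data:

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic for pre/post scores from the same participants."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    se = stdev(diffs) / math.sqrt(n)  # standard error of the mean difference
    return mean(diffs) / se           # t with n-1 degrees of freedom

# Hypothetical pre/post metacognitive-awareness scores for four participants
pre  = [10, 12, 11, 14]
post = [11, 14, 14, 18]
t = paired_t(pre, post)
```

In practice one would look the t value up against the t distribution with n-1 degrees of freedom (or use a statistics library) to obtain the p value.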
Probing Anisotropic Surface Properties of Molybdenite by Direct Force Measurements.
Lu, Zhenzhen; Liu, Qingxia; Xu, Zhenghe; Zeng, Hongbo
2015-10-27
Probing the anisotropic surface properties of layer-type minerals is fundamentally important in understanding their surface charge and wettability for a variety of applications. In this study, the surface properties of the face and the edge surfaces of natural molybdenite (MoS2) were investigated by direct surface force measurements using an atomic force microscope (AFM). The interaction forces between the AFM tip (Si3N4) and the face or edge surface of molybdenite were measured in 10 mM NaCl solutions at various pHs. The force profiles were well fitted with classical DLVO (Derjaguin-Landau-Verwey-Overbeek) theory to determine the surface potentials of the face and the edge surfaces of molybdenite. The surface potentials of both the face and edge surfaces become more negative with increasing pH. At neutral and alkaline conditions, the edge surface exhibits a more negative surface potential than the face surface, possibly due to molybdate and hydromolybdate ions on the edge surface. The point of zero charge (PZC) of the edge surface was determined to be around pH 3, while the PZC of the face surface was not observed in the range of pH 3-11. The interaction forces between an octadecyltrichlorosilane-treated AFM tip (OTS-tip) and the face or edge surface of molybdenite were also measured at various pHs to study the wettability of molybdenite surfaces. An attractive force between the OTS-tip and the face surface was detected. These force profiles were well fitted by considering DLVO forces and an additional hydrophobic force. Our results suggest the hydrophobic character of the face surface of molybdenite. In contrast, no attractive force between the OTS-tip and the edge surface was detected. This is the first study to directly measure the surface charge and wettability of the pristine face and edge surfaces of molybdenite through surface force measurements.
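The DLVO fitting mentioned above combines an exponentially decaying electrostatic double-layer term with a power-law van der Waals term; a minimal sketch in the weak-overlap approximation follows. The potentials, Debye length, and Hamaker constant below are illustrative assumptions, not the paper's fitted values.

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
EPS_R = 78.5       # relative permittivity of water

def dlvo_force_over_radius(D, psi1, psi2, kappa, hamaker):
    """DLVO tip-sample force normalized by tip radius, in N/m.

    Weak-overlap electrostatic double-layer term plus sphere-plate
    van der Waals term. D in m, potentials in V, kappa in 1/m
    (inverse Debye length), hamaker in J.
    """
    edl = 2 * math.pi * EPS_R * EPS0 * kappa * psi1 * psi2 * math.exp(-kappa * D)
    vdw = -hamaker / (6 * D**2)
    return edl + vdw

# Illustrative values: 10 mM NaCl gives a Debye length near 3 nm (kappa ~ 3.3e8 /m)
profile = [dlvo_force_over_radius(d * 1e-9, -0.05, -0.06, 3.3e8, 1e-20)
           for d in range(1, 31)]
```

With two like-charged (negative) surfaces the double-layer term is repulsive at larger separations, while van der Waals attraction dominates at very small separations.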
Meconi, Federica; Luria, Roy; Sessa, Paola
2014-12-01
When facing strangers, one of the first evaluations people perform is to implicitly assess their trustworthiness. However, the underlying processes supporting trustworthiness appraisal are poorly understood. We hypothesized that visual working memory (VWM) maintains online face representations that are sensitive to physical cues of trustworthiness, and that differences among individuals in representing untrustworthy faces are associated with individual differences in anxiety. Participants performed a change detection task that required encoding and maintaining for a short interval the identity of one face parametrically manipulated to be either trustworthy or untrustworthy. The sustained posterior contralateral negativity (SPCN), an event-related component (ERP) time-locked to the onset of the face, was used to index the resolution of face representations in VWM. Results revealed greater SPCN amplitudes for trustworthy faces when compared with untrustworthy faces, indicating that VWM is sensitive to physical cues of trustworthiness, even in the absence of explicit trustworthiness appraisal. In addition, differences in SPCN amplitude between trustworthy and untrustworthy faces correlated with participants' anxiety, indicating that healthy college students with sub-clinical high anxiety levels represented untrustworthy faces in greater detail compared with students with sub-clinical low anxiety levels. This pattern of findings is discussed in terms of the high flexibility of aversive/avoidance and appetitive/approach motivational systems. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Robust 3D face landmark localization based on local coordinate coding.
Song, Mingli; Tao, Dacheng; Sun, Shengpeng; Chen, Chun; Maybank, Stephen J
2014-12-01
In the 3D facial animation and synthesis community, input faces are usually required to be labeled with a set of landmarks for parameterization. Because of variations in pose, expression and resolution, automatic 3D face landmark localization remains a challenge. In this paper, a novel landmark localization approach is presented. The approach is based on local coordinate coding (LCC) and consists of two stages. In the first stage, we perform nose detection, relying on the fact that the nose shape is usually invariant under variations in pose, expression, and resolution. Then, we use the iterative closest points algorithm to find a 3D affine transformation that aligns the input face to a reference face. In the second stage, we perform resampling to build correspondences between the input 3D face and the training faces. Then, an LCC-based localization algorithm is proposed to obtain the positions of the landmarks in the input face. Experimental results show that the proposed method is comparable to state-of-the-art methods in terms of its robustness, flexibility, and accuracy.
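The first-stage alignment uses the iterative closest points (ICP) algorithm; below is a minimal translation-only ICP sketch (a full implementation would also estimate rotation, e.g. via an SVD-based Kabsch/Umeyama step), run on toy point sets rather than real face scans:

```python
def nearest(p, cloud):
    """Nearest neighbour of point p in the reference cloud (brute force)."""
    return min(cloud, key=lambda q: sum((a - b) ** 2 for a, b in zip(p, q)))

def icp_translation(src, ref, iters=20):
    """Translation-only ICP: repeatedly match each source point to its
    nearest reference point, then shift the source by the mean offset."""
    shift = [0.0, 0.0, 0.0]
    pts = [list(p) for p in src]
    for _ in range(iters):
        pairs = [(p, nearest(p, ref)) for p in pts]
        step = [sum(q[k] - p[k] for p, q in pairs) / len(pairs) for k in range(3)]
        pts = [[p[k] + step[k] for k in range(3)] for p in pts]
        shift = [shift[k] + step[k] for k in range(3)]
    return shift

# A toy "face" offset by (1, 2, 3) should be pulled back onto the reference
ref = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
src = [(x + 1, y + 2, z + 3) for x, y, z in ref]
offset = icp_translation(src, ref)
```

Real 3D scans would additionally require outlier rejection and a spatial index (e.g. a k-d tree) for the nearest-neighbour search.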
Detecting Superior Face Recognition Skills in a Large Sample of Young British Adults
Bobak, Anna K.; Pampoulov, Philip; Bate, Sarah
2016-01-01
The Cambridge Face Memory Test Long Form (CFMT+) and Cambridge Face Perception Test (CFPT) are typically used to assess the face processing ability of individuals who believe they have superior face recognition skills. Previous large-scale studies have presented norms for the CFPT but not the CFMT+. However, previous research has also highlighted the necessity for establishing country-specific norms for these tests, indicating that norming data is required for both tests using young British adults. The current study addressed this issue in 254 British participants. In addition to providing the first norm for performance on the CFMT+ in any large sample, we also report the first UK specific cut-off for superior face recognition on the CFPT. Further analyses identified a small advantage for females on both tests, and only small associations between objective face recognition skills and self-report measures. A secondary aim of the study was to examine the relationship between trait or social anxiety and face processing ability, and no associations were noted. The implications of these findings for the classification of super-recognizers are discussed. PMID:27713706
Large Capacity of Conscious Access for Incidental Memories in Natural Scenes.
Kaunitz, Lisandro N; Rowe, Elise G; Tsuchiya, Naotsugu
2016-09-01
When searching a crowd, people can detect a target face only by direct fixation and attention. Once the target is found, it is consciously experienced and remembered, but what is the perceptual fate of the fixated nontarget faces? Whereas introspection suggests that one may remember nontargets, previous studies have proposed that almost no memory should be retained. Using a gaze-contingent paradigm, we asked subjects to visually search for a target face within a crowded natural scene and then tested their memory for nontarget faces, as well as their confidence in those memories. Subjects remembered up to seven fixated, nontarget faces with more than 70% accuracy. Memory accuracy was correlated with trial-by-trial confidence ratings, which implies that the memory was consciously maintained and accessed. When the search scene was inverted, no more than three nontarget faces were remembered. These findings imply that incidental memory for faces, such as those recalled by eyewitnesses, is more reliable than is usually assumed. © The Author(s) 2016.
Is fear perception special? Evidence at the level of decision-making and subjective confidence.
Koizumi, Ai; Mobbs, Dean; Lau, Hakwan
2016-11-01
Fearful faces are believed to be prioritized in visual perception. However, it is unclear whether the processing of low-level facial features alone can facilitate such prioritization or whether higher-level mechanisms also contribute. We examined potential biases for fearful face perception at the levels of perceptual decision-making and perceptual confidence. We controlled for lower-level visual processing capacity by titrating luminance contrasts of backward masks, and the emotional intensity of fearful, angry and happy faces. Under these conditions, participants showed liberal biases in perceiving a fearful face, in both detection and discrimination tasks. This effect was stronger among individuals with reduced density in dorsolateral prefrontal cortex, a region linked to perceptual decision-making. Moreover, participants reported higher confidence when they accurately perceived a fearful face, suggesting that fearful faces may have privileged access to consciousness. Together, the results suggest that mechanisms in the prefrontal cortex contribute to making fearful face perception special. © The Author (2016). Published by Oxford University Press.
Baker, Lewis J; Levin, Daniel T
2016-12-01
Levin and Banaji (Journal of Experimental Psychology: General, 135, 501-512, 2006) reported a lightness illusion in which participants appeared to perceive Black faces to be darker than White faces, even though the faces were matched for overall brightness and contrast. Recently, this finding was challenged by Firestone and Scholl (Psychonomic Bulletin and Review, 2014), who argued that the nominal illusion remained even when the faces were blurred so as to make their race undetectable, and concluded that uncontrolled perceptual differences between the stimulus faces drove at least some observations of the original distortion effect. In this paper we report that measures of race perception used by Firestone and Scholl were insufficiently sensitive. We demonstrate that a forced choice race-identification task not only reveals that participants could detect the race of the blurred faces but also that participants' lightness judgments often aligned with their assignment of race.
Automatically Log Off Upon Disappearance of Facial Image
2005-03-01
This report describes a capability to automatically log off a PC when the user's face disappears for an adjustable time interval, so as to ensure that the user logged onto the system remains the same person. A brief overview of face detection technologies and available facial recognition products is provided, with particular attention to a neural network-based face detection approach.
Mapping multisensory parietal face and body areas in humans.
Huang, Ruey-Song; Chen, Ching-fu; Tran, Alyssa T; Holstein, Katie L; Sereno, Martin I
2012-10-30
Detection and avoidance of impending obstacles is crucial to preventing head and body injuries in daily life. To safely avoid obstacles, locations of objects approaching the body surface are usually detected via the visual system and then used by the motor system to guide defensive movements. Mediating between visual input and motor output, the posterior parietal cortex plays an important role in integrating multisensory information in peripersonal space. We used functional MRI to map parietal areas that see and feel multisensory stimuli near or on the face and body. Tactile experiments using full-body air-puff stimulation suits revealed somatotopic areas of the face and multiple body parts forming a higher-level homunculus in the superior posterior parietal cortex. Visual experiments using wide-field looming stimuli revealed retinotopic maps that overlap with the parietal face and body areas in the postcentral sulcus at the most anterior border of the dorsal visual pathway. Starting at the parietal face area and moving medially and posteriorly into the lower-body areas, the median of visual polar-angle representations in these somatotopic areas gradually shifts from near the horizontal meridian into the lower visual field. These results suggest the parietal face and body areas fuse multisensory information in peripersonal space to guard an individual from head to toe.
Veligdan, James T.
2004-12-21
A display scanner includes an optical panel having a plurality of stacked optical waveguides. The waveguides define an inlet face at one end and a screen at an opposite end, with each waveguide having a core laminated between cladding. A projector projects a scan beam of light into the panel inlet face for transmission from the screen as a scan line to scan a barcode. A light sensor at the inlet face detects a return beam reflected from the barcode into the screen. A decoder decodes the return beam detected by the sensor for reading the barcode. In an exemplary embodiment, the optical panel also displays a visual image thereon.
Saito, Atsuko; Hamada, Hiroki; Kikusui, Takefumi; Mogi, Kazutaka; Nagasawa, Miho; Mitsui, Shohei; Higuchi, Takashi; Hasegawa, Toshikazu; Hiraki, Kazuo
2014-01-01
The neuropeptide oxytocin plays a central role in prosocial and parental behavior in non-human mammals as well as humans. It has been suggested that oxytocin may affect visual processing of infant faces and emotional reaction to infants. Healthy male volunteers (N = 13) were tested for their ability to detect infant or adult faces among adult or infant faces (facial visual search task). Urine samples were collected from all participants before the study to measure the concentration of oxytocin. Urinary oxytocin positively correlated with performance in the facial visual search task. However, task performance and its correlation with oxytocin concentration did not differ between infant faces and adult faces. Our data suggests that endogenous oxytocin is related to facial visual cognition, but does not promote infant-specific responses in unmarried men who are not fathers.
Scrambling for anonymous visual communications
NASA Astrophysics Data System (ADS)
Dufaux, Frederic; Ebrahimi, Touradj
2005-08-01
In this paper, we present a system for anonymous visual communications. The target application is an anonymous video chat. The system identifies faces in the video sequence by means of face detection or skin detection, and the corresponding regions are subsequently scrambled. We investigate several approaches for scrambling, either in the image domain or in the transform domain. Experimental results show the effectiveness of the proposed system.
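The image-domain scrambling step can be illustrated by pixelating a detected face region; here is a stdlib-only sketch on a grayscale image held as a list of rows. The bounding box is assumed to come from an upstream face or skin detector, which is not reimplemented here.

```python
def pixelate_region(img, x, y, w, h, block=2):
    """Scramble a detected face region by replacing each block x block
    tile with its mean intensity (simple image-domain scrambling)."""
    out = [row[:] for row in img]  # leave the input image untouched
    for by in range(y, y + h, block):
        for bx in range(x, x + w, block):
            tile = [(r, c) for r in range(by, min(by + block, y + h))
                           for c in range(bx, min(bx + block, x + w))]
            mean = sum(out[r][c] for r, c in tile) // len(tile)
            for r, c in tile:
                out[r][c] = mean
    return out

# 4x4 toy image; suppose a detector returned the face box (x=0, y=0, w=2, h=2)
img = [[0, 10, 3, 3],
       [20, 30, 3, 3],
       [3, 3, 3, 3],
       [3, 3, 3, 3]]
anon = pixelate_region(img, 0, 0, 2, 2)
```

Unlike this pixelation, the transform-domain variants described in the paper scramble coefficients so that the operation can be made reversible for authorized viewers.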
Eye coding mechanisms in early human face event-related potentials.
Rousselet, Guillaume A; Ince, Robin A A; van Rijsbergen, Nicola J; Schyns, Philippe G
2014-11-10
In humans, the N170 event-related potential (ERP) is an integrated measure of cortical activity that varies in amplitude and latency across trials. Researchers often conjecture that N170 variations reflect cortical mechanisms of stimulus coding for recognition. Here, to settle the conjecture and understand cortical information processing mechanisms, we unraveled the coding function of N170 latency and amplitude variations in possibly the simplest socially important natural visual task: face detection. On each experimental trial, 16 observers saw face and noise pictures sparsely sampled with small Gaussian apertures. Reverse-correlation methods coupled with information theory revealed that the presence of the eye specifically covaries with behavioral and neural measurements: the left eye strongly modulates reaction times and lateral electrodes represent mainly the presence of the contralateral eye during the rising part of the N170, with maximum sensitivity before the N170 peak. Furthermore, single-trial N170 latencies code more about the presence of the contralateral eye than N170 amplitudes and early latencies are associated with faster reaction times. The absence of these effects in control images that did not contain a face refutes alternative accounts based on retinal biases or allocation of attention to the eye location on the face. We conclude that the rising part of the N170, roughly 120-170 ms post-stimulus, is a critical time-window in human face processing mechanisms, reflecting predominantly, in a face detection task, the encoding of a single feature: the contralateral eye. © 2014 ARVO.
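The information-theoretic reverse-correlation analysis asks how much single-trial N170 measures tell us about the presence of the contralateral eye; a toy stdlib-only sketch of discrete mutual information, over hypothetical trial labels rather than the study's EEG data:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits, estimated from paired discrete observations."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Hypothetical trials: 1 = contralateral eye visible; response = N170 latency bin
eye     = [1, 1, 1, 1, 0, 0, 0, 0]
latency = ["early", "early", "early", "early", "late", "late", "late", "late"]
bits = mutual_information(eye, latency)
```

Two perfectly coupled equiprobable binary variables carry 1 bit; real single-trial estimates are far smaller and require bias correction for limited sampling.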
Colour detection thresholds in faces and colour patches.
Tan, Kok Wei; Stephen, Ian D
2013-01-01
Human facial skin colour reflects individuals' underlying health (Stephen et al 2011 Evolution & Human Behavior 32 216-227); and enhanced facial skin CIELab b* (yellowness), a* (redness), and L* (lightness) are perceived as healthy (also Stephen et al 2009a International Journal of Primatology 30 845-857). Here, we examine Malaysian Chinese participants' detection thresholds for CIELab L* (lightness), a* (redness), and b* (yellowness) colour changes in Asian, African, and Caucasian faces and skin-coloured patches. Twelve face photos and three skin-coloured patches were transformed to produce four pairs of images of each individual face and colour patch with different amounts of red, yellow, or lightness, from very subtle (deltaE = 1.2) to quite large differences (deltaE = 9.6). Participants were asked to decide which of sequentially displayed, paired same-face images or colour patches was lighter, redder, or yellower. Changes in facial redness, followed by changes in yellowness, were more easily discriminated than changes in luminance. However, visual sensitivity was not greater for redness and yellowness in nonface stimuli, suggesting that red facial skin colour is especially salient. Participants were also significantly better at recognizing colour differences in own-race (Asian) and Caucasian faces than in African faces, suggesting the existence of a cross-race effect in discriminating facial colours. Humans' colour vision may have been selected for skin colour signalling (Changizi et al 2006 Biology Letters 2 217-221), enabling individuals to perceive subtle changes in skin colour that reflect health and emotional status.
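The deltaE magnitudes quoted above (1.2 to 9.6) are colour differences in CIELab; under the original CIE76 definition, deltaE is simply the Euclidean distance in (L*, a*, b*). A minimal sketch, with an illustrative (assumed) skin-tone coordinate:

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in CIELab space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Illustrative skin-tone coordinate shifted by 3 a* (redness) and 4 b*
# (yellowness) units, giving deltaE = 5
base   = (60.0, 14.0, 17.0)
redder = (60.0, 17.0, 21.0)
d = delta_e_76(base, redder)
```

Later CIE formulas (CIE94, CIEDE2000) add perceptual weighting, but CIE76 matches the simple deltaE scale used when reporting stimulus differences like those above.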
Implicit Binding of Facial Features During Change Blindness
Lyyra, Pessi; Mäkelä, Hanna; Hietanen, Jari K.; Astikainen, Piia
2014-01-01
Change blindness refers to the inability to detect visual changes if introduced together with an eye-movement, blink, flash of light, or with distracting stimuli. Evidence of implicit detection of changed visual features during change blindness has been reported in a number of studies using both behavioral and neurophysiological measurements. However, it is not known whether implicit detection occurs only at the level of single features or whether complex organizations of features can be implicitly detected as well. We tested this in adult humans using intact and scrambled versions of schematic faces as stimuli in a change blindness paradigm while recording event-related potentials (ERPs). An enlargement of the face-sensitive N170 ERP component was observed at the right temporal electrode site to changes from scrambled to intact faces, even if the participants were not consciously able to report such changes (change blindness). Similarly, the disintegration of an intact face to scrambled features resulted in attenuated N170 responses during change blindness. Other ERP deflections were modulated by changes, but unlike the N170 component, they were indifferent to the direction of the change. The bidirectional modulation of the N170 component during change blindness suggests that implicit change detection can also occur at the level of complex features in the case of facial stimuli. PMID:24498165
Swartz, Erik E; Belmore, Keith; Decoster, Laura C; Armstrong, Charles W
2010-01-01
Football helmet face-mask attachment design changes might affect the effectiveness of face-mask removal. To compare the efficiency of face-mask removal between newly designed and traditional football helmets. Controlled laboratory study. Applied biomechanics laboratory. Twenty-five certified athletic trainers. The independent variable was face-mask attachment system on 5 levels: (1) Revolution IQ with Quick Release (QR), (2) Revolution IQ with Quick Release hardware altered (QRAlt), (3) traditional (Trad), (4) traditional with hardware altered (TradAlt), and (5) ION 4D (ION). Participants removed face masks using a cordless screwdriver with a back-up cutting tool or only the cutting tool for the ION. Investigators altered face-mask hardware to unexpectedly challenge participants during removal for traditional and Revolution IQ helmets. Participants completed each condition twice in random order and were blinded to hardware alteration. Removal success, removal time, helmet motion, and rating of perceived exertion (RPE). Time and 3-dimensional helmet motion were recorded. If the face mask remained attached at 3 minutes, the trial was categorized as unsuccessful. Participants rated each trial for level of difficulty (RPE). We used repeated-measures analyses of variance (α = .05) with follow-up comparisons to test for differences. Removal success was 100% (48 of 48) for QR, Trad, and ION; 97.9% (47 of 48) for TradAlt; and 72.9% (35 of 48) for QRAlt. Differences in time for face-mask removal were detected (F(4,20) = 48.87, P = .001), with times ranging from 33.96 ± 14.14 seconds for QR to 99.22 ± 20.53 seconds for QRAlt. Differences were found in range of motion during face-mask removal (F(4,20) = 16.25, P = .001), with range of motion from 10.10° ± 3.07° for QR to 16.91° ± 5.36° for TradAlt. 
Differences also were detected in RPE during face-mask removal (F(4,20) = 43.20, P = .001), with participants reporting average perceived difficulty ranging from 1.44 ± 1.19 for QR to 3.68 ± 1.70 for TradAlt. The QR and Trad conditions produced superior outcomes; when trials required cutting loop straps, performance deteriorated.
Discrimination of human and dog faces and inversion responses in domestic dogs (Canis familiaris).
Racca, Anaïs; Amadei, Eleonora; Ligout, Séverine; Guo, Kun; Meints, Kerstin; Mills, Daniel
2010-05-01
Although domestic dogs can respond to many facial cues displayed by other dogs and humans, it remains unclear whether they can differentiate individual dogs or humans based on facial cues alone and, if so, whether they would demonstrate the face inversion effect, a behavioural hallmark commonly used in primates to differentiate face processing from object processing. In this study, we first established the applicability of the visual paired comparison (VPC or preferential looking) procedure for dogs using a simple object discrimination task with 2D pictures. The animals demonstrated a clear looking preference for novel objects when simultaneously presented with prior-exposed familiar objects. We then adopted this VPC procedure to assess their face discrimination and inversion responses. Dogs showed a deviation from random behaviour, indicating discrimination capability when inspecting upright dog faces, human faces and object images; but the pattern of viewing preference was dependent upon image category. They directed longer viewing time at novel (vs. familiar) human faces and objects, but not at dog faces; instead, a longer viewing time at familiar (vs. novel) dog faces was observed. No significant looking preference was detected for inverted images regardless of image category. Our results indicate that domestic dogs can use facial cues alone to differentiate individual dogs and humans and that they exhibit a non-specific inversion response. In addition, dogs' discrimination responses to human and dog faces appear to differ with the type of face involved.
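The VPC measure described above reduces to a looking-time proportion; a minimal sketch (the looking times in the example are illustrative, not the study's data):

```python
def novelty_preference(t_novel: float, t_familiar: float) -> float:
    # Proportion of total looking time spent on the novel image:
    # 0.5 indicates no preference, > 0.5 a novelty preference (as found
    # here for human faces and objects), and < 0.5 a familiarity
    # preference (as found for dog faces).
    return t_novel / (t_novel + t_familiar)
```

For example, 6 s on the novel image and 4 s on the familiar one gives a preference score of 0.6, i.e. a novelty preference.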
Neumann, Markus F; Schweinberger, Stefan R
2008-11-06
It is a matter of considerable debate whether attention to initial stimulus presentations is required for repetition-related neural modulations to occur. Recently, it has been assumed that faces are particularly hard to ignore, and can capture attention in a reflexive manner. In line with this idea, electrophysiological evidence for long-term repetition effects of unattended famous faces has been reported. The present study investigated influences of attention to prime faces on short-term repetition effects in event-related potentials (ERPs). We manipulated attention to short (200 ms) prime presentations (S1) of task-irrelevant famous faces according to Lavie's Perceptual Load Theory. Participants attended to letter strings superimposed on face images, and identified target letters "X" vs. "N" embedded in strings of either 6 different (high load) or 6 identical (low load) letters. Letter identification was followed by probe presentations (S2), which were either repetitions of S1 faces, new famous faces, or infrequent butterflies, to which participants responded. Our ERP data revealed repetition effects in terms of an N250r at occipito-temporal regions, suggesting priming of face identification processes, and in terms of an N400 at the vertex, suggesting semantic priming. Crucially, the magnitude of these effects was unaffected by perceptual load at S1 presentation. This indicates that task-irrelevant face processing is remarkably preserved even in a demanding letter detection task, supporting recent notions of face-specific attentional resources.
The biasing of figure-ground assignment by shading cues for objects and faces in prosopagnosia.
Hefter, Rebecca; Jerskey, Beth A; Barton, Jason J S
2008-01-01
Prosopagnosia is defined by impaired recognition of the identity of specific faces. Whether the perception of faces at the categorical level (recognizing that a face is a face) is also impaired to a lesser degree is unclear. We examined whether prosopagnosia is associated with impaired detection of facial contours in a bistable display, by testing a series of five prosopagnosic patients on a variation of Rubin's vase illusion, in which shading was introduced to bias perception towards either the face or the vase. We also included a control bistable display in which a disc or an aperture were the two possible percepts. With the control disc/aperture test, prosopagnosic patients did not generate a normal sigmoid function, but a U-shaped function, indicating that they perceived the shading but had difficulty in using the shading to make the appropriate figure-ground assignment. While controls still generated a sigmoid function for the vase/face test, prosopagnosic patients showed a severe impairment in using shading to make consistent perceptual assignments. We conclude that prosopagnosic patients have difficulty in using shading to segment figures from background correctly, particularly with complex stimuli like faces. This suggests that a subtler defect in face categorization accompanies their severe defect in face identification, consistent with predictions of computational models and recent data from functional imaging.
Alternative face models for 3D face registration
NASA Astrophysics Data System (ADS)
Salah, Albert Ali; Alyüz, Neşe; Akarun, Lale
2007-01-01
3D has become an important modality for face biometrics. The accuracy of a 3D face recognition system depends on a correct registration that aligns the facial surfaces and makes a comparison possible. The best results obtained so far use a one-to-all registration approach, which means each new facial surface is registered to all faces in the gallery, at a great computational cost. We explore the approach of registering the new facial surface to an average face model (AFM), which automatically establishes correspondence to the pre-registered gallery faces. Going one step further, we propose that using a few well-selected AFMs can trade off computation time against accuracy. Drawing on cognitive justifications, we propose to employ category-specific alternative average face models for registration, which is shown to increase the accuracy of the subsequent recognition. We inspect thin-plate spline (TPS) and iterative closest point (ICP) based registration schemes under realistic assumptions on manual or automatic landmark detection prior to registration. We evaluate several approaches for the coarse initialization of ICP. We propose a new algorithm for constructing an AFM, and show that it works better than a recent approach. Finally, we perform simulations with multiple AFMs that correspond to different clusters in the face shape space and compare these with gender and morphology based groupings. We report our results on the FRGC 3D face database.
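The ICP registration the authors evaluate can be sketched in a few lines. This is a generic point-to-point ICP with a Kabsch (SVD) alignment step, not the paper's implementation; it uses brute-force nearest neighbours and assumes a reasonable coarse initialization, exactly the situation the paper's AFM approach is meant to provide:

```python
import numpy as np

def best_rigid_transform(src, dst):
    # Kabsch algorithm: least-squares rotation R and translation t
    # mapping the paired points src onto dst.
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def icp(probe, model, iters=30):
    # Alternate between (1) matching each probe point to its nearest
    # model point and (2) solving for the rigid transform that best
    # aligns those pairs, until the surfaces converge.
    P = probe.copy()
    for _ in range(iters):
        dists = ((P[:, None, :] - model[None, :, :]) ** 2).sum(axis=-1)
        nn = model[dists.argmin(axis=1)]
        R, t = best_rigid_transform(P, nn)
        P = P @ R.T + t
    return P
```

Registering each probe against one AFM instead of against every gallery face is what turns the one-to-all cost into a single alignment per probe.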
George, Nathalie; Jemel, Boutheina; Fiori, Nicole; Chaby, Laurence; Renault, Bernard
2005-08-01
We investigated the ERP correlates of the subjective perception of upright and upside-down ambiguous pictures as faces using two-tone Mooney stimuli in an explicit facial decision task (deciding whether a face is perceived or not in the display). The difficulty in perceiving upside-down Mooneys as faces was reflected by both lower rates of "Face" responses and delayed "Face" reaction times for upside-down relative to upright stimuli. The N170 was larger for the stimuli reported as "faces". It was also larger for the upright than the upside-down stimuli only when they were reported as faces. Furthermore, facial decision as well as stimulus orientation effects spread from 140-190 ms to 390-440 ms. The behavioural delay in 'Face' responses to upside-down stimuli was reflected in ERPs by later effect of facial decision for upside-down relative to upright Mooneys over occipito-temporal electrodes. Moreover, an orientation effect was observed only for the stimuli reported as faces; it yielded a marked hemispheric asymmetry, lasting from 140-190 ms to 390-440 ms post-stimulus onset in the left hemisphere and from 340-390 to 390-440 ms only in the right hemisphere. Taken together, the results supported a preferential involvement of the right hemisphere in the detection of faces, whatever their orientation. By contrast, the early orientation effect in the left hemisphere suggested that upside-down Mooney stimuli were processed as non face objects until facial decision was reached in this hemisphere. The present data show that face perception involves not only spatially but also temporally distributed activities in occipito-temporal regions.
The structural and functional correlates of the efficiency in fearful face detection.
Wang, Yongchao; Guo, Nana; Zhao, Li; Huang, Hui; Yao, Xiaonan; Sang, Na; Hou, Xin; Mao, Yu; Bi, Taiyong; Qiu, Jiang
2017-06-01
The human visual system is highly efficient at searching for a fearful face, and some individuals are more sensitive than others to this threat-related stimulus. However, we still know little about the neural correlates of such variability. In the current study, we used a visual search paradigm and asked subjects to search for a fearful face or for a target gender. Every subject showed a shallower search function for fearful face search than for face gender search, indicating a stable fearful face advantage. We then used voxel-based morphometry (VBM) analysis and correlated this advantage with the gray matter volume (GMV) of several presumably face-related cortical areas. The results revealed that only the left fusiform gyrus showed a significant positive correlation. Next, we defined the left fusiform gyrus as the seed region and calculated its resting-state functional connectivity to the whole brain. Correlations were also calculated between the fearful face advantage and these connectivities. In this analysis, we found positive correlations in the inferior parietal lobe and the ventral medial prefrontal cortex. These results suggest that the anatomical structure of the left fusiform gyrus might determine the search efficiency for fearful faces, and that the frontoparietal attention network is involved in this process through top-down attentional modulation. Copyright © 2017. Published by Elsevier Ltd.
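Search efficiency in such tasks is conventionally quantified as the slope of the reaction-time-by-set-size function: a shallower slope means a more efficient search. A minimal sketch with made-up reaction times (not the study's data):

```python
import numpy as np

def search_slope(set_sizes, rts):
    # Slope (ms per item) of the RT-vs-set-size search function,
    # estimated by ordinary least squares.
    return np.polyfit(np.asarray(set_sizes, float),
                      np.asarray(rts, float), 1)[0]

# Hypothetical mean RTs (ms): fearful-face search is more efficient
# (shallower slope) than gender search, mirroring the reported pattern.
sizes = [4, 8, 12]
fear_rts = [620, 660, 700]    # ~10 ms/item
gender_rts = [650, 810, 970]  # ~40 ms/item

# The per-subject "fearful face advantage" correlated with GMV would
# then be the difference between the two slopes.
advantage = search_slope(sizes, gender_rts) - search_slope(sizes, fear_rts)
```

With these illustrative numbers the advantage is 30 ms/item; in the study it is this per-subject quantity that was correlated with left fusiform GMV.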
Video face recognition against a watch list
NASA Astrophysics Data System (ADS)
Abbas, Jehanzeb; Dagli, Charlie K.; Huang, Thomas S.
2007-10-01
Due to the recent large increase in video surveillance data collected to maintain high security at public places, we need more robust systems to analyze these data and make tasks like face recognition a realistic possibility in challenging environments. In this paper we explore a watch-list scenario in which we use an appearance-based model to classify query faces from low-resolution videos as either watch-list or non-watch-list faces, where the watch-list comprises the people we are interested in recognizing. We then use our simple yet powerful face recognition system to recognize the faces classified as watch-list faces. Our system uses simple feature machine algorithms from our previous work to match video faces against still images. To test our approach, we match video faces against a large database of still images collected from Yahoo News over a period of time in earlier work in the field. We do this matching efficiently to arrive at a faster, nearly real-time system. This system can be incorporated into a larger surveillance system equipped with advanced algorithms involving anomalous event detection and activity recognition. This is a step towards more secure and robust surveillance systems and efficient video data analysis.
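The paper's "feature machine" matching is not detailed here, so as a generic illustration, an open-set watch-list check over face feature vectors might look like the following; the cosine-similarity metric and the 0.8 acceptance threshold are assumptions for the sketch, not the authors' method:

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def match_watchlist(query, gallery, threshold=0.8):
    # Score a query face feature vector against every watch-list entry.
    # Accept the best match only if it clears the threshold; otherwise
    # report the face as non-watch-list (open-set rejection).
    scores = np.array([cosine(query, g) for g in gallery])
    best = int(scores.argmax())
    if scores[best] >= threshold:
        return best, float(scores[best])
    return None, float(scores[best])
```

The two-stage structure mirrors the scenario in the abstract: first decide watch-list vs. non-watch-list (the threshold), then identify which watch-list entry matched (the argmax).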
Mileva, Mila; Burton, A Mike
2018-06-19
Unfamiliar face matching is a surprisingly difficult task, yet we often rely on people's matching decisions in applied settings (e.g., border control). Most attempts to improve accuracy (including training and image manipulation) have had very limited success. In a series of studies, we demonstrate that using smiling rather than neutral pairs of images brings about significant improvements in face matching accuracy. This is true for both match and mismatch trials, implying that the information provided through a smile helps us detect images of the same identity as well as distinguishing between images of different identities. Study 1 compares matching performance when images in the face pair display either an open-mouth smile or a neutral expression. In Study 2, we add an intermediate level, closed-mouth smile, to identify the effect of teeth being exposed, and Study 3 explores face matching accuracy when only information about the lower part of the face is available. Results demonstrate that an open-mouth smile changes the face in an idiosyncratic way which aids face matching decisions. Such findings have practical implications for matching in the applied context where we typically use neutral images to represent ourselves in official documents. © 2018 The British Psychological Society.
Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji
2003-01-01
Reliable detection of ordinary facial expressions (e.g. smile) despite variability among individuals as well as in face appearance is an important step toward the realization of perceptual user interfaces with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expression. The result shows reliable detection of smiles with a recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.
Automatic Fatigue Detection of Drivers through Yawning Analysis
NASA Astrophysics Data System (ADS)
Azim, Tayyaba; Jaffar, M. Arfan; Ramzan, M.; Mirza, Anwar M.
This paper presents a non-intrusive fatigue detection system based on video analysis of drivers. The focus of the paper is on how to detect yawning, an important cue for determining driver fatigue. Initially, the face is located through the Viola-Jones face detection method in a video frame. Then, a mouth window is extracted from the face region, in which lips are searched for through spatial fuzzy c-means (s-FCM) clustering. The degree of mouth openness is extracted on the basis of mouth features to determine the driver's yawning state. If the yawning state persists for several consecutive frames, the system concludes that the driver is non-vigilant due to fatigue and warns the driver through an alarm. The system reinitializes when occlusion or misdetection occurs. Experiments were carried out using real data, recorded in day and night lighting conditions, with users of different races and genders.
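The lip-segmentation step relies on fuzzy c-means clustering of mouth-window pixels. A minimal plain FCM sketch is below; the spatial s-FCM variant used in the paper additionally smooths memberships over neighbouring pixels, which is omitted here for brevity:

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, seed=0):
    # Plain fuzzy c-means: alternate between membership-weighted
    # centroid updates and membership updates
    # u_ik = d_ik^(-2/(m-1)) / sum_j d_jk^(-2/(m-1)).
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U
```

Applied to pixel features (e.g. colour and intensity) from the mouth window, the two clusters separate lip from non-lip pixels, after which mouth openness can be measured from the lip region's geometry.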
Less is more? Detecting lies in veiled witnesses.
Leach, Amy-May; Ammar, Nawal; England, D Nicole; Remigio, Laura M; Kleinberg, Bennett; Verschuere, Bruno J
2016-08-01
Judges in the United States, the United Kingdom, and Canada have ruled that witnesses may not wear the niqab-a type of face veil-when testifying, in part because they believed that it was necessary to see a person's face to detect deception (Muhammad v. Enterprise Rent-A-Car, 2006; R. v. N. S., 2010; The Queen v. D(R), 2013). In two studies, we used conventional research methods and safeguards to empirically examine the assumption that niqabs interfere with lie detection. Female witnesses were randomly assigned to lie or tell the truth while remaining unveiled or while wearing a hijab (i.e., a head veil) or a niqab (i.e., a face veil). In Study 1, laypersons in Canada (N = 232) were more accurate at detecting deception in witnesses who wore niqabs or hijabs than in those who did not wear veils. Concealing portions of witnesses' faces led laypersons to change their decision-making strategies without eliciting negative biases. Lie detection results were partially replicated in Study 2, with laypersons in Canada, the United Kingdom, and the Netherlands (N = 291): observers' performance was better when witnesses wore either niqabs or hijabs than when witnesses did not wear veils. These findings suggest that, contrary to judicial opinion, niqabs do not interfere with-and may, in fact, improve-the ability to detect deception. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Brooks, Kevin R; Kemp, Richard I
2007-01-01
Previous studies of face recognition and of face matching have shown a general improvement in the processing of internal features as a face becomes more familiar to the participant. In this study, we used a psychophysical two-alternative forced-choice paradigm to investigate thresholds for the detection of a displacement of the eyes, nose, mouth, or ears in familiar and unfamiliar faces. No clear division between internal and external features was observed. Rather, for familiar (compared to unfamiliar) faces participants were more sensitive to displacements of internal features such as the eyes or the nose; yet for our third internal feature, the mouth, no such difference was observed. Despite large displacements, many subjects were unable to perform above chance when stimuli involved shifts in the position of the ears. These results are consistent with the proposal that familiarity effects may be mediated by the construction of a robust representation of a face, although the involvement of attention in the encoding of face stimuli cannot be ruled out. Furthermore, these effects are mediated by information from a spatial configuration of features, rather than by purely feature-based information.
Explaining Sad People's Memory Advantage for Faces.
Hills, Peter J; Marquardt, Zoe; Young, Isabel; Goodenough, Imogen
2017-01-01
Sad people recognize faces more accurately than happy people (Hills et al., 2011). We devised four hypotheses for this finding that are tested against each other in the current study: (1) sad people engage in more of the expert processing associated with face processing; (2) sad people are motivated to be more accurate than happy people in an attempt to repair their mood; (3) sad people have a defocused attentional strategy that allows more information about a face to be encoded; and (4) sad people scan more of the face than happy people, leading to more facial features being encoded. In Experiment 1, we found that dysphoria (sad mood often associated with depression) was not correlated with the face-inversion effect (a measure of expert processing) nor with response times, but was correlated with defocused attention and recognition accuracy. Experiment 2 established that dysphoric participants detected changes made to more facial features than happy participants did. In Experiment 3, using eye-tracking, we found that sad-induced participants sampled more of the face whilst avoiding the eyes. Experiment 4 showed that sad-induced people demonstrated a smaller own-ethnicity bias. These results indicate that sad people allocate attention to faces differently than happy and neutral people.
Hirai, Masahiro; Muramatsu, Yukako; Mizuno, Seiji; Kurahashi, Naoko; Kurahashi, Hirokazu; Nakamura, Miho
2016-01-01
Evidence indicates that individuals with Williams syndrome (WS) exhibit atypical attentional characteristics when viewing faces. However, the dynamics of visual attention captured by faces remain unclear, especially when explicit attentional forces are present. To clarify this, we introduced a visual search paradigm and assessed how the relative strength of visual attention captured by a face and by explicit attentional control changes as search progresses. Participants (WS and controls) searched for a target (butterfly) within an array of distractors, which sometimes contained an upright face. We analyzed reaction time and the location of the first fixation, which reflect the attentional profile at the initial stage of search, as well as fixation durations, which represent aspects of attention at later stages of visual search. The strength of visual attention captured by faces and of explicit attentional control (toward the butterfly) was characterized by the frequency of first fixations on a face or butterfly and by the duration of face or butterfly fixations. Although reaction time was longer in all groups when faces were present, and visual attention was not dominated by faces in any group during the initial stages of the search, attention to faces dominated in the WS group during the later search stages when faces were present. Furthermore, for the WS group, reaction time correlated with eye-movement measures at different stages of searching: longer reaction times were associated with longer face fixations at the initial stage of searching; at the later stages, longer reaction times were associated with longer face fixations, while shorter reaction times were associated with longer butterfly fixations. The relative strength of attention captured by faces in people with WS is not observed at the initial stage of searching but becomes dominant as the search progresses.
Furthermore, although behavioral responses are associated with some aspects of eye movements, they are not as sensitive as eye-movement measurements themselves at detecting atypical attentional characteristics in people with WS.
Madrigal-Garcia, Maria Isabel; Rodrigues, Marcos; Shenfield, Alex; Singer, Mervyn; Moreno-Cuesta, Jeronimo
2018-07-01
To identify facial expressions occurring in patients at risk of deterioration in hospital wards. Prospective observational feasibility study. General ward patients in a London Community Hospital, United Kingdom. Thirty-four patients at risk of clinical deterioration. A 5-minute video (25 frames/s; 7,500 images) was recorded, encrypted, and subsequently analyzed for action units by a trained facial action coding system psychologist blinded to outcome. Action units of the upper face, head position, eyes position, lips and jaw position, and lower face were analyzed in conjunction with clinical measures collected within the National Early Warning Score. The most frequently detected action units were action unit 43 (73%) for upper face, action unit 51 (11.7%) for head position, action unit 62 (5.8%) for eyes position, action unit 25 (44.1%) for lips and jaw, and action unit 15 (67.6%) for lower face. The presence of certain combined face displays was increased in patients requiring admission to intensive care, namely, action units 43 + 15 + 25 (face display 1, p < 0.013), action units 43 + 15 + 51/52 (face display 2, p < 0.003), and action units 43 + 15 + 51 + 25 (face display 3, p < 0.002). Having face display 1, face display 2, and face display 3 increased the risk of being admitted to intensive care eight-fold, 18-fold, and as a sure event, respectively. A logistic regression model with face display 1, face display 2, face display 3, and National Early Warning Score as independent covariates described admission to intensive care with an average concordance statistic (C-index) of 0.71 (p = 0.009). Patterned facial expressions can be identified in deteriorating general ward patients. This tool may potentially augment risk prediction of current scoring systems.
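The reported model, a logistic regression with the three face displays and the National Early Warning Score as covariates evaluated by its concordance statistic, can be sketched generically. The gradient-descent fit and the synthetic one-feature data in the test are illustrative stand-ins, not the study's data or coefficients:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=2000):
    # Plain gradient-descent logistic regression. In the study the
    # columns of X would be FD1, FD2, FD3, and NEWS; y would be
    # ICU admission (1) vs. not (0).
    X = np.c_[np.ones(len(X)), X]              # intercept column
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def predict_proba(w, X):
    X = np.c_[np.ones(len(X)), X]
    return 1.0 / (1.0 + np.exp(-X @ w))

def c_index(y, scores):
    # Concordance statistic: probability that a randomly chosen
    # admitted patient scores higher than a non-admitted one
    # (ties count half). 0.5 = chance; the study reports 0.71.
    pos, neg = scores[y == 1], scores[y == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()
```

The C-index is just the area under the ROC curve, which is why it is the natural discrimination measure for a risk-prediction model like this one.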
Veligdan, James T.
1995-10-03
An interactive optical panel assembly 34 includes an optical panel 10 having a plurality of ribbon optical waveguides 12 stacked together with opposite ends thereof defining panel first and second faces 16, 18. A light source 20 provides an image beam 22 to the panel first face 16 for being channeled through the waveguides 12 and emitted from the panel second face 18 in the form of a viewable light image 24a. A remote device 38 produces a response beam 40 over a discrete selection area 36 of the panel second face 18 for being channeled through at least one of the waveguides 12 toward the panel first face 16. A light sensor 42,50 is disposed across a plurality of the waveguides 12 for detecting the response beam 40 therein for providing interactive capability.
Effects of an aft facing step on the surface of a laminar flow glider wing
NASA Technical Reports Server (NTRS)
Sandlin, Doral R.; Saiki, Neal
1993-01-01
A motor glider was used to perform a flight test study of the effects of aft-facing steps in a laminar boundary layer. This study focuses on two-dimensional aft-facing steps oriented spanwise to the flow. The size and location of the aft-facing steps were varied in order to determine the critical size that will force premature transition. Transition over a step was found to be primarily a function of the Reynolds number based on step height. The step-height Reynolds numbers for both premature and full transition were determined. A hot-film anemometry system was used to detect transition.
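The governing parameter is straightforward to compute; a small sketch, where the kinematic viscosity is an assumed standard-atmosphere value and the example velocity and step height are illustrative (the report's critical Reynolds numbers are not restated here):

```python
def step_reynolds(u_local: float, step_height: float,
                  nu: float = 1.46e-5) -> float:
    # Reynolds number based on step height: Re_h = U * h / nu.
    # u_local: local velocity over the step (m/s)
    # step_height: step height h (m)
    # nu: kinematic viscosity of air (~1.46e-5 m^2/s near 15 C,
    #     an assumed value).
    return u_local * step_height / nu
```

For instance, a 1 mm aft-facing step under a 30 m/s local flow gives Re_h of roughly 2.05e3; whether that exceeds the critical value determines if transition is forced prematurely.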
Zhang, Xiaoyu; Ju, Han; Penney, Trevor B; VanDongen, Antonius M J
2017-01-01
Humans instantly recognize a previously seen face as "familiar." To deepen our understanding of familiarity-novelty detection, we simulated biologically plausible neural network models of generic cortical microcircuits consisting of spiking neurons with random recurrent synaptic connections. NMDA receptor (NMDAR)-dependent synaptic plasticity was implemented to allow for unsupervised learning and bidirectional modifications. Network spiking activity evoked by sensory inputs consisting of face images altered synaptic efficacy, which resulted in the network responding more strongly to a previously seen face than a novel face. Network size determined how many faces could be accurately recognized as familiar. When the simulated model became sufficiently complex in structure, multiple familiarity traces could be retained in the same network by forming partially-overlapping subnetworks that differ slightly from each other, thereby resulting in a high storage capacity. Fisher's discriminant analysis was applied to identify critical neurons whose spiking activity predicted familiar input patterns. Intriguingly, as sensory exposure was prolonged, the selected critical neurons tended to appear at deeper layers of the network model, suggesting recruitment of additional circuits in the network for incremental information storage. We conclude that generic cortical microcircuits with bidirectional synaptic plasticity have an intrinsic ability to detect familiar inputs. This ability does not require a specialized wiring diagram or supervision and can therefore be expected to emerge naturally in developing cortical circuits.
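The core effect, a stronger network response to a previously seen input, can be reproduced in a far simpler caricature: Hebbian strengthening of recurrent weights. The sketch below (binary units, one-shot outer-product learning) stands in for, and greatly simplifies, the paper's spiking-neuron model with NMDAR-dependent plasticity:

```python
import numpy as np

def hebbian_store(patterns, n):
    # Hebbian rule: exposure to a pattern strengthens the synapses
    # between co-active units (a crude stand-in for the model's
    # activity-dependent bidirectional plasticity).
    W = np.zeros((n, n))
    for x in patterns:
        W += np.outer(x, x)
    np.fill_diagonal(W, 0.0)                   # no self-connections
    return W

def response(W, x):
    # Total recurrent drive evoked by input x. Familiar inputs that
    # shaped the weights evoke a much stronger response than novel ones.
    return x @ W @ x

rng = np.random.default_rng(0)
n = 200
familiar = rng.choice([-1.0, 1.0], size=(5, n))   # five "seen" inputs
novel = rng.choice([-1.0, 1.0], size=n)           # one "unseen" input
W = hebbian_store(familiar, n)
```

As in the paper's networks, how many patterns can be stored before familiar and novel responses overlap grows with network size n.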
Oxytocin increases bias, but not accuracy, in face recognition line-ups.
Bate, Sarah; Bennetts, Rachel; Parris, Benjamin A; Bindemann, Markus; Udale, Robert; Bussunt, Amanda
2015-07-01
Previous work indicates that intranasal inhalation of oxytocin improves face recognition skills, raising the possibility that it may be used in security settings. However, it is unclear whether oxytocin acts directly upon the core face-processing system itself or indirectly improves face recognition via affective or social salience mechanisms. In a double-blind procedure, 60 participants received either an oxytocin or placebo nasal spray before completing the One-in-Ten task, a standardized test of unfamiliar face recognition containing target-present and target-absent line-ups. Participants in the oxytocin condition outperformed those in the placebo condition on target-present trials, yet were more likely to make false-positive errors on target-absent trials. Signal detection analyses indicated that oxytocin induced a more liberal response bias, rather than increasing accuracy per se. These findings support a social salience account of the effects of oxytocin on face recognition and indicate that oxytocin may impede face recognition in certain scenarios. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
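The signal detection analysis behind this conclusion separates sensitivity (d') from response bias (criterion c). The hit and false-alarm rates below are hypothetical numbers chosen to illustrate the reported pattern (similar d', more liberal c under oxytocin); they are not the study's data:

```python
from statistics import NormalDist

def sdt_indices(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    # d' = z(H) - z(FA): sensitivity, independent of where the observer
    #     places the yes/no criterion.
    # c  = -(z(H) + z(FA)) / 2: response bias; negative values indicate
    #     a liberal bias (saying "present" more often overall).
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c
```

For example, placebo rates of (H = .70, FA = .20) and oxytocin rates of (H = .80, FA = .32) give nearly identical d' values while c shifts from positive to negative, i.e. more hits bought at the price of more false positives rather than genuinely better discrimination.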
2012-07-01
detection only condition followed either face detection only or dual task, thus ensuring that participants were practiced in face detection before...
Schmidt, André; Kometer, Michael; Bachmann, Rosilla; Seifritz, Erich; Vollenweider, Franz
2013-01-01
Both glutamate and serotonin (5-HT) play a key role in the pathophysiology of emotional biases. Recent studies indicate that the glutamate N-methyl-D-aspartate (NMDA) receptor antagonist ketamine and the 5-HT receptor agonist psilocybin are implicated in emotion processing. However, as yet, no study has systematically compared their contributions to emotional biases. This study used event-related potentials (ERPs) and signal detection theory to compare the effects of the NMDA (via S-ketamine) and 5-HT (via psilocybin) receptor systems on non-conscious and conscious emotional face processing biases. S-ketamine or psilocybin was administered to two groups of healthy subjects in a double-blind, within-subject, placebo-controlled design. We behaviorally assessed objective thresholds for non-conscious discrimination in all drug conditions. Electrophysiological responses to fearful, happy, and neutral faces were subsequently recorded, indexed by the face-specific P100 and N170 ERP components. Both S-ketamine and psilocybin impaired the encoding of fearful faces, as expressed by a reduced N170 over parieto-occipital brain regions. In contrast, while S-ketamine also impaired the encoding of happy facial expressions, psilocybin had no effect on the N170 in response to happy faces. This study demonstrates that the NMDA and 5-HT receptor systems differentially contribute to the structural encoding of emotional face expressions as expressed by the N170. These findings suggest that the assessment of early visual evoked responses might allow the detection of pharmacologically induced changes in emotional processing biases and thus provides a framework for studying the pathophysiology of dysfunctional emotional biases.
Lau, Tiffany; Wong, Ian Y; Iu, Lawrence; Chhablani, Jay; Yong, Tao; Hideki, Koizumi; Lee, Jacky; Wong, Raymond
2015-05-01
Optical coherence tomography (OCT) is a noninvasive imaging modality providing high-resolution images of the central retina that has completely transformed the field of ophthalmology. While traditional OCT has produced longitudinal cross-sectional images, advancements in data processing have led to the development of en-face OCT, which produces transverse images of retinal and choroidal layers at any specified depth. This offers an additional benefit over longitudinal cross-sections because it provides an extensive overview of pathological structures in a single image. The aim of this review was to discuss the utility of en-face OCT in the diagnosis and management of age-related macular degeneration (AMD) and polypoidal choroidal vasculopathy (PCV). En-face imaging of the inner segment/outer segment junction of retinal photoreceptors has been shown to be a useful indicator of visual acuity and a predictor of the extent of progression of geographic atrophy. En-face OCT has also enabled high-resolution analysis and quantification of pathological structures such as reticular pseudodrusen (RPD) and choroidal neovascularization, which have the potential to become useful markers for disease monitoring. En-face Doppler OCT enables subtle changes in the choroidal vasculature to be detected in eyes with RPD and AMD, which has significantly advanced our understanding of their pathogenesis. En-face Doppler OCT has also been shown to be useful for detecting the polypoid lesions and branching vascular networks diagnostic of PCV. It may therefore serve as a noninvasive alternative to fluorescein and indocyanine green angiography for the diagnosis of PCV and other forms of exudative macular disease.
Reduced Processing of Facial and Postural Cues in Social Anxiety: Insights from Electrophysiology
Rossignol, Mandy; Fisch, Sophie-Alexandra; Maurage, Pierre; Joassin, Frédéric; Philippot, Pierre
2013-01-01
Social anxiety is characterized by fear of evaluative interpersonal situations. Many studies have investigated the perception of emotional faces in socially anxious individuals and have reported biases in the processing of threatening faces. However, faces are not the only stimuli carrying an interpersonal evaluative load. The present study investigated the processing of emotional body postures in social anxiety. Participants with high and low social anxiety completed an attention-shifting paradigm using neutral, angry and happy faces and postures as cues. We investigated early visual processes through the P100 component, attentional fixation through the P200, structural encoding mirrored by the N170, and attentional orienting towards to-be-detected targets through the P100 time-locked to target occurrence. Results showed a global reduction of P100 and P200 responses to faces and postures in socially anxious participants as compared to non-anxious participants, with a direct correlation between self-reported social anxiety levels and P100 and P200 amplitudes. Structural encoding of cues and target processing were not modulated by social anxiety, but socially anxious participants were slower to detect the targets. These results suggest a reduced processing of social postural and facial cues in social anxiety. PMID:24040403
Three-dimensional face model reproduction method using multiview images
NASA Astrophysics Data System (ADS)
Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio
1991-11-01
This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence. The goal of this research, as an integral component of a virtual space teleconferencing system, is to generate a three-dimensional face model from facial images and synthesize images of the model virtually viewed from different angles, with natural shading to suit the lighting conditions of the virtual space. The proposed method is as follows: first, front and side view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from the front and side views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image. The personal face model, representing the individual character, is then reproduced. Next, an oblique view image is taken by a TV camera. The feature points of the oblique view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image. The modified boundary of the personal face model is determined using the face direction, namely the rotation angle, which is detected from the extracted feature points. After the 3D model is established, new images are synthesized by mapping facial texture onto the model.
Fast and Famous: Looking for the Fastest Speed at Which a Face Can be Recognized
Barragan-Jason, Gladys; Besson, Gabriel; Ceccaldi, Mathieu; Barbeau, Emmanuel J.
2012-01-01
Face recognition is supposed to be fast. However, the actual speed at which faces can be recognized remains unknown. To address this issue, we report two experiments run with speed constraints. In both experiments, famous faces had to be recognized among unknown ones using a large set of stimuli to prevent pre-activation of features which would speed up recognition. In the first experiment (31 participants), recognition of famous faces was investigated using a rapid go/no-go task. In the second experiment, 101 participants performed a highly time constrained recognition task using the Speed and Accuracy Boosting procedure. Results indicate that the fastest speed at which a face can be recognized is around 360–390 ms. Such latencies are about 100 ms longer than the latencies recorded in similar tasks in which subjects have to detect faces among other stimuli. We discuss which model of activation of the visual ventral stream could account for such latencies. These latencies are not consistent with a purely feed-forward pass of activity throughout the visual ventral stream. An alternative is that face recognition relies on the core network underlying face processing identified in fMRI studies (OFA, FFA, and pSTS) and reentrant loops to refine face representation. However, the model of activation favored is that of an activation of the whole visual ventral stream up to anterior areas, such as the perirhinal cortex, combined with parallel and feed-back processes. Further studies are needed to assess which of these three models of activation can best account for face recognition. PMID:23460051
A new paradigm of oral cancer detection using digital infrared thermal imaging
NASA Astrophysics Data System (ADS)
Chakraborty, M.; Mukhopadhyay, S.; Dasgupta, A.; Banerjee, S.; Mukhopadhyay, S.; Patsa, S.; Ray, J. G.; Chaudhuri, K.
2016-03-01
Histopathology is considered the gold standard for oral cancer detection. But a major fraction of the patient population is incapable of accessing such healthcare facilities due to poverty. Moreover, such analysis may report false negatives when test tissue is not collected from the exact cancerous location. The proposed work introduces a pioneering computer-aided paradigm of fast, non-invasive and non-ionizing oral cancer detection using Digital Infrared Thermal Imaging (DITI). Due to aberrant metabolic activities in carcinogenic facial regions, heat signatures of patients differ from those of normal subjects. The proposed work utilizes asymmetry of the temperature distribution of facial regions as the principal cue for cancer detection. Three views of a subject, viz. front, left and right, are acquired using a long-wave infrared (7.5-13 μm) camera for analysing the distribution of temperature. We study asymmetry of facial temperature distribution between: a) left and right profile faces and b) the left and right halves of the frontal face. Comparison of temperature distributions suggests that patients manifest greater asymmetry compared to normal subjects. For classification, we initially use k-means and fuzzy k-means for unsupervised clustering, followed by cluster class prototype assignment based on majority voting. Average classification accuracies of 91.5% and 92.8% are achieved by the k-means and fuzzy k-means frameworks for the frontal face. The corresponding metrics for profile faces are 93.4% and 95%. Combining features of frontal and profile faces, average accuracies increase to 96.2% and 97.6% respectively for the k-means and fuzzy k-means frameworks.
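The unsupervised clustering step described above can be sketched in a few lines. This is an illustration of 1-D k-means on hypothetical left/right asymmetry scores (in °C), not the authors' code or data; the score values and two-cluster setup are assumptions.

```python
# Sketch (not the authors' code): cluster facial-temperature asymmetry
# scores with a tiny 1-D k-means, mirroring the paper's unsupervised step.
# The scores below are hypothetical mean |left - right| differences in deg C.

def kmeans_1d(values, k=2, iters=50):
    """Plain k-means on scalars; returns (centroids, labels)."""
    centroids = [min(values), max(values)][:k]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assign each value to the nearest centroid.
        labels = [min(range(k), key=lambda j: abs(v - centroids[j]))
                  for v in values]
        # Recompute each centroid as its cluster mean.
        for j in range(k):
            members = [v for v, l in zip(values, labels) if l == j]
            if members:
                centroids[j] = sum(members) / len(members)
    return centroids, labels

# Hypothetical asymmetry scores: normals ~0.2 deg C, patients ~1.0 deg C.
scores = [0.15, 0.22, 0.18, 0.95, 1.10, 1.02]
centroids, labels = kmeans_1d(scores)

# Majority-vote class assignment per cluster would follow here; with this
# toy data the clusters separate normals from patients.
print(labels)  # -> [0, 0, 0, 1, 1, 1]
```

The paper's fuzzy k-means variant differs only in using soft membership weights instead of hard assignments.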
Mei, Shuang; Wang, Yudan; Wen, Guojun; Hu, Yang
2018-05-03
Increasing deployment of optical fiber networks and the need for reliable high bandwidth make the task of inspecting optical fiber connector end faces a crucial process that must not be neglected. Traditional end face inspections are usually performed by manual visual methods, which are low in efficiency and poor in precision for long-term industrial applications. More seriously, the inspection results cannot be quantified for subsequent analysis. Aiming at the characteristics of typical defects in the inspection process for optical fiber end faces, we propose a novel method, “difference of min-max ranking filtering” (DO2MR), for detection of region-based defects, e.g., dirt, oil, contamination, pits, and chips, and a special model, a “linear enhancement inspector” (LEI), for the detection of scratches. DO2MR is a morphological method that determines whether a pixel belongs to a defective region by comparing the gray values of pixels in the neighborhood around that pixel. LEI is likewise a morphological method, designed to search for scratches at different orientations with a special linear detector. These two approaches can be easily integrated into optical inspection equipment for automatic quality verification. As far as we know, this is the first time that complete defect detection methods for optical fiber end faces are available in the literature. Experimental results demonstrate that the proposed DO2MR and LEI models yield good comprehensive performance with high precision and acceptable recall rates, and the image-level detection accuracies reach 96.0 and 89.3%, respectively.
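The core DO2MR idea as stated, differencing max- and min-ranking filters over a small neighborhood so that defective regions produce large residuals, can be sketched as follows. The 3x3 window, the fixed threshold, and the toy image are illustrative assumptions, not the paper's parameters.

```python
# Sketch of the DO2MR idea from the abstract: for each pixel, take the
# difference between the max and min gray value in a small neighborhood;
# large residuals mark region-based defects (dirt, pits, chips).
# Window radius and threshold below are illustrative assumptions.

def min_max_residual(img, radius=1):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - radius), min(h, y + radius + 1))
                    for xx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = max(vals) - min(vals)  # ranking-filter difference
    return out

# 5x5 toy end-face image: uniform background (100) with one dark dirt pixel.
img = [[100] * 5 for _ in range(5)]
img[2][2] = 40
res = min_max_residual(img)

# Threshold (assumed) flags the neighborhood around the dirt pixel.
defect = [(y, x) for y in range(5) for x in range(5) if res[y][x] > 30]
print(defect)
```

A production pipeline would add noise filtering and connected-component grouping before reporting defect regions.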
Fukushima, Hirokata; Hirata, Satoshi; Ueno, Ari; Matsuda, Goh; Fuwa, Kohki; Sugama, Keiko; Kusunoki, Kiyo; Hirai, Masahiro; Hiraki, Kazuo; Tomonaga, Masaki; Hasegawa, Toshikazu
2010-01-01
Background The neural system of our closest living relative, the chimpanzee, is a topic of increasing research interest. However, electrophysiological examinations of neural activity during visual processing in awake chimpanzees are currently lacking. Methodology/Principal Findings In the present report, skin-surface event-related brain potentials (ERPs) were measured while a fully awake chimpanzee observed photographs of faces and objects in two experiments. In Experiment 1, human faces and stimuli composed of scrambled face images were displayed. In Experiment 2, three types of pictures (faces, flowers, and cars) were presented. The waveforms evoked by face stimuli were distinguished from other stimulus types, as reflected by an enhanced early positivity appearing before 200 ms post stimulus, and an enhanced late negativity after 200 ms, around posterior and occipito-temporal sites. Face-sensitive activity was clearly observed in both experiments. However, in contrast to the robustly observed face-evoked N170 component in humans, we found that faces did not elicit a peak in the latency range of 150–200 ms in either experiment. Conclusions/Significance Although this pilot study examined a single subject and requires further examination, the observed scalp voltage patterns suggest that selective processing of faces in the chimpanzee brain can be detected by recording surface ERPs. In addition, this non-invasive method for examining an awake chimpanzee can be used to extend our knowledge of the characteristics of visual cognition in other primate species. PMID:20967284
Helical Face Gear Development Under the Enhanced Rotorcraft Drive System Program
NASA Technical Reports Server (NTRS)
Heath, Gregory F.; Slaughter, Stephen C.; Fisher, David J.; Lewicki, David G.; Fetty, Jason
2011-01-01
U.S. Army goals for the Enhanced Rotorcraft Drive System Program are to achieve a 40 percent increase in horsepower-to-weight ratio, a 15 dB reduction in drive system generated noise, a 30 percent reduction in drive system operating, support, and acquisition cost, and 75 percent automatic detection of critical mechanical component failures. Boeing's technology transition goals are that the operational endurance level of the helical face gearing and related split-torque designs be validated to a TRL 6, and that analytical and manufacturing tools be validated. Helical face gear technology is being developed in this project to augment, and transition into, a Boeing AH-64 Block III split-torque face gear main transmission stage, to yield increased power density and reduced noise. To date, helical face gear grinding development on Northstar's new face gear grinding machine and pattern-development tests at the NASA Glenn/U.S. Army Research Laboratory have been completed and are described.
Leakage Account for Radial Face Contact Seal in Aircraft Engine Support
NASA Astrophysics Data System (ADS)
Vinogradov, A. S.; Sergeeva, T. V.
2018-01-01
The article is dedicated to the development of a methodology for radial face contact seal design that takes into consideration the deformations of the supporting elements in different aircraft engine operating modes. Radial face contact seals are popular in aircraft engine bearing supports. However, no leakage calculation methodologies for these seals have been published. Radial face contact seal leakage is determined by the gap clearance in the carbon seal ring split. In turn, the size of this gap depends on the deformation of the seal assembly parts and on the engine operating mode. The article presents the leakage calculation sequence for the intershaft radial face contact seal of a compressor support in take-off and cruising modes. The calculated leakage values (2.4 g/s at take-off and 0.75 g/s at cruising) are consistent with seal design experience.
Women are better at seeing faces where there are none: an ERP study of face pareidolia
Galli, Jessica
2016-01-01
Event-related potentials (ERPs) were recorded in 26 right-handed students while they detected pictures of animals intermixed with those of familiar objects, faces and faces-in-things (FITs). The face-specific N170 ERP component over the right hemisphere was larger in response to faces and FITs than to objects. The vertex positive potential (VPP) showed a difference in FIT encoding processes between males and females at frontal sites; while for men, the FIT stimuli elicited a VPP of intermediate amplitude (between that for faces and objects), for women, there was no difference in VPP responses to faces or FITs, suggesting a marked anthropomorphization of objects in women. SwLORETA source reconstructions carried out to estimate the intracortical generators of ERPs in the 150–190 ms time window showed how, in the female brain, FIT perception was associated with the activation of brain areas involved in the affective processing of faces (right STS, BA22; posterior cingulate cortex, BA22; and orbitofrontal cortex, BA10) in addition to regions linked to shape processing (left cuneus, BA18/30). Conversely, in the men, the activation of occipito/parietal regions was prevalent, with a considerably smaller activation of BA10. The data suggest that the female brain is more inclined to anthropomorphize perfectly real objects compared to the male brain. PMID:27217120
Brunyé, Tad T; Moran, Joseph M; Holmes, Amanda; Mahoney, Caroline R; Taylor, Holly A
2017-04-01
The human extrastriate cortex contains a region critically involved in face detection and memory, the right fusiform gyrus. The present study evaluated whether transcranial direct current stimulation (tDCS) targeting this anatomical region would selectively influence memory for faces versus non-face objects (houses). Anodal tDCS targeted the right fusiform gyrus (Brodmann's Area 37), with the anode at electrode site PO10 and the cathode at FP2. Two stimulation conditions were compared in a repeated-measures design: 0.5 mA versus 1.5 mA intensity; a separate control group received no stimulation. Participants completed a working memory task for face and house stimuli, varying in memory load from 1 to 4 items. Individual differences measures assessed trait-based differences in facial recognition skills. Results showed that 1.5 mA stimulation (versus 0.5 mA and control) increased performance at high memory loads, but only with faces. Lower overall working memory capacity predicted a positive impact of tDCS. The results support the notion of functional specialization of the right fusiform region for maintaining face (but not non-face object) stimuli in working memory, and further suggest that low-intensity electrical stimulation of this region may enhance demanding face working memory performance, particularly in those with relatively poor baseline working memory skills.
Non-intrusive head movement analysis of videotaped seizures of epileptic origin.
Mandal, Bappaditya; Eng, How-Lung; Lu, Haiping; Chan, Derrick W S; Ng, Yen-Ling
2012-01-01
In this work we propose a non-intrusive video analytic system for analyzing patients' body-part movements in an Epilepsy Monitoring Unit. The system utilizes skin color modeling, head/face pose template matching and face detection to analyze and quantify head movements. Epileptic patients' heads are analyzed holistically to distinguish seizure movements from normal random movements. The patient is not required to wear any special clothing, markers or sensors, so the system is entirely non-intrusive. The user initializes the person-specific skin color and selects a few face/head poses in the initial frames. The system then tracks the head/face and extracts spatio-temporal features. Support vector machines are then used on these features to classify seizure-like movements versus normal random movements. Experiments were performed on numerous hours-long video sequences captured in an Epilepsy Monitoring Unit at a local hospital. The results demonstrate the feasibility of the proposed system in pediatric epilepsy monitoring and seizure detection.
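The paper classifies spatio-temporal movement features with support vector machines. As a dependency-free illustration of the same classify-by-feature step, the sketch below substitutes a nearest-centroid classifier; the per-clip features (mean and variance of frame-to-frame head displacement) and their values are hypothetical.

```python
# Illustration only: the paper uses an SVM on spatio-temporal features;
# here a nearest-centroid stand-in (named plainly, to stay dependency-free)
# separates hypothetical per-clip features of
# (mean frame-to-frame displacement, displacement variance).

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def classify(feature, class_centroids):
    # Squared Euclidean distance to each class centroid.
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(class_centroids, key=lambda label: d2(feature, class_centroids[label]))

# Hypothetical training features: seizure-like movements are larger and
# more erratic than normal random movements.
train = {
    "seizure": [(8.0, 6.5), (9.2, 7.1), (7.5, 5.9)],
    "normal":  [(1.2, 0.8), (2.0, 1.1), (1.6, 0.7)],
}
centroids = {label: centroid(pts) for label, pts in train.items()}

print(classify((8.4, 6.0), centroids))  # -> seizure
print(classify((1.5, 1.0), centroids))  # -> normal
```

An SVM would instead learn a maximum-margin boundary between the two feature clouds, which matters when the classes are not cleanly separated as they are in this toy data.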
Explaining Sad People’s Memory Advantage for Faces
Hills, Peter J.; Marquardt, Zoe; Young, Isabel; Goodenough, Imogen
2017-01-01
Sad people recognize faces more accurately than happy people (Hills et al., 2011). We devised four hypotheses for this finding, which are tested against one another in the current study: (1) sad people engage in more of the expert processing associated with face perception; (2) sad people are motivated to be more accurate than happy people in an attempt to repair their mood; (3) sad people have a defocused attentional strategy that allows more information about a face to be encoded; and (4) sad people scan more of the face than happy people, leading to more facial features being encoded. In Experiment 1, we found that dysphoria (sad mood often associated with depression) was not correlated with the face-inversion effect (a measure of expert processing) nor with response times, but was correlated with defocused attention and recognition accuracy. Experiment 2 established that dysphoric participants detected changes made to more facial features than happy participants. In Experiment 3, using eye-tracking, we found that sad-induced participants sampled more of the face whilst avoiding the eyes. Experiment 4 showed that sad-induced people demonstrated a smaller own-ethnicity bias. These results indicate that sad people allocate attention to faces differently than happy and neutral people. PMID:28261138
From tiger to panda: animal head detection.
Zhang, Weiwei; Sun, Jian; Tang, Xiaoou
2011-06-01
Robust object detection has many important applications in real-world online photo processing. For example, both Google image search and MSN live image search have integrated human face detectors to retrieve face or portrait photos. Inspired by the success of this face filtering approach, in this paper we focus on another popular online photo category, animals, which is one of the top five categories in the MSN live image search query log. As a first attempt, we focus on the problem of animal head detection for a set of relatively large land animals that are popular on the internet, such as cat, tiger, panda, fox, and cheetah. First, we propose a new set of oriented-gradient features, Haar of Oriented Gradients (HOOG), to effectively capture the shape and texture of animal heads. Then, we propose two detection algorithms, namely Bruteforce detection and Deformable detection, to effectively exploit shape and texture features simultaneously. Experimental results on 14,379 well-labeled animal images validate the superiority of the proposed approach. Additionally, we apply the animal head detector to improve image search results through text-based online photo search result filtering.
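The abstract names the HOOG feature without giving its formulas, so the sketch below shows only the generic oriented-gradient histogram that such features build on: finite-difference gradients, then an orientation histogram weighted by gradient magnitude. The bin count and toy image are assumptions.

```python
import math

# Generic oriented-gradient histogram (the building block HOOG is named
# after; the paper's exact Haar-style aggregation is not reproduced here).
# Bin count (4 unsigned-orientation bins) is an assumption.

def orientation_histogram(img, bins=4):
    h, w = len(img), len(img[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # horizontal gradient
            gy = img[y + 1][x] - img[y - 1][x]   # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % math.pi   # unsigned orientation
            b = min(int(ang / (math.pi / bins)), bins - 1)
            hist[b] += mag                       # magnitude-weighted vote
    return hist

# Toy image with a vertical edge: gradients point horizontally, so the
# magnitude concentrates in the first (0-degree) bin.
img = [[0, 0, 10, 10]] * 4
print(orientation_histogram(img))
```

Detectors like the paper's then compare such histograms across candidate windows rather than raw pixels, which is what makes the features robust to texture variation.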
Lip boundary detection techniques using color and depth information
NASA Astrophysics Data System (ADS)
Kim, Gwang-Myung; Yoon, Sung H.; Kim, Jung H.; Hur, Gi Taek
2002-01-01
This paper presents our approach to using a stereo camera to obtain 3-D image data that improves existing lip boundary detection techniques. We show that depth information as provided by our approach can significantly improve boundary detection systems. Our system detects the face and mouth area in the image by using color, geometric location, and additional depth information for the face. Initially, color and depth information are used to localize the face. Then we determine the lip region from the intensity information and the detected eye locations. The system has successfully been used to extract approximate lip regions using RGB color information of the mouth area. Merely using color information is not robust because the quality of the results may vary with lighting conditions, background, and skin tone. To overcome this problem, we used a stereo camera to obtain 3-D facial images. 3-D data constructed from the depth information, along with color information, provides more accurate lip boundary detection than color-only techniques.
Swartz, Erik E.; Belmore, Keith; Decoster, Laura C.; Armstrong, Charles W.
2010-01-01
Abstract Context: Football helmet face-mask attachment design changes might affect the effectiveness of face-mask removal. Objective: To compare the efficiency of face-mask removal between newly designed and traditional football helmets. Design: Controlled laboratory study. Setting: Applied biomechanics laboratory. Participants: Twenty-five certified athletic trainers. Intervention(s): The independent variable was face-mask attachment system on 5 levels: (1) Revolution IQ with Quick Release (QR), (2) Revolution IQ with Quick Release hardware altered (QRAlt), (3) traditional (Trad), (4) traditional with hardware altered (TradAlt), and (5) ION 4D (ION). Participants removed face masks using a cordless screwdriver with a back-up cutting tool or only the cutting tool for the ION. Investigators altered face-mask hardware to unexpectedly challenge participants during removal for traditional and Revolution IQ helmets. Participants completed each condition twice in random order and were blinded to hardware alteration. Main Outcome Measure(s): Removal success, removal time, helmet motion, and rating of perceived exertion (RPE). Time and 3-dimensional helmet motion were recorded. If the face mask remained attached at 3 minutes, the trial was categorized as unsuccessful. Participants rated each trial for level of difficulty (RPE). We used repeated-measures analyses of variance (α = .05) with follow-up comparisons to test for differences. Results: Removal success was 100% (48 of 48) for QR, Trad, and ION; 97.9% (47 of 48) for TradAlt; and 72.9% (35 of 48) for QRAlt. Differences in time for face-mask removal were detected (F(4,20) = 48.87, P = .001), with times ranging from 33.96 ± 14.14 seconds for QR to 99.22 ± 20.53 seconds for QRAlt. Differences were found in range of motion during face-mask removal (F(4,20) = 16.25, P = .001), with range of motion from 10.10° ± 3.07° for QR to 16.91° ± 5.36° for TradAlt.
Differences were also detected in RPE during face-mask removal (F(4,20) = 43.20, P = .001), with participants reporting average perceived difficulty ranging from 1.44 ± 1.19 for QR to 3.68 ± 1.70 for TradAlt. Conclusions: The QR and Trad trials yielded superior results. When trials required cutting loop straps, performance deteriorated. PMID:21062179
Right-Wing Politicians Prefer the Emotional Left
Thomas, Nicole A.; Loetscher, Tobias; Clode, Danielle; Nicholls, Michael E. R.
2012-01-01
Physiological research suggests that social attitudes, such as political beliefs, may be partly hard-wired in the brain. Conservatives have heightened sensitivity for detecting emotional faces and use emotion more effectively when campaigning. As the left face displays emotion more prominently, we examined 1538 official photographs of conservative and liberal politicians from Australia, Canada, the United Kingdom and the United States for an asymmetry in posing. Across nations, conservatives were more likely than liberals to display the left cheek. In contrast, liberals were more likely to face forward than were conservatives. Emotion is important in political campaigning and as portraits influence voting decisions, conservative politicians may intuitively display the left face to convey emotion to voters. PMID:22567166
A multi-camera system for real-time pose estimation
NASA Astrophysics Data System (ADS)
Savakis, Andreas; Erhard, Matthew; Schimmel, James; Hnatow, Justin
2007-04-01
This paper presents a multi-camera system that performs face detection and pose estimation in real time and may be used for intelligent computing within a visual sensor network for surveillance or human-computer interaction. The system consists of a Scene View Camera (SVC), which operates at a fixed zoom level, and an Object View Camera (OVC), which continuously adjusts its zoom level to match objects of interest. The SVC is set to survey the whole field of view. Once a region has been identified by the SVC as a potential object of interest, e.g. a face, the OVC zooms in to locate specific features. In this system, face candidate regions are selected based on skin color, and face detection is accomplished using a Support Vector Machine classifier. The locations of the eyes and mouth are detected inside the face region using neural network feature detectors. Pose estimation is performed based on a geometrical model, where the head is modeled as a spherical object that rotates about the vertical axis. The triangle formed by the mouth and eyes defines a vertical plane that intersects the head sphere. By projecting the eyes-mouth triangle onto a two-dimensional viewing plane, equations were obtained that describe the change in its angles as the yaw pose angle increases. These equations are then combined and used for efficient pose estimation. The system achieves real-time performance for live video input. Testing results assessing system performance are presented for both still images and video.
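The spherical-head geometry above can be illustrated with a simple inversion: under an assumed orthographic projection, a landmark on the facial midline of a sphere of radius r rotated by yaw lands at horizontal image offset x = r·sin(yaw). This is a sketch of the general idea, not the paper's derived triangle-angle equations, and the pixel values are hypothetical.

```python
import math

# Hedged sketch of the spherical-head model: a midline landmark (e.g. the
# midpoint between the eyes) on a head of radius r, rotated by yaw about
# the vertical axis, projects orthographically to offset x = r * sin(yaw).
# Inverting gives a yaw estimate. Landmark choice and numbers are
# illustrative assumptions, not the paper's equations.

def estimate_yaw(midline_offset_px, head_radius_px):
    """Yaw angle in degrees from the horizontal offset of a midline landmark."""
    ratio = max(-1.0, min(1.0, midline_offset_px / head_radius_px))
    return math.degrees(math.asin(ratio))

# Eye midpoint 30 px right of the head centre, head radius 60 px:
print(round(estimate_yaw(30, 60), 1))  # -> 30.0
```

The paper instead combines several equations over the angles of the projected eyes-mouth triangle, which is more robust than a single landmark offset.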
Robust Face Detection from Still Images
2014-01-01
significant change in false acceptance rates. Keywords: face detection; illumination; skin color variation; Haar-like features; OpenCV. The test compared the original OpenCV face detection algorithm against an algorithm which added histogram equalization, performed on 17 subjects under 576 viewing conditions from the extended Yale face database. The original OpenCV algorithm proved the least accurate, with a hit rate of only 75.6%; it also had the lowest false acceptance rate, but only by a slight margin, at 25.2%.
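The comparison in this entry hinges on histogram equalization as an illumination-normalizing step before Haar-cascade detection. A minimal 8-bit grayscale version of that preprocessing is sketched below: the classic CDF remapping, not necessarily OpenCV's exact implementation; the pixel values are toy data.

```python
# Textbook histogram equalization for an 8-bit grayscale pixel list:
# build a histogram, take the cumulative distribution (CDF), and remap
# intensities so they span the full [0, 255] range. This is the standard
# formulation, not necessarily identical to OpenCV's equalizeHist.

def equalize_hist(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)   # first nonzero CDF value
    n = len(pixels)
    scale = (levels - 1) / (n - cdf_min) if n > cdf_min else 1
    return [round((cdf[p] - cdf_min) * scale) for p in pixels]

# A dim, low-contrast patch gets stretched across the full range:
print(equalize_hist([50, 50, 51, 51, 52, 52, 53, 53]))
```

Applied frame-wide before detection, this removes much of the global illumination variation that degrades Haar-like features.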
Nondestructive Evaluation (NDE) for Inspection of Composite Sandwich Structures
NASA Technical Reports Server (NTRS)
Zalameda, Joseph N.; Parker, F. Raymond
2014-01-01
Composite honeycomb structures are widely used in aerospace applications due to their low weight and high strength. Developing nondestructive evaluation (NDE) inspection methods is essential for their safe performance. Flash thermography is a commonly used technique for composite honeycomb structure inspections due to its large-area and rapid inspection capability. Flash thermography is shown to be sensitive for detection of face sheet impact damage and face sheet to core disbond. Data processing techniques, using principal component analysis to improve the defect contrast, are discussed. Limitations to the thermal detection of the core are investigated. In addition to flash thermography, X-ray computed tomography is used. The aluminum honeycomb core provides excellent X-ray contrast compared to the composite face sheet. The X-ray CT technique was used to detect impact damage, core crushing, and skin to core disbonds. Additionally, the X-ray CT technique is used to validate the thermography results.
GOM-Face: GKP, EOG, and EMG-based multimodal interface with application to humanoid robot control.
Nam, Yunjun; Koo, Bonkon; Cichocki, Andrzej; Choi, Seungjin
2014-02-01
We present a novel human-machine interface, called GOM-Face, and its application to humanoid robot control. The GOM-Face bases its interfacing on three electric potentials measured on the face: 1) glossokinetic potential (GKP), which involves tongue movement; 2) electrooculogram (EOG), which involves eye movement; and 3) electromyogram (EMG), which involves teeth clenching. Each potential has been used individually for assistive interfacing to provide persons with limb motor disabilities or even complete quadriplegia an alternative communication channel. However, to the best of our knowledge, GOM-Face is the first interface that exploits all of these potentials together. We resolved the interference between GKP and EOG by extracting discriminative features from two covariance matrices: a tongue-movement-only data matrix and an eye-movement-only data matrix. With this feature extraction method, GOM-Face can detect four kinds of horizontal tongue or eye movements with an accuracy of 86.7% within 2.77 s. We demonstrated the applicability of GOM-Face to humanoid robot control: users were able to communicate with the robot by selecting from a predefined menu using eye and tongue movements.
NASA Astrophysics Data System (ADS)
Morishima, Shigeo; Nakamura, Satoshi
2004-12-01
We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. The system introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking of face motion in the video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques with the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing into other languages.
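The face tracking described above relies on template matching. A minimal matcher that scores candidate placements by sum of squared differences (SSD) is sketched below on toy arrays; real trackers restrict the search to a window around the last position and refresh the template, and this is not the paper's implementation.

```python
# Minimal template matching by sum of squared differences (SSD):
# slide the template over every valid position in the frame and keep
# the placement with the lowest SSD. Toy grayscale arrays, not real video.

def match_template(frame, tmpl):
    """Return (row, col) of the best (lowest-SSD) template placement."""
    fh, fw = len(frame), len(frame[0])
    th, tw = len(tmpl), len(tmpl[0])
    best, best_pos = None, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            ssd = sum((frame[y + j][x + i] - tmpl[j][i]) ** 2
                      for j in range(th) for i in range(tw))
            if best is None or ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

frame = [[0, 0, 0, 0],
         [0, 9, 8, 0],
         [0, 7, 9, 0],
         [0, 0, 0, 0]]
tmpl = [[9, 8],
        [7, 9]]
print(match_template(frame, tmpl))  # -> (1, 1)
```

Normalized cross-correlation is a common alternative score when brightness varies between frames.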
The importance of internal facial features in learning new faces.
Longmore, Christopher A; Liu, Chang Hong; Young, Andrew W
2015-01-01
For familiar faces, the internal features (eyes, nose, and mouth) are known to be differentially salient for recognition compared to external features such as hairstyle. Two experiments are reported that investigate how this internal feature advantage accrues as a face becomes familiar. In Experiment 1, we tested the contribution of internal and external features to the ability to generalize from a single studied photograph to different views of the same face. A recognition advantage for the internal features over the external features was found after a change of viewpoint, whereas there was no internal feature advantage when the same image was used at study and test. In Experiment 2, we removed the most salient external feature (hairstyle) from studied photographs and looked at how this affected generalization to a novel viewpoint. Removing the hair from images of the face assisted generalization to novel viewpoints, and this was especially the case when photographs showing more than one viewpoint were studied. The results suggest that the internal features play an important role in the generalization between different images of an individual's face by enabling the viewer to detect the common identity-diagnostic elements across non-identical instances of the face.
De Los Reyes, Andres; Augenstein, Tara M; Aldao, Amelia; Thomas, Sarah A; Daruwala, Samantha; Kline, Kathryn; Regan, Timothy
2015-01-01
Social stressor tasks induce adolescents' social distress as indexed by low-cost psychophysiological methods. However, it is unknown how to incorporate these methods within clinical assessments. Having assessors judge graphical depictions of psychophysiological data may facilitate detection of data patterns that are difficult to identify in numerical depictions of the same data. Specifically, the Chernoff Face method graphically represents data using features of the human face (eyes, nose, mouth, and face shape), capitalizing on humans' ability to discern subtle variations in facial features. Using adolescent heart rate norms and Chernoff Faces, we illustrated a method for implementing psychophysiology within clinical assessments of adolescent social anxiety. Twenty-two clinic-referred adolescents completed a social anxiety self-report and provided psychophysiological data using wireless heart rate monitors during a social stressor task. We graphically represented participants' psychophysiological data and normative adolescent heart rates. For each participant, two undergraduate coders made comparative judgments between the dimensions (eyes, nose, mouth, and face shape) of two Chernoff Faces. One Chernoff Face represented a participant's heart rate within a context (baseline, speech preparation, or speech-giving). The second Chernoff Face represented normative heart rate data matched to the participant's age. Using Chernoff Faces, coders reliably and accurately identified contextual variation in participants' heart rate responses to social stress. Further, adolescents' self-reported social anxiety symptoms predicted Chernoff Face judgments, and judgments could be differentiated by social stress context. Our findings have important implications for implementing psychophysiology within clinical assessments of adolescent social anxiety.
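At its core, the Chernoff Face method is a mapping from data dimensions to face-feature parameters. A minimal sketch, assuming heart-rate values and illustrative normative bounds (the feature assignment below is not the study's actual coding scheme):

```python
def chernoff_params(values, lo, hi):
    """Map data values in [lo, hi] to face-feature parameters in [0, 1].

    The feature names (eye size, nose length, mouth curvature, face width)
    are illustrative; any data-to-feature assignment defines a Chernoff face.
    """
    def norm(v, a, b):
        return min(1.0, max(0.0, (v - a) / (b - a)))
    features = ["eye_size", "nose_length", "mouth_curve", "face_width"]
    return {f: norm(v, a, b) for f, v, a, b in zip(features, values, lo, hi)}

# Hypothetical heart rates (bpm) for baseline / prep / speech / recovery,
# scaled against assumed normative bounds of 60-120 bpm
hr = [72, 95, 118, 80]
params = chernoff_params(hr, lo=[60] * 4, hi=[120] * 4)
```

A plotting layer would then draw each face from these parameters; coders compare the participant's face against one drawn from normative values, which is the comparative judgment described above.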
Enhanced processing of threat stimuli under limited attentional resources.
De Martino, Benedetto; Kalisch, Raffael; Rees, Geraint; Dolan, Raymond J
2009-01-01
The ability to process stimuli that convey potential threat, under conditions of limited attentional resources, confers adaptive advantages. This study examined the neurobiological underpinnings of this capacity. Employing an attentional blink paradigm, in conjunction with functional magnetic resonance imaging, we manipulated the salience of the second of two face target stimuli (T2) by varying its emotionality. Behaviorally, fearful T2 faces were identified significantly more often than neutral faces. Activity in the fusiform face area increased with correct identification of T2 faces. Enhanced activity in rostral anterior cingulate cortex (rACC) accounted for the benefit in detection of fearful stimuli, reflected in a significant interaction between target valence and correct identification. Thus, under conditions of limited attentional resources, activation in rACC correlated with enhanced processing of emotional stimuli. We suggest that these data support a model in which a prefrontal "gate" mechanism controls conscious access to emotional information under conditions of limited attentional resources.
Familiarity enhances visual working memory for faces.
Jackson, Margaret C; Raymond, Jane E
2008-06-01
Although it is intuitive that familiarity with complex visual objects should aid their preservation in visual working memory (WM), empirical evidence for this is lacking. This study used a conventional change-detection procedure to assess visual WM for unfamiliar and famous faces in healthy adults. Across experiments, faces were upright or inverted and a low- or high-load concurrent verbal WM task was administered to suppress contribution from verbal WM. Even with a high verbal memory load, visual WM performance was significantly better and capacity estimated as significantly greater for famous versus unfamiliar faces. Face inversion abolished this effect. Thus, neither strategic, explicit support from verbal WM nor low-level feature processing easily accounts for the observed benefit of high familiarity for visual WM. These results demonstrate that storage of items in visual WM can be enhanced if robust visual representations of them already exist in long-term memory.
The influence of working memory on the anger superiority effect.
Moriya, Jun; Koster, Ernst H W; De Raedt, Rudi
2014-01-01
The anger superiority effect shows that an angry face is detected more efficiently than a happy face. However, it is still controversial whether attentional allocation to angry faces is a bottom-up process or not. We investigated whether the anger superiority effect is influenced by top-down control, especially working memory (WM). Participants remembered a colour and then searched for differently coloured facial expressions. Just holding the colour information in WM did not modulate the anger superiority effect. However, when increasing the probabilities of trials in which the colour of a target face matched the colour held in WM, participants were inclined to direct attention to the target face regardless of the facial expression. Moreover, the knowledge of high probability of valid trials eliminated the anger superiority effect. These results suggest that the anger superiority effect is modulated by top-down effects of WM, the probability of events and expectancy about these probabilities.
Holló, Gábor
2015-12-01
In addition to retinal nerve fiber layer thickness measurements, the recently introduced AngioVue optical coherence tomography (OCT) offers corresponding layer-by-layer Doppler OCT and en face OCT functions for simultaneous evaluation of the perfusion and structure of the optic nerve head. We investigated the clinical usefulness of the combined use of the Doppler and en face functions of the AngioVue Fourier-domain OCT for discriminating a disc hemorrhage from a disc hemorrhage-like atypical vessel structure located deep in the lamina cribrosa. We present our findings with AngioVue OCT on a disc hemorrhage and a spatially related retinal nerve fiber layer bundle defect in a glaucomatous eye (case 1). Both alterations were detected on en face OCT images without any Doppler OCT signal. We also report on an aneurysm suggestive of a disc hemorrhage on clinical examination and disc photography in a treated ocular hypertensive eye (case 2). The aneurysm was within the lamina cribrosa tissue at the border of the cup and the neuroretinal rim. This vascular structure produced strong Doppler signals but no structurally detectable signs on the en face OCT images. Combined evaluation of corresponding Doppler OCT and en face OCT images enables ophthalmologists to easily separate true disc hemorrhages from disc hemorrhage-like deep vascular structures. This is of clinical significance in preventing unnecessary intensification of pressure-lowering treatment in glaucoma.
Cross-modal enhancement of speech detection in young and older adults: does signal content matter?
Tye-Murray, Nancy; Spehar, Brent; Myerson, Joel; Sommers, Mitchell S; Hale, Sandra
2011-01-01
The purpose of the present study was to examine the effects of age and visual content on cross-modal enhancement of auditory speech detection. Visual content consisted of three clearly distinct types of visual information: an unaltered video clip of a talker's face, a low-contrast version of the same clip, and a mouth-like Lissajous figure. It was hypothesized that both young and older adults would exhibit reduced enhancement as visual content diverged from the original clip of the talker's face, but that the decrease would be greater for older participants. Nineteen young adults and 19 older adults were asked to detect a single spoken syllable (/ba/) in speech-shaped noise, and the level of the signal was adaptively varied to establish the signal-to-noise ratio (SNR) at threshold. There was an auditory-only baseline condition and three audiovisual conditions in which the syllable was accompanied by one of the three visual signals (the unaltered clip of the talker's face, the low-contrast version of that clip, or the Lissajous figure). For each audiovisual condition, the SNR at threshold was compared with the SNR at threshold for the auditory-only condition to measure the amount of cross-modal enhancement. Young adults exhibited significant cross-modal enhancement with all three types of visual stimuli, with the greatest amount of enhancement observed for the unaltered clip of the talker's face. Older adults, in contrast, exhibited significant cross-modal enhancement only with the unaltered face. Results of this study suggest that visual signal content affects cross-modal enhancement of speech detection in both young and older adults. They also support a hypothesized age-related deficit in processing low-contrast visual speech stimuli, even in older adults with normal contrast sensitivity.
Cao, Jianqin; Liu, Quanying; Li, Yang; Yang, Jun; Gu, Ruolei; Liang, Jin; Qi, Yanyan; Wu, Haiyan; Liu, Xun
2017-07-28
Previous studies of patients with social anxiety have demonstrated abnormal early processing of facial stimuli in social contexts. In other words, patients with social anxiety disorder (SAD) tend to exhibit enhanced early facial processing compared to healthy controls. Few studies have examined the temporal, event-related potential (ERP)-indexed profiles of individuals with SAD comparing faces to objects. Systematic comparisons of ERPs to facial/object stimuli before and after therapy are also lacking. We used a passive visual detection paradigm with upright and inverted faces/objects, which are known to elicit early P1 and N170 components, to study abnormal early face processing and its subsequent improvement in patients with SAD. Seventeen patients with SAD and 17 matched control participants performed a passive visual detection task while undergoing EEG. The healthy controls were compared to patients with SAD pre-therapy to test the hypothesis that patients with SAD have early hypervigilance to facial cues. We also compared patients with SAD before and after therapy to test the hypothesis that this early hypervigilance to facial cues can be alleviated. Compared to healthy control (HC) participants, patients with SAD had a more pronounced P1-N170 slope, but no amplitude effects, in response to both upright and inverted faces and objects. Interestingly, we found that patients with SAD had reduced P1 responses to all objects and faces after therapy, but selectively reduced N170 responses to faces, especially inverted faces. Moreover, the slope from P1 to N170 in patients with SAD was flatter post-therapy than pre-therapy. Furthermore, the amplitude of N170 evoked by facial stimuli was correlated with scores on the interaction anxiousness scale (IAS) after therapy.
Our results did not provide electrophysiological support for the early hypervigilance hypothesis in SAD to faces, but confirm that cognitive-behavioural therapy can reduce the early visual processing of faces. These findings have potentially important therapeutic implications in the assessment and treatment of social anxiety. Trial registration HEBDQ2014021.
Women are better at seeing faces where there are none: an ERP study of face pareidolia.
Proverbio, Alice M; Galli, Jessica
2016-09-01
Event-related potentials (ERPs) were recorded in 26 right-handed students while they detected pictures of animals intermixed with those of familiar objects, faces and faces-in-things (FITs). The face-specific N170 ERP component over the right hemisphere was larger in response to faces and FITs than to objects. The vertex positive potential (VPP) showed a sex difference in FIT encoding processes at frontal sites: for men, FIT stimuli elicited a VPP of intermediate amplitude (between that for faces and objects), whereas for women there was no difference in VPP responses to faces and FITs, suggesting a marked anthropomorphization of objects in women. SwLORETA source reconstructions carried out to estimate the intracortical generators of ERPs in the 150-190 ms time window showed how, in the female brain, FIT perception was associated with the activation of brain areas involved in the affective processing of faces (right STS, BA22; posterior cingulate cortex, BA22; and orbitofrontal cortex, BA10) in addition to regions linked to shape processing (left cuneus, BA18/30). Conversely, in men, activation of occipito-parietal regions was prevalent, with considerably smaller activation of BA10. The data suggest that the female brain is more inclined than the male brain to anthropomorphize real objects.
Facelock: familiarity-based graphical authentication.
Jenkins, Rob; McLachlan, Jane L; Renaud, Karen
2014-01-01
Authentication codes such as passwords and PIN numbers are widely used to control access to resources. One major drawback of these codes is that they are difficult to remember. Account holders are often faced with a choice between forgetting a code, which can be inconvenient, or writing it down, which compromises security. In two studies, we test a new knowledge-based authentication method that does not impose memory load on the user. Psychological research on face recognition has revealed an important distinction between familiar and unfamiliar face perception: When a face is familiar to the observer, it can be identified across a wide range of images. However, when the face is unfamiliar, generalisation across images is poor. This contrast can be used as the basis for a personalised 'facelock', in which authentication succeeds or fails based on image-invariant recognition of faces that are familiar to the account holder. In Study 1, account holders authenticated easily by detecting familiar targets among other faces (97.5% success rate), even after a one-year delay (86.1% success rate). Zero-acquaintance attackers were reduced to guessing (<1% success rate). Even personal attackers who knew the account holder well were rarely able to authenticate (6.6% success rate). In Study 2, we found that shoulder-surfing attacks by strangers could be defeated by presenting different photos of the same target faces in observed and attacked grids (1.9% success rate). Our findings suggest that the contrast between familiar and unfamiliar face recognition may be useful for developers of graphical authentication systems.
Multi-stream face recognition for crime-fighting
NASA Astrophysics Data System (ADS)
Jassim, Sabah A.; Sellahewa, Harin
2007-04-01
Automatic face recognition (AFR) is a challenging task that is increasingly becoming the preferred biometric trait for identification and has the potential of becoming an essential tool in the fight against crime and terrorism. Closed-circuit television (CCTV) cameras have increasingly been used over the last few years for surveillance in public places such as airports, train stations and shopping centers. They are used to detect and prevent crime, shoplifting, public disorder and terrorism. The work of law-enforcement and intelligence agencies is becoming more reliant on the use of databases of biometric data for large sections of the population. The face is one of the most natural biometric traits that can be used for identification and surveillance. However, variations in lighting conditions, facial expressions, face size and pose are a great obstacle to AFR. This paper is concerned with using wavelet-based face recognition schemes in the presence of variations of expression and illumination. In particular, we investigate the use of a combination of wavelet frequency channels for multi-stream face recognition, using various wavelet subbands as different face signal streams. The proposed schemes extend our recently developed face verification scheme for implementation on mobile devices. We present experimental results on the performance of our proposed schemes for a number of face databases, including a new AV database recorded on a PDA. By analyzing the various experimental data, we demonstrate that the multi-stream approach is more robust against variations in illumination and facial expression than the previous single-stream approach.
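The multi-stream idea treats each wavelet subband as a separate face signal stream and fuses per-stream match scores. A minimal one-level Haar sketch is below; the fusion weights and distance measure are assumptions for illustration, not the paper's scheme.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform -> LL, LH, HL, HH subbands (half resolution)."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2   # row averages
    d = (img[:, 0::2] - img[:, 1::2]) / 2   # row differences
    return {
        "LL": (a[0::2] + a[1::2]) / 2,      # low-frequency approximation
        "LH": (a[0::2] - a[1::2]) / 2,      # horizontal detail
        "HL": (d[0::2] + d[1::2]) / 2,      # vertical detail
        "HH": (d[0::2] - d[1::2]) / 2,      # diagonal detail
    }

def fused_distance(x, y, weights):
    """Weighted sum of per-subband L2 distances: one 'stream' per subband."""
    bx, by = haar2d(x), haar2d(y)
    return sum(w * np.linalg.norm(bx[k] - by[k]) for k, w in weights.items())

# Toy 4x4 "face" images; a uniform brightness shift only affects the LL stream
img = np.arange(16.0).reshape(4, 4)
weights = {"LL": 1.0, "LH": 1.0, "HL": 1.0, "HH": 1.0}
score = fused_distance(img, img + 1.0, weights)
```

Down-weighting the LL stream in such a fusion is one intuition for illumination robustness: a global brightness change lands almost entirely in LL, leaving the detail streams intact.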
Social anxiety and detection of facial untrustworthiness: Spatio-temporal oculomotor profiles.
Gutiérrez-García, Aida; Calvo, Manuel G; Eysenck, Michael W
2018-04-01
Cognitive models posit that social anxiety is associated with biased attention to and interpretation of ambiguous social cues as threatening. We investigated attentional bias (selective early fixation on the eye region) to account for the tendency to distrust ambiguous smiling faces with non-happy eyes (interpretative bias). Eye movements and fixations were recorded while observers viewed video-clips displaying dynamic facial expressions. Low (LSA) and high (HSA) socially anxious undergraduates with clinical levels of anxiety judged expressers' trustworthiness. Social anxiety was unrelated to trustworthiness ratings for faces with congruent happy eyes and a smile, and for neutral expressions. However, social anxiety was associated with reduced trustworthiness rating for faces with an ambiguous smile, when the eyes slightly changed to neutrality, surprise, fear, or anger. Importantly, HSA observers looked earlier and longer at the eye region, whereas LSA observers preferentially looked at the smiling mouth region. This attentional bias in social anxiety generalizes to all the facial expressions, while the interpretative bias is specific for ambiguous faces. Such biases are adaptive, as they facilitate an early detection of expressive incongruences and the recognition of untrustworthy expressers (e.g., with fake smiles), with no false alarms when judging truly happy or neutral faces.
Varying face occlusion detection and iterative recovery for face recognition
NASA Astrophysics Data System (ADS)
Wang, Meng; Hu, Zhengping; Sun, Zhe; Zhao, Shuhuan; Sun, Mei
2017-05-01
In most sparse representation methods for face recognition (FR), occlusion is handled by removing the occluded parts of both query samples and training samples before performing recognition. This practice ignores the global features of the facial image and may lead to unsatisfactory results due to the limitations of local features. Considering this drawback, we propose a method called varying occlusion detection and iterative recovery for FR. The main contributions of our method are as follows: (1) to detect an accurate occlusion area of facial images, a combination of image processing and intersection-based clustering is used for occlusion FR; (2) according to the resulting occlusion map, new integrated facial images are recovered iteratively and fed into the recognition process; and (3) the effectiveness of our method on recognition accuracy is verified by comparing it with three typical occlusion map detection methods. Experiments show that the proposed method has highly accurate detection and recovery performance and that it outperforms several similar state-of-the-art methods against partial contiguous occlusion.
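Schematically, a detect-then-recover loop flags pixels that deviate strongly from a reference estimate and pulls them back toward it over several iterations. The paper's actual pipeline uses image processing with intersection-based clustering for detection and sparse-representation recognition; the sketch below only illustrates the iterative structure, and the threshold, step size, and reference image are assumptions.

```python
import numpy as np

def detect_occlusion(query, reference, thresh):
    """Flag pixels deviating from a reference face estimate by more than thresh."""
    return np.abs(query - reference) > thresh

def recover(query, reference, mask, n_iter=5, step=0.5):
    """Iteratively pull flagged pixels toward the reference estimate.

    Each iteration closes a fraction `step` of the remaining gap, so after
    n_iter iterations the residual shrinks by a factor of (1 - step) ** n_iter.
    """
    x = query.astype(float).copy()
    for _ in range(n_iter):
        x[mask] += step * (reference[mask] - x[mask])
    return x

# Toy example: a flat "face" with a bright 2x2 occlusion patch
reference = np.zeros((8, 8))
query = reference.copy()
query[2:4, 2:4] = 10.0
mask = detect_occlusion(query, reference, thresh=5.0)
recovered = recover(query, reference, mask)
```

In a real system the reference would itself be re-estimated each iteration (e.g. from a sparse reconstruction over the training dictionary), which is what makes the recovery genuinely iterative rather than a one-shot fill-in.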
Image-Based 3D Face Modeling System
NASA Astrophysics Data System (ADS)
Park, In Kyu; Zhang, Hui; Vezhnevets, Vladimir
2005-12-01
This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems: frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract facial parts such as the eyes, nose, mouth, and ears. The shape deformation module uses the detected features to deform a generic head mesh model so that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images with a synthesized texture, and is mapped onto the deformed generic head model. The paper provides a practical system for 3D face modeling that is highly automated by aggregating, customizing, and optimizing a collection of individual computer vision algorithms. The experimental results show a highly automated modeling process that is sufficiently robust to various imaging conditions. The whole model creation, including all optional manual corrections, takes only 2-3 minutes.
Adaptive error correction codes for face identification
NASA Astrophysics Data System (ADS)
Hussein, Wafaa R.; Sellahewa, Harin; Jassim, Sabah A.
2012-06-01
Face recognition in uncontrolled environments is greatly affected by fuzziness of face feature vectors resulting from extreme variation in recording conditions (e.g. illumination, pose or expression) across sessions. Many techniques have been developed to deal with these variations, resulting in improved performance. This paper aims to model template fuzziness as errors and investigate the use of error detection/correction techniques for face recognition in uncontrolled environments. Error correction codes (ECCs) have recently been used for biometric key generation but not on biometric templates. We have investigated error patterns in binary face feature vectors extracted from image windows of differing sizes and for different recording conditions. By estimating statistical parameters for the intra-class and inter-class distributions of Hamming distances in each window, we encode each window with an appropriate ECC. The proposed approach is tested on binarised wavelet templates using two face databases: Extended Yale-B and Yale. We demonstrate that using different combinations of BCH-based ECCs for different blocks and different recording conditions leads to different accuracy rates, and that using ECCs results in significantly improved recognition results.
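The building blocks here are Hamming-distance comparison of binary templates and ECC decoding that absorbs intra-class bit errors. The paper selects BCH codes per block; the sketch below substitutes the simplest possible ECC, a 3x repetition code, purely to show how majority-vote decoding closes a small Hamming gap between a stored template and a noisy probe.

```python
def hamming(a, b):
    """Hamming distance between two equal-length bit lists."""
    return sum(x != y for x, y in zip(a, b))

def rep_encode(bits, n=3):
    """Repeat each bit n times (the simplest error-correcting code)."""
    return [b for b in bits for _ in range(n)]

def rep_decode(code, n=3):
    """Majority-vote decode; corrects up to (n - 1) // 2 flips per bit group."""
    return [1 if sum(code[i:i + n]) > n // 2 else 0
            for i in range(0, len(code), n)]

template = [1, 0, 1, 1, 0]          # toy binarised face template
stored = rep_encode(template)       # what enrolment would store
probe = stored.copy()
probe[4] ^= 1                       # one intra-class bit error in the probe
```

A BCH code plays the same role far more efficiently: its error-correcting capacity per block can be tuned to the estimated intra-class Hamming-distance distribution, so genuine-user errors decode away while impostor templates remain distant.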
Error Rates in Users of Automatic Face Recognition Software
White, David; Dunn, James D.; Schmid, Alexandra C.; Kemp, Richard I.
2015-01-01
In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated ‘candidate lists' selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared the performance of student participants to trained passport officers, who use the system in their daily work, and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced “facial examiners” outperformed these groups by 20 percentage points. We conclude that human performance curtails the accuracy of face recognition systems, potentially reducing benchmark estimates by 50% in operational settings. Mere practice does not attenuate these limits, but the superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems. PMID:26465631
Matsuzaki, Naoyuki; Schwarzlose, Rebecca F.; Nishida, Masaaki; Ofen, Noa; Asano, Eishi
2015-01-01
Behavioral studies demonstrate that a face presented in the upright orientation attracts attention more rapidly than an inverted face. Saccades toward an upright face take place in 100-140 ms following presentation. The present study using electrocorticography determined whether upright face-preferential neural activation, as reflected by augmentation of high-gamma activity at 80-150 Hz, involved the lower-order visual cortex within the first 100 ms post-stimulus presentation. Sampled lower-order visual areas were verified by the induction of phosphenes upon electrical stimulation. These areas resided in the lateral-occipital, lingual, and cuneus gyri along the calcarine sulcus, roughly corresponding to V1 and V2. Measurement of high-gamma augmentation during central (circular) and peripheral (annular) checkerboard reversal pattern stimulation indicated that central-field stimuli were processed by the more polar surface whereas peripheral-field stimuli by the more anterior medial surface. Upright face stimuli, compared to inverted ones, elicited up to 23% larger augmentation of high-gamma activity in the lower-order visual regions at 40-90 ms. Upright face-preferential high-gamma augmentation was more highly correlated with high-gamma augmentation for central than peripheral stimuli. Our observations are consistent with the hypothesis that lower-order visual regions, especially those for the central field, are involved in visual cues for rapid detection of upright face stimuli. PMID:25579446
Kokinous, Jenny; Tavano, Alessandro; Kotz, Sonja A; Schröger, Erich
2017-02-01
The role of spatial frequencies (SFs) is highly debated in emotion perception, but previous work suggests the importance of low SFs for detecting emotion in faces. Furthermore, emotion perception essentially relies on the rapid integration of multimodal information from faces and voices. We used EEG to test the functional relevance of SFs in the integration of emotional and non-emotional audiovisual stimuli. While viewing dynamic face-voice pairs, participants were asked to identify auditory interjections, and the electroencephalogram (EEG) was recorded. Audiovisual integration was measured as auditory facilitation, indexed by the extent of auditory N1 amplitude suppression in the audiovisual compared to an auditory-only condition. We found an interaction of SF filtering and emotion in the auditory response suppression. For neutral faces, larger N1 suppression ensued in the unfiltered and high SF conditions compared to the low SF condition. Angry face perception led to larger N1 suppression in the low SF condition. While the results for the neutral faces indicate that perceptual quality in terms of SF content plays a major role in audiovisual integration, the results for angry faces suggest that early multisensory integration of emotional information favors low SF neural processing pathways, overruling the predictive value of the visual signal per se.
Body orientation and face orientation: two factors controlling apes' begging behavior from humans.
Kaminski, Juliane; Call, Josep; Tomasello, Michael
2004-10-01
A number of animal species have evolved the cognitive ability to detect when they are being watched by other individuals. Precisely what kind of information they use to make this determination is unknown. There is particular controversy in the case of the great apes because different studies report conflicting results. In experiment 1, we presented chimpanzees, orangutans, and bonobos with a situation in which they had to request food from a human observer who was in one of various attentional states. She either stared at the ape, faced the ape with her eyes closed, sat with her back towards the ape, or left the room. In experiment 2, we systematically crossed the observer's body and face orientation so that the observer could have her body and/or face oriented either towards or away from the subject. Results indicated that apes produced more behaviors when they were being watched. They did this not only on the basis of whether they could see the experimenter as a whole, but they were sensitive to her body and face orientation separately. These results suggest that body and face orientation encode two different types of information. Whereas face orientation encodes the observer's perceptual access, body orientation encodes the observer's disposition to transfer food. In contrast to the results on body and face orientation, only two of the tested subjects responded to the state of the observer's eyes.
NASA Astrophysics Data System (ADS)
Marcauteanu, Corina; Negrutiu, Meda; Sinescu, Cosmin; Demjan, Enikö; Hughes, Michael; Bradu, Adrian; Dobre, George; Podoleanu, Adrian G.
2009-07-01
The aim of this study is the early detection and monitoring of occlusal overload in bruxing patients. En-Face Optical coherence tomography (eF-OCT) and fluorescence microscopy (FM) were used for the imaging of several anterior teeth extracted from patients with light active bruxism. We found a characteristic pattern of enamel cracks, that reached the tooth surface. We concluded that the combination of the en-Face OCT and FM is a promising non-invasive alternative technique for reliable monitoring of occlusal overload.
Rigid particulate matter sensor
Hall, Matthew [Austin, TX
2011-02-22
A sensor to detect particulate matter. The sensor includes a first rigid tube, a second rigid tube, a detection surface electrode, and a bias surface electrode. The second rigid tube is mounted substantially parallel to the first rigid tube. The detection surface electrode is disposed on an outer surface of the first rigid tube. The detection surface electrode is disposed to face the second rigid tube. The bias surface electrode is disposed on an outer surface of the second rigid tube. The bias surface electrode is disposed to face the detection surface electrode on the first rigid tube. An air gap exists between the detection surface electrode and the bias surface electrode to allow particulate matter within an exhaust stream to flow between the detection and bias surface electrodes.
Glued to Which Face? Attentional Priority Effect of Female Babyface and Male Mature Face
Zheng, Wenwen; Luo, Ting; Hu, Chuan-Peng; Peng, Kaiping
2018-01-01
A more babyfaced individual is perceived as more child-like, and this impression, known as the babyface effect, has an impact on social life among various age groups. In this study, the influence of babyfaces on visual selective attention was tested in cognitive tasks, demonstrating that female babyfaces and male mature faces draw participants' attention such that observers take their eyes off them more slowly. In Experiment 1, a detection task was used to test the influence of babyfaces on visual selective attention. In this experiment, a babyface and a mature face of the same gender were presented simultaneously, with a letter on one of them. Reaction times were shorter when the target letter was overlaid on a female babyface or a male mature face, suggesting an attention capture effect. To explore how this competition is influenced by attentional resources, we conducted Experiment 2 with a spatial cueing paradigm, controlling attentional resources through cue validity and inter-stimulus interval. In this task, the female babyface and the male mature face prolonged responses to spatially separated targets under the condition of an invalid, long-interval pre-cue. This observation replicated the result of Experiment 1 and indicates that female babyfaces and male mature faces glued visual selective attention once attentional resources were directed to them. To further investigate the subliminal influence of babyfaces, we used a continuous flash suppression paradigm in Experiment 3. The results again showed the advantage of female babyfaces and male mature faces: they broke suppression faster than other faces. Our results provide primary evidence that female babyfaces and male mature faces can reliably glue visual selective attention, both supra- and sub-liminally. PMID:29559946
Neural circuitry of emotional face processing in autism spectrum disorders.
Monk, Christopher S; Weng, Shih-Jen; Wiggins, Jillian Lee; Kurapati, Nikhil; Louro, Hugo M C; Carrasco, Melisa; Maslowsky, Julie; Risi, Susan; Lord, Catherine
2010-03-01
Autism spectrum disorders (ASD) are associated with severe impairments in social functioning. Because faces provide nonverbal cues that support social interactions, many studies of ASD have examined neural structures that process faces, including the amygdala, ventromedial prefrontal cortex and superior and middle temporal gyri. However, increases or decreases in activation are often contingent on the cognitive task. Specifically, the cognitive domain of attention influences group differences in brain activation. We investigated brain function abnormalities in participants with ASD using a task that monitored attention bias to emotional faces. Twenty-four participants (12 with ASD, 12 controls) completed a functional magnetic resonance imaging study while performing an attention cuing task with emotional (happy, sad, angry) and neutral faces. In response to emotional faces, those in the ASD group showed greater right amygdala activation than those in the control group. A preliminary psychophysiological connectivity analysis showed that ASD participants had stronger positive right amygdala and ventromedial prefrontal cortex coupling and weaker positive right amygdala and temporal lobe coupling than controls. There were no group differences in the behavioural measure of attention bias to the emotional faces. The small sample size may have affected our ability to detect additional group differences. When attention bias to emotional faces was equivalent between ASD and control groups, ASD was associated with greater amygdala activation. 
Preliminary analyses showed that ASD participants had stronger connectivity between the amygdala and ventromedial prefrontal cortex (a network implicated in emotional modulation) and weaker connectivity between the amygdala and temporal lobe (a pathway involved in the identification of facial expressions), although areas of group differences were generally in a more anterior region of the temporal lobe than is typically reported for emotional face processing. These alterations in connectivity are consistent with emotion and face processing disturbances in ASD.
Real-time detection and discrimination of visual perception using electrocorticographic signals
NASA Astrophysics Data System (ADS)
Kapeller, C.; Ogawa, H.; Schalk, G.; Kunii, N.; Coon, W. G.; Scharinger, J.; Guger, C.; Kamada, K.
2018-06-01
Objective. Several neuroimaging studies have demonstrated that the ventral temporal cortex contains specialized regions that process visual stimuli. This study investigated the spatial and temporal dynamics of electrocorticographic (ECoG) responses to different types and colors of visual stimulation presented to four human participants, and demonstrated a real-time decoder that detects and discriminates responses to untrained natural images. Approach. ECoG signals from the participants were recorded while they were shown colored and greyscale versions of seven types of visual stimuli (images of faces, objects, bodies, line drawings, digits, and kanji and hiragana characters), resulting in 14 classes for discrimination (experiment I). Additionally, a real-time system asynchronously classified ECoG responses to faces, kanji and black screens presented via a monitor (experiment II), or to natural scenes (i.e. the face of an experimenter, natural images of faces and kanji, and a mirror) (experiment III). Outcome measures in all experiments included the discrimination performance across types based on broadband γ activity. Main results. Experiment I demonstrated an offline classification accuracy of 72.9% when discriminating among the seven types (without color separation). Further discrimination of grey versus colored images reached an accuracy of 67.1%. Discriminating all colors and types (14 classes) yielded an accuracy of 52.1%. In experiments II and III, the real-time decoder correctly detected 73.7% of responses to face, kanji and black-screen stimuli and 74.8% of responses to presented natural scenes. Significance. Seven different types and their color information (either grey or color) could be detected and discriminated using broadband γ activity. Discrimination performance was maximized when spatial and temporal information were combined.
The discrimination of stimulus color information provided the first ECoG-based evidence for color-related population-level cortical broadband γ responses in humans. Stimulus categories can be detected by their ECoG responses in real time within 500 ms with respect to stimulus onset.
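The broadband γ feature driving such a decoder can be sketched as a simple band-power computation. The sampling rate, band edges, and test signals below are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean power of `signal` within the [f_lo, f_hi] Hz band via the FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[mask].mean()

# Toy check: an 80 Hz oscillation carries more power in an assumed
# broadband-gamma band (70-170 Hz) than a 10 Hz oscillation does.
fs = 1000                              # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
gamma_like = np.sin(2 * np.pi * 80 * t)
alpha_like = np.sin(2 * np.pi * 10 * t)
print(band_power(gamma_like, fs, 70, 170) > band_power(alpha_like, fs, 70, 170))  # True
```

A real-time system would compute this feature over a sliding window per electrode and feed the resulting vector to a classifier.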
Does Face Inversion Change Spatial Frequency Tuning?
ERIC Educational Resources Information Center
Willenbockel, Verena; Fiset, Daniel; Chauvin, Alan; Blais, Caroline; Arguin, Martin; Tanaka, James W.; Bub, Daniel N.; Gosselin, Frederic
2010-01-01
The authors examined spatial frequency (SF) tuning of upright and inverted face identification using an SF variant of the Bubbles technique (F. Gosselin & P. G. Schyns, 2001). In Experiment 1, they validated the SF Bubbles technique in a plaid detection task. In Experiments 2a-c, the SFs used for identifying upright and inverted inner facial…
Research on facial expression simulation based on depth image
NASA Astrophysics Data System (ADS)
Ding, Sha-sha; Duan, Jin; Zhao, Yi-wu; Xiao, Bo; Wang, Hao
2017-11-01
Nowadays, facial expression simulation is widely used in film and television special effects, human-computer interaction, and many other fields. Facial expressions are captured with a Kinect camera. An AAM algorithm based on statistical information is employed to detect and track faces, and a 2D regression algorithm is applied to align the feature points; facial feature points are detected automatically, while the 3D cartoon model's feature points are marked manually. The aligned feature points are mapped using keyframe techniques. To improve the animation, non-feature points are interpolated based on empirical models, and the mapping and interpolation are carried out under the constraint of Bézier curves. The feature points on the cartoon face model can thus be driven as the facial expression varies, achieving real-time cartoon facial expression simulation. Experimental results show that the method proposed in this paper accurately simulates facial expressions. Finally, our method is compared with a previous method; the data show that our method greatly improves implementation efficiency.
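Bézier-constrained interpolation of non-feature points can be sketched as evaluating a cubic curve between two tracked landmarks. The landmark and control-point values below are hypothetical, since the abstract does not give them:

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameters t in [0, 1]."""
    t = np.asarray(t)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Interpolate hypothetical non-feature points along a contour between two
# tracked feature points (endpoints p0, p3); the inner control points
# p1, p2 shape the curve and are purely illustrative values.
p0, p3 = np.array([0.0, 0.0]), np.array([4.0, 0.0])
p1, p2 = np.array([1.0, 1.5]), np.array([3.0, 1.5])
ts = np.linspace(0, 1, 5)
points = cubic_bezier(p0, p1, p2, p3, ts)
print(points[0], points[-1])   # the curve ends coincide with the feature points
```

Because the curve passes exactly through its endpoints, interpolated non-feature points stay consistent with the tracked landmarks as the expression changes.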
Quantifying facial expression recognition across viewing conditions.
Goren, Deborah; Wilson, Hugh R
2006-04-01
Facial expressions are key to social interactions and to assessment of potential danger in various situations. Therefore, our brains must be able to recognize facial expressions when they are transformed in biologically plausible ways. We used synthetic happy, sad, angry and fearful faces to determine the amount of geometric change required to recognize these emotions during brief presentations. Five-alternative forced choice conditions involving central viewing, peripheral viewing and inversion were used to study recognition among the four emotions. Two-alternative forced choice was used to study affect discrimination when spatial frequency information in the stimulus was modified. The results show an emotion and task-dependent pattern of detection. Facial expressions presented with low peak frequencies are much harder to discriminate from neutral than faces defined by either mid or high peak frequencies. Peripheral presentation of faces also makes recognition much more difficult, except for happy faces. Differences between fearful detection and recognition tasks are probably due to common confusions with sadness when recognizing fear from among other emotions. These findings further support the idea that these emotions are processed separately from each other.
Does the perception of moving eyes trigger reflexive visual orienting in autism?
Swettenham, John; Condie, Samantha; Campbell, Ruth; Milne, Elizabeth; Coleman, Mike
2003-01-01
Does movement of the eyes in one direction or another function as an automatic attentional cue to a location of interest? Two experiments explored the effect of directional eye movement in a full face on the speed of detecting a subsequently presented location target in young people with autism and in control participants. Our aim was to investigate whether a low-level perceptual impairment underlies the delay in gaze following characteristic of autism. The participants' task was to detect a target appearing on the left or right of the screen either 100 ms or 800 ms after a face cue appeared with eyes averting to the left or right. Despite instructions to ignore eye movement in the face cue, people with autism and control adolescents were quicker to detect targets that had been preceded by an eye movement cue congruent with the target location than targets preceded by an incongruent eye movement cue. The attention shifts are thought to be reflexive because the cue was to be ignored, and because the effect was found even when the cue-target duration was short (100 ms). Because the effect persisted even when the face was inverted (experiment two), it would seem that the direction of eye movement can provide a powerful (involuntary) cue to a location. PMID:12639330
Activation of the right fronto-temporal cortex during maternal facial recognition in young infants.
Carlsson, Jakob; Lagercrantz, Hugo; Olson, Linus; Printz, Gordana; Bartocci, Marco
2008-09-01
Within the first days of life infants can already recognize their mother. This ability is based on several sensory mechanisms and increases during the first year of life, having its most crucial phase between 6 and 9 months when cortical circuits develop. The underlying cortical structures that are involved in this process are still unknown. Herein we report how the prefrontal cortices of healthy 6- to 9-month-old infants react to the sight of their mother's faces compared to that of an unknown female face. Concentrations of oxygenated haemoglobin [HbO2] and deoxygenated haemoglobin [HHb] were measured using near infrared spectroscopy (NIRS) in both fronto-temporal and occipital areas on the right side during the exposure to maternal and unfamiliar faces. The infants exhibited a distinct and significantly higher activation-related haemodynamic response in the right fronto-temporal cortex following exposure to the image of their mother's face, [HbO2] (0.75 micromol/L, p < 0.001), as compared to that of an unknown face (0.25 micromol/L, p < 0.001). Event-related haemodynamic changes, suggesting cortical activation, in response to the sight of human faces were detected in 6- to 9-month old children. The right fronto-temporal cortex appears to be involved in face recognition processes at this age.
Affective learning modulates spatial competition during low-load attentional conditions.
Lim, Seung-Lark; Padmala, Srikanth; Pessoa, Luiz
2008-04-01
It has been hypothesized that the amygdala mediates the processing advantage of emotional items. In the present study, we employed functional magnetic resonance imaging (fMRI) to investigate how fear conditioning affected the visual processing of task-irrelevant faces. We hypothesized that faces previously paired with shock (threat faces) would more effectively vie for processing resources during conditions involving spatial competition. To investigate this question, following conditioning, participants performed a letter-detection task on an array of letters that was superimposed on task-irrelevant faces. Attentional resources were manipulated by having participants perform an easy or a difficult search task. Our findings revealed that threat fearful faces evoked stronger responses in the amygdala and fusiform gyrus relative to safe fearful faces during low-load attentional conditions, but not during high-load conditions. Consistent with the increased processing of shock-paired stimuli during the low-load condition, such stimuli exhibited increased behavioral priming and fMRI repetition effects relative to unpaired faces during a subsequent implicit-memory task. Overall, our results suggest a competition model in which affective significance signals from the amygdala may constitute a key modulatory factor determining the neural fate of visual stimuli. In addition, it appears that such competitive advantage is only evident when sufficient processing resources are available to process the affective stimulus.
Hill, LaBarron K.; Williams, DeWayne P.; Thayer, Julian F.
2016-01-01
Human faces automatically attract visual attention and this process appears to be guided by social group memberships. In two experiments, we examined how social groups guide selective attention toward in-group and out-group faces. Black and White participants detected a target letter among letter strings superimposed on faces (Experiment 1). White participants were less accurate on trials with racial out-group (Black) compared to in-group (White) distractor faces. Likewise, Black participants were less accurate on trials with racial out-group (White) compared to in-group (Black) distractor faces. However, this pattern of out-group bias was only evident under high perceptual load—when the task was visually difficult. To examine the malleability of this pattern of racial bias, a separate sample of participants were assigned to mixed-race minimal groups (Experiment 2). Participants assigned to groups were less accurate on trials with their minimal in-group members compared to minimal out-group distractor faces, regardless of race. Again, this pattern of out-group bias was only evident under high perceptual load. Taken together, these results suggest that social identity guides selective attention toward motivationally relevant social groups—shifting from out-group bias in the domain of race to in-group bias in the domain of minimal groups—when perceptual resources are scarce. PMID:27556646
Burt, Adelaide; Hugrass, Laila; Frith-Belvedere, Tash; Crewther, David
2017-01-01
Low spatial frequency (LSF) visual information is extracted rapidly from fearful faces, suggesting magnocellular involvement. Autistic phenotypes demonstrate altered magnocellular processing, which we propose contributes to a decreased P100 evoked response to LSF fearful faces. Here, we investigated whether rapid processing of fearful facial expressions differs for groups of neurotypical adults with low and high scores on the Autistic Spectrum Quotient (AQ). We created hybrid face stimuli with low and high spatial frequency filtered, fearful, and neutral expressions. Fearful faces produced higher amplitude P100 responses than neutral faces in the low AQ group, particularly when the hybrid face contained a LSF fearful expression. By contrast, there was no effect of fearful expression on P100 amplitude in the high AQ group. Consistent with evidence linking magnocellular differences with autistic personality traits, our non-linear VEP results showed that the high AQ group had higher amplitude K2.1 responses than the low AQ group, which is indicative of less efficient magnocellular recovery. Our results suggest that magnocellular LSF processing of a human face may be the initial visual cue used to rapidly and automatically detect fear, but that this cue functions atypically in those with high autistic tendency.
Appearance-Based Vision and the Automatic Generation of Object Recognition Programs
1992-07-01
Object appearances are grouped into equivalence classes with respect to visible features; the equivalence classes are called aspects. A recognition strategy is generated from these aspects. Table 1 (Summary of Sensors) lists, for each sensor (e.g., a passive edge detector sensing lines; passive shape-from-shading), the vertex, edge, and face features it can sense. An example of the detectability computation for a light-stripe range finder is shown in Figure 2 (detectability of a face for a light-stripe range finder).
Embedded wavelet-based face recognition under variable position
NASA Astrophysics Data System (ADS)
Cotret, Pascal; Chevobbe, Stéphane; Darouich, Mehdi
2015-02-01
For several years, face recognition has been a hot topic in the image processing field: this technique is applied in several domains such as CCTV, electronic devices delocking and so on. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of subject position robustness and performance on various systems. The use of wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale face database B*), that subject position in a 3D space can vary up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on approximation coefficients of the image wavelet transform: results are still satisfying after 3 levels of decomposition. Furthermore, face database size can be divided by a factor of 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed; that is the reason why compression techniques such as wavelet transform are interesting. Furthermore, it leads to a low-complexity face detection stage compliant with limited computation resources available on such systems. The approach described in this work is tested on three platforms from a standard x86-based computer towards nanocomputers such as RaspberryPi and SECO boards. For K = 3 and a database with 40 faces, the execution mean time for one frame is 0.64 ms on a x86-based computer, 9 ms on a SECO board and 26 ms on a RaspberryPi (B model).
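The 2^(2K) storage reduction follows from keeping only the approximation band at each of K decomposition levels, each of which quarters the pixel count. A minimal numpy sketch with a Haar wavelet (the abstract does not specify the wavelet; Haar is assumed here for simplicity):

```python
import numpy as np

def haar_approx(img):
    """One level of the 2-D Haar transform, keeping only the approximation
    band: scaled 2x2 block averages, which quarter the pixel count."""
    return (img[0::2, 0::2] + img[0::2, 1::2]
            + img[1::2, 0::2] + img[1::2, 1::2]) / 2.0

# K = 3 decomposition levels shrink each stored face template by
# 2^(2K) = 4**3 = 64, matching the reduction reported in the paper.
face = np.random.rand(128, 128)       # stand-in for a face ROI
approx = face
for _ in range(3):
    approx = haar_approx(approx)
print(face.size // approx.size)       # 64
```

PCA-based recognition would then operate on the 16x16 approximation band instead of the full 128x128 image.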
Facelock: familiarity-based graphical authentication
McLachlan, Jane L.; Renaud, Karen
2014-01-01
Authentication codes such as passwords and PIN numbers are widely used to control access to resources. One major drawback of these codes is that they are difficult to remember. Account holders are often faced with a choice between forgetting a code, which can be inconvenient, or writing it down, which compromises security. In two studies, we test a new knowledge-based authentication method that does not impose memory load on the user. Psychological research on face recognition has revealed an important distinction between familiar and unfamiliar face perception: When a face is familiar to the observer, it can be identified across a wide range of images. However, when the face is unfamiliar, generalisation across images is poor. This contrast can be used as the basis for a personalised ‘facelock’, in which authentication succeeds or fails based on image-invariant recognition of faces that are familiar to the account holder. In Study 1, account holders authenticated easily by detecting familiar targets among other faces (97.5% success rate), even after a one-year delay (86.1% success rate). Zero-acquaintance attackers were reduced to guessing (<1% success rate). Even personal attackers who knew the account holder well were rarely able to authenticate (6.6% success rate). In Study 2, we found that shoulder-surfing attacks by strangers could be defeated by presenting different photos of the same target faces in observed and attacked grids (1.9% success rate). Our findings suggest that the contrast between familiar and unfamiliar face recognition may be useful for developers of graphical authentication systems. PMID:25024913
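The zero-acquaintance guessing rate falls geometrically with the number of challenge grids. A small sketch, with the grid size and grid count as illustrative assumptions rather than the study's actual parameters:

```python
# If authentication requires picking the one familiar face in each of
# n_grids successive grids of n_faces_per_grid faces (illustrative
# assumptions, not values taken from the paper), an attacker guessing
# at random succeeds with probability (1/N)**G.
def guess_rate(n_faces_per_grid, n_grids):
    return (1.0 / n_faces_per_grid) ** n_grids

# e.g. 4 grids of 9 faces -> well below the <1% rate reported
print(guess_rate(9, 4))
```

Adding one more grid divides the attacker's success probability by the grid size, so modest grid counts already push random guessing below any practical threshold.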
Sensitive periods for the functional specialization of the neural system for human face processing.
Röder, Brigitte; Ley, Pia; Shenoy, Bhamy H; Kekunnaya, Ramesh; Bottari, Davide
2013-10-15
The aim of the study was to identify possible sensitive phases in the development of the processing system for human faces. We tested the neural processing of faces in 11 humans who had been blind from birth and had undergone cataract surgery between 2 mo and 14 y of age. Pictures of faces and houses, scrambled versions of these pictures, and pictures of butterflies were presented while event-related potentials were recorded. Participants had to respond to the pictures of butterflies (targets) only. All participants, even those who had been blind from birth for several years, were able to categorize the pictures and to detect the targets. In healthy controls and in a group of visually impaired individuals with a history of developmental or incomplete congenital cataracts, the well-known enhancement of the N170 (negative peak around 170 ms) event-related potential to faces emerged, but a face-sensitive response was not observed in humans with a history of congenital dense cataracts. By contrast, this group showed a similar N170 response to all visual stimuli, which was indistinguishable from the N170 response to faces in the controls. The face-sensitive N170 response has been associated with the structural encoding of faces. Therefore, these data provide evidence for the hypothesis that the functional differentiation of category-specific neural representations in humans, presumably involving the elaboration of inhibitory circuits, is dependent on experience and linked to a sensitive period. Such functional specialization of neural systems seems necessary to achieve high processing proficiency.
Whole-face procedures for recovering facial images from memory.
Frowd, Charlie D; Skelton, Faye; Hepton, Gemma; Holden, Laura; Minahil, Simra; Pitchford, Melanie; McIntyre, Alex; Brown, Charity; Hancock, Peter J B
2013-06-01
Research has indicated that traditional methods for accessing facial memories usually yield unidentifiable images. Recent research, however, has made important improvements in this area to the witness interview, the method used for constructing the face, and the recognition of finished composites. Here, we investigated whether three of these improvements would produce even more recognisable images when used in conjunction with each other. The techniques are holistic in nature: they involve processes which operate on an entire face. Forty participants first inspected an unfamiliar target face. Nominally 24 h later, they were interviewed using a standard type of cognitive interview (CI) to recall the appearance of the target, or an enhanced 'holistic' interview where the CI was followed by procedures for focussing on the target's character. Participants then constructed a composite using EvoFIT, a recognition-type system that requires repeatedly selecting items from face arrays, with 'breeding', to 'evolve' a composite. They either saw faces in these arrays with blurred external features, or an enhanced method where these faces were presented with masked external features. Then, further participants attempted to name the composites, first by looking at the face front-on, the normal method, and then a second time by looking at the face side-on, which research demonstrates facilitates recognition. All techniques improved correct naming on their own, but together promoted highly recognisable composites with mean naming at 74% correct. The implication is that these techniques, if used together by practitioners, should substantially increase the detection of suspects using this forensic method of person identification. Copyright © 2013 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.
Congenital prosopagnosia: face-blind from birth.
Behrmann, Marlene; Avidan, Galia
2005-04-01
Congenital prosopagnosia refers to the deficit in face processing that is apparent from early childhood in the absence of any underlying neurological basis and in the presence of intact sensory and intellectual function. Several such cases have been described recently and elucidating the mechanisms giving rise to this impairment should aid our understanding of the psychological and neural mechanisms mediating face processing. Fundamental questions include: What is the nature and extent of the face-processing deficit in congenital prosopagnosia? Is the deficit related to a more general perceptual deficit such as the failure to process configural information? Are any neural alterations detectable using fMRI, ERP or structural analyses of the anatomy of the ventral visual cortex? We discuss these issues in relation to the existing literature and suggest directions for future research.
Neural Correlates of the In-Group Memory Advantage on the Encoding and Recognition of Faces
Herzmann, Grit; Curran, Tim
2013-01-01
People have a memory advantage for faces that belong to the same group, for example, that attend the same university or have the same personality type. Faces from such in-group members are assumed to receive more attention during memory encoding and are therefore recognized more accurately. Here we use event-related potentials related to memory encoding and retrieval to investigate the neural correlates of the in-group memory advantage. Using the minimal group procedure, subjects were classified based on a bogus personality test as belonging to one of two personality types. While the electroencephalogram was recorded, subjects studied and recognized faces supposedly belonging to the subject’s own and the other personality type. Subjects recognized in-group faces more accurately than out-group faces but the effect size was small. Using the individual behavioral in-group memory advantage in multivariate analyses of covariance, we determined neural correlates of the in-group advantage. During memory encoding (300 to 1000 ms after stimulus onset), subjects with a high in-group memory advantage elicited more positive amplitudes for subsequently remembered in-group than out-group faces, showing that in-group faces received more attention and elicited more neural activity during initial encoding. Early during memory retrieval (300 to 500 ms), frontal brain areas were more activated for remembered in-group faces indicating an early detection of group membership. Surprisingly, the parietal old/new effect (600 to 900 ms) thought to indicate recollection processes differed between in-group and out-group faces independent from the behavioral in-group memory advantage. This finding suggests that group membership affects memory retrieval independent of memory performance. 
Comparisons with a previous study on the other-race effect, another memory phenomenon influenced by social classification of faces, suggested that the in-group memory advantage is dominated by top-down processing whereas the other-race effect is also influenced by extensive perceptual experience. PMID:24358226
Le, T Hoang Ngan; Luu, Khoa; Savvides, Marios
2013-08-01
Robust facial hair detection and segmentation is a highly valued soft biometric attribute for carrying out forensic facial analysis. In this paper, we propose a novel and fully automatic system, called SparCLeS, for beard/moustache detection and segmentation in challenging facial images. SparCLeS uses the multiscale self-quotient (MSQ) algorithm to preprocess facial images and deal with illumination variation. Histogram of oriented gradients (HOG) features are extracted from the preprocessed images and a dynamic sparse classifier is built using these features to classify a facial region as either containing skin or facial hair. A level set based approach, which makes use of the advantages of both global and local information, is then used to segment the regions of a face containing facial hair. Experimental results demonstrate the effectiveness of our proposed system in detecting and segmenting facial hair regions in images drawn from three databases, i.e., the NIST Multiple Biometric Grand Challenge (MBGC) still face database, the NIST Color Facial Recognition Technology FERET database, and the Labeled Faces in the Wild (LFW) database.
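The HOG features feeding the sparse skin/facial-hair classifier can be sketched in simplified form: one orientation histogram per cell, weighted by gradient magnitude, without the block normalization of full HOG. The patch and cell sizes below are illustrative:

```python
import numpy as np

def hog_cell_features(patch, n_bins=9):
    """Simplified HOG: one orientation histogram per 8x8 cell,
    weighted by gradient magnitude (no block normalization)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180       # unsigned orientation
    h, w = patch.shape
    feats = []
    for i in range(0, h - 7, 8):
        for j in range(0, w - 7, 8):
            cell_ang = ang[i:i + 8, j:j + 8].ravel()
            cell_mag = mag[i:i + 8, j:j + 8].ravel()
            hist, _ = np.histogram(cell_ang, bins=n_bins,
                                   range=(0, 180), weights=cell_mag)
            feats.append(hist)
    return np.concatenate(feats)

# A 32x32 facial region yields 4x4 cells x 9 bins = 144 features, which
# would then feed a classifier labeling the region as skin or facial hair.
patch = np.random.rand(32, 32)
print(hog_cell_features(patch).shape)  # (144,)
```

Facial hair regions produce many strong, irregularly oriented gradients, so their histograms differ markedly from the smoother gradients of bare skin, which is what makes this descriptor discriminative here.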
Driver face tracking using semantics-based feature of eyes on single FPGA
NASA Astrophysics Data System (ADS)
Yu, Ying-Hao; Chen, Ji-An; Ting, Yi-Siang; Kwok, Ngaiming
2017-06-01
Tracking the driver's face is essential for driving safety control. Such systems are usually designed with complicated algorithms that recognize the driver's face by means of powerful computers. The design challenge concerns not only detection rate but also component damage under harsh conditions of vibration, heat, and humidity. A feasible strategy to counteract this damage is to integrate the entire system onto a single chip, achieving minimum installation dimensions, weight, power consumption, and exposure to air. Meanwhile, an effective methodology is also indispensable to overcome the dilemma of achieving real-time performance with the low computing capability of a low-end chip. In this paper, a novel driver face tracking system is proposed that employs semantics-based vague image representation (SVIR) for minimum hardware resource usage on an FPGA while guaranteeing real-time performance. Our experimental results indicate that the proposed face tracking system is viable and promising for future smart car designs.
NASA Astrophysics Data System (ADS)
Iqtait, M.; Mohamad, F. S.; Mamat, M.
2018-03-01
Biometrics is a pattern recognition approach used for the automatic recognition of persons based on an individual's characteristics and features. Face recognition with a high recognition rate is still a challenging task, usually accomplished in three phases: face detection, feature extraction, and expression classification. Precise and robust localization of feature points is a complicated and difficult issue in face recognition. Cootes proposed the multi-resolution Active Shape Model (ASM) algorithm, which extracts a specified shape accurately and efficiently. As an improvement on ASM, the Active Appearance Model (AAM) algorithm extracts both the shape and the texture of a specified object simultaneously. In this paper we give more details about the two algorithms and present experimental results, testing their performance on one dataset of faces. We found that ASM is faster and achieves more accurate feature point localization than AAM, but AAM achieves a better match to the texture.
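The shape component that ASM fits can be sketched as a point-distribution model: a mean shape plus PCA modes of variation. This is a minimal illustration of the statistical model only, not the multi-resolution search; the landmark data are synthetic:

```python
import numpy as np

def shape_model(shapes, n_modes=2):
    """Point-distribution model at the heart of ASM: mean shape plus the
    principal modes of variation of aligned training shapes.
    `shapes` is (n_samples, 2 * n_points), assumed already aligned."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # Eigenvectors of the covariance matrix are the modes of shape variation.
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]
    return mean, eigvecs[:, order[:n_modes]]

# Synthetic example: 50 noisy aligned "shapes" of 4 landmark points.
rng = np.random.default_rng(0)
base = np.array([0, 0, 1, 0, 1, 1, 0, 1], dtype=float)
shapes = base + 0.05 * rng.standard_normal((50, 8))
mean, modes = shape_model(shapes)
# Any plausible shape is approximated as mean + modes @ b for a small b.
print(mean.shape, modes.shape)  # (8,) (8, 2)
```

During search, ASM iteratively moves each landmark toward nearby image evidence and then projects the result back into this model's span, which is what keeps the fitted shape face-like; AAM extends the same idea with a texture model.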
Beckes, Lane; Coan, James A; Morris, James P
2013-08-01
Not much is known about the neural and psychological processes that promote the initial conditions necessary for positive social bonding. This study explores one method of conditioned bonding utilizing dynamics related to the social regulation of emotion and attachment theory. This form of conditioning involves repeated presentations of negative stimuli followed by images of warm, smiling faces. L. Beckes, J. Simpson, and A. Erickson (2010) found that this conditioning procedure results in positive associations with the faces measured via a lexical decision task, suggesting they are perceived as comforting. This study found that the P1 ERP was similarly modified by this conditioning procedure and the P1 amplitude predicted lexical decision times to insecure words primed by the faces. The findings have implications for understanding how the brain detects supportive people, the flexibility and modifiability of early ERP components, and social bonding more broadly. Copyright © 2013 Society for Psychophysiological Research.
NASA Astrophysics Data System (ADS)
Huang, Shu-Wei; Yang, Shan-Yi; Huang, Wei-Cheng; Chiu, Han-Mo; Lu, Chih-Wei
2011-06-01
Most colorectal cancers develop from adenomatous polyps. Adenomatous lesions have a well-documented relationship to colorectal cancer in previous studies. Thus, detecting the morphological changes between polyp and tumor allows early diagnosis of colorectal cancer and simultaneous removal of lesions. Optical coherence tomography (OCT) has several advantages, including high resolution and non-invasive cross-sectional imaging in vivo. In this study, we investigated the relationship between B-scan OCT image features and the histology of malignant human colorectal tissues, as well as between the en-face OCT image and the endoscopic image pattern. The in-vitro experiments were performed with a swept-source optical coherence tomography (SS-OCT) system; the swept source has a center wavelength of 1310 nm and a wavelength scanning range of 160 nm, which produced 6 um axial resolution. In this study, the en-face images were reconstructed by integrating the axial values in 3D OCT images. The reconstructed en-face images show the same roundish or gyrus-like patterns as the endoscopy images. The pattern of en-face images relates to the stage of colon cancer. The endoscopic OCT technique would provide three-dimensional imaging and rapid reconstruction of en-face images, which can increase the speed of colon cancer diagnosis. Our results indicate great potential for early detection of colorectal adenomas using OCT imaging.
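The en-face reconstruction step described above (integrating axial values of the 3D volume) can be sketched as follows; the array layout and the normalization step are assumptions for illustration, not details from the paper:

```python
import numpy as np

def reconstruct_en_face(volume):
    """Collapse a 3D OCT volume, assumed shaped (z, x, y) with z the
    axial dimension, into a 2D en-face image by integrating intensity
    along z, then normalizing to [0, 1] for display."""
    en_face = volume.sum(axis=0)
    lo, hi = en_face.min(), en_face.max()
    if hi > lo:
        return (en_face - lo) / (hi - lo)
    return np.zeros_like(en_face)

# Example: a synthetic volume with 100 axial samples over a 64x64 grid
vol = np.random.default_rng(0).random((100, 64, 64))
img = reconstruct_en_face(vol)
print(img.shape)  # (64, 64)
```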
Detecting and Categorizing Fleeting Emotions in Faces
Sweeny, Timothy D.; Suzuki, Satoru; Grabowecky, Marcia; Paller, Ken A.
2013-01-01
Expressions of emotion are often brief, providing only fleeting images from which to base important social judgments. We sought to characterize the sensitivity and mechanisms of emotion detection and expression categorization when exposure to faces is very brief, and to determine whether these processes dissociate. Observers viewed 2 backward-masked facial expressions in quick succession, 1 neutral and the other emotional (happy, fearful, or angry), in a 2-interval forced-choice task. On each trial, observers attempted to detect the emotional expression (emotion detection) and to classify the expression (expression categorization). Above-chance emotion detection was possible with extremely brief exposures of 10 ms and was most accurate for happy expressions. We compared categorization among expressions using a d′ analysis, and found that categorization was usually above chance for angry versus happy and fearful versus happy, but consistently poor for fearful versus angry expressions. Fearful versus angry categorization was poor even when only negative emotions (fearful, angry, or disgusted) were used, suggesting that this categorization is poor independent of decision context. Inverting faces impaired angry versus happy categorization, but not emotion detection, suggesting that information from facial features is used differently for emotion detection and expression categorizations. Emotion detection often occurred without expression categorization, and expression categorization sometimes occurred without emotion detection. These results are consistent with the notion that emotion detection and expression categorization involve separate mechanisms. PMID:22866885
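The d′ analysis used above to compare categorization among expressions follows the standard signal-detection formula d′ = z(hit rate) − z(false-alarm rate); a minimal sketch, in which the clamping of extreme rates is a common convention rather than the paper's stated correction:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, eps=1e-6):
    """Sensitivity index d' = z(H) - z(F), with z the inverse of the
    standard normal CDF. Rates of exactly 0 or 1 are clamped to keep
    z finite (a common convention; assumed here)."""
    z = NormalDist().inv_cdf
    clamp = lambda p: min(max(p, eps), 1 - eps)
    return z(clamp(hit_rate)) - z(clamp(fa_rate))

print(d_prime(0.5, 0.5))  # 0.0 (chance performance)
```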
Appearance-based multimodal human tracking and identification for healthcare in the digital home.
Yang, Mau-Tsuen; Huang, Shen-Yen
2014-08-05
There is an urgent need for intelligent home surveillance systems to provide home security, monitor health conditions, and detect emergencies of family members. One of the fundamental problems to realize the power of these intelligent services is how to detect, track, and identify people at home. Compared to RFID tags that need to be worn all the time, vision-based sensors provide a natural and nonintrusive solution. Observing that body appearance and body build, as well as face, provide valuable cues for human identification, we model and record multi-view faces, full-body colors and shapes of family members in an appearance database by using two Kinects located at a home's entrance. Then the Kinects and another set of color cameras installed in other parts of the house are used to detect, track, and identify people by matching the captured color images with the registered templates in the appearance database. People are detected and tracked by multisensor fusion (Kinects and color cameras) using a Kalman filter that can handle duplicate or partial measurements. People are identified by multimodal fusion (face, body appearance, and silhouette) using a track-based majority voting. Moreover, the appearance-based human detection, tracking, and identification modules can cooperate seamlessly and benefit from each other. Experimental results show the effectiveness of the human tracking across multiple sensors and human identification considering the information of multi-view faces, full-body clothes, and silhouettes. The proposed home surveillance system can be applied to domestic applications in digital home security and intelligent healthcare.
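The track-based majority voting used for identification can be sketched as below; the vote representation is an assumption (each frame along a track contributes one identity label from the face, body-appearance, or silhouette matchers, with `None` for frames yielding no match):

```python
from collections import Counter

def track_identity(frame_votes):
    """Track-based majority voting: label the whole track with the
    identity that received the most per-frame votes, ignoring frames
    with no match. Returns None if no frame produced a vote."""
    counts = Counter(v for v in frame_votes if v is not None)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# Hypothetical track: three matchers agree on "alice" in most frames
print(track_identity(["alice", "bob", "alice", None, "alice"]))  # alice
```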
Nishii, Ryuichi; Tong, William; Wendt, Richard; Soghomonyan, Suren; Mukhopadhyay, Uday; Balatoni, Julius; Mawlawi, Osama; Bidaut, Luc; Tinkey, Peggy; Borne, Agatha; Alauddin, Mian; Gonzalez-Lepera, Carlos; Yang, Bijun; Gelovani, Juri G
2012-04-01
To facilitate the clinical translation of (18)F-fluoroacetate ((18)F-FACE), the pharmacokinetics, biodistribution, radiolabeled metabolites, radiation dosimetry, and pharmacological safety of diagnostic doses of (18)F-FACE were determined in non-human primates. (18)F-FACE was synthesized using a custom-built automated synthesis module. Six rhesus monkeys (three of each sex) were injected intravenously with (18)F-FACE (165.4 ± 28.5 MBq), followed by dynamic positron emission tomography (PET) imaging of the thoracoabdominal area during 0-30 min post-injection and static whole-body PET imaging at 40, 100, and 170 min. Serial blood samples and a urine sample were obtained from each animal to determine the time course of (18)F-FACE and its radiolabeled metabolites. Electrocardiograms and hematology analyses were obtained to evaluate the acute and delayed toxicity of diagnostic dosages of (18)F-FACE. The time-integrated activity coefficients for individual source organs and the whole body after administration of (18)F-FACE were obtained using quantitative analyses of dynamic and static PET images and were extrapolated to humans. The blood clearance of (18)F-FACE exhibited bi-exponential kinetics with half-times of 4 and 250 min for the fast and slow phases, respectively. A rapid accumulation of (18)F-FACE-derived radioactivity was observed in the liver and kidneys, followed by clearance of the radioactivity into the intestine and the urinary bladder. Radio-HPLC analyses of blood and urine samples demonstrated that (18)F-fluoride was the only detectable radiolabeled metabolite at the level of less than 9% of total radioactivity in blood at 180 min after the (18)F-FACE injection. The uptake of free (18)F-fluoride in the bones was insignificant during the course of the imaging studies. No significant changes in ECG, CBC, liver enzymes, or renal function were observed. 
The estimated effective dose for an adult human is 3.90-7.81 mSv from the administration of 185-370 MBq of (18)F-FACE. The effective dose and individual organ radiation absorbed doses from administration of a diagnostic dosage of (18)F-FACE are acceptable. From a pharmacologic perspective, diagnostic dosages of (18)F-FACE are non-toxic in primates and, therefore, could be safely administered to human patients for PET imaging.
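The reported dose range implies an approximately constant effective-dose coefficient of about 0.021 mSv/MBq (3.90/185 ≈ 7.81/370); a minimal arithmetic check, with the linear-scaling assumption made explicit:

```python
# Effective-dose coefficient implied by the reported range (mSv per MBq);
# assumes effective dose scales linearly with administered activity.
COEFF_MSV_PER_MBQ = 3.90 / 185  # ~0.0211; 7.81 / 370 gives nearly the same

def effective_dose_msv(activity_mbq, k=COEFF_MSV_PER_MBQ):
    """Effective dose (mSv) for a given administered activity (MBq)."""
    return k * activity_mbq

print(round(effective_dose_msv(370), 2))  # ~7.8, close to the reported 7.81
```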
Undercut feature recognition for core and cavity generation
NASA Astrophysics Data System (ADS)
Yusof, Mursyidah Md; Salman Abu Mansor, Mohd
2018-01-01
The core and cavity are among the most important components in an injection mould, and the quality of the final product mostly depends on them. In industry, mould designers with years of experience and skill commonly use commercial CAD software to design the core and cavity, which is time consuming. This paper proposes an algorithm that detects possible undercut features and generates the core and cavity. Two approaches are presented: edge convexity and face connectivity. The edge convexity approach is used to recognize undercut features, while face connectivity is used to divide the faces into top and bottom regions.
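An edge-convexity test of the kind mentioned above can be sketched with a triple product of the two face normals and the edge direction; the orientation convention used here (edge directed so that face 1 lies to its left) is an assumption, since the paper's exact test is not given:

```python
import numpy as np

def edge_is_convex(n1, n2, edge_dir):
    """Classify a B-rep edge shared by two faces as convex (True) or
    concave (False) from the outward face normals n1, n2 and the edge
    direction. Sign convention is an assumption; CAD kernels differ."""
    s = float(np.dot(np.cross(n1, n2), edge_dir))
    return s > 0  # positive triple product -> convex under this convention

# Outer corner of a box: top face (0,0,1) meets side face (1,0,0)
print(edge_is_convex((0, 0, 1), (1, 0, 0), (0, 1, 0)))  # True (convex)
```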
Is Your Avatar Ethical? On-Line Course Tools that Are Methods for Student Identity and Verification
ERIC Educational Resources Information Center
Semple, Mid; Hatala, Jeffrey; Franks, Patricia; Rossi, Margherita A.
2011-01-01
On-line college courses present a mandate for student identity verification for accreditation and funding sources. Student authentication requires course modification to detect fraud and misrepresentation of authorship in assignment submissions. The reality is that some college students cheat in face-to-face classrooms; however, the potential for…
On the Flexibility of Social Source Memory: A Test of the Emotional Incongruity Hypothesis
ERIC Educational Resources Information Center
Bell, Raoul; Buchner, Axel; Kroneisen, Meike; Giang, Trang
2012-01-01
A popular hypothesis in evolutionary psychology posits that reciprocal altruism is supported by a cognitive module that helps cooperative individuals to detect and remember cheaters. Consistent with this hypothesis, a source memory advantage for faces of cheaters (better memory for the cheating context in which these faces were encountered) was…
Anti Theft Mechanism Through Face recognition Using FPGA
NASA Astrophysics Data System (ADS)
Sundari, Y. B. T.; Laxminarayana, G.; Laxmi, G. Vijaya
2012-11-01
The use of a vehicle is a must for everyone, and at the same time protection from theft is also very important. Prevention of vehicle theft can be done remotely by an authorized person. The location of the car can be found using GPS and GSM controlled by an FPGA. In this paper, face recognition is used to identify persons, and a comparison with preloaded faces is made for authorization. The vehicle will start only when an authorized person's face is identified. In the event of a theft attempt or an unauthorized person's trial to drive the vehicle, an MMS/SMS will be sent to the owner along with the location. The authorized person can then alert security personnel to track and catch the vehicle. For face recognition, a Principal Component Analysis (PCA) algorithm is developed using MATLAB. The control technique for GPS and GSM is developed using VHDL on a Spartan-3E FPGA. The MMS sending method is written in VB6.0. The proposed application can be implemented, with some modifications, in any system where face recognition or detection is needed, such as airports, international borders, and banking applications.
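The PCA recognition stage can be sketched in the classic eigenfaces style; this is a generic Python sketch rather than the authors' MATLAB implementation, and the nearest-neighbor matching rule and parameter names are assumptions:

```python
import numpy as np

def train_eigenfaces(X, k):
    """PCA on flattened face images X of shape (n_samples, n_pixels):
    returns the mean face and the top-k principal components."""
    mean = X.mean(axis=0)
    # SVD of the centered data; rows of Vt are principal components
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(face, mean, components):
    """Project a face into the k-dimensional eigenface space."""
    return components @ (face - mean)

def identify(face, mean, components, gallery):
    """Nearest-neighbor match in eigenface space against a gallery
    of {name: face_vector} templates (an assumed matching rule)."""
    w = project(face, mean, components)
    dists = {name: np.linalg.norm(w - project(g, mean, components))
             for name, g in gallery.items()}
    return min(dists, key=dists.get)
```

In use, the gallery would hold the preloaded authorized faces, and a captured frame would start the vehicle only if `identify` returns an authorized name within some distance threshold.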
Midura, Ronald J; Cali, Valbona; Lauer, Mark E; Calabro, Anthony; Hascall, Vincent C
2018-01-01
Hyaluronan (HA) exhibits numerous important roles in physiology and pathologies, and these facts necessitate an ability to accurately and reproducibly measure its quantities in tissues and cell cultures. Our group previously reported a rigorous and analytical procedure to quantify HA (and chondroitin sulfate, CS) using a reductive amination chemistry and separation of the fluorophore-conjugated, unsaturated disaccharides unique to HA and CS on high concentration acrylamide gels. This procedure is known as fluorophore-assisted carbohydrate electrophoresis (FACE) and has been adapted for the detection and quantification of all glycosaminoglycan types. While this previous FACE procedure is relatively straightforward to implement by carbohydrate research investigators, many nonglycoscience laboratories now studying HA biology might have difficulties establishing this prior FACE procedure as a routine assay for HA. To address this need, we have greatly simplified our prior FACE procedure for accurate and reproducible assessment of HA in tissues and cell cultures. This chapter describes in detail this simplified FACE procedure and, because it uses an enzyme that degrades both HA and CS, investigators will also gain additional insight into the quantities of CS in the same samples dedicated for HA analysis. © 2018 Elsevier Inc. All rights reserved.
Facial Expressions and Ability to Recognize Emotions From Eyes or Mouth in Children
Guarnera, Maria; Hichy, Zira; Cascio, Maura I.; Carrubba, Stefano
2015-01-01
This research aims to contribute to the literature on the ability to recognize anger, happiness, fear, surprise, sadness, disgust and neutral emotions from facial information. By investigating children’s performance in detecting these emotions from a specific face region, we were interested to know whether children would show differences in recognizing these expressions from the upper or lower face, and if any difference between specific facial regions depended on the emotion in question. For this purpose, a group of 6-7 year-old children was selected. Participants were asked to recognize emotions by using a labeling task with three stimulus types (region of the eyes, of the mouth, and full face). The findings seem to indicate that children correctly recognize basic facial expressions when pictures represent the whole face, except for a neutral expression, which was recognized from the mouth, and sadness, which was recognized from the eyes. Children are also able to identify anger from the eyes as well as from the whole face. With respect to gender differences, there is no female advantage in emotional recognition. The results indicate a significant interaction ‘gender x face region’ only for anger and neutral emotions. PMID:27247651
The fusiform face area: a cortical region specialized for the perception of faces
Kanwisher, Nancy; Yovel, Galit
2006-01-01
Faces are among the most important visual stimuli we perceive, informing us not only about a person's identity, but also about their mood, sex, age and direction of gaze. The ability to extract this information within a fraction of a second of viewing a face is important for normal social interactions and has probably played a critical role in the survival of our primate ancestors. Considerable evidence from behavioural, neuropsychological and neurophysiological investigations supports the hypothesis that humans have specialized cognitive and neural mechanisms dedicated to the perception of faces (the face-specificity hypothesis). Here, we review the literature on a region of the human brain that appears to play a key role in face perception, known as the fusiform face area (FFA). Section 1 outlines the theoretical background for much of this work. The face-specificity hypothesis falls squarely on one side of a longstanding debate in the fields of cognitive science and cognitive neuroscience concerning the extent to which the mind/brain is composed of: (i) special-purpose (‘domain-specific’) mechanisms, each dedicated to processing a specific kind of information (e.g. faces, according to the face-specificity hypothesis), versus (ii) general-purpose (‘domain-general’) mechanisms, each capable of operating on any kind of information. Face perception has long served both as one of the prime candidates of a domain-specific process and as a key target for attack by proponents of domain-general theories of brain and mind. Section 2 briefly reviews the prior literature on face perception from behaviour and neurophysiology. This work supports the face-specificity hypothesis and argues against its domain-general alternatives (the individuation hypothesis, the expertise hypothesis and others). Section 3 outlines the more recent evidence on this debate from brain imaging, focusing particularly on the FFA. 
We review the evidence that the FFA is selectively engaged in face perception, by addressing (and rebutting) five of the most widely discussed alternatives to this hypothesis. In §4, we consider recent findings that are beginning to provide clues into the computations conducted in the FFA and the nature of the representations the FFA extracts from faces. We argue that the FFA is engaged both in detecting faces and in extracting the necessary perceptual information to recognize them, and that the properties of the FFA mirror previously identified behavioural signatures of face-specific processing (e.g. the face-inversion effect). Section 5 asks how the computations and representations in the FFA differ from those occurring in other nearby regions of cortex that respond strongly to faces and objects. The evidence indicates clear functional dissociations between these regions, demonstrating that the FFA shows not only functional specificity but also area specificity. We end by speculating in §6 on some of the broader questions raised by current research on the FFA, including the developmental origins of this region and the question of whether faces are unique versus whether similarly specialized mechanisms also exist for other domains of high-level perception and cognition. PMID:17118927
Gácsi, Márta; Miklósi, Adám; Varga, Orsolya; Topál, József; Csányi, Vilmos
2004-07-01
The ability of animals to use behavioral/facial cues in the detection of human attention has been widely investigated. In this test series we studied the ability of dogs to recognize human attention in different experimental situations (ball-fetching game, fetching objects on command, begging from humans). The attentional state of the humans was varied along two variables: (1) facing versus not facing the dog; (2) visible versus non-visible eyes. In the first set of experiments (fetching) the owners were told to take up different body positions (facing or not facing the dog) and to either cover or not cover their eyes with a blindfold. In the second set of experiments (begging) dogs had to choose between two eating humans based on either the visibility of the eyes or the direction of the face. Our results show that the efficiency of dogs in discriminating between "attentive" and "inattentive" humans depended on the context of the test, but they could rely on the orientation of the body, the orientation of the head, and the visibility of the eyes. With the exception of the fetching-game situation, they brought the object to the front of the human (even if he/she turned his/her back towards the dog), and preferentially begged from the facing (or seeing) human. There were also indications that dogs were sensitive to the visibility of the eyes, because they showed increased hesitant behavior when approaching a blindfolded owner, and they also preferred to beg from the person with visible eyes. We conclude that dogs are able to rely on the same set of human facial cues for the detection of attention that forms the behavioral basis of understanding attention in humans. By showing the ability to recognize human attention across different situations, dogs proved to be more flexible than chimpanzees investigated in similar circumstances.
Schneider, Till R; Hipp, Joerg F; Domnick, Claudia; Carl, Christine; Büchel, Christian; Engel, Andreas K
2018-05-26
Human faces are among the most salient visual stimuli and act as both socially and emotionally relevant signals. Faces, and especially faces with emotional expressions, receive prioritized processing in the human brain and activate a distributed network of brain areas, reflected, e.g., in enhanced oscillatory neuronal activity. However, an inconsistent picture has emerged so far regarding neuronal oscillatory activity across different frequency bands modulated by emotionally and socially relevant stimuli. The individual level of anxiety among healthy populations might be one explanation for these inconsistent findings. Therefore, we tested whether oscillatory neuronal activity is associated with individual anxiety levels during perception of faces with neutral and fearful facial expressions. We recorded neuronal activity using magnetoencephalography (MEG) in 27 healthy participants and determined their individual state anxiety levels. Images of human faces with neutral and fearful expressions, and physically matched visual control stimuli, were presented while participants performed a simple color detection task. Spectral analyses revealed that face processing, and in particular processing of fearful faces, was characterized by enhanced neuronal activity in the theta- and gamma-band and decreased activity in the beta-band in early visual cortex and the fusiform gyrus (FFG). Moreover, the individuals' state anxiety levels correlated positively with the gamma-band response and negatively with the beta response in the FFG and the amygdala. Our results suggest that oscillatory neuronal activity plays an important role in affective face processing and depends on the individual level of state anxiety. Our work provides new insights on the role of oscillatory neuronal activity underlying the processing of faces. Copyright © 2018. Published by Elsevier Inc.
Chien, Sarina Hui-Lin; Wang, Jing-Fong; Huang, Tsung-Ren
2016-01-01
Previous infant studies on the other-race effect have favored the perceptual narrowing view, or declined sensitivities to rarely exposed other-race faces. Here we wish to provide an alternative possibility, perceptual learning, manifested by improved sensitivity for frequently exposed own-race faces in the first year of life. Using the familiarization/visual-paired comparison paradigm, we presented 4-, 6-, and 9-month-old Taiwanese infants with oval-cropped Taiwanese, Caucasian, and Filipino faces, each with three different manipulations of increasing task difficulty (i.e., change identity, change eyes, and widen eye spacing). An adult experiment was first conducted to verify the task difficulty. Our results showed that, with oval-cropped faces, the 4-month-old infants could only discriminate the Taiwanese "change identity" condition and no others, suggesting an early own-race advantage at 4 months. The 6-month-old infants demonstrated novelty preferences in both the Taiwanese and Caucasian "change identity" conditions, and proceeded to the Taiwanese "change eyes" condition. The 9-month-old infants demonstrated novelty preferences in the "change identity" condition of all three ethnic faces. They also passed the Taiwanese "change eyes" condition but could not extend this refined ability of detecting a change in the eyes to the Caucasian or Filipino faces. Taken together, we interpret the pattern of results as evidence supporting perceptual learning during the first year: the ability to discriminate own-race faces emerges at 4 months and continues to refine, while the ability to discriminate other-race faces emerges between 6 and 9 months and is retained at 9 months. Additionally, the discrepancies in face stimuli and methods between studies advocating the narrowing view and those supporting the learning view are discussed. PMID:27807427
Kuhn, Gustav; Teszka, Robert; Tenaw, Natalia; Kingstone, Alan
2016-01-01
People's attention is oriented towards faces, but the extent to which these social attention effects are under top-down control is more ambiguous. Our first aim was to measure and compare, in real life and in the lab, people's top-down control over overt and covert shifts in reflexive social attention to the face of another. We employed a magic trick in which the magician used social cues (i.e. asking a question whilst establishing eye contact) to misdirect attention towards his face, thus preventing participants from noticing a visible colour change to a playing card. Our results show that overall people spend more time looking at the magician's face when he is seen on video than in reality. Additionally, although most participants looked at the magician's face when misdirected, this tendency to look at the face was modulated by instruction (i.e., "keep your attention on the cards"), and therefore, by top-down control. Moreover, while the card's colour change was fully visible, the majority of participants failed to notice the change, and critically, change detection (our measure of covert attention) was not affected by where people looked (overt attention). We conclude that there is a tendency to shift overt and covert attention reflexively to faces, but that people exert more top-down control over this overt shift in attention. These findings are discussed within a new framework that focuses on the role of eye movements as an attentional process as well as a form of non-verbal communication. Copyright © 2015 Elsevier B.V. All rights reserved.
Detecting Visually Observable Disease Symptoms from Faces.
Wang, Kuan; Luo, Jiebo
2016-12-01
Recent years have witnessed increasing interest in the application of machine learning to clinical informatics and healthcare systems. A significant amount of research has been done on healthcare systems based on supervised learning. In this study, we present a generalized solution for detecting visually observable symptoms on faces using semi-supervised anomaly detection combined with machine vision algorithms. We rely on disease-related statistical facts to detect abnormalities and classify them into multiple categories, narrowing down the possible medical causes of the detected abnormalities. Our method contrasts with most existing approaches, which are limited by the availability of the labeled training data required for supervised learning, and therefore offers the major advantage of flagging any unusual and visually observable symptoms.
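A minimal sketch of the semi-supervised setup just described: train only on features extracted from healthy faces, then flag outliers. The Gaussian scoring model below is a stand-in assumption, since the abstract does not specify the detector:

```python
import numpy as np

class GaussianAnomalyDetector:
    """Semi-supervised anomaly detection: fit on feature vectors from
    healthy faces only, then flag samples whose standardized distance
    from the healthy distribution exceeds a threshold. A simplified
    stand-in for the paper's (unspecified) detector."""

    def fit(self, X, quantile=0.99):
        self.mu = X.mean(axis=0)
        self.sigma = X.std(axis=0) + 1e-9  # avoid division by zero
        # Threshold at a high quantile of the healthy training scores
        self.threshold = np.quantile(self._score(X), quantile)
        return self

    def _score(self, X):
        return np.sqrt((((X - self.mu) / self.sigma) ** 2).sum(axis=1))

    def predict(self, X):
        """True where a sample looks anomalous (possible visible symptom)."""
        return self._score(X) > self.threshold
```

Because only healthy examples are needed for fitting, any unusual appearance is flagged without requiring labeled disease data, which is the stated advantage over supervised approaches.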
van der Wel, Robrecht P; Welsh, Timothy; Böckler, Anne
2018-01-01
The direction of gaze towards or away from an observer has immediate effects on attentional processing in the observer. Previous research indicates that faces with direct gaze are processed more efficiently than faces with averted gaze. We recently reported additional processing advantages for faces that suddenly adopt direct gaze (abruptly shift from averted to direct gaze) relative to static direct gaze (always in direct gaze), sudden averted gaze (abruptly shift from direct to averted gaze), and static averted gaze (always in averted gaze). Because changes in gaze orientation in the previous study co-occurred with changes in head orientation, it was not clear whether the effect is contingent on face or eye processing, or whether it requires both the eyes and the face to provide consistent information. The present study delineates the impact of head orientation, sudden onset motion cues, and gaze cues. Participants completed a target-detection task in which head position remained in a static averted or direct orientation while sudden onset motion and eye gaze cues were manipulated within each trial. The results indicate a sudden direct gaze advantage that resulted from the additive roles of motion and gaze cues. Interestingly, the orientation of the face towards or away from the observer did not influence the sudden direct gaze effect, suggesting that eye gaze cues, not face orientation cues, are critical for the sudden direct gaze effect.
Brebner, Joanne L; Krigolson, Olav; Handy, Todd C; Quadflieg, Susanne; Turk, David J
2011-05-04
The own-race bias (ORB) is a well-documented recognition advantage for own-race (OR) over cross-race (CR) faces, the origin of which remains unclear. In the current study, event-related potentials (ERPs) were recorded while Caucasian participants age-categorized Black and White faces which were digitally altered to display either a race congruent or incongruent facial structure. The results of a subsequent surprise memory test indicated that regardless of facial structure participants recognized White faces better than Black faces. Additional analyses revealed that temporally-early ERP components associated with face-specific perceptual processing (N170) and the individuation of facial exemplars (N250) were selectively sensitive to skin color. In addition, the N200 (a component that has been linked to increased attention and depth of encoding afforded to in-group and OR faces) was modulated by color and structure, and correlated with subsequent memory performance. However, the LPP component associated with the cognitive evaluation of perceptual input was influenced by racial differences in facial structure alone. These findings suggest that racial differences in skin color and facial structure are detected during the encoding of unfamiliar faces, and that the categorization of conspecifics as members of our social in-group on the basis of their skin color may be a determining factor in our ability to subsequently remember them. Copyright © 2011 Elsevier B.V. All rights reserved.
Assessing facial attractiveness: individual decisions and evolutionary constraints
Kocsor, Ferenc; Feldmann, Adam; Bereczkei, Tamas; Kállai, János
2013-01-01
Background: Several studies showed that facial attractiveness, as a highly salient social cue, influences behavioral responses. It has also been found that attractive faces evoke distinctive neural activation compared to unattractive or neutral faces. Objectives: Our aim was to design a face recognition task in which individual preferences for facial cues are controlled for, and to create conditions that are more similar to natural circumstances in terms of decision making. Design: In an event-related functional magnetic resonance imaging (fMRI) experiment, subjects were shown attractive and unattractive faces, categorized on the basis of their own individual ratings. Results: Statistical analysis of all subjects showed elevated brain activation for attractive opposite-sex faces in contrast to less attractive ones in regions that have previously been reported to show enhanced activation with increasing attractiveness level (e.g. the medial and superior occipital gyri, fusiform gyrus, precentral gyrus, and anterior cingulate cortex). Besides these, females showed additional brain activation in areas thought to be involved in basic emotions and desires (insula), detection of facial emotions (superior temporal gyrus), and memory retrieval (hippocampus). Conclusions: From these data, we speculate that because of the risks involved in mate choice faced by women during evolutionary times, selection might have favored the development of an elaborate neural system in females to assess the attractiveness and social value of male faces. PMID:24693356
Topçu, Çağdaş; Uysal, Hilmi; Özkan, Ömer; Özkan, Özlenen; Polat, Övünç; Bedeloğlu, Merve; Akgül, Arzu; Döğer, Ela Naz; Sever, Refik; Çolak, Ömer Halil
2018-03-06
We assessed the recovery of 2 face transplantation patients with measures of complexity during neuromuscular rehabilitation. Cognitive rehabilitation methods and functional electrical stimulation were used to improve facial emotional expressions of full-face transplantation patients for 5 months. Rehabilitation and analyses were conducted at approximately 3 years after full facial transplantation in the patient group. We report complexity analysis of surface electromyography signals of these two patients in comparison to the results of 10 healthy individuals. Facial surface electromyography data were collected during 6 basic emotional expressions and 4 primary facial movements from 2 full-face transplantation patients and 10 healthy individuals to determine a strategy of functional electrical stimulation and understand the mechanisms of rehabilitation. A new personalized rehabilitation technique was developed using the wavelet packet method. Rehabilitation sessions were applied twice a month for 5 months. Subsequently, motor and functional progress was assessed by comparing the fuzzy entropy of surface electromyography data against the results obtained from patients before rehabilitation and the mean results obtained from 10 healthy subjects. At the end of personalized rehabilitation, the patient group showed improvements in their facial symmetry and their ability to perform basic facial expressions and primary facial movements. Similarity in the pattern of fuzzy entropy for facial expressions between the patient group and healthy individuals increased. Synkinesis was detected during primary facial movements in the patient group, and one patient showed synkinesis during the happiness expression. Synkinesis in the lower face region of one of the patients was eliminated for the lid tightening movement. The recovery of emotional expressions after personalized rehabilitation was satisfactory to the patients. 
The assessment with complexity analysis of sEMG data can be used for developing new neurorehabilitation techniques and detecting synkinesis after full-face transplantation.
Impaired threat prioritisation after selective bilateral amygdala lesions
Bach, Dominik R.; Hurlemann, Rene; Dolan, Raymond J.
2015-01-01
The amygdala is proposed to process threat-related information in non-human animals. In humans, empirical evidence from lesion studies has provided the strongest evidence for a role in emotional face recognition and social judgement. Here we use a face-in-the-crowd (FITC) task which in healthy control individuals reveals prioritised threat processing, evident in faster serial search for angry compared to happy target faces. We investigate AM and BG, two individuals with bilateral amygdala lesions due to Urbach–Wiethe syndrome, and 16 control individuals. In lesion patients we show a reversal of a threat detection advantage indicating a profound impairment in prioritising threat information. This is the first direct demonstration that human amygdala lesions impair prioritisation of threatening faces, providing evidence that this structure has a causal role in responding to imminent danger. PMID:25282058
Door Security using Face Detection and Raspberry Pi
NASA Astrophysics Data System (ADS)
Bhutra, Venkatesh; Kumar, Harshav; Jangid, Santosh; Solanki, L.
2018-03-01
With the world moving towards advanced technologies, security forms a crucial part of daily life. Among the many techniques used for this purpose, face recognition stands out as an effective means of authentication and security. This paper deals with the use of principal component analysis (PCA) for face recognition and security. PCA is a statistical approach used to simplify a data set. The minimum Euclidean distance found from the PCA technique is used to recognize the face. A Raspberry Pi, a low-cost ARM-based computer on a small circuit board, controls the servo motor and other sensors. The servo motor is in turn attached to the door of the home and opens it when the face is recognized. The proposed work has been done using a self-made training database of students from B.K. Birla Institute of Engineering and Technology, Pilani, Rajasthan, India.
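The eigenface pipeline this entry relies on (project faces onto a PCA subspace, then match by minimum Euclidean distance) can be sketched as follows. The function names and the tiny 4-pixel "faces" are illustrative only, not the authors' code:

```python
import numpy as np

def train_eigenfaces(faces, n_components=8):
    """Fit a PCA subspace to flattened training face images.

    faces: (n_samples, n_pixels) float array, one flattened image per row.
    Returns the mean face, the top principal axes, and training projections.
    """
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data matrix yields the principal axes (eigenfaces).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axes = vt[:n_components]
    projections = centered @ axes.T
    return mean, axes, projections

def recognize(probe, mean, axes, projections, labels):
    """Label of the training face at minimum Euclidean distance
    to the probe in the PCA subspace."""
    p = (probe - mean) @ axes.T
    dists = np.linalg.norm(projections - p, axis=1)
    return labels[int(np.argmin(dists))]
```

In the door-security setting, the recognized label would then gate the servo motor; that hardware side is omitted here.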
Emotion Recognition - the need for a complete analysis of the phenomenon of expression formation
NASA Astrophysics Data System (ADS)
Bobkowska, Katarzyna; Przyborski, Marek; Skorupka, Dariusz
2018-01-01
This article shows how complex emotions are. This has been proven by the analysis of the changes that occur on the face. The authors present the problem of image analysis for the purpose of identifying emotions. In addition, they point out the importance of recording the phenomenon of the development of emotions on the human face with the use of high-speed cameras, which allows the detection of micro expression. The work that was prepared for this article was based on analyzing the parallax pair correlation coefficients for specific faces. In the article authors proposed to divide the facial image into 8 characteristic segments. With this approach, it was confirmed that at different moments of emotion the pace of expression and the maximum change characteristic of a particular emotion, for each part of the face is different.
Arizpe, Joseph; Kravitz, Dwight J; Walsh, Vincent; Yovel, Galit; Baker, Chris I
2016-01-01
The Other-Race Effect (ORE) is the robust and well-established finding that people are generally poorer at facial recognition of individuals of another race than of their own race. Over the past four decades, much research has focused on the ORE because understanding this phenomenon is expected to elucidate fundamental face processing mechanisms and the influence of experience on such mechanisms. Several recent studies of the ORE in which the eye-movements of participants viewing own- and other-race faces were tracked have, however, reported highly conflicting results regarding the presence or absence of differential patterns of eye-movements to own- versus other-race faces. This discrepancy, of course, leads to conflicting theoretical interpretations of the perceptual basis for the ORE. Here we investigate fixation patterns to own- versus other-race (African and Chinese) faces for Caucasian participants using different analysis methods. While we detect statistically significant, though subtle, differences in fixation pattern using an Area of Interest (AOI) approach, we fail to detect significant differences when applying a spatial density map approach. Though there were no significant differences in the spatial density maps, the qualitative patterns matched the results from the AOI analyses reflecting how, in certain contexts, Area of Interest (AOI) analyses can be more sensitive in detecting the differential fixation patterns than spatial density analyses, due to spatial pooling of data with AOIs. AOI analyses, however, also come with the limitation of requiring a priori specification. These findings provide evidence that the conflicting reports in the prior literature may be at least partially accounted for by the differences in the statistical sensitivity associated with the different analysis methods employed across studies. 
Overall, our results suggest that detection of differences in eye-movement patterns can be analysis-dependent and rests on the assumptions inherent in the given analysis.
Arizpe, Joseph; Kravitz, Dwight J.; Walsh, Vincent; Yovel, Galit; Baker, Chris I.
2016-01-01
The Other-Race Effect (ORE) is the robust and well-established finding that people are generally poorer at facial recognition of individuals of another race than of their own race. Over the past four decades, much research has focused on the ORE because understanding this phenomenon is expected to elucidate fundamental face processing mechanisms and the influence of experience on such mechanisms. Several recent studies of the ORE in which the eye-movements of participants viewing own- and other-race faces were tracked have, however, reported highly conflicting results regarding the presence or absence of differential patterns of eye-movements to own- versus other-race faces. This discrepancy, of course, leads to conflicting theoretical interpretations of the perceptual basis for the ORE. Here we investigate fixation patterns to own- versus other-race (African and Chinese) faces for Caucasian participants using different analysis methods. While we detect statistically significant, though subtle, differences in fixation pattern using an Area of Interest (AOI) approach, we fail to detect significant differences when applying a spatial density map approach. Though there were no significant differences in the spatial density maps, the qualitative patterns matched the results from the AOI analyses reflecting how, in certain contexts, Area of Interest (AOI) analyses can be more sensitive in detecting the differential fixation patterns than spatial density analyses, due to spatial pooling of data with AOIs. AOI analyses, however, also come with the limitation of requiring a priori specification. These findings provide evidence that the conflicting reports in the prior literature may be at least partially accounted for by the differences in the statistical sensitivity associated with the different analysis methods employed across studies. 
Overall, our results suggest that detection of differences in eye-movement patterns can be analysis-dependent and rests on the assumptions inherent in the given analysis. PMID:26849447
Pornographic information of Internet views detection method based on the connected areas
NASA Astrophysics Data System (ADS)
Wang, Huibai; Fan, Ajie
2017-01-01
Nowadays, online porn video broadcasting and downloading is very popular. In view of the widespread phenomenon of Internet pornography, this paper proposes a new method of pornographic video detection based on connected areas. First, the video is decoded into a series of static images and skin color is detected on the extracted key frames. If the area of skin color reaches a certain threshold, the AdaBoost algorithm is used to detect the human face. Finally, the connectivity between the detected face and the large skin-colored area is judged to determine whether a sensitive region is present. The experimental results show that the method can effectively filter out non-pornographic videos containing lightly dressed people. This method can improve the efficiency and reduce the workload of detection.
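The skin-color gate in the first step can be sketched as below. The paper does not state its color space, pixel rule, or area threshold, so the explicit RGB rule and the 30% threshold here are assumptions drawn from common practice in the skin-detection literature:

```python
def is_skin_rgb(r, g, b):
    """One common explicit RGB skin-color rule (an assumption; the
    paper does not specify its exact thresholds)."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

def skin_ratio(pixels):
    """Fraction of (r, g, b) pixels classified as skin."""
    if not pixels:
        return 0.0
    hits = sum(1 for p in pixels if is_skin_rgb(*p))
    return hits / len(pixels)

def frame_is_candidate(pixels, threshold=0.3):
    """Flag a key frame for the downstream face-detection and
    connectivity checks when its skin-colored area is large enough."""
    return skin_ratio(pixels) >= threshold
```

Only flagged frames would be passed on to the AdaBoost face detector and the connected-area analysis.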
ERIC Educational Resources Information Center
Busey, Thomas A.; Arici, Anne
2009-01-01
The authors tested the role of individual items in recognition memory using a forced-choice paradigm with face stimuli. They constructed distractor stimuli using morphing procedures that were similar to two parent faces and then compared a studied morph against an unstudied morph that was similar to two studied parents. The similarity of the…
Takase, Noriaki; Nozaki, Miho; Kato, Aki; Ozeki, Hironori; Yoshida, Munenori; Ogura, Yuichiro
2015-11-01
To evaluate the area of the foveal avascular zone (FAZ) detected by en face OCTA (AngioVue, Avanti OCT; Optovue) in healthy and diabetic eyes. Retrospective chart review of patients who underwent fundus examination including en face OCTA. Eyes with proliferative diabetic retinopathy and a history of laser photocoagulation were excluded. The FAZ areas in the superficial and deep plexus layers were measured and evaluated using ImageJ software. The FAZ area in the superficial layer was 0.25 ± 0.06 mm² in healthy eyes (n = 19), whereas it was 0.37 ± 0.07 mm² in diabetic eyes without retinopathy (n = 24) and 0.38 ± 0.11 mm² in eyes with diabetic retinopathy (n = 20). Diabetic eyes showed statistically significant FAZ enlargement compared with healthy eyes, regardless of the presence of retinopathy (P < 0.01). The FAZ area in the deep plexus layer was also significantly larger in diabetic eyes than in healthy eyes (P < 0.01). Our data suggest that diabetic eyes show retinal microcirculation impairment in the macula even before retinopathy develops. En face OCTA is a useful noninvasive screening tool for detecting early microcirculatory disturbance in patients with diabetes.
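The area measurement itself is simple arithmetic: count the pixels of the segmented FAZ mask and scale by the squared pixel pitch. A minimal sketch, assuming a binary mask and a known pixel pitch (the study used ImageJ; the function name and values below are illustrative):

```python
def area_mm2(mask, mm_per_pixel):
    """Area of a segmented region in mm²: pixel count of a binary
    mask (0/1 rows) scaled by the square of the pixel pitch."""
    n_pixels = sum(sum(row) for row in mask)
    return n_pixels * mm_per_pixel ** 2
```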
An efficient method for facial component detection in thermal images
NASA Astrophysics Data System (ADS)
Paul, Michael; Blanik, Nikolai; Blazek, Vladimir; Leonhardt, Steffen
2015-04-01
A method to detect certain regions in thermal images of human faces is presented. In this approach, the following steps are necessary to locate the periorbital and the nose regions: First, the face is segmented from the background by thresholding and morphological filtering. Subsequently, a search region within the face, around its center of mass, is evaluated. Automatically computed temperature thresholds are used per subject and image or image sequence to generate binary images, in which the periorbital regions are located by integral projections. Then, the located positions are used to approximate the nose position. It is possible to track features in the located regions. Therefore, these regions are interesting for different applications like human-machine interaction, biometrics and biomedical imaging. The method is easy to implement and does not rely on any training images or templates. Furthermore, the approach saves processing resources due to simple computations and restricted search regions.
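The integral-projection step can be sketched as plain row and column sums of the thresholded binary image; peaks in the resulting profiles localize warm regions such as the periorbital areas. Function names are illustrative, not from the paper:

```python
def integral_projections(binary):
    """Row and column sums of a binary image (list of 0/1 rows).
    Peaks in these profiles mark horizontally / vertically
    concentrated warm regions."""
    rows = [sum(r) for r in binary]
    cols = [sum(c) for c in zip(*binary)]
    return rows, cols

def hottest_row(binary):
    """Index of the row with the largest projection value,
    e.g. the vertical position of the periorbital band."""
    rows, _ = integral_projections(binary)
    return max(range(len(rows)), key=rows.__getitem__)
```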
Cadmium telluride photovoltaic radiation detector
Agouridis, Dimitrios C.; Fox, Richard J.
1981-01-01
A dosimetry-type radiation detector is provided which employs a polycrystalline, chlorine-compensated cadmium telluride wafer fabricated to operate as a photovoltaic current generator used as the basic detecting element. A photovoltaic junction is formed in the wafer by painting one face of the cadmium telluride wafer with an n-type semiconductive material. The opposite face of the wafer is painted with an electrically conductive material to serve as a current collector. The detector is mounted in a hermetically sealed vacuum containment. The detector is operated in a photovoltaic mode (zero bias) while DC coupled to a symmetrical differential current amplifier having a very low input impedance. The amplifier converts the current signal generated by radiation impinging upon the barrier surface face of the wafer to a voltage which is supplied to a voltmeter calibrated to read quantitatively the level of radiation incident upon the detecting wafer.
Tweaked residual convolutional network for face alignment
NASA Astrophysics Data System (ADS)
Du, Wenchao; Li, Ke; Zhao, Qijun; Zhang, Yi; Chen, Hu
2017-08-01
We propose a novel Tweaked Residual Convolutional Network approach for face alignment with a two-level convolutional network architecture. Specifically, the first-level Tweaked Convolutional Network (TCN) module quickly produces a preliminary but sufficiently accurate landmark prediction by taking a low-resolution version of the detected face holistically as input. The following Residual Convolutional Network (RCN) module progressively refines each landmark by taking as input the local patch extracted around the predicted landmark, which allows the Convolutional Neural Network (CNN) to extract local shape-indexed features to fine-tune the landmark position. Extensive evaluations show that the proposed Tweaked Residual Convolutional Network approach outperforms existing methods.
Noseleaf pit in Egyptian slit-faced bat as a doubly curved reflector
NASA Astrophysics Data System (ADS)
Zhuang, Qiao; Wang, Xiao-Min; Li, Ming-Xuan; Mao, Jie; Wang, Fu-Xun
2012-02-01
Noseleaves in slit-faced bats have been hypothesized to affect the sonar beam. Using numerical methods, we show that the pit in the noseleaf of an Egyptian slit-faced bat has an effect on focusing the acoustic near field as well as shaping the radiation patterns and hence enhancing directionality. The properties of this effect suggest that the underlying physical mechanism is the pit acting as a doubly curved reflector. Thanks to the pit, the beam is overall directional and selectively widened at the high end of the biosonar frequency range to improve spatial coverage and the detectability of targets.
Suyama, Motohiro [Hamamatsu, JP; Fukasawa, Atsuhito [Hamamatsu, JP; Arisaka, Katsushi [Los Angeles, CA; Wang, Hanguo [North Hills, CA
2011-12-20
An electron tube of the present invention includes: a vacuum vessel including a face plate portion made of synthetic silica and having a surface on which a photoelectric surface is provided, a stem portion arranged facing the photoelectric surface and made of synthetic silica, and a side tube portion having one end connected to the face plate portion and the other end connected to the stem portion and made of synthetic silica; a projection portion arranged in the vacuum vessel, extending from the stem portion toward the photoelectric surface, and made of synthetic silica; and an electron detector arranged on the projection portion, for detecting electrons from the photoelectric surface, and made of silicon.
Aviezer, Hillel; Hassin, Ran. R.; Bentin, Shlomo
2011-01-01
In the current study we examined the recognition of facial expressions embedded in emotionally expressive bodies in case LG, an individual with a rare form of developmental visual agnosia who suffers from severe prosopagnosia. Neuropsychological testing demonstrated that LG's agnosia is characterized by profoundly impaired visual integration. Unlike individuals with typical developmental prosopagnosia who display specific difficulties with face identity (but typically not expression) recognition, LG was also impaired at recognizing isolated facial expressions. By contrast, he successfully recognized the expressions portrayed by faceless emotional bodies handling affective paraphernalia. When presented with contextualized faces in emotional bodies, his ability to detect the emotion expressed by a face did not improve even if it was embedded in an emotionally-congruent body context. Furthermore, in contrast to controls, LG displayed an abnormal pattern of contextual influence from emotionally-incongruent bodies. The results are interpreted in the context of a general integration deficit in developmental visual agnosia, suggesting that impaired integration may extend from the level of the face to the level of the full person. PMID:21482423
Reconstruction of lower face defect or deformity with submental artery perforator flaps.
Shi, Cheng-li; Wang, Xian-cheng
2012-07-01
Reconstruction of lower face defects or deformities often presents a challenge for plastic surgeons. Many methods, including skin grafts, tissue expanders, and free flaps, have been introduced. Submental artery perforator flaps have been used in the reconstruction of defects or deformities of the lower face. Between August 2006 and December 2008, 22 patients with lower face defects or deformities underwent reconstruction with pedicled submental artery perforator flaps. Their ages ranged between 14 and 36 years. The perforator arteries were detected and labeled with a hand-held Doppler flowmeter. The size of the flaps ranged from 4 × 6 to 6 × 7 cm, and the designed flaps included the perforator artery. All the flaps survived well, except one flap that developed partial necrosis in the distal region and healed after conservative therapy. No other complications occurred, and the aesthetic appearance of the donor site was satisfactory. The submental artery perforator flap is a thin and reliable flap with a robust blood supply. This flap can significantly reduce donor-site morbidity and is a good choice for reconstructive surgery of the lower face.
Koehler, Kirsten A.; Anthony, T. Renee; Van Dyke, Michael
2016-01-01
The objective of this study was to examine the facing-the-wind sampling efficiency of three personal aerosol samplers as a function of particle phase (solid versus liquid). Samplers examined were the IOM, Button, and a prototype personal high-flow inhalable sampler head (PHISH). The prototype PHISH was designed to interface with the 37-mm closed-face cassette and provide an inhalable sample at 10 l min−1 of flow. Increased flow rate increases the amount of mass collected during a typical work shift and helps to ensure that limits of detection are met, particularly for well-controlled but highly toxic species. Two PHISH prototypes were tested: one with a screened inlet and one with a single-pore open-face inlet. Personal aerosol samplers were tested on a bluff-body disc that was rotated along the facing-the-wind axis to reduce spatiotemporal variability associated with sampling supermicron aerosol in low-velocity wind tunnels. When compared to published data for facing-wind aspiration efficiency for a mouth-breathing mannequin, the IOM oversampled relative to mannequin facing-the-wind aspiration efficiency for all sizes and particle types (solid and liquid). The sampling efficiency of the Button sampler was closer to the mannequin facing-the-wind aspiration efficiency than the IOM for solid particles, but the screened inlet removed most liquid particles, resulting in a large underestimation compared to the mannequin facing-the-wind aspiration efficiency. The open-face PHISH results showed overestimation for solid particles and underestimation for liquid particles when compared to the mannequin facing-the-wind aspiration efficiency. Substantial (and statistically significant) differences in sampling efficiency were observed between liquid and solid particles, particularly for the Button and screened-PHISH, with a majority of aerosol mass depositing on the screened inlets of these samplers. 
Our results suggest that large droplets have low penetration efficiencies through screened inlets and that particle bounce, for solid particles, is an important determinant of aspiration and sampling efficiencies for samplers with screened inlets. PMID:21965462
A screening questionnaire for convulsive seizures: A three-stage field-validation in rural Bolivia.
Giuliano, Loretta; Cicero, Calogero Edoardo; Crespo Gómez, Elizabeth Blanca; Padilla, Sandra; Bruno, Elisa; Camargo, Mario; Marin, Benoit; Sofia, Vito; Preux, Pierre-Marie; Strohmeyer, Marianne; Bartoloni, Alessandro; Nicoletti, Alessandra
2017-01-01
Epilepsy is one of the most common neurological diseases in Latin American countries (LAC), and epilepsy with convulsive seizures is the most frequent type. The detection of convulsive seizures is therefore a priority, but a validated Spanish-language screening tool to detect convulsive seizures was not available. We performed a field validation to evaluate the accuracy of a Spanish-language questionnaire for detecting convulsive seizures in rural Bolivia using a three-stage design. The questionnaire was also administered face-to-face, using a two-stage design, to evaluate the difference in accuracy. The study was carried out in the rural communities of the Gran Chaco region. The questionnaire consists of a single screening question directed toward householders and a confirmatory section administered face-to-face to the index case. Subjects who screened positive underwent a neurological examination to identify false positives and true positives. To estimate the proportion of false negatives, a random sample of about 20% of those who screened negative underwent a neurological evaluation. A total of 792 householders were interviewed, representing a population of 3,562 subjects (52.2% men; mean age 24.5 ± 19.7 years). We found a sensitivity of 76.3% (95% CI 59.8-88.6) with a specificity of 99.6% (95% CI 99.4-99.8). The two-stage design showed only a slightly higher sensitivity than the three-stage design. Our screening tool shows good accuracy and can be easily used by trained health workers to quickly screen the population of rural LAC communities through the householders using a three-stage design.
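The reported accuracy figures come from the standard 2×2 screening table. A minimal sketch; the counts in the test below are hypothetical, chosen only to roughly reproduce the reported 76.3% sensitivity and 99.6% specificity, as the abstract does not give raw counts:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Screening accuracy from a 2x2 confusion table:
    sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity
```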
The N400 as an index of racial stereotype accessibility.
Hehman, Eric; Volpert, Hannah I; Simons, Robert F
2014-04-01
The current research examined the viability of the N400, an event-related potential (ERP) related to the detection of semantic incongruity, as an index of both stereotype accessibility and interracial prejudice. Participants' EEG was recorded while they completed a sequential priming task, in which negative or positive, stereotypically black (African American) or white (Caucasian American) traits followed the presentation of either a black or white face acting as a prime. ERP examination focused on the N400, but additionally examined N100 and P200 reactivity. Replicating and extending previous N400 stereotype research, results indicated that the N400 can indeed function as an index of stereotype accessibility in an interracial domain, as greater N400 reactivity was elicited by trials in which the face prime was incongruent with the target trait than when primes and traits matched. Furthermore, N400 activity was moderated by participants' self-reported explicit bias. More explicitly biased participants demonstrated greater N400 reactivity to stereotypically white traits following black faces than black traits following black faces. P200 activity was additionally associated with participants' implicit biases, as more implicitly biased participants similarly demonstrated greater P200 reactivity to stereotypically white traits following black faces than black traits following black faces.
[Face recognition in patients with schizophrenia].
Doi, Hirokazu; Shinohara, Kazuyuki
2012-07-01
It is well known that patients with schizophrenia show severe deficiencies in social communication skills. These deficiencies are believed to be partly derived from abnormalities in face recognition. However, the exact nature of these abnormalities exhibited by schizophrenic patients with respect to face recognition has yet to be clarified. In the present paper, we review the main findings on face recognition deficiencies in patients with schizophrenia, particularly focusing on abnormalities in the recognition of facial expression and gaze direction, which are the primary sources of information of others' mental states. The existing studies reveal that the abnormal recognition of facial expression and gaze direction in schizophrenic patients is attributable to impairments in both perceptual processing of visual stimuli, and cognitive-emotional responses to social information. Furthermore, schizophrenic patients show malfunctions in distributed neural regions, ranging from the fusiform gyrus recruited in the structural encoding of facial stimuli, to the amygdala which plays a primary role in the detection of the emotional significance of stimuli. These findings were obtained from research in patient groups with heterogeneous characteristics. Because previous studies have indicated that impairments in face recognition in schizophrenic patients might vary according to the types of symptoms, it is of primary importance to compare the nature of face recognition deficiencies and the impairments of underlying neural functions across sub-groups of patients.
En Face Optical Coherence Tomography for Visualization of the Choroid.
Savastano, Maria Cristina; Rispoli, Marco; Savastano, Alfonso; Lumbroso, Bruno
2015-05-01
To assess posterior pole choroid patterns in healthy eyes using en face optical coherence tomography (OCT). This observational study included 154 healthy eyes of 77 patients who underwent en face OCT. The mean age of the patients was 31.2 years (standard deviation: 13 years); 40 patients were women, and 37 patients were men. En face imaging of the choroidal vasculature was assessed using an OCT Optovue RTVue (Optovue, Fremont, CA). To generate an appropriate choroid image, the best detectable vessels in Haller's layer below the retinal pigment epithelium surface parallel plane were selected. Images of diverse choroidal vessel patterns at the posterior pole were observed and recorded with en face OCT. Five different patterns of Haller's layer with different occurrences were assessed. Pattern 1 (temporal herringbone) represented 49.2%, pattern 2 (branched from below) and pattern 3 (laterally diagonal) represented 14.2%, pattern 4 (doubled arcuate) was observed in 11.9%, and pattern 5 (reticular feature) was observed in 10.5% of the reference plane. In vivo assessment of human choroid microvasculature in healthy eyes using en face OCT demonstrated five different patterns. The choroid vasculature pattern may play a role in the origin and development of neuroretinal pathologies, with potential importance in chorioretinal diseases and circulatory abnormalities. Copyright 2015, SLACK Incorporated.
Johnson, Mark H; Senju, Atsushi; Tomalski, Przemyslaw
2015-03-01
Johnson and Morton (1991. Biology and Cognitive Development: The Case of Face Recognition. Blackwell, Oxford) used Gabriel Horn's work on the filial imprinting model to inspire a two-process theory of the development of face processing in humans. In this paper we review evidence accrued over the past two decades from infants and adults, and from other primates, that informs this two-process model. While work with newborns and infants has been broadly consistent with predictions from the model, further refinements and questions have been raised. With regard to adults, we discuss more recent evidence on the extension of the model to eye contact detection, and to subcortical face processing, reviewing functional imaging and patient studies. We conclude with discussion of outstanding caveats and future directions of research in this field. Copyright © 2014 The Authors. Published by Elsevier Ltd.. All rights reserved.
Kroneisen, Meike; Bell, Raoul
2013-01-01
The present study examines memory for social-exchange-relevant information. In Experiment 1 male and female faces were shown together with behaviour descriptions of cheating, altruistic, and neutral behaviour. Previous results have led to the hypothesis that people preferentially remember schema-atypical information. Given the common gender stereotype that women are kinder and less egoistic than men, this atypicality account would predict that source memory (that is, memory for the type of context to which a face was associated) should be enhanced for female cheaters in comparison to male cheaters. The results of Experiment 1 confirmed this hypothesis. Experiment 2 reveals that source memory for female faces associated with disgusting behaviours is enhanced in comparison to male faces associated with disgusting behaviours. Thus the atypicality effect generalises beyond social-exchange-relevant information, a result which is inconsistent with the assumption that the findings can be ascribed to a highly specific cheater detection module.
Neuro-fuzzy model for estimating race and gender from geometric distances of human face across pose
NASA Astrophysics Data System (ADS)
Nanaa, K.; Rahman, M. N. A.; Rizon, M.; Mohamad, F. S.; Mamat, M.
2018-03-01
Classifying human faces by race and gender is a vital process in face recognition. It contributes to indexing a database and eases 3D synthesis of the human face. Identifying race and gender from intrinsic factors is problematic, which makes a nonlinear model better suited to the estimation process. In this paper, we aim to estimate race and gender across varied head poses. For this purpose, we collect a dataset from the PICS and CAS-PEAL databases, detect the landmarks, and rotate them to the frontal pose. After the geometric distances are calculated, all distance values are normalized. Implementation is carried out using a Neural Network Model and a Fuzzy Logic Model, which are combined into an Adaptive Neuro-Fuzzy Model. The experimental results showed that optimizing the fuzzy membership functions gives a better assessment rate, and that estimating race contributes to a more accurate gender assessment.
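The landmark-distance features described above can be sketched as follows; the landmark names, coordinates, selected pairs, and the interocular normalization are illustrative assumptions, not details taken from the paper:

```python
import math

# Hypothetical landmark coordinates (x, y) after rotation to the frontal pose;
# names and values are invented for illustration, not from PICS/CAS-PEAL.
landmarks = {
    "left_eye":  (30.0, 40.0),
    "right_eye": (70.0, 40.0),
    "nose_tip":  (50.0, 60.0),
    "mouth":     (50.0, 80.0),
    "chin":      (50.0, 100.0),
}

def dist(a, b):
    """Euclidean distance between two landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def geometric_features(lm):
    """Distances between landmark pairs, normalized by the interocular
    distance so the feature vector is scale-invariant (one plausible
    normalization; the paper does not specify its reference distance)."""
    scale = dist(lm["left_eye"], lm["right_eye"])
    pairs = [("left_eye", "nose_tip"), ("right_eye", "nose_tip"),
             ("nose_tip", "mouth"), ("mouth", "chin")]
    return [dist(lm[a], lm[b]) / scale for a, b in pairs]

features = geometric_features(landmarks)
print([round(f, 3) for f in features])  # → [0.707, 0.707, 0.5, 0.5]
```

Normalizing by a reference distance removes scale before the values are fed to the neuro-fuzzy estimator.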
ERIC Educational Resources Information Center
Rosset, Delphine; Santos, Andreia; Da Fonseca, David; Rondan, Cecilie; Poinso, Francois; Deruelle, Christine
2011-01-01
The angry superiority effect refers to the more efficient way in which individuals detect angry relative to happy faces in a crowd. Given their socio-emotional deficits, children with autism spectrum disorders (ASD) may be impervious to this effect. Thirty children with ASD and 30 matched typically developing children were presented with a visual search task,…
A Survey of nearby, nearly face-on spiral galaxies
NASA Astrophysics Data System (ADS)
Garmire, Gordon
2014-09-01
This is a continuation of a survey of nearby, nearly face-on spiral galaxies. The main purpose is to search for evidence of collisions with small galaxies that show up in X-rays by the generation of hot shocked gas from the collision. Secondary objectives include study of the spatial distribution of point sources in the galaxy and detection of evidence for a central massive black hole.
Sensitivity to Spacing Changes in Faces and Nonface Objects in Preschool-Aged Children and Adults
ERIC Educational Resources Information Center
Cassia, Viola Macchi; Turati, Chiara; Schwarzer, Gudrun
2011-01-01
Sensitivity to variations in the spacing of features in faces and a class of nonface objects (i.e., frontal images of cars) was tested in 3- and 4-year-old children and adults using a delayed or simultaneous two-alternative forced choice matching-to-sample task. In the adults, detection of spacing information was robust against exemplar…
ERIC Educational Resources Information Center
Stirling, Lucy J.; Eley, Thalia C.; Clark, David M.
2006-01-01
Attentional biases with regard to emotional facial expressions are associated with social anxiety in adults. We investigated whether similar relations exist in children. Seventy-nine 8- to 11-year-olds completed a probe detection task. On a given trial, 1 of 3 pairs of faces was presented: negative-neutral, negative-positive, and positive-neutral.…
Reduced specificity in emotion judgment in people with autism spectrum disorder
Wang, Shuo; Adolphs, Ralph
2017-01-01
There is a conflicting literature on facial emotion processing in autism spectrum disorder (ASD): both typical and atypical performance have been reported, and inconsistencies in the literature may stem from different processes examined (emotion judgment, face perception, fixations) as well as differences in participant populations. Here we conducted a detailed investigation of the ability to discriminate graded emotions shown in morphs of fear-happy faces, in a well-characterized high-functioning sample of participants with ASD and matched controls. Signal detection approaches were used in the analyses, and concurrent high-resolution eye-tracking was collected. Although people with ASD had typical thresholds for categorical fear and confidence judgments, their psychometric specificity to detect emotions across the entire range of intensities was reduced. However, fixation patterns onto the stimuli were typical and could not account for the reduced specificity of emotion judgment. Together, our results argue for a subtle and specific deficit in emotion perception in ASD that, from a signal detection perspective, is best understood as a reduced specificity due to increased noise in central processing of the face stimuli. PMID:28343960
Bayet, Laurie; Pascalis, Olivier; Quinn, Paul C.; Lee, Kang; Gentaz, Édouard; Tanaka, James W.
2015-01-01
Angry faces are perceived as more masculine by adults. However, the developmental course and underlying mechanism (bottom-up stimulus driven or top-down belief driven) associated with the angry-male bias remain unclear. Here we report that anger biases face gender categorization toward “male” responding in children as young as 5–6 years. The bias is observed for both own- and other-race faces, and is remarkably unchanged across development (into adulthood) as revealed by signal detection analyses (Experiments 1–2). The developmental course of the angry-male bias, along with its extension to other-race faces, combine to suggest that it is not rooted in extensive experience, e.g., observing males engaging in aggressive acts during the school years. Based on several computational simulations of gender categorization (Experiment 3), we further conclude that (1) the angry-male bias results, at least partially, from a strategy of attending to facial features or their second-order relations when categorizing face gender, and (2) any single choice of computational representation (e.g., Principal Component Analysis) is insufficient to assess resemblances between face categories, as different representations of the very same faces suggest different bases for the angry-male bias. Our findings are thus consistent with stimulus-driven and stereotyped-belief-driven accounts of the angry-male bias. Taken together, the evidence suggests considerable stability in the interaction between some facial dimensions in social categorization that is present prior to the onset of formal schooling. PMID:25859238
Hiramatsu, Chihiro; Melin, Amanda D; Allen, William L; Dubuc, Constance; Higham, James P
2017-06-14
Primate trichromatic colour vision has been hypothesized to be well tuned for detecting variation in facial coloration, which could be due to selection on either signal wavelengths or the sensitivities of the photoreceptors themselves. We provide one of the first empirical tests of this idea by asking whether, when compared with other visual systems, the information obtained through primate trichromatic vision confers an improved ability to detect the changes in facial colour that female macaque monkeys exhibit when they are proceptive. We presented pairs of digital images of faces of the same monkey to human observers and asked them to select the proceptive face. We tested images that simulated what would be seen by common catarrhine trichromatic vision, two additional trichromatic conditions and three dichromatic conditions. Performance under conditions of common catarrhine trichromacy, and trichromacy with narrowly separated LM cone pigments (common in female platyrrhines), was better than for evenly spaced trichromacy or for any of the dichromatic conditions. These results suggest that primate trichromatic colour vision confers excellent ability to detect meaningful variation in primate face colour. This is consistent with the hypothesis that social information detection has acted on either primate signal spectral reflectance or photoreceptor spectral tuning, or both. © 2017 The Authors.
Ameller, Aurely; Picard, Aline; D'Hondt, Fabien; Vaiva, Guillaume; Thomas, Pierre; Pins, Delphine
2017-01-01
Familiarity is a subjective sensation that contributes to person recognition. This process is described as an emotion-based memory-trace of previous meetings and could be disrupted in schizophrenia. Consequently, familiarity disorders could be involved in the impaired social interactions observed in patients with schizophrenia. Previous studies have primarily focused on famous people recognition. Our aim was to identify underlying features, such as emotional disturbances, that may contribute to familiarity disorders in schizophrenia. We hypothesize that patients with familiarity disorders will exhibit a lack of familiarity that could be detected by a flattened skin conductance response (SCR). The SCR was recorded to test the hypothesis that emotional reactivity disturbances occur in patients with schizophrenia during the categorization of specific familiar, famous and unknown faces as male or female. Forty-eight subjects were divided into the following 3 matched groups with 16 subjects per group: control subjects, schizophrenic people with familiarity disorder, and schizophrenic people without familiarity disorders. Emotional arousal is reflected by the skin conductance measures. The control subjects and the patients without familiarity disorders experienced a differential emotional response to the specific familiar faces compared with that to the unknown faces. Nevertheless, overall, the schizophrenic patients without familiarity disorders showed a weaker response across conditions compared with the control subjects. In contrast, the patients with familiarity disorders did not show any significant differences in their emotional response to the faces, regardless of the condition. Only patients with familiarity disorders fail to exhibit a difference in emotional response between familiar and non-familiar faces. These patients likely emotionally process familiar faces similarly to unknown faces. 
Hence, the lower feelings of familiarity in schizophrenia may be a premise enabling the emergence of familiarity disorders.
Near-infrared face recognition utilizing open CV software
NASA Astrophysics Data System (ADS)
Sellami, Louiza; Ngo, Hau; Fowler, Chris J.; Kearney, Liam M.
2014-06-01
Commercially available hardware, freely available algorithms, and the authors' software are synergized successfully to detect and recognize subjects in an environment without visible light. This project integrates three major components: an illumination device operating in the near infrared (NIR) spectrum, a NIR-capable camera, and a software algorithm capable of performing image manipulation, facial detection, and recognition. Focusing our efforts on the near infrared spectrum allows the low-budget system to operate covertly while still allowing for accurate face recognition. In doing so, a valuable capability has been developed that presents potential benefits in future civilian and military security and surveillance operations.
Taylor, James M; Whalen, Paul J
2014-06-01
We previously demonstrated that fearful facial expressions implicitly facilitate memory for contextual events whereas angry facial expressions do not. The current study sought to more directly address the implicit effect of fearful expressions on attention for contextual events within a classic attentional paradigm (i.e., the attentional blink) in which memory is tested on a trial-by-trial basis, thereby providing subjects with a clear, explicit attentional strategy. Neutral faces of a single gender were presented via rapid serial visual presentation (RSVP) while bordered by four gray pound signs. Participants were told to watch for a gender change within the sequence (T1). It is critical to note that the T1 face displayed a neutral, fearful, or angry expression. Subjects were then told to detect a color change (i.e., gray to green; T2) at one of the four peripheral pound sign locations appearing after T1. This T2 color change could appear at one of six temporal positions. Complementing previous attentional blink paradigms, participants were told to respond via button press immediately when a T2 target was detected. We found that, compared with the neutral T1 faces, fearful faces significantly increased target detection ability at four of the six temporal locations (all ps < .05) whereas angry expressions did not. The results of this study demonstrate that fearful facial expressions can uniquely and implicitly enhance environmental monitoring above and beyond explicit attentional effects related to task instructions.
Human face detection using motion and color information
NASA Astrophysics Data System (ADS)
Kim, Yang-Gyun; Bang, Man-Won; Park, Soon-Young; Choi, Kyoung-Ho; Hwang, Jeong-Hyun
2008-02-01
In this paper, we present a hardware implementation of a face detector for surveillance applications. To come up with a computationally cheap and fast algorithm with minimal memory requirements, motion and skin color information are fused successfully. More specifically, a newly appeared object is first extracted by comparing the average Hue and Saturation values of the background image and the current image. Then, the result of skin color filtering of the current image is combined with the newly appeared object region. Finally, labeling is performed to locate the true face region. The proposed system is implemented on an Altera Cyclone2 using Quartus II 6.1 and ModelSim 6.1. For the hardware description language (HDL), Verilog-HDL is used.
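The fusion scheme outlined above (background comparison, skin-color filtering, then combining the two cues) can be sketched in software; the HSV skin range, motion threshold, and toy frames below are assumed values, not those of the Altera hardware implementation:

```python
# Illustrative fusion of motion and skin-color cues for face-region detection.
# Pixels are (Hue, Saturation, Value) tuples; the skin range and motion
# threshold are assumptions for the sketch.

def skin_mask(hsv_frame, h_range=(0, 30), s_range=(40, 170)):
    """1 where a pixel's Hue and Saturation fall inside the assumed skin range."""
    return [[1 if h_range[0] <= h <= h_range[1] and s_range[0] <= s <= s_range[1]
             else 0 for (h, s, _v) in row] for row in hsv_frame]

def motion_mask(background, current, thresh=20):
    """1 where Hue differs noticeably from the background frame."""
    return [[1 if abs(c[0] - b[0]) > thresh else 0
             for b, c in zip(brow, crow)]
            for brow, crow in zip(background, current)]

def fuse(mask_a, mask_b):
    """Keep only pixels that are both moving and skin-colored."""
    return [[a & b for a, b in zip(ra, rb)] for ra, rb in zip(mask_a, mask_b)]

background = [[(100, 100, 100), (100, 100, 100)],
              [(100, 100, 100), (100, 100, 100)]]
current = [[(10, 100, 100), (100, 100, 100)],   # top-left pixel: skin hue, moved
           [(100, 100, 100), (100, 100, 100)]]

fused = fuse(skin_mask(current), motion_mask(background, current))
print(fused)  # → [[1, 0], [0, 0]]
```

A connected-component labeling pass over `fused` would then locate the candidate face region, as the abstract describes.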
Adult-like neuroelectrical response to inequity in children: Evidence from the ultimatum game.
Rêgo, Gabriel Gaudencio; Campanhã, Camila; Kassab, Ana Paula; Romero, Ruth Lyra; Minati, Ludovico; Boggio, Paulo Sérgio
2016-01-01
People react aversively when faced with unfair situations, a phenomenon that has been related to an electroencephalographic (EEG) potential known as medial frontal negativity (MFN). To our knowledge, the existence of the MFN in children has not yet been demonstrated. Here, we recorded EEG activity from 15 children playing the ultimatum game (UG) and who afterward performed a recognition task, in order to assess whether they could recognize the unfair and fair (familiar) proposers among unfamiliar faces. During the recognition task, we also acquired pupil dilation data to investigate subconscious recognition processes. A typical (adult-like) MFN component was detected in reaction to unfair proposals. We found a positive correlation between reaction time and empathy, as well as a negative correlation between reaction time and systematic reasoning scores. Finally, we detected a significant difference in pupil dilation in response to unfamiliar faces versus UG proposers. Our data provide the first evidence of MFN in children, which appears to index similar neurophysiological phenomena as in adults. Also, reaction time to fair proposals seems to be related to individual traits, as represented by empathy and systematizing. Our pupil dilation data provide evidence that automatic responses to faces did not index fairness, but familiarity. These findings have implications for our understanding of social development in typically developing children.
Hügelschäfer, Sabine; Jaudas, Alexander; Achtziger, Anja
2016-10-15
Gender categorization is highly automatic. Studies measuring ERPs during the presentation of male and female faces in a categorization task showed that this categorization is extremely quick (around 130ms, indicated by the N170). We tested whether this automatic process can be controlled by goal intentions and implementation intentions. First, we replicated the N170 modulation on gender-incongruent faces as reported in previous research. This effect was only observed in a task in which faces had to be categorized according to gender, but not in a task that required responding to a visual feature added to the face stimuli (the color of a dot) while gender was irrelevant. Second, it turned out that the N170 modulation on gender-incongruent faces was altered if a goal intention was set that aimed at controlling a gender bias. We interpret this finding as an indicator of nonconscious goal pursuit. The N170 modulation was completely absent when this goal intention was furnished with an implementation intention. In contrast, intentions did not alter brain activity at a later time window (P300), which is associated with more complex and rather conscious processes. In line with previous research, the P300 was modulated by gender incongruency even if individuals were strongly involved in another task, demonstrating the automaticity of gender detection. We interpret our findings as evidence that automatic gender categorization that occurs at a very early processing stage can be effectively controlled by intentions. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Matheus, B. R. N.; Centurion, B. S.; Rubira-Bullen, I. R. F.; Schiabel, H.
2017-03-01
Cone Beam Computed Tomography (CBCT), a kind of face and neck exam, can be an opportunity to identify, as an incidental finding, calcifications of the carotid artery (CACA). Given the similarity of CACA with calcifications found in several x-ray exams, this work suggests that a technique designed to detect breast calcifications in mammography images could be applied to detect such calcifications in CBCT. The method used a 3D version of the calcification detection technique [1], based on signal enhancement by convolution with a 3D Laplacian of Gaussian (LoG) function, followed by removal of the high-contrast bone structure from the image. Initial promising results show 71% sensitivity with 0.48 false positives per exam.
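The 3D Laplacian-of-Gaussian enhancement step can be illustrated directly from its closed form; the kernel scale sigma and the sampling radius below are assumed values, not parameters from the paper:

```python
import math

def log3d(x, y, z, sigma=1.0):
    """Laplacian of a 3D Gaussian at voxel offset (x, y, z): the
    blob-enhancing kernel the method convolves with the CBCT volume
    (sigma is an assumed scale matched to calcification size)."""
    r2 = x * x + y * y + z * z
    g = math.exp(-r2 / (2 * sigma ** 2)) / ((2 * math.pi) ** 1.5 * sigma ** 3)
    return g * (r2 / sigma ** 4 - 3 / sigma ** 2)

# Sample a small kernel: the center lobe is strongly negative and the
# surround mildly positive, so convolution (with a sign flip) enhances
# bright, blob-like calcifications against a smooth background.
kernel = {(x, y, z): log3d(x, y, z)
          for x in range(-3, 4) for y in range(-3, 4) for z in range(-3, 4)}
print(kernel[(0, 0, 0)] < 0 < kernel[(3, 0, 0)])  # → True
```

After this enhancement, the pipeline described in the abstract removes high-contrast bone structure before candidate calcifications are flagged.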
Georgakis, Marios K; Papadopoulos, Fotios C; Beratis, Ion; Michelakos, Theodoros; Kanavidis, Prodromos; Dafermos, Vasilios; Tousoulis, Dimitrios; Papageorgiou, Sokratis G; Petridou, Eleni Th
2017-01-01
The efficacy of the most widely used tests for dementia screening is limited in populations characterized by low levels of education. This study aimed to validate the face-to-face administered Telephone Interview for Cognitive Status (TICS) for detection of dementia and mild cognitive impairment (MCI) in a population-based sample of community dwelling individuals characterized by low levels of education or illiteracy in rural Greece. The translated Greek version of TICS was administered through face-to-face interview in 133 elderly residents of Velestino of low educational level (<12 years). We assessed its internal consistency and test-retest reliability, its correlation with sociodemographic parameters, and its discriminant ability for cognitive impairment and dementia, as defined by a brief neurological evaluation, including assessment of cognitive status and level of independence. TICS was characterized by adequate internal consistency (Cronbach's α: .72) and very high test-retest reliability (intra-class correlation coefficient: .93); it was positively correlated with age and educational years. MCI and dementia were diagnosed in 18 and 10.5% of the population, respectively. Its discriminant ability for detection of dementia was high (Area under the curve, AUC: .85), with a sensitivity and specificity of 86 and 82%, respectively, at a cut-off point of 24/25. TICS did not perform well in differentiating MCI from cognitively normal individuals though (AUC: .67). The directly administered TICS questionnaire provides an easily applicable and brief option for detection of dementia in populations of low educational level and might be useful in the context of both clinical and research purposes.
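The reported operating point can be reproduced from a confusion matrix; the counts below are invented for illustration, chosen only so the result lands near the abstract's 86%/82% figures:

```python
# Sensitivity and specificity of a screening test at a fixed cut-off
# (here the abstract's 24/25 on TICS), from confusion-matrix counts.

def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 14 participants with dementia, of whom 12 score <= 24
# (test positive); 119 without dementia, of whom 98 score >= 25 (test negative).
sensitivity, specificity = sens_spec(tp=12, fn=2, tn=98, fp=21)
print(round(sensitivity, 2), round(specificity, 2))  # → 0.86 0.82
```

The same arithmetic underlies the AUC comparison: a cut-off trades sensitivity against specificity along the ROC curve.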
Paulus, Martin P.; Simmons, Alan N.; Fitzpatrick, Summer N.; Potterat, Eric G.; Van Orden, Karl F.; Bauman, James; Swain, Judith L.
2010-01-01
Background Little is known about the neural basis of elite performers and their optimal performance in extreme environments. The purpose of this study was to examine brain processing differences between elite warfighters and comparison subjects in brain structures that are important for emotion processing and interoception. Methodology/Principal Findings Navy Sea, Air, and Land Forces (SEALs) personnel while off duty (n = 11) were compared with healthy male volunteers (n = 23) while performing a simple emotion face-processing task during functional magnetic resonance imaging. Irrespective of the target emotion, elite warfighters relative to comparison subjects showed relatively greater right-sided insula, but attenuated left-sided insula, activation. Navy SEALs showed selectively greater activation to angry target faces relative to fearful or happy target faces bilaterally in the insula. This was not accounted for by contrasting positive versus negative emotions. Finally, these individuals also showed slower response latencies to fearful and happy target faces than did comparison subjects. Conclusions/Significance These findings support the hypothesis that elite warfighters deploy greater processing resources toward potential threat-related facial expressions and reduced processing resources to non-threat-related facial expressions. Moreover, rather than expending more effort in general, elite warfighters show more focused neural and performance tuning. In other words, greater neural processing resources are directed toward threat stimuli and processing resources are conserved when facing a nonthreat stimulus situation. PMID:20418943
NASA Astrophysics Data System (ADS)
Jeong, Hyeon-Ho; Erdene, Norov; Lee, Seung-Ki; Jeong, Dae-Hong; Park, Jae-Hyoung
2011-12-01
A fiber-optic localized surface plasmon (FO LSPR) sensor was fabricated from gold nanoparticles (Au NPs) immobilized on the end-face of an optical fiber. When Au NPs were formed on the end-face of an optical fiber by chemical reaction, Au NP aggregation occurred and the Au NPs were immobilized in various forms such as monomers, dimers, trimers, etc. The component ratio of the Au NPs on the end-face of the fabricated FO LSPR sensor changed slightly each time the sensors were fabricated under the same conditions. Taking this phenomenon into account, the FO LSPR sensor was fabricated with high sensitivity by controlling the density of Au NPs. Also, the resonance intensity of the fabricated sensors was measured in different optical systems, and the effect on sensitivity was analyzed. Finally, for application as a biosensor, the sensor was used to detect the antibody-antigen reaction of interferon-gamma.
NASA Astrophysics Data System (ADS)
Jelen, Lukasz; Kobel, Joanna; Podbielska, Halina
2003-11-01
This paper discusses the possibility of exploiting thermovision registration and artificial neural networks for facial recognition systems. A biometric system that is able to identify people from thermograms is presented. To identify a person we used the Eigenfaces algorithm. For face detection in the picture, a backpropagation neural network was designed. For this purpose, thermograms of 10 people in various external conditions were studied. The Eigenfaces algorithm calculated an average face, and then the set of characteristic features for each studied person was produced. The neural network has to detect the face in the image before it can actually be identified. We used five hidden layers for that purpose. It was shown that the errors in recognition depend on the feature extraction: for low-quality pictures the error was as high as 30%. However, for pictures with good feature extraction, proper identification rates higher than 90% were obtained.
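The Eigenfaces recognition stage (mean face, principal component, projection, nearest match) can be sketched compactly; the toy 4-pixel faces, the single component, and the power-iteration shortcut below are simplifying assumptions, not the paper's setup:

```python
# Minimal sketch of Eigenfaces recognition on toy 4-pixel "faces".
# Real thermograms are far larger and use many components; pixel values
# here are invented for illustration.

def mean_face(faces):
    n = len(faces)
    return [sum(f[i] for f in faces) / n for i in range(len(faces[0]))]

def leading_component(diffs, steps=100):
    """Leading eigenvector of the covariance of mean-centered faces,
    found by power iteration (the covariance is never materialized)."""
    v = [1.0] + [0.0] * (len(diffs[0]) - 1)
    for _ in range(steps):
        coeffs = [sum(d[i] * v[i] for i in range(len(v))) for d in diffs]
        w = [sum(c * d[i] for c, d in zip(coeffs, diffs)) for i in range(len(v))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def project(face, mu, pc):
    """Coefficient of a face along the eigenface direction."""
    return sum((p - m) * c for p, m, c in zip(face, mu, pc))

gallery = [[9.0, 1.0, 9.0, 1.0], [1.0, 9.0, 1.0, 9.0]]  # two enrolled faces
mu = mean_face(gallery)
diffs = [[p - m for p, m in zip(f, mu)] for f in gallery]
pc = leading_component(diffs)
weights = [project(f, mu, pc) for f in gallery]

probe = [8.0, 2.0, 8.0, 2.0]            # noisy version of the first face
match = min(range(len(gallery)),
            key=lambda i: abs(weights[i] - project(probe, mu, pc)))
print(match)  # → 0
```

In the paper's pipeline a neural network first localizes the face; only the cropped region is projected and matched as above.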
Music to my ears: Age-related decline in musical and facial emotion recognition.
Sutcliffe, Ryan; Rendell, Peter G; Henry, Julie D; Bailey, Phoebe E; Ruffman, Ted
2017-12-01
We investigated young-old differences in emotion recognition using music and face stimuli and tested explanatory hypotheses regarding older adults' typically worse emotion recognition. In Experiment 1, young and older adults labeled emotions in an established set of faces, and in classical piano stimuli that we pilot-tested on other young and older adults. Older adults were worse at detecting anger, sadness, fear, and happiness in music. Performance on the music and face emotion tasks was not correlated for either age group. Because musical expressions of fear were not equated for age groups in the pilot study of Experiment 1, we conducted a second experiment in which we created a novel set of music stimuli that included more accessible musical styles, and which we again pilot-tested on young and older adults. In this pilot study, all musical emotions were identified similarly by young and older adults. In Experiment 2, participants also made age estimations in another set of faces to examine whether potential relations between the face and music emotion tasks would be shared with the age estimation task. Older adults did worse in each of the tasks, and had specific difficulty recognizing happy, sad, peaceful, angry, and fearful music clips. Older adults' difficulties in each of the 3 tasks-music emotion, face emotion, and face age-were not correlated with each other. General cognitive decline did not appear to explain our results as increasing age predicted emotion performance even after fluid IQ was controlled for within the older adult group. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Watson, Rebecca; Latinus, Marianne; Noguchi, Takao; Garrod, Oliver; Crabbe, Frances; Belin, Pascal
2014-05-14
The integration of emotional information from the face and voice of other persons is known to be mediated by a number of "multisensory" cerebral regions, such as the right posterior superior temporal sulcus (pSTS). However, whether multimodal integration in these regions is attributable to interleaved populations of unisensory neurons responding to face or voice or rather by multimodal neurons receiving input from the two modalities is not fully clear. Here, we examine this question using functional magnetic resonance adaptation and dynamic audiovisual stimuli in which emotional information was manipulated parametrically and independently in the face and voice via morphing between angry and happy expressions. Healthy human adult subjects were scanned while performing a happy/angry emotion categorization task on a series of such stimuli included in a fast event-related, continuous carryover design. Subjects integrated both face and voice information when categorizing emotion-although there was a greater weighting of face information-and showed behavioral adaptation effects both within and across modality. Adaptation also occurred at the neural level: in addition to modality-specific adaptation in visual and auditory cortices, we observed for the first time a crossmodal adaptation effect. Specifically, fMRI signal in the right pSTS was reduced in response to a stimulus in which facial emotion was similar to the vocal emotion of the preceding stimulus. These results suggest that the integration of emotional information from face and voice in the pSTS involves a detectable proportion of bimodal neurons that combine inputs from visual and auditory cortices. Copyright © 2014 the authors 0270-6474/14/346813-09$15.00/0.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valenciaga, Y; Prout, D; Chatziioannou, A
2015-06-15
Purpose: To examine the effect of different scintillator surface treatments (BGO crystals) on the fraction of scintillation photons that exit the crystal and reach the photodetector (SiPM). Methods: Positron Emission Tomography is based on the detection of light that exits scintillator crystals, after annihilation photons deposit energy inside these crystals. A considerable fraction of the scintillation light gets trapped or absorbed after going through multiple internal reflections on the interfaces surrounding the crystals. BGO scintillator crystals generate considerably less scintillation light than crystals made of LSO and its variants. Therefore, it is crucial that the small amount of light produced by BGO exits towards the light detector. The surface treatment of scintillator crystals is among the factors affecting the ability of scintillation light to reach the detectors. In this study, we analyze the effect of different crystal surface treatments on the fraction of scintillation light that is detected by the solid state photodetector (SiPM), once energy is deposited inside a BGO crystal. Simulations were performed with a Monte Carlo based software named GATE, and validated by measurements from individual BGO crystals coupled to a Philips digital-SiPM sensor (DPC-3200). Results: The results showed an increment in light collection of about 4 percent when only the exit face of the BGO crystal is unpolished, compared to when all the faces are polished. However, leaving several faces unpolished caused a reduction of at least 10 percent in light output when the interaction occurs as far from the exit face of the crystal as possible, compared to when it occurs very close to the exit face. Conclusion: This work demonstrates the advantages for light collection of leaving the exit face of BGO crystals unpolished. The configuration with the best light output will be used to obtain flood images from BGO crystal arrays coupled to SiPM sensors.
Fink, B; Matts, P J; Brauckmann, C; Gundlach, S
2018-04-01
Previous studies investigating the effects of skin surface topography and colouration cues on the perception of female faces reported a differential weighting for the perception of skin topography and colour evenness, where topography was a stronger visual cue for the perception of age, whereas skin colour evenness was a stronger visual cue for the perception of health. We extend these findings in a study of the effect of skin surface topography and colour evenness cues on the perceptions of facial age, health and attractiveness in males. Facial images of six men (aged 40 to 70 years), selected for co-expression of lines/wrinkles and discolouration, were manipulated digitally to create eight stimuli, namely, separate removal of these two features (a) on the forehead, (b) in the periorbital area, (c) on the cheeks and (d) across the entire face. Omnibus (within-face) pairwise combinations, including the original (unmodified) face, were presented to a total of 240 male and female judges, who selected the face they considered younger, healthier and more attractive. Significant effects were detected for facial image choice, in response to skin feature manipulation. The combined removal of skin surface topography resulted in younger age perception compared with that seen with the removal of skin colouration cues, whereas the opposite pattern was found for health preference. No difference was detected for the perception of attractiveness. These perceptual effects were seen particularly on the forehead and cheeks. Removing skin topography cues (but not discolouration) in the periorbital area resulted in higher preferences for all three attributes. Skin surface topography and colouration cues affect the perception of age, health and attractiveness in men's faces. The combined removal of these features on the forehead, cheeks and in the periorbital area results in the most positive assessments. © 2018 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
Webcam mouse using face and eye tracking in various illumination environments.
Lin, Yuan-Pin; Chao, Yi-Ping; Lin, Chung-Chih; Chen, Jyh-Horng
2005-01-01
Nowadays, due to improvements in computer performance and the popularity of webcam devices, it has become possible to capture users' gestures for human-computer interaction with a PC via webcam. However, illumination variation can dramatically decrease the stability and accuracy of skin-based face tracking systems, especially on notebook or portable platforms. In this study we present an effective illumination recognition technique, combining a K-Nearest Neighbor (KNN) classifier with an adaptive skin model, to realize a real-time tracking system. We have demonstrated that the accuracy of face detection based on the KNN classifier is higher than 92% in various illumination environments. In a real-time implementation, the system successfully tracks the user's face and eye features at 15 fps on standard notebook platforms. Although the KNN classifier is initialized with only five environments, the system permits users to define and add their preferred environments to the KNN for computer access. Finally, based on this efficient tracking algorithm, we have developed a "Webcam Mouse" system to control the PC cursor using face and eye tracking. Preliminary studies on "point and click" style PC web games also show promising applications in future consumer electronics markets.
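The KNN-based illumination recognition described above can be sketched as a nearest-neighbour vote over per-frame colour statistics. This is a minimal illustration under assumed data: the environment labels and mean-RGB training samples below are hypothetical placeholders, not the authors' training set.

```python
import math
from collections import Counter

# Hypothetical illumination environments, each described by mean-RGB samples
# taken from reference frames (placeholder values, not the paper's data).
TRAINING = {
    "daylight":     [(200, 190, 180), (210, 200, 185), (205, 195, 182)],
    "incandescent": [(180, 140, 100), (170, 130, 90), (175, 135, 95)],
    "fluorescent":  [(160, 170, 180), (150, 165, 175), (155, 168, 178)],
}

def knn_illumination(frame_mean_rgb, k=3):
    """Classify a frame's mean colour by majority vote among its k nearest samples."""
    dists = sorted(
        (math.dist(frame_mean_rgb, sample), label)
        for label, samples in TRAINING.items()
        for sample in samples
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

The predicted environment would then select the matching adaptive skin model before face tracking proceeds.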
Influence of cooling face masks on nasal air conditioning and nasal geometry.
Lindemann, J; Hoffmann, T; Koehl, A; Walz, E M; Sommer, F
2017-06-01
Nasal geometries and temperature of the nasal mucosa are the primary factors affecting nasal air conditioning. Data on intranasal air conditioning after provoking the trigeminal nerve with a cold stimulus simulating the effects of an arctic condition is still missing. The objective was to investigate the influence of skin cooling face masks on nasal air conditioning, mucosal temperature and nasal geometry. Standardized in vivo measurements of intranasal air temperature, humidity and mucosal temperature were performed in 55 healthy subjects at defined detection sites before and after wearing a cooling face mask. Measurements of skin temperature, rhinomanometry and acoustic rhinometry were accomplished. After wearing the face mask the facial skin temperature was significantly reduced. Intranasal air temperature did not change. Absolute humidity and mucosal temperature increased significantly. The acoustic rhinometric results showed a significant increase of the volumes and the cross-sectional areas. There was no change in nasal airflow. Nasal mucosal temperature, humidity of inhaled air, and volume of the anterior nose increased after application of a cold face mask. The response is mediated by the trigeminal nerve. Increased mucosal temperatures as well as changes in nasal geometries seem to guarantee sufficient steady intranasal air conditioning.
Liu, Wenbo; Li, Ming; Yi, Li
2016-08-01
The atypical face scanning patterns of individuals with Autism Spectrum Disorder (ASD) have been repeatedly reported in previous research. The present study examined whether these face scanning patterns could be useful for identifying children with ASD by adopting machine learning algorithms for classification. Specifically, we applied a machine learning method to analyze an eye movement dataset from a face recognition task [Yi et al., 2016], to classify children with and without ASD. We evaluated the performance of our model in terms of its accuracy, sensitivity, and specificity in classifying ASD. Results indicated promising evidence for applying machine learning algorithms based on face scanning patterns to identify children with ASD, with a maximum classification accuracy of 88.51%. Nevertheless, our study is still preliminary, with some constraints that may apply in clinical practice. Future research should shed light on further evaluation of our method and contribute to the development of a multitask and multimodel approach to aid the process of early detection and diagnosis of ASD. Autism Res 2016, 9: 888-898. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.
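The accuracy, sensitivity, and specificity reported above are standard confusion-matrix quantities; a minimal sketch with made-up labels (the function and data are illustrative, not the study's classification pipeline):

```python
def classification_metrics(y_true, y_pred, positive="ASD"):
    """Accuracy, sensitivity (recall on the positive class), and specificity."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),  # fraction of ASD cases detected
        "specificity": tn / (tn + fp),  # fraction of non-ASD correctly rejected
    }
```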
Ding, Liya; Martinez, Aleix M
2010-11-01
The appearance-based approach to face detection has seen great advances in the last several years. In this approach, we learn the image statistics describing the texture pattern (appearance) of the object class we want to detect, e.g., the face. However, this approach has had limited success in providing an accurate and detailed description of the internal facial features, i.e., eyes, brows, nose, and mouth. In general, this is due to the limited information carried by the learned statistical model. While the face template is relatively rich in texture, facial features (e.g., eyes, nose, and mouth) do not carry enough discriminative information to tell them apart from all possible background images. We resolve this problem by adding the context information of each facial feature in the design of the statistical model. In the proposed approach, the context information defines the image statistics most correlated with the surroundings of each facial component. This means that when we search for a face or facial feature, we look for those locations which most resemble the feature yet are most dissimilar to its context. This dissimilarity with the context features forces the detector to gravitate toward an accurate estimate of the position of the facial feature. Learning to discriminate between feature and context templates is difficult, however, because the context and the texture of the facial features vary widely under changing expression, pose, and illumination, and may even resemble one another. We address this problem with the use of subclass divisions. We derive two algorithms to automatically divide the training samples of each facial feature into a set of subclasses, each representing a distinct construction of the same facial component (e.g., closed versus open eyes) or its context (e.g., different hairstyles). The first algorithm is based on a discriminant analysis formulation. The second algorithm is an extension of the AdaBoost approach. 
We provide extensive experimental results using still images and video sequences for a total of 3,930 images. We show that the results are almost as good as those obtained with manual detection.
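The core idea of the approach, scoring a location by its resemblance to the feature template and its dissimilarity to the context template, can be sketched with toy 1-D patches. The vectors are hypothetical; the actual method uses subclass-divided appearance models learned by discriminant analysis or AdaBoost.

```python
def dissim(a, b):
    """Sum-of-squared-differences between two equal-length patches."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def detection_score(patch, feature_tpl, context_tpl):
    """High when the patch resembles the feature yet differs from its context."""
    return dissim(patch, context_tpl) - dissim(patch, feature_tpl)
```

Scanning this score over candidate locations pulls the detector away from context-like regions and toward an accurate feature position, which is the behaviour the abstract describes.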
NASA Astrophysics Data System (ADS)
Schaeffer, Kevin P.
Tunnel boring machines (TBMs) are routinely used for the excavation of tunnels across a range of ground conditions, from hard rock to soft ground. In complex ground conditions and in urban environments, the TBM is susceptible to damage due to uncertainty about what lies ahead of the tunnel face. The research presented here explores the application of electrical resistivity theory in the TBM tunneling environment to detect changing conditions ahead of the machine. Electrical resistivity offers a real-time, continuous imaging solution to increase the resolution of information along the tunnel alignment and may even unveil previously unknown geologic or man-made features ahead of the TBM. The studies presented herein break down the tunneling environment and the electrical system to understand how its fundamental parameters can be isolated and tested, identifying how they influence the ability to predict changes ahead of the tunnel face. A proof-of-concept, scaled experimental model was constructed in order to assess the ability of the model to predict a metal pipe (or rod) ahead of the face as the TBM excavates through a saturated sand. The model shows that a prediction of up to three tunnel diameters could be achieved, but the unique presence of the pipe (or rod) could not be concluded with certainty. Full-scale finite element models were developed in order to evaluate the various influences on the ability to detect changing conditions ahead of the face. Results show that TBM/tunnel geometry, TBM type, and electrode geometry can drastically influence prediction ahead of the face by tens of meters. In certain conditions (i.e., small TBM diameter, low cover depth, large material contrasts), changes can be detected over 100 meters in front of the TBM.
Various electrode arrays were considered and show that in order to better detect more finite differences (e.g., boulder, lens, pipe), the use of individual cutting tools as electrodes is highly advantageous to increase spatial resolution and current density close to the cutterhead.
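For intuition, electrical resistivity surveying rests on simple relations such as the textbook apparent-resistivity formula for a Wenner electrode array. This is a standard geophysics identity, not the thesis's tunnel-specific electrode model:

```python
import math

def wenner_apparent_resistivity(spacing_m, voltage_v, current_a):
    """Apparent resistivity (ohm*m) for a Wenner array: rho_a = 2*pi*a*V/I,
    where a is the uniform electrode spacing in meters."""
    return 2.0 * math.pi * spacing_m * voltage_v / current_a
```

A sustained shift in apparent resistivity as excavation advances would then indicate a material contrast ahead of the cutterhead, such as a boulder, lens, or pipe.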
An Application for Driver Drowsiness Identification based on Pupil Detection using IR Camera
NASA Astrophysics Data System (ADS)
Kumar, K. S. Chidanand; Bhowmick, Brojeshwar
A driver drowsiness identification system is proposed that generates alarms when the driver falls asleep while driving. A number of different physical phenomena can be monitored and measured in order to detect driver drowsiness in a vehicle. This paper presents a methodology for driver drowsiness identification using an IR camera by detecting and tracking the pupils. The face region is first determined using the Euler number and template matching. The pupils are then located in the face region. In subsequent video frames, the pupils are tracked in order to determine whether the eyes are open or closed. If the eyes are closed for several consecutive frames, it is concluded that the driver is fatigued and an alarm is generated.
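The final decision rule, eyes closed for several consecutive frames triggers an alarm, can be sketched directly. The threshold of 5 frames is an assumed value, not one stated in the abstract:

```python
def drowsiness_alarm(closed_frames, consecutive_threshold=5):
    """Return True once the eyes have been closed for `consecutive_threshold`
    consecutive video frames (True = closed, False = open)."""
    run = 0
    for closed in closed_frames:
        run = run + 1 if closed else 0  # reset the run whenever the eyes open
        if run >= consecutive_threshold:
            return True
    return False
```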
A signal-detection-based diagnostic-feature-detection model of eyewitness identification.
Wixted, John T; Mickes, Laura
2014-04-01
The theoretical understanding of eyewitness identifications made from a police lineup has long been guided by the distinction between absolute and relative decision strategies. In addition, the accuracy of identifications associated with different eyewitness memory procedures has long been evaluated using measures like the diagnosticity ratio (the correct identification rate divided by the false identification rate). Framed in terms of signal-detection theory, both the absolute/relative distinction and the diagnosticity ratio are mainly relevant to response bias while remaining silent about the key issue of diagnostic accuracy, or discriminability (i.e., the ability to tell the difference between innocent and guilty suspects in a lineup). Here, we propose a signal-detection-based model of eyewitness identification, one that encourages the use of (and helps to conceptualize) receiver operating characteristic (ROC) analysis to measure discriminability. Recent ROC analyses indicate that the simultaneous presentation of faces in a lineup yields higher discriminability than the presentation of faces in isolation, and we propose a diagnostic feature-detection hypothesis to account for that result. According to this hypothesis, the simultaneous presentation of faces allows the eyewitness to appreciate that certain facial features (viz., those that are shared by everyone in the lineup) are non-diagnostic of guilt. To the extent that those non-diagnostic features are discounted in favor of potentially more diagnostic features, the ability to discriminate innocent from guilty suspects will be enhanced.
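The contrast the authors draw between bias-sensitive measures like the diagnosticity ratio and discriminability can be made concrete with the equal-variance signal-detection formula d' = z(HR) - z(FAR). Both functions below are standard textbook relations; the rates in the usage are made up:

```python
from statistics import NormalDist

def diagnosticity_ratio(hit_rate, false_alarm_rate):
    """Correct-ID rate divided by false-ID rate (sensitive to response bias)."""
    return hit_rate / false_alarm_rate

def d_prime(hit_rate, false_alarm_rate):
    """Equal-variance discriminability: z(HR) - z(FAR)."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(false_alarm_rate)
```

Two procedures can produce the same d' yet very different diagnosticity ratios simply by shifting the witness's response criterion, which is why the authors argue for ROC analysis over the ratio alone.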
[Psychometric validation of the telephone memory test].
Ortiz, T; Fernández, A; Martínez-Castillo, E; Maestú, F; Martínez-Arias, R; López-Ibor, J J
1999-01-01
Several pathologies (e.g., Alzheimer's disease) that course with memory alterations appear in a context of impaired cognitive status and mobility. In recent years, several investigations were carried out to design short batteries that detect subjects at risk of dementia. Some of these batteries were also designed to be administered over the telephone, to overcome the accessibility limitations of these patients. In this paper we present a battery (called Autotest de Memoria), essentially composed of episodic and semantic memory tests, administered both over the telephone and face to face. This battery was employed in the cognitive assessment of healthy controls and subjects diagnosed as probable Alzheimer's disease patients. Results show the capability of this battery to discriminate between patients and healthy controls, high sensitivity and specificity, and a nearly absolute parallelism between telephone and face-to-face administrations. These data lead us to claim the usefulness and practicality of our so-called Autotest de Memoria.
Segmentation of the Speaker's Face Region with Audiovisual Correlation
NASA Astrophysics Data System (ADS)
Liu, Yuyu; Sato, Yoichi
The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows, which is robust against changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph cut-based video segmentation to obtain a globally optimal extraction of the speaker's face region. The setting of any heuristic threshold in this segmentation is avoided by learning the correlation distributions of speaker and background by expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.
Visual search for emotional expressions: Effect of stimulus set on anger and happiness superiority.
Savage, Ruth A; Becker, Stefanie I; Lipp, Ottmar V
2016-01-01
Prior reports of preferential detection of emotional expressions in visual search have yielded inconsistent results, even for face stimuli that avoid obvious expression-related perceptual confounds. The current study investigated inconsistent reports of anger and happiness superiority effects using face stimuli drawn from the same database. Experiment 1 excluded procedural differences as a potential factor, replicating a happiness superiority effect in a procedure that previously yielded an anger superiority effect. Experiments 2a and 2b confirmed that image colour or poser gender did not account for prior inconsistent findings. Experiments 3a and 3b identified stimulus set as the critical variable, revealing happiness or anger superiority effects for two partially overlapping sets of face stimuli. The current results highlight the critical role of stimulus selection for the observation of happiness or anger superiority effects in visual search even for face stimuli that avoid obvious expression related perceptual confounds and are drawn from a single database.
Alpha-band rhythm modulation under the condition of subliminal face presentation: MEG study.
Sakuraba, Satoshi; Kobayashi, Hana; Sakai, Shinya; Yokosawa, Koichi
2013-01-01
The human brain has two streams to process visual information: a dorsal stream and a ventral stream. Negative potential N170 or its magnetic counterpart M170 is known as the face-specific signal originating from the ventral stream. It is possible to present a visual image unconsciously by using continuous flash suppression (CFS), a visual masking technique based on binocular rivalry. In this work, magnetoencephalograms were recorded during presentation of three invisible images: face images, which are processed by the ventral stream; tool images, which could be processed by the dorsal stream; and a blank image. Alpha-band activities detected by sensors that are sensitive to M170 were compared. The alpha-band rhythm was suppressed more during presentation of face images than during presentation of the blank image (p=.028). The suppression remained for about 1 s after the presentations ended. However, no significant difference was observed between tool and other images. These results suggest that the alpha-band rhythm can also be modulated by unconscious visual images.
"Avoiding or approaching eyes"? Introversion/extraversion affects the gaze-cueing effect.
Ponari, Marta; Trojano, Luigi; Grossi, Dario; Conson, Massimiliano
2013-08-01
We investigated whether the extra-/introversion personality dimension can influence processing of others' eye gaze direction and emotional facial expression during a target detection task. On the basis of previous evidence showing that self-reported trait anxiety can affect gaze-cueing with emotional faces, we also verified whether trait anxiety can modulate the influence of intro-/extraversion on behavioral performance. Fearful, happy, angry or neutral faces, with either direct or averted gaze, were presented before the target appeared in spatial locations congruent or incongruent with stimuli's eye gaze direction. Results showed a significant influence of the intro-/extraversion dimension on the gaze-cueing effect for angry, happy, and neutral faces with averted gaze. Introverts did not show the gaze congruency effect when viewing angry expressions, but did so with happy and neutral faces; extraverts showed the opposite pattern. Importantly, the influence of intro-/extraversion on gaze-cueing was not mediated by trait anxiety. These findings demonstrate that personality differences can shape the processing of interactions between relevant social signals.
The review and results of different methods for facial recognition
NASA Astrophysics Data System (ADS)
Le, Yifan
2017-09-01
In recent years, facial recognition has drawn much attention due to its wide potential applications. As a unique technology in biometric identification, facial recognition represents a significant improvement since it can be operated without the cooperation of the people under detection. Hence, facial recognition can be applied to defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method with more accurate localization under a specific database; (2) a statistical face frontalization method that outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm that handles images with severe occlusion and with large head poses; (4) three methods for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method that regresses local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and performance under various databases. In addition, some improvement measures and suggestions for potential applications are put forward.
Arndt, Brian; Tuan, Wen-Jan; White, Jennifer; Schumacher, Jessica
2014-01-01
An understanding of primary care provider (PCP) workload is an important consideration in establishing optimal PCP panel size. However, no widely acceptable measure of PCP workload exists that incorporates the effort involved with both non-face-to-face patient care activities and face-to-face encounters. Accounting for this gap is critical given the increase in non-face-to-face PCP activities that has accompanied electronic health records (EHRs) (eg, electronic messaging). Our goal was to provide a comprehensive assessment of perceived PCP workload, accounting for aspects of both face-to-face and non-face-to-face encounters. Internal medicine, family medicine, and pediatric PCPs completed a self-administered survey about the perceived workload involved with face-to-face and non-face-to-face panel management activities as well as the perceived challenge associated with caring for patients with particular biomedical, demographic, and psychosocial characteristics (n = 185). Survey results were combined with EHR data at the individual patient and PCP service levels to assess PCP panel workload, accounting for face-to-face and non-face-to-face utilization. Of the multiple face-to-face and non-face-to-face activities associated with routine primary care, PCPs considered hospital admissions, obstetric care, hospital discharges, and new patient preventive health visits to be greater workload than non-face-to-face activities such as telephone calls, electronic communication, generating letters, and medication refills. Total workload within PCP panels at the individual patient level varied by overall health status, and the total workload of non-face-to-face panel management activities associated with routine primary care was greater than the total workload associated with face-to-face encounters regardless of health status. 
We used PCP survey results coupled with EHR data to assess PCP workload associated with both face-to-face as well as non-face-to-face panel management activities in primary care. The non-face-to-face workload was an important contributor to overall PCP workload for all patients regardless of overall health status. This is an important consideration for PCP workload assessment given the changing nature of primary care that requires more non-face-to-face effort, resulting in an overall increase in PCP workload. © Copyright 2014 by the American Board of Family Medicine.
Simulation and visualization of face seal motion stability by means of computer generated movies
NASA Technical Reports Server (NTRS)
Etsion, I.; Auer, B. M.
1980-01-01
A computer aided design method for mechanical face seals is described. Based on computer simulation, the actual motion of the flexibly mounted element of the seal can be visualized. This is achieved by solving the equations of motion of this element, calculating the displacements in its various degrees of freedom vs. time, and displaying the transient behavior in the form of a motion picture. Incorporating such a method in the design phase allows one to detect instabilities and to correct undesirable behavior of the seal. A theoretical background is presented. Details of the motion display technique are described, and the usefulness of the method is demonstrated by an example of a noncontacting conical face seal.
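The simulation idea above, integrating the equations of motion of the flexibly mounted element and displaying displacement versus time, can be illustrated with a one-degree-of-freedom spring-mass-damper stand-in integrated by semi-implicit Euler. All parameters are hypothetical, not values from the NASA seal model:

```python
def simulate_displacement(m=1.0, c=2.0, k=100.0, z0=1e-3, v0=0.0,
                          dt=1e-3, steps=5000):
    """Integrate m*z'' + c*z' + k*z = 0 and return the displacement history,
    the kind of trajectory one would render frame-by-frame as a motion picture."""
    z, v = z0, v0
    history = [z]
    for _ in range(steps):
        v += dt * (-(c * v) - (k * z)) / m  # semi-implicit Euler: update velocity first
        z += dt * v
        history.append(z)
    return history
```

Plotting (or animating) the returned history frame by frame shows whether the motion decays toward equilibrium, i.e. whether this simplified "seal" is stable.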
DOT National Transportation Integrated Search
2007-12-01
Vehicle-based alcohol detection systems use technologies designed to detect the presence of alcohol in a driver. Technology suitable for use in all vehicles that will detect an impaired driver faces many challenges including public acceptability, pas...
NASA Astrophysics Data System (ADS)
Sablik, Thomas; Velten, Jörg; Kummert, Anton
2015-03-01
A novel system for automatic privacy protection in digital media, based on spectral-domain watermarking and JPEG compression, is described in the present paper. In a first step, private areas are detected; a detection method is presented for this purpose. The implemented method uses Haar cascades to detect faces. Integral images are used to speed up the calculations and the detection. Multiple detections of one face are combined. Succeeding steps comprise embedding the data into the image as part of JPEG compression using spectral-domain methods and protecting the area of privacy. The embedding process is integrated into and adapted to JPEG compression. A spread-spectrum watermarking method is used to embed the size and position of the private areas into the cover image. Different embedding methods are compared regarding their robustness. Moreover, the performance of the method on tampered images is presented.
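The integral-image speed-up mentioned above lets any rectangular pixel sum be read off in at most four table lookups, which is what makes Haar-cascade detection fast. A minimal pure-Python sketch (toy image, not the paper's implementation):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y][x] = row + (ii[y - 1][x] if y else 0)
    return ii

def box_sum(ii, top, left, bottom, right):
    """Sum over the inclusive rectangle using four lookups."""
    total = ii[bottom][right]
    if top:
        total -= ii[top - 1][right]
    if left:
        total -= ii[bottom][left - 1]
    if top and left:
        total += ii[top - 1][left - 1]
    return total
```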
The modular nature of trustworthiness detection.
Bonnefon, Jean-François; Hopfensitz, Astrid; De Neys, Wim
2013-02-01
The capacity to trust wisely is a critical facilitator of success and prosperity, and it has been conjectured that people of higher intelligence are better able to detect signs of untrustworthiness from potential partners. In contrast, this article reports five trust game studies suggesting that reading trustworthiness of the faces of strangers is a modular process. Trustworthiness detection from faces is independent of general intelligence (Study 1) and effortless (Study 2). Pictures that include nonfacial features such as hair and clothing impair trustworthiness detection (Study 3) by increasing reliance on conscious judgments (Study 4), but people largely prefer to make decisions from this sort of pictures (Study 5). In sum, trustworthiness detection in an economic interaction is a genuine and effortless ability, possessed in equal amount by people of all cognitive capacities, but whose impenetrability leads to inaccurate conscious judgments and inappropriate informational preferences. 2013 APA, all rights reserved
Multistage audiovisual integration of speech: dissociating identification and detection.
Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias S
2011-02-01
Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion where seeing the talking face influences the auditory phonetic percept and by the audiovisual detection advantage where seeing the talking face influences the detectability of the acoustic speech signal. Here, we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers informed of its speech-like nature recognize as speech. While the McGurk illusion only occurred for informed observers, the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multistage account of audiovisual integration of speech in which the many attributes of the audiovisual speech signal are integrated by separate integration processes.
Holistic processing of static and moving faces.
Zhao, Mintao; Bülthoff, Isabelle
2017-07-01
Humans' face ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces that move most of the time. However, how facial movements affect 1 core aspect of face ability-holistic face processing-remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based face processing by manipulating the presence of facial motion during study and at test in a composite face task. The results showed that rigidly moving faces were processed as holistically as static faces (Experiment 1). Holistic processing of moving faces persisted whether facial motion was presented during study, at test, or both (Experiment 2). Moreover, when faces were inverted to eliminate the contributions of both an upright face template and observers' expertise with upright faces, rigid facial motion facilitated holistic face processing (Experiment 3). Thus, holistic processing represents a general principle of face perception that applies to both static and dynamic faces, rather than being limited to static faces. These results support an emerging view that both perceiver-based and face-based factors contribute to holistic face processing, and they offer new insights on what underlies holistic face processing, how information supporting holistic face processing interacts with each other, and why facial motion may affect face recognition and holistic face processing differently. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Adaptive skin detection based on online training
NASA Astrophysics Data System (ADS)
Zhang, Ming; Tang, Liang; Zhou, Jie; Rong, Gang
2007-11-01
Skin is a widely used cue for porn image classification. Most conventional methods are off-line training schemes that use a fixed boundary to segment skin regions in images, and they are effective only under restricted conditions, e.g., good lighting and a single ethnicity. This paper presents an adaptive online training scheme for skin detection which can handle these tough cases. In our approach, skin detection is treated as a classification problem on a Gaussian mixture model. For each image, a human face is detected and the face color is used to establish a primary estimate of the skin color distribution. An adaptive online training algorithm is then used to find the real boundary between skin color and background color in the current image. Experimental results on 450 images showed that the proposed method is more robust in general situations than conventional ones.
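The primary-estimation step, seeding a skin-colour model from detected face pixels, can be sketched with an independent per-channel Gaussian in place of the paper's full Gaussian mixture. The sample pixels and the 2.5-sigma acceptance gate below are assumptions for illustration:

```python
from statistics import mean, stdev

def skin_model_from_face(face_pixels):
    """Fit an independent (mean, std) per RGB channel from face-region pixels."""
    return [(mean(ch), stdev(ch)) for ch in zip(*face_pixels)]

def is_skin(pixel, model, k=2.5):
    """Accept a pixel if every channel lies within k standard deviations."""
    return all(abs(v - mu) <= k * sd for v, (mu, sd) in zip(pixel, model))
```

In the adaptive scheme, this per-image model replaces a fixed global skin boundary, which is why it tolerates unusual lighting.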
Real-time driver fatigue detection based on face alignment
NASA Astrophysics Data System (ADS)
Tao, Huanhuan; Zhang, Guiying; Zhao, Yong; Zhou, Yi
2017-07-01
The performance and robustness of fatigue detection largely decrease if the driver wears glasses. To address this issue, this paper proposes a practical driver fatigue detection method based on the Face Alignment at 3000 FPS algorithm. Firstly, the driver's eye regions are localized by exploiting 6 landmarks surrounding each eye. Secondly, HOG features of the extracted eye regions are calculated and fed into an SVM classifier to recognize the eye state. Finally, the value of PERCLOS is calculated to determine whether the driver is drowsy. An alarm is generated if the eyes are closed for a specified period of time. Accuracy and real-time performance on test videos with different drivers demonstrate that the proposed algorithm is robust and obtains better accuracy for driver fatigue detection than some previous methods.
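The PERCLOS criterion used in the final step is simply the proportion of frames in a window during which the eyes are closed. A minimal sketch; the 0.4 drowsiness threshold is an assumed value, not one given in the abstract:

```python
def perclos(closed_flags):
    """PERCLOS: fraction of frames in the window with the eyes closed."""
    return sum(1 for closed in closed_flags if closed) / len(closed_flags)

def is_drowsy(closed_flags, threshold=0.4):
    """Declare drowsiness when PERCLOS reaches the threshold."""
    return perclos(closed_flags) >= threshold
```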
How Well Do Computer-Generated Faces Tap Face Expertise?
Crookes, Kate; Ewing, Louise; Gildenhuys, Judith; Kloth, Nadine; Hayward, William G; Oxner, Matt; Pond, Stephen; Rhodes, Gillian
2015-01-01
The use of computer-generated (CG) stimuli in face processing research is proliferating due to the ease with which faces can be generated, standardised and manipulated. However there has been surprisingly little research into whether CG faces are processed in the same way as photographs of real faces. The present study assessed how well CG faces tap face identity expertise by investigating whether two indicators of face expertise are reduced for CG faces when compared to face photographs. These indicators were accuracy for identification of own-race faces and the other-race effect (ORE)-the well-established finding that own-race faces are recognised more accurately than other-race faces. In Experiment 1 Caucasian and Asian participants completed a recognition memory task for own- and other-race real and CG faces. Overall accuracy for own-race faces was dramatically reduced for CG compared to real faces and the ORE was significantly and substantially attenuated for CG faces. Experiment 2 investigated perceptual discrimination for own- and other-race real and CG faces with Caucasian and Asian participants. Here again, accuracy for own-race faces was significantly reduced for CG compared to real faces. However the ORE was not affected by format. Together these results signal that CG faces of the type tested here do not fully tap face expertise. Technological advancement may, in the future, produce CG faces that are equivalent to real photographs. Until then caution is advised when interpreting results obtained using CG faces.
NASA Astrophysics Data System (ADS)
Salva-Catarineu, Montserrat; Salvador-Franch, Ferran; Lopez-Bustins, Joan A.; Padrón-Padrón, Pedro A.; Cortés-Lucas, Amparo
2016-04-01
The current extent of Juniperus turbinata on the island of El Hierro is very small due to heavy exploitation for centuries. The recovery of its natural habitat is of high environmental and scenic interest, since this is a protected species in Europe. The study of the environmental factors that help or limit its recovery is therefore indispensable. Our research project (JUNITUR) studied populations of juniper woodlands in El Hierro from different environments, mainly determined by their altitude and exposure to north-easterly trade winds. The main objective of this study was to compare the thermohygrometric conditions of three juniper woodlands: La Dehesa (north-west face at 528 m a.s.l.), El Julan (south face at 996 m a.s.l.) and Sabinosa (north face at 258 m a.s.l.). They are located at different altitudes and orientations in El Hierro and show different recovery rates. We used air sensor data loggers fixed to tree branches to record hourly temperature and humidity data in the three study areas. We analysed daily data from three annual cycles (September 2012 to August 2015). Similar thermohygrometric annual cycles were observed among the three study areas. We detected the largest differences in winter temperature and summer humidity between the north (windward: Sabinosa and La Dehesa) and south (leeward: El Julan) faces of the island. The juniper woodland with the highest recovery rate (El Julan) showed the most extreme temperature conditions in both winter and summer. The results of this project might contribute to the knowledge of juniper bioclimatology in El Hierro, which hosts the largest population of Juniperus turbinata in the Canary Islands.
Grossman, Ruth B; Steinhart, Erin; Mitchell, Teresa; McIlvane, William
2015-06-01
Conversation requires integration of information from faces and voices to fully understand the speaker's message. To detect auditory-visual asynchrony of speech, listeners must integrate visual movements of the face, particularly the mouth, with auditory speech information. Individuals with autism spectrum disorder may be less successful at such multisensory integration, despite their demonstrated preference for looking at the mouth region of a speaker. We showed participants (individuals with and without high-functioning autism (HFA) aged 8-19) a split-screen video of two identical individuals speaking side by side. Only one of the speakers was in synchrony with the corresponding audio track and synchrony switched between the two speakers every few seconds. Participants were asked to watch the video without further instructions (implicit condition) or to specifically watch the in-synch speaker (explicit condition). We recorded which part of the screen and face their eyes targeted. Both groups looked at the in-synch video significantly more with explicit instructions. However, participants with HFA looked at the in-synch video less than typically developing (TD) peers and did not increase their gaze time as much as TD participants in the explicit task. Importantly, the HFA group looked significantly less at the mouth than their TD peers, and significantly more at non-face regions of the image. There were no between-group differences for eye-directed gaze. Overall, individuals with HFA spend less time looking at the crucially important mouth region of the face during auditory-visual speech integration, which is maladaptive gaze behavior for this type of task. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
Wang, Zhe; Quinn, Paul C; Jin, Haiyang; Sun, Yu-Hao P; Tanaka, James W; Pascalis, Olivier; Lee, Kang
2018-04-25
Using a composite-face paradigm, we examined the holistic processing induced by Asian faces, Caucasian faces, and monkey faces with human Asian participants in two experiments. In Experiment 1, participants were asked to judge whether the upper halves of two faces successively presented were the same or different. A composite-face effect was found for Asian faces and Caucasian faces, but not for monkey faces. In Experiment 2, participants were asked to judge whether the lower halves of the two faces successively presented were the same or different. A composite-face effect was found for monkey faces as well as for Asian faces and Caucasian faces. Collectively, these results reveal that own-species (i.e., own-race and other-race) faces engage holistic processing in both upper and lower halves of the face, but other-species (i.e., monkey) faces engage holistic processing only when participants are asked to match the lower halves of the face. The findings are discussed in the context of a region-based holistic processing account for the species-specific effect in face recognition. Copyright © 2018 Elsevier Ltd. All rights reserved.
USDA-ARS's Scientific Manuscript database
H5N1 high pathogenicity avian influenza virus (HPAIV), which emerged in 1996 in Guangdong, China (A/goose/Guangdong/1/1996, Gs/GD), has caused outbreaks in over 80 countries throughout Eurasia, Africa, and North America. An H5N6 HPAIV of clade 2.3.4.4, A/black-faced spoonbill/Taiwan/DB645/2017 (SB/Tw/17), was ...
Organochlorine contaminants in white-faced ibis eggs in southern Texas
Custer, T.W.; Mitchell, C.A.
1989-01-01
White-faced ibis eggs collected from 2 colonies in southern Texas in 1985 had low mean concentrations of DDE. DDD, the only other organochlorine contaminant detected, was found in only 1 of 20 eggs. DDE concentrations in eggs were not significantly correlated with eggshell thickness. Mean DDE concentrations were significantly higher in eggs collected from nests where not all of the remaining eggs hatched than in eggs collected from nests where all the remaining eggs hatched.
Effects of boundary-layer separation controllers on a desktop fume hood.
Huang, Rong Fung; Chen, Jia-Kun; Hsu, Ching Min; Hung, Shuo-Fu
2016-10-02
A desktop fume hood installed with an innovative design of flow boundary-layer separation controllers on the leading edges of the side plates, work surface, and corners was developed and characterized for its flow and containment leakage characteristics. The geometric features of the developed desktop fume hood included a rearward offset suction slot, two side plates, two side-plate boundary-layer separation controllers on the leading edges of the side plates, a slanted surface on the leading edge of the work surface, and two small triangular plates on the upper left and right corners of the hood face. The flow characteristics were examined using the laser-assisted smoke flow visualization technique. The containment leakages were measured by the tracer gas (sulphur hexafluoride) detection method on the hood face plane with a mannequin installed in front of the hood. The results of flow visualization showed that the smoke dispersions induced by the boundary-layer separations on the leading edges of the side plates and work surface, as well as the three-dimensional complex flows on the upper-left and -right corners of the hood face, were effectively alleviated by the boundary-layer separation controllers. The results of the tracer gas detection method with a mannequin standing in front of the hood showed that the leakage levels were negligibly small (≤0.003 ppm) at low face velocities (≥0.19 m/s).
Kovács-Bálint, Zsófia; Bereczkei, Tamás; Hernádi, István
2013-11-01
In this study, we investigated the role of facial cues in cooperator and defector recognition. First, a face image database was constructed from pairs of full face portraits of target subjects taken at the moment of decision-making in a prisoner's dilemma game (PDG) and in a preceding neutral task. Image pairs with no deficiencies (n = 67) were standardized for orientation and luminance. Then, confidence in defector and cooperator recognition was tested with image rating in a different group of lay judges (n = 62). Results indicate that (1) defectors were better recognized (58% vs. 47%), (2) they looked different from cooperators (p < .01), (3) males but not females evaluated the images with a relative bias towards the cooperator category (p < .01), and (4) females were more confident in detecting defectors (p < .05). According to facial microexpression analysis, defection was strongly linked with depressed lower lips and less opened eyes. Significant correlation was found between the intensity of micromimics and the rating of images in the cooperator-defector dimension. In summary, facial expressions can be considered as reliable indicators of momentary social dispositions in the PDG. Females may exhibit an evolutionary-based overestimation bias to detecting social visual cues of the defector face. © 2012 The British Psychological Society.
Impressions of dominance are made relative to others in the visual environment.
Re, Daniel E; Lefevre, Carmen E; DeBruine, Lisa M; Jones, Benedict C; Perrett, David I
2014-03-27
Face judgments of dominance play an important role in human social interaction. Perceived facial dominance is thought to indicate physical formidability, as well as resource acquisition and holding potential. Dominance cues in the face affect perceptions of attractiveness, emotional state, and physical strength. Most experimental paradigms test perceptions of facial dominance in individual faces, or they use manipulated versions of the same face in a forced-choice task but in the absence of other faces. Here, we extend this work by assessing whether dominance ratings are absolute or are judged relative to other faces. We presented participants with faces to be rated for dominance (target faces), while also presenting a second face (non-target faces) that was not to be rated. We found that both the masculinity and sex of the non-target face affected dominance ratings of the target face. Masculinized non-target faces decreased the perceived dominance of a target face relative to a feminized non-target face, and displaying a male non-target face decreased perceived dominance of a target face more so than a female non-target face. Perceived dominance of male target faces was affected more by masculinization of male non-target faces than female non-target faces. These results indicate that dominance perceptions can be altered by surrounding faces, demonstrating that facial dominance is judged at least partly relative to other faces.
Allely, Rebekah R; Van-Buendia, Lan B; Jeng, James C; White, Patricia; Wu, Jingshu; Niszczak, Jonathan; Jordan, Marion H
2008-01-01
A paradigm shift in management of postburn facial scarring is lurking "just beneath the waves" with the widespread availability of two recent technologies: precise three-dimensional scanning/digitizing of complex surfaces and computer-controlled rapid prototyping three-dimensional "printers". Laser Doppler imaging may be the sensible method to track the scar hyperemia that should form the basis of assessing progress and directing incremental changes in the digitized topographical face mask "prescription". The purpose of this study was to establish feasibility of detecting perfusion through transparent face masks using the Laser Doppler Imaging scanner. Laser Doppler images of perfusion were obtained at multiple facial regions on five uninjured staff members. Images were obtained without a mask, followed by images with a loose fitting mask with and without a silicone liner, and then with a tight fitting mask with and without a silicone liner. Right and left oblique images, in addition to the frontal images, were used to overcome unobtainable measurements at the extremes of face mask curvature. General linear model, mixed model, and t tests were used for data analysis. Three hundred seventy-five measurements were used for analysis, with a mean perfusion unit of 299 and pixel validity of 97%. The effect of face mask pressure with and without the silicone liner was readily quantified with significant changes in mean cutaneous blood flow (P < .05). High valid pixel rate laser Doppler imager flow data can be obtained through transparent face masks. Perfusion decreases with the application of pressure and with silicone. Every participant measured differently in perfusion units; however, consistent perfusion patterns in the face were observed.
False memory for face in short-term memory and neural activity in human amygdala.
Iidaka, Tetsuya; Harada, Tokiko; Sadato, Norihiro
2014-12-03
Human memory is often inaccurate. Similar to words and figures, new faces are often recognized as seen or studied items in long- and short-term memory tests; however, the neural mechanisms underlying this false memory remain elusive. In a previous fMRI study using morphed faces and a standard false memory paradigm, we found that there was a U-shaped response curve of the amygdala to old, new, and lure items. This indicates that the amygdala is more active in response to items that are salient (hit and correct rejection) compared to items that are less salient (false alarm), in terms of memory retrieval. In the present fMRI study, we determined whether the false memory for faces occurs within the short-term memory range (a few seconds), and assessed which neural correlates are involved in veridical and illusory memories. Nineteen healthy participants were scanned by 3T MRI during a short-term memory task using morphed faces. The behavioral results indicated that the occurrence of false memories was within the short-term range. We found that the amygdala displayed a U-shaped response curve to memory items, similar to those observed in our previous study. These results suggest that the amygdala plays a common role in both long- and short-term false memory for faces. We made the following conclusions: First, the amygdala is involved in detecting the saliency of items, in addition to fear, and supports goal-oriented behavior by modulating memory. Second, amygdala activity and response time might be related with a subject's response criterion for similar faces. Copyright © 2014 Elsevier B.V. All rights reserved.
Diagnostic Features of Emotional Expressions Are Processed Preferentially
Scheller, Elisa; Büchel, Christian; Gamer, Matthias
2012-01-01
Diagnostic features of emotional expressions are differentially distributed across the face. The current study examined whether these diagnostic features are preferentially attended to even when they are irrelevant for the task at hand or when faces appear at different locations in the visual field. To this aim, fearful, happy and neutral faces were presented to healthy individuals in two experiments while measuring eye movements. In Experiment 1, participants had to accomplish an emotion classification, a gender discrimination or a passive viewing task. To differentiate fast, potentially reflexive, eye movements from a more elaborate scanning of faces, stimuli were either presented for 150 or 2000 ms. In Experiment 2, similar faces were presented at different spatial positions to rule out the possibility that eye movements only reflect a general bias for certain visual field locations. In both experiments, participants fixated the eye region much longer than any other region in the face. Furthermore, the eye region was attended to more pronouncedly when fearful or neutral faces were shown whereas more attention was directed toward the mouth of happy facial expressions. Since these results were similar across the other experimental manipulations, they indicate that diagnostic features of emotional expressions are preferentially processed irrespective of task demands and spatial locations. Saliency analyses revealed that a computational model of bottom-up visual attention could not explain these results. Furthermore, as these gaze preferences were evident very early after stimulus onset and occurred even when saccades did not allow for extracting further information from these stimuli, they may reflect a preattentive mechanism that automatically detects relevant facial features in the visual field and facilitates the orientation of attention towards them. 
This mechanism might crucially depend on amygdala functioning and it is potentially impaired in a number of clinical conditions such as autism or social anxiety disorders. PMID:22848607
Newborn preference for a new face vs. a previously seen communicative or motionless face.
Cecchini, Marco; Baroni, Eleonora; Di Vito, Cinzia; Piccolo, Federica; Lai, Carlo
2011-06-01
Newborn infants prefer to look at a new face compared to a known face (still-face). This effect does not happen with the mother-face. The newborns could be attracted by the mother-face because, unlike the still-face, it confirms an expectation of communication. Fifty newborns were video-recorded. Sixteen of them were recruited in the final sample: nine were exposed to a communicative face and seven to a still-face. All the 16 newborns were successively exposed to two preference-tasks where a new face was compared with the known face. Only newborns previously exposed to a still-face preferred to look at a new face instead of the known face. The results suggest that the newborns are able to build a dynamic representation of faces. Copyright © 2011 Elsevier Inc. All rights reserved.
Ibáñez, Agustín; Hurtado, Esteban; Lobos, Alejandro; Escobar, Josefina; Trujillo, Natalia; Baez, Sandra; Huepe, David; Manes, Facundo; Decety, Jean
2011-06-29
Current research on empathy for pain emphasizes the overlap in the neural response between the first-hand experience of pain and its perception in others. However, recent studies suggest that the perception of the pain of others may reflect the processing of a threat or negative arousal rather than an automatic pro-social response. It can thus be suggested that pain processing of other-related, but not self-related, information could imply danger rather than empathy, due to the possible threat represented in the expressions of others (especially if associated with pain stimuli). To test this hypothesis, two experiments considering subliminal stimuli were designed. In Experiment 1, neutral and semantic pain expressions previously primed with own or other faces were presented to participants. When other-face priming was used, only the detection of semantic pain expressions was facilitated. In Experiment 2, pictures with pain and neutral scenarios previously used in ERP and fMRI research were used in a categorization task. Those pictures were primed with own or other faces following the same procedure as in Experiment 1 while ERPs were recorded. Early (N1) and late (P3) cortical responses between pain and no-pain were modulated only in the other-face priming condition. These results support the threat value of pain hypothesis and suggest the necessity for the inclusion of own- versus other-related information in future empathy for pain research. Copyright © 2011 Elsevier B.V. All rights reserved.
My Brain Reads Pain in Your Face, Before Knowing Your Gender.
Czekala, Claire; Mauguière, François; Mazza, Stéphanie; Jackson, Philip L; Frot, Maud
2015-12-01
Humans are expert at recognizing facial features whether they are variable (emotions) or unchangeable (gender). Because of its huge communicative value, pain might be detected faster in faces than unchangeable features. Based on this assumption, we aimed to find a presentation time that enables subliminal discrimination of pain facial expression without permitting gender discrimination. For 80 individuals, we compared the time needed (50, 100, 150, or 200 milliseconds) to discriminate masked static pain faces among anger and neutral faces with the time needed to discriminate male from female faces. Whether these discriminations were associated with conscious reportability was tested with confidence measures on 40 other individuals. The results showed that, at 100 milliseconds, 75% of participants discriminated pain above chance level, whereas only 20% of participants discriminated the gender. Moreover, this pain discrimination appeared to be subliminal. This priority of pain over gender might exist because, even if pain faces are complex stimuli encoding both the sensory and the affective component of pain, they signal a danger. This supports the evolution theory relating to the necessity of quickly reading aversive emotions to ensure survival but might also be at the basis of altruistic behavior such as help and compassion. This study shows that pain facial expression can be processed subliminally after brief presentation times, which might be helpful for critical emergency situations in clinical settings. Copyright © 2015 American Pain Society. Published by Elsevier Inc. All rights reserved.
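The per-participant finding that "75% of participants discriminated pain above chance level" is typically established with a one-sided exact binomial test against the chance rate. The abstract does not state the exact analysis or trial counts, so the numbers below are hypothetical:

```python
from math import comb

def p_above_chance(correct, trials, chance=0.5):
    """One-sided exact binomial test: probability of observing at least
    `correct` successes in `trials` trials if the participant were
    responding at the chance rate."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(correct, trials + 1))

# hypothetical participant: 32 correct out of 48 two-alternative trials
p = p_above_chance(32, 48)   # p < .05, so discrimination is above chance
```

A participant is counted as discriminating above chance when this p-value falls below the chosen alpha (conventionally .05); the subliminal claim additionally requires that confidence measures show no conscious reportability.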
The neural representation of social status in the extended face-processing network.
Koski, Jessica E; Collins, Jessica A; Olson, Ingrid R
2017-12-01
Social status is a salient cue that shapes our perceptions of other people and ultimately guides our social interactions. Despite the pervasive influence of status on social behavior, how information about the status of others is represented in the brain remains unclear. Here, we tested the hypothesis that social status information is embedded in our neural representations of other individuals. Participants learned to associate faces with names, job titles that varied in associated status, and explicit markers of reputational status (star ratings). Trained stimuli were presented in a functional magnetic resonance imaging experiment where participants performed a target detection task orthogonal to the variable of interest. A network of face-selective brain regions extending from the occipital lobe to the orbitofrontal cortex was localized and served as regions of interest. Using multivoxel pattern analysis, we found that face-selective voxels in the lateral orbitofrontal cortex, a region involved in social and nonsocial valuation, could decode faces based on their status. Similar effects were observed with two different status manipulations: one based on stored semantic knowledge (e.g., different careers) and one based on learned reputation (e.g., star ranking). These data suggest that a face-selective region of the lateral orbitofrontal cortex may contribute to the perception of social status, potentially underlying the preferential attention and favorable biases humans display toward high-status individuals. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
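In outline, the multivoxel pattern analysis used here is a cross-validated classifier over voxel activation patterns: decoding accuracy above chance on held-out trials is the evidence that status is recoverable from the face-selective voxels. A minimal leave-one-out sketch on synthetic data, using nearest-centroid decoding as a stand-in for whatever classifier the study actually used (all sizes and effect magnitudes are illustrative):

```python
import numpy as np

def loo_decode(patterns, labels):
    """Leave-one-out nearest-centroid decoding accuracy: for each trial,
    fit class centroids on the remaining trials and predict the nearest."""
    patterns = np.asarray(patterns, dtype=float)
    labels = np.asarray(labels)
    hits = 0
    for i in range(len(labels)):
        mask = np.ones(len(labels), dtype=bool)
        mask[i] = False
        cents = {c: patterns[mask & (labels == c)].mean(axis=0)
                 for c in np.unique(labels)}
        pred = min(cents, key=lambda c: np.linalg.norm(patterns[i] - cents[c]))
        hits += pred == labels[i]
    return hits / len(labels)

# synthetic "voxel patterns": 40 trials x 30 voxels, two status classes
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 30))
y = np.repeat([0, 1], 20)          # low- vs high-status faces
X[y == 1, :5] += 1.0               # weak signal distributed over 5 voxels
acc = loo_decode(X, y)             # well above the 0.5 chance level
```

In practice, studies compare such cross-validated accuracies against a permutation-based null distribution rather than the nominal chance level.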
Vrancken, Leia; Germeys, Filip; Verfaillie, Karl
2017-01-01
A considerable amount of research on identity recognition and emotion identification with the composite design points to the holistic processing of these aspects in faces and bodies. In this paradigm, the interference from a nonattended face half on the perception of the attended half is taken as evidence for holistic processing (i.e., a composite effect). Far less research, however, has been dedicated to the concept of gaze. Nonetheless, gaze perception is a substantial component of face and body perception, and holds critical information for everyday communicative interactions. Furthermore, the ability of human observers to detect direct versus averted eye gaze is effortless, perhaps similar to identity perception and emotion recognition. However, the hypothesis of holistic perception of eye gaze has never been tested directly. Research on gaze perception with the composite design could facilitate further systematic comparison with other aspects of face and body perception that have been investigated using the composite design (i.e., identity and emotion). In the present research, a composite design was administered to assess holistic processing of gaze cues in faces (Experiment 1) and bodies (Experiment 2). Results confirmed that eye and head orientation (Experiment 1A) and head and body orientation (Experiment 2A) are integrated in a holistic manner. However, the composite effect was not completely disrupted by inversion (Experiments 1B and 2B), a finding that will be discussed together with implications for future research.
Kluczniok, Dorothea; Hindi Attar, Catherine; Stein, Jenny; Poppinga, Sina; Fydrich, Thomas; Jaite, Charlotte; Kappel, Viola; Brunner, Romuald; Herpertz, Sabine C; Boedeker, Katja; Bermpohl, Felix
2017-01-01
Maternal sensitive behavior depends on recognizing one's own child's affective states. The present study investigated distinct and overlapping neural responses of mothers to sad and happy facial expressions of their own child (in comparison to facial expressions of an unfamiliar child). We used functional MRI to measure dissociable and overlapping activation patterns in 27 healthy mothers in response to happy, neutral and sad facial expressions of their own school-aged child and a gender- and age-matched unfamiliar child. To investigate differential activation to sad compared to happy faces of one's own child, we used interaction contrasts. During the scan, mothers had to indicate the affect of the presented face. After scanning, they were asked to rate the perceived emotional arousal and valence levels for each face using a 7-point Likert scale (adapted SAM version). While viewing their own child's sad faces, mothers showed activation in the amygdala and anterior cingulate cortex, whereas their child's happy facial expressions elicited activation in the hippocampus. Conjoint activation in response to their own child's happy and sad expressions was found in the insula and the superior temporal gyrus. Maternal brain activations differed depending on the child's affective state. Sad faces of their own child activated areas commonly associated with a threat detection network, whereas happy faces activated reward-related brain areas. Overlapping activation was found in empathy-related networks. These distinct neural activation patterns might facilitate sensitive maternal behavior.
Gaze cueing by pareidolia faces.
Takahashi, Kohske; Watanabe, Katsumi
2013-01-01
Visual images that are not faces are sometimes perceived as faces (the pareidolia phenomenon). While the pareidolia phenomenon provides people with a strong impression that a face is present, it is unclear how deeply pareidolia faces are processed as faces. In the present study, we examined whether a shift in spatial attention would be produced by gaze cueing of face-like objects. A robust cueing effect was observed when the face-like objects were perceived as faces. The magnitude of the cueing effect was comparable between the face-like objects and a cartoon face. However, the cueing effect was eliminated when the observer did not perceive the objects as faces. These results demonstrated that pareidolia faces do more than give the impression of the presence of faces; indeed, they trigger an additional face-specific attentional process.
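The gaze-cueing effect in studies like this one is conventionally quantified as the difference in mean reaction time between invalid trials (gaze directed away from the target) and valid trials (gaze directed toward the target); a positive difference indicates that attention followed the cue. A minimal sketch with hypothetical reaction times (the abstract reports no raw data):

```python
from statistics import mean

def cueing_effect(rts):
    """Gaze-cueing effect in ms: mean RT on invalid trials minus mean RT
    on valid trials.  `rts` maps condition name -> list of RTs in ms."""
    return mean(rts["invalid"]) - mean(rts["valid"])

# hypothetical data for one observer
rts = {"valid": [402, 395, 410, 398], "invalid": [431, 425, 440, 428]}
effect = cueing_effect(rts)   # 29.75 ms: attention followed the gaze cue
```

The paper's comparison between pareidolia and non-face conditions amounts to testing whether this difference is reliably positive in one condition and near zero in the other.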
Processing of configural and componential information in face-selective cortical areas.
Zhao, Mintao; Cheung, Sing-Hang; Wong, Alan C-N; Rhodes, Gillian; Chan, Erich K S; Chan, Winnie W L; Hayward, William G
2014-01-01
We investigated how face-selective cortical areas process configural and componential face information and how the race of faces may influence these processes. Participants saw blurred (preserving configural information), scrambled (preserving componential information), and whole faces during an fMRI scan, and performed a post-scan face recognition task using blurred or scrambled faces. The fusiform face area (FFA) showed stronger activation to blurred than to scrambled faces, and equivalent responses to blurred and whole faces. The occipital face area (OFA) showed stronger activation to whole than to blurred faces, which elicited similar responses to scrambled faces. Therefore, the FFA may be more tuned to process configural than componential information, whereas the OFA similarly participates in perception of both. Differences in recognizing own- and other-race blurred faces were correlated with differences in FFA activation to those faces, suggesting that configural processing within the FFA may underlie the other-race effect in face recognition.
Is face distinctiveness gender based?
Baudouin, Jean-Yves; Gallay, Mathieu
2006-08-01
Two experiments were carried out to study the role of gender category in evaluations of face distinctiveness. In Experiment 1, participants had to evaluate the distinctiveness and the femininity-masculinity of real or artificial composite faces. The composite faces were created by blending either faces of the same gender (sexed composite faces, approximating the sexed prototypes) or faces of both genders (nonsexed composite faces, approximating the face prototype). The results show that the distinctiveness ratings decreased as the number of blended faces increased. Distinctiveness and gender ratings did not covary for real faces or sexed composite faces, but they did vary for nonsexed composite faces. In Experiment 2, participants were asked to state which of two composite faces, one sexed and one nonsexed, was more distinctive. Sexed composite faces were selected less often. The results are interpreted as indicating that distinctiveness is based on sexed prototypes. Implications for face recognition models are discussed. ((c) 2006 APA, all rights reserved).
Morita, Plinio P; Burns, Catherine M
2011-01-01
Healthcare institutions face high levels of risk on a daily basis. Efforts have been made to address these risks and turn this complex environment into a safer environment for patients, staff, and visitors. However, healthcare institutions need more advanced risk management tools to achieve the safety levels currently seen in other industries. One of these potential tools is occurrence investigation systems. In order to be investigated, occurrences must be detected and selected for investigation, since not all institutions have enough resources to investigate all occurrences. A survey was conducted in healthcare institutions in Canada and Brazil to evaluate currently used risk management tools, the difficulties faced, and the possibilities for improvement. The findings include detectability difficulties, lack of resources, lack of support, and insufficient staff involvement.
Neural synchronization during face-to-face communication.
Jiang, Jing; Dai, Bohan; Peng, Danling; Zhu, Chaozhe; Liu, Li; Lu, Chunming
2012-11-07
Although the human brain may have evolutionarily adapted to face-to-face communication, other modes of communication, e.g., telephone and e-mail, increasingly dominate our modern daily life. This study examined the neural difference between face-to-face communication and other types of communication by simultaneously measuring two brains using a hyperscanning approach. The results showed a significant increase in the neural synchronization in the left inferior frontal cortex during a face-to-face dialog between partners but none during a back-to-back dialog, a face-to-face monologue, or a back-to-back monologue. Moreover, the neural synchronization between partners during the face-to-face dialog resulted primarily from the direct interactions between the partners, including multimodal sensory information integration and turn-taking behavior. The communicating behavior during the face-to-face dialog could be predicted accurately based on the neural synchronization level. These results suggest that face-to-face communication, particularly dialog, has special neural features that other types of communication do not have and that the neural synchronization between partners may underlie successful face-to-face communication.
Pose invariant face recognition: 3D model from single photo
NASA Astrophysics Data System (ADS)
Napoléon, Thibault; Alfalou, Ayman
2017-02-01
Face recognition is widely studied in the literature for its applications in surveillance and security. In this paper, we report a novel algorithm for the identification task. The technique is based on an optimized 3D modeling approach that reconstructs faces in different poses from a limited number of references (i.e., one image per class/person). Specifically, we use an active shape model to detect a set of keypoints on the face, which are needed to deform our synthetic model with our optimized finite element method. To improve the deformation, we add a regularization based on graph distances. To perform the identification, we use the VanderLugt correlator (VLC), which is well known to address this task effectively. We also add a difference-of-Gaussians filtering step to highlight edges and a description step based on local binary patterns. The experiments are performed on the PHPID database, enhanced with our 3D-reconstructed faces of each person at azimuths and elevations ranging from -30° to +30°. The results demonstrate the robustness of the new method, with 88.76% correct identification versus 44.97% for the classic 2D approach (based on the VLC).
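The local-binary-patterns description step mentioned in this abstract can be sketched in a few lines. The following is a generic 3×3 LBP implementation, not the authors' code; the image layout (a 2-D list of grayscale values) and the histogram descriptor are illustrative assumptions.

```python
def lbp_code(img, x, y):
    """8-bit local binary pattern code at pixel (x, y).

    img is a 2-D list of grayscale intensities; each of the 8 neighbors
    that is at least as bright as the center sets its corresponding bit.
    """
    center = img[y][x]
    # Clockwise neighbor offsets (dx, dy) starting at the top-left pixel.
    offsets = [(-1, -1), (0, -1), (1, -1), (1, 0),
               (1, 1), (0, 1), (-1, 1), (-1, 0)]
    code = 0
    for bit, (dx, dy) in enumerate(offsets):
        if img[y + dy][x + dx] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """Histogram of LBP codes over interior pixels: the face descriptor."""
    hist = [0] * 256
    h, w = len(img), len(img[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            hist[lbp_code(img, x, y)] += 1
    return hist
```

In a full pipeline of the kind described, the histogram (often computed per image block and concatenated) would be the feature vector fed to the correlator or classifier.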
Face shape differs in phylogenetically related populations.
Hopman, Saskia M J; Merks, Johannes H M; Suttie, Michael; Hennekam, Raoul C M; Hammond, Peter
2014-11-01
3D analysis of facial morphology has delineated facial phenotypes in many medical conditions and detected fine grained differences between typical and atypical patients to inform genotype-phenotype studies. Next-generation sequencing techniques have enabled extremely detailed genotype-phenotype correlative analysis. Such comparisons typically employ control groups matched for age, sex and ethnicity and the distinction between ethnic categories in genotype-phenotype studies has been widely debated. The phylogenetic tree based on genetic polymorphism studies divides the world population into nine subpopulations. Here we show statistically significant face shape differences between two European Caucasian populations of close phylogenetic and geographic proximity from the UK and The Netherlands. The average face shape differences between the Dutch and UK cohorts were visualised in dynamic morphs and signature heat maps, and quantified for their statistical significance using both conventional anthropometry and state of the art dense surface modelling techniques. Our results demonstrate significant differences between Dutch and UK face shape. Other studies have shown that genetic variants influence normal facial variation. Thus, face shape difference between populations could reflect underlying genetic difference. This should be taken into account in genotype-phenotype studies and we recommend that in those studies reference groups be established in the same population as the individuals who form the subject of the study.
Facial expression identification using 3D geometric features from Microsoft Kinect device
NASA Astrophysics Data System (ADS)
Han, Dongxu; Al Jawad, Naseer; Du, Hongbo
2016-05-01
Facial expression identification is an important part of face recognition and is closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. The Microsoft Kinect device has been widely used for multimedia interactions and, more recently, has been increasingly deployed to support scientific investigations. This paper explores the effectiveness of using the device to identify emotional facial expressions such as surprise, smiling, and sadness, and evaluates the usefulness of the 3D data points on a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames that starts and ends with a neutral expression represent a whole expression. The feature vector eliminates the need for complex face orientation correction, simplifying the feature extraction process and making it more efficient. We applied a kNN classifier that exploits a feature-component-based similarity measure following the principle of dynamic time warping to determine the closest neighbors. Preliminary tests on a small-scale database of different facial expressions show the promise of the newly developed features and the usefulness of the Kinect device in facial expression identification.
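The dynamic-time-warping similarity underlying the kNN classifier described above can be sketched as follows. This is the textbook DTW recurrence over sequences of per-frame feature vectors, not the paper's implementation; `euclidean`, `knn_dtw_predict`, and the toy sequences are illustrative assumptions.

```python
def dtw_distance(seq_a, seq_b, dist):
    """Dynamic-time-warping distance between two sequences of feature
    vectors, using `dist` as the per-frame distance."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(seq_a[i - 1], seq_b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

def euclidean(u, v):
    """Euclidean distance between two equal-length feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def knn_dtw_predict(query, labeled_sequences, k=1):
    """kNN by majority vote over DTW distances.

    labeled_sequences is a list of (label, sequence) pairs.
    """
    scored = sorted(labeled_sequences,
                    key=lambda lv: dtw_distance(query, lv[1], euclidean))
    votes = [label for label, _ in scored[:k]]
    return max(set(votes), key=votes.count)
```

Because DTW aligns frames nonlinearly in time, expressions performed at different speeds can still match closely, which is why it suits sequences that start and end at a neutral expression.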
Gender-based prototype formation in face recognition.
Baudouin, Jean-Yves; Brochard, Renaud
2011-07-01
The role of gender categories in prototype formation during face recognition was investigated in 2 experiments. The participants were asked to learn individual faces and then to recognize them. During recognition, the learned individual faces were mixed with blended faces created from faces of the same or different genders. The results of the 2 experiments showed that blended faces made with learned individual faces were recognized, even though they had never been seen before. In Experiment 1, this effect was stronger when faces belonged to the same gender category (same-sex blended faces), but it also emerged across gender categories (cross-sex blended faces). Experiment 2 further showed that this prototype effect was not affected by the presentation order for same-sex blended faces: The effect was equally strong when the faces were presented one after the other during learning or alternated with faces of the opposite gender. By contrast, the prototype effect across gender categories was highly sensitive to the temporal proximity of the faces entering the blends and almost disappeared when other faces were intermixed. These results indicate that distinct neural populations code for female and male faces. However, the formation of a facial representation can also be mediated by both neural populations. The implications for face-space properties and face-encoding processes are discussed.
Successful decoding of famous faces in the fusiform face area.
Axelrod, Vadim; Yovel, Galit
2015-01-01
What are the neural mechanisms of face recognition? It is believed that the network of face-selective areas, which spans the occipital, temporal, and frontal cortices, is important in face recognition. A number of previous studies indeed reported that face identity could be discriminated based on patterns of multivoxel activity in the fusiform face area and the anterior temporal lobe. However, given the difficulty in localizing the face-selective area in the anterior temporal lobe, its role in face recognition is still unknown. Furthermore, previous studies limited their analysis to occipito-temporal regions without testing identity decoding in more anterior face-selective regions, such as the amygdala and prefrontal cortex. In the current high-resolution functional Magnetic Resonance Imaging study, we systematically examined the decoding of the identity of famous faces in the temporo-frontal network of face-selective and adjacent non-face-selective regions. A special focus has been put on the face-area in the anterior temporal lobe, which was reliably localized using an optimized scanning protocol. We found that face-identity could be discriminated above chance level only in the fusiform face area. Our results corroborate the role of the fusiform face area in face recognition. Future studies are needed to further explore the role of the more recently discovered anterior face-selective areas in face recognition.
Potter, Timothy; Corneille, Olivier; Ruys, Kirsten I; Rhodes, Gillian
2007-04-01
Findings on both attractiveness and memory for faces suggest that people should perceive more similarity among attractive than among unattractive faces. A multidimensional scaling approach was used to test this hypothesis in two studies. In Study 1, we derived a psychological face space from similarity ratings of attractive and unattractive Caucasian female faces. In Study 2, we derived a face space for attractive and unattractive male faces of Caucasians and non-Caucasians. Both studies confirm that attractive faces are indeed more tightly clustered than unattractive faces in people's psychological face spaces. These studies provide direct and original support for theoretical assumptions previously made in the face space and face memory literatures.
The Role of Familiarity for Representations in Norm-Based Face Space
Faerber, Stella J.; Kaufmann, Jürgen M.; Leder, Helmut; Martin, Eva Maria; Schweinberger, Stefan R.
2016-01-01
According to the norm-based version of the multidimensional face space model (nMDFS, Valentine, 1991), any given face and its corresponding anti-face (which deviates from the norm in exactly opposite direction as the original face) should be equidistant to a hypothetical prototype face (norm), such that by definition face and anti-face should bear the same level of perceived typicality. However, it has been argued that familiarity affects perceived typicality and that representations of familiar faces are qualitatively different (e.g., more robust and image-independent) from those for unfamiliar faces. Here we investigated the role of face familiarity for rated typicality, using two frequently used operationalisations of typicality (deviation-based: DEV), and distinctiveness (face in the crowd: FITC) for faces of celebrities and their corresponding anti-faces. We further assessed attractiveness, likeability and trustworthiness ratings of the stimuli, which are potentially related to typicality. For unfamiliar faces and their corresponding anti-faces, in line with the predictions of the nMDFS, our results demonstrate comparable levels of perceived typicality (DEV). In contrast, familiar faces were perceived much less typical than their anti-faces. Furthermore, familiar faces were rated higher than their anti-faces in distinctiveness, attractiveness, likability and trustworthiness. These findings suggest that familiarity strongly affects the distribution of facial representations in norm-based face space. Overall, our study suggests (1) that familiarity needs to be considered in studies of mental representations of faces, and (2) that familiarity, general distance-to-norm and more specific vector directions in face space make different and interactive contributions to different types of facial evaluations. PMID:27168323
Wang, Qiandong; Xiao, Naiqi G.; Quinn, Paul C.; Hu, Chao S.; Qian, Miao; Fu, Genyue; Lee, Kang
2014-01-01
Recent studies have shown that participants use different eye movement strategies when scanning own- and other-race faces. However, it is unclear (1) whether this effect is related to face recognition performance, and (2) to what extent this effect is influenced by top-down or bottom-up facial information. In the present study, Chinese participants performed a face recognition task with Chinese faces, Caucasian faces, and racially ambiguous morphed face stimuli. For the racially ambiguous faces, we led participants to believe that they were viewing either own-race Chinese faces or other-race Caucasian faces. Results showed that (1) Chinese participants scanned the nose of the true Chinese faces more than that of the true Caucasian faces, whereas they scanned the eyes of the Caucasian faces more than those of the Chinese faces; (2) they scanned the eyes, nose, and mouth equally for the ambiguous faces in the Chinese condition compared with those in the Caucasian condition; (3) when recognizing the true Chinese target faces, but not the true target Caucasian faces, the greater the fixation proportion on the nose, the faster the participants correctly recognized these faces. The same was true when racially ambiguous face stimuli were thought to be Chinese faces. These results provide the first evidence to show that (1) visual scanning patterns of faces are related to own-race face recognition response time, and (2) it is bottom-up facial physiognomic information of racial categories that mainly contributes to face scanning. However, top-down knowledge of racial categories can influence the relationship between face scanning patterns and recognition response time. PMID:25497461
Thermal Inspection of Composite Honeycomb Structures
NASA Technical Reports Server (NTRS)
Zalameda, Joseph N.; Parker, F. Raymond
2014-01-01
Composite honeycomb structures continue to be widely used in aerospace applications due to their low weight and high strength. Developing nondestructive evaluation (NDE) inspection methods is essential for their safe performance. Pulsed thermography is a commonly used technique for composite honeycomb structure inspections due to its large-area and rapid inspection capability. Pulsed thermography is shown to be sensitive for detecting face-sheet impact damage and face-sheet-to-core disbonds. Data processing techniques, using principal component analysis to improve the defect contrast, are presented. In addition, limitations to the thermal detection of the core are investigated. Other NDE techniques, such as computed tomography X-ray and ultrasound, are used for comparison with the thermography results.
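The principal-component contrast enhancement mentioned above can be illustrated with a minimal power-iteration sketch on a stack of flattened thermogram frames. This is a generic first-principal-component computation under stated assumptions (frames as rows, pixels as columns, mean-centering over time), not the authors' processing chain; `first_pc` and the toy data are illustrative.

```python
def mean_center(X):
    """Subtract each pixel's temporal mean (column mean) from X."""
    n = len(X)
    means = [sum(row[j] for row in X) / n for j in range(len(X[0]))]
    return [[row[j] - means[j] for j in range(len(row))] for row in X]

def first_pc(X, iters=200):
    """First principal component (pixel-space direction) via power
    iteration on the covariance X^T X of the mean-centered frame stack."""
    Xc = mean_center(X)
    p = len(Xc[0])
    v = [1.0] * p
    for _ in range(iters):
        # w = X^T (X v): one multiplication by the covariance matrix.
        Xv = [sum(row[j] * v[j] for j in range(p)) for row in Xc]
        w = [sum(Xc[i][j] * Xv[i] for i in range(len(Xc))) for j in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

Projecting each frame onto the leading components and inspecting the component images is one common way such processing suppresses uniform heating and emphasizes defect contrast.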
Calculation of stresses in a rock mass and lining in stagewise face drivage
NASA Astrophysics Data System (ADS)
Seryakov, VM; Zhamalova, BR
2018-03-01
Using a method for calculating the mechanical state of a rock mass under the conditions of stagewise drivage of a production face in large cross-section excavations, the specific features of stress redistribution in the excavation lining are identified. The zones of tensile stresses in the lining are detected. The authors discuss the influence of the initial stress state of the rocks on the tensile stress zones induced in the lining in the course of the heading advance.
2014-09-01
Level 2 (subject-based) analysis describes the performance of the system using the so-called "Doddington's Zoo" categorization of individuals, which detects whether an individual belongs to an easier or a harder class of people for the system to recognize (cf. Marcialis and Roli, "An experimental analysis of the relationship between biometric template update and the Doddington's zoo: A case study in face verification").
Friend, Catherine; Fox Hamilton, Nicola
2016-09-01
Where humans have been found to detect lies or deception only at the rate of chance in offline face-to-face communication (F2F), computer-mediated communication (CMC) online can elicit higher rates of trust and sharing of personal information than F2F. How do levels of trust and empathetic personality traits like perspective taking (PT) relate to deception detection in real-time CMC compared to F2F? A between-groups correlational design (N = 40) demonstrated that, through a paired deceptive conversation task with confederates, levels of participant trust could predict accurate detection online but not offline. Second, participant PT abilities could not predict accurate detection in either conversation medium. Finally, conversation medium had no effect on deception detection. This study finds support for the effects of the Truth Bias and online disinhibition in deception, and further implications for law enforcement are discussed.
Bang, Lasse; Rø, Øyvind; Endestad, Tor
2017-03-01
Behavioral studies have shown that anorexia nervosa (AN) is associated with attentional bias to general threat cues. The neurobiological underpinnings of attentional bias to threat in AN are unknown. This study investigated the neural responses associated with threat-detection and attentional bias to threat in AN. We measured neural responses to a dot-probe task, involving pairs of angry and neutral face stimuli, in 22 adult women recovered from AN and 21 comparison women. Recovered AN women did not exhibit a behavioral attentional bias to threat. In response to angry faces, recovered women showed significant hypoactivation in the extrastriate cortex. During attentional bias to angry faces, recovered women showed significant hyperactivation in the medial prefrontal cortex. This was because of significant deactivation in comparison women, which was absent in recovered AN women. Women recovered from AN are characterized by altered neural responses to threat cues. Copyright © 2016 John Wiley & Sons, Ltd and Eating Disorders Association.
Effects of aluminum on epitaxial graphene grown on C-face SiC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, Chao, E-mail: chaxi@ifm.liu.se; Johansson, Leif I.; Hultman, Lars
The effects of Al layers deposited on graphene grown on C-face SiC substrates are investigated before and after subsequent annealing using low energy electron diffraction (LEED), photoelectron spectroscopy, and angle resolved photoemission. As-deposited layers appear inert. Annealing at a temperature of about 400 °C initiates migration of Al through the graphene into the graphene/SiC interface. Further annealing at temperatures from 500 °C to 700 °C induces formation of an ordered compound, producing a two-domain √7 × √7 R19° LEED pattern and significant changes in the core level spectra that suggest formation of an Al-Si-C compound. Decomposition of this compound starts after annealing at 800 °C, and at 1000 °C, Al is no longer detectable at the surface. On Si-face graphene, deposited Al layers did not form such an Al-Si-C compound, and Al was still detectable after annealing above 1000 °C.
Face-to-face: Perceived personal relevance amplifies face processing
Pittig, Andre; Schupp, Harald T.; Alpers, Georg W.
2017-01-01
The human face conveys emotional and social information, but it is not well understood how these two aspects influence face perception. In order to model a group situation, two faces displaying happy, neutral or angry expressions were presented. Importantly, faces were either facing the observer, or they were presented in profile view directed towards, or looking away from each other. In Experiment 1 (n = 64), face pairs were rated regarding perceived relevance, wish-to-interact, and displayed interactivity, as well as valence and arousal. All variables revealed main effects of facial expression (emotional > neutral), face orientation (facing observer > towards > away) and interactions showed that evaluation of emotional faces strongly varies with their orientation. Experiment 2 (n = 33) examined the temporal dynamics of perceptual-attentional processing of these face constellations with event-related potentials. Processing of emotional and neutral faces differed significantly in N170 amplitudes, early posterior negativity (EPN), and sustained positive potentials. Importantly, selective emotional face processing varied as a function of face orientation, indicating early emotion-specific (N170, EPN) and late threat-specific effects (LPP, sustained positivity). Taken together, perceived personal relevance to the observer—conveyed by facial expression and face direction—amplifies emotional face processing within triadic group situations. PMID:28158672
Artificial faces are harder to remember
Balas, Benjamin; Pacella, Jonathan
2015-01-01
Observers interact with artificial faces in a range of different settings and in many cases must remember and identify computer-generated faces. In general, however, most adults have heavily biased experience favoring real faces over synthetic faces. It is well known that face recognition abilities are affected by experience such that faces belonging to “out-groups” defined by race or age are more poorly remembered and harder to discriminate from one another than faces belonging to the “in-group.” Here, we examine the extent to which artificial faces form an “out-group” in this sense when other perceptual categories are matched. We rendered synthetic faces using photographs of real human faces and compared performance in a memory task and a discrimination task across real and artificial versions of the same faces. We found that real faces were easier to remember, but only slightly more discriminable than artificial faces. Artificial faces were also equally susceptible to the well-known face inversion effect, suggesting that while these patterns are still processed by the human visual system in a face-like manner, artificial appearance does compromise the efficiency of face processing. PMID:26195852
Development of Neural Sensitivity to Face Identity Correlates with Perceptual Discriminability
Barnett, Michael A.; Hartley, Jake; Gomez, Jesse; Stigliani, Anthony; Grill-Spector, Kalanit
2016-01-01
Face perception is subserved by a series of face-selective regions in the human ventral stream, which undergo prolonged development from childhood to adulthood. However, it is unknown how neural development of these regions relates to the development of face-perception abilities. Here, we used functional magnetic resonance imaging (fMRI) to measure brain responses of ventral occipitotemporal regions in children (ages, 5–12 years) and adults (ages, 19–34 years) when they viewed faces that parametrically varied in dissimilarity. Since similar faces generate lower responses than dissimilar faces due to fMRI adaptation, this design objectively evaluates neural sensitivity to face identity across development. Additionally, a subset of subjects participated in a behavioral experiment to assess perceptual discriminability of face identity. Our data reveal three main findings: (1) neural sensitivity to face identity increases with age in face-selective but not object-selective regions; (2) the amplitude of responses to faces increases with age in both face-selective and object-selective regions; and (3) perceptual discriminability of face identity is correlated with the neural sensitivity to face identity of face-selective regions. In contrast, perceptual discriminability is not correlated with the amplitude of response in face-selective regions or of responses of object-selective regions. These data suggest that developmental increases in neural sensitivity to face identity in face-selective regions improve perceptual discriminability of faces. Our findings significantly advance the understanding of the neural mechanisms of development of face perception and open new avenues for using fMRI adaptation to study the neural development of high-level visual and cognitive functions more broadly. SIGNIFICANCE STATEMENT Face perception, which is critical for daily social interactions, develops from childhood to adulthood. 
However, it is unknown what developmental changes in the brain lead to improved performance. Using fMRI in children and adults, we find that from childhood to adulthood, neural sensitivity to changes in face identity increases in face-selective regions. Critically, subjects' perceptual discriminability among faces is linked to neural sensitivity: participants with higher neural sensitivity in face-selective regions demonstrate higher perceptual discriminability. Thus, our results suggest that developmental increases in face-selective regions' sensitivity to face identity improve perceptual discrimination of faces. These findings significantly advance understanding of the neural mechanisms underlying the development of face perception and have important implications for assessing both typical and atypical development. PMID:27798143
Face-Likeness and Image Variability Drive Responses in Human Face-Selective Ventral Regions
Davidenko, Nicolas; Remus, David A.; Grill-Spector, Kalanit
2012-01-01
The human ventral visual stream contains regions that respond selectively to faces over objects. However, it is unknown whether responses in these regions correlate with how face-like stimuli appear. Here, we use parameterized face silhouettes to manipulate the perceived face-likeness of stimuli and measure responses in face- and object-selective ventral regions with high-resolution fMRI. We first use “concentric hyper-sphere” (CH) sampling to define face silhouettes at different distances from the prototype face. Observers rate the stimuli as progressively more face-like the closer they are to the prototype face. Paradoxically, responses in both face- and object-selective regions decrease as face-likeness ratings increase. Because CH sampling produces blocks of stimuli whose variability is negatively correlated with face-likeness, this effect may be driven by more adaptation during high face-likeness (low-variability) blocks than during low face-likeness (high-variability) blocks. We tested this hypothesis by measuring responses to matched-variability (MV) blocks of stimuli with similar face-likeness ratings as with CH sampling. Critically, under MV sampling, we find a face-specific effect: responses in face-selective regions gradually increase with perceived face-likeness, but responses in object-selective regions are unchanged. Our studies provide novel evidence that face-selective responses correlate with the perceived face-likeness of stimuli, but this effect is revealed only when image variability is controlled across conditions. Finally, our data show that variability is a powerful factor that drives responses across the ventral stream. This indicates that controlling variability across conditions should be a critical tool in future neuroimaging studies of face and object representation. PMID:21823208
Van Belle, Goedele; Vanduffel, Wim; Rossion, Bruno; Vogels, Rufin
2014-01-01
It is widely believed that face processing in the primate brain occurs in a network of category-selective cortical regions. Combined functional MRI (fMRI)-single-cell recording studies in macaques have identified high concentrations of neurons that respond more to faces than objects within face-selective patches. However, cells with a preference for faces over objects are also found scattered throughout inferior temporal (IT) cortex, raising the question whether face-selective cells inside and outside of the face patches differ functionally. Here, we compare the properties of face-selective cells inside and outside of face-selective patches in the IT cortex by means of an image manipulation that reliably disrupts behavior toward face processing: inversion. We recorded IT neurons from two fMRI-defined face-patches (ML and AL) and a region outside of the face patches (herein labeled OUT) during upright and inverted face stimulation. Overall, turning faces upside down reduced the firing rate of face-selective cells. However, there were differences among the recording regions. First, the reduced neuronal response for inverted faces was independent of stimulus position, relative to fixation, in the face-selective patches (ML and AL) only. Additionally, the effect of inversion for face-selective cells in ML, but not those in AL or OUT, was impervious to whether the neurons were initially searched for using upright or inverted stimuli. Collectively, these results show that face-selective cells differ in their functional characteristics depending on their anatomicofunctional location, suggesting that upright faces are preferably coded by face-selective cells inside but not outside of the fMRI-defined face-selective regions of the posterior IT cortex. PMID:25520434
2015-01-01
Color is one of the most prominent features of an image and is used in many skin and face detection applications. Color space transformation is widely used by researchers to improve face and skin detection performance. Despite the substantial research efforts in this area, choosing a proper color space for skin and face classification that can address issues such as illumination variations, varying camera characteristics, and diversity in skin color tones has remained an open issue. This research proposes a new three-dimensional hybrid color space, termed SKN, by employing a Genetic Algorithm heuristic and Principal Component Analysis to find the optimal representation of human skin color across over seventeen existing color spaces. The Genetic Algorithm heuristic is used to find the optimal color component combination in terms of skin detection accuracy, while Principal Component Analysis projects the optimal Genetic Algorithm solution to a lower-dimensional space. Pixel-wise skin detection was used to evaluate the performance of the proposed color space. We employed four classifiers, including Random Forest, Naïve Bayes, Support Vector Machine, and Multilayer Perceptron, to generate the human skin color predictive model. The proposed color space was compared to several existing color spaces and shows superior results in terms of pixel-wise skin detection accuracy. Experimental results show that, using the Random Forest classifier, the proposed SKN color space obtained an average F-score and True Positive Rate of 0.953 and a False Positive Rate of 0.0482, outperforming the existing color spaces in terms of pixel-wise skin detection accuracy. The results also indicate that, among the classifiers used in this study, Random Forest is the most suitable classifier for pixel-wise skin detection applications. PMID:26267377
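The pipeline this abstract describes (a Genetic Algorithm selecting color components, PCA projecting the winning combination to three dimensions, and a Random Forest doing pixel-wise skin classification) can be illustrated with a minimal sketch. This is not the authors' implementation: the component list, GA settings (population size, one-point crossover, bit-flip mutation), and the synthetic pixel data are all illustrative assumptions; the paper's seventeen-plus color spaces and evaluation datasets are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: each row holds one pixel's values across a few
# candidate color components (hypothetical subset; the paper draws from
# 17+ color spaces). Labels mark skin vs. non-skin pixels.
COMPONENTS = ["R", "G", "B", "H", "S", "V", "Y", "Cb", "Cr"]
n_pixels = 500
X = rng.normal(size=(n_pixels, len(COMPONENTS)))
y = (X[:, 0] + 0.5 * X[:, 8] + rng.normal(scale=0.5, size=n_pixels) > 0).astype(int)

def fitness(mask):
    """Cross-validated skin-detection accuracy of a Random Forest trained
    on the PCA projection (to 3-D) of the selected color components."""
    if mask.sum() < 3:                 # need >= 3 components for a 3-D projection
        return 0.0
    feats = PCA(n_components=3).fit_transform(X[:, mask.astype(bool)])
    clf = RandomForestClassifier(n_estimators=15, random_state=0)
    return cross_val_score(clf, feats, y, cv=3).mean()

# Minimal genetic algorithm over binary component-selection masks.
pop = rng.integers(0, 2, size=(8, len(COMPONENTS)))
for generation in range(5):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:4]]   # selection: keep best half
    children = []
    for _ in range(4):
        a, b = parents[rng.integers(4)], parents[rng.integers(4)]
        cut = rng.integers(1, len(COMPONENTS))    # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(len(COMPONENTS)) < 0.1  # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
selected = [c for c, keep in zip(COMPONENTS, best) if keep]
print("selected components:", selected)
```

On real data the fitness function would score masks against labeled skin/non-skin pixel datasets, and the final 3-D PCA projection of the winning component set would constitute the hybrid color space in which the pixel-wise classifier operates.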
Famous face recognition, face matching, and extraversion.
Lander, Karen; Poyarekar, Siddhi
2015-01-01
It has been previously established that extraverts who are skilled at interpersonal interaction perform significantly better than introverts on a face-specific recognition memory task. In our experiment we further investigate the relationship between extraversion and face recognition, focusing on famous face recognition and face matching. Results indicate that more extraverted individuals perform significantly better on an upright famous face recognition task and show significantly larger face inversion effects. However, our results did not find an effect of extraversion on face matching or inverted famous face recognition.
Visual adaptation of the perception of "life": animacy is a basic perceptual dimension of faces.
Koldewyn, Kami; Hanus, Patricia; Balas, Benjamin
2014-08-01
One critical component of understanding another's mind is the perception of "life" in a face. However, little is known about the cognitive and neural mechanisms underlying this perception of animacy. Here, using a visual adaptation paradigm, we ask whether face animacy is (1) a basic dimension of face perception and (2) supported by a common neural mechanism across distinct face categories defined by age and species. Observers rated the perceived animacy of adult human faces before and after adaptation to (1) adult faces, (2) child faces, and (3) dog faces. When testing the perception of animacy in human faces, we found significant adaptation to both adult and child faces, but not dog faces. We did, however, find significant adaptation when morphed dog images and dog adaptors were used. Thus, animacy perception in faces appears to be a basic dimension of face perception that is species specific but not constrained by age categories.
Rêgo, Gabriel Gaudencio; Campanhã, Camila; do Egito, Julia Horta Tabosa; Boggio, Paulo Sérgio
2017-10-01
The ultimatum game (UG) is an endowment-sharing game in which a proposer suggests a division of an asset to a recipient, who must accept or reject it. Economic studies have shown that although recipients usually reject unfair offers, the perception of and reaction to unfairness depend strongly on who the proposer is. Event-related potentials (ERPs) commonly detected in UG studies are the medial frontal negativity (MFN), a component elicited in recipients facing unfair offers, and the P300, a component related to attentional and memory processes. Given this, we aimed to investigate the behavioral and ERP responses of healthy people playing the UG with Down syndrome (DS) and typical development (TD) proposers. Nineteen subjects participated in this study. The UG behavioral data were similar to those of previous studies. ERP analysis showed no MFN in participants facing unfair offers. A higher P300 amplitude was detected when participants faced fair offers from TD proposers compared to fair offers from DS proposers. We also found a positive correlation between P300 amplitude for TD offers and self-esteem scale score. Together, these findings indicate that insertion of an atypical player in the UG led to changes in participants' perception and expectancy of the game.
Balconi, Michela; Canavesio, Ylenia
2016-01-01
The present research explored the effect of social empathy on processing emotional facial expressions. Previous evidence suggested a close relationship between emotional empathy and both the ability to detect facial emotions and the attentional mechanisms involved. A multi-measure approach was adopted: we investigated the association between trait empathy (Balanced Emotional Empathy Scale) and individuals' performance (response times; RTs), attentional mechanisms (eye movements; number and duration of fixations), correlates of cortical activation (event-related potential (ERP) N200 component), and facial responsiveness (facial zygomatic and corrugator activity). Trait empathy was found to affect face detection performance (reduced RTs), attentional processes (more scanning eye movements in specific areas of interest), the ERP salience effect (increased N200 amplitude), and electromyographic activity (more facial responses). A second important result was the demonstration of strong, direct correlations among these measures. We suggest that empathy may function as a social facilitator of the processes underlying the detection of facial emotion, and a general "facial response effect" is proposed to explain these results. We assume that empathy influences both cognitive processing and facial responsiveness, such that empathic individuals are more skilful in processing facial emotion.
Crossing the “Uncanny Valley”: adaptation to cartoon faces can influence perception of human faces
Chen, Haiwen; Russell, Richard; Nakayama, Ken; Livingstone, Margaret
2013-01-01
Adaptation can shift what individuals identify to be a prototypical or attractive face. Past work suggests that low-level shape adaptation can affect high-level face processing but is position dependent. Adaptation to distorted images of faces can also affect face processing but only within sub-categories of faces, such as gender, age, and race/ethnicity. This study assesses whether there is a face representation that is specific to faces (as opposed to all shapes) but general to all kinds of faces (as opposed to sub-categories) by testing whether adaptation to one type of face can affect perception of another. Participants were shown cartoon videos containing faces with abnormally large eyes. Using animated videos allowed us to simulate naturalistic exposure and avoid positional shape adaptation. Results suggest that adaptation to cartoon faces with large eyes shifts preferences for human faces toward larger eyes, supporting the existence of general face representations. PMID:20465173
Face-space: A unifying concept in face recognition research.
Valentine, Tim; Lewis, Michael B; Hills, Peter J
2016-10-01
The concept of a multidimensional psychological space, in which faces can be represented according to their perceived properties, is fundamental to the modern theorist in face processing. Yet the idea was not clearly expressed until 1991. The background that led to the development of face-space is explained, and its continuing influence on theories of face processing is discussed. Research that has explored the properties of the face-space and sought to understand caricature, including facial adaptation paradigms, is reviewed. Face-space as a theoretical framework for understanding the effect of ethnicity and the development of face recognition is evaluated. Finally, two applications of face-space in the forensic setting are discussed. From initially being presented as a model to explain distinctiveness, inversion, and the effect of ethnicity, face-space has become a central pillar in many aspects of face processing. It is currently being developed to help us understand adaptation effects with faces. While being in principle a simple concept, face-space has shaped, and continues to shape, our understanding of face perception.
Two areas for familiar face recognition in the primate brain.
Landi, Sofia M; Freiwald, Winrich A
2017-08-11
Familiarity alters face recognition: Familiar faces are recognized more accurately than unfamiliar ones and under difficult viewing conditions when unfamiliar face recognition fails. The neural basis for this fundamental difference remains unknown. Using whole-brain functional magnetic resonance imaging, we found that personally familiar faces engage the macaque face-processing network more than unfamiliar faces. Familiar faces also recruited two hitherto unknown face areas at anatomically conserved locations within the perirhinal cortex and the temporal pole. These two areas, but not the core face-processing network, responded to familiar faces emerging from a blur with a characteristic nonlinear surge, akin to the abruptness of familiar face recognition. In contrast, responses to unfamiliar faces and objects remained linear. Thus, two temporal lobe areas extend the core face-processing network into a familiar face-recognition system. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
Keyes, Helen; Zalicks, Catherine
2016-01-01
Using a priming paradigm, we investigate whether socially important faces are processed preferentially compared to other familiar and unfamiliar faces, and whether any such effects are affected by changes in viewpoint. Participants were primed with frontal images of personally familiar, famous or unfamiliar faces, and responded to target images of congruent or incongruent identity, presented in frontal, three quarter or profile views. We report that participants responded significantly faster to socially important faces (a friend’s face) compared to other highly familiar (famous) faces or unfamiliar faces. Crucially, responses to famous and unfamiliar faces did not differ. This suggests that, when presented in the context of a socially important stimulus, socially unimportant familiar faces (famous faces) are treated in a similar manner to unfamiliar faces. This effect was not tied to viewpoint, and priming did not affect socially important face processing differently to other faces. PMID:27219101
The Online and Face-to-Face Counseling Attitudes Scales: A Validation Study
ERIC Educational Resources Information Center
Rochlen, Aaron B.; Beretvas, S. Natasha; Zack, Jason S.
2004-01-01
This article reports on the development of measures of attitudes toward online and face-to-face counseling. Overall, participants expressed more favorable evaluations of face-to-face counseling than of online counseling. Significant correlations were found between online and face-to-face counseling with traditional help-seeking attitudes, comfort…
Lin, Tian; Lendry, Reesa; Ebner, Natalie C
2016-11-01
Evidence of effects of face attractiveness on memory is mixed and little is known about the underlying mechanisms of this relationship. Previous work suggests a possible mediating role of affective responding to faces (i.e., face likeability) on the relationship between face attractiveness and memory. Age-related change in social motivation may reduce the relevance of face attractiveness in older adults, with downstream effects on memory. In the present study, 50 young and 51 older participants were presented with face-trait pairs. Faces varied in attractiveness. Participants then completed a face-trait associative recognition memory task and provided likeability ratings for each face. There was a memory-enhancing effect of face attractiveness in young (but not older) participants, which was partially mediated by face likeability. In addition, more attractive and less attractive (compared to moderately attractive) faces were more likely remembered by both young and older participants. This quadratic effect of face attractiveness on memory was not mediated by face likeability. Findings are discussed in the context of motivational influences on memory that vary with age.
The Impact of Perceptual Load on the Non-Conscious Processing of Fearful Faces
Wang, Lili; Feng, Chunliang; Mai, Xiaoqin; Jia, Lina; Zhu, Xiangru; Luo, Wenbo; Luo, Yue-jia
2016-01-01
Emotional stimuli can be processed without consciousness. In the current study, we used event-related potentials (ERPs) to assess whether perceptual load influences non-conscious processing of fearful facial expressions. Perceptual load was manipulated using a letter search task with the target letter presented at the fixation point, while facial expressions were presented peripherally and masked to prevent conscious awareness. The letter string comprised six letters (X or N) that were identical (low load) or different (high load). Participants were instructed to discriminate the letters at fixation or the facial expression (fearful or neutral) in the periphery. Participants were faster and more accurate at detecting letters in the low load condition than in the high load condition. Fearful faces elicited a sustained positivity from 250 ms to 700 ms post-stimulus over fronto-central areas during the face discrimination and low-load letter discrimination conditions, but this effect was completely eliminated during high-load letter discrimination. Our findings imply that non-conscious processing of fearful faces depends on perceptual load, and attentional resources are necessary for non-conscious processing. PMID:27149273
Nozadi, Sara S; Spinrad, Tracy L; Johnson, Scott P; Eisenberg, Nancy
2018-06-01
The current study examined whether an important temperamental characteristic, effortful control (EC), moderates the associations between dispositional anger and sadness, attention biases, and social functioning in a group of preschool-aged children (N = 77). Preschoolers' attentional biases toward angry and sad facial expressions were assessed using eye-tracking, and we obtained teachers' reports of children's temperament and social functioning. Associations of dispositional anger and sadness with time looking at relevant negative emotional stimuli were moderated by children's EC, but relations between time looking at emotional faces and indicators of social functioning, for the most part, were direct and not moderated by EC. In particular, time looking at angry faces (and low EC) predicted high levels of aggressive behaviors, whereas longer time looking at sad faces (and high EC) predicted higher social competence. Finally, latency to detect angry faces predicted aggressive behavior under conditions of average and low levels of EC. Findings are discussed in terms of the importance of differentiating between components of attention biases toward distinct negative emotions, and implications for attention training.