Science.gov

Sample records for viola-jones face detection

  1. A Viola-Jones based hybrid face detection framework

    NASA Astrophysics Data System (ADS)

    Murphy, Thomas M.; Broussard, Randy; Schultz, Robert; Rakvic, Ryan; Ngo, Hau

    2013-12-01

    Improvements in face detection performance would benefit many applications. The OpenCV library implements a standard solution, the Viola-Jones detector, with a statistically boosted rejection cascade of binary classifiers. Empirical evidence has shown that Viola-Jones underdetects in some instances. This research shows that a truncated cascade augmented by a neural network could recover these undetected faces. A hybrid framework is constructed, with a truncated Viola-Jones cascade followed by an artificial neural network used to refine the face decision. Ideally, the truncation stage is selected so that it captures all faces and the neural network removes the false alarms. A feedforward backpropagation network with one hidden layer is trained to discriminate faces based upon the thresholding (detection) values of intermediate stages of the full rejection cascade. A clustering algorithm is used as a precursor to the neural network, to group significantly overlapping detections. Evaluated on the CMU/VASC Image Database, comparison with an unmodified OpenCV approach shows: (1) a 37% increase in detection rates if constrained by the requirement of no increase in false alarms, (2) a 48% increase in detection rates if some additional false alarms are tolerated, and (3) an 82% reduction in false alarms with no reduction in detection rates. These results demonstrate improved face detection and could address the need for such improvement in various applications.
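The grouping of overlapping detections described above can be sketched as a simple greedy clustering of candidate boxes. This is an illustrative sketch, not the authors' exact algorithm: the (x, y, w, h) box format, the IoU criterion, and the 0.3 threshold are all assumptions.

```python
def iou(a, b):
    # a, b: (x, y, w, h) rectangles; returns intersection-over-union.
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    ix = max(0, min(ax2, bx2) - max(a[0], b[0]))
    iy = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def group_detections(boxes, thr=0.3):
    # Greedily merge boxes whose IoU with any cluster member exceeds thr;
    # each cluster is then averaged into a single representative box.
    clusters = []
    for box in boxes:
        for c in clusters:
            if any(iou(box, m) > thr for m in c):
                c.append(box)
                break
        else:
            clusters.append([box])
    return [tuple(sum(v) / len(c) for v in zip(*c)) for c in clusters]
```

Two nearly coincident detections collapse into one averaged box, while a distant detection stays separate.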

  2. Performance improvement of multi-class detection using greedy algorithm for Viola-Jones cascade selection

    NASA Astrophysics Data System (ADS)

    Tereshin, Alexander A.; Usilin, Sergey A.; Arlazarov, Vladimir V.

    2018-04-01

    This paper studies the problem of multi-class object detection in a video stream with Viola-Jones cascades. An adaptive algorithm for selecting a Viola-Jones cascade, based on a greedy choice strategy for the N-armed bandit problem, is proposed. The efficiency of the algorithm is demonstrated on the problem of detection and recognition of bank card logos in a video stream. The proposed algorithm can be effectively used in document localization and identification, recognition of road scene elements, localization and tracking of lengthy objects, and other problems of rigid object detection in heterogeneous data flows. The computational efficiency of the algorithm makes it possible to use it both on personal computers and on mobile devices based on processors with low power consumption.
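The N-armed bandit framing above can be sketched with an epsilon-greedy selector over several cascades. The reward definition (e.g. 1.0 on a successful detection) and the epsilon-greedy rule are assumptions; the paper's exact greedy strategy is not reproduced here.

```python
import random

class GreedyCascadeSelector:
    # Treats per-frame cascade choice as an N-armed bandit: mostly exploit the
    # cascade with the best running mean reward, occasionally explore.
    def __init__(self, n_cascades, epsilon=0.1, seed=0):
        self.counts = [0] * n_cascades
        self.values = [0.0] * n_cascades   # running mean reward per cascade
        self.epsilon = epsilon
        self.rng = random.Random(seed)

    def select(self):
        # Explore with probability epsilon, otherwise exploit the best arm.
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.values))
        return max(range(len(self.values)), key=self.values.__getitem__)

    def update(self, arm, reward):
        # Incremental running-mean update for the chosen arm.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

In use, each frame calls `select()`, runs the chosen cascade, and feeds the detection outcome back via `update()`.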

  3. An application of viola jones method for face recognition for absence process efficiency

    NASA Astrophysics Data System (ADS)

    Rizki Damanik, Rudolfo; Sitanggang, Delima; Pasaribu, Hendra; Siagian, Hendrik; Gulo, Frisman

    2018-04-01

    An attendance record is the document a company uses to record the arrival time of each employee. The most common problems with a fingerprint machine are a slow sensor or a sensor failing to recognize a finger. Employees arrive late to work because of difficulties with the fingerprint system: they need about 3-5 minutes to register attendance when a finger is wet or in poor condition. To overcome this problem, this research utilizes facial recognition for the attendance process, using the Viola-Jones method for face detection. During the processing phase, the RGB face image is converted into a histogram-equalized face image for the subsequent recognition stage. The result of this research is that the attendance process can be completed in less than 1 second, with a maximum head tilt of ±70° and a distance of 20-200 cm. After implementing facial recognition, the attendance process is more efficient, taking less than 1 minute per employee.
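The histogram-equalization preprocessing mentioned above can be sketched in plain NumPy on a grayscale face crop. The grayscale conversion step from the RGB image is assumed, and the sketch expects an image with more than one grey level.

```python
import numpy as np

def equalize_histogram(gray):
    # gray: 2-D uint8 array; returns a histogram-equalized uint8 array.
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first non-empty bin of the CDF
    # Map each grey level through the normalized cumulative distribution.
    lut = np.clip(
        np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255
    ).astype(np.uint8)
    return lut[gray]
```

The lowest occupied grey level maps to 0 and the highest to 255, stretching contrast over the full range.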

  4. A Hybrid Vehicle Detection Method Based on Viola-Jones and HOG + SVM from UAV Images

    PubMed Central

    Xu, Yongzheng; Yu, Guizhen; Wang, Yunpeng; Wu, Xinkai; Ma, Yalong

    2016-01-01

    A new hybrid vehicle detection scheme which integrates the Viola-Jones (V-J) and linear SVM classifier with HOG feature (HOG + SVM) methods is proposed for vehicle detection from low-altitude unmanned aerial vehicle (UAV) images. As both V-J and HOG + SVM are sensitive to on-road vehicles' in-plane rotation, the proposed scheme first adopts a roadway orientation adjustment method, which rotates each UAV image to align the roads with the horizontal direction so the original V-J or HOG + SVM method can be directly applied to achieve fast detection and high accuracy. To address the issue of declining detection speed for V-J and HOG + SVM, the proposed scheme further develops an adaptive switching strategy which judiciously integrates the V-J and HOG + SVM methods, based on their different trends of declining detection speed, to improve detection efficiency. A comprehensive evaluation shows that the switching strategy, combined with the road orientation adjustment method, can significantly improve the efficiency and effectiveness of vehicle detection from UAV images. The results also show that the proposed vehicle detection method is competitive compared with other existing vehicle detection methods. Furthermore, since the proposed vehicle detection method can be performed on videos captured from moving UAV platforms without the need for image registration or an additional road database, it has great potential for field applications. Future research will focus on extending the current method to detect other transportation modes such as buses, trucks, motorcycles, bicycles, and pedestrians. PMID:27548179
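The adaptive switching idea above can be sketched as choosing whichever detector is currently faster, based on smoothed per-frame runtime measurements. The paper's actual switching criterion is not specified at this level of detail, so the moving-average rule and detector names below are assumptions.

```python
class DetectorSwitcher:
    # Keeps an exponential moving average of each detector's measured runtime
    # and always proposes the currently cheaper one.
    def __init__(self, alpha=0.2):
        self.avg = {"vj": None, "hog_svm": None}
        self.alpha = alpha  # smoothing factor for the runtime moving average

    def record(self, name, runtime):
        prev = self.avg[name]
        self.avg[name] = runtime if prev is None else (
            self.alpha * runtime + (1 - self.alpha) * prev)

    def choose(self):
        # Prefer an untried detector so both get timed at least once.
        for name, avg in self.avg.items():
            if avg is None:
                return name
        return min(self.avg, key=self.avg.get)
```

Each frame, the caller times the chosen detector, feeds the runtime back with `record()`, and asks `choose()` again for the next frame.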

  5. A Hybrid Vehicle Detection Method Based on Viola-Jones and HOG + SVM from UAV Images.

    PubMed

    Xu, Yongzheng; Yu, Guizhen; Wang, Yunpeng; Wu, Xinkai; Ma, Yalong

    2016-08-19

    A new hybrid vehicle detection scheme which integrates the Viola-Jones (V-J) and linear SVM classifier with HOG feature (HOG + SVM) methods is proposed for vehicle detection from low-altitude unmanned aerial vehicle (UAV) images. As both V-J and HOG + SVM are sensitive to on-road vehicles' in-plane rotation, the proposed scheme first adopts a roadway orientation adjustment method, which rotates each UAV image to align the roads with the horizontal direction so the original V-J or HOG + SVM method can be directly applied to achieve fast detection and high accuracy. To address the issue of declining detection speed for V-J and HOG + SVM, the proposed scheme further develops an adaptive switching strategy which judiciously integrates the V-J and HOG + SVM methods, based on their different trends of declining detection speed, to improve detection efficiency. A comprehensive evaluation shows that the switching strategy, combined with the road orientation adjustment method, can significantly improve the efficiency and effectiveness of vehicle detection from UAV images. The results also show that the proposed vehicle detection method is competitive compared with other existing vehicle detection methods. Furthermore, since the proposed vehicle detection method can be performed on videos captured from moving UAV platforms without the need for image registration or an additional road database, it has great potential for field applications. Future research will focus on extending the current method to detect other transportation modes such as buses, trucks, motorcycles, bicycles, and pedestrians.

  6. Text extraction from images in the wild using the Viola-Jones algorithm

    NASA Astrophysics Data System (ADS)

    Saabna, Raid M.; Zingboim, Eran

    2018-04-01

    Text localization and extraction is an important issue in modern computer vision applications. Applications such as reading and translating text in the wild or from videos are among the many that can benefit from results in this field. In this work, we adapt the well-known Viola-Jones algorithm to enable text extraction and localization from images in the wild. Viola-Jones is an efficient and fast image-processing algorithm originally used for face detection. Based on some resemblance between text and face detection tasks in the wild, we have modified the Viola-Jones detector to find regions of interest where text may be localized. In the proposed approach, some modifications to the Haar-like features and a semi-automatic process of data set generation and manipulation are presented to train the algorithm. Sliding windows of different sizes are used to scan the image for individual letters and letter clusters. A post-processing step combines the detected letters into words and removes false positives. The novelty of the presented approach is using the strengths of a modified Viola-Jones algorithm to identify many different objects representing different letters and clusters of similar letters, and later combining them into words of varying lengths. Impressive results were obtained on the ICDAR contest data sets.
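The post-processing step of combining detected letters into words can be sketched as merging letter boxes that sit on roughly the same line and are close horizontally. The (x, y, w, h) box format and the gap thresholds are illustrative assumptions, not the paper's tuned values.

```python
def merge_letters_into_words(boxes, max_gap=10, max_dy=5):
    # boxes: (x, y, w, h) letter detections. Letters are scanned left to
    # right and appended to a word when horizontally close and vertically
    # aligned with that word's last letter.
    words = []
    for box in sorted(boxes):                      # left-to-right by x
        for w in words:
            last = w[-1]
            close_x = box[0] - (last[0] + last[2]) <= max_gap
            same_line = abs(box[1] - last[1]) <= max_dy
            if close_x and same_line:
                w.append(box)
                break
        else:
            words.append([box])
    # Return one bounding box per word.
    return [(min(b[0] for b in w),
             min(b[1] for b in w),
             max(b[0] + b[2] for b in w) - min(b[0] for b in w),
             max(b[1] + b[3] for b in w) - min(b[1] for b in w))
            for w in words]
```

Two adjacent letter boxes merge into one word box, while a distant letter starts a new word.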

  7. Enhancement of Fast Face Detection Algorithm Based on a Cascade of Decision Trees

    NASA Astrophysics Data System (ADS)

    Khryashchev, V. V.; Lebedev, A. A.; Priorov, A. L.

    2017-05-01

    A face detection algorithm based on a cascade of ensembles of decision trees (CEDT) is presented. The new approach allows detecting faces in poses other than frontal through the use of multiple classifiers, each trained for a specific range of head rotation angles. The results showed a high processing rate for CEDT on images of standard size. The algorithm increases the area under the ROC curve by 13% compared to a standard Viola-Jones face detection algorithm. The final implementation of the algorithm consists of 5 different cascades for frontal/non-frontal faces. The simulation results also show the low computational complexity of the CEDT algorithm in comparison with the standard Viola-Jones approach. This could prove important in the embedded system and mobile device industries because it can reduce the cost of hardware and make battery life longer.

  8. Toward automated face detection in thermal and polarimetric thermal imagery

    NASA Astrophysics Data System (ADS)

    Gordon, Christopher; Acosta, Mark; Short, Nathan; Hu, Shuowen; Chan, Alex L.

    2016-05-01

    Visible spectrum face detection algorithms perform reliably under controlled lighting conditions. However, variations in illumination and application of cosmetics can distort the features used by common face detectors, thereby degrading their detection performance. Thermal and polarimetric thermal facial imaging are relatively invariant to illumination and robust to the application of makeup, because they measure emitted radiation instead of reflected light. The objective of this work is to evaluate a government off-the-shelf wavelet-based naïve-Bayes face detection algorithm and a commercial off-the-shelf Viola-Jones cascade face detection algorithm on face imagery acquired in different spectral bands. New classifiers were trained using the Viola-Jones cascade object detection framework with preprocessed facial imagery. Preprocessing with Difference of Gaussians (DoG) filtering reduces the modality gap between facial signatures across the different spectral bands, thus enabling more correlated histogram of oriented gradients (HOG) features to be extracted from the preprocessed thermal and visible face images. Since the availability of training data is much more limited in the thermal spectrum than in the visible spectrum, it is not feasible to train a robust multi-modal face detector using thermal imagery alone. A large training dataset was constituted from DoG-filtered visible and thermal imagery, which was subsequently used to generate a custom-trained Viola-Jones detector. A 40% increase in face detection rate was achieved on a testing dataset, as compared to the performance of a pre-trained/baseline face detector. Insights gained in this research are valuable in the development of more robust multi-modal face detectors.
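The Difference-of-Gaussians preprocessing mentioned above amounts to subtracting a wide Gaussian blur from a narrow one, keeping band-pass detail that is shared across spectral bands. A minimal NumPy sketch follows; the sigma values are assumptions, not the paper's settings.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    # Normalized 1-D Gaussian kernel truncated at ~3 sigma.
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    # Separable Gaussian blur via two 1-D convolutions (zero-padded edges).
    k = gaussian_kernel1d(sigma)
    rows = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, k, mode="same")

def dog_filter(img, sigma_narrow=1.0, sigma_wide=2.0):
    # Band-pass response: narrow blur minus wide blur.
    img = img.astype(float)
    return blur(img, sigma_narrow) - blur(img, sigma_wide)
```

On a constant image the interior response is zero, since both blurs preserve the mean; only edges and texture survive the subtraction.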

  9. Improving face image extraction by using deep learning technique

    NASA Astrophysics Data System (ADS)

    Xue, Zhiyun; Antani, Sameer; Long, L. R.; Demner-Fushman, Dina; Thoma, George R.

    2016-03-01

    The National Library of Medicine (NLM) has made a collection of over 1.2 million research articles containing 3.2 million figure images searchable using the Open-iSM multimodal (text+image) search engine. Many images are visible light photographs, some of which contain faces ("face images"). Some of these face images are acquired in unconstrained settings, while others are studio photos. To extract the face regions in the images, we first applied one of the most widely used face detectors, a pre-trained Viola-Jones detector implemented in Matlab and OpenCV. The Viola-Jones detector was trained for unconstrained face image detection, but the results for the NLM database included many false positives, which resulted in a very low precision. To improve this performance, we applied a deep learning technique, which reduced the number of false positives and, as a result, significantly improved the detection precision. (For example, the classification accuracy for identifying whether the face regions output by this Viola-Jones detector are true positives or not in a test set is about 96%.) By combining these two techniques (Viola-Jones and deep learning) we were able to increase the system precision considerably, while avoiding the need to construct a large training set by manual delineation of the face regions.

  10. Classification of facial-emotion expression in the application of psychotherapy using Viola-Jones and Edge-Histogram of Oriented Gradient.

    PubMed

    Candra, Henry; Yuwono, Mitchell; Rifai Chai; Nguyen, Hung T; Su, Steven

    2016-08-01

    Psychotherapy requires appropriate recognition of a patient's facial-emotion expression to provide proper treatment in a psychotherapy session. To address this need, this paper proposes a facial emotion recognition system combining the Viola-Jones detector with a feature descriptor we term Edge-Histogram of Oriented Gradients (E-HOG). The performance of the proposed method is compared across various feature sources: the face, the eyes, the mouth, and both the eyes and the mouth together. Seven classes of basic emotions have been successfully identified with 96.4% accuracy using a multi-class Support Vector Machine (SVM). The proposed descriptor E-HOG is much leaner to compute than traditional HOG, as shown by a significant improvement in processing time as high as 1833.33% (p-value = 2.43E-17) with a slight reduction in accuracy of only 1.17% (p-value = 0.0016).

  11. Skin Color Segmentation Using Coarse-to-Fine Region on Normalized RGB Chromaticity Diagram for Face Detection

    NASA Astrophysics Data System (ADS)

    Soetedjo, Aryuanto; Yamada, Koichi

    This paper describes a new color segmentation based on a normalized RGB chromaticity diagram for face detection. Face skin is extracted from color images using a coarse skin region with fixed boundaries followed by a fine skin region with variable boundaries. Two newly developed histograms that have prominent peaks of skin color and non-skin colors are employed to adjust the boundaries of the skin region. The proposed approach does not need a skin color model, which depends on a specific camera parameter and is usually limited to a particular environment condition, and no sample images are required. The experimental results using color face images of various races under varying lighting conditions and complex backgrounds, obtained from four different resources on the Internet, show a high detection rate of 87%. The results of the detection rate and computation time are comparable to the well known real-time face detection method proposed by Viola-Jones [11], [12].
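The normalized RGB chromaticity space used above divides each channel by the pixel's total intensity, discarding brightness. A minimal sketch of the conversion plus a coarse fixed-boundary skin test follows; the boundary values are illustrative assumptions, not the paper's tuned region.

```python
import numpy as np

def to_rg_chromaticity(rgb):
    # rgb: (..., 3) float array; returns r and g normalized by intensity,
    # so that r + g + b == 1 for each pixel (b is redundant and dropped).
    s = rgb.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0                  # avoid division by zero on black pixels
    norm = rgb / s
    return norm[..., 0], norm[..., 1]

def coarse_skin_mask(rgb, r_lo=0.36, r_hi=0.46, g_lo=0.28, g_hi=0.36):
    # Fixed-boundary coarse skin region in the rg chromaticity plane.
    r, g = to_rg_chromaticity(rgb.astype(float))
    return (r >= r_lo) & (r <= r_hi) & (g >= g_lo) & (g <= g_hi)
```

A skin-toned pixel such as (200, 140, 110) falls inside the coarse region, while a strongly blue pixel falls outside it.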

  12. The feasibility test of state-of-the-art face detection algorithms for vehicle occupant detection

    NASA Astrophysics Data System (ADS)

    Makrushin, Andrey; Dittmann, Jana; Vielhauer, Claus; Langnickel, Mirko; Kraetzer, Christian

    2010-01-01

    Vehicle seat occupancy detection systems are designed to prevent the deployment of airbags at unoccupied seats, thus avoiding the considerable cost imposed by the replacement of airbags. Occupancy detection can also improve passenger comfort, e.g. by activating air-conditioning systems. The most promising development perspectives are seen in optical sensing systems, which have become cheaper and smaller in recent years. The most plausible way to check seat occupancy is the detection of the presence and location of heads, or more precisely, faces. This paper compares the detection performance of the three most commonly used and widely available face detection algorithms: Viola-Jones, Kienzle et al. and Nilsson et al. The main objective of this work is to identify whether one of these systems is suitable for use in a vehicle environment with variable and mostly non-uniform illumination conditions, and whether any one face detection system can be sufficient for seat occupancy detection. The evaluation of detection performance is based on a large database comprising 53,928 video frames containing proprietary data collected from 39 persons of both sexes and different ages and body heights, as well as different objects such as bags and rearward/forward-facing child restraint systems.

  13. Adaptive skin segmentation via feature-based face detection

    NASA Astrophysics Data System (ADS)

    Taylor, Michael J.; Morris, Tim

    2014-05-01

    Variations in illumination can have significant effects on the apparent colour of skin, which can be damaging to the efficacy of any colour-based segmentation approach. We attempt to overcome this issue by presenting a new adaptive approach, capable of generating skin colour models at run-time. Our approach adopts a Viola-Jones feature-based face detector, in a moderate-recall, high-precision configuration, to sample faces within an image, with an emphasis on avoiding potentially detrimental false positives. From these samples, we extract a set of pixels that are likely to be from skin regions, filter them according to their relative luma values in an attempt to eliminate typical non-skin facial features (eyes, mouths, nostrils, etc.), and hence establish a set of pixels that we can be confident represent skin. Using this representative set, we train a unimodal Gaussian function to model the skin colour in the given image in the normalised rg colour space - a combination of modelling approach and colour space that benefits us in a number of ways. A generated function can subsequently be applied to every pixel in the given image, and, hence, the probability that any given pixel represents skin can be determined. Segmentation of the skin, therefore, can be as simple as applying a binary threshold to the calculated probabilities. In this paper, we touch upon a number of existing approaches, describe the methods behind our new system, present the results of its application to arbitrary images of people with detectable faces, which we have found to be extremely encouraging, and investigate its potential to be used as part of real-time systems.
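The core of the adaptive approach above, fitting a unimodal Gaussian to face-sampled pixels in normalized rg space and scoring every pixel, can be condensed as follows. This is a simplified sketch: the luma-based filtering of non-skin facial features is omitted, and the diagonal covariance is an assumption made for brevity.

```python
import numpy as np

def fit_skin_gaussian(skin_rgb):
    # skin_rgb: (N, 3) float array of pixels sampled from detected faces.
    # Returns the mean and (diagonal) variance of their rg chromaticities.
    s = skin_rgb.sum(axis=1, keepdims=True)
    s[s == 0] = 1.0
    rg = (skin_rgb / s)[:, :2]
    return rg.mean(axis=0), rg.var(axis=0) + 1e-8   # epsilon avoids zero var

def skin_probability(rgb, mean, var):
    # Unnormalized Gaussian likelihood in rg space for each pixel; a binary
    # threshold on this score yields the skin segmentation.
    s = rgb.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0
    rg = (rgb / s)[..., :2]
    return np.exp(-0.5 * ((rg - mean) ** 2 / var).sum(axis=-1))
```

Pixels whose chromaticity resembles the sampled face pixels score near 1, while dissimilar pixels score near 0.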

  14. Automatic Fatigue Detection of Drivers through Yawning Analysis

    NASA Astrophysics Data System (ADS)

    Azim, Tayyaba; Jaffar, M. Arfan; Ramzan, M.; Mirza, Anwar M.

    This paper presents a non-intrusive fatigue detection system based on the video analysis of drivers. The focus of the paper is on how to detect yawning which is an important cue for determining driver's fatigue. Initially, the face is located through Viola-Jones face detection method in a video frame. Then, a mouth window is extracted from the face region, in which lips are searched through spatial fuzzy c-means (s-FCM) clustering. The degree of mouth openness is extracted on the basis of mouth features, to determine driver's yawning state. If the yawning state of the driver persists for several consecutive frames, the system concludes that the driver is non-vigilant due to fatigue and is thus warned through an alarm. The system reinitializes when occlusion or misdetection occurs. Experiments were carried out using real data, recorded in day and night lighting conditions, and with users belonging to different race and gender.

  15. Rapid prototyping of SoC-based real-time vision system: application to image preprocessing and face detection

    NASA Astrophysics Data System (ADS)

    Jridi, Maher; Alfalou, Ayman

    2017-05-01

    The major goal of this paper is to investigate the multi-CPU/FPGA SoC (System on Chip) design flow and to transfer the know-how and skills needed to rapidly design embedded real-time vision systems. Our aim is to show how the use of these devices can benefit system-level integration, since they make simultaneous hardware and software development possible. We take facial detection and preprocessing as a case study, since they have great potential to be used in several applications such as video surveillance, building access control and criminal identification. The designed system uses the Xilinx Zedboard platform, which is the central element of the developed vision system. Video acquisition is performed using either a standard webcam connected to the Zedboard via a USB interface or several IP camera devices. Visualization of the video content and intermediate results is possible via an HDMI interface connected to an HD display. The treatments embedded in the system are as follows: (i) pre-processing such as edge detection, implemented on the ARM and in the reconfigurable logic; (ii) software implementation of motion detection and face detection using either Viola-Jones or LBP (Local Binary Pattern); and (iii) an application layer to select the processing application and to display results in a web page. One uniquely interesting feature of the proposed system is that two functions have been developed to transmit data from and to the VDMA port. With the proposed optimization, the hardware implementation of the Sobel filter takes 27 ms and 76 ms for 640x480 and 720p resolutions, respectively. Hence, with the FPGA implementation, an acceleration of 5 times is obtained, which allows the processing of 37 fps and 13 fps for 640x480 and 720p resolutions, respectively.
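The Sobel edge detector benchmarked above can be sketched in plain NumPy, in contrast to the paper's ARM and FPGA implementations. The naive valid-region convolution below is for illustration only; it trades the speed discussed in the abstract for clarity.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d(img, kernel):
    # Naive valid-region 2-D correlation (sufficient for a sketch; sign
    # differences from true convolution cancel in the magnitude).
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def sobel_magnitude(img):
    # Gradient magnitude from the horizontal and vertical Sobel responses.
    gx = convolve2d(img, SOBEL_X)
    gy = convolve2d(img, SOBEL_Y)
    return np.hypot(gx, gy)
```

A vertical step edge produces a strong response in the columns spanning the step and zero response in flat regions.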

  16. Human ear detection in the thermal infrared spectrum

    NASA Astrophysics Data System (ADS)

    Abaza, Ayman; Bourlai, Thirimachos

    2012-06-01

    In this paper the problem of human ear detection in the thermal infrared (IR) spectrum is studied in order to illustrate the advantages and limitations of the most important steps of ear-based biometrics that can operate in day and night time environments. The main contributions of this work are two-fold: First, a dual-band database is assembled that consists of visible and thermal profile face images. The thermal data was collected using a high definition middle-wave infrared (3-5 microns) camera that is capable of acquiring thermal imprints of human skin. Second, a fully automated, thermal imaging based ear detection method is developed for real-time segmentation of human ears in either day or night time environments. The proposed method is based on Haar features forming a cascaded AdaBoost classifier (our modified version of the original Viola-Jones approach, which was designed to be applied mainly to visible-band images). The main advantage of the proposed method, applied to our profile face image data set collected in the thermal band, is that it is designed to reduce the learning time required by the original Viola-Jones method from several weeks to several hours. Unlike other approaches reported in the literature, which have been tested but not designed to operate in the thermal band, our method yields a high detection accuracy that reaches ~91.5%. Further analysis of our data set showed that: (a) photometric normalization techniques do not directly improve ear detection performance; however, when using a certain photometric normalization technique (CLAHE) on falsely detected images, the detection rate improved by ~4%; (b) the high detection accuracy of our method did not degrade when we lowered the original spatial resolution of the thermal ear images; for example, even after using one third of the original spatial resolution (i.e. ~20% of the original computational time) of the thermal profile face images, the high ear detection accuracy of our method was maintained.

  17. Face detection and eyeglasses detection for thermal face recognition

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng

    2012-01-01

    Thermal face recognition has become an active research direction in human identification because it does not rely on illumination conditions. Face detection and eyeglasses detection are necessary steps prior to face recognition using thermal images. Infrared light cannot pass through glasses, so glasses appear as dark areas in a thermal image. One possible solution is to detect eyeglasses and to exclude the eyeglasses areas before face matching. For thermal face detection, a projection profile analysis algorithm is proposed, where region growing and morphology operations are used to segment the body of a subject; then the derivatives of two projections (horizontal and vertical) are calculated and analyzed to locate a minimal rectangle containing the face area. The search region for a pair of eyeglasses lies within the detected face area. The eyeglasses detection algorithm should produce either a binary mask if eyeglasses are present, or an empty set if there are no eyeglasses at all. In the proposed eyeglasses detection algorithm, block processing, region growing, and a priori knowledge (i.e., low mean and variance within glasses areas, and the shapes and locations of eyeglasses) are employed. The results of face detection and eyeglasses detection are quantitatively measured and analyzed using manually defined ground truths (for both face and eyeglasses). Our experimental results show that the proposed face detection and eyeglasses detection algorithms performed very well against the predefined ground truths.
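The projection-profile idea above can be sketched by projecting the segmented binary mask onto each axis and taking the extent of the non-empty rows and columns as a bounding rectangle. This simplification thresholds the projections directly; the paper additionally analyzes the profile derivatives, which is not reproduced here.

```python
import numpy as np

def bounding_rectangle(mask):
    # mask: 2-D boolean array of the segmented foreground (subject's body).
    rows = mask.sum(axis=1)          # horizontal projection profile
    cols = mask.sum(axis=0)          # vertical projection profile
    ys = np.flatnonzero(rows)        # rows containing any foreground
    xs = np.flatnonzero(cols)        # columns containing any foreground
    if ys.size == 0:
        return None                  # nothing segmented
    return xs[0], ys[0], xs[-1] - xs[0] + 1, ys[-1] - ys[0] + 1  # x, y, w, h
```

For a rectangular blob of foreground pixels, the returned (x, y, w, h) tuple is exactly its bounding box.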

  18. Energy conservation using face detection

    NASA Astrophysics Data System (ADS)

    Deotale, Nilesh T.; Kalbande, Dhananjay R.; Mishra, Akassh A.

    2011-10-01

    Computerized face detection is concerned with the difficult task of locating human faces in a video signal. It has several applications, such as face recognition, simultaneous multiple face processing, biometrics, security, video surveillance, human-computer interfaces and image database management; digital cameras use face detection for autofocus and for selecting regions of interest in photo slideshows that pan and scale. The present paper deals with energy conservation using face detection. Automating the process on a computer requires the use of various image processing techniques. There are various methods that can be used for face detection, such as contour tracking, template matching, controlled background, model-based, motion-based and color-based methods. Basically, the video of the subject is converted into images, which are selected manually for processing. However, several factors like poor illumination, movement of the face, viewpoint-dependent physical appearance, acquisition geometry, imaging conditions and compression artifacts make face detection difficult. This paper reports an algorithm for conservation of energy using face detection for various devices. The present paper suggests that energy conservation can be achieved by detecting the face, reducing the brightness of the complete image, and then adjusting the brightness of the particular area of the image where the face is located using histogram equalization.
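The energy-saving idea above, dimming the whole frame while keeping the detected face region bright, can be sketched as follows. The face box is assumed to come from a detector upstream, and the dimming factor is an illustrative choice; the histogram-equalization step applied to the face region is omitted for brevity.

```python
import numpy as np

def dim_except_face(gray, face_box, dim=0.4):
    # gray: 2-D uint8 frame; face_box: (x, y, w, h) from a face detector.
    # Dim the whole frame, then restore the original face region.
    out = (gray.astype(float) * dim).astype(np.uint8)
    x, y, w, h = face_box
    out[y:y + h, x:x + w] = gray[y:y + h, x:x + w]   # keep the face bright
    return out
```

Background pixels are reduced to 40% of their brightness while the face rectangle retains its original values.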

  19. Face Liveness Detection Using Defocus

    PubMed Central

    Kim, Sooyeon; Ban, Yuseok; Lee, Sangyoun

    2015-01-01

    In order to develop security systems for identity authentication, face recognition (FR) technology has been applied. One of the main problems of applying FR technology is that the systems are especially vulnerable to attacks with spoofing faces (e.g., 2D pictures). To defend from these attacks and to enhance the reliability of FR systems, many anti-spoofing approaches have been recently developed. In this paper, we propose a method for face liveness detection using the effect of defocus. From two images sequentially taken at different focuses, three features, focus, power histogram and gradient location and orientation histogram (GLOH), are extracted. Afterwards, we detect forged faces through the feature-level fusion approach. For reliable performance verification, we develop two databases with a handheld digital camera and a webcam. The proposed method achieves a 3.29% half total error rate (HTER) at a given depth of field (DoF) and can be extended to camera-equipped devices, like smartphones. PMID:25594594
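The half total error rate (HTER) reported above is conventionally the mean of the false acceptance rate (FAR) and the false rejection rate (FRR) at a chosen threshold. A minimal sketch, assuming higher scores indicate a live face:

```python
def hter(scores_genuine, scores_spoof, threshold):
    # FRR: genuine (live) faces wrongly rejected; FAR: spoofs wrongly accepted.
    frr = sum(s < threshold for s in scores_genuine) / len(scores_genuine)
    far = sum(s >= threshold for s in scores_spoof) / len(scores_spoof)
    return (far + frr) / 2
```

With one of three genuine scores below the threshold and one of two spoof scores above it, the HTER is (1/3 + 1/2) / 2 = 5/12.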

  20. The wide window of face detection.

    PubMed

    Hershler, Orit; Golan, Tal; Bentin, Shlomo; Hochstein, Shaul

    2010-08-20

    Faces are detected more rapidly than other objects in visual scenes and search arrays, but the cause for this face advantage has been contested. In the present study, we found that under conditions of spatial uncertainty, faces were easier to detect than control targets (dog faces, clocks and cars) even in the absence of surrounding stimuli, making an explanation based only on low-level differences unlikely. This advantage improved with eccentricity in the visual field, enabling face detection in wider visual windows, and pointing to selective sparing of face detection at greater eccentricities. This face advantage might be due to perceptual factors favoring face detection. In addition, the relative face advantage is greater under flanked than non-flanked conditions, suggesting an additional, possibly attention-related benefit enabling face detection in groups of distracters.

  1. A robust human face detection algorithm

    NASA Astrophysics Data System (ADS)

    Raviteja, Thaluru; Karanam, Srikrishna; Yeduguru, Dinesh Reddy V.

    2012-01-01

    Human face detection plays a vital role in many applications, such as video surveillance, face image database management and human-computer interfaces, among others. This paper proposes a robust algorithm for face detection in still color images that works well even in a crowded environment. The algorithm uses a conjunction of skin color histogram analysis, morphological processing and geometrical analysis for detecting human faces. To reinforce the accuracy of face detection, we further identify mouth and eye regions to establish the presence/absence of a face in a particular region of interest.

  2. Seeing Objects as Faces Enhances Object Detection.

    PubMed

    Takahashi, Kohske; Watanabe, Katsumi

    2015-10-01

    The face is a special visual stimulus. Both bottom-up processes for low-level facial features and top-down modulation by face expectations contribute to the advantages of face perception. However, it is hard to dissociate the top-down factors from the bottom-up processes, since facial stimuli mandatorily lead to face awareness. In the present study, using the face pareidolia phenomenon, we demonstrated that face awareness, namely seeing an object as a face, enhances object detection performance. In face pareidolia, some people see a visual stimulus, for example, three dots arranged in a V shape, as a face, while others do not. This phenomenon allows us to investigate the effect of face awareness while leaving the stimulus per se unchanged. Participants were asked to detect a face target or a triangle target. While the target per se was identical between the two tasks, detection sensitivity was higher when the participants recognized the target as a face. This was the case irrespective of the stimulus eccentricity or the vertical orientation of the stimulus. These results demonstrate that seeing an object as a face facilitates object detection via top-down modulation. The advantages of face perception are, therefore, at least partly, due to face awareness.

  3. Seeing Objects as Faces Enhances Object Detection

    PubMed Central

    Watanabe, Katsumi

    2015-01-01

    The face is a special visual stimulus. Both bottom-up processing of low-level facial features and top-down modulation by face expectations contribute to the advantages of face perception. However, it is hard to dissociate the top-down factors from the bottom-up processes, since facial stimuli necessarily lead to face awareness. In the present study, using the face pareidolia phenomenon, we demonstrated that face awareness, namely seeing an object as a face, enhances object detection performance. In face pareidolia, some people see a visual stimulus, for example three dots arranged in a V shape, as a face, while others do not. This phenomenon allows us to investigate the effect of face awareness while leaving the stimulus itself unchanged. Participants were asked to detect a face target or a triangle target. While the target itself was identical between the two tasks, detection sensitivity was higher when participants recognized the target as a face. This was the case irrespective of the stimulus eccentricity or the vertical orientation of the stimulus. These results demonstrate that seeing an object as a face facilitates object detection via top-down modulation. The advantages of face perception are, therefore, at least partly due to face awareness. PMID:27648219

  4. Efficient human face detection in infancy.

    PubMed

    Jakobsen, Krisztina V; Umstead, Lindsey; Simpson, Elizabeth A

    2016-01-01

    Adults detect conspecific faces more efficiently than heterospecific faces; however, the development of this own-species bias (OSB) remains unexplored. We tested whether 6- and 11-month-olds exhibit OSB in their attention to human and animal faces in complex visual displays with high perceptual load (25 images competing for attention). Infants (n = 48) and adults (n = 43) passively viewed arrays containing a face among 24 non-face distractors while we measured their gaze with remote eye tracking. While OSB is typically not observed until about 9 months, we found that, already by 6 months, human faces were more likely to be detected, were detected more quickly (attention capture), and received longer looks (attention holding) than animal faces. These data suggest that 6-month-olds already exhibit OSB in face detection efficiency, consistent with perceptual attunement. This specialization may reflect the biological importance of detecting conspecific faces, a foundational ability for early social interactions. © 2015 Wiley Periodicals, Inc.

  5. A causal relationship between face-patch activity and face-detection behavior.

    PubMed

    Sadagopan, Srivatsun; Zarco, Wilbert; Freiwald, Winrich A

    2017-04-04

    The primate brain contains distinct areas densely populated by face-selective neurons. One of these, face-patch ML, contains neurons selective for contrast relationships between face parts. Such contrast-relationships can serve as powerful heuristics for face detection. However, it is unknown whether neurons with such selectivity actually support face-detection behavior. Here, we devised a naturalistic face-detection task and combined it with fMRI-guided pharmacological inactivation of ML to test whether ML is of critical importance for real-world face detection. We found that inactivation of ML impairs face detection. The effect was anatomically specific, as inactivation of areas outside ML did not affect face detection, and it was categorically specific, as inactivation of ML impaired face detection while sparing body and object detection. These results establish that ML function is crucial for detection of faces in natural scenes, performing a critical first step on which other face processing operations can build.

  6. Varying face occlusion detection and iterative recovery for face recognition

    NASA Astrophysics Data System (ADS)

    Wang, Meng; Hu, Zhengping; Sun, Zhe; Zhao, Shuhuan; Sun, Mei

    2017-05-01

    In most sparse representation methods for face recognition (FR), occlusion problems are usually solved by removing the occluded parts of both query samples and training samples before performing recognition. This practice ignores the global features of the facial image and may lead to unsatisfactory results due to the limitations of local features. Considering this drawback, we propose a method called varying occlusion detection and iterative recovery for FR. The main contributions of our method are as follows: (1) to detect an accurate occlusion area in facial images, a combination of image processing and intersection-based clustering is used for occluded FR; (2) according to the accurate occlusion map, new integrated facial images are recovered iteratively and fed into the recognition process; and (3) the effectiveness of our method on recognition accuracy is verified by comparing it with three typical occlusion map detection methods. Experiments show that the proposed method has highly accurate detection and recovery performance and that it outperforms several similar state-of-the-art methods under partial contiguous occlusion.

  7. Detecting and Categorizing Fleeting Emotions in Faces

    PubMed Central

    Sweeny, Timothy D.; Suzuki, Satoru; Grabowecky, Marcia; Paller, Ken A.

    2013-01-01

    Expressions of emotion are often brief, providing only fleeting images from which to base important social judgments. We sought to characterize the sensitivity and mechanisms of emotion detection and expression categorization when exposure to faces is very brief, and to determine whether these processes dissociate. Observers viewed 2 backward-masked facial expressions in quick succession, 1 neutral and the other emotional (happy, fearful, or angry), in a 2-interval forced-choice task. On each trial, observers attempted to detect the emotional expression (emotion detection) and to classify the expression (expression categorization). Above-chance emotion detection was possible with extremely brief exposures of 10 ms and was most accurate for happy expressions. We compared categorization among expressions using a d′ analysis, and found that categorization was usually above chance for angry versus happy and fearful versus happy, but consistently poor for fearful versus angry expressions. Fearful versus angry categorization was poor even when only negative emotions (fearful, angry, or disgusted) were used, suggesting that this categorization is poor independent of decision context. Inverting faces impaired angry versus happy categorization, but not emotion detection, suggesting that information from facial features is used differently for emotion detection and expression categorizations. Emotion detection often occurred without expression categorization, and expression categorization sometimes occurred without emotion detection. These results are consistent with the notion that emotion detection and expression categorization involve separate mechanisms. PMID:22866885

  8. Robust Face Detection from Still Images

    DTIC Science & Technology

    2014-01-01

    significant change in false acceptance rates. Keywords— face detection; illumination; skin color variation; Haar-like features; OpenCV I. INTRODUCTION... OpenCV and an algorithm which used histogram equalization. The test is performed against 17 subjects under 576 viewing conditions from the extended Yale...original OpenCV algorithm proved the least accurate, having a hit rate of only 75.6%. It also had the lowest FAR but only by a slight margin at 25.2

  9. Detecting Visually Observable Disease Symptoms from Faces.

    PubMed

    Wang, Kuan; Luo, Jiebo

    2016-12-01

    Recent years have witnessed increasing interest in applying machine learning to clinical informatics and healthcare systems. A significant amount of research has been done on healthcare systems based on supervised learning. In this study, we present a generalized solution for detecting visually observable symptoms on faces using semi-supervised anomaly detection combined with machine vision algorithms. We rely on disease-related statistical facts to detect abnormalities and classify them into multiple categories to narrow down their possible medical causes. Our method stands in contrast with most existing approaches, which are limited by the availability of the labeled training data required for supervised learning, and therefore offers the major advantage of flagging any unusual and visually observable symptoms.

  10. The shape of the face template: geometric distortions of faces and their detection in natural scenes.

    PubMed

    Pongakkasira, Kaewmart; Bindemann, Markus

    2015-04-01

    Human face detection might be driven by skin-coloured face-shaped templates. To explore this idea, this study compared the detection of faces for which the natural height-to-width ratios were preserved with distorted faces that were stretched vertically or horizontally. The impact of stretching on detection performance was not obvious when faces were equated to their unstretched counterparts in terms of their height or width dimension (Experiment 1). However, stretching impaired detection when the original and distorted faces were matched for their surface area (Experiment 2), and this was found with both vertically and horizontally stretched faces (Experiment 3). This effect was evident in accuracy, response times, and also observers' eye movements to faces. These findings demonstrate that height-to-width ratios are an important component of the cognitive template for face detection. The results also highlight important differences between face detection and face recognition. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Greater sensitivity of the cortical face processing system to perceptually-equated face detection

    PubMed Central

    Maher, S.; Ekstrom, T.; Tong, Y.; Nickerson, L.D.; Frederick, B.; Chen, Y.

    2015-01-01

    Face detection, the perceptual capacity to identify a visual stimulus as a face before probing deeper into specific attributes (such as its identity or emotion), is essential for social functioning. Despite the importance of this functional capacity, face detection and its underlying brain mechanisms are not well understood. This study evaluated the role that the cortical face processing system, which has been identified largely through studying other aspects of face perception, plays in face detection. Specifically, we used functional magnetic resonance imaging (fMRI) to examine the activations of the fusiform face area (FFA), occipital face area (OFA) and superior temporal sulcus (STS) when face detection was isolated from other aspects of face perception and when face detection was perceptually equated across individual human participants (n=20). During face detection, FFA and OFA were significantly activated, even for stimuli presented at perceptual-threshold levels, whereas STS was not. During tree detection, however, FFA and OFA were responsive only for highly salient (i.e., high contrast) stimuli. Moreover, activation of FFA during face detection predicted a significant portion of the perceptual performance levels that were determined psychophysically for each participant. This pattern of results indicates that FFA and OFA have a greater sensitivity to face detection signals and selectively support the initial process of face vs. non-face object perception. PMID:26592952

  12. A Fuzzy Approach For Facial Emotion Recognition

    NASA Astrophysics Data System (ADS)

    Gîlcă, Gheorghe; Bîzdoacă, Nicu-George

    2015-09-01

    This article deals with an emotion recognition system based on fuzzy sets. Human faces are detected in images with the Viola-Jones algorithm, and the Camshift algorithm is used to track them in video sequences. The detected faces are passed to the decisional fuzzy system, which is based on fuzzification of measurements of three facial features: eyebrow, eyelid, and mouth. The system can easily determine the emotional state of a person.
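
    For reference, the Viola-Jones detector cited throughout these records rests on Haar-like features evaluated in constant time from an integral image. A minimal pure-Python sketch of that core mechanism (not OpenCV's optimized implementation):

```python
def integral_image(img):
    """Summed-area table with a zero border: ii[y][x] holds the sum of
    img[0..y-1][0..x-1], so any rectangle sum needs only 4 lookups."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w x h rectangle with top-left corner (x, y)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect(ii, x, y, w, h):
    """A two-rectangle (left minus right) Haar-like feature: the kind of
    weak-classifier input that Viola-Jones boosts into a cascade."""
    return rect_sum(ii, x, y, w, h) - rect_sum(ii, x + w, y, w, h)
```

    A boosted cascade evaluates thousands of such features per window, rejecting most non-face windows after only a few of them.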

  13. Novel face-detection method under various environments

    NASA Astrophysics Data System (ADS)

    Jing, Min-Quan; Chen, Ling-Hwei

    2009-06-01

    We propose a method to detect a face with different poses under various environments. On the basis of skin color information, skin regions are first extracted from an input image. Next, the shoulder part is cut out by using shape information and the head part is then identified as a face candidate. For a face candidate, a set of geometric features is applied to determine if it is a profile face. If not, then a set of eyelike rectangles extracted from the face candidate and the lighting distribution are used to determine if the face candidate is a nonprofile face. Experimental results show that the proposed method is robust under a wide range of lighting conditions, different poses, and races. The detection rate for the HHI face database is 93.68%. For the Champion face database, the detection rate is 95.15%.

  14. The Face-to-Face Light Detection Paradigm: A New Methodology for Investigating Visuospatial Attention Across Different Face Regions in Live Face-to-Face Communication Settings

    PubMed Central

    Thompson, Laura A.; Malloy, Daniel M.; Cone, John M.; Hendrickson, David L.

    2009-01-01

    We introduce a novel paradigm for studying the cognitive processes used by listeners within interactive settings. This paradigm places the talker and the listener in the same physical space, creating opportunities for investigations of attention and comprehension processes taking place during interactive discourse situations. An experiment was conducted to compare results from previous research using videotaped stimuli to those obtained within the live face-to-face task paradigm. A headworn apparatus is used to briefly display LEDs on the talker’s face in four locations as the talker communicates with the participant. In addition to the primary task of comprehending speeches, participants make a secondary task light detection response. In the present experiment, the talker gave non-emotionally-expressive speeches that were used in past research with videotaped stimuli. Signal detection analysis was employed to determine which areas of the face received the greatest focus of attention. Results replicate previous findings using videotaped methods. PMID:21113354

  15. The Face-to-Face Light Detection Paradigm: A New Methodology for Investigating Visuospatial Attention Across Different Face Regions in Live Face-to-Face Communication Settings.

    PubMed

    Thompson, Laura A; Malloy, Daniel M; Cone, John M; Hendrickson, David L

    2010-01-01

    We introduce a novel paradigm for studying the cognitive processes used by listeners within interactive settings. This paradigm places the talker and the listener in the same physical space, creating opportunities for investigations of attention and comprehension processes taking place during interactive discourse situations. An experiment was conducted to compare results from previous research using videotaped stimuli to those obtained within the live face-to-face task paradigm. A headworn apparatus is used to briefly display LEDs on the talker's face in four locations as the talker communicates with the participant. In addition to the primary task of comprehending speeches, participants make a secondary task light detection response. In the present experiment, the talker gave non-emotionally-expressive speeches that were used in past research with videotaped stimuli. Signal detection analysis was employed to determine which areas of the face received the greatest focus of attention. Results replicate previous findings using videotaped methods.

  16. A Method of Face Detection with Bayesian Probability

    NASA Astrophysics Data System (ADS)

    Sarker, Goutam

    2010-10-01

    The objective of face detection is to identify all images which contain a face, irrespective of its orientation, illumination conditions, etc. This is a hard problem because faces are highly variable in size, shape, lighting conditions, and so on. Many methods have been designed and developed to detect faces in a single image. The present paper is based on an `Appearance Based Method', which relies on learning the facial and non-facial features from image examples. This, in turn, is based on statistical analysis of examples and counterexamples of facial images and employs a Bayesian conditional classification rule to estimate the probability that a face (or non-face) is present within an image frame. The detection rate of the present system is very high, and the numbers of false positive and false negative detections are substantially low.

  17. Face liveness detection using shearlet-based feature descriptors

    NASA Astrophysics Data System (ADS)

    Feng, Litong; Po, Lai-Man; Li, Yuming; Yuan, Fang

    2016-07-01

    Face recognition is a widely used biometric technology due to its convenience but it is vulnerable to spoofing attacks made by nonreal faces such as photographs or videos of valid users. The antispoof problem must be well resolved before widely applying face recognition in our daily life. Face liveness detection is a core technology to make sure that the input face is a live person. However, this is still very challenging using conventional liveness detection approaches of texture analysis and motion detection. The aim of this paper is to propose a feature descriptor and an efficient framework that can be used to effectively deal with the face liveness detection problem. In this framework, new feature descriptors are defined using a multiscale directional transform (shearlet transform). Then, stacked autoencoders and a softmax classifier are concatenated to detect face liveness. We evaluated this approach using the CASIA Face antispoofing database and replay-attack database. The experimental results show that our approach performs better than the state-of-the-art techniques following the provided protocols of these databases, and it is possible to significantly enhance the security of the face recognition biometric system. In addition, the experimental results also demonstrate that this framework can be easily extended to classify different spoofing attacks.

  18. Live face detection based on the analysis of Fourier spectra

    NASA Astrophysics Data System (ADS)

    Li, Jiangwei; Wang, Yunhong; Tan, Tieniu; Jain, Anil K.

    2004-08-01

    Biometrics is a rapidly developing technology that identifies a person based on his or her physiological or behavioral characteristics. To ensure the correctness of authentication, a biometric system must be able to detect and reject the use of a copy of a biometric trait in place of the live trait. This function is usually termed "liveness detection". This paper describes a new method for live face detection. Using structure and movement information of a live face, an effective live face detection algorithm is presented. Compared to existing approaches, which concentrate on the measurement of 3D depth information, this method is based on the analysis of Fourier spectra of a single face image or face image sequences. Experimental results show that the proposed method has an encouraging performance.
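
    The underlying cue is that a flat photograph of a face tends to carry less high-frequency spectral energy than a live, three-dimensional face. A toy one-dimensional sketch of such a frequency descriptor follows; the paper operates on 2-D face images, and the function names and cutoff here are illustrative, not the paper's definitions:

```python
import cmath

def dft(signal):
    """Naive discrete Fourier transform of a real-valued sequence."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

def high_freq_ratio(signal, cutoff):
    """Fraction of spectral energy at or above frequency bin `cutoff`
    (DC excluded): an illustrative stand-in for a liveness descriptor
    that separates sharp live-face signals from blurry photo copies."""
    energies = [abs(c) ** 2 for c in dft(signal)]
    half = energies[1:len(signal) // 2 + 1]  # positive frequencies only
    total = sum(half)
    return sum(half[cutoff - 1:]) / total if total else 0.0
```

    A rapidly alternating signal concentrates its energy in high bins, while a slowly varying one does not, which is the separation the descriptor exploits.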

  19. Face liveness detection for face recognition based on cardiac features of skin color image

    NASA Astrophysics Data System (ADS)

    Suh, Kun Ha; Lee, Eui Chul

    2016-07-01

    With the growth of biometric technology, spoofing attacks have emerged as a threat to system security. The main spoofing scenarios in face recognition systems include the printing attack, replay attack, and 3D mask attack. To prevent such attacks, techniques that evaluate the liveness of the biometric data can be considered as a solution. In this paper, a novel face liveness detection method based on a cardiac signal extracted from the face is presented. The key point of the proposed method is that the cardiac characteristic is detected in live faces but not in non-live faces. Experimental results showed that the proposed method can be an effective way to detect printing attacks or 3D mask attacks.

  20. Efficient live face detection to counter spoof attack in face recognition systems

    NASA Astrophysics Data System (ADS)

    Biswas, Bikram Kumar; Alam, Mohammad S.

    2015-03-01

    Face recognition is a critical tool used in almost all major biometrics-based security systems. But recognition, authentication, and liveness detection of an actual user's face are a major challenge, because an imposter or a non-live face of the actual user can be used to spoof the security system. In this research, a robust technique is proposed that detects the liveness of faces in order to counter spoof attacks. The proposed technique uses a three-dimensional (3D) fast Fourier transform to compare the spectral energies of a live face and a fake face in a mathematically selective manner. The mathematical model involves evaluating the energies of selected high-frequency bands of the average power spectra of both live and non-live faces. It also carries out proper recognition and authentication of the actual user's face using the fringe-adjusted joint transform correlation technique, which has been found to yield the highest correlation output for a match. Experimental tests show that the proposed technique yields excellent results for identifying live faces.

  1. Adapting Local Features for Face Detection in Thermal Image.

    PubMed

    Ma, Chao; Trung, Ngo Thanh; Uchiyama, Hideaki; Nagahara, Hajime; Shimada, Atsushi; Taniguchi, Rin-Ichiro

    2017-11-27

    A thermal camera captures the temperature distribution of a scene as a thermal image. In thermal images, the facial appearances of different people under different lighting conditions are similar, because facial temperature distribution is generally constant and is not affected by lighting conditions. This similarity in facial appearance is advantageous for face detection. To detect faces in thermal images, cascade classifiers with Haar-like features are generally used. However, there are few studies exploring local features for face detection in thermal images. In this paper, we introduce two approaches relying on local features for face detection in thermal images. First, we create new feature types by extending Multi-Block LBP. We consider a margin around the reference and the generally constant distribution of facial temperature. In this way, we make the features more robust to image noise and more effective for face detection in thermal images. Second, we propose an AdaBoost-based training method to obtain cascade classifiers with multiple types of local features. These feature types have different advantages, and combining them enhances the descriptive power of the local features. We conducted a hold-out validation experiment and a field experiment. In the hold-out validation experiment, we captured a dataset from 20 participants, comprising 14 males and 6 females. For each participant, we captured 420 images with 10 variations in camera distance, 21 poses, and 2 appearances (with/without glasses). We compared the performance of cascade classifiers trained with different sets of features. The results showed that the proposed approaches effectively improve face detection performance in thermal images. In the field experiment, we compared face detection performance in realistic scenes using thermal and RGB images and discussed the results.
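
    As a sketch of the Multi-Block LBP idea with a noise margin, the following pure-Python code computes an MB-LBP code over a 3x3 grid of blocks, setting a neighbour bit only when its mean exceeds the centre mean by more than the margin. This illustrates the general mechanism; the exact feature definition in the paper may differ:

```python
def block_mean(img, x, y, s):
    """Mean intensity of the s x s block with top-left corner (x, y)."""
    total = sum(img[y + dy][x + dx] for dy in range(s) for dx in range(s))
    return total / (s * s)

def mb_lbp_code(img, x, y, s, margin=0.0):
    """Multi-Block LBP code over a 3x3 grid of s x s blocks whose
    top-left block starts at (x, y). A neighbour contributes a 1-bit
    only if its mean exceeds the centre mean by more than `margin`,
    making the code less sensitive to sensor noise."""
    center = block_mean(img, x + s, y + s, s)
    # Neighbour blocks, clockwise from the top-left block.
    offsets = [(0, 0), (s, 0), (2 * s, 0), (2 * s, s),
               (2 * s, 2 * s), (s, 2 * s), (0, 2 * s), (0, s)]
    code = 0
    for bit, (dx, dy) in enumerate(offsets):
        if block_mean(img, x + dx, y + dy, s) > center + margin:
            code |= 1 << bit
    return code
```

    With margin = 0 this reduces to standard MB-LBP; a positive margin suppresses bits caused by small temperature fluctuations.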

  2. Automated face detection for occurrence and occupancy estimation in chimpanzees.

    PubMed

    Crunchant, Anne-Sophie; Egerer, Monika; Loos, Alexander; Burghardt, Tilo; Zuberbühler, Klaus; Corogenes, Katherine; Leinert, Vera; Kulik, Lars; Kühl, Hjalmar S

    2017-03-01

    Surveying endangered species is necessary to evaluate conservation effectiveness. Camera trapping and biometric computer vision are recent technological advances that have impacted the methods applicable to field surveys, and these methods have gained significant momentum over the last decade. Yet most researchers inspect footage manually, and few studies have used automated semantic processing of video trap data from the field. The particular aim of this study is to evaluate methods that incorporate automated face detection technology as an aid to estimating site use of two chimpanzee communities based on camera trapping. As a comparative baseline we employ traditional manual inspection of footage. Our analysis focuses specifically on the basic parameter of occurrence, for which we assess the performance and practical value of chimpanzee face detection software. We found that semi-automated data processing required only 2-4% of the time compared to purely manual analysis. This is a non-negligible increase in efficiency that is critical when assessing the feasibility of camera trap occupancy surveys. Our evaluations suggest that our methodology estimates the proportion of sites used relatively reliably. Chimpanzees are mostly detected when they are present and when videos are filmed in high resolution: the highest recall rate was 77%, for a false alarm rate of 2.8%, for videos containing only frontal chimpanzee face views. Certainly, our study is only a first step toward transferring face detection software from the lab into field application. Our results are promising and indicate that the current limitation of detecting chimpanzees in camera trap footage due to a lack of suitable face views can easily be overcome at the level of field data collection, that is, by the combined placement of multiple high-resolution cameras facing in opposite directions. This will make it possible to routinely conduct chimpanzee occupancy surveys based on camera trapping and semi

  3. Applying face identification to detecting hijacking of airplane

    NASA Astrophysics Data System (ADS)

    Luo, Xuanwen; Cheng, Qiang

    2004-09-01

    The hijacking of airplanes that were crashed into the World Trade Center was a disaster for civilization, and preventing hijackings is critical to homeland security. Reporting a hijacking in time, limiting the hijackers' ability to operate the plane, and landing the plane at the nearest airport could be an efficient way to avert catastrophe. Image processing techniques for human face recognition or identification could be used for this task. Before the plane takes off, the face images of the pilots are input into a face identification system installed in the airplane. A camera in front of the pilot's seat keeps capturing the pilot's face during the flight and comparing it with the pre-input pilot face images. If a different face is detected, a warning signal is automatically sent to the ground. At the same time, the automatic cruise system is started or the plane is controlled from the ground, so the hijackers have no control over the plane. The plane is then landed at the nearest appropriate airport under ground or cruise-system control. This technique could also be used in the automobile industry as an image-based key to deter car theft.

  4. Colour detection thresholds in faces and colour patches.

    PubMed

    Tan, Kok Wei; Stephen, Ian D

    2013-01-01

    Human facial skin colour reflects individuals' underlying health (Stephen et al 2011 Evolution & Human Behavior 32 216-227), and enhanced facial skin CIELab b* (yellowness), a* (redness), and L* (lightness) are perceived as healthy (also Stephen et al 2009a International Journal of Primatology 30 845-857). Here, we examine Malaysian Chinese participants' detection thresholds for CIELab L* (lightness), a* (redness), and b* (yellowness) colour changes in Asian, African, and Caucasian faces and skin-coloured patches. Twelve face photos and three skin-coloured patches were transformed to produce four pairs of images of each individual face and colour patch with different amounts of red, yellow, or lightness, from very subtle (deltaE = 1.2) to quite large differences (deltaE = 9.6). Participants were asked to decide which of sequentially displayed, paired same-face images or colour patches were lighter, redder, or yellower. Changes in facial redness, followed by changes in yellowness, were more easily discriminated than changes in luminance. However, visual sensitivity was not greater for redness and yellowness in non-face stimuli, suggesting that red facial skin colour is especially salient. Participants were also significantly better at recognizing colour differences in own-race (Asian) and Caucasian faces than in African faces, suggesting the existence of a cross-race effect in discriminating facial colours. Humans' colour vision may have been selected for skin colour signalling (Changizi et al 2006 Biology Letters 2 217-221), enabling individuals to perceive subtle changes in skin colour that reflect health and emotional status.
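
    For reference, the deltaE values quoted above (1.2 to 9.6) are CIELab colour differences. Assuming the simple CIE76 definition (the study may have used a later formula such as CIE94 or CIEDE2000), deltaE is just the Euclidean distance between two Lab triplets:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance between two
    CIELab triplets (L*, a*, b*)."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))
```

    A deltaE around 1 is commonly cited as near the threshold of a just-noticeable difference, which matches the study's "very subtle" endpoint of 1.2.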

  5. Gear Tooth Wear Detection Algorithm

    NASA Technical Reports Server (NTRS)

    Delgado, Irebert R.

    2015-01-01

    Vibration-based condition indicators continue to be developed for health and usage monitoring of rotorcraft gearboxes. Testing performed at NASA Glenn Research Center has shown correlations between specific condition indicators and specific types of gear wear. To speed up the detection and analysis of gear teeth, an image detection program based on the Viola-Jones algorithm was trained to automatically detect spiral bevel gear wear pitting. The detector was tested using a training set of gear wear pictures and a blind set of gear wear pictures. The detector accuracy was 75 percent on the training set and 15 percent on the blind set. Further improvements to the detector's accuracy are required, but preliminary results have shown its ability to automatically detect gear tooth wear. The trained detector would be used to quickly evaluate a set of gear or pinion pictures for pits, spalls, or abrasive wear. The results could then be correlated with vibration or oil debris data. In general, the program could be retrained to detect features of interest from pictures of a component taken over a period of time.

  6. Global Binary Continuity for Color Face Detection With Complex Background

    NASA Astrophysics Data System (ADS)

    Belavadi, Bhaskar; Mahendra Prashanth, K. V.; Joshi, Sujay S.; Suprathik, N.

    2017-08-01

    In this paper, we propose a method to detect human faces in color images with complex backgrounds. The proposed algorithm makes use of two color space models, HSV and YCgCr. The color-segmented image is filled uniformly with a single color (binary) and all unwanted discontinuous lines are then removed to obtain the final image. Experimental results on the Caltech database show that the proposed model achieves far better segmentation for faces of varying orientation, skin color, and background environment.

  7. Correlation based efficient face recognition and color change detection

    NASA Astrophysics Data System (ADS)

    Elbouz, M.; Alfalou, A.; Brosseau, C.; Alam, M. S.; Qasmi, S.

    2013-01-01

    Identifying the human face via correlation is a topic attracting widespread interest. At the heart of this technique lies the comparison of an unknown target image to a known reference database of images. However, the color information in the target image remains notoriously difficult to interpret. In this paper, we report a new technique which: (i) is robust against illumination change, (ii) offers discrimination ability to detect color change between faces having similar shape, and (iii) is specifically designed to detect red colored stains (i.e. facial bleeding). We adopt the Vanderlugt correlator (VLC) architecture with a segmented phase filter and we decompose the color target image using normalized red, green, and blue (RGB), and hue, saturation, and value (HSV) scales. We propose a new strategy to effectively utilize color information in signatures for further increasing the discrimination ability. The proposed algorithm has been found to be very efficient for discriminating face subjects with different skin colors, and those having color stains in different areas of the facial image.

  8. The Face in the Crowd Effect Unconfounded: Happy Faces, Not Angry Faces, Are More Efficiently Detected in Single- and Multiple-Target Visual Search Tasks

    ERIC Educational Resources Information Center

    Becker, D. Vaughn; Anderson, Uriah S.; Mortensen, Chad R.; Neufeld, Samantha L.; Neel, Rebecca

    2011-01-01

    Is it easier to detect angry or happy facial expressions in crowds of faces? The present studies used several variations of the visual search task to assess whether people selectively attend to expressive faces. Contrary to widely cited studies (e.g., Ohman, Lundqvist, & Esteves, 2001) that suggest angry faces "pop out" of crowds, our review of…

  9. Real-time driver fatigue detection based on face alignment

    NASA Astrophysics Data System (ADS)

    Tao, Huanhuan; Zhang, Guiying; Zhao, Yong; Zhou, Yi

    2017-07-01

The performance and robustness of fatigue detection decrease considerably if the driver wears glasses. To address this issue, this paper proposes a practical driver fatigue detection method based on the Face Alignment at 3000 FPS algorithm. Firstly, the eye regions of the driver are localized by exploiting six landmarks surrounding each eye. Secondly, HOG features of the extracted eye regions are computed and fed into an SVM classifier to recognize the eye state. Finally, the value of PERCLOS is calculated to determine whether the driver is drowsy. An alarm is generated if the eyes remain closed for a specified period of time. Accuracy and real-time performance on test videos with different drivers demonstrate that the proposed algorithm is robust and achieves better accuracy for driver fatigue detection than some previous methods.
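PERCLOS is the proportion of recent frames in which the eyes are closed; the abstract's final step can be sketched as a rolling ratio over per-frame eye states. The window length and the 0.4 alarm threshold below are illustrative assumptions, not values from the paper:

```python
from collections import deque

class PerclosMonitor:
    """Rolling PERCLOS: fraction of recent frames in which the eyes are closed."""
    def __init__(self, window=90, threshold=0.4):
        # window length (frames) and alarm threshold are illustrative values
        self.frames = deque(maxlen=window)
        self.threshold = threshold

    def update(self, eye_closed):
        """Record one frame's eye state; return True when PERCLOS hits the threshold."""
        self.frames.append(1 if eye_closed else 0)
        perclos = sum(self.frames) / len(self.frames)
        return perclos >= self.threshold

monitor = PerclosMonitor(window=10, threshold=0.4)
states = [False] * 6 + [True] * 4      # eyes closed in the last 4 of 10 frames
alarms = [monitor.update(s) for s in states]
print(alarms[-1])  # True: 4/10 = 0.4 reaches the alarm threshold
```

In a real pipeline the `eye_closed` flag would come from the SVM eye-state classifier run on each frame.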

  10. Multivoxel patterns in face-sensitive temporal regions reveal an encoding schema based on detecting life in a face.

    PubMed

    Looser, Christine E; Guntupalli, Jyothi S; Wheatley, Thalia

    2013-10-01

    More than a decade of research has demonstrated that faces evoke prioritized processing in a 'core face network' of three brain regions. However, whether these regions prioritize the detection of global facial form (shared by humans and mannequins) or the detection of life in a face has remained unclear. Here, we dissociate form-based and animacy-based encoding of faces by using animate and inanimate faces with human form (humans, mannequins) and dog form (real dogs, toy dogs). We used multivariate pattern analysis of BOLD responses to uncover the representational similarity space for each area in the core face network. Here, we show that only responses in the inferior occipital gyrus are organized by global facial form alone (human vs dog) while animacy becomes an additional organizational priority in later face-processing regions: the lateral fusiform gyri (latFG) and right superior temporal sulcus. Additionally, patterns evoked by human faces were maximally distinct from all other face categories in the latFG and parts of the extended face perception system. These results suggest that once a face configuration is perceived, faces are further scrutinized for whether the face is alive and worthy of social cognitive resources.

  11. Joint Transform Correlation for face tracking: elderly fall detection application

    NASA Astrophysics Data System (ADS)

    Katz, Philippe; Aron, Michael; Alfalou, Ayman

    2013-03-01

In this paper, an iterative tracking algorithm based on a non-linear JTC (Joint Transform Correlator) architecture and enhanced by a digital image processing method is proposed and validated. This algorithm is based on the computation of a correlation plane in which the reference image is updated at each frame. For that purpose, we use the JTC technique in real time to track a patient (target image) in a room fitted with a video camera. The correlation plane is used to localize the target image in the current video frame (frame i); the reference image to be exploited in the next frame (frame i+1) is then updated according to the previous one (frame i). To validate our algorithm, our work is divided into two parts: (i) a large study based on different sequences with several situations and different JTC parameters is carried out to quantify their effects on tracking performance (decimation, non-linearity coefficient, size of the correlation plane, size of the region of interest...); (ii) the tracking algorithm is integrated into an application for elderly fall detection. The first reference image is a face detected by means of Haar descriptors, then localized in each new video image by our tracking method. To avoid a bad update of the reference frame, a method based on a comparison of image intensity histograms is proposed and integrated in our algorithm; this step ensures robust tracking of the reference frame. This article focuses on the optimisation and evaluation of the face tracking step. A supplementary fall detection step, based on vertical acceleration and position, will be added and studied in further work.

  12. Impaired face detection may explain some but not all cases of developmental prosopagnosia.

    PubMed

    Dalrymple, Kirsten A; Duchaine, Brad

    2016-05-01

    Developmental prosopagnosia (DP) is defined by severe face recognition difficulties due to the failure to develop the visual mechanisms for processing faces. The two-process theory of face recognition (Morton & Johnson, 1991) implies that DP could result from a failure of an innate face detection system; this failure could prevent an individual from then tuning higher-level processes for face recognition (Johnson, 2005). Work with adults indicates that some individuals with DP have normal face detection whereas others are impaired. However, face detection has not been addressed in children with DP, even though their results may be especially informative because they have had less opportunity to develop strategies that could mask detection deficits. We tested the face detection abilities of seven children with DP. Four were impaired at face detection to some degree (i.e. abnormally slow, or failed to find faces) while the remaining three children had normal face detection. Hence, the cases with impaired detection are consistent with the two-process account suggesting that DP could result from a failure of face detection. However, the cases with normal detection implicate a higher-level origin. The dissociation between normal face detection and impaired identity perception also indicates that these abilities depend on different neurocognitive processes. © 2015 John Wiley & Sons Ltd.

  13. Real-time detection with AdaBoost-svm combination in various face orientation

    NASA Astrophysics Data System (ADS)

    Fhonna, R. P.; Nasution, M. K. M.; Tulus

    2018-03-01

Much research has used the AdaBoost-SVM algorithm for face detection. However, to our knowledge, no research so far has performed face detection on real-time data in various orientations using the combination of AdaBoost and Support Vector Machine (SVM). The complex and diverse variations of faces, real-time data in various orientations, and a very complex application all slow down the performance of a face detection system; this is the challenge addressed in this research. Five face orientations were tested in the detection system: 90°, 45°, 0°, -45°, and -90°. This combined method is expected to be an effective and efficient solution across face orientations. The results showed that the highest average detection rate occurs for faces oriented at 0° and the lowest for faces oriented at 90°.

  14. Face detection assisted auto exposure: supporting evidence from a psychophysical study

    NASA Astrophysics Data System (ADS)

    Jin, Elaine W.; Lin, Sheng; Dharumalingam, Dhandapani

    2010-01-01

    Face detection has been implemented in many digital still cameras and camera phones with the promise of enhancing existing camera functions (e.g. auto exposure) and adding new features to cameras (e.g. blink detection). In this study we examined the use of face detection algorithms in assisting auto exposure (AE). The set of 706 images, used in this study, was captured using Canon Digital Single Lens Reflex cameras and subsequently processed with an image processing pipeline. A psychophysical study was performed to obtain optimal exposure along with the upper and lower bounds of exposure for all 706 images. Three methods of marking faces were utilized: manual marking, face detection algorithm A (FD-A), and face detection algorithm B (FD-B). The manual marking method found 751 faces in 426 images, which served as the ground-truth for face regions of interest. The remaining images do not have any faces or the faces are too small to be considered detectable. The two face detection algorithms are different in resource requirements and in performance. FD-A uses less memory and gate counts compared to FD-B, but FD-B detects more faces and has less false positives. A face detection assisted auto exposure algorithm was developed and tested against the evaluation results from the psychophysical study. The AE test results showed noticeable improvement when faces were detected and used in auto exposure. However, the presence of false positives would negatively impact the added benefit.

  15. Traffic Sign Detection System for Locating Road Intersections and Roundabouts: The Chilean Case.

    PubMed

    Villalón-Sepúlveda, Gabriel; Torres-Torriti, Miguel; Flores-Calero, Marco

    2017-05-25

This paper presents a traffic sign detection method for signs close to road intersections and roundabouts, such as stop and yield (give way) signs. The proposed method relies on statistical templates built using color information for both segmentation and classification. The segmentation method uses the RGB-normalized (ErEgEb) color space for ROIs (Regions of Interest) generation based on a chromaticity filter, where templates at 10 scales are applied to the entire image. Templates consider the mean and standard deviation of normalized color of the traffic signs to build thresholding intervals where the expected color should lie for a given sign. The classification stage employs the information of the statistical templates over YCbCr and ErEgEb color spaces, for which the background has been previously removed by using a probability function that models the probability that the pixel corresponds to a sign given its chromaticity values. This work includes an analysis of the detection rate as a function of the distance between the vehicle and the sign. Such information is useful to validate the robustness of the approach and is often not included in the existing literature. The detection rates, as a function of distance, are compared to those of the well-known Viola-Jones method. The results show that for distances less than 48 m, the proposed method achieves a detection rate of 87.5% and 95.4% for yield and stop signs, respectively. For distances less than 30 m, the detection rate is 100% for both signs. The Viola-Jones approach has detection rates below 20% for distances between 30 and 48 m, and barely improves in the 20-30 m range with detection rates of up to 60%. Thus, the proposed method provides a robust alternative for intersection detection that relies on statistical color-based templates instead of shape information.
The experiments employed videos of traffic signs taken in several streets of Santiago, Chile, using a research platform implemented at
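The chromaticity-interval test at the core of the segmentation stage can be sketched as follows. The normalized-RGB statistics (mean, std) below are toy values standing in for the paper's trained per-sign templates:

```python
import numpy as np

def chromaticity(rgb):
    """Normalized RGB (Er, Eg, Eb): each channel divided by R+G+B."""
    s = rgb.sum(axis=-1, keepdims=True).astype(np.float64)
    s[s == 0] = 1.0  # guard against division by zero on black pixels
    return rgb / s

def template_mask(rgb, mean, std, k=2.0):
    """Keep pixels whose chromaticity falls inside mean +/- k*std per channel."""
    c = chromaticity(rgb)
    lo, hi = mean - k * std, mean + k * std
    return np.all((c >= lo) & (c <= hi), axis=-1)

# Toy "red sign" chromaticity statistics (assumed, not trained values)
mean = np.array([0.6, 0.2, 0.2])
std = np.array([0.05, 0.05, 0.05])
img = np.array([[[180, 60, 60], [60, 180, 60]]], dtype=np.uint8)
mask = template_mask(img, mean, std)
print(mask)  # [[ True False]]
```

Connected regions surviving this mask would become the ROIs passed on to the classification stage.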

  16. Adaboost multi-view face detection based on YCgCr skin color model

    NASA Astrophysics Data System (ADS)

    Lan, Qi; Xu, Zhiyong

    2016-09-01

The traditional AdaBoost face detection algorithm uses Haar-like features to train face classifiers, whose detection error rate is low in face regions. Under complex backgrounds, however, the classifiers easily misdetect background regions whose gray-level distribution resembles a face, so the error rate of the traditional AdaBoost algorithm is high. As one of the most important features of a face, skin clusters well in the YCgCr color space, so non-face areas can be quickly excluded with a skin color model. Combining the advantages of the AdaBoost algorithm and skin color detection, this paper therefore proposes an AdaBoost face detection method based on the YCgCr skin color model. Experiments show that, compared with the traditional algorithm, the proposed method significantly improves detection accuracy and reduces errors.

  17. Detecting 'infant-directedness' in face and voice.

    PubMed

    Kim, Hojin I; Johnson, Scott P

    2014-07-01

    Five- and 3-month-old infants' perception of infant-directed (ID) faces and the role of speech in perceiving faces were examined. Infants' eye movements were recorded as they viewed a series of two side-by-side talking faces, one infant-directed and one adult-directed (AD), while listening to ID speech, AD speech, or in silence. Infants showed consistently greater dwell time on ID faces vs. AD faces, and this ID face preference was consistent across all three sound conditions. ID speech resulted in higher looking overall, but it did not increase looking at the ID face per se. Together, these findings demonstrate that infants' preferences for ID speech extend to ID faces. © 2014 John Wiley & Sons Ltd.

  18. The Effect of Early Visual Deprivation on the Development of Face Detection

    ERIC Educational Resources Information Center

    Mondloch, Catherine J.; Segalowitz, Sidney J.; Lewis, Terri L.; Dywan, Jane; Le Grand, Richard; Maurer, Daphne

    2013-01-01

    The expertise of adults in face perception is facilitated by their ability to rapidly detect that a stimulus is a face. In two experiments, we examined the role of early visual input in the development of face detection by testing patients who had been treated as infants for bilateral congenital cataract. Experiment 1 indicated that, at age 9 to…

  19. Detecting "Infant-Directedness" in Face and Voice

    ERIC Educational Resources Information Center

    Kim, Hojin I.; Johnson, Scott P.

    2014-01-01

    Five- and 3-month-old infants' perception of infant-directed (ID) faces and the role of speech in perceiving faces were examined. Infants' eye movements were recorded as they viewed a series of two side-by-side talking faces, one infant-directed and one adult-directed (AD), while listening to ID speech, AD speech, or in silence. Infants…

  20. Searching for differences in race: is there evidence for preferential detection of other-race faces?

    PubMed

    Lipp, Ottmar V; Terry, Deborah J; Smith, Joanne R; Tellegen, Cassandra L; Kuebbeler, Jennifer; Newey, Mareka

    2009-06-01

    Previous research has suggested that like animal and social fear-relevant stimuli, other-race faces (African American) are detected preferentially in visual search. Three experiments using Chinese or Indonesian faces as other-race faces yielded the opposite pattern of results: faster detection of same-race faces among other-race faces. This apparently inconsistent pattern of results was resolved by showing that Asian and African American faces are detected preferentially in tasks that have small stimulus sets and employ fixed target searches. Asian and African American other-race faces are found slower among Caucasian face backgrounds if larger stimulus sets are used in tasks with a variable mapping of stimulus to background or target. Thus, preferential detection of other-race faces was not found under task conditions in which preferential detection of animal and social fear-relevant stimuli is evident. Although consistent with the view that same-race faces are processed in more detail than other-race faces, the current findings suggest that other-race faces do not draw attention preferentially.

  1. Robust vehicle detection under various environmental conditions using an infrared thermal camera and its application to road traffic flow monitoring.

    PubMed

    Iwasaki, Yoichiro; Misumi, Masato; Nakamiya, Toshiyuki

    2013-06-17

We have already proposed a method for detecting vehicle positions and their movements (henceforth referred to as "our previous method") using thermal images taken with an infrared thermal camera. Our experiments have shown that our previous method detects vehicles robustly under four different environmental conditions which involve poor visibility conditions in snow and thick fog. Our previous method uses the windshield and its surroundings as the target of the Viola-Jones detector. Some experiments in winter show that the vehicle detection accuracy decreases because the temperatures of many windshields approximate those of the exterior of the windshields. In this paper, we propose a new vehicle detection method (henceforth referred to as "our new method"). Our new method detects vehicles based on tires' thermal energy reflection. We have done experiments using three series of thermal images for which the vehicle detection accuracies of our previous method are low. Our new method detects 1,417 vehicles (92.8%) out of 1,527 vehicles, and the number of false detections is 52 in total. Therefore, by combining our two methods, high vehicle detection accuracies are maintained under various environmental conditions. Finally, we apply the traffic information obtained by our two methods to traffic flow automatic monitoring, and show the effectiveness of our proposal.

  2. Robust Vehicle Detection under Various Environmental Conditions Using an Infrared Thermal Camera and Its Application to Road Traffic Flow Monitoring

    PubMed Central

    Iwasaki, Yoichiro; Misumi, Masato; Nakamiya, Toshiyuki

    2013-01-01

We have already proposed a method for detecting vehicle positions and their movements (henceforth referred to as “our previous method”) using thermal images taken with an infrared thermal camera. Our experiments have shown that our previous method detects vehicles robustly under four different environmental conditions which involve poor visibility conditions in snow and thick fog. Our previous method uses the windshield and its surroundings as the target of the Viola-Jones detector. Some experiments in winter show that the vehicle detection accuracy decreases because the temperatures of many windshields approximate those of the exterior of the windshields. In this paper, we propose a new vehicle detection method (henceforth referred to as “our new method”). Our new method detects vehicles based on tires' thermal energy reflection. We have done experiments using three series of thermal images for which the vehicle detection accuracies of our previous method are low. Our new method detects 1,417 vehicles (92.8%) out of 1,527 vehicles, and the number of false detections is 52 in total. Therefore, by combining our two methods, high vehicle detection accuracies are maintained under various environmental conditions. Finally, we apply the traffic information obtained by our two methods to traffic flow automatic monitoring, and show the effectiveness of our proposal. PMID:23774988

  3. Door Security using Face Detection and Raspberry Pi

    NASA Astrophysics Data System (ADS)

    Bhutra, Venkatesh; Kumar, Harshav; Jangid, Santosh; Solanki, L.

    2018-03-01

With the world moving towards advanced technologies, security forms a crucial part of daily life. Among the many techniques used for this purpose, face recognition stands as an effective means of authentication and security. This paper deals with the use of principal component analysis (PCA) for security. PCA is a statistical approach used to simplify a data set; the minimum Euclidean distance found via the PCA technique is used to recognize the face. A Raspberry Pi, a low-cost ARM-based computer on a small circuit board, controls the servo motor and other sensors. The servo motor is in turn attached to the door of the home and opens it when the face is recognized. The proposed work has been done using a self-made training database of students from B.K. Birla Institute of Engineering and Technology, Pilani, Rajasthan, India.
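The PCA-plus-minimum-Euclidean-distance recognition step can be sketched as below. The eigenface decomposition via SVD and the toy 64-pixel "faces" are illustrative assumptions, not the paper's training setup:

```python
import numpy as np

def pca_fit(X, k):
    """Fit eigenfaces: X is (n_samples, n_pixels); return mean and top-k components."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]          # rows of Vt are orthonormal principal directions

def pca_project(x, mu, comps):
    return comps @ (x - mu)    # coordinates of x in the eigenface subspace

def recognize(x, gallery_coords, mu, comps):
    """Index of the gallery face with minimum Euclidean distance to the probe."""
    q = pca_project(x, mu, comps)
    return int(np.argmin(np.linalg.norm(gallery_coords - q, axis=1)))

rng = np.random.default_rng(0)
faces = rng.normal(size=(5, 64))                 # five toy "face" vectors
mu, comps = pca_fit(faces, k=3)
gallery = np.stack([pca_project(f, mu, comps) for f in faces])
probe = faces[2] + 0.01 * rng.normal(size=64)    # noisy view of face 2
print(recognize(probe, gallery, mu, comps))      # 2
```

On the device, a match below some distance threshold would trigger the servo that opens the door.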

  4. Detection of Emotional Faces: Salient Physical Features Guide Effective Visual Search

    ERIC Educational Resources Information Center

    Calvo, Manuel G.; Nummenmaa, Lauri

    2008-01-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent,…

  5. Human face detection using motion and color information

    NASA Astrophysics Data System (ADS)

    Kim, Yang-Gyun; Bang, Man-Won; Park, Soon-Young; Choi, Kyoung-Ho; Hwang, Jeong-Hyun

    2008-02-01

    In this paper, we present a hardware implementation of a face detector for surveillance applications. To come up with a computationally cheap and fast algorithm with minimal memory requirement, motion and skin color information are fused successfully. More specifically, a newly appeared object is extracted first by comparing average Hue and Saturation values of background image and a current image. Then, the result of skin color filtering of the current image is combined with the result of a newly appeared object. Finally, labeling is performed to locate a true face region. The proposed system is implemented on Altera Cyclone2 using Quartus II 6.1 and ModelSim 6.1. For hardware description language (HDL), Verilog-HDL is used.
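A grayscale stand-in for the background-comparison step might look like the following block-wise sketch. The paper compares average Hue and Saturation against a background image; the single-channel comparison, block size, and threshold here are assumptions:

```python
import numpy as np

def new_object_mask(bg, cur, block=8, thresh=12.0):
    """Flag blocks whose mean intensity differs from the stored background
    by more than `thresh` (grayscale stand-in for the Hue/Saturation test)."""
    h, w = bg.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            sl = np.s_[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
            mask[by, bx] = abs(cur[sl].mean() - bg[sl].mean()) > thresh
    return mask

bg = np.zeros((16, 16))
cur = bg.copy()
cur[0:8, 0:8] = 100.0          # a new object appears in the top-left block
mask = new_object_mask(bg, cur)
print(mask)  # only the top-left block is flagged
```

The detector would then intersect such a new-object mask with the skin-color filter result and label the surviving region as the face, as the abstract describes.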

  6. Detecting Emotional Expression in Face-to-Face and Online Breast Cancer Support Groups

    ERIC Educational Resources Information Center

    Liess, Anna; Simon, Wendy; Yutsis, Maya; Owen, Jason E.; Piemme, Karen Altree; Golant, Mitch; Giese-Davis, Janine

    2008-01-01

    Accurately detecting emotional expression in women with primary breast cancer participating in support groups may be important for therapists and researchers. In 2 small studies (N = 20 and N = 16), the authors examined whether video coding, human text coding, and automated text analysis provided consistent estimates of the level of emotional…

  7. Detection of emotional faces: salient physical features guide effective visual search.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2008-08-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features--especially the smiling mouth--is responsible for facilitated initial orienting, which thus shortens detection. (PsycINFO Database Record (c) 2008 APA, all rights reserved).

  8. Dialog detection in narrative video by shot and face analysis

    NASA Astrophysics Data System (ADS)

    Kroon, B.; Nesvadba, J.; Hanjalic, A.

    2007-01-01

    The proliferation of captured personal and broadcast content in personal consumer archives necessitates comfortable access to stored audiovisual content. Intuitive retrieval and navigation solutions require however a semantic level that cannot be reached by generic multimedia content analysis alone. A fusion with film grammar rules can help to boost the reliability significantly. The current paper describes the fusion of low-level content analysis cues including face parameters and inter-shot similarities to segment commercial content into film grammar rule-based entities and subsequently classify those sequences into so-called shot reverse shots, i.e. dialog sequences. Moreover shot reverse shot specific mid-level cues are analyzed augmenting the shot reverse shot information with dialog specific descriptions.

  9. Right wing authoritarianism is associated with race bias in face detection

    PubMed Central

    Bret, Amélie; Beffara, Brice; McFadyen, Jessica; Mermillod, Martial

    2017-01-01

    Racial discrimination can be observed in a wide range of psychological processes, including even the earliest phases of face detection. It remains unclear, however, whether racially-biased low-level face processing is influenced by ideologies, such as right wing authoritarianism or social dominance orientation. In the current study, we hypothesized that socio-political ideologies such as these can substantially predict perceptive racial bias during early perception. To test this hypothesis, 67 participants detected faces within arrays of neutral objects. The faces were either Caucasian (in-group) or North African (out-group) and either had a neutral or angry expression. Results showed that participants with higher self-reported right-wing authoritarianism were more likely to show slower response times for detecting out- vs. in-groups faces. We interpreted our results according to the Dual Process Motivational Model and suggest that socio-political ideologies may foster early racial bias via attentional disengagement. PMID:28692705

  10. Directional templates for real-time detection of coronal axis rotated faces

    NASA Astrophysics Data System (ADS)

    Perez, Claudio A.; Estevez, Pablo A.; Garate, Patricio

    2004-10-01

Real-time face and iris detection in video images has gained renewed attention because of multiple possible applications in studying eye function, drowsiness detection, virtual keyboard interfaces, face recognition, video processing, and multimedia retrieval. In this paper, a study is presented on using directional templates to detect faces rotated about the coronal axis. The templates are built by extracting directional image information from the regions of the eyes, nose, and mouth. The face position is determined by computing a line integral over the face directional image using the templates; the line integral reaches a maximum when it coincides with the face position. An improvement in localization selectivity is shown by the increased value of the line integral computed with the directional template. Improvements in the line integral value across face sizes and rotation angles were also found. Based on these results, the new templates should improve selectivity and hence make it possible to restrict computation to fewer templates and a smaller search region during face and eye tracking. The proposed method is real-time and completely non-invasive, and was applied with no background limitation under normal indoor illumination conditions.
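A discrete stand-in for the line-integral scoring might look like this; the template points, directions, and the cosine agreement measure are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def template_score(direction_img, template_pts, template_dirs, pos):
    """Discrete stand-in for the line integral: sum of directional agreement
    (cosine of angle difference) between template and image along the path."""
    y0, x0 = pos
    return sum(np.cos(direction_img[y0 + dy, x0 + dx] - t)
               for (dy, dx), t in zip(template_pts, template_dirs))

direction_img = np.zeros((10, 10))
direction_img[5:7, 5:7] = 0.5          # a patch whose orientations match the template
template_pts = [(0, 0), (0, 1), (1, 0)]
template_dirs = [0.5, 0.5, 0.5]
best = max(((y, x) for y in range(9) for x in range(9)),
           key=lambda p: template_score(direction_img, template_pts, template_dirs, p))
print(best)  # (5, 5): the score peaks where the template aligns with the patch
```

In the full method, one such template per rotation angle would be evaluated, and the position/angle pair maximizing the integral would be taken as the face hypothesis.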

  11. Fast hierarchical knowledge-based approach for human face detection in color images

    NASA Astrophysics Data System (ADS)

    Jiang, Jun; Gong, Jie; Zhang, Guilin; Hu, Ruolan

    2001-09-01

This paper presents a fast hierarchical knowledge-based approach for automatically detecting multi-scale upright faces in still color images. The approach consists of three levels. At the highest level, skin-like regions are determined by a skin model based on the hue and saturation attributes of HSV color space, as well as the red and green attributes of normalized color space. In level 2, a new eye model is devised to select face candidates in the segmented skin-like regions. An important feature of the eye model is its independence of face scale, which makes it possible to find faces at different scales while scanning the image only once, greatly reducing the computation time of face detection. In level 3, a face mosaic image model, consistent with the physical structure of the human face and comprising edge and gray-level rules, is applied to judge whether faces are present in the candidate regions. Experimental results show that the approach is highly robust and fast, with broad application prospects in human-computer interaction, video telephony, and related areas.

  12. Internal representations for face detection: an application of noise-based image classification to BOLD responses.

    PubMed

    Nestor, Adrian; Vettel, Jean M; Tarr, Michael J

    2013-11-01

    What basic visual structures underlie human face detection and how can we extract such structures directly from the amplitude of neural responses elicited by face processing? Here, we address these issues by investigating an extension of noise-based image classification to BOLD responses recorded in high-level visual areas. First, we assess the applicability of this classification method to such data and, second, we explore its results in connection with the neural processing of faces. To this end, we construct luminance templates from white noise fields based on the response of face-selective areas in the human ventral cortex. Using behaviorally and neurally-derived classification images, our results reveal a family of simple but robust image structures subserving face representation and detection. Thus, we confirm the role played by classical face selective regions in face detection and we help clarify the representational basis of this perceptual function. From a theory standpoint, our findings support the idea of simple but highly diagnostic neurally-coded features for face detection. At the same time, from a methodological perspective, our work demonstrates the ability of noise-based image classification in conjunction with fMRI to help uncover the structure of high-level perceptual representations. Copyright © 2012 Wiley Periodicals, Inc.

  13. Multiview face detection based on position estimation over multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    Huang, Ching-chun; Chou, Jay; Shiu, Jia-Hou; Wang, Sheng-Jyh

    2012-02-01

In this paper, we propose a multi-view face detection system that locates head positions and indicates the direction of each face in 3-D space over a multi-camera surveillance system. To locate 3-D head positions, conventional methods relied on face detection in 2-D images and projected the face regions back into 3-D space for correspondence. However, the inevitable false face detections and rejections usually degrade system performance. Instead, our system searches for heads and face directions over the 3-D space using a sliding cube. Each searched 3-D cube is projected onto the 2-D camera views to determine the existence and direction of human faces. Moreover, a pre-processing step that estimates the locations of candidate targets is introduced to speed up the search over the 3-D space. In summary, our proposed method can efficiently fuse multi-camera information and suppress the ambiguity caused by detection errors. Our evaluation shows that the proposed approach can efficiently indicate head position and face direction on real video sequences, even under serious occlusion.

  14. Face detection on distorted images using perceptual quality-aware features

    NASA Astrophysics Data System (ADS)

    Gunasekar, Suriya; Ghosh, Joydeep; Bovik, Alan C.

    2014-02-01

    We quantify the degradation in performance of a popular and effective face detector when human-perceived image quality is degraded by distortions due to additive white Gaussian noise, Gaussian blur, or JPEG compression. It is observed that, within a certain range of perceived image quality, a modest increase in image quality can drastically improve face detection performance. These results can be used to guide resource or bandwidth allocation in a communication/delivery system associated with face detection tasks. A new face detector based on QualHOG features is also proposed, which augments face-indicative HOG features with perceptual quality-aware spatial Natural Scene Statistics (NSS) features, yielding improved tolerance to image distortions. The new detector provides statistically significant improvements over a strong baseline on a large database of face images representing a wide range of distortions. To facilitate this study, we created a new Distorted Face Database, containing face and non-face patches from images impaired by a variety of common distortion types and levels. This new dataset is available for download and further experimentation at www.ideal.ece.utexas.edu/~suriya/DFD/.

  15. An ERP study of famous face incongruity detection in middle age.

    PubMed

    Chaby, L; Jemel, B; George, N; Renault, B; Fiori, N

    2001-04-01

    Age-related changes in famous face incongruity detection were examined in middle-aged (mean age = 50.6 years) and young (mean age = 24.8 years) subjects. Behavioral and ERP responses were recorded while subjects, after the presentation of a "prime face" (a famous person with the eyes masked), had to decide whether the following "test face" was completed with its authentic eyes (congruent) or with other eyes (incongruent). The principal effects of advancing age were (1) behavioral difficulties in discriminating between incongruent and congruent faces; (2) a reduced N400 effect due to N400 enhancement for both congruent and incongruent faces; and (3) a latency increase of both N400 and P600 components. ERPs to primes (face encoding) were not affected by aging. These results are interpreted in terms of early signs of aging. Copyright 2001 Academic Press.

  16. Face, Body, and Center of Gravity Mediate Person Detection in Natural Scenes

    ERIC Educational Resources Information Center

    Bindemann, Markus; Scheepers, Christoph; Ferguson, Heather J.; Burton, A. Mike

    2010-01-01

    Person detection is an important prerequisite of social interaction, but is not well understood. Following suggestions that people in the visual field can capture a viewer's attention, this study examines the role of the face and the body for person detection in natural scenes. We observed that viewers tend first to look at the center of a scene,…

  17. Hardware-software face detection system based on multi-block local binary patterns

    NASA Astrophysics Data System (ADS)

    Acasandrei, Laurentiu; Barriga, Angel

    2015-03-01

    Face detection is an important aspect of biometrics, video surveillance, and human-computer interaction. Due to the complexity of the detection algorithms, any face detection system requires a large amount of computational and memory resources. In this communication, an accelerated implementation of the MB-LBP face detection algorithm targeting low-frequency, low-memory, and low-power embedded systems is presented. The resulting implementation is time-deterministic and uses a customizable AMBA IP hardware accelerator. The IP implements the kernel operations of the MB-LBP algorithm and can be used as a universal accelerator for MB-LBP based applications. The IP employs 8 parallel MB-LBP feature evaluator cores, uses a deterministic bandwidth, has a low area profile, and consumes ~95 mW on a Virtex5 XC5VLX50T. The resulting acceleration gain is between 5 and 8 times, while the hardware MB-LBP feature evaluation gain is between 69 and 139 times.
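
    For readers unfamiliar with the feature, a multi-block LBP code compares the mean intensity of the eight blocks surrounding a central block, yielding an 8-bit code per position; this is the kernel operation the accelerator cores evaluate in parallel. A plain-Python sketch (the block layout and bit ordering below are common conventions, not taken from the paper):

```python
import numpy as np

def mb_lbp(img, x, y, bw, bh):
    """Multi-block LBP code at (x, y): compare the mean intensity of the
    eight neighbouring bw x bh blocks against the central block."""
    means = np.empty((3, 3))
    for r in range(3):
        for c in range(3):
            block = img[y + r*bh : y + (r+1)*bh, x + c*bw : x + (c+1)*bw]
            means[r, c] = block.mean()
    # clockwise neighbour order starting at the top-left block
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(order):
        if means[r, c] >= means[1, 1]:
            code |= 1 << bit
    return code
```

In hardware, the nine block means come from an integral image, so each code costs a fixed number of additions regardless of block size.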

  18. Automatic Detection of Frontal Face Midline by Chain-coded Merlin-Farber Hough Transform

    NASA Astrophysics Data System (ADS)

    Okamoto, Daichi; Ohyama, Wataru; Wakabayashi, Tetsushi; Kimura, Fumitaka

    We propose a novel approach for detecting the facial midline (facial symmetry axis) from a frontal face image. The facial midline has several applications, for instance reducing the computational cost of facial feature extraction (FFE) and postoperative assessment for cosmetic or dental surgery. The proposed method detects the facial midline of a frontal face from an edge image, as the symmetry axis, using the Merlin-Farber Hough transform (MFHT). A new performance improvement scheme for midline detection by MFHT is also presented. The main concept of the proposed scheme is the suppression of redundant votes in the Hough parameter space by introducing a chain-code representation for the binary edge image. Experimental results on an image dataset containing 2409 images from the FERET database indicate that the proposed algorithm improves the accuracy of midline detection from 89.9% to 95.1% for face images with different scales and rotations.
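
    As a point of reference for what the symmetry-axis search computes, the midline of a roughly symmetric frontal face can also be found by brute force: score every column of the binary edge image as a candidate mirror axis. This sketch is an illustrative stand-in restricted to vertical axes, not the chain-coded MFHT itself:

```python
import numpy as np

def vertical_midline(edges):
    """Brute-force facial-midline search: score every column as a mirror
    axis of the binary edge image and return the best-scoring column."""
    h, w = edges.shape
    best_col, best_score = 0, -1.0
    for col in range(1, w - 1):
        half = min(col, w - 1 - col)
        left = edges[:, col - half:col]
        right = edges[:, col + 1:col + 1 + half][:, ::-1]  # mirrored
        score = (left == right).mean() * half  # reward wide, consistent overlap
        if score > best_score:
            best_col, best_score = col, score
    return best_col
```

The Hough formulation reaches the same axis far more cheaply by letting edge-pixel pairs vote for their perpendicular bisector.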

  19. A Comparative Survey of Methods for Remote Heart Rate Detection From Frontal Face Videos

    PubMed Central

    Wang, Chen; Pun, Thierry; Chanel, Guillaume

    2018-01-01

    Remotely measuring physiological activity can provide substantial benefits for both medical and affective computing applications. Recent research has proposed different methodologies for the unobtrusive detection of heart rate (HR) using human face recordings. These methods are based on subtle color changes or motions of the face due to cardiovascular activity, which are invisible to human eyes but can be captured by digital cameras. Several approaches have been proposed, based on signal processing and machine learning. However, these methods were evaluated on different datasets, so there is no consensus on their relative performance. In this article, we describe and evaluate several methods defined in the literature, from 2008 until the present day, for the remote detection of HR from human face recordings. The general HR processing pipeline is divided into three stages: face video processing, face blood volume pulse (BVP) signal extraction, and HR computation. Approaches presented in the paper are classified and grouped according to each stage. At each stage, algorithms are analyzed and compared based on their performance on the public MAHNOB-HCI database; the results reported in this article are therefore limited to that dataset. Results show that the extracted face skin area contains more BVP information, and that blind source separation and peak detection methods are more robust to head motion when estimating HR. PMID:29765940
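
    The third pipeline stage (HR computation) is typically a spectral peak search on the extracted BVP signal. A minimal sketch on a synthetic green-channel trace; the 0.7-4 Hz passband is a common choice in this literature, not a value from the survey:

```python
import numpy as np

def estimate_hr(green_trace, fps):
    """Estimate heart rate (bpm) from the mean green-channel trace of a
    face region: demean, FFT, and pick the spectral peak in 0.7-4 Hz."""
    x = green_trace - np.mean(green_trace)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    amps = np.abs(np.fft.rfft(x))
    band = (freqs >= 0.7) & (freqs <= 4.0)   # 42-240 bpm
    return 60.0 * freqs[band][np.argmax(amps[band])]

# synthetic 10 s trace at 30 fps: 1.2 Hz pulse (72 bpm) plus slow drift
fps = 30
t = np.arange(300) / fps
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * t
hr = estimate_hr(trace, fps)
```

Restricting the peak search to the physiological band is what rejects the illumination drift term in the example.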

  20. Unconstrained face detection and recognition based on RGB-D camera for the visually impaired

    NASA Astrophysics Data System (ADS)

    Zhao, Xiangdong; Wang, Kaiwei; Yang, Kailun; Hu, Weijian

    2017-02-01

    It is highly important for visually impaired people (VIP) to be aware of the human beings around them, so correctly recognizing people in VIP-assisting apparatus provides great convenience. However, in classical face recognition technology, the faces used in training and prediction are usually frontal, and acquiring face images requires subjects to come close to the camera so that a frontal pose and adequate illumination are guaranteed. Meanwhile, face labels are defined manually rather than automatically, and labels belonging to different classes usually need to be input one by one. These constraints hinder practical assistive applications for VIP. In this article, a face recognition system for unconstrained environments is proposed. Specifically, it requires neither the frontal pose nor the uniform illumination demanded by previous algorithms. The contributions of this work lie in three aspects. First, a real-time frontal-face synthesizing enhancement is implemented; the synthesized frontal faces increase the recognition rate, as shown by our experimental results. Second, an RGB-D camera plays a significant role in our system: both color and depth information are utilized to achieve real-time face tracking, which not only raises the detection rate but also makes it possible to label faces automatically. Finally, we propose to train the face recognition system with neural networks, applying Principal Component Analysis (PCA) to pre-refine the input data. This system is expected to help VIP become familiar with others and to let them recognize people once the system is sufficiently trained.
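
    The PCA pre-refinement step can be sketched with a plain SVD: flattened face crops are projected onto the top principal components before being fed to the network. The dimensions and random data below are placeholders, not the paper's settings:

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA on the rows of X (one flattened face crop per row) via SVD
    and return (mean, components) for a k-dimensional projection."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def pca_transform(X, mean, comps):
    """Project rows of X onto the fitted principal components."""
    return (X - mean) @ comps.T

rng = np.random.default_rng(0)
faces = rng.normal(size=(40, 64 * 64))       # stand-in for flattened face crops
mean, comps = pca_fit(faces, k=16)
codes = pca_transform(faces, mean, comps)    # 40 x 16 inputs for the network
```

The reduced codes, rather than raw pixels, become the network's training inputs, which shrinks the input layer and suppresses pixel noise.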

  1. Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions

    NASA Astrophysics Data System (ADS)

    Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.

    2005-03-01

    The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system also exhibits the capacity to focus its attention on the faces of detected pedestrians, collecting snapshot frames of face images by segmenting and tracking them over time at different resolutions. The system employs two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects moving objects using change detection techniques. The detected objects are tracked over time and their positions are indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor in order to point its zooming optics towards the moving object. The second camera tracks the objects at high resolution. Like the client camera, this sensor is calibrated, and the position of an object detected in its image-plane reference system is translated into coordinates on the same area map. In the map's common reference system, data fusion techniques are applied to achieve a more precise and robust estimation of the objects' tracks and to perform face detection and tracking. The novelties and strengths of the work reside in the cooperative multi-sensor approach, in high-resolution long-distance tracking, and in the automatic collection of biometric data, such as a person's face clip, for recognition purposes.

  2. Face detection in color images using skin color, Laplacian of Gaussian, and Euler number

    NASA Astrophysics Data System (ADS)

    Saligrama Sundara Raman, Shylaja; Kannanedhi Narasimha Sastry, Balasubramanya Murthy; Subramanyam, Natarajan; Senkutuvan, Ramya; Srikanth, Radhika; John, Nikita; Rao, Prateek

    2010-02-01

    In this paper, a feature-based approach to face detection is proposed using an ensemble of algorithms. The method uses chrominance values and edge features to classify image regions as skin or non-skin. The edge detector used for this purpose is the Laplacian of Gaussian (LoG), which is found to be appropriate for images containing multiple faces and noise. Eight-connectivity analysis of these regions segregates them as probable face or non-face. The procedure is made more robust by identifying local features within the skin regions, including the number of holes, the percentage of skin, and the golden ratio. The proposed method has been tested on color face images of various races obtained from different sources, and its performance is found to be encouraging, as the color segmentation cleans up almost all the complex facial features. The result obtained has a calculated accuracy of 86.5% on a test set of 230 images.
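
    The chrominance-based skin classification step can be sketched with a standard YCbCr rule. The threshold ranges below are the widely used heuristic values for skin chrominance, which may differ from the thresholds the authors tuned:

```python
import numpy as np

def skin_mask(rgb):
    """Classify pixels as skin with a common YCbCr chrominance rule
    (77 <= Cb <= 127, 133 <= Cr <= 173) on 8-bit RGB values."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # ITU-R BT.601 chrominance components
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
```

Connected-component analysis and the hole/golden-ratio tests described above would then run on this binary mask.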

  3. Preserved search asymmetry in the detection of fearful faces among neutral faces in individuals with Williams syndrome revealed by measurement of both manual responses and eye tracking.

    PubMed

    Hirai, Masahiro; Muramatsu, Yukako; Mizuno, Seiji; Kurahashi, Naoko; Kurahashi, Hirokazu; Nakamura, Miho

    2017-01-01

    Individuals with Williams syndrome (WS) exhibit an atypical social phenotype termed hypersociability. One theory accounting for hypersociability presumes an atypical function of the amygdala, which processes fear-related information. However, evidence is lacking regarding the mechanisms by which individuals with WS detect fearful faces. Here, we introduce a visual search paradigm to elucidate these mechanisms by evaluating search asymmetry, i.e., whether reaction times differ when target and distractors are swapped. Eye movements can reflect subtle atypical attentional properties that manual responses fail to capture in individuals with WS. Therefore, we measured both the eye movements and the manual responses of individuals with WS and of typically developed children and adults during visual search for a fearful face among neutral faces, or for a neutral face among fearful faces. Two task measures, reaction time and performance accuracy, were analyzed for each stimulus, as well as gaze behavior and initial fixation onset latency. Overall, reaction times in the WS group and the mentally age-matched control group were significantly longer than those in the chronologically age-matched group. We observed a search asymmetry effect in all groups: when a neutral target facial expression was presented among fearful faces, reaction times were significantly prolonged in comparison with when a fearful target facial expression was displayed among neutral distractor faces. Furthermore, the first fixation onset latency of eye movements toward a target facial expression showed a similar tendency to the manual responses. Although overall responses in detecting fearful faces are slower for individuals with WS than for the control groups, search asymmetry was observed; therefore, the cognitive mechanisms underlying the detection of fearful faces appear to be typical in individuals with WS.

  4. Detecting gear tooth fracture in a high contact ratio face gear mesh

    NASA Technical Reports Server (NTRS)

    Zakrajsek, James J.; Handschuh, Robert F.; Lewicki, David G.; Decker, Harry J.

    1995-01-01

    This paper summarizes the results of a study in which three different vibration diagnostic methods were used to detect gear tooth fracture in a high-contact-ratio face gear mesh. The NASA spiral bevel gear fatigue test rig was used to produce natural, unseeded-fault failures of four face gear specimens. During the fatigue tests, which were run to determine load capacity and primary failure mechanisms for face gears, vibration signals were monitored and recorded for gear diagnostic purposes. Gear tooth bending fatigue and surface pitting were the primary failure modes found in the tests. The damage ranged from partial tooth fracture on a single tooth in one test to heavy wear, severe pitting, and complete fracture of several teeth in another test. Three gear fault detection techniques, FM4, NA4*, and NB4, were applied to the experimental data. These methods use the signal average in both the time and frequency domains. Method NA4* was able to conclusively detect the gear tooth fractures in three of the four fatigue tests, along with gear tooth surface pitting and heavy wear. For multiple tooth fractures, all of the methods gave a clear indication of the damage. It was also found that, due to the high contact ratio of the face gear mesh, single tooth fractures did not significantly affect the vibration signal, making this type of failure difficult to detect.
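
    Of the three metrics, FM4 is the simplest to state: it is the normalized fourth statistical moment (kurtosis) of the residual signal, which stays near 3 for a healthy gear and rises when isolated tooth defects add impulsive peaks. A sketch on synthetic data; the residual construction here is illustrative, not the rig's actual signal-average processing:

```python
import numpy as np

def fm4(residual):
    """FM4 metric: normalized fourth moment (kurtosis) of the residual
    signal (the signal average with regular gear-mesh components removed)."""
    d = residual - residual.mean()
    n = len(d)
    return n * np.sum(d**4) / np.sum(d**2) ** 2

rng = np.random.default_rng(1)
healthy = rng.normal(size=1024)   # Gaussian-like residual: FM4 near 3
damaged = healthy.copy()
damaged[500] += 25.0              # impulsive spike from a cracked tooth
```

A single large spike is enough to push FM4 far above the healthy baseline, which is why the metric suits localized-defect detection.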

  5. Subject independent facial expression recognition with robust face detection using a convolutional neural network.

    PubMed

    Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji

    2003-01-01

    Reliable detection of ordinary facial expressions (e.g. smiles), despite variability among individuals as well as in face appearance, is an important step toward the realization of a perceptual user interface with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expressions. The result shows reliable detection of smiles with a recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained by voting on visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.

  6. Face repetition detection and social interest: An ERP study in adults with and without Williams syndrome.

    PubMed

    Key, Alexandra P; Dykens, Elisabeth M

    2016-12-01

    The present study examined possible neural mechanisms underlying increased social interest in persons with Williams syndrome (WS). Visual event-related potentials (ERPs) during passive viewing were used to compare incidental memory traces for repeated vs. single presentations of previously unfamiliar social (faces) and nonsocial (houses) images in 26 adults with WS and 26 typical adults. Results indicated that participants with WS developed familiarity with the repeated faces and houses (frontal N400 response), but only typical adults evidenced the parietal old/new effect (previously associated with stimulus recollection) for the repeated faces. There was also no evidence of exceptional salience of social information in WS, as ERP markers of memory for repeated faces vs. houses were not significantly different. Thus, while persons with WS exhibit behavioral evidence of increased social interest, their processing of social information in the absence of specific instructions may be relatively superficial. The ERP evidence of face repetition detection in WS was independent of IQ and the earlier perceptual differentiation of social vs. nonsocial stimuli. Large individual differences in ERPs of participants with WS may provide valuable information for understanding the WS phenotype and have relevance for educational and treatment purposes.

  7. Real-time camera-based face detection using a modified LAMSTAR neural network system

    NASA Astrophysics Data System (ADS)

    Girado, Javier I.; Sandin, Daniel J.; DeFanti, Thomas A.; Wolf, Laura K.

    2003-03-01

    This paper describes a cost-effective, real-time (640x480 at 30 Hz) upright frontal face detector as part of an ongoing project to develop a video-based, tetherless 3D head position and orientation tracking system. The work is specifically targeted at auto-stereoscopic displays and projection-based virtual reality systems. The proposed face detector is based on a modified LAMSTAR neural network system. At the input stage, after image normalization and equalization, a sub-window analyzes facial features using a neural network. The sub-window is segmented, and each part is fed to a neural network layer consisting of a Kohonen Self-Organizing Map (SOM). The outputs of the SOM neural networks are interconnected and related by correlation links, and can hence determine the presence of a face with enough redundancy to provide a high detection rate. To avoid tracking multiple faces simultaneously, the system is initially trained to track only the face centered in a box superimposed on the display. The system is also rotationally and size invariant to a certain degree.

  8. Early detection of tooth wear by en-face optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Mărcăuteanu, Corina; Negrutiu, Meda; Sinescu, Cosmin; Demjan, Eniko; Hughes, Mike; Bradu, Adrian; Dobre, George; Podoleanu, Adrian G.

    2009-02-01

    Excessive dental wear (pathological attrition and/or abfractions) is a frequent complication in bruxing patients, as the parafunction causes heavy occlusal loads. The aim of this study is the early detection and monitoring of occlusal overload in bruxing patients. En-face optical coherence tomography was used for investigating and imaging several extracted teeth with normal morphology, derived from patients with active bruxism and from subjects without the parafunction. We found a characteristic pattern of enamel cracks in patients with first-degree bruxism and normal tooth morphology. We conclude that en-face optical coherence tomography is a promising non-invasive technique for the early detection of occlusal overload, before it becomes clinically evident as tooth wear.

  9. An Implementation of Privacy Protection for a Surveillance Camera Using ROI Coding of JPEG2000 with Face Detection

    NASA Astrophysics Data System (ADS)

    Muneyasu, Mitsuji; Odani, Shuhei; Kitaura, Yoshihiro; Namba, Hitoshi

    When surveillance cameras are used, there are cases in which privacy protection must be considered. This paper proposes a new privacy protection method that automatically degrades the face region in surveillance images. The proposed method combines ROI coding of JPEG2000 with a face detection method based on template matching. The experimental results show that the face region can be detected and hidden correctly.

  10. Face mask sampling for the detection of Mycobacterium tuberculosis in expelled aerosols.

    PubMed

    Williams, Caroline M L; Cheah, Eddy S G; Malkin, Joanne; Patel, Hemu; Otu, Jacob; Mlaga, Kodjovi; Sutherland, Jayne S; Antonio, Martin; Perera, Nelun; Woltmann, Gerrit; Haldar, Pranabashis; Garton, Natalie J; Barer, Michael R

    2014-01-01

    Although tuberculosis is transmitted by the airborne route, direct information on the natural output of bacilli into air by source cases is very limited. We sought to address this through sampling of expelled aerosols in face masks that were subsequently analyzed for mycobacterial contamination. In series 1, 17 smear microscopy positive patients wore standard surgical face masks once or twice for periods between 10 minutes and 5 hours; mycobacterial contamination was detected using a bacteriophage assay. In series 2, 19 patients with suspected tuberculosis were studied in Leicester UK and 10 patients with at least one positive smear were studied in The Gambia. These subjects wore one FFP30 mask modified to contain a gelatin filter for one hour; this was subsequently analyzed by the Xpert MTB/RIF system. In series 1, the bacteriophage assay detected live mycobacteria in 11/17 patients with wearing times between 10 and 120 minutes. Variation was seen in mask positivity and the level of contamination detected in multiple samples from the same patient. Two patients had non-tuberculous mycobacterial infections. In series 2, 13/20 patients with pulmonary tuberculosis produced positive masks and 0/9 patients with extrapulmonary or non-tuberculous diagnoses were mask positive. Overall, 65% of patients with confirmed pulmonary mycobacterial infection gave positive masks and this included 3/6 patients who received diagnostic bronchoalveolar lavages. Mask sampling provides a simple means of assessing mycobacterial output in non-sputum expectorant. The approach shows potential for application to the study of airborne transmission and to diagnosis.

  12. An objective method for measuring face detection thresholds using the sweep steady-state visual evoked response

    PubMed Central

    Ales, Justin M.; Farzin, Faraz; Rossion, Bruno; Norcia, Anthony M.

    2012-01-01

    We introduce a sensitive method for measuring face detection thresholds rapidly, objectively, and independently of low-level visual cues. The method is based on the swept parameter steady-state visual evoked potential (ssVEP), in which a stimulus is presented at a specific temporal frequency while parametrically varying (“sweeping”) the detectability of the stimulus. Here, the visibility of a face image was increased by progressive derandomization of the phase spectra of the image in a series of equally spaced steps. Alternations between face and fully randomized images at a constant rate (3/s) elicit a robust first harmonic response at 3 Hz specific to the structure of the face. High-density EEG was recorded from 10 human adult participants, who were asked to respond with a button-press as soon as they detected a face. The majority of participants produced an evoked response at the first harmonic (3 Hz) that emerged abruptly between 30% and 35% phase-coherence of the face, which was most prominent on right occipito-temporal sites. Thresholds for face detection were estimated reliably in single participants from 15 trials, or on each of the 15 individual face trials. The ssVEP-derived thresholds correlated with the concurrently measured perceptual face detection thresholds. This first application of the sweep VEP approach to high-level vision provides a sensitive and objective method that could be used to measure and compare visual perception thresholds for various object shapes and levels of categorization in different human populations, including infants and individuals with developmental delay. PMID:23024355
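
    The first-harmonic measure described above is the spectral amplitude of the EEG at the stimulation frequency, tracked while face visibility is swept. A minimal sketch of that extraction on a synthetic trace; the sampling rate and amplitudes are arbitrary, not the study's recording parameters:

```python
import numpy as np

def harmonic_amplitude(eeg, fs, f0):
    """Amplitude of the evoked response at stimulation frequency f0,
    taken from the FFT of the demeaned EEG trace."""
    x = eeg - eeg.mean()
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    amps = 2 * np.abs(np.fft.rfft(x)) / len(x)
    return amps[np.argmin(np.abs(freqs - f0))]

fs = 420
t = np.arange(4200) / fs                  # 10 s of "EEG" at 420 Hz
eeg = 2.0 * np.sin(2 * np.pi * 3.0 * t)   # 3 Hz face response, amplitude 2
```

Plotting this amplitude against the swept phase-coherence level is what yields the abrupt threshold described in the abstract.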

  13. Face Detection Technique as Interactive Audio/Video Controller for a Mother-Tongue-Based Instructional Material

    NASA Astrophysics Data System (ADS)

    Guidang, Excel Philip B.; Llanda, Christopher John R.; Palaoag, Thelma D.

    2018-03-01

    Face detection as a strategy for controlling a multimedia instructional material was implemented in this study. Specifically, the study achieved the following objectives: 1) developed a face detection application in Python that controls an embedded mother-tongue-based instructional material through a face-recognition configuration; 2) determined the perceptions of the students using Mutt Susan's student app review rubric. The study concludes that the face detection technique is effective in controlling an electronic instructional material and can change the way students interact with it. 90% of the students rated the application as great, and 10% rated it as good.

  14. Image Quality Assessment for Fake Biometric Detection: Application to Iris, Fingerprint, and Face Recognition.

    PubMed

    Galbally, Javier; Marcel, Sébastien; Fierrez, Julian

    2014-02-01

    Ensuring the actual presence of a real, legitimate trait, in contrast to a fake, self-manufactured synthetic or reconstructed sample, is a significant problem in biometric authentication, and it requires the development of new and efficient protection measures. In this paper, we present a novel software-based fake detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts. The objective of the proposed system is to enhance the security of biometric recognition frameworks by adding liveness assessment in a fast, user-friendly, and non-intrusive manner, through the use of image quality assessment. The proposed approach presents a very low degree of complexity, which makes it suitable for real-time applications, using 25 general image quality features extracted from a single image (i.e., the same image acquired for authentication purposes) to distinguish between legitimate and impostor samples. The experimental results, obtained on publicly available data sets of fingerprint, iris, and 2D face images, show that the proposed method is highly competitive with other state-of-the-art approaches and that the analysis of the general image quality of real biometric samples reveals highly valuable information that may be very efficiently used to discriminate them from fake traits.

  15. Moving human full body and body parts detection, tracking, and applications on human activity estimation, walking pattern and face recognition

    NASA Astrophysics Data System (ADS)

    Chen, Hai-Wen; McGurr, Mike

    2016-05-01

    We have developed a new way for detection and tracking of human full-body and body-parts with color (intensity) patch morphological segmentation and adaptive thresholding for security surveillance cameras. An adaptive threshold scheme has been developed for dealing with body size changes, illumination condition changes, and cross camera parameter changes. Tests with the PETS 2009 and 2014 datasets show that we can obtain high probability of detection and low probability of false alarm for full-body. Test results indicate that our human full-body detection method can considerably outperform the current state-of-the-art methods in both detection performance and computational complexity. Furthermore, in this paper, we have developed several methods using color features for detection and tracking of human body-parts (arms, legs, torso, and head, etc.). For example, we have developed a human skin color sub-patch segmentation algorithm by first conducting a RGB to YIQ transformation and then applying a Subtractive I/Q image Fusion with morphological operations. With this method, we can reliably detect and track human skin color related body-parts such as face, neck, arms, and legs. Reliable body-parts (e.g. head) detection allows us to continuously track the individual person even in the case that multiple closely spaced persons are merged. Accordingly, we have developed a new algorithm to split a merged detection blob back to individual detections based on the detected head positions. Detected body-parts also allow us to extract important local constellation features of the body-parts positions and angles related to the full-body. These features are useful for human walking gait pattern recognition and human pose (e.g. standing or falling down) estimation for potential abnormal behavior and accidental event detection, as evidenced with our experimental tests. 
Furthermore, based on the reliable head (face) tracking, we have applied a super-resolution algorithm to enhance
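The RGB-to-YIQ skin segmentation step described in this record can be sketched in a few lines. The NTSC transform coefficients below are standard; the subtractive I/Q fusion threshold is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def skin_mask_yiq(rgb, thresh=30.0):
    """Flag skin-like pixels via RGB -> YIQ and subtractive I/Q fusion.

    rgb: (..., 3) array in the 0-255 range. thresh is a placeholder value.
    """
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Standard NTSC RGB -> YIQ chrominance components
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    # Skin tones have strong I and weak Q, so I - Q separates them
    return (i - q) > thresh
```

The paper additionally applies morphological operations to the resulting mask, which are omitted here.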

  16. Neutral face classification using personalized appearance models for fast and robust emotion detection.

    PubMed

    Chiranjeevi, Pojala; Gopalakrishnan, Viswanath; Moogi, Pratibha

    2015-09-01

Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised learning-based facial expression recognition methods. This is due to the fact that supervised methods cannot accommodate all appearance variability across faces with respect to race, pose, lighting, facial biases, and so on, in the limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in usual applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage, thereby bypassing those frames from emotion classification, would save computational power. In this paper, we propose a light-weight neutral versus emotion classification engine, which acts as a pre-processor to traditional supervised emotion classification approaches. It dynamically learns neutral appearance at key emotion (KE) points using a statistical texture model, constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motions by accounting for affine distortions based on the statistical texture model. Robustness to dynamic shift of KE points is achieved by evaluating the similarities on a subset of neighborhood patches around each KE point, using prior information regarding the directionality of specific facial action units acting on the respective KE point. The proposed method, as a result, improves emotion recognition (ER) accuracy and simultaneously reduces the computational complexity of the ER system, as validated on multiple databases.

  17. Multi-Frame Object Detection

    DTIC Science & Technology

    2012-09-01

ensures that the trainer will produce a cascade that achieves a 0.9044 hit rate (= 0.99^10) or better, or it will fail trying. The Viola-Jones...by the user. Thus, a final cascade cannot be produced, and the trainer has failed at the specific hit and FA rate requirements. ... International Journal of Computer Vision, vol. 63, no. 2, pp. 153–161, July 2005. [3] L. Lee, "Gait dynamics for recognition and classification," in AI Memo
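The 0.9044 figure in the snippet above is the usual rejection-cascade arithmetic: a window counts as a hit only if every stage accepts it, so per-stage hit rates multiply. Assuming ten stages at a 0.99 per-stage hit rate:

```python
# A window must pass all stages, so the overall hit rate is the
# product of the per-stage hit rates.
stages, per_stage = 10, 0.99
overall = per_stage ** stages
print(round(overall, 4))  # → 0.9044
```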

  18. The structural and functional correlates of the efficiency in fearful face detection.

    PubMed

    Wang, Yongchao; Guo, Nana; Zhao, Li; Huang, Hui; Yao, Xiaonan; Sang, Na; Hou, Xin; Mao, Yu; Bi, Taiyong; Qiu, Jiang

    2017-06-01

The human visual system is highly efficient in searching for a fearful face. Some individuals are more sensitive to this threat-related stimulus. However, we still know little about the neural correlates of such variability. In the current study, we exploited a visual search paradigm and asked the subjects to search for a fearful face or a target face gender. Every subject showed a shallower search function for fearful face search than for face gender search, indicating a stable fearful face advantage. We then used voxel-based morphometry (VBM) analysis and correlated this advantage to the gray matter volume (GMV) of some presumably face-related cortical areas. The result revealed that only the left fusiform gyrus showed a significant positive correlation. Next, we defined the left fusiform gyrus as the seed region and calculated its resting state functional connectivity to the whole brain. Correlations were also calculated between the fearful face advantage and these connectivities. In this analysis, we found positive correlations in the inferior parietal lobe and the ventral medial prefrontal cortex. These results suggest that the anatomical structure of the left fusiform gyrus might determine the search efficiency for fearful faces, and that the frontoparietal attention network is involved in this process through top-down attentional modulation. Copyright © 2017. Published by Elsevier Ltd.

  19. SENSITIVITY AND SPECIFICITY OF DETECTING POLYPOIDAL CHOROIDAL VASCULOPATHY WITH EN FACE OPTICAL COHERENCE TOMOGRAPHY AND OPTICAL COHERENCE TOMOGRAPHY ANGIOGRAPHY.

    PubMed

    de Carlo, Talisa E; Kokame, Gregg T; Kaneko, Kyle N; Lian, Rebecca; Lai, James C; Wee, Raymond

    2018-03-20

Determine sensitivity and specificity of polypoidal choroidal vasculopathy (PCV) diagnosis with structural en face optical coherence tomography (OCT) and OCT angiography (OCTA). Retrospective review of the medical records of eyes diagnosed with PCV by indocyanine green angiography, with review of diagnostic testing with structural en face OCT and OCTA by a trained reader. Structural en face OCT, cross-sectional OCT angiograms alone, and OCTA in its entirety were reviewed blinded to the findings of indocyanine green angiography and each other to determine if they could demonstrate the PCV complex. Sensitivity and specificity of PCV diagnosis were determined for each imaging technique using indocyanine green angiography as the ground truth. Sensitivity and specificity of structural en face OCT were 30.0% and 85.7%, of OCT angiograms alone were 26.8% and 96.8%, and of the entire OCTA were 43.9% and 87.1%, respectively. Sensitivity and specificity were improved for OCT angiograms and OCTA when looking at images taken within 1 month of PCV diagnosis. Sensitivity of detecting PCV was low using structural en face OCT and OCTA but specificity was high. Indocyanine green angiography remains the gold standard for PCV detection.

  20. Detecting Superior Face Recognition Skills in a Large Sample of Young British Adults

    PubMed Central

    Bobak, Anna K.; Pampoulov, Philip; Bate, Sarah

    2016-01-01

    The Cambridge Face Memory Test Long Form (CFMT+) and Cambridge Face Perception Test (CFPT) are typically used to assess the face processing ability of individuals who believe they have superior face recognition skills. Previous large-scale studies have presented norms for the CFPT but not the CFMT+. However, previous research has also highlighted the necessity for establishing country-specific norms for these tests, indicating that norming data is required for both tests using young British adults. The current study addressed this issue in 254 British participants. In addition to providing the first norm for performance on the CFMT+ in any large sample, we also report the first UK specific cut-off for superior face recognition on the CFPT. Further analyses identified a small advantage for females on both tests, and only small associations between objective face recognition skills and self-report measures. A secondary aim of the study was to examine the relationship between trait or social anxiety and face processing ability, and no associations were noted. The implications of these findings for the classification of super-recognizers are discussed. PMID:27713706

  1. Using pattern recognition to automatically localize reflection hyperbolas in data from ground penetrating radar

    NASA Astrophysics Data System (ADS)

    Maas, Christian; Schmalzl, Jörg

    2013-08-01

Ground Penetrating Radar (GPR) is used for the localization of supply lines, land mines, pipes and many other buried objects. These objects can be recognized in the recorded data as reflection hyperbolas with a typical shape depending on the depth and material of the object and the surrounding material. To obtain these parameters, the shape of the hyperbola has to be fitted. In recent years several methods have been developed to automate this task during post-processing. In this paper we show another approach for the automated localization of reflection hyperbolas in GPR data by solving a pattern recognition problem in grayscale images. In contrast to other methods, our detection program is also able to immediately mark potential objects in real time. For this task we use a version of the Viola-Jones learning algorithm, which is part of the open source library "OpenCV". This algorithm was initially developed for face recognition, but can be adapted to any other simple shape. In our program it is used to narrow down the location of reflection hyperbolas to certain areas in the GPR data. In order to extract the exact location and the velocity of the hyperbolas we apply a simple Hough Transform for hyperbolas. Because the Viola-Jones algorithm dramatically reduces the input to the computationally expensive Hough Transform, the detection system can also be implemented on normal field computers, so on-site application is possible. The developed detection system shows promising results and detection rates in unprocessed radargrams. In order to improve the detection results and apply the program to noisy radar images, more data from different GPR systems is needed as input for the learning algorithm.
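A minimal numpy sketch of a Hough transform for reflection hyperbolas of the kind described above, using the common two-way travel-time model t^2 = t0^2 + (2(x - x0)/v)^2. The grid resolutions and the voting tolerance are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def hough_hyperbola(points, x_range, t_range, v):
    """Vote for hyperbola apexes (x0, t0) from edge points (x, t).

    points: (x, t) pairs from a candidate region (e.g. one flagged
    by the Viola-Jones stage); v: assumed propagation velocity.
    """
    acc = np.zeros((len(x_range), len(t_range)), dtype=int)
    dt = t_range[1] - t_range[0]
    for x, t in points:
        for ix, x0 in enumerate(x_range):
            t0_sq = t**2 - (2.0 * (x - x0) / v) ** 2
            if t0_sq <= 0:
                continue  # point cannot lie on a hyperbola with this apex
            t0 = np.sqrt(t0_sq)
            it = np.argmin(np.abs(t_range - t0))
            if abs(t_range[it] - t0) < dt:
                acc[ix, it] += 1
    return acc  # argmax gives the best-supported apex
```

With synthetic points sampled from a known hyperbola, the accumulator peak recovers the apex position and arrival time.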

  2. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors

    PubMed Central

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-01-01

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases. PMID:29495417

  3. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors.

    PubMed

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-02-26

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases.
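A sketch of the feature-level combination the two records above describe, with a basic single-radius LBP histogram standing in for the multi-level LBP (MLBP) and a placeholder vector standing in for the CNN output; in the paper both the CNN and the SVM classifier are trained, which is omitted here.

```python
import numpy as np

def lbp_histogram(gray):
    """256-bin histogram of basic 8-neighbour LBP codes (radius 1)."""
    g = gray.astype(int)
    c = g[1:-1, 1:-1]                      # centre pixels
    code = np.zeros_like(c)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()

def hybrid_features(deep_vec, gray):
    """Concatenate (placeholder) CNN features with the LBP histogram."""
    return np.concatenate([deep_vec, lbp_histogram(gray)])
```

The concatenated vector would then be fed to an SVM for real-versus-attack classification.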

  4. Differential Brain Activation to Angry Faces by Elite Warfighters: Neural Processing Evidence for Enhanced Threat Detection

    PubMed Central

    Paulus, Martin P.; Simmons, Alan N.; Fitzpatrick, Summer N.; Potterat, Eric G.; Van Orden, Karl F.; Bauman, James; Swain, Judith L.

    2010-01-01

    Background Little is known about the neural basis of elite performers and their optimal performance in extreme environments. The purpose of this study was to examine brain processing differences between elite warfighters and comparison subjects in brain structures that are important for emotion processing and interoception. Methodology/Principal Findings Navy Sea, Air, and Land Forces (SEALs) while off duty (n = 11) were compared with n = 23 healthy male volunteers while performing a simple emotion face-processing task during functional magnetic resonance imaging. Irrespective of the target emotion, elite warfighters relative to comparison subjects showed relatively greater right-sided insula, but attenuated left-sided insula, activation. Navy SEALs showed selectively greater activation to angry target faces relative to fearful or happy target faces bilaterally in the insula. This was not accounted for by contrasting positive versus negative emotions. Finally, these individuals also showed slower response latencies to fearful and happy target faces than did comparison subjects. Conclusions/Significance These findings support the hypothesis that elite warfighters deploy greater processing resources toward potential threat-related facial expressions and reduced processing resources to non-threat-related facial expressions. Moreover, rather than expending more effort in general, elite warfighters show more focused neural and performance tuning. In other words, greater neural processing resources are directed toward threat stimuli and processing resources are conserved when facing a nonthreat stimulus situation. PMID:20418943

  5. Three-dimensional face pose detection and tracking using monocular videos: tool and application.

    PubMed

    Dornaika, Fadi; Raducanu, Bogdan

    2009-08-01

Recently, we have proposed a real-time tracker that simultaneously tracks the 3-D head pose and facial actions in monocular video sequences that can be provided by low quality cameras. This paper has two main contributions. First, we propose an automatic 3-D face pose initialization scheme for the real-time tracker by adopting a 2-D face detector and an eigenface system. Second, we use the proposed methods (initialization and tracking) to enhance the human-machine interaction functionality of an AIBO robot. More precisely, we show how the orientation of the robot's camera (or any active vision system) can be controlled through the estimation of the user's head pose. Applications based on head-pose imitation such as telepresence, virtual reality, and video games can directly exploit the proposed techniques. Experiments on real videos confirm the robustness and usefulness of the proposed methods.
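The eigenface component of an initialization scheme like the one above can be sketched as a PCA fit plus a reconstruction-error test. Using SVD rather than an explicit covariance matrix, and thresholding the error to confirm a face-like crop, are implementation choices assumed here, not details from the paper.

```python
import numpy as np

def eigenface_fit(faces, k):
    """faces: (n, d) flattened face crops; returns mean and top-k eigenfaces."""
    mean = faces.mean(axis=0)
    # Rows of Vt from the SVD of the centred data are the eigenfaces
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[:k]

def reconstruction_error(vec, mean, eigenfaces):
    """Distance from the face subspace; low error suggests a face-like crop."""
    coeffs = eigenfaces @ (vec - mean)
    recon = mean + eigenfaces.T @ coeffs
    return np.linalg.norm(vec - recon)
```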

  6. 3D Face Model Dataset: Automatic Detection of Facial Expressions and Emotions for Educational Environments

    ERIC Educational Resources Information Center

    Chickerur, Satyadhyan; Joshi, Kartik

    2015-01-01

    Emotion detection using facial images is a technique that researchers have been using for the last two decades to try to analyze a person's emotional state given his/her image. Detection of various kinds of emotion using facial expressions of students in educational environment is useful in providing insight into the effectiveness of tutoring…

  7. Evaluation of a processing scheme for calcified atheromatous carotid artery detection in face/neck CBCT images

    NASA Astrophysics Data System (ADS)

    Matheus, B. R. N.; Centurion, B. S.; Rubira-Bullen, I. R. F.; Schiabel, H.

    2017-03-01

Cone Beam Computed Tomography (CBCT) exams of the face and neck can provide an opportunity to identify, as an incidental finding, calcifications of the carotid artery (CACA). Given the similarity of CACA to the calcifications found in several types of x-ray exams, this work suggests that a technique designed to detect breast calcifications in mammography images could be adapted to detect such calcifications in CBCT. The method used a 3D version of the calcification detection technique [1], based on signal enhancement by convolution with a 3D Laplacian of Gaussian (LoG) function, followed by removal of the high-contrast bone structure from the image. Initial promising results show 71% sensitivity with 0.48 false positives per exam.
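The enhancement step described above convolves the volume with a 3D LoG kernel, which can be sketched as follows; the sigma/radius choice and the zero-mean correction are illustrative assumptions.

```python
import numpy as np

def log_kernel_3d(sigma, radius):
    """3D Laplacian-of-Gaussian kernel (up to a constant factor).

    Small bright blobs such as calcifications give a strong response
    when the volume is convolved with this kernel.
    """
    ax = np.arange(-radius, radius + 1)
    z, y, x = np.meshgrid(ax, ax, ax, indexing="ij")
    r2 = x**2 + y**2 + z**2
    gauss = np.exp(-r2 / (2.0 * sigma**2))
    log = (r2 - 3.0 * sigma**2) / sigma**4 * gauss
    return log - log.mean()  # zero-sum: flat regions give zero response
```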

  8. Technological advances for improving adenoma detection rates: The changing face of colonoscopy.

    PubMed

    Ishaq, Sauid; Siau, Keith; Harrison, Elizabeth; Tontini, Gian Eugenio; Hoffman, Arthur; Gross, Seth; Kiesslich, Ralf; Neumann, Helmut

    2017-07-01

Worldwide, colorectal cancer is the third commonest cancer. Over 90% follow an adenoma-to-cancer sequence over many years. Colonoscopy is the gold standard method for cancer screening and early adenoma detection. However, considerable variation exists between endoscopists' detection rates. This review considers the effects of different endoscopic techniques on adenoma detection. Two areas of technological interest were considered: (1) optical technologies and (2) mechanical technologies. Optical solutions, including FICE, NBI, i-SCAN and high definition colonoscopy showed mixed results. In contrast, mechanical advances, such as cap-assisted colonoscopy, FUSE, EndoCuff and G-EYE™, showed promise, with reported detection rates of up to 69%. However, before definitive recommendations can be made for their incorporation into daily practice, further studies and comparison trials are required. Copyright © 2017 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.

  9. Flexibility in Visual Working Memory: Accurate Change Detection in the Face of Irrelevant Variations in Position

    PubMed Central

    Woodman, Geoffrey F.; Vogel, Edward K.; Luck, Steven J.

    2012-01-01

    Many recent studies of visual working memory have used change-detection tasks in which subjects view sequential displays and are asked to report whether they are identical or if one object has changed. A key question is whether the memory system used to perform this task is sufficiently flexible to detect changes in object identity independent of spatial transformations, but previous research has yielded contradictory results. To address this issue, the present study compared standard change-detection tasks with tasks in which the objects varied in size or position between successive arrays. Performance was nearly identical across the standard and transformed tasks unless the task implicitly encouraged spatial encoding. These results resolve the discrepancies in prior studies and demonstrate that the visual working memory system can detect changes in object identity across spatial transformations. PMID:22287933

  10. Baseline Face Detection, Head Pose Estimation, and Coarse Direction Detection for Facial Data in the SHRP2 Naturalistic Driving Study

    SciTech Connect

    Paone, Jeffrey R; Bolme, David S; Ferrell, Regina Kay

Keeping a driver focused on the road is one of the most critical steps in ensuring the safe operation of a vehicle. The Strategic Highway Research Program 2 (SHRP2) has over 3,100 recorded videos of volunteer drivers during a period of 2 years. This extensive naturalistic driving study (NDS) contains over one million hours of video and associated data that could aid safety researchers in understanding where the driver's attention is focused. Manual analysis of this data is infeasible, therefore efforts are underway to develop automated feature extraction algorithms to process and characterize the data. The real-world nature, volume, and acquisition conditions are unmatched in the transportation community, but there are also challenges because the data has relatively low resolution, high compression rates, and differing illumination conditions. A smaller dataset, the head pose validation study, is available which used the same recording equipment as SHRP2 but is more easily accessible with fewer privacy constraints. In this work we report initial head pose accuracy using commercial and open source face pose estimation algorithms on the head pose validation data set.

  11. Features versus context: An approach for precise and detailed detection and delineation of faces and facial features.

    PubMed

    Ding, Liya; Martinez, Aleix M

    2010-11-01

    The appearance-based approach to face detection has seen great advances in the last several years. In this approach, we learn the image statistics describing the texture pattern (appearance) of the object class we want to detect, e.g., the face. However, this approach has had limited success in providing an accurate and detailed description of the internal facial features, i.e., eyes, brows, nose, and mouth. In general, this is due to the limited information carried by the learned statistical model. While the face template is relatively rich in texture, facial features (e.g., eyes, nose, and mouth) do not carry enough discriminative information to tell them apart from all possible background images. We resolve this problem by adding the context information of each facial feature in the design of the statistical model. In the proposed approach, the context information defines the image statistics most correlated with the surroundings of each facial component. This means that when we search for a face or facial feature, we look for those locations which most resemble the feature yet are most dissimilar to its context. This dissimilarity with the context features forces the detector to gravitate toward an accurate estimate of the position of the facial feature. Learning to discriminate between feature and context templates is difficult, however, because the context and the texture of the facial features vary widely under changing expression, pose, and illumination, and may even resemble one another. We address this problem with the use of subclass divisions. We derive two algorithms to automatically divide the training samples of each facial feature into a set of subclasses, each representing a distinct construction of the same facial component (e.g., closed versus open eyes) or its context (e.g., different hairstyles). The first algorithm is based on a discriminant analysis formulation. The second algorithm is an extension of the AdaBoost approach. We provide

  12. Microfluidic Analysis with Front-Face Fluorometric Detection for the Determination of Total Inorganic Iodine in Drinking Water.

    PubMed

    Inpota, Prawpan; Strzelak, Kamil; Koncki, Robert; Sripumkhai, Wisaroot; Jeamsaksiri, Wutthinan; Ratanawimarnwong, Nuanlaor; Wilairat, Prapin; Choengchan, Nathawut; Chantiwas, Rattikan; Nacapricha, Duangjai

    2018-01-01

A microfluidic method with front-face fluorometric detection was developed for the determination of total inorganic iodine in drinking water. A polydimethylsiloxane (PDMS) microfluidic device was employed in conjunction with the Sandell-Kolthoff reaction, in which iodide catalyzes the redox reaction between Ce(IV) and As(III). Direct alignment of an optical fiber attached to a spectrofluorometer served as a convenient detector for remote front-face fluorometric detection. Trace inorganic iodine (IO3- and I-) present naturally in drinking water was measured by on-line conversion of iodate to iodide for determination of total inorganic iodine. The on-line conversion efficiency of iodate to iodide using the microfluidic device was investigated, and an excellent conversion efficiency of 93 - 103% (%RSD = 1.6 - 11%) was obtained. Inorganic iodine concentrations in drinking water samples were measured, and the results were in good agreement with those obtained by an ICP-MS method. Spiked sample recoveries were in the range of 86%(±5) - 128%(±8) (n = 12). Interferences from various anions and cations were investigated, with tolerance limit concentrations ranging from 10^-6 to 2.5 M depending on the type of ion. The developed method is simple and convenient, and it is a green method for iodine analysis, as it greatly reduces the amount of toxic reagent consumed, with reagent volumes at the microfluidic scale.

  13. Feasibility evaluation of a motion detection system with face images for stereotactic radiosurgery.

    PubMed

    Yamakawa, Takuya; Ogawa, Koichi; Iyatomi, Hitoshi; Kunieda, Etsuo

    2011-01-01

    In stereotactic radiosurgery we can irradiate a targeted volume precisely with a narrow high-energy x-ray beam, and thus the motion of a targeted area may cause side effects to normal organs. This paper describes our motion detection system with three USB cameras. To reduce the effect of change in illuminance in a tracking area we used an infrared light and USB cameras that were sensitive to the infrared light. The motion detection of a patient was performed by tracking his/her ears and nose with three USB cameras, where pattern matching between a predefined template image for each view and acquired images was done by an exhaustive search method with a general-purpose computing on a graphics processing unit (GPGPU). The results of the experiments showed that the measurement accuracy of our system was less than 0.7 mm, amounting to less than half of that of our previous system.
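The exhaustive template search described above can be sketched on the CPU as a normalized cross-correlation scan; the GPGPU version in the paper computes the same per-position score in parallel. Sizes and names here are illustrative.

```python
import numpy as np

def match_template(image, template):
    """Exhaustive normalized cross-correlation search (CPU sketch)."""
    H, W = image.shape
    h, w = template.shape
    t = template.astype(float)
    t = t - t.mean()
    tn = np.sqrt((t ** 2).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for yy in range(H - h + 1):
        for xx in range(W - w + 1):
            p = image[yy:yy + h, xx:xx + w].astype(float)
            p = p - p.mean()
            pn = np.sqrt((p ** 2).sum())
            if pn == 0 or tn == 0:
                continue  # flat patch: correlation undefined
            score = (p * t).sum() / (pn * tn)
            if score > best_score:
                best_score, best_pos = score, (yy, xx)
    return best_pos, best_score
```

Embedding a template cut from the image itself recovers its position with a score near 1.0.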

  14. Challenges older adults face in detecting deceit: the role of emotion recognition.

    PubMed

    Stanley, Jennifer Tehan; Blanchard-Fields, Fredda

    2008-03-01

    Facial expressions of emotion are key cues to deceit (M. G. Frank & P. Ekman, 1997). Given that the literature on aging has shown an age-related decline in decoding emotions, we investigated (a) whether there are age differences in deceit detection and (b) if so, whether they are related to impairments in emotion recognition. Young and older adults (N = 364) were presented with 20 interviews (crime and opinion topics) and asked to decide whether each interview subject was lying or telling the truth. There were 3 presentation conditions: visual, audio, or audiovisual. In older adults, reduced emotion recognition was related to poor deceit detection in the visual condition for crime interviews only. (c) 2008 APA, all rights reserved.

  15. Facing possible illness detected through screening--experiences of healthy women with pathological cervical smears.

    PubMed

    Hounsgaard, Lise; Petersen, Lone Kjeld; Pedersen, Birthe D

    2007-12-01

The aim of this study is to gain knowledge about women's perceptions of illness based on their abnormal PAP smears, following screening for cervical cancer. The study uses a phenomenological, hermeneutic approach inspired by Ricoeur's theory of interpretation. Twelve women, aged between 23 and 59 years, were consecutively selected and then followed by participant observation during their examinations and treatment in hospital. They were interviewed on entering the study, a week following their surgery, and 6 months later. The material collected was analysed through a dialectic process consisting of a face-value review of participant experiences (naive reading), structural analysis, and critical interpretation of what it means to be potentially ill. The women were unprepared to find that their screening results showed abnormal cells, indicative of incipient genital cancer. They were frustrated by the results as they had not experienced any symptoms and felt well, despite being diagnosed with a potential disease. Being diagnosed with abnormal cells caused the participants to feel anxious. Their anxiety had subsided 6 months after the cells had been removed. For those who did not require treatment, anxiety flared up with recurrent check-ups. The bio-medical differentiation between pre-stage and actual cancer provided no comfort to the participants, who continued to see themselves as having early stage cancer.

  16. Rock face stability analysis and potential rockfall source detection in Yosemite Valley

    NASA Astrophysics Data System (ADS)

    Matasci, B.; Stock, G. M.; Jaboyedoff, M.; Oppikofer, T.; Pedrazzini, A.; Carrea, D.

    2012-04-01

Rockfall hazard in Yosemite Valley is especially high owing to the great cliff heights (~1 km), the fracturing of the steep granitic cliffs, and the widespread occurrence of surface-parallel sheeting or exfoliation joints. Between 1857 and 2011, 890 documented rockfalls and other slope movements caused 15 fatalities and at least 82 injuries. The first part of this study focused on a structural study of Yosemite Valley at both regional (valley-wide) and local (rockfall source area) scales. The dominant joint sets were fully characterized by their orientation, persistence, spacing, roughness and opening. Spacing and trace length for each joint set were accurately measured on terrestrial laser scanning (TLS) point clouds with the software PolyWorks (InnovMetric). Based on this fundamental information, the second part of the study aimed to detect the most important failure mechanisms leading to rockfalls. With the software Matterocking and the 1 m cell size DEM, we calculated the number of possible failure mechanisms (wedge sliding, planar sliding, toppling) per cell for several cliffs of the valley. Orientation, spacing and persistence measurements directly issued from field and TLS data were inserted in the Matterocking calculations. TLS point clouds are much more accurate than the 1 m DEM and show the overhangs of the cliffs. Accordingly, with the software Coltop 3D we developed a methodology similar to the one used with Matterocking to identify on the TLS point clouds the areas of a cliff with the highest number of failure mechanisms. Exfoliation joints are included in this stability analysis in the same way as the other joint sets, with the only difference that their orientation is parallel to the local cliff orientation and thus variable. This means that, in two separate areas of a cliff, the exfoliation joint set is taken into account with different dip direction and dip, but its effect on the stability assessment is the same. Areas with a high

  17. Development of three-dimensional patient face model that enables real-time collision detection and cutting operation for a dental simulator.

    PubMed

    Yamaguchi, Satoshi; Yamada, Yuya; Yoshida, Yoshinori; Noborio, Hiroshi; Imazato, Satoshi

    2012-01-01

    The virtual reality (VR) simulator is a useful tool for developing dental hand skill. However, VR simulations that include patient reactions leave only limited computational time for reproducing a face model. Our aim was to develop a patient face model that enables real-time collision detection and cutting operation by using stereolithography (STL) and deterministic finite automaton (DFA) data files. We evaluated the dependence of computational cost on the combination of STL and DFA data files, constructed the patient face model under the optimum condition, and assessed the computational costs of four operations: do-nothing, collision, cutting, and combined collision and cutting. The face model was successfully constructed with low computational costs of 11.3, 18.3, 30.3, and 33.5 ms for do-nothing, collision, cutting, and combined collision and cutting, respectively. The patient face model could be useful for developing dental hand skill with VR.
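    The abstract does not give implementation details for the collision test. As a hedged illustration only, real-time collision detection against a triangle mesh such as an STL model is commonly broad-phased with axis-aligned bounding boxes; the `AABB` helper below is a hypothetical sketch, not code from the paper.

    ```python
    class AABB:
        """Axis-aligned bounding box: a cheap broad-phase collision test
        over the vertices of a mesh region (illustrative sketch only)."""

        def __init__(self, points):
            xs, ys, zs = zip(*points)
            self.lo = (min(xs), min(ys), min(zs))
            self.hi = (max(xs), max(ys), max(zs))

        def intersects(self, other):
            # Boxes overlap iff their extents overlap on every axis.
            return all(self.lo[i] <= other.hi[i] and other.lo[i] <= self.hi[i]
                       for i in range(3))
    ```

    Only pairs whose boxes intersect would then need an exact (and more expensive) triangle-level test.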

  18. Increasing the power for detecting impairment in older adults with the Faces subtest from Wechsler Memory Scale-III: an empirical trial.

    PubMed

    Levy, Boaz

    2006-10-01

    Empirical studies have questioned the validity of the Faces subtest from the WMS-III for detecting impairment in visual memory, particularly among the elderly. A recent examination of the test norms revealed a significant age related floor effect already emerging on Faces I (immediate recall), implying excessive difficulty in the acquisition phase among unimpaired older adults. The current study compared the concurrent validity of the Faces subtest with an alternative measure between 16 Alzheimer's patients and 16 controls. The alternative measure was designed to facilitate acquisition by reducing the sequence of item presentation. Other changes aimed at increasing the retrieval challenge, decreasing error due to guessing and standardizing the administration. Analyses converged to indicate that the alternative measure provided a considerably greater differentiation than the Faces subtest between Alzheimer's patients and controls. Steps for revising the Faces subtest are discussed.

  19. Investigating the Causal Role of rOFA in Holistic Detection of Mooney Faces and Objects: An fMRI-guided TMS Study.

    PubMed

    Bona, Silvia; Cattaneo, Zaira; Silvanto, Juha

    2016-01-01

    The right occipital face area (rOFA) is known to be involved in face discrimination based on local featural information. Whether this region is also involved in global, holistic stimulus processing is not known. We used fMRI-guided transcranial magnetic stimulation (TMS) to investigate whether rOFA is causally implicated in stimulus detection based on holistic processing, by the use of Mooney stimuli. Two studies were carried out: In Experiment 1, participants performed a detection task involving Mooney faces and Mooney objects; Mooney stimuli lack distinguishable local features and can be detected solely via holistic processing (i.e. at a global level) with top-down guidance from previously stored representations. Experiment 2 required participants to detect shapes which are recognized via bottom-up integration of local (collinear) Gabor elements and was performed to control for specificity of rOFA's implication in holistic detection. In Experiment 1, TMS over rOFA and rLO impaired detection of all stimulus categories, with no category-specific effect. In Experiment 2, shape detection was impaired when TMS was applied over rLO but not over rOFA. Our results demonstrate that rOFA is causally implicated in the type of top-down holistic detection required by Mooney stimuli and that such role is not face-selective. In contrast, rOFA does not appear to play a causal role in detection of shapes based on bottom-up integration of local components, demonstrating that its involvement in processing non-face stimuli is specific for holistic processing. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  20. Framework for performance evaluation of face, text, and vehicle detection and tracking in video: data, metrics, and protocol.

    PubMed

    Kasturi, Rangachar; Goldgof, Dmitry; Soundararajan, Padmanabhan; Manohar, Vasant; Garofolo, John; Bowers, Rachel; Boonstra, Matthew; Korzhova, Valentina; Zhang, Jing

    2009-02-01

    Common benchmark data sets, standardized performance metrics, and baseline algorithms have demonstrated considerable impact on research and development in a variety of application domains. These resources provide both consumers and developers of technology with a common framework to objectively compare the performance of different algorithms and algorithmic improvements. In this paper, we present such a framework for evaluating object detection and tracking in video: specifically for face, text, and vehicle objects. This framework includes the source video data, ground-truth annotations (along with guidelines for annotation), performance metrics, evaluation protocols, and tools including scoring software and baseline algorithms. For each detection and tracking task and supported domain, we developed a 50-clip training set and a 50-clip test set. Each data clip is approximately 2.5 minutes long and has been completely spatially/temporally annotated at the I-frame level. Each task/domain, therefore, has an associated annotated corpus of approximately 450,000 frames. The scope of such annotation is unprecedented and was designed to begin to support the necessary quantities of data for robust machine learning approaches, as well as a statistically significant comparison of the performance of algorithms. The goal of this work was to systematically address the challenges of object detection and tracking through a common evaluation framework that permits a meaningful objective comparison of techniques, provides the research community with sufficient data for the exploration of automatic modeling techniques, encourages the incorporation of objective evaluation into the development process, and contributes useful lasting resources of a scale and magnitude that will prove to be extremely useful to the computer vision research community for years to come.
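    The framework defines its own annotation guidelines and metrics; as a hedged illustration of the kind of frame-level spatial scoring such evaluations rely on, here is a sketch of greedy box matching by intersection-over-union. The function names and the 0.5 threshold are assumptions for illustration, not the framework's actual definitions.

    ```python
    def iou(a, b):
        """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0

    def frame_detection_counts(gt_boxes, det_boxes, thresh=0.5):
        """Greedy one-to-one matching of detections to ground truth in a frame.
        Returns (true positives, false positives, missed detections)."""
        matched, tp = set(), 0
        for d in det_boxes:
            best, best_i = 0.0, None
            for i, g in enumerate(gt_boxes):
                if i in matched:
                    continue
                o = iou(d, g)
                if o > best:
                    best, best_i = o, i
            if best >= thresh:
                matched.add(best_i)
                tp += 1
        return tp, len(det_boxes) - tp, len(gt_boxes) - tp
    ```

    Per-frame counts like these are typically accumulated over a clip before computing precision/recall-style summary scores.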

  1. Detection of morphological changes in cliff face surrounding a waterfall using terrestrial laser scanning and unmanned aerial system

    NASA Astrophysics Data System (ADS)

    Hayakawa, Yuichi S.; Obanawa, Hiroyuki

    2015-04-01

    A waterfall, or bedrock knickpoint, appears as an erosional front in bedrock rivers, forming a deep V-shaped valley downstream. Following the rapid fluvial erosion of a waterfall, rockfalls and gravitational collapses often occur in the surrounding steep cliffs. Although morphological changes of such steep cliffs are sometimes visually observed, quantitative and precise measurements of their spatio-temporal distribution have been limited by the difficulty of direct access to such cliffs with classical measurement methods. However, for the clarification of geomorphological processes occurring in the cliffs, multi-temporal mapping of the cliff face at a high resolution is necessary. Remote sensing approaches are therefore suitable for the topographic measurement and detection of changes in such inaccessible cliffs. To achieve accurate topographic mapping of cliffs around a waterfall, here we perform multi-temporal terrestrial laser scanning (TLS), as well as structure-from-motion multi-view stereo (SfM-MVS) photogrammetry based on an unmanned aerial system (UAS). The study site is Kegon Falls in central Japan, which has a vertical drop of surface water from the top of its overhanging cliff, as well as groundwater outflows from its lower portions. The bedrock is composed of alternating layers of andesite lava and conglomerates. Minor rockfalls in the cliffs are often observed by local people. The latest major rockfall occurred in 1986, causing ca. 8-m upstream propagation of the waterfall lip. This provides a good opportunity to examine the changes in the surrounding cliffs following the waterfall recession. Multi-temporal point clouds were obtained by TLS measurement over several years, and the three-dimensional changes of the rock surface were detected, uncovering the loci of small rockfalls and gully development. Erosion seems particularly frequent in the relatively weak conglomerate layers, whereas small rockfalls seem to have occurred in the andesite layers. Also, shadows in the

  2. Mechanisms of face perception

    PubMed Central

    Tsao, Doris Y.

    2009-01-01

    Faces are among the most informative stimuli we ever perceive: Even a split-second glimpse of a person's face tells us their identity, sex, mood, age, race, and direction of attention. The specialness of face processing is acknowledged in the artificial vision community, where contests for face recognition algorithms abound. Neurological evidence strongly implicates a dedicated machinery for face processing in the human brain, to explain the double dissociability of face and object recognition deficits. Furthermore, it has recently become clear that macaques too have specialized neural machinery for processing faces. Here we propose a unifying hypothesis, deduced from computational, neurological, fMRI, and single-unit experiments: that what makes face processing special is that it is gated by an obligatory detection process. We will clarify this idea in concrete algorithmic terms, and show how it can explain a variety of phenomena associated with face processing. PMID:18558862

  3. Familiarity facilitates feature-based face processing.

    PubMed

    Visconti di Oleggio Castello, Matteo; Wheeler, Kelsey G; Cipolli, Carlo; Gobbini, M Ida

    2017-01-01

    Recognition of personally familiar faces is remarkably efficient, effortless and robust. We asked if feature-based face processing facilitates detection of familiar faces by testing the effect of face inversion on a visual search task for familiar and unfamiliar faces. Because face inversion disrupts configural and holistic face processing, we hypothesized that inversion would diminish the familiarity advantage to the extent that it is mediated by such processing. Subjects detected personally familiar and stranger target faces in arrays of two, four, or six face images. Subjects showed significant facilitation of personally familiar face detection for both upright and inverted faces. The effect of familiarity on target absent trials, which involved only rejection of unfamiliar face distractors, suggests that familiarity facilitates rejection of unfamiliar distractors as well as detection of familiar targets. The preserved familiarity effect for inverted faces suggests that facilitation of face detection afforded by familiarity reflects mostly feature-based processes.

  4. Game Face

    ERIC Educational Resources Information Center

    Weiner, Jill

    2005-01-01

    In this article, the author discusses "Game Face: Life Lessons Across the Curriculum", a teaching kit that challenges assumptions and builds confidence. Game Face, which is derived from a book and art exhibition, "Game Face: What Does a Female Athlete Look Like?", uses layered and powerful images of women and girls participating in sports to teach…

  5. On the detection of thermohygrometric differences of Juniperus turbinata habitat between north and south faces in the island of El Hierro (Canary Islands)

    NASA Astrophysics Data System (ADS)

    Salva-Catarineu, Montserrat; Salvador-Franch, Ferran; Lopez-Bustins, Joan A.; Padrón-Padrón, Pedro A.; Cortés-Lucas, Amparo

    2016-04-01

    The current extent of Juniperus turbinata on the island of El Hierro is very small due to heavy exploitation for centuries. The recovery of its natural habitat is of high environmental and scenic interest, since this is a protected species in Europe. The study of the environmental factors that help or limit its recovery is therefore indispensable. Our research project (JUNITUR) studied populations of juniper woodlands in El Hierro in different environments, determined mainly by altitude and exposure to north-easterly trade winds. The main objective of this study was to compare the thermohygrometric conditions of three juniper woodlands: La Dehesa (north-west face at 528 m a.s.l.), El Julan (south face at 996 m a.s.l.) and Sabinosa (north face at 258 m a.s.l.). They are located at different altitudes and orientations in El Hierro and present different recovery rates. We used air sensor data loggers fixed to tree branches to record hourly temperature and humidity data in the three study areas. We analysed daily data from three annual cycles (September 2012 to August 2015). Similar thermohygrometric annual cycles among the three study areas were observed. We detected the largest differences in winter temperature and summer humidity between the north (windward) faces (Sabinosa and La Dehesa) and the south (leeward) face (El Julan) of the island. The juniper woodland with the highest recovery rate (El Julan) showed the most extreme temperature conditions in both winter and summer. The results of this project may contribute to the knowledge of juniper bioclimatology in El Hierro, which hosts the largest population of Juniperus turbinata in the Canary Islands.

  6. Laughing in the Face of Fear (of Disease Detection): Using Humor to Promote Cancer Self-Examination Behavior.

    PubMed

    Nabi, Robin L

    2016-07-01

    This research examines the possible benefit of using humor to reduce anxiety associated with performing cancer self-examination behaviors. In Study 1, 187 undergraduates read a humorous public service announcement (PSA) script promoting either breast or testicular self-exams. Results suggest that perception of humor reduced anxiety about self-exams, which, in turn, related to more positive self-exam attitudes. Simultaneously, humor perception associated with greater message processing motivation, which, in turn, associated with more supportive self-exam attitudes. Self-exam attitudes also positively associated with self-exam intentions. These results were largely replicated in Study 2. Further, self-exam intentions predicted self-exam behavior 1 week later. However, consistent with past research, the humorous and serious messages did not generate differences in subsequent self-exam behavior, though the intention-behavior relationship was stronger and significant for those exposed to the humorous versus the serious messages. In light of these findings, and given that humor has the advantage of attracting and holding attention in real message environments, the use of carefully constructed humor appeals may be a viable message strategy to promote health detection behaviors.

  7. Rapid prefrontal cortex activation towards aversively paired faces and enhanced contingency detection are observed in highly trait-anxious women under challenging conditions

    PubMed Central

    Rehbein, Maimu Alissa; Wessing, Ida; Zwitserlood, Pienie; Steinberg, Christian; Eden, Annuschka Salima; Dobel, Christian; Junghöfer, Markus

    2015-01-01

    Relative to healthy controls, anxiety-disorder patients show anomalies in classical conditioning that may either result from, or provide a risk factor for, clinically relevant anxiety. Here, we investigated whether healthy participants with enhanced anxiety vulnerability show abnormalities in a challenging affective-conditioning paradigm, in which many stimulus-reinforcer associations had to be acquired with only a few learning trials. Forty-seven high and low trait-anxious females underwent MultiCS conditioning, in which 52 different neutral faces (CS+) were paired with an aversive noise (US), while a further 52 faces (CS−) remained unpaired. Emotional learning was assessed by evaluative (rating), behavioral (dot-probe, contingency report), and neurophysiological (magnetoencephalography) measures before, during, and after learning. High and low trait-anxious groups did not differ in evaluative ratings or response priming before or after conditioning. High trait-anxious women, however, were better than low trait-anxious women at reporting CS+/US contingencies after conditioning, and showed an enhanced prefrontal cortex (PFC) activation towards CS+ in the M1 (i.e., 80–117 ms) and M170 time intervals (i.e., 140–160 ms) during acquisition. These effects in MultiCS conditioning observed in individuals with elevated trait anxiety are consistent with theories of enhanced conditionability in anxiety vulnerability. Furthermore, they point towards increased threat monitoring and detection in highly trait-anxious females, possibly mediated by alterations in visual working memory. PMID:26113814

  8. Face lift.

    PubMed

    Warren, Richard J; Aston, Sherrell J; Mendelson, Bryan C

    2011-12-01

    After reading this article, the participant should be able to: 1. Identify and describe the anatomy of and changes to the aging face, including changes in bone mass and structure and changes to the skin, tissue, and muscles. 2. Assess each individual's unique anatomy before embarking on face-lift surgery and incorporate various surgical techniques, including fat grafting and other corrective procedures in addition to shifting existing fat to a higher position on the face, into discussions with patients. 3. Identify risk factors and potential complications in prospective patients. 4. Describe the benefits and risks of various techniques. The ability to surgically rejuvenate the aging face has progressed in parallel with plastic surgeons' understanding of facial anatomy. In turn, a more clear explanation now exists for the visible changes seen in the aging face. This article and its associated video content review the current understanding of facial anatomy as it relates to facial aging. The standard face-lift techniques are explained and their various features, both good and bad, are reviewed. The objective is for surgeons to make a better aesthetic diagnosis before embarking on face-lift surgery, and to have the ability to use the appropriate technique depending on the clinical situation.

  9. The Caledonian face test: A new test of face discrimination.

    PubMed

    Logan, Andrew J; Wilkinson, Frances; Wilson, Hugh R; Gordon, Gael E; Loffler, Gunter

    2016-02-01

    This study aimed to develop a clinical test of face perception which is applicable to a wide range of patients and can capture normal variability. The Caledonian face test utilises synthetic faces which combine simplicity with sufficient realism to permit individual identification. Face discrimination thresholds (i.e. minimum difference between faces required for accurate discrimination) were determined in an "odd-one-out" task. The difference between faces was controlled by an adaptive QUEST procedure. A broad range of face discrimination sensitivity was determined from a group (N=52) of young adults (mean 5.75%; SD 1.18; range 3.33-8.84%). The test is fast (3-4 min), repeatable (test-re-test r(2)=0.795) and demonstrates a significant inversion effect. The potential to identify impairments of face discrimination was evaluated by testing LM who reported a lifelong difficulty with face perception. While LM's impairment for two established face tests was close to the criterion for significance (Z-scores of -2.20 and -2.27) for the Caledonian face test, her Z-score was -7.26, implying a more than threefold higher sensitivity. The new face test provides a quantifiable and repeatable assessment of face discrimination ability. The enhanced sensitivity suggests that the Caledonian face test may be capable of detecting more subtle impairments of face perception than available tests. Copyright © 2015 Elsevier Ltd. All rights reserved.
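    The Caledonian face test controls face differences with the Bayesian adaptive QUEST procedure. QUEST itself maintains a full posterior over threshold; as a simpler stand-in that illustrates the same idea of adaptive threshold tracking, here is a 2-down/1-up staircase sketch (this substitute converges near the 70.7%-correct level and does not reproduce QUEST).

    ```python
    def staircase(respond, start, step, n_reversals=8):
        """2-down/1-up adaptive staircase for threshold estimation.

        respond(level) -> True if the observer answered correctly at that
        stimulus level. Returns the mean of the reversal levels as the
        threshold estimate (illustrative sketch, not QUEST).
        """
        level, streak, last_dir, reversals = start, 0, 0, []
        while len(reversals) < n_reversals:
            if respond(level):
                streak += 1
                if streak == 2:                 # two correct in a row: go harder
                    streak = 0
                    level = max(level - step, step)
                    if last_dir == +1:          # direction changed: a reversal
                        reversals.append(level)
                    last_dir = -1
            else:                               # one error: go easier
                streak = 0
                level += step
                if last_dir == -1:
                    reversals.append(level)
                last_dir = +1
        return sum(reversals) / len(reversals)
    ```

    With a deterministic observer who is correct whenever the difference level is at or above threshold, the estimate oscillates tightly around that threshold.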

  10. Recognizing Faces

    ERIC Educational Resources Information Center

    Ellis, Hadyn D.

    1975-01-01

    The proposition that the mechanisms underlying facial recognition are different from those involved in recognizing other classes of pictorial material was assessed following a general review of the literature concerned with recognizing faces. (Author/RK)

  11. About Face

    MedlinePlus Videos and Cool Tools


  12. A simple method for detection of gunshot residue particles from hands, hair, face, and clothing using scanning electron microscopy/wavelength dispersive X-ray (SEM/WDX).

    PubMed

    Kage, S; Kudo, K; Kaizoji, A; Ryumoto, J; Ikeda, H; Ikeda, N

    2001-07-01

    We devised a simple and rapid method for detecting gunshot residue (GSR) particles using scanning electron microscopy/wavelength dispersive X-ray (SEM/WDX) analysis. Experiments were done on samples containing GSR particles obtained from hands, hair, face, and clothing, using double-sided adhesive-coated aluminum stubs (tape-lift method). SEM/WDX analyses for GSR were carried out in three steps: the first step was map analysis for barium (Ba), to search for GSR particles from lead styphnate-primed ammunition, or tin (Sn), to search for GSR particles from mercury fulminate-primed ammunition. The second step was determination of the location of GSR particles by X-ray imaging of Ba or Sn at a magnification of x 1000-2000 in the SEM, using the map-analysis data, and the third step was identification of GSR particles using the WDX spectrometers. Analysis of the samples from each primer on a stub took about 3 h. Practical applications demonstrated the utility of this method.

  13. Face Processing: Models For Recognition

    NASA Astrophysics Data System (ADS)

    Turk, Matthew A.; Pentland, Alexander P.

    1990-03-01

    The human ability to process faces is remarkable. We can identify perhaps thousands of faces learned throughout our lifetime and read facial expression to understand such subtle qualities as emotion. These skills are quite robust, despite sometimes large changes in the visual stimulus due to expression, aging, and distractions such as glasses or changes in hairstyle or facial hair. Computers which model and recognize faces will be useful in a variety of applications, including criminal identification, human-computer interface, and animation. We discuss models for representing faces and their applicability to the task of recognition, and present techniques for identifying faces and detecting eye blinks.

  14. Newborns' Mooney-Face Perception

    ERIC Educational Resources Information Center

    Leo, Irene; Simion, Francesca

    2009-01-01

    The aim of this study is to investigate whether newborns detect a face on the basis of a Gestalt representation based on first-order relational information (i.e., the basic arrangement of face features) by using Mooney stimuli. The incomplete 2-tone Mooney stimuli were used because they preclude focusing both on the local features (i.e., the fine…

  15. A novel thermal face recognition approach using face pattern words

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng

    2010-04-01

    A reliable thermal face recognition system can enhance national security applications such as prevention against terrorism, surveillance, monitoring and tracking, especially at nighttime. The system can be applied at airports, customs or high-alert facilities (e.g., nuclear power plants) 24 hours a day. In this paper, we propose a novel face recognition approach utilizing thermal (long wave infrared) face images that can automatically identify a subject at both daytime and nighttime. With a properly acquired thermal image (as a query image) in the monitoring zone, the following processes are employed: normalization and denoising, face detection, face alignment, face masking, Gabor wavelet transform, face pattern words (FPWs) creation, and face identification by similarity measure (Hamming distance). If eyeglasses are present on a subject's face, an eyeglasses mask is automatically extracted from the query face image and then applied to all FPWs being compared (no further transforms are needed). A high identification rate (97.44% with Top-1 match) has been achieved on our preliminary face dataset (of 39 subjects) with the proposed approach, regardless of operating time and glasses-wearing condition.
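    The final matching step, comparing binary face pattern words by Hamming distance with an optional eyeglasses mask, can be sketched in a few lines. This is a simplified illustration only; the paper's actual FPW encoding from Gabor responses is not reproduced, and the function names are our own.

    ```python
    def hamming_distance(a, b, mask=None):
        """Normalized Hamming distance between two binary codes.
        Bits flagged in the optional mask (e.g., an eyeglasses region)
        are excluded from the comparison."""
        mask = mask or [0] * len(a)
        kept = [(x, y) for x, y, m in zip(a, b, mask) if not m]
        return sum(x != y for x, y in kept) / len(kept)

    def identify(query, gallery):
        """Return the gallery label with the smallest distance (Top-1 match)."""
        return min(gallery, key=lambda label: hamming_distance(query, gallery[label]))
    ```

    Masking the same bit positions in every gallery code keeps the comparison fair when part of the query face (such as the eyeglasses region) is unreliable.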

  16. Face-to-face: Perceived personal relevance amplifies face processing

    PubMed Central

    Pittig, Andre; Schupp, Harald T.; Alpers, Georg W.

    2017-01-01

    Abstract The human face conveys emotional and social information, but it is not well understood how these two aspects influence face perception. In order to model a group situation, two faces displaying happy, neutral or angry expressions were presented. Importantly, faces were either facing the observer, or they were presented in profile view directed towards, or looking away from each other. In Experiment 1 (n = 64), face pairs were rated regarding perceived relevance, wish-to-interact, and displayed interactivity, as well as valence and arousal. All variables revealed main effects of facial expression (emotional > neutral), face orientation (facing observer > towards > away) and interactions showed that evaluation of emotional faces strongly varies with their orientation. Experiment 2 (n = 33) examined the temporal dynamics of perceptual-attentional processing of these face constellations with event-related potentials. Processing of emotional and neutral faces differed significantly in N170 amplitudes, early posterior negativity (EPN), and sustained positive potentials. Importantly, selective emotional face processing varied as a function of face orientation, indicating early emotion-specific (N170, EPN) and late threat-specific effects (LPP, sustained positivity). Taken together, perceived personal relevance to the observer—conveyed by facial expression and face direction—amplifies emotional face processing within triadic group situations. PMID:28158672

  17. Face-to-face: Perceived personal relevance amplifies face processing.

    PubMed

    Bublatzky, Florian; Pittig, Andre; Schupp, Harald T; Alpers, Georg W

    2017-05-01

    The human face conveys emotional and social information, but it is not well understood how these two aspects influence face perception. In order to model a group situation, two faces displaying happy, neutral or angry expressions were presented. Importantly, faces were either facing the observer, or they were presented in profile view directed towards, or looking away from each other. In Experiment 1 (n = 64), face pairs were rated regarding perceived relevance, wish-to-interact, and displayed interactivity, as well as valence and arousal. All variables revealed main effects of facial expression (emotional > neutral), face orientation (facing observer > towards > away) and interactions showed that evaluation of emotional faces strongly varies with their orientation. Experiment 2 (n = 33) examined the temporal dynamics of perceptual-attentional processing of these face constellations with event-related potentials. Processing of emotional and neutral faces differed significantly in N170 amplitudes, early posterior negativity (EPN), and sustained positive potentials. Importantly, selective emotional face processing varied as a function of face orientation, indicating early emotion-specific (N170, EPN) and late threat-specific effects (LPP, sustained positivity). Taken together, perceived personal relevance to the observer-conveyed by facial expression and face direction-amplifies emotional face processing within triadic group situations. © The Author (2017). Published by Oxford University Press.

  18. Famous face recognition, face matching, and extraversion.

    PubMed

    Lander, Karen; Poyarekar, Siddhi

    2015-01-01

    It has been previously established that extraverts who are skilled at interpersonal interaction perform significantly better than introverts on a face-specific recognition memory task. In our experiment we further investigate the relationship between extraversion and face recognition, focusing on famous face recognition and face matching. Results indicate that more extraverted individuals perform significantly better on an upright famous face recognition task and show significantly larger face inversion effects. However, our results did not find an effect of extraversion on face matching or inverted famous face recognition.

  19. Virtual & Real Face to Face Teaching

    ERIC Educational Resources Information Center

    Teneqexhi, Romeo; Kuneshka, Loreta

    2016-01-01

    In traditional "face to face" lessons, during the time the teacher writes on a black or white board, the students are always behind the teacher. Sometimes, this happens even in the recorded lesson in videos. Most of the time during the lesson, the teacher shows to the students his back not his face. We do not think the term "face to…

  20. Alternative face models for 3D face registration

    NASA Astrophysics Data System (ADS)

    Salah, Albert Ali; Alyüz, Neşe; Akarun, Lale

    2007-01-01

    3D has become an important modality for face biometrics. The accuracy of a 3D face recognition system depends on a correct registration that aligns the facial surfaces and makes a comparison possible. The best results obtained so far use a one-to-all registration approach, which means each new facial surface is registered to all faces in the gallery, at a great computational cost. We explore the approach of registering the new facial surface to an average face model (AFM), which automatically establishes correspondence to the pre-registered gallery faces. Going one step further, we propose that using a couple of well-selected AFMs can trade-off computation time with accuracy. Drawing on cognitive justifications, we propose to employ category-specific alternative average face models for registration, which is shown to increase the accuracy of the subsequent recognition. We inspect thin-plate spline (TPS) and iterative closest point (ICP) based registration schemes under realistic assumptions on manual or automatic landmark detection prior to registration. We evaluate several approaches for the coarse initialization of ICP. We propose a new algorithm for constructing an AFM, and show that it works better than a recent approach. Finally, we perform simulations with multiple AFMs that correspond to different clusters in the face shape space and compare these with gender and morphology based groupings. We report our results on the FRGC 3D face database.
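    The ICP registration scheme evaluated above alternates closest-point matching with a least-squares rigid fit. A brute-force toy version is sketched below using the Kabsch algorithm for the rigid fit; the paper's coarse initialization, TPS variant, and AFM machinery are not shown, and this naive nearest-neighbor search would be replaced by a spatial index in practice.

    ```python
    import numpy as np

    def best_rigid_transform(src, dst):
        """Least-squares rotation R and translation t with R @ src_i + t ~ dst_i
        (Kabsch algorithm over matched point pairs)."""
        cs, cd = src.mean(0), dst.mean(0)
        H = (src - cs).T @ (dst - cd)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
        R = Vt.T @ D @ U.T
        return R, cd - R @ cs

    def icp(src, dst, iters=20):
        """Iterative closest point: rigidly register src points onto dst points."""
        cur = src.copy()
        for _ in range(iters):
            # Correspondence step: nearest dst point for each current point.
            d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
            matched = dst[d2.argmin(1)]
            # Alignment step: best rigid transform onto the matched points.
            R, t = best_rigid_transform(cur, matched)
            cur = cur @ R.T + t
        return cur
    ```

    Registering each probe to one (or a few) average face models instead of to every gallery face is precisely what lets the paper trade this per-pair cost for a single registration.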

  1. A Face Inversion Effect without a Face

    ERIC Educational Resources Information Center

    Brandman, Talia; Yovel, Galit

    2012-01-01

    Numerous studies have attributed the face inversion effect (FIE) to configural processing of internal facial features in upright but not inverted faces. Recent findings suggest that face mechanisms can be activated by faceless stimuli presented in the context of a body. Here we asked whether faceless stimuli with or without body context may induce…

  2. Mapping Teacher-Faces

    ERIC Educational Resources Information Center

    Thompson, Greg; Cook, Ian

    2013-01-01

    This paper uses Deleuze and Guattari's concept of faciality to analyse the teacher's face. According to Deleuze and Guattari, the teacher-face is a special type of face because it is an "overcoded" face produced in specific landscapes. This paper suggests four limit-faces for teacher faciality that actualise different mixes of significance and…

  3. IntraFace

    PubMed Central

    De la Torre, Fernando; Chu, Wen-Sheng; Xiong, Xuehan; Vicente, Francisco; Ding, Xiaoyu; Cohn, Jeffrey

    2016-01-01

    Within the last 20 years, there has been an increasing interest in the computer vision community in automated facial image analysis algorithms. This has been driven by applications in animation, market research, autonomous driving, surveillance, and facial editing, among others. To date, there exist several commercial packages for specific facial image analysis tasks such as facial expression recognition, facial attribute analysis or face tracking. However, free and easy-to-use software that incorporates all these functionalities is unavailable. This paper presents IntraFace (IF), a publicly available software package for automated facial feature tracking, head pose estimation, facial attribute recognition, and facial expression analysis from video. In addition, IF includes a newly developed technique for unsupervised synchrony detection to discover correlated facial behavior between two or more persons, a relatively unexplored problem in facial image analysis. In tests, IF achieved state-of-the-art results for emotion expression and action unit detection in three databases, FERA, CK+ and RU-FACS; measured audience reaction to a talk given by one of the authors; and discovered synchrony for smiling in videos of parent-infant interaction. IF is free of charge for academic use at http://www.humansensing.cs.cmu.edu/intraface/. PMID:27346987

  4. IntraFace.

    PubMed

    De la Torre, Fernando; Chu, Wen-Sheng; Xiong, Xuehan; Vicente, Francisco; Ding, Xiaoyu; Cohn, Jeffrey

    2015-05-01

    Within the last 20 years, there has been an increasing interest in the computer vision community in automated facial image analysis algorithms. This has been driven by applications in animation, market research, autonomous driving, surveillance, and facial editing, among others. To date, there exist several commercial packages for specific facial image analysis tasks such as facial expression recognition, facial attribute analysis or face tracking. However, free and easy-to-use software that incorporates all these functionalities is unavailable. This paper presents IntraFace (IF), a publicly available software package for automated facial feature tracking, head pose estimation, facial attribute recognition, and facial expression analysis from video. In addition, IF includes a newly developed technique for unsupervised synchrony detection to discover correlated facial behavior between two or more persons, a relatively unexplored problem in facial image analysis. In tests, IF achieved state-of-the-art results for emotion expression and action unit detection in three databases, FERA, CK+ and RU-FACS; measured audience reaction to a talk given by one of the authors; and discovered synchrony for smiling in videos of parent-infant interaction. IF is free of charge for academic use at http://www.humansensing.cs.cmu.edu/intraface/.

  5. Enhanced attention amplifies face adaptation.

    PubMed

    Rhodes, Gillian; Jeffery, Linda; Evangelista, Emma; Ewing, Louise; Peters, Marianne; Taylor, Libby

    2011-08-15

    Perceptual adaptation not only produces striking perceptual aftereffects, but also enhances coding efficiency and discrimination by calibrating coding mechanisms to prevailing inputs. Attention to simple stimuli increases adaptation, potentially enhancing its functional benefits. Here we show that attention also increases adaptation to faces. In Experiment 1, face identity aftereffects increased when attention to adapting faces was increased using a change detection task. In Experiment 2, figural (distortion) face aftereffects increased when attention was increased using a snap game (detecting immediate repeats) during adaptation. Both were large effects. Contributions of low-level adaptation were reduced using free viewing (both experiments) and a size change between adapt and test faces (Experiment 2). We suggest that attention may enhance adaptation throughout the entire cortical visual pathway, with functional benefits well beyond the immediate advantages of selective processing of potentially important stimuli. These results highlight the potential to facilitate adaptive updating of face-coding mechanisms by strategic deployment of attentional resources. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. Detection of Silent Type I Choroidal Neovascular Membrane in Chronic Central Serous Chorioretinopathy Using En Face Swept-Source Optical Coherence Tomography Angiography.

    PubMed

    Moussa, Magdy; Leila, Mahmoud; Khalid, Hagar; Lolah, Mohamed

    2017-01-01

    To evaluate the efficacy of SS-OCTA in the detection of silent CNV secondary to chronic CSCR compared to that of FFA and SS-OCT. A retrospective observational case series reviewing the clinical data, FFA, SS-OCT, and SS-OCTA images of patients with chronic CSCR, and comparing the findings. SS-OCTA detects the CNV complex and delineates it from the surrounding pathological features of chronic CSCR by utilizing the blood flow detection algorithm, OCTARA, and the ultrahigh-definition B-scan images of the retinal microstructure generated by swept-source technology. The bivariate correlation procedure was used for the calculation of the correlation matrix of the variables tested. The study included 60 eyes of 40 patients. Mean age was 47.6 years. Mean disease duration was 14.5 months. SS-OCTA detected type 1 CNV in 5 eyes (8.3%). In all 5 eyes, FFA and SS-OCT were inconclusive for CNV. The presence of foveal thinning, opaque material beneath irregular flat PED, and increased choroidal thickness in chronic CSCR constitutes a high-risk profile for progression to CNV development. Silent type 1 CNV is an established complication of chronic CSCR. SS-OCTA is indispensable in excluding CNV especially in high-risk patients and whenever FFA and SS-OCT are inconclusive.

  7. Discrimination between smiling faces: Human observers vs. automated face analysis.

    PubMed

    Del Líbano, Mario; Calvo, Manuel G; Fernández-Martín, Andrés; Recio, Guillermo

    2018-05-11

    This study investigated (a) how prototypical happy faces (with happy eyes and a smile) can be discriminated from blended expressions with a smile but non-happy eyes, depending on type and intensity of the eye expression; and (b) how smile discrimination differs for human perceivers versus automated face analysis, depending on affective valence and morphological facial features. Human observers categorized faces as happy or non-happy, or rated their valence. Automated analysis (FACET software) computed seven expressions (including joy/happiness) and 20 facial action units (AUs). Physical properties (low-level image statistics and visual saliency) of the face stimuli were controlled. Results revealed, first, that some blended expressions (especially, with angry eyes) had lower discrimination thresholds (i.e., they were identified as "non-happy" at lower non-happy eye intensities) than others (especially, with neutral eyes). Second, discrimination sensitivity was better for human perceivers than for automated FACET analysis. As an additional finding, affective valence predicted human discrimination performance, whereas morphological AUs predicted FACET discrimination. FACET can be a valid tool for categorizing prototypical expressions, but is currently more limited than human observers for discrimination of blended expressions. Configural processing facilitates detection of in/congruence(s) across regions, and thus detection of non-genuine smiling faces (due to non-happy eyes). Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Attention Capture by Faces

    ERIC Educational Resources Information Center

    Langton, Stephen R. H.; Law, Anna S.; Burton, A. Mike; Schweinberger, Stefan R.

    2008-01-01

    We report three experiments that investigate whether faces are capable of capturing attention when in competition with other non-face objects. In Experiment 1a participants took longer to decide that an array of objects contained a butterfly target when a face appeared as one of the distracting items than when the face did not appear in the array.…

  9. Efficient search for a face by chimpanzees (Pan troglodytes).

    PubMed

    Tomonaga, Masaki; Imura, Tomoko

    2015-07-16

    The face is quite an important stimulus category for human and nonhuman primates in their social lives. Recent advances in comparative-cognitive research clearly indicate that chimpanzees and humans process faces in a special manner; that is, using holistic or configural processing. Both species exhibit the face-inversion effect in which the inverted presentation of a face deteriorates their perception and recognition. Furthermore, recent studies have shown that humans detect human faces among non-facial objects rapidly. We report that chimpanzees detected chimpanzee faces among non-facial objects quite efficiently. This efficient search was not limited to own-species faces. They also found human adult and baby faces--but not monkey faces--efficiently. Additional testing showed that a front-view face was more readily detected than a profile, suggesting the important role of eye-to-eye contact. Chimpanzees also detected a photograph of a banana as efficiently as a face, but a further examination clearly indicated that the banana was detected mainly due to a low-level feature (i.e., color). Efficient face detection was hampered by an inverted presentation, suggesting that configural processing of faces is a critical element of efficient face detection in both species. This conclusion was supported by a simple simulation experiment using the saliency model.

  10. Efficient search for a face by chimpanzees (Pan troglodytes)

    PubMed Central

    Tomonaga, Masaki; Imura, Tomoko

    2015-01-01

    The face is quite an important stimulus category for human and nonhuman primates in their social lives. Recent advances in comparative-cognitive research clearly indicate that chimpanzees and humans process faces in a special manner; that is, using holistic or configural processing. Both species exhibit the face-inversion effect in which the inverted presentation of a face deteriorates their perception and recognition. Furthermore, recent studies have shown that humans detect human faces among non-facial objects rapidly. We report that chimpanzees detected chimpanzee faces among non-facial objects quite efficiently. This efficient search was not limited to own-species faces. They also found human adult and baby faces-but not monkey faces-efficiently. Additional testing showed that a front-view face was more readily detected than a profile, suggesting the important role of eye-to-eye contact. Chimpanzees also detected a photograph of a banana as efficiently as a face, but a further examination clearly indicated that the banana was detected mainly due to a low-level feature (i.e., color). Efficient face detection was hampered by an inverted presentation, suggesting that configural processing of faces is a critical element of efficient face detection in both species. This conclusion was supported by a simple simulation experiment using the saliency model. PMID:26180944

  11. Face Pareidolia in the Rhesus Monkey.

    PubMed

    Taubert, Jessica; Wardle, Susan G; Flessert, Molly; Leopold, David A; Ungerleider, Leslie G

    2017-08-21

    Face perception in humans and nonhuman primates is rapid and accurate [1-4]. In the human brain, a network of visual-processing regions is specialized for faces [5-7]. Although face processing is a priority of the primate visual system, face detection is not infallible. Face pareidolia is the compelling illusion of perceiving facial features on inanimate objects, such as the illusory face on the surface of the moon. Although face pareidolia is commonly experienced by humans, its presence in other species is unknown. Here we provide evidence for face pareidolia in a species known to possess a complex face-processing system [8-10]: the rhesus monkey (Macaca mulatta). In a visual preference task [11, 12], monkeys looked longer at photographs of objects that elicited face pareidolia in human observers than at photographs of similar objects that did not elicit illusory faces. Examination of eye movements revealed that monkeys fixated the illusory internal facial features in a pattern consistent with how they view photographs of faces [13]. Although the specialized response to faces observed in humans [1, 3, 5-7, 14] is often argued to be continuous across primates [4, 15], it was previously unclear whether face pareidolia arose from a uniquely human capacity. For example, pareidolia could be a product of the human aptitude for perceptual abstraction or result from frequent exposure to cartoons and illustrations that anthropomorphize inanimate objects. Instead, our results indicate that the perception of illusory facial features on inanimate objects is driven by a broadly tuned face-detection mechanism that we share with other species. Published by Elsevier Ltd.

  12. Electrically detected magnetic resonance of carbon dangling bonds at the Si-face 4H-SiC/SiO2 interface

    NASA Astrophysics Data System (ADS)

    Gruber, G.; Cottom, J.; Meszaros, R.; Koch, M.; Pobegen, G.; Aichinger, T.; Peters, D.; Hadley, P.

    2018-04-01

    SiC based metal-oxide-semiconductor field-effect transistors (MOSFETs) have gained a significant importance in power electronics applications. However, electrically active defects at the SiC/SiO2 interface degrade the ideal behavior of the devices. The relevant microscopic defects can be identified by electron paramagnetic resonance (EPR) or electrically detected magnetic resonance (EDMR). This helps to decide which changes to the fabrication process will likely lead to further increases of device performance and reliability. EDMR measurements have shown very similar dominant hyperfine (HF) spectra in differently processed MOSFETs although some discrepancies were observed in the measured g-factors. Here, the HF spectra measured of different SiC MOSFETs are compared, and it is argued that the same dominant defect is present in all devices. A comparison of the data with simulated spectra of the C dangling bond (PbC) center and the silicon vacancy (VSi) demonstrates that the PbC center is a more suitable candidate to explain the observed HF spectra.

  13. From face processing to face recognition: Comparing three different processing levels.

    PubMed

    Besson, G; Barragan-Jason, G; Thorpe, S J; Fabre-Thorpe, M; Puma, S; Ceccaldi, M; Barbeau, E J

    2017-01-01

    Verifying that a face is from a target person (e.g. finding someone in the crowd) is a critical ability of the human face processing system. Yet how fast this can be performed is unknown. The 'entry-level shift due to expertise' hypothesis suggests that - since humans are face experts - processing faces should be as fast - or even faster - at the individual than at superordinate levels. In contrast, the 'superordinate advantage' hypothesis suggests that faces are processed from coarse to fine, so that the opposite pattern should be observed. To clarify this debate, three different face processing levels were compared: (1) a superordinate face categorization level (i.e. detecting human faces among animal faces), (2) a face familiarity level (i.e. recognizing famous faces among unfamiliar ones) and (3) verifying that a face is from a target person, our condition of interest. The minimal speed at which faces can be categorized (∼260ms) or recognized as familiar (∼360ms) has largely been documented in previous studies, and thus provides boundaries to compare our condition of interest to. Twenty-seven participants were included. The recent Speed and Accuracy Boosting procedure (SAB) was used since it constrains participants to use their fastest strategy. Stimuli were presented either upright or inverted. Results revealed that verifying that a face is from a target person (minimal RT at ∼260ms) was remarkably fast but longer than the face categorization level (∼240ms) and was more sensitive to face inversion. In contrast, it was much faster than recognizing a face as familiar (∼380ms), a level severely affected by face inversion. Face recognition corresponding to finding a specific person in a crowd thus appears achievable in only a quarter of a second. In favor of the 'superordinate advantage' hypothesis or coarse-to-fine account of the face visual hierarchy, these results suggest a graded engagement of the face processing system across processing levels.

  14. Familiarity Enhances Visual Working Memory for Faces

    ERIC Educational Resources Information Center

    Jackson, Margaret C.; Raymond, Jane E.

    2008-01-01

    Although it is intuitive that familiarity with complex visual objects should aid their preservation in visual working memory (WM), empirical evidence for this is lacking. This study used a conventional change-detection procedure to assess visual WM for unfamiliar and famous faces in healthy adults. Across experiments, faces were upright or…

  15. Infant Face Preferences after Binocular Visual Deprivation

    ERIC Educational Resources Information Center

    Mondloch, Catherine J.; Lewis, Terri L.; Levin, Alex V.; Maurer, Daphne

    2013-01-01

    Early visual deprivation impairs some, but not all, aspects of face perception. We investigated the possible developmental roots of later abnormalities by using a face detection task to test infants treated for bilateral congenital cataract within 1 hour of their first focused visual input. The seven patients were between 5 and 12 weeks old…

  16. Face to Face Communications in Space

    NASA Technical Reports Server (NTRS)

    Cohen, Malcolm M.; Davon, Bonnie P. (Technical Monitor)

    1999-01-01

    It has been reported that human face-to-face communications in space are compromised by facial edema, variations in the orientations of speakers and listeners, and background noises that are encountered in the shuttle and in space stations. To date, nearly all reports have been anecdotal or subjective, in the form of post-flight interviews or questionnaires; objective and quantitative data are generally lacking. Although it is acknowledged that efficient face-to-face communications are essential for astronauts to work safely and effectively, specific ways in which the space environment interferes with non-linguistic communication cues are poorly documented. Because we have only a partial understanding of how non-linguistic communication cues may change with mission duration, it is critically important to obtain objective data, and to evaluate these cues under well-controlled experimental conditions.

  17. Head and face reconstruction

    MedlinePlus

    ... of the face. That is why sometimes a plastic surgeon (for skin and face) and a neurosurgeon ( ... Saunders; 2015:chap 24. McGrath MH, Pomerantz JH. Plastic surgery. In: Townsend CM Jr, Beauchamp RD, Evers ...

  18. Face powder poisoning

    MedlinePlus

    ... poisoning URL of this page: //medlineplus.gov/ency/article/002700.htm Face powder poisoning occurs when someone swallows or ...

  19. Energy efficient face seal

    NASA Technical Reports Server (NTRS)

    Sehnal, J.; Sedy, J.; Etsion, I.; Zobens, A.

    1982-01-01

    Torque, face temperature, leakage, and wear of a flat face seal were compared with those of three coned face seals at pressures up to 2758 kPa and speeds up to 8000 rpm. Axial movement of the mating seal parts was recorded by a digital data acquisition system. The coning of the tungsten carbide primary ring ranged from 0.51 micro-m to 5.6 micro-m. The torque of the coned face seal balanced to 76.3% was on average 42% lower, and the leakage eleven times higher, than that of the standard flat face seal. Reducing the balance of the coned face seal to 51.3% decreased the torque by an additional 44% and increased leakage 12 to 230 times, depending on the seal shaft speed. No measurable wear was observed on the face of the coned seals.

  20. Face Time: Educating Face Transplant Candidates

    PubMed Central

    Lamparello, Brooke M.; Bueno, Ericka M.; Diaz-Siso, Jesus Rodrigo; Sisk, Geoffroy C.; Pomahac, Bohdan

    2013-01-01

    Objective: Face transplantation is the innovative application of microsurgery and immunology to restore appearance and function to those with severe facial disfigurements. Our group aims to establish a multidisciplinary education program that can facilitate informed consent and build a strong knowledge base in patients to enhance adherence to medication regimes, recovery, and quality of life. Methods: We analyzed handbooks from our institution's solid organ transplant programs to identify topics applicable to face transplant patients. The team identified unique features of face transplantation that warrant comprehensive patient education. Results: We created a 181-page handbook to provide subjects interested in pursuing transplantation with a written source of information on the process and team members and to address concerns they may have. While the handbook covers a wide range of topics, it is easy to understand and visually appealing. Conclusions: Face transplantation has many unique aspects that must be relayed to the patients pursuing this novel therapy. Since candidates lack third-party support groups and programs, the transplant team must provide an extensive educational component to enhance this complex process. Practice Implications: As face transplantation continues to develop, programs must create sound education programs that address patients’ needs and concerns to facilitate optimal care. PMID:23861990

  1. Face time: educating face transplant candidates.

    PubMed

    Lamparello, Brooke M; Bueno, Ericka M; Diaz-Siso, Jesus Rodrigo; Sisk, Geoffroy C; Pomahac, Bohdan

    2013-01-01

    Face transplantation is the innovative application of microsurgery and immunology to restore appearance and function to those with severe facial disfigurements. Our group aims to establish a multidisciplinary education program that can facilitate informed consent and build a strong knowledge base in patients to enhance adherence to medication regimes, recovery, and quality of life. We analyzed handbooks from our institution's solid organ transplant programs to identify topics applicable to face transplant patients. The team identified unique features of face transplantation that warrant comprehensive patient education. We created a 181-page handbook to provide subjects interested in pursuing transplantation with a written source of information on the process and team members and to address concerns they may have. While the handbook covers a wide range of topics, it is easy to understand and visually appealing. Face transplantation has many unique aspects that must be relayed to the patients pursuing this novel therapy. Since candidates lack third-party support groups and programs, the transplant team must provide an extensive educational component to enhance this complex process. As face transplantation continues to develop, programs must create sound education programs that address patients' needs and concerns to facilitate optimal care.

  2. Reverse engineering the face space: Discovering the critical features for face identification.

    PubMed

    Abudarham, Naphtali; Yovel, Galit

    2016-01-01

    How do we identify people? What are the critical facial features that define an identity and determine whether two faces belong to the same person or different people? To answer these questions, we applied the face space framework, according to which faces are represented as points in a multidimensional feature space, such that face space distances are correlated with perceptual similarities between faces. In particular, we developed a novel method that allowed us to reveal the critical dimensions (i.e., critical features) of the face space. To that end, we constructed a concrete face space, which included 20 facial features of natural face images, and asked human observers to evaluate feature values (e.g., how thick are the lips). Next, we systematically and quantitatively changed facial features, and measured the perceptual effects of these manipulations. We found that critical features were those for which participants have high perceptual sensitivity (PS) for detecting differences across identities (e.g., which of two faces has thicker lips). Furthermore, these high PS features vary minimally across different views of the same identity, suggesting high PS features support face recognition across different images of the same face. The methods described here set an infrastructure for discovering the critical features of other face categories not studied here (e.g., Asians, familiar) as well as other aspects of face processing, such as attractiveness or trait inferences.
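
    The face-space framework the abstract describes (faces as points in a multidimensional feature space, with distance tracking perceptual similarity) can be sketched concretely. The feature vectors below are invented toy values for illustration, not the study's measured feature ratings:

    ```python
    import math

    # Hypothetical face-space vectors: each face scored on a few features
    # (e.g. lip thickness, eyebrow thickness, eye distance), scaled 0-10.
    # All values are made up for this sketch.
    faces = {
        "A":  [7.0, 3.0, 5.0],
        "A2": [6.8, 3.2, 5.1],   # same identity, different photo: small distance
        "B":  [2.0, 8.0, 4.0],   # different identity: large distance
    }

    def face_distance(u, v):
        """Euclidean distance in face space; smaller = more perceptually similar."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

    d_same = face_distance(faces["A"], faces["A2"])
    d_diff = face_distance(faces["A"], faces["B"])
    print(d_same < d_diff)  # True: within-identity distance is smaller
    ```

    In this framing, the paper's "critical features" would be the dimensions along which within-identity distances stay small while between-identity distances stay large.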

  3. Face Detection and Modeling for Recognition

    DTIC Science & Technology

    2002-01-01

    …gistered range and color images. Figure 1.12: System diagr… …ith and without the transform are shown. For each example, the images shown in the first column are skin regions… …software/products/perflib/ipl/index.htm>. [187] Intel Open Source Computer Vision Library, <http://developer.intel.com/software/opensource/cvfl/opencv

  4. Simple New Method of Detecting Lies By Identifying Invisible Unique Physiological Reflex Response Appearing Often Less Than 10-15 Seconds on the Specific Parts of Face of Lying Person; Quick Screening of Potential Murderers & Problematic Persons.

    PubMed

    Omura, Yoshiaki; Nihrane, Abdallah; Lu, Dominic; Jones, Marilyn K; Shimotsuura, Yasuhiro; Ohki, Motomu

    2015-01-01

    Frequently, we cannot find any significant visible changes when somebody lies, but we found there are significant invisible changes appearing in specific areas of the face when somebody lies, and their location often depends on whether the lie is serious, with or without physical violence involvement. These abnormalities were detected non-invasively at the following areas: 1) the lobules and a small round area of each upper lateral side of the forehead; 2) the skin between the base of the 2 orifices of the nose and the upper end of the upper lip; and 3) the alae of both sides of the nose. These invisible significant changes usually last less than 15 seconds after telling a lie. In these areas, the Bi-Digital O-Ring Test (BDORT), which received a U.S. Patent in 1993, became significantly weak with an abnormal value of (-)7, and TXB2, measured non-invasively, increased from 0.125-0.5ng to 12.5-15ng (within the first 5 seconds) and then went back down to less than 1ng (after 15 seconds). These unique changes can be documented semi-permanently by taking photographs of the face of people who tell a lie, within as short as 10 seconds after the lying statement. These abnormal responses appear in one or more of the above-mentioned 3 areas 1), 2) & 3). At least one abnormal pupil with BDORT of (-)8-(-)12, a marked reduction in acetylcholine, an abnormal increase in any of 3 Alzheimer's disease-associated factors (Apolipoprotein (Apo) E4, β-Amyloid (1-42), Tau protein), and viral and bacterial infections were detected in both pupils and the forehead of murderers and people who often have problems with others. Analysis of well-known typical examples of recent mass murderers was presented as examples. Using these findings, potential murderers and people who are very likely to develop problems with others can be screened within 5-10 minutes by examining their facial photographs and signatures before school admission or employment.

  5. Dissociation of face-selective cortical responses by attention.

    PubMed

    Furey, Maura L; Tanskanen, Topi; Beauchamp, Michael S; Avikainen, Sari; Uutela, Kimmo; Hari, Riitta; Haxby, James V

    2006-01-24

    We studied attentional modulation of cortical processing of faces and houses with functional MRI and magnetoencephalography (MEG). MEG detected an early, transient face-selective response. Directing attention to houses in "double-exposure" pictures of superimposed faces and houses strongly suppressed the characteristic, face-selective functional MRI response in the fusiform gyrus. By contrast, attention had no effect on the M170, the early, face-selective response detected with MEG. Late (>190 ms) category-related MEG responses elicited by faces and houses, however, were strongly modulated by attention. These results indicate that hemodynamic and electrophysiological measures of face-selective cortical processing complement each other. The hemodynamic signals reflect primarily late responses that can be modulated by feedback connections. By contrast, the early, face-specific M170 that was not modulated by attention likely reflects a rapid, feed-forward phase of face-selective processing.

  6. Face inversion increases attractiveness.

    PubMed

    Leder, Helmut; Goller, Juergen; Forster, Michael; Schlageter, Lena; Paul, Matthew A

    2017-07-01

    Assessing facial attractiveness is a ubiquitous, inherent, and hard-wired phenomenon in everyday interactions. As such, it has highly adapted to the default way that faces are typically processed: viewing faces in upright orientation. By inverting faces, we can disrupt this default mode and study how facial attractiveness is assessed. Faces, rotated at 90° (tilted to either side) and 180°, were rated on attractiveness and distinctiveness scales. For both orientations, we found that rotated faces were rated as more attractive and less distinctive than upright faces. Importantly, these effects were more pronounced for faces rated low in upright orientation, and smaller for highly attractive faces. In other words, the less attractive a face was, the more it gained in attractiveness by inversion or rotation. Based on these findings, we argue that facial attractiveness assessments might not rely on the presence of attractive facial characteristics, but on the absence of distinctive, unattractive characteristics. These unattractive characteristics are potentially weighed against an individual, attractive prototype in assessing facial attractiveness. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  7. Brain Activity Related to the Judgment of Face-Likeness: Correlation between EEG and Face-Like Evaluation.

    PubMed

    Nihei, Yuji; Minami, Tetsuto; Nakauchi, Shigeki

    2018-01-01

    Faces represent important information for social communication, because social information, such as face-color, expression, and gender, is obtained from faces. Therefore, individuals tend to find faces unconsciously, even in objects. Why is face-likeness perceived in non-face objects? Previous event-related potential (ERP) studies showed that the P1 component (early visual processing), the N170 component (face detection), and the N250 component (personal detection) reflect the neural processing of faces. Inverted faces were reported to enhance the amplitude and delay the latency of P1 and N170. To investigate face-likeness processing in the brain, we explored the face-related components of the ERP through a face-like evaluation task using natural faces, cars, insects, and Arcimboldo paintings presented upright or inverted. We found a significant correlation between the inversion effect index and face-like scores in P1 in both hemispheres and in N170 in the right hemisphere. These results suggest that judgment of face-likeness occurs in a relatively early stage of face processing.

  8. Brain Activity Related to the Judgment of Face-Likeness: Correlation between EEG and Face-Like Evaluation

    PubMed Central

    Nihei, Yuji; Minami, Tetsuto; Nakauchi, Shigeki

    2018-01-01

    Faces represent important information for social communication, because social information, such as face-color, expression, and gender, is obtained from faces. Therefore, individuals tend to find faces unconsciously, even in objects. Why is face-likeness perceived in non-face objects? Previous event-related potential (ERP) studies showed that the P1 component (early visual processing), the N170 component (face detection), and the N250 component (personal detection) reflect the neural processing of faces. Inverted faces were reported to enhance the amplitude and delay the latency of P1 and N170. To investigate face-likeness processing in the brain, we explored the face-related components of the ERP through a face-like evaluation task using natural faces, cars, insects, and Arcimboldo paintings presented upright or inverted. We found a significant correlation between the inversion effect index and face-like scores in P1 in both hemispheres and in N170 in the right hemisphere. These results suggest that judgment of face-likeness occurs in a relatively early stage of face processing. PMID:29503612

  9. A multi-view face recognition system based on cascade face detector and improved Dlib

    NASA Astrophysics Data System (ADS)

    Zhou, Hongjun; Chen, Pei; Shen, Wei

    2018-03-01

    In this research, we present a framework for a multi-view face detection and recognition system based on a cascade face detector and improved Dlib. This method aims to solve the problems of low efficiency and low accuracy in multi-view face recognition, to build a multi-view face recognition system, and to discover a suitable monitoring scheme. For face detection, the cascade face detector extracts Haar-like features from the training samples, which are used to train a cascade classifier with the AdaBoost algorithm. Next, for face recognition, we propose an improved distance model based on Dlib to improve the accuracy of multi-view face recognition. Furthermore, we applied the proposed method to recognizing face images taken from different viewing directions, including horizontal, overlooking, and looking-up views, and investigated a suitable monitoring scheme. This method works well for multi-view face recognition; it was also simulated and tested, showing satisfactory experimental results.
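
    The detection stage described above rests on two standard Viola-Jones ingredients: Haar-like features computed in constant time from an integral image, and a cascade that rejects non-face windows early. A minimal pure-Python sketch of those ingredients, with made-up stage thresholds rather than a trained classifier:

    ```python
    # Toy sketch of a Viola-Jones-style rejection cascade.
    # Stage thresholds and the single feature type are illustrative, not a trained model.

    def integral_image(img):
        """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
        h, w = len(img), len(img[0])
        ii = [[0] * w for _ in range(h)]
        for y in range(h):
            row_sum = 0
            for x in range(w):
                row_sum += img[y][x]
                ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
        return ii

    def rect_sum(ii, x, y, w, h):
        """Sum of pixels in rectangle [x, x+w) x [y, y+h), in O(1) via the table."""
        a = ii[y + h - 1][x + w - 1]
        b = ii[y - 1][x + w - 1] if y > 0 else 0
        c = ii[y + h - 1][x - 1] if x > 0 else 0
        d = ii[y - 1][x - 1] if x > 0 and y > 0 else 0
        return a - b - c + d

    def two_rect_feature(ii, x, y, w, h):
        """Horizontal two-rectangle Haar-like feature: left half minus right half."""
        half = w // 2
        return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

    def cascade_accepts(ii, window, stages):
        """Accept a window only if every stage clears its threshold."""
        x, y, w, h = window
        for threshold in stages:
            if two_rect_feature(ii, x, y, w, h) < threshold:
                return False        # early rejection: later stages never run
        return True

    # Toy 4x4 image: bright left half (9s), dark right half (1s).
    img = [[9, 9, 1, 1] for _ in range(4)]
    ii = integral_image(img)
    print(cascade_accepts(ii, (0, 0, 4, 4), stages=[10, 30]))  # True: feature = 72 - 8 = 64
    ```

    A real detector evaluates many such features per stage (with boosted weights) and slides the window over all positions and scales; the early-rejection structure is what makes that affordable.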

  10. Technology survey on video face tracking

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Gomes, Herman Martins

    2014-03-01

    With the pervasiveness of monitoring cameras installed in public areas, schools, hospitals, work places and homes, video analytics technologies for interpreting these video contents are becoming increasingly relevant to people's lives. Among such technologies, human face detection and tracking (and face identification in many cases) are particularly useful in various application scenarios. While plenty of research has been conducted on face tracking and many promising approaches have been proposed, there are still significant challenges in recognizing and tracking people in videos with uncontrolled capturing conditions, largely due to pose and illumination variations, as well as occlusions and cluttered background. It is especially complex to track and identify multiple people simultaneously in real time due to the large amount of computation involved. In this paper, we present a survey on literature and software that are published or developed during recent years on the face tracking topic. The survey covers the following topics: 1) mainstream and state-of-the-art face tracking methods, including features used to model the targets and metrics used for tracking; 2) face identification and face clustering from face sequences; and 3) software packages or demonstrations that are available for algorithm development or trial. A number of publicly available databases for face tracking are also introduced.

  11. How Well Do Computer-Generated Faces Tap Face Expertise?

    PubMed

    Crookes, Kate; Ewing, Louise; Gildenhuys, Ju-Dith; Kloth, Nadine; Hayward, William G; Oxner, Matt; Pond, Stephen; Rhodes, Gillian

    2015-01-01

    The use of computer-generated (CG) stimuli in face processing research is proliferating due to the ease with which faces can be generated, standardised and manipulated. However, there has been surprisingly little research into whether CG faces are processed in the same way as photographs of real faces. The present study assessed how well CG faces tap face identity expertise by investigating whether two indicators of face expertise are reduced for CG faces when compared to face photographs. These indicators were accuracy for identification of own-race faces and the other-race effect (ORE)-the well-established finding that own-race faces are recognised more accurately than other-race faces. In Experiment 1 Caucasian and Asian participants completed a recognition memory task for own- and other-race real and CG faces. Overall accuracy for own-race faces was dramatically reduced for CG compared to real faces and the ORE was significantly and substantially attenuated for CG faces. Experiment 2 investigated perceptual discrimination for own- and other-race real and CG faces with Caucasian and Asian participants. Here again, accuracy for own-race faces was significantly reduced for CG compared to real faces. However, the ORE was not affected by format. Together these results signal that CG faces of the type tested here do not fully tap face expertise. Technological advancement may, in the future, produce CG faces that are equivalent to real photographs. Until then, caution is advised when interpreting results obtained using CG faces.

  12. You may look unhappy unless you smile: the distinctiveness of a smiling face against faces without an explicit smile.

    PubMed

    Park, Hyung-Bum; Han, Ji-Eun; Hyun, Joo-Seok

    2015-05-01

    An expressionless face is often perceived as rude whereas a smiling face is considered hospitable. Repetitive exposure to such perceptions may have developed a stereotype of categorizing an expressionless face as expressing negative emotion. To test this idea, we displayed a search array where the target was an expressionless face and the distractors were either smiling or frowning faces, and we manipulated set size. Search reaction times were delayed with frowning distractors, and the delays became more evident as the set size increased. We also devised a short-term comparison task where participants compared two sequential sets of expressionless, smiling, and frowning faces. Detection of an expression change across the sets was highly inaccurate when the change was made between a frowning and an expressionless face. These results indicate that subjects confused the emotions expressed by frowning and expressionless faces, suggesting that it is difficult to distinguish an expressionless face from a frowning one. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Face Search at Scale.

    PubMed

    Wang, Dayong; Otto, Charles; Jain, Anil K

    2017-06-01

    Given the prevalence of social media websites, one challenge facing computer vision researchers is to devise methods to search for persons of interest among the billions of shared photos on these websites. Despite significant progress in face recognition, searching a large collection of unconstrained face images remains a difficult problem. To address this challenge, we propose a face search system which combines a fast filtering procedure with a state-of-the-art commercial off-the-shelf (COTS) matcher in a cascaded framework. Given a probe face, we first filter the large gallery of photos to find the top-k most similar faces using features learned by a convolutional neural network. The k retrieved candidates are re-ranked by combining similarities based on deep features and those output by the COTS matcher. We evaluate the proposed face search system on a gallery containing 80 million web-downloaded face images. Experimental results demonstrate that while the deep features perform worse than the COTS matcher on a mugshot dataset (93.7 percent versus 98.6 percent TAR@FAR of 0.01 percent), fusing the deep features with the COTS matcher improves the overall performance (99.5 percent TAR@FAR of 0.01 percent). This shows that the learned deep features provide complementary information over representations used in state-of-the-art face matchers. On the unconstrained face image benchmarks, the performance of the learned deep features is competitive with reported accuracies. LFW database: 98.20 percent accuracy under the standard protocol and 88.03 percent TAR@FAR of 0.1 percent under the BLUFR protocol; IJB-A benchmark: 51.0 percent TAR@FAR of 0.1 percent (verification), rank 1 retrieval of 82.2 percent (closed-set search), 61.5 percent FNIR@FAR of 1 percent (open-set search). The proposed face search system offers an excellent trade-off between accuracy and scalability on galleries with millions of images. Additionally, in a face search experiment involving
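    The two-stage scheme described in the abstract (filter the gallery by deep-feature similarity, then re-rank the top-k candidates with a fused score) can be sketched as follows. This is our own minimal illustration, not the authors' implementation; `cots_score` stands in for whatever score the commercial matcher returns, and `alpha` is an assumed fusion weight:

    ```python
    import math

    def cosine(u, v):
        # Cosine similarity between two feature vectors.
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    def cascaded_search(probe_feat, gallery_feats, cots_score, k=2, alpha=0.5):
        # Stage 1: rank the whole gallery by deep-feature similarity, keep top-k.
        ranked = sorted(range(len(gallery_feats)),
                        key=lambda i: cosine(probe_feat, gallery_feats[i]),
                        reverse=True)[:k]
        # Stage 2: re-rank only the k candidates by fusing deep similarity
        # with the (expensive) COTS matcher score.
        fused = {i: alpha * cosine(probe_feat, gallery_feats[i])
                    + (1 - alpha) * cots_score(i)
                 for i in ranked}
        return sorted(ranked, key=lambda i: fused[i], reverse=True)
    ```

    The design point is that the costly matcher is only invoked k times per probe rather than once per gallery image, which is what makes the cascade scale to galleries of tens of millions.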

  14. Is face distinctiveness gender based?

    PubMed

    Baudouin, Jean-Yves; Gallay, Mathieu

    2006-08-01

    Two experiments were carried out to study the role of gender category in evaluations of face distinctiveness. In Experiment 1, participants had to evaluate the distinctiveness and the femininity-masculinity of real or artificial composite faces. The composite faces were created by blending either faces of the same gender (sexed composite faces, approximating the sexed prototypes) or faces of both genders (nonsexed composite faces, approximating the face prototype). The results show that the distinctiveness ratings decreased as the number of blended faces increased. Distinctiveness and gender ratings did not covary for real faces or sexed composite faces, but they did vary for nonsexed composite faces. In Experiment 2, participants were asked to state which of two composite faces, one sexed and one nonsexed, was more distinctive. Sexed composite faces were selected less often. The results are interpreted as indicating that distinctiveness is based on sexed prototypes. Implications for face recognition models are discussed. ((c) 2006 APA, all rights reserved).

  15. Stable face representations

    PubMed Central

    Jenkins, Rob; Burton, A. Mike

    2011-01-01

    Photographs are often used to establish the identity of an individual or to verify that they are who they claim to be. Yet, recent research shows that it is surprisingly difficult to match a photo to a face. Neither humans nor machines can perform this task reliably. Although human perceivers are good at matching familiar faces, performance with unfamiliar faces is strikingly poor. The situation is no better for automatic face recognition systems. In practical settings, automatic systems have been consistently disappointing. In this review, we suggest that failure to distinguish between familiar and unfamiliar face processing has led to unrealistic expectations about face identification in applied settings. We also argue that a photograph is not necessarily a reliable indicator of facial appearance, and develop our proposal that summary statistics can provide more stable face representations. In particular, we show that image averaging stabilizes facial appearance by diluting aspects of the image that vary between snapshots of the same person. We review evidence that the resulting images can outperform photographs in both behavioural experiments and computer simulations, and outline promising directions for future research. PMID:21536553

  16. Hole Feature on Conical Face Recognition for Turning Part Model

    NASA Astrophysics Data System (ADS)

    Zubair, A. F.; Abu Mansor, M. S.

    2018-03-01

    Computer Aided Process Planning (CAPP) is the bridge between CAD and CAM, and pre-processing of the CAD data in the CAPP system is essential. For a CNC turning part, the conical faces of the part model must be recognised alongside cylindrical and planar faces. As the sine-cosine structure of the cone radius differs between models, face identification in automatic feature recognition of the part model needs special attention. This paper focuses on hole features on conical faces that can be detected by the CAD solid modeller ACIS via the .SAT file. Detection algorithms for face topology were generated and compared. The study shows different face setups for similar conical part models with different hole-type features. Three types of holes were compared, and the differences between merged and unmerged faces were studied.

  17. Gaze cueing by pareidolia faces.

    PubMed

    Takahashi, Kohske; Watanabe, Katsumi

    2013-01-01

    Visual images that are not faces are sometimes perceived as faces (the pareidolia phenomenon). While the pareidolia phenomenon provides people with a strong impression that a face is present, it is unclear how deeply pareidolia faces are processed as faces. In the present study, we examined whether a shift in spatial attention would be produced by gaze cueing of face-like objects. A robust cueing effect was observed when the face-like objects were perceived as faces. The magnitude of the cueing effect was comparable between the face-like objects and a cartoon face. However, the cueing effect was eliminated when the observer did not perceive the objects as faces. These results demonstrated that pareidolia faces do more than give the impression of the presence of faces; indeed, they trigger an additional face-specific attentional process.

  18. Gaze cueing by pareidolia faces

    PubMed Central

    Takahashi, Kohske; Watanabe, Katsumi

    2013-01-01

    Visual images that are not faces are sometimes perceived as faces (the pareidolia phenomenon). While the pareidolia phenomenon provides people with a strong impression that a face is present, it is unclear how deeply pareidolia faces are processed as faces. In the present study, we examined whether a shift in spatial attention would be produced by gaze cueing of face-like objects. A robust cueing effect was observed when the face-like objects were perceived as faces. The magnitude of the cueing effect was comparable between the face-like objects and a cartoon face. However, the cueing effect was eliminated when the observer did not perceive the objects as faces. These results demonstrated that pareidolia faces do more than give the impression of the presence of faces; indeed, they trigger an additional face-specific attentional process. PMID:25165505

  19. Protective Face Mask

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Mask to protect the physically impaired from injuries to the face and head has been developed by Langley Research Center. It is made of composite materials, usually graphite or boron fibers woven into a matrix. Weighs less than three ounces.

  20. Accustomed to Her Face

    NASA Image and Video Library

    2007-06-26

    After nearly three years at Saturn, the Cassini spacecraft continues to observe the planet's retinue of icy moons. Rhea's cratered face attests to its great age, while its bright wisps hint at tectonic activity in the past.

  1. Neuronal integration in visual cortex elevates face category tuning to conscious face perception

    PubMed Central

    Fahrenfort, Johannes J.; Snijders, Tineke M.; Heinen, Klaartje; van Gaal, Simon; Scholte, H. Steven; Lamme, Victor A. F.

    2012-01-01

    The human brain has the extraordinary capability to transform cluttered sensory input into distinct object representations. For example, it is able to rapidly and seemingly without effort detect object categories in complex natural scenes. Surprisingly, category tuning is not sufficient to achieve conscious recognition of objects. What neural process beyond category extraction might elevate neural representations to the level where objects are consciously perceived? Here we show that visible and invisible faces produce similar category-selective responses in the ventral visual cortex. The pattern of neural activity evoked by visible faces could be used to decode the presence of invisible faces and vice versa. However, only visible faces caused extensive response enhancements and changes in neural oscillatory synchronization, as well as increased functional connectivity between higher and lower visual areas. We conclude that conscious face perception is more tightly linked to neural processes of sustained information integration and binding than to processes accommodating face category tuning. PMID:23236162

  2. A face in a (temporal) crowd.

    PubMed

    Hacker, Catrina M; Meschke, Emily X; Biederman, Irving

    2018-03-20

    Familiar objects, specified by name, can be identified with high accuracy when embedded in a rapidly presented sequence of images at rates exceeding 10 images/s. Not only can target objects be detected at such brief presentation rates, they can also be detected under high uncertainty, where their classification is defined negatively, e.g., "Not a Tool." The identification of a familiar speaker's voice declines precipitously when uncertainty is increased from one to a mere handful of possible speakers. Is the limitation imposed by uncertainty, i.e., the number of possible individuals, a general characteristic of processes for person individuation such that the identifiability of a familiar face would undergo a similar decline with uncertainty? Specifically, could the presence of an unnamed celebrity, thus any celebrity, be detected when presented in a rapid sequence of unfamiliar faces? If so, could the celebrity be identified? Despite the markedly greater physical similarity of faces compared to objects that are, say, not tools, the presence of a celebrity could be detected with moderately high accuracy (∼75%) at rates exceeding 7 faces/s. False alarms were exceedingly rare as almost all the errors were misses. Detection accuracy by moderate congenital prosopagnosics was lower than controls, but still well above chance. Given the detection of the presence of a celebrity, all subjects were almost always able to identify that celebrity, providing no role for a covert familiarity signal outside of awareness. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Video face recognition against a watch list

    NASA Astrophysics Data System (ADS)

    Abbas, Jehanzeb; Dagli, Charlie K.; Huang, Thomas S.

    2007-10-01

    Due to the recent large increase in video surveillance data, collected in an effort to maintain high security at public places, we need more robust systems to analyze this data and make tasks like face recognition a realistic possibility in challenging environments. In this paper we explore a watch-list scenario where we use an appearance-based model to classify query faces from low-resolution videos as either watch-list or non-watch-list faces, where the watch-list comprises those people we are interested in recognizing. We then use our simple yet powerful face recognition system to recognize the faces classified as watch-list faces. Our system uses simple feature machine algorithms from our previous work to match video faces against still images. To test our approach, we match video faces against a large database of still images obtained, in previous work in the field, from Yahoo News over a period of time. We do this matching in an efficient manner to produce a faster, nearly real-time system. This system can be incorporated into a larger surveillance system equipped with advanced algorithms involving anomalous event detection and activity recognition. This is a step towards more secure and robust surveillance systems and efficient video data analysis.

  4. How Fast is Famous Face Recognition?

    PubMed Central

    Barragan-Jason, Gladys; Lachat, Fanny; Barbeau, Emmanuel J.

    2012-01-01

    The rapid recognition of familiar faces is crucial for social interactions. However the actual speed with which recognition can be achieved remains largely unknown as most studies have been carried out without any speed constraints. Different paradigms have been used, leading to conflicting results, and although many authors suggest that face recognition is fast, the speed of face recognition has not been directly compared to “fast” visual tasks. In this study, we sought to overcome these limitations. Subjects performed three tasks, a familiarity categorization task (famous faces among unknown faces), a superordinate categorization task (human faces among animal ones), and a gender categorization task. All tasks were performed under speed constraints. The results show that, despite the use of speed constraints, subjects were slow when they had to categorize famous faces: minimum reaction time was 467 ms, which is 180 ms more than during superordinate categorization and 160 ms more than in the gender condition. Our results are compatible with a hierarchy of face processing from the superordinate level to the familiarity level. The processes taking place between detection and recognition need to be investigated in detail. PMID:23162503

  5. Dynamic Encoding of Face Information in the Human Fusiform Gyrus

    PubMed Central

    Ghuman, Avniel Singh; Brunet, Nicolas M.; Li, Yuanning; Konecky, Roma O.; Pyles, John A.; Walls, Shawn A.; Destefino, Vincent; Wang, Wei; Richardson, R. Mark

    2014-01-01

    Humans’ ability to rapidly and accurately detect, identify, and classify faces under variable conditions derives from a network of brain regions highly tuned to face information. The fusiform face area (FFA) is thought to be a computational hub for face processing; however, the temporal dynamics of face information processing in FFA remain unclear. Here we use multivariate pattern classification to decode the temporal dynamics of expression-invariant face information processing using electrodes placed directly upon FFA in humans. Early FFA activity (50-75 ms) contained information regarding whether participants were viewing a face. Activity between 200-500 ms contained expression-invariant information about which of 70 faces participants were viewing, along with the individual differences in facial features and their configurations. Long-lasting (500+ ms) broadband gamma frequency activity predicted task performance. These results elucidate the dynamic computational role FFA plays in multiple face processing stages and indicate what information is used in performing these visual analyses. PMID:25482825

  6. Successful decoding of famous faces in the fusiform face area.

    PubMed

    Axelrod, Vadim; Yovel, Galit

    2015-01-01

    What are the neural mechanisms of face recognition? It is believed that the network of face-selective areas, which spans the occipital, temporal, and frontal cortices, is important in face recognition. A number of previous studies indeed reported that face identity could be discriminated based on patterns of multivoxel activity in the fusiform face area and the anterior temporal lobe. However, given the difficulty in localizing the face-selective area in the anterior temporal lobe, its role in face recognition is still unknown. Furthermore, previous studies limited their analysis to occipito-temporal regions without testing identity decoding in more anterior face-selective regions, such as the amygdala and prefrontal cortex. In the current high-resolution functional Magnetic Resonance Imaging study, we systematically examined the decoding of the identity of famous faces in the temporo-frontal network of face-selective and adjacent non-face-selective regions. A special focus has been put on the face-area in the anterior temporal lobe, which was reliably localized using an optimized scanning protocol. We found that face-identity could be discriminated above chance level only in the fusiform face area. Our results corroborate the role of the fusiform face area in face recognition. Future studies are needed to further explore the role of the more recently discovered anterior face-selective areas in face recognition.

  7. Successful Decoding of Famous Faces in the Fusiform Face Area

    PubMed Central

    Axelrod, Vadim; Yovel, Galit

    2015-01-01

    What are the neural mechanisms of face recognition? It is believed that the network of face-selective areas, which spans the occipital, temporal, and frontal cortices, is important in face recognition. A number of previous studies indeed reported that face identity could be discriminated based on patterns of multivoxel activity in the fusiform face area and the anterior temporal lobe. However, given the difficulty in localizing the face-selective area in the anterior temporal lobe, its role in face recognition is still unknown. Furthermore, previous studies limited their analysis to occipito-temporal regions without testing identity decoding in more anterior face-selective regions, such as the amygdala and prefrontal cortex. In the current high-resolution functional Magnetic Resonance Imaging study, we systematically examined the decoding of the identity of famous faces in the temporo-frontal network of face-selective and adjacent non-face-selective regions. A special focus has been put on the face-area in the anterior temporal lobe, which was reliably localized using an optimized scanning protocol. We found that face-identity could be discriminated above chance level only in the fusiform face area. Our results corroborate the role of the fusiform face area in face recognition. Future studies are needed to further explore the role of the more recently discovered anterior face-selective areas in face recognition. PMID:25714434

  8. Friends with Faces: How Social Networks Can Enhance Face Recognition and Vice Versa

    NASA Astrophysics Data System (ADS)

    Mavridis, Nikolaos; Kazmi, Wajahat; Toulis, Panos

    The "friendship" relation, a social relation among individuals, is one of the primary relations modeled in some of the world's largest online social networking sites, such as "FaceBook." On the other hand, the "co-occurrence" relation, as a relation among faces appearing in pictures, is one that is easily detectable using modern face detection techniques. These two relations, though appearing in different realms (social vs. visual sensory), have a strong correlation: faces that co-occur in photos often belong to individuals who are friends. We use real-world data gathered from "Facebook" as part of the "FaceBots" project, which built the world's first physical face-recognizing and conversing robot that can utilize and publish information on "Facebook." We present here methods as well as results for utilizing this correlation in both directions: algorithms for utilizing knowledge of the social context for faster and better face recognition, as well as algorithms for estimating the friendship network of a number of individuals given photos containing their faces. The results are quite encouraging. In the primary example, a doubling of the recognition accuracy as well as a sixfold improvement in speed is demonstrated. Various improvements, interesting statistics, as well as an empirical investigation leading to predictions of scalability to much bigger data sets are discussed.
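    One direction of the correlation above — using social context to improve recognition — can be illustrated with a simple score adjustment: candidates who are friends with someone already recognized in the same photo get their appearance score boosted. This is a hypothetical stand-in of our own devising, not the FaceBots algorithm; the `boost` factor is an assumed parameter:

    ```python
    def recognize_with_context(appearance_scores, cotagged, friends, boost=2.0):
        # appearance_scores: dict identity -> face-matcher score for the query face
        # cotagged: identities already recognized in the same photo
        # friends: dict identity -> set of that person's friends (the social graph)
        adjusted = {}
        for person, score in appearance_scores.items():
            # Boost anyone who is a friend of someone already in the photo.
            if any(person in friends.get(other, set()) for other in cotagged):
                score *= boost
            adjusted[person] = score
        return max(adjusted, key=adjusted.get)
    ```

    The reverse direction — inferring the friendship graph from photos — amounts to counting co-occurrences and thresholding, using the same correlation in the other direction.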

  9. First Impressions From Faces.

    PubMed

    Zebrowitz, Leslie A

    2017-06-01

    Although cultural wisdom warns 'don't judge a book by its cover,' we seem unable to inhibit this tendency even though it can produce inaccurate impressions of people's psychological traits and has significant social consequences. One explanation for this paradox is that first impressions of faces overgeneralize our adaptive impressions of categories of people that those faces resemble (including babies, familiar or unfamiliar people, unfit people, emotional people). Research testing these 'overgeneralization' hypotheses elucidates why we form first impressions from faces, what impressions we form, and what cues influence these impressions. This article focuses on commonalities in impressions across diverse perceivers. However, brief attention is given to individual differences in impressions and impression accuracy.

  10. First Impressions From Faces

    PubMed Central

    Zebrowitz, Leslie A

    2016-01-01

    Although cultural wisdom warns ‘don’t judge a book by its cover,’ we seem unable to inhibit this tendency even though it can produce inaccurate impressions of people’s psychological traits and has significant social consequences. One explanation for this paradox is that first impressions of faces overgeneralize our adaptive impressions of categories of people that those faces resemble (including babies, familiar or unfamiliar people, unfit people, emotional people). Research testing these ‘overgeneralization’ hypotheses elucidates why we form first impressions from faces, what impressions we form, and what cues influence these impressions. This article focuses on commonalities in impressions across diverse perceivers. However, brief attention is given to individual differences in impressions and impression accuracy. PMID:28630532

  11. An equine pain face

    PubMed Central

    Gleerup, Karina B; Forkman, Björn; Lindegaard, Casper; Andersen, Pia H

    2015-01-01

    Objective The objective of this study was to investigate the existence of an equine pain face and to describe this in detail. Study design Semi-randomized, controlled, crossover trial. Animals Six adult horses. Methods Pain was induced with two noxious stimuli, a tourniquet on the antebrachium and topical application of capsaicin. All horses participated in two control trials and received both noxious stimuli twice, once with and once without an observer present. During all sessions their pain state was scored. The horses were filmed and the close-up video recordings of the faces were analysed for alterations in behaviour and facial expressions. Still images from the trials were evaluated for the presence of each of the specific pain face features identified from the video analysis. Results Both noxious challenges were effective in producing a pain response resulting in significantly increased pain scores. Alterations in facial expressions were observed in all horses during all noxious stimulations. The number of pain face features present on the still images from the noxious challenges were significantly higher than for the control trial (p = 0.0001). Facial expressions representative for control and pain trials were condensed into explanatory illustrations. During pain sessions with an observer present, the horses increased their contact-seeking behavior. Conclusions and clinical relevance An equine pain face comprising ‘low’ and/or ‘asymmetrical’ ears, an angled appearance of the eyes, a withdrawn and/or tense stare, mediolaterally dilated nostrils and tension of the lips, chin and certain facial muscles can be recognized in horses during induced acute pain. This description of an equine pain face may be useful for improving tools for pain recognition in horses with mild to moderate pain. PMID:25082060

  12. Valence modulates source memory for faces.

    PubMed

    Bell, Raoul; Buchner, Axel

    2010-01-01

    Previous studies in which the effects of emotional valence on old-new discrimination and source memory have been examined have yielded highly inconsistent results. Here, we present two experiments showing that old-new face discrimination was not affected by whether a face was associated with disgusting, pleasant, or neutral behavior. In contrast, source memory for faces associated with disgusting behavior (i.e., memory for the disgusting context in which the face was encountered) was consistently better than source memory for other types of faces. This data pattern replicates the findings of studies in which descriptions of cheating, neutral, and trustworthy behavior were used, which findings were previously ascribed to a highly specific cheater detection module. The present results suggest that the enhanced source memory for faces of cheaters is due to a more general source memory advantage for faces associated with negative or threatening contexts that may be instrumental in avoiding the negative consequences of encounters with persons associated with negative or threatening behaviors.

  13. Explaining Sad People's Memory Advantage for Faces.

    PubMed

    Hills, Peter J; Marquardt, Zoe; Young, Isabel; Goodenough, Imogen

    2017-01-01

    Sad people recognize faces more accurately than happy people (Hills et al., 2011). We devised four hypotheses for this finding that are tested against one another in the current study. The four hypotheses are: (1) sad people engage in more of the expert processing associated with face processing; (2) sad people are motivated to be more accurate than happy people in an attempt to repair their mood; (3) sad people have a defocused attentional strategy that allows more information about a face to be encoded; and (4) sad people scan more of the face than happy people, leading to more facial features being encoded. In Experiment 1, we found that dysphoria (sad mood often associated with depression) was not correlated with the face-inversion effect (a measure of expert processing) nor with response times, but was correlated with defocused attention and recognition accuracy. Experiment 2 established that dysphoric participants detected changes made to more facial features than happy participants. In Experiment 3, using eye-tracking, we found that sad-induced participants sampled more of the face whilst avoiding the eyes. Experiment 4 showed that sad-induced people demonstrated a smaller own-ethnicity bias. These results indicate that sad people show different attentional allocation to faces than happy and neutral people.

  14. Familiarity enhances visual working memory for faces.

    PubMed

    Jackson, Margaret C; Raymond, Jane E

    2008-06-01

    Although it is intuitive that familiarity with complex visual objects should aid their preservation in visual working memory (WM), empirical evidence for this is lacking. This study used a conventional change-detection procedure to assess visual WM for unfamiliar and famous faces in healthy adults. Across experiments, faces were upright or inverted and a low- or high-load concurrent verbal WM task was administered to suppress contribution from verbal WM. Even with a high verbal memory load, visual WM performance was significantly better and capacity estimated as significantly greater for famous versus unfamiliar faces. Face inversion abolished this effect. Thus, neither strategic, explicit support from verbal WM nor low-level feature processing easily accounts for the observed benefit of high familiarity for visual WM. These results demonstrate that storage of items in visual WM can be enhanced if robust visual representations of them already exist in long-term memory.
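    Capacity estimates in change-detection tasks like this one are commonly derived with Cowan's K, which corrects the hit rate by the false-alarm rate and scales by set size. A one-line sketch of the standard formula (our illustration; the paper does not state which estimator it used):

    ```python
    def cowan_k(hit_rate, false_alarm_rate, set_size):
        # Cowan's K: estimated number of items held in visual working memory
        # for a single-probe change-detection task, K = N * (H - FA).
        return set_size * (hit_rate - false_alarm_rate)
    ```

    For example, a hit rate of 0.9 and a false-alarm rate of 0.1 at set size 4 yields an estimated capacity of about 3.2 items.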

  15. Facing Aggression: Cues Differ for Female versus Male Faces

    PubMed Central

    Geniole, Shawn N.; Keyes, Amanda E.; Mondloch, Catherine J.; Carré, Justin M.; McCormick, Cheryl M.

    2012-01-01

    The facial width-to-height ratio (face ratio) is a sexually dimorphic metric associated with actual aggression in men and with observers' judgements of aggression in male faces. Here, we sought to determine if observers' judgements of aggression were associated with the face ratio in female faces. In three studies, participants rated photographs of female and male faces on aggression, femininity, masculinity, attractiveness, and nurturing. In Studies 1 and 2, for female and male faces, judgements of aggression were associated with the face ratio even when other cues in the face related to masculinity were controlled statistically. Nevertheless, correlations between the face ratio and judgements of aggression were smaller for female than for male faces (F(1,36) = 7.43, p = 0.01). In Study 1, there was no significant relationship between judgements of femininity and of aggression in female faces. In Study 2, the association between judgements of masculinity and aggression was weaker in female faces than for male faces in Study 1. The weaker association in female faces may be because aggression and masculinity are stereotypically male traits. Thus, in Study 3, observers rated faces on nurturing (a stereotypically female trait) and on femininity. Judgements of nurturing were associated with femininity (positively) and masculinity (negatively) ratings in both female and male faces. In summary, the perception of aggression differs in female versus male faces. The sex difference was not simply because aggression is a gendered construct; the relationships between masculinity/femininity and nurturing were similar for male and female faces even though nurturing is also a gendered construct. Masculinity and femininity ratings are not associated with aggression ratings nor with the face ratio for female faces. In contrast, all four variables are highly inter-correlated in male faces, likely because these cues in male faces serve as “honest signals”. PMID:22276184

  16. Facing aggression: cues differ for female versus male faces.

    PubMed

    Geniole, Shawn N; Keyes, Amanda E; Mondloch, Catherine J; Carré, Justin M; McCormick, Cheryl M

    2012-01-01

    The facial width-to-height ratio (face ratio) is a sexually dimorphic metric associated with actual aggression in men and with observers' judgements of aggression in male faces. Here, we sought to determine if observers' judgements of aggression were associated with the face ratio in female faces. In three studies, participants rated photographs of female and male faces on aggression, femininity, masculinity, attractiveness, and nurturing. In Studies 1 and 2, for female and male faces, judgements of aggression were associated with the face ratio even when other cues in the face related to masculinity were controlled statistically. Nevertheless, correlations between the face ratio and judgements of aggression were smaller for female than for male faces (F(1,36) = 7.43, p = 0.01). In Study 1, there was no significant relationship between judgements of femininity and of aggression in female faces. In Study 2, the association between judgements of masculinity and aggression was weaker in female faces than for male faces in Study 1. The weaker association in female faces may be because aggression and masculinity are stereotypically male traits. Thus, in Study 3, observers rated faces on nurturing (a stereotypically female trait) and on femininity. Judgements of nurturing were associated with femininity (positively) and masculinity (negatively) ratings in both female and male faces. In summary, the perception of aggression differs in female versus male faces. The sex difference was not simply because aggression is a gendered construct; the relationships between masculinity/femininity and nurturing were similar for male and female faces even though nurturing is also a gendered construct. Masculinity and femininity ratings are not associated with aggression ratings nor with the face ratio for female faces. In contrast, all four variables are highly inter-correlated in male faces, likely because these cues in male faces serve as "honest signals".

  17. Face pose tracking using the four-point algorithm

    NASA Astrophysics Data System (ADS)

    Fung, Ho Yin; Wong, Kin Hong; Yu, Ying Kin; Tsui, Kwan Pang; Kam, Ho Chuen

    2017-06-01

    In this paper, we have developed an algorithm to track the pose of a human face robustly and efficiently. Face pose estimation is very useful in many applications such as building virtual reality systems and creating an alternative input method for the disabled. Firstly, we have modified a face detection toolbox called DLib for the detection of a face in front of a camera. The detected face features are passed to a pose estimation method, known as the four-point algorithm, for pose computation. The theory applied and the technical problems encountered during system development are discussed in the paper. It is demonstrated that the system is able to track the pose of a face in real time using a consumer grade laptop computer.
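    The four-point algorithm referenced above recovers camera pose from four coplanar landmark correspondences. As an illustrative sketch only (not the authors' implementation), a planar pose can be obtained by estimating a homography with the direct linear transform and decomposing it using known camera intrinsics `K`; the point ordering, intrinsics, and landmark model below are assumptions:

    ```python
    import numpy as np

    def homography_dlt(obj_xy, img_uv):
        """Direct linear transform: homography mapping planar points (X, Y) to pixels (u, v)."""
        rows = []
        for (X, Y), (u, v) in zip(obj_xy, img_uv):
            rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
            rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
        # The homography is the null vector of the stacked constraint matrix.
        _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
        return Vt[-1].reshape(3, 3)

    def four_point_pose(obj_xy, img_uv, K):
        """Recover rotation R and translation t of a planar target from 4 correspondences."""
        H = homography_dlt(obj_xy, img_uv)
        M = np.linalg.inv(K) @ H          # M is proportional to [r1 r2 t]
        M = M / np.linalg.norm(M[:, 0])   # fix the scale so that ||r1|| = 1
        if M[2, 2] < 0:                   # enforce the target being in front of the camera
            M = -M
        r1, r2, t = M[:, 0], M[:, 1], M[:, 2]
        R = np.column_stack([r1, r2, np.cross(r1, r2)])
        U, _, Vt = np.linalg.svd(R)       # re-orthonormalize the rotation
        return U @ Vt, t
    ```

    Fed with four detected landmarks that lie roughly on a plane (e.g., eye corners and mouth corners) and their model coordinates, this returns a rotation and translation of the head relative to the camera.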

  18. Mechanical Face Seal Dynamics.

    DTIC Science & Technology

    1985-12-01

    [Abstract not recoverable: the scanned DD Form 1473 report documentation page did not OCR cleanly. Legible fragments of the preface refer to the dimensionless mass m and support damping D in the face seal model.]

  19. Lightweight Face Mask

    NASA Technical Reports Server (NTRS)

    Cason, W. E. I.; Baucom, R. M.; Evans, R. C.

    1982-01-01

    Lightweight face mask originally developed to protect epileptic patients during seizures could have many other medical and nonmedical applications such as muscular dystrophy patients, football linesmen and riot-control police. Masks are extremely lightweight, the lightest of the configurations weighing only 136 grams.

  20. Many Faces Have I.

    ERIC Educational Resources Information Center

    Zilliox, Joseph T.; Lowery, Shannon G.

    1997-01-01

    Describes an extended investigation of polygons and polyhedra which was conducted in response to a challenge posed in Focus, a newsletter from the Mathematical Association of America (MAA). Students were challenged to construct a polyhedron with faces that measure more than 13 inches to a side. Outlines the process, including the questions posed…

  1. Workforce Issues Facing HRD.

    ERIC Educational Resources Information Center

    1995

    These four papers are from a symposium facilitated by Eugene Andette on workforce issues facing human resource development (HRD) at the 1995 Academy of Human Resource Development conference. "Meaning Construction and Personal Transformation: Alternative Dimensions of Job Loss" (Terri A. Deems) reports a study conducted to explore the ways…

  2. Problems Facing Rural Schools.

    ERIC Educational Resources Information Center

    Stewart, C. E.; And Others

    Problems facing rural Scottish schools range from short term consideration of daily operation to long term consideration of organizational alternatives. Addressed specifically, such problems include consideration of: (1) liaison between a secondary school and its feeder primary schools; (2) preservice teacher training for work in small, isolated…

  3. Facing Up to Death

    ERIC Educational Resources Information Center

    Ross, Elizabeth Kubler

    1972-01-01

    Doctor urges that Americans accept death as a part of life and suggests ways of helping dying patients and their families face reality calmly, with peace. Dying children and their siblings, as well as children's feelings about relatives' deaths, are also discussed. (PD)

  4. A Wall of Faces

    ERIC Educational Resources Information Center

    Stevens, Lori

    2008-01-01

    Visitors to the campus of Orland High School (OHS) will never question that they have stepped into a world of the masses: kids, activity, personalities, busyness, and playfulness--a veritable cloud of mild bedlam. The wall of ceramic faces that greets a visitor in the school office is another reminder of the organized chaos that the teachers…

  5. Category search speeds up face-selective fMRI responses in a non-hierarchical cortical face network.

    PubMed

    Jiang, Fang; Badler, Jeremy B; Righi, Giulia; Rossion, Bruno

    2015-05-01

    The human brain is extremely efficient at detecting faces in complex visual scenes, but the spatio-temporal dynamics of this remarkable ability, and how it is influenced by category-search, remain largely unknown. In the present study, human subjects were shown gradually-emerging images of faces or cars in visual scenes, while neural activity was recorded using functional magnetic resonance imaging (fMRI). Category search was manipulated by the instruction to indicate the presence of either a face or a car, in different blocks, as soon as an exemplar of the target category was detected in the visual scene. The category selectivity of most face-selective areas was enhanced when participants were instructed to report the presence of faces in gradually decreasing noise stimuli. Conversely, the same regions showed much less selectivity when participants were instructed instead to detect cars. When "face" was the target category, the fusiform face area (FFA) showed consistently earlier differentiation of face versus car stimuli than did the "occipital face area" (OFA). When "car" was the target category, only the FFA showed differentiation of face versus car stimuli. These observations provide further challenges for hierarchical models of cortical face processing and show that during gradual revealing of information, selective category-search may decrease the required amount of information, enhancing and speeding up category-selective responses in the human brain. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Framing faces: Frame alignment impacts holistic face perception.

    PubMed

    Curby, Kim M; Entenman, Robert

    2016-11-01

    Traditional accounts of face perception emphasise the importance of the prototypical configuration of features within faces. However, here we probe influences of more general perceptual grouping mechanisms on holistic face perception. Participants made part-matching judgments about composite faces presented in intact external oval frames or frames made from misaligned oval parts. This manipulation served to disrupt basic perceptual grouping cues that facilitate the grouping of the two face halves together. This manipulation also produced an external face contour like that in the standard misaligned condition used within the classic composite face task. Notably, by introducing a discontinuity in the external contour, grouping of the face halves into a cohesive unit was discouraged, but face configuration was preserved. Conditions where both the face parts and the frames were misaligned together, as in the typical composite task paradigm, or where just the internal face parts were misaligned, were also included. Disrupting only the face frame disrupted holistic face perception as much as disrupting both the frame and the face configuration. However, misaligned face parts presented in aligned frames also incurred a cost to holistic perception. These findings provide support for the contribution of general-purpose perceptual grouping mechanisms to holistic face perception and are presented and discussed in the context of an enhanced object-based selection account of holistic perception.

  7. Conjunction Faces Alter Confidence-Accuracy Relations for Old Faces

    ERIC Educational Resources Information Center

    Reinitz, Mark Tippens; Loftus, Geoffrey R.

    2017-01-01

    The authors used a state-trace methodology to investigate the informational dimensions used to recognize old and conjunction faces (made by combining parts of separately studied faces). Participants in 3 experiments saw faces presented for 1 s each. They then received a recognition test; faces were presented for varying brief durations and…

  8. Neural synchronization during face-to-face communication.

    PubMed

    Jiang, Jing; Dai, Bohan; Peng, Danling; Zhu, Chaozhe; Liu, Li; Lu, Chunming

    2012-11-07

    Although the human brain may have evolutionarily adapted to face-to-face communication, other modes of communication, e.g., telephone and e-mail, increasingly dominate our modern daily life. This study examined the neural difference between face-to-face communication and other types of communication by simultaneously measuring two brains using a hyperscanning approach. The results showed a significant increase in the neural synchronization in the left inferior frontal cortex during a face-to-face dialog between partners but none during a back-to-back dialog, a face-to-face monologue, or a back-to-back monologue. Moreover, the neural synchronization between partners during the face-to-face dialog resulted primarily from the direct interactions between the partners, including multimodal sensory information integration and turn-taking behavior. The communicating behavior during the face-to-face dialog could be predicted accurately based on the neural synchronization level. These results suggest that face-to-face communication, particularly dialog, has special neural features that other types of communication do not have and that the neural synchronization between partners may underlie successful face-to-face communication.

  9. Voicing on Virtual and Face to Face Discussion

    ERIC Educational Resources Information Center

    Yamat, Hamidah

    2013-01-01

    This paper presents and discusses findings of a study conducted on pre-service teachers' experiences in virtual and face to face discussions. Technology has brought learning nowadays beyond the classroom context or time zone. The learning context and process no longer rely solely on face to face communications in the presence of a teacher.…

  10. Bayesian Face Recognition and Perceptual Narrowing in Face-Space

    ERIC Educational Resources Information Center

    Balas, Benjamin

    2012-01-01

    During the first year of life, infants' face recognition abilities are subject to "perceptual narrowing", the end result of which is that observers lose the ability to distinguish previously discriminable faces (e.g. other-race faces) from one another. Perceptual narrowing has been reported for faces of different species and different races, in…

  11. Does Face Inversion Change Spatial Frequency Tuning?

    ERIC Educational Resources Information Center

    Willenbockel, Verena; Fiset, Daniel; Chauvin, Alan; Blais, Caroline; Arguin, Martin; Tanaka, James W.; Bub, Daniel N.; Gosselin, Frederic

    2010-01-01

    The authors examined spatial frequency (SF) tuning of upright and inverted face identification using an SF variant of the Bubbles technique (F. Gosselin & P. G. Schyns, 2001). In Experiment 1, they validated the SF Bubbles technique in a plaid detection task. In Experiments 2a-c, the SFs used for identifying upright and inverted inner facial…

  12. Face recognition system and method using face pattern words and face pattern bytes

    DOEpatents

    Zheng, Yufeng

    2014-12-23

    The present invention provides a novel system and method for identifying individuals and for face recognition utilizing facial features for face identification. The system and method of the invention comprise creating facial features or face patterns called face pattern words and face pattern bytes for face identification. The invention also provides for pattern recognition for identification other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals by utilizing computer software implemented by instructions on a computer or computer system and a computer readable medium containing instructions on a computer system for face recognition and identification.
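    The abstract does not specify how the patented "face pattern words" and "face pattern bytes" are computed, so the following is purely a hypothetical illustration of the general idea of packing local facial-texture comparisons into bytes; the classic local binary pattern (LBP) encoding is used here as a stand-in:

    ```python
    import numpy as np

    # Neighbor traversal order (clockwise from top-left) around the 3x3 center.
    NEIGHBORS = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]

    def lbp_byte(patch):
        """Encode a 3x3 patch as one byte: bit i is set if neighbor i >= center."""
        patch = np.asarray(patch)
        center = patch[1, 1]
        code = 0
        for i, (r, c) in enumerate(NEIGHBORS):
            if patch[r, c] >= center:
                code |= 1 << i
        return code

    def lbp_image(gray):
        """Map every interior pixel of a grayscale image to its LBP byte."""
        gray = np.asarray(gray)
        h, w = gray.shape
        out = np.zeros((h - 2, w - 2), dtype=np.uint8)
        for r in range(h - 2):
            for c in range(w - 2):
                out[r, c] = lbp_byte(gray[r:r + 3, c:c + 3])
        return out
    ```

    Histograms of such per-pixel bytes over face regions are a common compact texture descriptor; the patent's actual feature construction may differ substantially.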

  13. Face shape and face identity processing in behavioral variant fronto-temporal dementia: A specific deficit for familiarity and name recognition of famous faces.

    PubMed

    De Winter, François-Laurent; Timmers, Dorien; de Gelder, Beatrice; Van Orshoven, Marc; Vieren, Marleen; Bouckaert, Miriam; Cypers, Gert; Caekebeke, Jo; Van de Vliet, Laura; Goffin, Karolien; Van Laere, Koen; Sunaert, Stefan; Vandenberghe, Rik; Vandenbulcke, Mathieu; Van den Stock, Jan

    2016-01-01

    Deficits in face processing have been described in the behavioral variant of fronto-temporal dementia (bvFTD), primarily regarding the recognition of facial expressions. Less is known about face shape and face identity processing. Here we used a hierarchical strategy targeting face shape and face identity recognition in bvFTD and matched healthy controls. Participants performed 3 psychophysical experiments targeting face shape detection (Experiment 1), unfamiliar face identity matching (Experiment 2), familiarity categorization and famous face-name matching (Experiment 3). The results revealed group differences only in Experiment 3, with a deficit in the bvFTD group for both familiarity categorization and famous face-name matching. Voxel-based morphometry regression analyses in the bvFTD group revealed an association between grey matter volume of the left ventral anterior temporal lobe and familiarity recognition, while face-name matching correlated with grey matter volume of the bilateral ventral anterior temporal lobes. Subsequently, we quantified familiarity-specific and name-specific recognition deficits as the sum of the celebrities of which respectively only the name or only the familiarity was accurately recognized. Both indices were associated with grey matter volume of the bilateral anterior temporal cortices. These findings extend previous results by documenting the involvement of the left anterior temporal lobe (ATL) in familiarity detection and the right ATL in name recognition deficits in fronto-temporal lobar degeneration.

  14. Producing desired ice faces

    PubMed Central

    Shultz, Mary Jane; Brumberg, Alexandra; Bisson, Patrick J.; Shultz, Ryan

    2015-01-01

    The ability to prepare single-crystal faces has become central to developing and testing models for chemistry at interfaces, spectacularly demonstrated by heterogeneous catalysis and nanoscience. This ability has been hampered for hexagonal ice, Ih, a fundamental hydrogen-bonded surface, due to two characteristics of ice: ice does not readily cleave along a crystal lattice plane and properties of ice grown on a substrate can differ significantly from those of neat ice. This work describes laboratory-based methods both to determine the Ih crystal lattice orientation relative to a surface and to use that orientation to prepare any desired face. The work builds on previous results attaining nearly 100% yield of high-quality, single-crystal boules. With these methods, researchers can prepare authentic, single-crystal ice surfaces for numerous studies including uptake measurements, surface reactivity, and catalytic activity of this ubiquitous, fundamental solid. PMID:26512102

  15. CRYSTAL/FACE

    NASA Technical Reports Server (NTRS)

    Baumgardner, Darrel; Kok, Greg; Anderson, Bruce

    2004-01-01

    Droplet Measurement Technologies (DMT), under funding from NASA, participated in the CRYSTAL/FACE field campaign in July, 2002 with measurements of cirrus cloud hydrometeors in the size range from 0.5 to 1600 microns. The measurements were made with the DMT Cloud, Aerosol and Precipitation Spectrometer (CAPS) that was flown on NASA's WB57F. With the exception of the first research flight when the data system failed two hours into the mission, the measurement system performed almost flawlessly during the thirteen flights. The measurements from the CAPS have been essential for interpretation of cirrus cloud properties and their impact on climate. The CAPS data set has been used extensively by the CRYSTAL/FACE investigators and as of the date of this report, have been included in five published research articles, 10 conference presentations and six other journal articles currently in preparation.

  16. Anatomy of ageing face.

    PubMed

    Ilankovan, V

    2014-03-01

    Ageing is a biological process that results from changes at a cellular level, particularly modification of mRNA. The face is affected by the same physiological process and results in skeletal, muscular, and cutaneous ageing; ligamentous attenuation, descent of fat, and ageing of the appendages. I describe these changes on a structural and clinical basis and summarise possible solutions for a rejuvenation surgeon. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  17. Face-n-Food: Gender Differences in Tuning to Faces.

    PubMed

    Pavlova, Marina A; Scheffler, Klaus; Sokolov, Alexander N

    2015-01-01

    Faces represent valuable signals for social cognition and non-verbal communication. A wealth of research indicates that women tend to excel in recognition of facial expressions. However, it remains unclear whether females are better tuned to faces. We presented healthy adult females and males with a set of newly created food-plate images resembling faces (slightly bordering on the Giuseppe Arcimboldo style). In a spontaneous recognition task, participants were shown a set of images in a predetermined order, from the least to the most resembling a face. Females not only recognized the images as a face more readily (they reported a resemblance to a face for images at which males still did not), but also gave more face responses overall. The findings are discussed in the light of gender differences in deficient face perception. As most neuropsychiatric, neurodevelopmental and psychosomatic disorders characterized by social brain abnormalities are sex-specific, the task may serve as a valuable tool for uncovering impairments in visual face processing.

  18. Foil Face Seal Testing

    NASA Technical Reports Server (NTRS)

    Munson, John

    2009-01-01

    In the seal literature you can find many attempts by various researchers to adapt film riding seals to the gas turbine engine. None has been successful; potential distortion of the sealing faces is the primary reason. There is a film riding device that does accommodate distortion and is in service in aircraft applications, namely the foil bearing, more specifically a foil thrust bearing. These are not intended to be seals, and they do not accommodate large axial movement between shaft and static structure. By combining the two, a unique type of face seal has been created. It functions like a normal face seal. The foil thrust bearing replaces the normal primary sealing surface. The compliance of the foil bearing allows the foils to track distortion of the mating seal ring. The foil seal has several perceived advantages over existing hydrodynamic designs, enumerated in the chart. Materials and design methodology needed for this application already exist. Also, the load capacity requirements for the foil bearing are low since it only needs to support itself and overcome friction forces at the antirotation keys.

  19. Beyond Faces and Expertise

    PubMed Central

    Zhao, Mintao; Bülthoff, Heinrich H.; Bülthoff, Isabelle

    2016-01-01

    Holistic processing—the tendency to perceive objects as indecomposable wholes—has long been viewed as a process specific to faces or objects of expertise. Although current theories differ in what causes holistic processing, they share a fundamental constraint for its generalization: Nonface objects cannot elicit facelike holistic processing in the absence of expertise. Contrary to this prevailing view, here we show that line patterns with salient Gestalt information (i.e., connectedness, closure, and continuity between parts) can be processed as holistically as faces without any training. Moreover, weakening the saliency of Gestalt information in these patterns reduced holistic processing of them, which indicates that Gestalt information plays a crucial role in holistic processing. Therefore, holistic processing can be achieved not only via a top-down route based on expertise, but also via a bottom-up route relying merely on object-based information. The finding that facelike holistic processing can extend beyond the domains of faces and objects of expertise poses a challenge to current dominant theories. PMID:26674129

  20. Effects of color information on face processing using event-related potentials and gamma oscillations.

    PubMed

    Minami, T; Goto, K; Kitazaki, M; Nakauchi, S

    2011-03-10

    In humans, face configuration, contour and color may affect face perception, which is important for social interactions. This study aimed to determine the effect of color information on face perception by measuring event-related potentials (ERPs) during the presentation of natural- and bluish-colored faces. Our results demonstrated that the amplitude of the N170 event-related potential, which correlates strongly with face processing, was higher in response to a bluish-colored face than to a natural-colored face. However, gamma-band activity was insensitive to the deviation from a natural face color. These results indicated that color information affects the N170 associated with a face detection mechanism, which suggests that face color is important for face detection. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.

  1. Quality labeled faces in the wild (QLFW): a database for studying face recognition in real-world environments

    NASA Astrophysics Data System (ADS)

    Karam, Lina J.; Zhu, Tong

    2015-03-01

    The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.
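    Two of the distortion types listed above, additive white noise and contrast change, can be sketched directly in NumPy (JPEG/JPEG2000 compression and Gaussian blur need an imaging codec, so they are omitted here). The parameter values are illustrative, not the levels used to build QLFW:

    ```python
    import numpy as np

    def add_white_noise(img, sigma, rng=None):
        """Additive white Gaussian noise, clipped back to the valid 8-bit range."""
        rng = rng or np.random.default_rng(0)
        noisy = img.astype(float) + rng.normal(0.0, sigma, img.shape)
        return np.clip(noisy, 0, 255).astype(np.uint8)

    def change_contrast(img, factor):
        """Scale deviations from the mean intensity by `factor` (< 1 lowers contrast)."""
        mean = img.mean()
        out = mean + factor * (img.astype(float) - mean)
        return np.clip(out, 0, 255).astype(np.uint8)
    ```

    Sweeping `sigma` and `factor` over a grid of levels is one simple way to generate the kind of graded-impairment test set the database describes.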

  2. Tweaked residual convolutional network for face alignment

    NASA Astrophysics Data System (ADS)

    Du, Wenchao; Li, Ke; Zhao, Qijun; Zhang, Yi; Chen, Hu

    2017-08-01

    We propose a novel Tweaked Residual Convolutional Network approach for face alignment with a two-level convolutional network architecture. Specifically, the first-level Tweaked Convolutional Network (TCN) module predicts the landmarks quickly, but accurately enough as a preliminary estimate, by taking a low-resolution version of the detected face holistically as the input. The following Residual Convolutional Network (RCN) module progressively refines each landmark by taking as input the local patch extracted around the predicted landmark, which allows the Convolutional Neural Network (CNN) to extract local shape-indexed features to fine-tune the landmark position. Extensive evaluations show that the proposed Tweaked Residual Convolutional Network approach outperforms existing methods.
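    The refinement stage described above depends on cropping a local patch around each coarse landmark prediction. A minimal sketch of that cropping step follows; the patch size and the clamp-to-image-border policy are assumptions for illustration, not details taken from the paper:

    ```python
    import numpy as np

    def crop_patch(img, center, size):
        """Crop a size x size patch centered on a predicted landmark (row, col),
        shifting the window as needed so it stays fully inside the image."""
        h, w = img.shape[:2]
        half = size // 2
        r = int(round(center[0]))
        c = int(round(center[1]))
        r0 = min(max(r - half, 0), h - size)
        c0 = min(max(c - half, 0), w - size)
        return img[r0:r0 + size, c0:c0 + size]
    ```

    Each cropped patch would then be fed to the refinement network, whose output offset updates the landmark estimate.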

  3. Face-space: A unifying concept in face recognition research.

    PubMed

    Valentine, Tim; Lewis, Michael B; Hills, Peter J

    2016-10-01

    The concept of a multidimensional psychological space, in which faces can be represented according to their perceived properties, is fundamental to the modern theorist in face processing. Yet the idea was not clearly expressed until 1991. The background that led to the development of face-space is explained, and its continuing influence on theories of face processing is discussed. Research that has explored the properties of the face-space and sought to understand caricature, including facial adaptation paradigms, is reviewed. Face-space as a theoretical framework for understanding the effect of ethnicity and the development of face recognition is evaluated. Finally, two applications of face-space in the forensic setting are discussed. From initially being presented as a model to explain distinctiveness, inversion, and the effect of ethnicity, face-space has become a central pillar in many aspects of face processing. It is currently being developed to help us understand adaptation effects with faces. While in principle a simple concept, face-space has shaped, and continues to shape, our understanding of face perception.

  4. Covert face recognition in congenital prosopagnosia: a group study.

    PubMed

    Rivolta, Davide; Palermo, Romina; Schmalzl, Laura; Coltheart, Max

    2012-03-01

    Even though people with congenital prosopagnosia (CP) never develop a normal ability to "overtly" recognize faces, some individuals show indices of "covert" (or implicit) face recognition. The aim of this study was to demonstrate covert face recognition in CP when participants could not overtly recognize the faces. Eleven people with CP completed three tasks assessing their overt face recognition ability, and three tasks assessing their "covert" face recognition: a Forced choice familiarity task, a Forced choice cued task, and a Priming task. Evidence of covert recognition was observed with the Forced choice familiarity task, but not the Priming task. In addition, we propose that the Forced choice cued task does not measure covert processing as such, but instead "provoked-overt" recognition. Our study clearly shows that people with CP demonstrate covert recognition for faces that they cannot overtly recognize, and that behavioural tasks vary in their sensitivity to detect covert recognition in CP. Copyright © 2011 Elsevier Srl. All rights reserved.

  5. 6. VIEW FACING EAST ALONG NORTH FACE OF BRIDGE AT ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. VIEW FACING EAST ALONG NORTH FACE OF BRIDGE AT CONSTRUCTION DETAILS OF WOOD RAILINGS AND STONE ABUTMENTS. - South Fork Tuolumne River Bridge, Spanning South Fork Tuolumne River on Tioga Road, Mather, Tuolumne County, CA

  6. Seeing faces is necessary for face-patch formation

    PubMed Central

    Arcaro, Michael J.; Schade, Peter F.; Vincent, Justin L.; Ponce, Carlos R.; Livingstone, Margaret S.

    2017-01-01

    Here we report that monkeys raised without exposure to faces did not develop face patches, but did develop domains for other categories, and did show normal retinotopic organization, indicating that early face deprivation leads to a highly selective cortical processing deficit. Therefore experience must be necessary for the formation, or maintenance, of face domains. Gaze tracking revealed that control monkeys looked preferentially at faces, even at ages prior to the emergence of face patches, but face-deprived monkeys did not, indicating that face looking is not innate. A retinotopic organization is present throughout the visual system at birth, so selective early viewing behavior could bias category-specific visual responses towards particular retinotopic representations, thereby leading to domain formation in stereotyped locations in IT, without requiring category-specific templates or biases. Thus we propose that environmental importance influences viewing behavior, viewing behavior drives neuronal activity, and neuronal activity sculpts domain formation. PMID:28869581

  7. 9. WEST FACE OF OLD THEODOLITE BUILDING; WEST FACE OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    9. WEST FACE OF OLD THEODOLITE BUILDING; WEST FACE OF EAST PHOTO TOWER IN BACKGROUND - Vandenberg Air Force Base, Space Launch Complex 3, Launch Pad 3 East, Napa & Alden Roads, Lompoc, Santa Barbara County, CA

  8. Seeing faces is necessary for face-domain formation.

    PubMed

    Arcaro, Michael J; Schade, Peter F; Vincent, Justin L; Ponce, Carlos R; Livingstone, Margaret S

    2017-10-01

    Here we report that monkeys raised without exposure to faces did not develop face domains, but did develop domains for other categories and did show normal retinotopic organization, indicating that early face deprivation leads to a highly selective cortical processing deficit. Therefore, experience must be necessary for the formation (or maintenance) of face domains. Gaze tracking revealed that control monkeys looked preferentially at faces, even at ages prior to the emergence of face domains, but face-deprived monkeys did not, indicating that face looking is not innate. A retinotopic organization is present throughout the visual system at birth, so selective early viewing behavior could bias category-specific visual responses toward particular retinotopic representations, thereby leading to domain formation in stereotyped locations in inferotemporal cortex, without requiring category-specific templates or biases. Thus, we propose that environmental importance influences viewing behavior, viewing behavior drives neuronal activity, and neuronal activity sculpts domain formation.

  9. 1. EAST FACING SIDE EAST AND SOUTH SOUTH FACING SIDE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. EAST FACING SIDE EAST AND SOUTH SOUTH FACING SIDE RESIDENTIAL AREA AROUND BUILDINGS 136, 137, & 138 - Hill Field, Non-Commissioned Officers' Quarters, North side of Fourth street, East side of E Avenue, Layton, Davis County, UT

  10. The 'Face' of Jupiter

    NASA Image and Video Library

    2017-06-29

    JunoCam images aren't just for art and science -- sometimes they are processed to bring a chuckle. This image, processed by citizen scientist Jason Major, is titled "Jovey McJupiterface." By rotating the image 180 degrees and orienting it from south up, two white oval storms turn into eyeballs, and the "face" of Jupiter is revealed. The original image was acquired by JunoCam on NASA's Juno spacecraft on May 19, 2017 at 11:20 a.m. PT (2:20 p.m. ET) from an altitude of 12,075 miles (19,433 kilometers). https://photojournal.jpl.nasa.gov/catalog/PIA21394

  11. Coronal Hole Facing Earth

    NASA Image and Video Library

    2018-05-08

    An extensive equatorial coronal hole has rotated so that it is now facing Earth (May 2-4, 2018). The dark coronal hole extends about halfway across the solar disk. It was observed in a wavelength of extreme ultraviolet light. This magnetically open area is streaming solar wind (i.e., a stream of charged particles released from the sun) into space. When Earth enters a solar wind stream and the stream interacts with our magnetosphere, we often experience nice displays of aurora. Videos are available at https://photojournal.jpl.nasa.gov/catalog/PIA00624

  12. Coronal Hole Faces Earth

    NASA Image and Video Library

    2017-08-14

    A substantial coronal hole rotated into a position where it is facing Earth (Aug. 9-11, 2017). Coronal holes are areas of open magnetic field that spew out charged particles as solar wind that spreads into space. If that solar wind interacts with our own magnetosphere it can generate aurora. In this view of the sun in extreme ultraviolet light, the coronal hole appears as the dark stretch near the center of the sun. It was the most distinctive feature on the sun over the past week. Movies are available at https://photojournal.jpl.nasa.gov/catalog/PIA21874

  13. Coronal Hole Facing Earth

    NASA Image and Video Library

    2018-05-15

    An extensive equatorial coronal hole has rotated so that it is now facing Earth (May 2-4, 2018). The dark coronal hole extends about halfway across the solar disk. It was observed in a wavelength of extreme ultraviolet light. This magnetically open area is streaming solar wind (i.e., a stream of charged particles released from the sun) into space. When Earth enters a solar wind stream and the stream interacts with our magnetosphere, we often experience nice displays of aurora. https://photojournal.jpl.nasa.gov/catalog/PIA00577

  14. Prevalence of face recognition deficits in middle childhood.

    PubMed

    Bennetts, Rachel J; Murray, Ebony; Boyce, Tian; Bate, Sarah

    2017-02-01

    Approximately 2-2.5% of the adult population is believed to show severe difficulties with face recognition, in the absence of any neurological injury-a condition known as developmental prosopagnosia (DP). However, to date no research has attempted to estimate the prevalence of face recognition deficits in children, possibly because there are very few child-friendly, well-validated tests of face recognition. In the current study, we examined face and object recognition in a group of primary school children (aged 5-11 years), to establish whether our tests were suitable for children and to provide an estimate of face recognition difficulties in children. In Experiment 1 (n = 184), children completed a pre-existing test of child face memory, the Cambridge Face Memory Test-Kids (CFMT-K), and a bicycle test with the same format. In Experiment 2 (n = 413), children completed three-alternative forced-choice matching tasks with faces and bicycles. All tests showed good psychometric properties. The face and bicycle tests were well matched for difficulty and showed a similar developmental trajectory. Neither the memory nor the matching tests were suitable to detect impairments in the youngest groups of children, but both tests appear suitable to screen for face recognition problems in middle childhood. In the current sample, 1.2-5.2% of children showed difficulties with face recognition; 1.2-4% showed face-specific difficulties-that is, poor face recognition with typical object recognition abilities. This is somewhat higher than previous adult estimates: It is possible that face matching tests overestimate the prevalence of face recognition difficulties in children; alternatively, some children may "outgrow" face recognition difficulties.

  15. Detection of reassortant H5N6 clade 2.3.4.4 highly pathogenic avian influenza virus in a black-faced spoonbill (Platalea minor) found dead, Taiwan, 2017

    USDA-ARS?s Scientific Manuscript database

    H5N1 high pathogenicity avian influenza virus (HPAIV), which emerged in 1996 in Guangdong, China (A/goose/Guangdong/1/1996, Gs/GD), has caused outbreaks in over 80 countries throughout Eurasia, Africa, and North America. An H5N6 HPAIV clade 2.3.4.4 isolate, A/black-faced spoonbill/Taiwan/DB645/2017 (SB/Tw/17), was ...

  16. A special purpose knowledge-based face localization method

    NASA Astrophysics Data System (ADS)

    Hassanat, Ahmad; Jassim, Sabah

    2008-04-01

    This paper is concerned with face localization for a visual speech recognition (VSR) system. Face detection and localization have received a great deal of attention in the last few years, because they are an essential pre-processing step in many techniques that handle or deal with faces (e.g. age, face, gender, race and visual speech recognition). We present an efficient method for localizing human faces in video images captured on constrained mobile devices, under a wide variation in lighting conditions. We use a multiphase method that may include all or some of the following steps, starting with image pre-processing, followed by a special-purpose edge detection, then an image refinement step. The output image is passed through a discrete wavelet decomposition procedure, and the computed LL sub-band at a certain level is transformed into a binary image that is scanned with a special template to select a number of candidate locations. Finally, we fuse the scores from the wavelet step with scores determined by color information for the candidate locations and employ a form of fuzzy logic to distinguish face from non-face locations. We present the results of a large number of experiments to demonstrate that the proposed face localization method is efficient and achieves a high level of accuracy that outperforms existing general-purpose face detection methods.
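The wavelet stage of a pipeline like the one described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the one-level Haar averaging, the mean threshold, and the toy blob image are all assumptions made for the example.

```python
import numpy as np

def haar_ll(img):
    """One level of a 2-D Haar decomposition; returns the LL (approximation) sub-band."""
    img = img[:img.shape[0] // 2 * 2, :img.shape[1] // 2 * 2].astype(float)
    # Average each 2x2 block: low-pass filtering plus downsampling in both directions.
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def binarize(subband):
    """Threshold the sub-band at its mean to obtain a binary candidate map."""
    return (subband > subband.mean()).astype(np.uint8)

# Toy 'image': a bright square (a face-like blob) on a dark background.
img = np.zeros((8, 8))
img[2:6, 2:6] = 200.0
ll = haar_ll(img)      # 4x4 approximation sub-band
mask = binarize(ll)    # binary map a template scan would search for candidates
```

A real system would then slide the face-shaped template over `mask` and fuse the resulting scores with skin-color evidence, as the abstract describes.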

  17. Bayesian Face Recognition and Perceptual Narrowing in Face-Space

    PubMed Central

    Balas, Benjamin

    2012-01-01

    During the first year of life, infants’ face recognition abilities are subject to “perceptual narrowing,” the end result of which is that observers lose the ability to distinguish previously discriminable faces (e.g. other-race faces) from one another. Perceptual narrowing has been reported for faces of different species and different races, in developing humans and primates. Though the phenomenon is highly robust and replicable, there have been few efforts to model the emergence of perceptual narrowing as a function of the accumulation of experience with faces during infancy. The goal of the current study is to examine how perceptual narrowing might manifest as statistical estimation in “face space,” a geometric framework for describing face recognition that has been successfully applied to adult face perception. Here, I use a computer vision algorithm for Bayesian face recognition to study how the acquisition of experience in face space and the presence of race categories affect performance for own and other-race faces. Perceptual narrowing follows from the establishment of distinct race categories, suggesting that the acquisition of category boundaries for race is a key computational mechanism in developing face expertise. PMID:22709406
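The core idea, that recognition follows from statistical estimates of category structure in face space, can be sketched with a toy Bayesian comparison. This is not the algorithm used in the study; the 2-D face space, the isotropic Gaussian categories, and the cluster locations are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical 'race' categories as Gaussian clusters in a 2-D face space.
own_race   = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
other_race = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(200, 2))

def log_likelihood(x, samples):
    """Log-likelihood of face x under an isotropic Gaussian fit to the samples."""
    mu = samples.mean(axis=0)
    var = samples.var()
    d = x - mu
    return -0.5 * (d @ d) / var - np.log(2 * np.pi * var)

probe = np.array([0.2, -0.1])   # a face near the own-race cluster
own = log_likelihood(probe, own_race)
other = log_likelihood(probe, other_race)
```

Once distinct category distributions are established, probes near the own-race cluster are far better explained by that category, which is one way narrowing can fall out of category estimation rather than innate templates.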

  18. Face-to-Face Interference in Typical and Atypical Development

    ERIC Educational Resources Information Center

    Riby, Deborah M.; Doherty-Sneddon, Gwyneth; Whittle, Lisa

    2012-01-01

    Visual communication cues facilitate interpersonal communication. It is important that we look at faces to retrieve and subsequently process such cues. It is also important that we sometimes look away from faces as they increase cognitive load that may interfere with online processing. Indeed, when typically developing individuals hold face gaze…

  19. Event-Related Brain Potential Correlates of Emotional Face Processing

    ERIC Educational Resources Information Center

    Eimer, Martin; Holmes, Amanda

    2007-01-01

    Results from recent event-related brain potential (ERP) studies investigating brain processes involved in the detection and analysis of emotional facial expression are reviewed. In all experiments, emotional faces were found to trigger an increased ERP positivity relative to neutral faces. The onset of this emotional expression effect was…

  20. Faces of Science

    Science.gov Websites

    Innovation in New Mexico Los Alamos Collaboration for Explosives Detection (LACED) SensorNexus Exascale effects of tuberculosis infection on AIDS, and issues related to national security.

  1. Congenital prosopagnosia: face-blind from birth.

    PubMed

    Behrmann, Marlene; Avidan, Galia

    2005-04-01

    Congenital prosopagnosia refers to the deficit in face processing that is apparent from early childhood in the absence of any underlying neurological basis and in the presence of intact sensory and intellectual function. Several such cases have been described recently and elucidating the mechanisms giving rise to this impairment should aid our understanding of the psychological and neural mechanisms mediating face processing. Fundamental questions include: What is the nature and extent of the face-processing deficit in congenital prosopagnosia? Is the deficit related to a more general perceptual deficit such as the failure to process configural information? Are any neural alterations detectable using fMRI, ERP or structural analyses of the anatomy of the ventral visual cortex? We discuss these issues in relation to the existing literature and suggest directions for future research.

  2. Aging changes in the face

    MedlinePlus

    ... this page: //medlineplus.gov/ency/article/004004.htm Aging changes in the face To use the sharing ... face with age References Brodie SE, Francis JH. Aging and disorders of the eye. In: Fillit HM, ...

  3. Vitiligo on the face (image)

    MedlinePlus

    This is a picture of vitiligo on the face. Complete loss of melanin, the primary skin pigment, ... the same areas on both sides of the face -- symmetrically -- or it may be patchy -- asymmetrical. The ...

  4. Women are better at seeing faces where there are none: an ERP study of face pareidolia.

    PubMed

    Proverbio, Alice M; Galli, Jessica

    2016-09-01

    Event-related potentials (ERPs) were recorded in 26 right-handed students while they detected pictures of animals intermixed with those of familiar objects, faces and faces-in-things (FITs). The face-specific N170 ERP component over the right hemisphere was larger in response to faces and FITs than to objects. The vertex positive potential (VPP) showed a difference in FIT encoding processes between males and females at frontal sites; while for men, the FIT stimuli elicited a VPP of intermediate amplitude (between that for faces and objects), for women, there was no difference in VPP responses to faces or FITs, suggesting a marked anthropomorphization of objects in women. SwLORETA source reconstructions carried out to estimate the intracortical generators of ERPs in the 150-190 ms time window showed how, in the female brain, FIT perception was associated with the activation of brain areas involved in the affective processing of faces (right STS, BA22; posterior cingulate cortex, BA22; and orbitofrontal cortex, BA10) in addition to regions linked to shape processing (left cuneus, BA18/30). Conversely, in the men, the activation of occipito/parietal regions was prevalent, with a considerably smaller activation of BA10. The data suggest that the female brain is more inclined to anthropomorphize perfectly real objects compared to the male brain. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  5. Women are better at seeing faces where there are none: an ERP study of face pareidolia

    PubMed Central

    Galli, Jessica

    2016-01-01

    Event-related potentials (ERPs) were recorded in 26 right-handed students while they detected pictures of animals intermixed with those of familiar objects, faces and faces-in-things (FITs). The face-specific N170 ERP component over the right hemisphere was larger in response to faces and FITs than to objects. The vertex positive potential (VPP) showed a difference in FIT encoding processes between males and females at frontal sites; while for men, the FIT stimuli elicited a VPP of intermediate amplitude (between that for faces and objects), for women, there was no difference in VPP responses to faces or FITs, suggesting a marked anthropomorphization of objects in women. SwLORETA source reconstructions carried out to estimate the intracortical generators of ERPs in the 150–190 ms time window showed how, in the female brain, FIT perception was associated with the activation of brain areas involved in the affective processing of faces (right STS, BA22; posterior cingulate cortex, BA22; and orbitofrontal cortex, BA10) in addition to regions linked to shape processing (left cuneus, BA18/30). Conversely, in the men, the activation of occipito/parietal regions was prevalent, with a considerably smaller activation of BA10. The data suggest that the female brain is more inclined to anthropomorphize perfectly real objects compared to the male brain. PMID:27217120

  6. Smiles in face matching: Idiosyncratic information revealed through a smile improves unfamiliar face matching performance.

    PubMed

    Mileva, Mila; Burton, A Mike

    2018-06-19

    Unfamiliar face matching is a surprisingly difficult task, yet we often rely on people's matching decisions in applied settings (e.g., border control). Most attempts to improve accuracy (including training and image manipulation) have had very limited success. In a series of studies, we demonstrate that using smiling rather than neutral pairs of images brings about significant improvements in face matching accuracy. This is true for both match and mismatch trials, implying that the information provided through a smile helps us detect images of the same identity as well as distinguishing between images of different identities. Study 1 compares matching performance when images in the face pair display either an open-mouth smile or a neutral expression. In Study 2, we add an intermediate level, closed-mouth smile, to identify the effect of teeth being exposed, and Study 3 explores face matching accuracy when only information about the lower part of the face is available. Results demonstrate that an open-mouth smile changes the face in an idiosyncratic way which aids face matching decisions. Such findings have practical implications for matching in the applied context where we typically use neutral images to represent ourselves in official documents. © 2018 The British Psychological Society.

  7. Validity, Sensitivity, and Responsiveness of the 11-Face Faces Pain Scale to Postoperative Pain in Adult Orthopedic Surgery Patients.

    PubMed

    Van Giang, Nguyen; Chiu, Hsiao-Yean; Thai, Duong Hong; Kuo, Shu-Yu; Tsai, Pei-Shan

    2015-10-01

    Pain is common in patients after orthopedic surgery. The 11-face Faces Pain Scale has not been validated for use in adult patients with postoperative pain. To assess the validity of the 11-face Faces Pain Scale and its ability to detect responses to pain medications, and to determine whether the sensitivity of the 11-face Faces Pain Scale for detecting changes in pain intensity over time is associated with gender differences in adult postorthopedic surgery patients. The 11-face Faces Pain Scale was translated into Vietnamese using forward and back translation. Postoperative pain was assessed using an 11-point numerical rating scale and the 11-face Faces Pain Scale on the day of surgery, and before (Time 1) and every 30 minutes after (Times 2-5) the patients had taken pain medications on the first postoperative day. The 11-face Faces Pain Scale highly correlated with the numerical rating scale (r = 0.78, p < .001). When the scores from each follow-up test (Times 2-5) were compared with those from the baseline test (Time 1), the effect sizes were -0.70, -1.05, -1.20, and -1.31, and the standardized response means were -1.17, -1.59, -1.66, and -1.82, respectively. The mean change in pain intensity, but not gender-time interaction effect, over the five time points was significant (F = 182.03, p < .001). Our results support that the 11-face Faces Pain Scale is appropriate for measuring acute postoperative pain in adults. Copyright © 2015 American Society for Pain Management Nursing. Published by Elsevier Inc. All rights reserved.
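The two responsiveness indices reported here have standard definitions: the effect size divides mean change by the standard deviation of the baseline scores, while the standardized response mean divides it by the standard deviation of the change scores. A small sketch with hypothetical ratings (the data below are invented, not the study's):

```python
import statistics

def effect_size(baseline, followup):
    """Mean change divided by the SD of the baseline scores."""
    change = [f - b for b, f in zip(baseline, followup)]
    return statistics.mean(change) / statistics.stdev(baseline)

def standardized_response_mean(baseline, followup):
    """Mean change divided by the SD of the change scores."""
    change = [f - b for b, f in zip(baseline, followup)]
    return statistics.mean(change) / statistics.stdev(change)

# Hypothetical 0-10 pain ratings before and 30 minutes after analgesia.
pre  = [8, 7, 9, 6, 8, 7]
post = [5, 4, 6, 4, 5, 5]
```

Negative values indicate pain reduction; as in the study, the standardized response mean is larger in magnitude than the effect size when change is consistent across patients.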

  8. Neuromagnetic evidence that the right fusiform face area is essential for human face awareness: An intermittent binocular rivalry study.

    PubMed

    Kume, Yuko; Maekawa, Toshihiko; Urakawa, Tomokazu; Hironaga, Naruhito; Ogata, Katsuya; Shigyo, Maki; Tobimatsu, Shozo

    2016-08-01

    When and where the awareness of faces is consciously initiated is unclear. We used magnetoencephalography to probe the brain responses associated with face awareness under intermittent pseudo-rivalry (PR) and binocular rivalry (BR) conditions. The stimuli comprised three pictures: a human face, a monkey face and a house. In the PR condition, we detected the M130 component, which has been minimally characterized in previous research. We obtained a clear recording of the M170 component in the fusiform face area (FFA), and found that this component had an earlier response time to faces compared with other objects. The M170 occurred predominantly in the right hemisphere in both conditions. In the BR condition, the amplitude of the M130 significantly increased in the right hemisphere irrespective of the physical characteristics of the visual stimuli. Conversely, we did not detect the M170 when the face image was suppressed in the BR condition, although this component was clearly present when awareness for the face was initiated. We also found a significant difference in the latency of the M170 between human and monkey faces. These findings suggest that face stimuli are imperative for evoking the M170 and that the right FFA plays a critical role in human face awareness. Copyright © 2016. Published by Elsevier Ireland Ltd.

  9. Observed touch on a non-human face is not remapped onto the human observer's own face.

    PubMed

    Beck, Brianna; Bertini, Caterina; Scarpazza, Cristina; Làdavas, Elisabetta

    2013-01-01

    Visual remapping of touch (VRT) is a phenomenon in which seeing a human face being touched enhances detection of tactile stimuli on the observer's own face, especially when the observed face expresses fear. This study tested whether VRT would occur when seeing touch on monkey faces and whether it would be similarly modulated by facial expressions. Human participants detected near-threshold tactile stimulation on their own cheeks while watching fearful, happy, and neutral human or monkey faces being concurrently touched or merely approached by fingers. We predicted minimal VRT for neutral and happy monkey faces but greater VRT for fearful monkey faces. The results with human faces replicated previous findings, demonstrating stronger VRT for fearful expressions than for happy or neutral expressions. However, there was no VRT (i.e. no difference between accuracy in touch and no-touch trials) for any of the monkey faces, regardless of facial expression, suggesting that touch on a non-human face is not remapped onto the somatosensory system of the human observer.

  10. Observed Touch on a Non-Human Face Is Not Remapped onto the Human Observer's Own Face

    PubMed Central

    Beck, Brianna; Bertini, Caterina; Scarpazza, Cristina; Làdavas, Elisabetta

    2013-01-01

    Visual remapping of touch (VRT) is a phenomenon in which seeing a human face being touched enhances detection of tactile stimuli on the observer's own face, especially when the observed face expresses fear. This study tested whether VRT would occur when seeing touch on monkey faces and whether it would be similarly modulated by facial expressions. Human participants detected near-threshold tactile stimulation on their own cheeks while watching fearful, happy, and neutral human or monkey faces being concurrently touched or merely approached by fingers. We predicted minimal VRT for neutral and happy monkey faces but greater VRT for fearful monkey faces. The results with human faces replicated previous findings, demonstrating stronger VRT for fearful expressions than for happy or neutral expressions. However, there was no VRT (i.e. no difference between accuracy in touch and no-touch trials) for any of the monkey faces, regardless of facial expression, suggesting that touch on a non-human face is not remapped onto the somatosensory system of the human observer. PMID:24250781

  11. Two Faces of Pluto

    NASA Image and Video Library

    2015-07-01

    This pair of approximately true color images of Pluto and its big moon Charon, taken by NASA's New Horizons spacecraft, highlight the dramatically different appearance of different sides of the dwarf planet, and reveal never-before-seen details on Pluto's varied surface. The views were made by combining high-resolution black-and-white images from the Long Range Reconnaissance Imager (LORRI) with color information from the lower-resolution color camera that is part of the Ralph instrument. The left-hand image shows the side of Pluto that always faces away from Charon -- this is the side that will be seen at highest resolution by New Horizons when it makes its close approach to Pluto on July 14th. This hemisphere is dominated by a very dark region that extends along the equator and is redder than its surroundings, alongside a strikingly bright, paler-colored region which straddles the equator on the right-hand side of the disk. The opposite hemisphere, the side that faces Charon, is seen in the right-hand image. The most dramatic feature on this side of Pluto is a row of dark dots arranged along the equator. The origin of all these features is still mysterious, but may be revealed in the much more detailed images that will be obtained as the spacecraft continues its approach to Pluto. In both images, Charon shows a darker and grayer color than Pluto, and a conspicuous dark polar region. The left-hand image was obtained at 5:37 UT on June 25th 2015, at a distance from Pluto of 22.9 million kilometers (14.3 million miles) and has a central longitude of 152 degrees. The right-hand image was obtained at 23:15 UT on June 27th 2015, at a distance from Pluto of 19.7 million kilometers (12.2 million miles) with a central longitude of 358 degrees. Insets show the orientation of Pluto in each image -- the solid lines mark the equator and the prime meridian, which is defined to be the longitude that always faces Charon. The smallest visible features are about 200 km (120 miles).

  12. [Face recognition in patients with schizophrenia].

    PubMed

    Doi, Hirokazu; Shinohara, Kazuyuki

    2012-07-01

    It is well known that patients with schizophrenia show severe deficiencies in social communication skills. These deficiencies are believed to be partly derived from abnormalities in face recognition. However, the exact nature of these abnormalities exhibited by schizophrenic patients with respect to face recognition has yet to be clarified. In the present paper, we review the main findings on face recognition deficiencies in patients with schizophrenia, particularly focusing on abnormalities in the recognition of facial expression and gaze direction, which are the primary sources of information of others' mental states. The existing studies reveal that the abnormal recognition of facial expression and gaze direction in schizophrenic patients is attributable to impairments in both perceptual processing of visual stimuli, and cognitive-emotional responses to social information. Furthermore, schizophrenic patients show malfunctions in distributed neural regions, ranging from the fusiform gyrus recruited in the structural encoding of facial stimuli, to the amygdala which plays a primary role in the detection of the emotional significance of stimuli. These findings were obtained from research in patient groups with heterogeneous characteristics. Because previous studies have indicated that impairments in face recognition in schizophrenic patients might vary according to the types of symptoms, it is of primary importance to compare the nature of face recognition deficiencies and the impairments of underlying neural functions across sub-groups of patients.

  13. Adaptive error correction codes for face identification

    NASA Astrophysics Data System (ADS)

    Hussein, Wafaa R.; Sellahewa, Harin; Jassim, Sabah A.

    2012-06-01

    Face recognition in uncontrolled environments is greatly affected by fuzziness of face feature vectors as a result of extreme variation in recording conditions (e.g. illumination, poses or expressions) in different sessions. Many techniques have been developed to deal with these variations, resulting in improved performance. This paper aims to model template fuzziness as errors and investigate the use of error detection/correction techniques for face recognition in uncontrolled environments. Error correction codes (ECCs) have recently been used for biometric key generation but not on biometric templates. We have investigated error patterns in binary face feature vectors extracted from image windows of differing sizes and for different recording conditions. By estimating statistical parameters for the intra-class and inter-class distributions of Hamming distances in each window, we encode with appropriate ECCs. The proposed approach is tested on binarised wavelet templates using two face databases: Extended Yale-B and Yale. We demonstrate that using different combinations of BCH-based ECCs for different blocks and different recording conditions leads to different accuracy rates, and that using ECCs results in significantly improved recognition accuracy.
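The error-correction idea can be illustrated with a simple repetition code standing in for the BCH codes the paper uses (the template length, flip positions, and code rate below are assumptions for the example): intra-class variation flips a few bits of the binary template, and decoding recovers the enrolled template so its Hamming distance to the stored one collapses to zero.

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary vectors."""
    return int(np.count_nonzero(a != b))

def rep_encode(bits, r=3):
    """Repetition-code encoder -- a stand-in for BCH encoding of a template block."""
    return np.repeat(bits, r)

def rep_decode(code, r=3):
    """Majority-vote decoder: corrects up to (r - 1) // 2 flips per template bit."""
    return (code.reshape(-1, r).sum(axis=1) > r // 2).astype(np.uint8)

rng = np.random.default_rng(0)
template = rng.integers(0, 2, 32).astype(np.uint8)  # enrolled binary face template
code = rep_encode(template)

# Intra-class variation modeled as bit flips, here at most one per repeated symbol.
noisy = code.copy()
for pos in (0, 4, 8, 13, 17):
    noisy[pos] ^= 1

recovered = rep_decode(noisy)
```

In the paper's setting, the code strength would instead be chosen per block from the estimated intra-class Hamming-distance distribution, so that genuine variation is corrected while impostor distances remain large.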

  14. Neural markers of opposite-sex bias in face processing.

    PubMed

    Proverbio, Alice Mado; Riva, Federica; Martin, Eleonora; Zani, Alberto

    2010-01-01

    Some behavioral and neuroimaging studies suggest that adults prefer to view attractive faces of the opposite sex more than attractive faces of the same sex. However, unlike the other-race face effect (Caldara et al., 2004), little is known regarding the existence of an opposite-/same-sex bias in face processing. In this study, the faces of 130 attractive male and female adults were foveally presented to 40 heterosexual university students (20 men and 20 women) who were engaged in a secondary perceptual task (landscape detection). The automatic processing of face gender was investigated by recording ERPs from 128 scalp sites. Neural markers of opposite- vs. same-sex bias in face processing included larger and earlier centro-parietal N400s in response to faces of the opposite sex and a larger late positivity (LP) to same-sex faces. Analysis of intra-cortical neural generators (swLORETA) showed that facial processing-related (FG, BA37, BA20/21) and emotion-related brain areas (the right parahippocampal gyrus, BA35; uncus, BA36/38; and the cingulate gyrus, BA24) had higher activations in response to opposite- than same-sex faces. The results of this analysis, along with data obtained from ERP recordings, support the hypothesis that both genders process opposite-sex faces differently than same-sex faces. The data also suggest a hemispheric asymmetry in the processing of opposite-/same-sex faces, with the right hemisphere involved in processing same-sex faces and the left hemisphere involved in processing faces of the opposite sex. The data support previous literature suggesting a right lateralization for the representation of self-image and body awareness.

  15. Being BOLD: The neural dynamics of face perception.

    PubMed

    Gentile, Francesco; Ales, Justin; Rossion, Bruno

    2017-01-01

    According to a non-hierarchical view of human cortical face processing, selective responses to faces may emerge in a higher-order area of the hierarchy, the lateral part of the middle fusiform gyrus (fusiform face area [FFA]), independently of face-selective responses in the lateral inferior occipital gyrus (occipital face area [OFA]), a lower-order area. Here we provide a stringent test of this hypothesis by gradually revealing segmented face stimuli through strict linear descrambling of phase information [Ales et al., 2012]. Using a short sampling rate (500 ms) of fMRI acquisition and single-subject statistical analysis, we show a face-selective response emerging earlier, that is, at a lower level of structural (i.e., phase) information, in the FFA compared with the OFA. In both the FFA and OFA, a face detection response emerged at a lower level of structural information for upright than for inverted faces, in line with behavioral responses and with previous findings, from direct recordings of neural activity, of delayed responses to inverted faces. Overall, these results support the non-hierarchical view of human cortical face processing and open new perspectives for time-resolved analysis at the single-subject level of fMRI data obtained during continuously evolving visual stimulation. Hum Brain Mapp 38:120-139, 2017. © 2016 Wiley Periodicals, Inc.
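The phase-descrambling stimulus manipulation referenced above [Ales et al., 2012] can be approximated by keeping an image's Fourier amplitude spectrum fixed while linearly mixing its phase spectrum with random phase. The sketch below is an assumption-laden stand-in for the published procedure (the blend rule and function name are illustrative, and a naive linear blend ignores phase wrap-around):

```python
import numpy as np

def phase_blend(img, fraction, rng):
    """Keep the amplitude spectrum; mix the phase spectrum with random phase.

    fraction = 0 gives a fully phase-scrambled image, fraction = 1 the original.
    """
    f = np.fft.fft2(img)
    amp, phase = np.abs(f), np.angle(f)
    noise = rng.uniform(-np.pi, np.pi, size=phase.shape)
    blended = (1 - fraction) * noise + fraction * phase
    return np.real(np.fft.ifft2(amp * np.exp(1j * blended)))

rng = np.random.default_rng(0)
face = rng.random((16, 16))           # stand-in for a face image
scrambled = phase_blend(face, 0.0, rng)
restored  = phase_blend(face, 1.0, rng)
```

Sweeping `fraction` from 0 to 1 over successive frames yields the kind of gradually revealed stimulus sequence the paradigm relies on.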

  16. Face lift postoperative recovery.

    PubMed

    Mottura, A Aldo

    2002-01-01

    The purpose of this paper is to describe what I have studied and experienced, mainly regarding the control and prediction of postoperative edema: how to achieve an agreeable recovery and give positive support to the patient, who in turn will receive pleasant sensations that neutralize the negative consequences of the surgery. After the skin is lifted, the drainage flow to the flaps is reversed abruptly toward the medial part of the face, where the flap bases are located. The thickness and extension of the flap determine the magnitude of the post-op edema, which is also augmented by medial surgeries (blepharo, rhino) whose trauma obstructs their natural drainage, increasing the congestion and edema. To study the lymphatic drainage, the day before an extended face lift (FL) a woman was infiltrated in the cheek skin with lynfofast (solution of tecmesio) and the absorption was observed by gamma camera. Seven days after the FL she underwent the same study; we observed no absorption by the lymphatics, concluding that a week after surgery the lymphatic network was still damaged. To study the venous return during surgery, a fine catheter was introduced into the external jugular vein up to the mandibular border to measure the peripheral pressure. Following platysma plication the pressure rose, and again after a simple bandage; with an elastic bandage it increased even further, diminishing considerably when the bandage was released. Hence, platysma plication and the elastic bandage on the neck augment the venous congestion of the face. There are diseases that produce and can prolong the surgical edema: cardiac, hepatic, and renal insufficiencies, hypothyroidism, malnutrition, etc. According to these factors, the post-op edema can be predicted; the surgeon can choose between a wide dissection or a medial surgery, depending on the social or employment commitments the patient has, or the patient must accept a prolonged recovery if a complex surgery is necessary. Operative

  17. Enhancing the performance of cooperative face detector by NFGS

    NASA Astrophysics Data System (ADS)

    Yesugade, Snehal; Dave, Palak; Srivastava, Srinkhala; Das, Apurba

    2015-07-01

    Computerized human face detection is an important task of deformable pattern recognition in today's world. Especially in cooperative authentication scenarios like ATM fraud detection, attendance recording, video tracking and video surveillance, the performance of the face detection engine in terms of accuracy, memory utilization and speed has been an active area of research for the last decade. Haar-based face detection and SIFT- or EBGM-based face recognition systems are fairly reliable in this regard, but their features are extracted as gray textures. When the input is a high-resolution online video with a fairly large viewing area, a Haar detector needs to search for faces everywhere (say 352×250 pixels) and all the time (e.g., 30 FPS capture). In the current paper we propose to address both of the aforementioned scenarios by a neuro-visually inspired method of figure-ground segregation (NFGS) [5] that produces a two-dimensional binary array from a gray face image. The NFGS identifies the reference video frame at a low sampling rate and updates it upon significant changes of environment such as illumination. The proposed algorithm triggers the face detector only when the appearance of a new entity in the viewing area is encountered. To improve detection accuracy, the classical face detector is enabled only in a narrowed-down region of interest (RoI) fed by the NFGS. The RoI is updated online in each frame with respect to the moving entity, which in turn improves both the FR (False Rejection) and FA (False Acceptance) rates of the face detection system.
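    The gating idea above — run the expensive sliding-window face detector only inside a changed region of interest — can be sketched as follows. The NFGS itself is described in the cited work; this sketch substitutes simple frame differencing against a reference frame as the figure-ground step, and the function names and thresholds are illustrative, not taken from the paper:

```python
import numpy as np

def change_roi(frame, reference, thresh=30, min_pixels=50):
    """Return a bounding box (x0, y0, x1, y1) around pixels that changed
    relative to the reference frame, or None if the scene is static."""
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    mask = diff > thresh
    if mask.sum() < min_pixels:
        return None  # no new entity: skip the (expensive) face detector
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max() + 1, ys.max() + 1

def gated_detect(frame, reference, detector):
    """Run the face detector only inside the changed region."""
    roi = change_roi(frame, reference)
    if roi is None:
        return []
    x0, y0, x1, y1 = roi
    # Detect in the cropped RoI, then offset back to full-frame coordinates.
    return [(x + x0, y + y0, w, h)
            for (x, y, w, h) in detector(frame[y0:y1, x0:x1])]
```

    In a real pipeline, `detector` would be an OpenCV Haar cascade call on the cropped patch; here any callable returning (x, y, w, h) boxes works, which keeps the gating logic independent of the detector.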

  18. Exploring the unconscious using faces.

    PubMed

    Axelrod, Vadim; Bar, Moshe; Rees, Geraint

    2015-01-01

    Understanding the mechanisms of unconscious processing is one of the most substantial endeavors of cognitive science. While there are many different empirical ways to address this question, the use of faces in such research has proven exceptionally fruitful. We review here what has been learned about unconscious processing through the use of faces and face-selective neural correlates. A large number of cognitive systems can be explored with faces, including emotions, social cueing and evaluation, attention, multisensory integration, and various aspects of face processing. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Learning Compact Binary Face Descriptor for Face Recognition.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Xiuzhuang; Zhou, Jie

    2015-10-01

    Binary feature descriptors such as local binary patterns (LBP) and their variants have been widely used in many face recognition systems due to their excellent robustness and strong discriminative power. However, most existing binary face descriptors are hand-crafted, requiring strong prior knowledge to engineer by hand. In this paper, we propose a compact binary face descriptor (CBFD) feature learning method for face representation and recognition. Given each face image, we first extract pixel difference vectors (PDVs) in local patches by computing the difference between each pixel and its neighboring pixels. Then, we learn a feature mapping to project these pixel difference vectors into low-dimensional binary vectors in an unsupervised manner, where 1) the variance of all binary codes in the training set is maximized, 2) the loss between the original real-valued codes and the learned binary codes is minimized, and 3) binary codes are evenly distributed across the learned bins, so that the redundant information in PDVs is removed and compact binary codes are obtained. Lastly, we cluster and pool these binary codes into a histogram feature as the final representation for each face image. Moreover, we propose a coupled CBFD (C-CBFD) method that reduces the modality gap of heterogeneous faces at the feature level, making our method applicable to heterogeneous face recognition. Extensive experimental results on five widely used face datasets show that our methods outperform state-of-the-art face descriptors.
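    A minimal sketch of the PDV-extraction and binarization pipeline described above. The paper learns its projection by jointly optimizing the three listed criteria; as a simplification, this sketch uses PCA (which addresses only the variance criterion) followed by sign thresholding, so the mapping, names and bit count are illustrative rather than the paper's method:

```python
import numpy as np

def pixel_difference_vectors(img, radius=1):
    """Extract a PDV at each interior pixel: the differences between the
    pixel and its 8 neighbours, stacked into an (N, 8) matrix."""
    h, w = img.shape
    img = img.astype(np.float64)
    center = img[radius:h - radius, radius:w - radius]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    pdvs = [img[radius + dy:h - radius + dy, radius + dx:w - radius + dx] - center
            for dy, dx in offsets]
    return np.stack(pdvs, axis=-1).reshape(-1, 8)

def learn_binary_mapping(pdvs, n_bits=4):
    """Unsupervised projection to binary codes: maximise projected variance
    with PCA (a stand-in for the paper's joint objective)."""
    centered = pdvs - pdvs.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_bits].T                     # projection matrix (8 x n_bits)

def encode(pdvs, w):
    """Binarise projected PDVs by sign, pack the bits into integer codes,
    and pool the codes into a histogram descriptor."""
    bits = (pdvs @ w > 0).astype(np.uint8)
    codes = bits @ (1 << np.arange(bits.shape[1]))
    return np.bincount(codes, minlength=1 << bits.shape[1])
```

    The histogram over packed codes plays the role of the clustered-and-pooled representation in the abstract; per-patch pooling and the coupled C-CBFD variant are omitted.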

  20. The many faces of research on face perception.

    PubMed

    Little, Anthony C; Jones, Benedict C; DeBruine, Lisa M

    2011-06-12

    Face perception is fundamental to human social interaction. Many different types of important information are visible in faces and the processes and mechanisms involved in extracting this information are complex and can be highly specialized. The importance of faces has long been recognized by a wide range of scientists. Importantly, the range of perspectives and techniques that this breadth has brought to face perception research has, in recent years, led to many important advances in our understanding of face processing. The articles in this issue on face perception each review a particular arena of interest in face perception, variously focusing on (i) the social aspects of face perception (attraction, recognition and emotion), (ii) the neural mechanisms underlying face perception (using brain scanning, patient data, direct stimulation of the brain, visual adaptation and single-cell recording), and (iii) comparative aspects of face perception (comparing adult human abilities with those of chimpanzees and children). Here, we introduce the central themes of the issue and present an overview of the articles.

  1. Visual cryptography for face privacy

    NASA Astrophysics Data System (ADS)

    Ross, Arun; Othman, Asem A.

    2010-04-01

    We discuss the problem of preserving the privacy of a digital face image stored in a central database. In the proposed scheme, a private face image is dithered into two host face images such that it can be revealed only when both host images are simultaneously available; at the same time, the individual host images do not reveal the identity of the original image. In order to accomplish this, we appeal to the field of Visual Cryptography. Experimental results confirm the following: (a) the possibility of hiding a private face image in two unrelated host face images; (b) the successful matching of face images that are reconstructed by superimposing the host images; and (c) the inability of the host images, known as sheets, to reveal the identity of the secret face image.
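    The proposed scheme embeds the private face into two natural-looking host face images; underlying it is basic (2,2) visual cryptography, which this sketch illustrates on a binary image. The host-image embedding of the paper is beyond this simplified sketch, and the pattern choice and function names are illustrative:

```python
import numpy as np

# Two complementary 2x2 subpixel patterns (1 = black subpixel).
PATTERNS = [np.array([[1, 0], [0, 1]]), np.array([[0, 1], [1, 0]])]

def make_shares(secret, rng=None):
    """Split a binary secret image (1 = black) into two noise-like shares.
    White pixels get identical patterns on both shares; black pixels get
    complementary ones, so only the stacked shares reveal the secret."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = secret.shape
    s1 = np.zeros((2 * h, 2 * w), dtype=np.uint8)
    s2 = np.zeros_like(s1)
    for y in range(h):
        for x in range(w):
            p = rng.integers(2)  # random pattern choice hides the pixel value
            s1[2 * y:2 * y + 2, 2 * x:2 * x + 2] = PATTERNS[p]
            s2[2 * y:2 * y + 2, 2 * x:2 * x + 2] = \
                PATTERNS[1 - p if secret[y, x] else p]
    return s1, s2

def stack(s1, s2):
    """Physically superimposing transparencies is a pixel-wise OR."""
    return s1 | s2
```

    Each share in isolation carries exactly two black subpixels per block regardless of the secret pixel, which is why a single sheet reveals nothing; stacking turns black secret pixels into fully black blocks and white ones into half-black blocks.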

  2. [Comparative studies of face recognition].

    PubMed

    Kawai, Nobuyuki

    2012-07-01

    Every human being is proficient in face recognition. However, the reason for and the manner in which humans have attained such an ability remain unknown. These questions can best be answered through comparative studies of face recognition in non-human animals. Studies in both primates and non-primates show that not only primates, but also non-primates possess the ability to extract information from their conspecifics and from human experimenters. Neural specialization for face recognition is shared with mammals in distant taxa, suggesting that face recognition evolved earlier than the emergence of mammals. A recent study indicated that a social insect, the golden paper wasp, can distinguish the faces of conspecifics, whereas a closely related species, which has a less complex social lifestyle with just one queen ruling a nest of underlings, did not show strong face recognition for conspecifics. Social complexity and the need to differentiate between individuals likely led humans to evolve their face recognition abilities.

  3. Image preprocessing study on KPCA-based face recognition

    NASA Astrophysics Data System (ADS)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, an important biometric identification method, has received more and more attention due to its friendly, natural and convenient advantages. This paper studies a face recognition system comprising face detection, feature extraction and recognition, focusing on how different preprocessing methods in the face detection stage affect recognition results when using KPCA. We choose the YCbCr color space for skin segmentation and integral projection for face location. We preprocess face images with erosion and dilation (the opening and closing operations) and an illumination compensation method, and then apply a face recognition method based on kernel principal component analysis (KPCA); the experiments were carried out on a typical face database, with the algorithms implemented on the MATLAB platform. Experimental results show that the kernel-based extension of the PCA algorithm, by using nonlinear feature extraction, can under certain conditions make the extracted features represent the original image information better and thereby obtain a higher recognition rate. In the image preprocessing stage, we found that different operations on the images can lead to different recognition rates in the recognition stage. At the same time, in kernel principal component analysis, the degree of the polynomial kernel function can affect the recognition result.
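    The KPCA stage can be sketched as follows, assuming the polynomial kernel mentioned in the abstract; the centering formulas are the standard kernel-PCA ones, and recognition is illustrated by a nearest-neighbour rule, which is an assumption rather than the paper's exact classifier:

```python
import numpy as np

def kpca_features(X, n_components=10, degree=2):
    """Kernel PCA with polynomial kernel k(x, y) = (x.y + 1)^degree.
    Returns training projections plus what is needed to project new faces."""
    K = (X @ X.T + 1.0) ** degree
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one       # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]      # top eigenpairs
    vals, vecs = vals[idx], vecs[:, idx]
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12)) # normalised coefficients
    return Kc @ alphas, (X, K, alphas, degree)

def project(x, model):
    """Project a new (flattened) face image into KPCA space."""
    X, K, alphas, degree = model
    k = (X @ x + 1.0) ** degree
    kc = k - K.mean(axis=0) - k.mean() + K.mean()    # centre the kernel row
    return kc @ alphas

def nearest_identity(x, train_feats, labels, model):
    """Classify a probe face by its nearest neighbour in KPCA space."""
    d = np.linalg.norm(train_feats - project(x, model), axis=1)
    return labels[int(np.argmin(d))]
```

    Changing `degree` here corresponds to the polynomial-power effect on recognition rate discussed in the abstract.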

  4. The fusiform face area: a cortical region specialized for the perception of faces

    PubMed Central

    Kanwisher, Nancy; Yovel, Galit

    2006-01-01

    face perception, by addressing (and rebutting) five of the most widely discussed alternatives to this hypothesis. In §4, we consider recent findings that are beginning to provide clues into the computations conducted in the FFA and the nature of the representations the FFA extracts from faces. We argue that the FFA is engaged both in detecting faces and in extracting the necessary perceptual information to recognize them, and that the properties of the FFA mirror previously identified behavioural signatures of face-specific processing (e.g. the face-inversion effect). Section 5 asks how the computations and representations in the FFA differ from those occurring in other nearby regions of cortex that respond strongly to faces and objects. The evidence indicates clear functional dissociations between these regions, demonstrating that the FFA shows not only functional specificity but also area specificity. We end by speculating in §6 on some of the broader questions raised by current research on the FFA, including the developmental origins of this region and the question of whether faces are unique versus whether similarly specialized mechanisms also exist for other domains of high-level perception and cognition. PMID:17118927

  5. The construction FACE database - Codifying the NIOSH FACE reports.

    PubMed

    Dong, Xiuwen Sue; Largay, Julie A; Wang, Xuanwen; Cain, Chris Trahan; Romano, Nancy

    2017-09-01

    The National Institute for Occupational Safety and Health (NIOSH) has published reports detailing the results of investigations on selected work-related fatalities through the Fatality Assessment and Control Evaluation (FACE) program since 1982. Information from construction-related FACE reports was coded into the Construction FACE Database (CFD). Use of the CFD was illustrated by analyzing major CFD variables. A total of 768 construction fatalities were included in the CFD. Information on decedents, safety training, use of PPE, and FACE recommendations were coded. Analysis shows that one in five decedents in the CFD died within the first two months on the job; 75% and 43% of reports recommended having safety training or installing protection equipment, respectively. Comprehensive research using FACE reports may improve understanding of work-related fatalities and provide much-needed information on injury prevention. The CFD allows researchers to analyze the FACE reports quantitatively and efficiently. Copyright © 2017 Elsevier Ltd and National Safety Council. All rights reserved.

  6. Deficient cortical face-sensitive N170 responses and basic visual processing in schizophrenia.

    PubMed

    Maher, S; Mashhoon, Y; Ekstrom, T; Lukas, S; Chen, Y

    2016-01-01

    Face detection, an ability to identify a visual stimulus as a face, is impaired in patients with schizophrenia. It is unclear whether impaired face processing in this psychiatric disorder results from face-specific domains or stems from more basic visual domains. In this study, we examined cortical face-sensitive N170 response in schizophrenia, taking into account deficient basic visual contrast processing. We equalized visual contrast signals among patients (n=20) and controls (n=20) and between face and tree images, based on their individual perceptual capacities (determined using psychophysical methods). We measured N170, a putative temporal marker of face processing, during face detection and tree detection. In controls, N170 amplitudes were significantly greater for faces than trees across all three visual contrast levels tested (perceptual threshold, two times perceptual threshold and 100%). In patients, however, N170 amplitudes did not differ between faces and trees, indicating diminished face selectivity (indexed by the differential responses to face vs. tree). These results indicate a lack of face-selectivity in temporal responses of brain machinery putatively responsible for face processing in schizophrenia. This neuroimaging finding suggests that face-specific processing is compromised in this psychiatric disorder. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. The own-age face recognition bias is task dependent.

    PubMed

    Proietti, Valentina; Macchi Cassia, Viola; Mondloch, Catherine J

    2015-08-01

    The own-age bias (OAB) in face recognition (more accurate recognition of own-age than other-age faces) is robust among young adults but not older adults. We investigated the OAB under two different task conditions. In Experiment 1 young and older adults (who reported more recent experience with own than other-age faces) completed a match-to-sample task with young and older adult faces; only young adults showed an OAB. In Experiment 2 young and older adults completed an identity detection task in which we manipulated the identity strength of target and distracter identities by morphing each face with an average face in 20% steps. Accuracy increased with identity strength and facial age influenced older adults' (but not younger adults') strategy, but there was no evidence of an OAB. Collectively, these results suggest that the OAB depends on task demands and may be absent when searching for one identity. © 2014 The British Psychological Society.

  8. Contributions of individual face features to face discrimination.

    PubMed

    Logan, Andrew J; Gordon, Gael E; Loffler, Gunter

    2017-08-01

    Faces are highly complex stimuli that contain a host of information. Such complexity poses the following questions: (a) do observers exhibit preferences for specific information? (b) how does sensitivity to individual face parts compare? These questions were addressed by quantifying sensitivity to different face features. Discrimination thresholds were determined for synthetic faces under the following conditions: (i) 'full face': all face features visible; (ii) 'isolated feature': single feature presented in isolation; (iii) 'embedded feature': all features visible, but only one feature modified. Mean threshold elevations for isolated features, relative to full faces, were 0.84×, 1.08×, 2.12×, 3.34×, 4.07× and 4.47× for head-shape, hairline, nose, mouth, eyes and eyebrows respectively. Hence, when two full faces can be discriminated at threshold, the difference between the eyes is about four times less than what is required when discriminating between isolated eyes. In all cases, sensitivity was higher when features were presented in isolation than when they were embedded within a face context (threshold elevations of 0.94×, 1.74×, 2.67×, 2.90×, 5.94× and 9.94×). This reveals a specific pattern of sensitivity to face information. Observers are between two and four times more sensitive to external than internal features. The pattern for internal features (higher sensitivity for the nose, compared to mouth, eyes and eyebrows) is consistent with lower sensitivity for those parts affected by facial dynamics (e.g. facial expressions). That isolated features are easier to discriminate than embedded features supports a holistic face processing mechanism which impedes extraction of information about individual features from full faces. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Facing the Spectator

    PubMed Central

    Koenderink, Jan; van Doorn, Andrea; Pinna, Baingio

    2016-01-01

    We investigated the familiar phenomenon of the uncanny feeling that represented people in frontal pose invariably appear to “face you” from wherever you stand. We deploy two different methods. The stimuli include the conventional one—a flat portrait rocking back and forth about a vertical axis—augmented with two novel variations. In one alternative, the portrait frame rotates whereas the actual portrait stays motionless and fronto-parallel; in the other, we replace the (flat!) portrait with a volumetric object. These variations yield exactly the same optical stimulation in frontal view, but become grossly different in very oblique views. We also let participants sample their momentary awareness through “gauge object” settings in static displays. From our results, we conclude that the psychogenesis of visual awareness maintains a number—at least two, but most likely more—of distinct spatial frameworks simultaneously involving “cue–scission.” Cues may be effective in one of these spatial frameworks but ineffective or functionally different in other ones. PMID:27895885

  10. Dermatology facing autoinflammatory syndrome.

    PubMed

    Alecu, Mihail; Coman, Gabriela; Muşetescu, Alina; Cojoacă, Marian Emanuel; Coman, Oana Andreia

    2015-01-01

    Cutaneous symptoms are characteristic of the autoinflammatory disorders (AIDs), both in the classical autoinflammatory phenotype and in most disorders included in this syndrome, but they are neither specific nor constant. Several skin disorders (pyoderma gangrenosum and pustular acne) may be encountered either in isolation or associated with autoinflammatory symptoms, forming well-defined clinical entities within the autoinflammatory syndrome. The high prevalence of cutaneous manifestations is an important characteristic of AIDs. The presence of cutaneous symptoms in AIDs opens the perspective of understanding the contribution of innate immunity mechanisms involved in skin pathology. It is possible that many diseases present alterations, to various degrees, of the innate immune mechanisms. Recently, dermatology has faced two challenges connected to AIDs. The first involves the diagnosis of skin symptoms in a clinical autoinflammatory setting and the investigative approach to identify a disorder classified as an AID. The second is to identify the altered mechanisms of innate immunity among the pathogenetic mechanisms of known dermatological diseases (e.g., neutrophilic dermatoses). On the other hand, cutaneous symptoms are in certain cases regarded as a criterion to assess the efficacy of specific or non-specific therapies with monoclonal antibodies in disorders included in AIDs. Dermatology benefits greatly from the identification and knowledge of AIDs due to the role of innate immunity in skin pathogeny and also due to the large range of clinical forms resulting from the association of skin symptoms with other disorders included in this group.

  11. Faces of Pluto

    NASA Image and Video Library

    2015-06-11

    These images, taken by NASA's New Horizons' Long Range Reconnaissance Imager (LORRI), show four different "faces" of Pluto as it rotates about its axis with a period of 6.4 days. All the images have been rotated to align Pluto's rotational axis with the vertical direction (up-down) on the figure, as depicted schematically in the upper left. From left to right, the images were taken when Pluto's central longitude was 17, 63, 130, and 243 degrees, respectively. The date of each image, the distance of the New Horizons spacecraft from Pluto, and the number of days until the closest approach to Pluto are all indicated in the figure. These images show dramatic variations in Pluto's surface features as it rotates. When a very large, dark region near Pluto's equator appears near the limb, it gives Pluto a distinct, but false, non-spherical appearance. Pluto is known from previous data to be almost perfectly spherical. These images are displayed at four times the native LORRI image size, and have been processed using a method called deconvolution, which sharpens the original images to enhance features on Pluto. Deconvolution can occasionally introduce "false" details, so the finest details in these pictures will need to be confirmed by images taken from closer range in the next few weeks. All of the images are displayed using the same brightness scale. http://photojournal.jpl.nasa.gov/catalog/PIA19686

  12. About-face on face recognition ability and holistic processing.

    PubMed

    Richler, Jennifer J; Floyd, R Jackie; Gauthier, Isabel

    2015-01-01

    Previous work found a small but significant relationship between holistic processing measured with the composite task and face recognition ability measured by the Cambridge Face Memory Test (CFMT; Duchaine & Nakayama, 2006). Surprisingly, recent work using a different measure of holistic processing (Vanderbilt Holistic Face Processing Test [VHPT-F]; Richler, Floyd, & Gauthier, 2014) and a larger sample found no evidence for such a relationship. In Experiment 1 we replicate this unexpected result, finding no relationship between holistic processing (VHPT-F) and face recognition ability (CFMT). A key difference between the VHPT-F and other holistic processing measures is that unique face parts are used on each trial in the VHPT-F, unlike in other tasks where a small set of face parts repeat across the experiment. In Experiment 2, we test the hypothesis that correlations between the CFMT and holistic processing tasks are driven by stimulus repetition that allows for learning during the composite task. Consistent with our predictions, CFMT performance was correlated with holistic processing in the composite task when a small set of face parts repeated over trials, but not when face parts did not repeat. A meta-analysis confirms that relationships between the CFMT and holistic processing depend on stimulus repetition. These results raise important questions about what is being measured by the CFMT, and challenge current assumptions about why faces are processed holistically.

  13. About-face on face recognition ability and holistic processing

    PubMed Central

    Richler, Jennifer J.; Floyd, R. Jackie; Gauthier, Isabel

    2015-01-01

    Previous work found a small but significant relationship between holistic processing measured with the composite task and face recognition ability measured by the Cambridge Face Memory Test (CFMT; Duchaine & Nakayama, 2006). Surprisingly, recent work using a different measure of holistic processing (Vanderbilt Holistic Face Processing Test [VHPT-F]; Richler, Floyd, & Gauthier, 2014) and a larger sample found no evidence for such a relationship. In Experiment 1 we replicate this unexpected result, finding no relationship between holistic processing (VHPT-F) and face recognition ability (CFMT). A key difference between the VHPT-F and other holistic processing measures is that unique face parts are used on each trial in the VHPT-F, unlike in other tasks where a small set of face parts repeat across the experiment. In Experiment 2, we test the hypothesis that correlations between the CFMT and holistic processing tasks are driven by stimulus repetition that allows for learning during the composite task. Consistent with our predictions, CFMT performance was correlated with holistic processing in the composite task when a small set of face parts repeated over trials, but not when face parts did not repeat. A meta-analysis confirms that relationships between the CFMT and holistic processing depend on stimulus repetition. These results raise important questions about what is being measured by the CFMT, and challenge current assumptions about why faces are processed holistically. PMID:26223027

  14. Face-n-Food: Gender Differences in Tuning to Faces

    PubMed Central

    Pavlova, Marina A.; Scheffler, Klaus; Sokolov, Alexander N.

    2015-01-01

    Faces represent valuable signals for social cognition and non-verbal communication. A wealth of research indicates that women tend to excel in recognition of facial expressions. However, it remains unclear whether females are better tuned to faces. We presented healthy adult females and males with a set of newly created food-plate images resembling faces (slightly bordering on the Giuseppe Arcimboldo style). In a spontaneous recognition task, participants were shown the images in a predetermined order, from the least to the most face-resembling. Females not only recognized the images as faces more readily (reporting face resemblance on images for which males still did not), but also gave more face responses overall. The findings are discussed in the light of gender differences in deficient face perception. As most neuropsychiatric, neurodevelopmental and psychosomatic disorders characterized by social brain abnormalities are sex-specific, the task may serve as a valuable tool for uncovering impairments in visual face processing. PMID:26154177

  15. Human face processing is tuned to sexual age preferences

    PubMed Central

    Ponseti, J.; Granert, O.; van Eimeren, T.; Jansen, O.; Wolff, S.; Beier, K.; Deuschl, G.; Bosinski, H.; Siebner, H.

    2014-01-01

    Human faces can motivate nurturing behaviour or sexual behaviour when adults see a child or an adult face, respectively. This suggests that face processing is tuned to detecting age cues of sexual maturity to stimulate the appropriate reproductive behaviour: either caretaking or mating. In paedophilia, sexual attraction is directed to sexually immature children. Therefore, we hypothesized that brain networks that normally are tuned to mature faces of the preferred gender show an abnormal tuning to sexual immature faces in paedophilia. Here, we use functional magnetic resonance imaging (fMRI) to test directly for the existence of a network which is tuned to face cues of sexual maturity. During fMRI, participants sexually attracted to either adults or children were exposed to various face images. In individuals attracted to adults, adult faces activated several brain regions significantly more than child faces. These brain regions comprised areas known to be implicated in face processing, and sexual processing, including occipital areas, the ventrolateral prefrontal cortex and, subcortically, the putamen and nucleus caudatus. The same regions were activated in paedophiles, but with a reversed preferential response pattern. PMID:24850896

  16. Learned face-voice pairings facilitate visual search.

    PubMed

    Zweig, L Jacob; Suzuki, Satoru; Grabowecky, Marcia

    2015-04-01

    Voices provide a rich source of information that is important for identifying individuals and for social interaction. During search for a face in a crowd, voices often accompany visual information, and they facilitate localization of the sought-after individual. However, it is unclear whether this facilitation occurs primarily because the voice cues the location of the face or because it also increases the salience of the associated face. Here we demonstrate that a voice that provides no location information nonetheless facilitates visual search for an associated face. We trained novel face-voice associations and verified learning using a two-alternative forced choice task in which participants had to correctly match a presented voice to the associated face. Following training, participants searched for a previously learned target face among other faces while hearing one of the following sounds (localized at the center of the display): a congruent learned voice, an incongruent but familiar voice, an unlearned and unfamiliar voice, or a time-reversed voice. Only the congruent learned voice speeded visual search for the associated face. This result suggests that voices facilitate the visual detection of associated faces, potentially by increasing their visual salience, and that the underlying crossmodal associations can be established through brief training.

  17. Human face processing is tuned to sexual age preferences.

    PubMed

    Ponseti, J; Granert, O; van Eimeren, T; Jansen, O; Wolff, S; Beier, K; Deuschl, G; Bosinski, H; Siebner, H

    2014-05-01

    Human faces can motivate nurturing behaviour or sexual behaviour when adults see a child or an adult face, respectively. This suggests that face processing is tuned to detecting age cues of sexual maturity to stimulate the appropriate reproductive behaviour: either caretaking or mating. In paedophilia, sexual attraction is directed to sexually immature children. Therefore, we hypothesized that brain networks that normally are tuned to mature faces of the preferred gender show an abnormal tuning to sexual immature faces in paedophilia. Here, we use functional magnetic resonance imaging (fMRI) to test directly for the existence of a network which is tuned to face cues of sexual maturity. During fMRI, participants sexually attracted to either adults or children were exposed to various face images. In individuals attracted to adults, adult faces activated several brain regions significantly more than child faces. These brain regions comprised areas known to be implicated in face processing, and sexual processing, including occipital areas, the ventrolateral prefrontal cortex and, subcortically, the putamen and nucleus caudatus. The same regions were activated in paedophiles, but with a reversed preferential response pattern. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  18. Experimental comparisons of face-to-face and anonymous real-time team competition in a networked gaming learning environment.

    PubMed

    Yu, Fu-Yun; Han, Chialing; Chan, Tak-Wai

    2008-08-01

    This study investigates the impact of anonymous, computerized, synchronized team competition on students' motivation, satisfaction, and interpersonal relationships. Sixty-eight fourth-graders participated in this study. A synchronous gaming learning system was developed to have dyads compete against each other in answering multiple-choice questions set in accordance with the school curriculum in two conditions (face-to-face and anonymous). The results showed that students who were exposed to the anonymous team competition condition responded significantly more positively than those in the face-to-face condition in terms of motivation and satisfaction at the 0.050 and 0.056 levels respectively. Although further studies regarding the effects of anonymous interaction in a networked gaming learning environment are imperative, the positive effects detected in this preliminary study indicate that anonymity is a viable feature for mitigating the negative effects that competition may inflict on motivation and satisfaction as reported in traditional face-to-face environments.

  19. Face recognition for criminal identification: An implementation of principal component analysis for face recognition

    NASA Astrophysics Data System (ADS)

    Abdullah, Nurul Azma; Saidi, Md. Jamri; Rahman, Nurul Hidayah Ab; Wen, Chuah Chai; Hamid, Isredza Rahmi A.

    2017-10-01

    In practice, identification of criminals in Malaysia is carried out through thumbprint identification. However, this approach is increasingly constrained, as criminals have become careful not to leave thumbprints at the scene. With the advent of security technology, cameras, especially CCTV, have been installed in many public and private areas to provide surveillance. CCTV footage can be used to identify suspects at a scene. However, because little software has been developed to automatically match faces in footage against recorded photographs of criminals, law enforcement continues to rely on thumbprint identification. In this paper, an automated facial recognition system for a criminal database is proposed using the well-known Principal Component Analysis (PCA) approach. The system is able to detect and recognize faces automatically, which will help law enforcement to identify suspects when no thumbprint is present at the scene. The results show that about 80% of input photos can be matched with the template data.
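
    The PCA (eigenface) approach named in the abstract projects flattened face images onto a small set of principal components and matches in that low-dimensional space. A minimal sketch, with illustrative function names and sizes (the paper's actual pipeline and parameters are not specified in the abstract):

```python
import numpy as np

def train_eigenfaces(faces, n_components=20):
    """faces: (n_samples, n_pixels) array of flattened grayscale images."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data yields the principal components (eigenfaces).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]        # (n_components, n_pixels)
    weights = centered @ components.T     # gallery projections
    return mean, components, weights

def match(probe, mean, components, weights):
    """Return the index of the gallery face closest to the probe in PCA space."""
    w = (probe - mean) @ components.T
    dists = np.linalg.norm(weights - w, axis=1)
    return int(np.argmin(dists))
```

    A probe identical to a gallery image projects to the same weights and is matched at distance zero; real systems threshold this distance to reject unknown faces.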

  20. Face adaptation improves gender discrimination.

    PubMed

    Yang, Hua; Shen, Jianhong; Chen, Juan; Fang, Fang

    2011-01-01

    Adaptation to a visual pattern can alter the sensitivities of neuronal populations encoding the pattern. However, the functional roles of adaptation, especially in high-level vision, are still equivocal. In the present study, we performed three experiments to investigate if face gender adaptation could affect gender discrimination. Experiments 1 and 2 revealed that adapting to a male/female face could selectively enhance discrimination for male/female faces. Experiment 3 showed that the discrimination enhancement induced by face adaptation could transfer across a substantial change in three-dimensional face viewpoint. These results provide further evidence suggesting that, similar to low-level vision, adaptation in high-level vision could calibrate the visual system to current inputs of complex shapes (i.e. face) and improve discrimination at the adapted characteristic. Copyright © 2010 Elsevier Ltd. All rights reserved.

  1. Holistic face training enhances face processing in developmental prosopagnosia

    PubMed Central

    Cohan, Sarah; Nakayama, Ken

    2014-01-01

    Prosopagnosia has largely been regarded as an untreatable disorder. However, recent case studies using cognitive training have shown that it is possible to enhance face recognition abilities in individuals with developmental prosopagnosia. Our goal was to determine if this approach could be effective in a larger population of developmental prosopagnosics. We trained 24 developmental prosopagnosics using a 3-week online face-training program targeting holistic face processing. Twelve subjects with developmental prosopagnosia were assessed before and after training, and the other 12 were assessed before and after a waiting period; they then performed the training and were assessed again. The assessments included measures of front-view face discrimination, face discrimination with view-point changes, measures of holistic face processing, and a 5-day diary to quantify potential real-world improvements. Compared with the waiting period, developmental prosopagnosics showed moderate but significant overall training-related improvements on measures of front-view face discrimination. Those who reached the more difficult levels of training (‘better’ trainees) showed the strongest improvements in front-view face discrimination and showed significantly increased holistic face processing to the point of being similar to that of unimpaired control subjects. Despite challenges in characterizing developmental prosopagnosics’ everyday face recognition and potential biases in self-report, results also showed modest but consistent self-reported diary improvements. In summary, we demonstrate that by using cognitive training that targets holistic processing, it is possible to enhance face perception across a group of developmental prosopagnosics and further suggest that those who improved the most on the training task received the greatest benefits. PMID:24691394

  2. Glued to Which Face? Attentional Priority Effect of Female Babyface and Male Mature Face.

    PubMed

    Zheng, Wenwen; Luo, Ting; Hu, Chuan-Peng; Peng, Kaiping

    2018-01-01

    A more babyfaced individual is perceived as more child-like, and this impression, known as the babyface effect, has an impact on social life across various age groups. In this study, the influence of babyfaces on visual selective attention was tested with cognitive tasks, demonstrating that female babyfaces and male mature faces draw participants' attention such that observers disengage their gaze from them more slowly. In Experiment 1, a detection task was used to test the influence of babyfaces on visual selective attention. A babyface and a mature face of the same gender were presented simultaneously, with a letter overlaid on one of them. Reaction times were shorter when the target letter was overlaid on a female babyface or a male mature face, suggesting an attention capture effect. To explore how this competition is influenced by attentional resources, we conducted Experiment 2 with a spatial cueing paradigm, controlling attentional resources through cue validity and inter-stimulus interval. In this task, the female babyface and the male mature face prolonged responses to spatially separated targets under the condition of an invalid, long-interval pre-cue, replicating the result of Experiment 1. This indicates that female babyfaces and male mature faces glued visual selective attention once attentional resources were directed to them. To further investigate subliminal influences of babyfaces, we used a continuous flash suppression paradigm in Experiment 3. The results again showed the advantage of female babyfaces and male mature faces: they broke through suppression faster than other faces. Our results provide preliminary evidence that female babyfaces and male mature faces can reliably glue visual selective attention, both supraliminally and subliminally.

  4. Emotion Words: Adding Face Value.

    PubMed

    Fugate, Jennifer M B; Gendron, Maria; Nakashima, Satoshi F; Barrett, Lisa Feldman

    2017-06-12

    Despite a growing number of studies suggesting that emotion words affect perceptual judgments of emotional stimuli, little is known about how emotion words affect perceptual memory for emotional faces. In Experiments 1 and 2 we tested how emotion words (compared with control words) affected participants' abilities to select a target emotional face from among distractor faces. Participants were generally more likely to false alarm to distractor emotional faces when primed with an emotion word congruent with the face (compared with a control word). Moreover, participants showed both decreased sensitivity (d') to discriminate between target and distractor faces, as well as altered response biases (c; more likely to answer "yes") when primed with an emotion word (compared with a control word). In Experiment 3 we showed that emotion words had more of an effect on perceptual memory judgments when the structural information in the target face was limited, as well as when participants were only able to categorize the face with a partially congruent emotion word. The overall results are consistent with the idea that emotion words affect the encoding of emotional faces in perceptual memory. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
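
    The sensitivity (d') and response bias (c) measures reported above are standard signal detection quantities computed from hit and false-alarm rates. A minimal sketch with made-up rates for illustration (the formulas are standard; the function name is ours):

```python
from statistics import NormalDist

def dprime_and_c(hit_rate, fa_rate):
    """Signal detection sensitivity d' and criterion c from hit and
    false-alarm rates (rates must be strictly between 0 and 1)."""
    z = NormalDist().inv_cdf          # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    # Negative c indicates a liberal bias (more "yes" responses),
    # as the abstract describes for emotion-word primes.
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c
```

    In practice, hit and false-alarm rates of exactly 0 or 1 are first adjusted (e.g., with a log-linear correction) before applying the inverse CDF.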

  5. Cyber- and Face-to-Face Bullying: Who Crosses Over?

    ERIC Educational Resources Information Center

    Shin, Hwayeon Helene; Braithwaite, Valerie; Ahmed, Eliza

    2016-01-01

    A total of 3956 children aged 12-13 years who completed the Longitudinal Study of Australian Children (LSAC Wave 5) were asked about their experiences of traditional face-to-face bullying and cyberbullying in the last month. In terms of prevalence, sixty percent of the sample had been involved in traditional bullying as the victim and/or the…

  6. Blended Outreach: Face-to-Face and Remote Programs

    ERIC Educational Resources Information Center

    Poeppelmeyer, Diana

    2011-01-01

    The Texas School for the Deaf (TSD) has two missions. One is to provide educational services to deaf and hard of hearing students and their families on the Austin campus--this is the traditional, face-to-face, center-based service model. The other is to serve as a resource center for the state, providing information, referral, programs, and…

  7. Future Schools: Blending Face-to-Face and Online Learning

    ERIC Educational Resources Information Center

    Schorr, Jonathan; McGriff, Deborah

    2012-01-01

    "Hybrid schools" are schools that combine "face-to-face" education in a specific place with online instruction. In this article, the authors describe school models which offer a vision for what deeply integrated technology can mean for children's education, for the way schools are structured, and for the promise of greater…

  8. View of Face A and Face B Arrays, looking northeast ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View of Face A and Face B Arrays, looking northeast - Beale Air Force Base, Perimeter Acquisition Vehicle Entry Phased-Array Warning System, Technical Equipment Building, End of Spencer Paul Road, north of Warren Shingle Road (14th Street), Marysville, Yuba County, CA

  9. Looking northwest, Face B Array to left, Face C (rear) ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Looking northwest, Face B Array to left, Face C (rear) center, Power Plant (Building 5761), to right - Beale Air Force Base, Perimeter Acquisition Vehicle Entry Phased-Array Warning System, Technical Equipment Building, End of Spencer Paul Road, north of Warren Shingle Road (14th Street), Marysville, Yuba County, CA

  10. Face-to-face interference in typical and atypical development

    PubMed Central

    Riby, Deborah M; Doherty-Sneddon, Gwyneth; Whittle, Lisa

    2012-01-01

    Visual communication cues facilitate interpersonal communication. It is important that we look at faces to retrieve and subsequently process such cues. It is also important that we sometimes look away from faces as they increase cognitive load that may interfere with online processing. Indeed, when typically developing individuals hold face gaze it interferes with task completion. In this novel study we quantify face interference for the first time in Williams syndrome (WS) and Autism Spectrum Disorder (ASD). These disorders of development impact on cognition and social attention, but how do faces interfere with cognitive processing? Individuals developing typically as well as those with ASD (n = 19) and WS (n = 16) were recorded during a question and answer session that involved mathematics questions. In phase 1 gaze behaviour was not manipulated, but in phase 2 participants were required to maintain eye contact with the experimenter at all times. Looking at faces decreased task accuracy for individuals who were developing typically. Critically, the same pattern was seen in WS and ASD, whereby task performance decreased when participants were required to hold face gaze. The results show that looking at faces interferes with task performance in all groups. This finding requires the caveat that individuals with WS and ASD found it harder than individuals who were developing typically to maintain eye contact throughout the interaction. Individuals with ASD struggled to hold eye contact at all points of the interaction while those with WS found it especially difficult when thinking. PMID:22356183

  11. Face shape differs in phylogenetically related populations.

    PubMed

    Hopman, Saskia M J; Merks, Johannes H M; Suttie, Michael; Hennekam, Raoul C M; Hammond, Peter

    2014-11-01

    3D analysis of facial morphology has delineated facial phenotypes in many medical conditions and detected fine grained differences between typical and atypical patients to inform genotype-phenotype studies. Next-generation sequencing techniques have enabled extremely detailed genotype-phenotype correlative analysis. Such comparisons typically employ control groups matched for age, sex and ethnicity and the distinction between ethnic categories in genotype-phenotype studies has been widely debated. The phylogenetic tree based on genetic polymorphism studies divides the world population into nine subpopulations. Here we show statistically significant face shape differences between two European Caucasian populations of close phylogenetic and geographic proximity from the UK and The Netherlands. The average face shape differences between the Dutch and UK cohorts were visualised in dynamic morphs and signature heat maps, and quantified for their statistical significance using both conventional anthropometry and state of the art dense surface modelling techniques. Our results demonstrate significant differences between Dutch and UK face shape. Other studies have shown that genetic variants influence normal facial variation. Thus, face shape difference between populations could reflect underlying genetic difference. This should be taken into account in genotype-phenotype studies and we recommend that in those studies reference groups be established in the same population as the individuals who form the subject of the study.

  12. Face Alignment via Regressing Local Binary Features.

    PubMed

    Ren, Shaoqing; Cao, Xudong; Wei, Yichen; Sun, Jian

    2016-03-01

    This paper presents a highly efficient and accurate regression approach for face alignment. Our approach has two novel components: 1) a set of local binary features and 2) a locality principle for learning those features. The locality principle guides us to learn a set of highly discriminative local binary features for each facial landmark independently. The obtained local binary features are used to jointly learn a linear regression for the final output. This approach achieves state-of-the-art results when tested on the most challenging benchmarks to date. Furthermore, because extracting and regressing local binary features is computationally very cheap, our system is much faster than previous methods. It achieves over 3000 frames per second (FPS) on a desktop or 300 FPS on a mobile phone for locating a few dozen landmarks. We also study a key issue that is important but has received little attention in previous research: the face detector used to initialize alignment. We investigate several face detectors and quantitatively evaluate how they affect alignment accuracy. We find that an alignment-friendly detector can further greatly boost the accuracy of our alignment method, reducing the error by up to 16% in relative terms. To facilitate practical use of face detection/alignment methods, we also propose a convenient metric to measure how well suited a detector is for alignment initialization.
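
    The two components described above — per-landmark learned binary features followed by a joint linear regression — can be sketched on synthetic data using scikit-learn stand-ins: small random forests learn the local features (tree leaf indices, one-hot encoded, serve as the binary features), and ridge regression plays the role of the global linear step. All sizes and names below are illustrative, not the paper's actual configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.preprocessing import OneHotEncoder

# Toy data: local appearance features per landmark, and the target
# landmark-position increments (dx, dy) to regress.
rng = np.random.default_rng(0)
n_samples, n_landmarks, n_feats = 200, 3, 8
X_local = rng.random((n_samples, n_landmarks, n_feats))
y = rng.random((n_samples, n_landmarks * 2))

# Step 1: one small forest per landmark; its leaf indices are the
# learned local binary features for that landmark.
leaf_blocks = []
for l in range(n_landmarks):
    forest = RandomForestRegressor(n_estimators=5, max_depth=3, random_state=0)
    forest.fit(X_local[:, l, :], y[:, 2 * l: 2 * l + 2])
    leaf_blocks.append(forest.apply(X_local[:, l, :]))  # (n_samples, n_trees)

# Step 2: one-hot encode the leaf indices into a sparse binary vector
# and jointly learn a linear regression to the full shape increment.
enc = OneHotEncoder(handle_unknown="ignore")
binary_feats = enc.fit_transform(np.hstack(leaf_blocks))
model = Ridge().fit(binary_feats, y)
pred = model.predict(binary_feats)
```

    In the actual method this regression is applied in a cascade, each stage refining the landmark estimates produced by the previous one.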

  13. Finding Hope in the Face-to-Face.

    PubMed

    Edgoose, Jennifer Y C; Edgoose, Julian M

    2017-05-01

    What does it mean to look into the face of a patient who looks back? Face-to-face encounters are at the heart of the patient-clinician relationship but their singular significance is often lost amid the demands of today's high-tech, metric-driven health care systems. Using the framework provided by the philosopher and Holocaust survivor Emmanuel Levinas, the authors explore the unique responsibility and potential for hope found only in face-to-face encounters. Revisiting this most fundamental attribute of medicine is likely our greatest chance to reclaim who we are as clinicians and why we do what we do. © 2017 Annals of Family Medicine, Inc.

  14. Atypical face shape and genomic structural variants in epilepsy

    PubMed Central

    Chinthapalli, Krishna; Bartolini, Emanuele; Novy, Jan; Suttie, Michael; Marini, Carla; Falchi, Melania; Fox, Zoe; Clayton, Lisa M. S.; Sander, Josemir W.; Guerrini, Renzo; Depondt, Chantal; Hennekam, Raoul; Hammond, Peter

    2012-01-01

    Many pathogenic structural variants of the human genome are known to cause facial dysmorphism. During the past decade, pathogenic structural variants have also been found to be an important class of genetic risk factor for epilepsy. In other fields, face shape has been assessed objectively using 3D stereophotogrammetry and dense surface models. We hypothesized that computer-based analysis of 3D face images would detect subtle facial abnormality in people with epilepsy who carry pathogenic structural variants as determined by chromosome microarray. In 118 children and adults attending three European epilepsy clinics, we used an objective measure called Face Shape Difference to show that those with pathogenic structural variants have a significantly more atypical face shape than those without such variants. This is true when analysing the whole face, or the periorbital region or the perinasal region alone. We then tested the predictive accuracy of our measure in a second group of 63 patients. Using a minimum threshold to detect face shape abnormalities with pathogenic structural variants, we found high sensitivity (4/5, 80% for whole face; 3/5, 60% for periorbital and perinasal regions) and specificity (45/58, 78% for whole face and perinasal regions; 40/58, 69% for periorbital region). We show that the results do not seem to be affected by facial injury, facial expression, intellectual disability, drug history or demographic differences. Finally, we use bioinformatics tools to explore relationships between facial shape and gene expression within the developing forebrain. Stereophotogrammetry and dense surface models are powerful, objective, non-contact methods of detecting relevant face shape abnormalities. We demonstrate that they are useful in identifying atypical face shape in adults or children with structural variants, and they may give insights into the molecular genetics of facial development. PMID:22975390
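
    The sensitivity and specificity figures quoted above are simple ratios of the reported counts; a minimal sketch (the function name is ours), checked against the whole-face numbers from the abstract:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Whole-face figures from the abstract: 4/5 detected carriers (80%),
# 45/58 correctly classified non-carriers (~78%).
sensitivity, specificity = sens_spec(tp=4, fn=1, tn=45, fp=13)
```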

  15. Face to face with emotion: holistic face processing is modulated by emotional state.

    PubMed

    Curby, Kim M; Johnson, Kareem J; Tyson, Alyssa

    2012-01-01

    Negative emotions are linked with a local, rather than global, visual processing style, which may preferentially facilitate feature-based, relative to holistic, processing mechanisms. Because faces are typically processed holistically, and because social contexts are prime elicitors of emotions, we examined whether negative emotions decrease holistic processing of faces. We induced positive, negative, or neutral emotions via film clips and measured holistic processing before and after the induction: participants made judgements about cued parts of chimeric faces, and holistic processing was indexed by the interference caused by task-irrelevant face parts. Emotional state significantly modulated face-processing style, with the negative emotion induction leading to decreased holistic processing. Furthermore, self-reported change in emotional state correlated with changes in holistic processing. These results contrast with general assumptions that holistic processing of faces is automatic and immune to outside influences, and they illustrate emotion's power to modulate socially relevant aspects of visual perception.

  16. Parallel Processing in Face Perception

    ERIC Educational Resources Information Center

    Martens, Ulla; Leuthold, Hartmut; Schweinberger, Stefan R.

    2010-01-01

    The authors examined face perception models with regard to the functional and temporal organization of facial identity and expression analysis. Participants performed a manual 2-choice go/no-go task to classify faces, where response hand depended on facial familiarity (famous vs. unfamiliar) and response execution depended on facial expression…

  17. The So-Called Face

    NASA Image and Video Library

    2002-05-21

    The so-called Face on Mars can be seen slightly above center and to the right in this NASA Mars Odyssey image. This 3-km-long knob was first imaged by NASA's Viking spacecraft in the 1970s and, to some, resembled a face carved into the rocks of Mars.

  18. More efficient rejection of happy than of angry face distractors in visual search.

    PubMed

    Horstmann, Gernot; Scharlau, Ingrid; Ansorge, Ulrich

    2006-12-01

    In the present study, we examined whether the detection advantage for negative-face targets in crowds of positive-face distractors over positive-face targets in crowds of negative faces can be explained by differentially efficient distractor rejection. Search Condition A demonstrated more efficient distractor rejection with negative-face targets in positive-face crowds than vice versa. Search Condition B showed that target identity alone is not sufficient to account for this effect, because there was no difference in processing efficiency for positive- and negative-face targets within neutral crowds. Search Condition C showed differentially efficient processing with neutral-face targets among positive- or negative-face distractors. These results were obtained with both a within-participants (Experiment 1) and a between-participants (Experiment 2) design. The pattern of results is consistent with the assumption that efficient rejection of positive (more homogenous) distractors is an important determinant of performance in search among (face) distractors.

  19. Development of Preferences for Differently Aged Faces of Different Races.

    PubMed

    Heron-Delaney, Michelle; Quinn, Paul C; Damon, Fabrice; Lee, Kang; Pascalis, Olivier

    2018-02-01

    Children's experiences with differently aged faces change in the course of development. During infancy, most faces encountered are adult; however, as children mature, exposure to child faces becomes more extensive. Does this change in experience influence preference for differently aged faces? The preferences of children for adult versus child, and adult versus infant faces were investigated. Caucasian 3- to 6-year-olds and adults were presented with adult/child and adult/infant face pairs which were either Caucasian or Asian (race consistent within pairs). Younger children (3 to 4 years) preferred adults over children, whereas older children (5 to 6 years) preferred children over adults. This preference was only detected for Caucasian faces. These data support a "here and now" model of the development of face age processing from infancy to childhood. In particular, the findings suggest that growing experience with peers influences age preferences and that race impacts on these preferences. In contrast, adults preferred infants and children over adults when the faces were Caucasian or Asian, suggesting an increasing influence of a baby schema and a decreasing influence of race. The different preferences of younger children, older children, and adults also suggest discontinuity and the possibility of different mechanisms at work during different developmental periods.

  20. Robust Point Set Matching for Partial Face Recognition.

    PubMed

    Weng, Renliang; Lu, Jiwen; Tan, Yap-Peng

    2016-03-01

    Over the past three decades, a number of face recognition methods have been proposed in computer vision, and most of them use holistic face images for person identification. In many real-world scenarios, especially unconstrained environments, human faces might be occluded by other objects, and it is difficult to obtain fully holistic face images for recognition. To address this, we propose a new partial face recognition approach to recognize persons of interest from their partial faces. Given a gallery image and a probe face patch, we first detect keypoints and extract their local textural features. Then, we propose a robust point set matching method to discriminatively match the two extracted local feature sets, where both the textural information and the geometrical information of the local features are explicitly used for matching simultaneously. Finally, the similarity of two faces is computed as the distance between these two aligned feature sets. Experimental results on four public face data sets show the effectiveness of the proposed approach.
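
    The paper's matching method is more elaborate, but the core idea — scoring keypoint correspondences by a cost that combines textural (descriptor) distance with geometrical (position) distance, then reading face similarity off the matched pairs — can be illustrated with a simple one-to-one assignment. The weighting `alpha` and the function name are ours:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_feature_sets(desc_a, pts_a, desc_b, pts_b, alpha=0.5):
    """Match two keypoint sets using both textural descriptors and
    keypoint positions; return matched index pairs and a face distance.
    alpha weights descriptor cost against positional cost (illustrative)."""
    cost = alpha * cdist(desc_a, desc_b) + (1 - alpha) * cdist(pts_a, pts_b)
    rows, cols = linear_sum_assignment(cost)   # optimal one-to-one matching
    return list(zip(rows, cols)), cost[rows, cols].mean()
```

    Matching a feature set against itself yields the identity correspondence at distance zero; smaller distances between different faces indicate greater similarity.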

  1. Dynamic encoding of face information in the human fusiform gyrus.

    PubMed

    Ghuman, Avniel Singh; Brunet, Nicolas M; Li, Yuanning; Konecky, Roma O; Pyles, John A; Walls, Shawn A; Destefino, Vincent; Wang, Wei; Richardson, R Mark

    2014-12-08

    Humans' ability to rapidly and accurately detect, identify and classify faces under variable conditions derives from a network of brain regions highly tuned to face information. The fusiform face area (FFA) is thought to be a computational hub for face processing; however, temporal dynamics of face information processing in FFA remains unclear. Here we use multivariate pattern classification to decode the temporal dynamics of expression-invariant face information processing using electrodes placed directly on FFA in humans. Early FFA activity (50-75 ms) contained information regarding whether participants were viewing a face. Activity between 200 and 500 ms contained expression-invariant information about which of 70 faces participants were viewing along with the individual differences in facial features and their configurations. Long-lasting (500+ms) broadband gamma frequency activity predicted task performance. These results elucidate the dynamic computational role FFA plays in multiple face processing stages and indicate what information is used in performing these visual analyses.
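
    Time-resolved multivariate pattern classification of the kind described above is commonly implemented by training a classifier independently at each timepoint and tracking when decoding accuracy rises above chance. A rough sketch on synthetic trials with scikit-learn (toy dimensions; the study itself decoded intracranial FFA recordings):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy data: trials x channels x timepoints, with a binary label
# (e.g., face vs. non-face); a class-dependent signal is injected
# into the late timepoints to mimic post-stimulus information.
rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 120, 16, 50
X = rng.normal(size=(n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)
X[y == 1, :, 25:] += 1.0

# Decode the label separately at each timepoint: above-chance accuracy
# marks when the recorded activity carries category information.
scores = [
    cross_val_score(LogisticRegression(max_iter=1000),
                    X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
]
```

    On this toy data, accuracy hovers near chance before the injected signal and rises sharply afterwards, the same logic used to date the onset of face information in FFA.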

  2. Emotion-independent face recognition

    NASA Astrophysics Data System (ADS)

    De Silva, Liyanage C.; Esther, Kho G. P.

    2000-12-01

    Current face recognition techniques tend to work well when recognizing faces under small variations in lighting, facial expression and pose, but deteriorate under more extreme conditions. In this paper, a face recognition system to recognize faces of known individuals, despite variations in facial expression due to different emotions, is developed. The eigenface approach is used for feature extraction. Classification methods include Euclidean distance, back propagation neural network and generalized regression neural network. These methods yield 100% recognition accuracy when the training database is representative, containing one image representing the peak expression for each emotion of each person apart from the neutral expression. The feature vectors used for comparison in the Euclidean distance method and for training the neural network must be all the feature vectors of the training set. These results are obtained for a face database consisting of only four persons.

  3. Genetic specificity of face recognition.

    PubMed

    Shakeshaft, Nicholas G; Plomin, Robert

    2015-10-13

    Specific cognitive abilities in diverse domains are typically found to be highly heritable and substantially correlated with general cognitive ability (g), both phenotypically and genetically. Recent twin studies have found the ability to memorize and recognize faces to be an exception, being similarly heritable but phenotypically substantially uncorrelated both with g and with general object recognition. However, the genetic relationships between face recognition and other abilities (the extent to which they share a common genetic etiology) cannot be determined from phenotypic associations. In this, to our knowledge, first study of the genetic associations between face recognition and other domains, 2,000 18- and 19-year-old United Kingdom twins completed tests assessing their face recognition, object recognition, and general cognitive abilities. Results confirmed the substantial heritability of face recognition (61%), and multivariate genetic analyses found that most of this genetic influence is unique and not shared with other cognitive abilities.
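
    Twin-based heritability estimates such as the 61% above come from comparing monozygotic and dizygotic twin correlations. The study fits full multivariate genetic models, but Falconer's classic approximation conveys the basic logic; the correlation values below are made up purely for illustration:

```python
def falconer_h2(r_mz, r_dz):
    """Falconer's approximation: heritability from twin correlations.
    MZ twins share ~100% of segregating genes, DZ twins ~50%, so the
    doubled correlation difference estimates the genetic contribution."""
    return 2 * (r_mz - r_dz)

# Hypothetical correlations chosen to yield h2 = 0.61.
h2 = falconer_h2(r_mz=0.61, r_dz=0.305)
```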

  5. Face features and face configurations both contribute to visual crowding.

    PubMed

    Sun, Hsin-Mei; Balas, Benjamin

    2015-02-01

    Crowding refers to the inability to recognize an object in peripheral vision when other objects are presented nearby (Whitney & Levi Trends in Cognitive Sciences, 15, 160-168, 2011). A popular explanation of crowding is that features of the target and flankers are combined inappropriately when they are located within an integration field, thus impairing target recognition (Pelli, Palomares, & Majaj Journal of Vision, 4(12), 12:1136-1169, 2004). However, it remains unclear which features of the target and flankers are combined inappropriately to cause crowding (Levi Vision Research, 48, 635-654, 2008). For example, in a complex stimulus (e.g., a face), to what extent does crowding result from the integration of features at a part-based level or at the level of global processing of the configural appearance? In this study, we used a face categorization task and different types of flankers to examine how much the magnitude of visual crowding depends on the similarity of face parts or of global configurations. We created flankers with face-like features (e.g., the eyes, nose, and mouth) in typical and scrambled configurations to examine the impacts of part appearance and global configuration on the visual crowding of faces. Additionally, we used "electrical socket" flankers that mimicked first-order face configuration but had only schematic features, to examine the extent to which global face geometry impacted crowding. Our results indicated that both face parts and configurations contribute to visual crowding, suggesting that face similarity as realized under crowded conditions includes both aspects of facial appearance.

  6. Face-Lift Satisfaction Using the FACE-Q.

    PubMed

    Sinno, Sammy; Schwitzer, Jonathan; Anzai, Lavinia; Thorne, Charles H

    2015-08-01

    Face lifting is one of the most common operative procedures for facial aging and perhaps the procedure most synonymous with plastic surgery in the minds of the lay public, but no verifiable documentation of patient satisfaction exists in the literature. This study is the first to examine face-lift outcomes and patient satisfaction using a validated questionnaire. One hundred five patients undergoing a face lift performed by the senior author (C.H.T.) using a high, extended-superficial musculoaponeurotic system with submental platysma approximation technique were asked to complete anonymously the FACE-Q by e-mail. FACE-Q scores were assessed for each domain (range, 0 to 100), with higher scores indicating greater satisfaction with appearance or superior quality of life. Fifty-three patients completed the FACE-Q (50.5 percent response rate). Patients demonstrated high satisfaction with facial appearance (mean ± SD, 80.7 ± 22.3), and quality of life, including social confidence (90.4 ± 16.6), psychological well-being (92.8 ± 14.3), and early life impact (92.2 ± 16.4). Patients also reported extremely high satisfaction with their decision to undergo face lifting (90.5 ± 15.9). On average, patients felt they appeared 6.9 years younger than their actual age. Patients were most satisfied with the appearance of their nasolabial folds (86.2 ± 18.5), cheeks (86.1 ± 25.4), and lower face/jawline (86.0 ± 20.6), compared with their necks (78.1 ± 25.6) and area under the chin (67.9 ± 32.3). Patients who responded in this study were extremely satisfied with their decision to undergo face lifting and the outcomes and quality of life following the procedure.

  7. High precision automated face localization in thermal images: oral cancer dataset as test case

    NASA Astrophysics Data System (ADS)

    Chakraborty, M.; Raman, S. K.; Mukhopadhyay, S.; Patsa, S.; Anjum, N.; Ray, J. G.

    2017-02-01

    Automated face detection is the pivotal step in computer-vision-aided facial medical diagnosis and biometrics. This paper presents an automatic, subject-adaptive framework for accurate face detection in the long-infrared spectrum on our database for oral cancer detection, consisting of malignant, precancerous, and normal subjects of varied age groups. Previous work on oral cancer detection using Digital Infrared Thermal Imaging (DITI) reveals that patients and normal subjects differ significantly in their facial thermal distribution. It is therefore a challenging task to formulate a completely adaptive framework that reliably localizes the face in such a subject-specific modality. Our model first extracts the most probable facial regions by minimum error thresholding, followed by adaptive methods that leverage the horizontal and vertical projections of the segmented thermal image. Additionally, the model incorporates domain knowledge by exploiting the temperature difference between strategic locations of the face. To the best of our knowledge, this is the first work on detecting faces in thermal facial images comprising both patients and normal subjects. Previous work on face detection has not specifically targeted automated medical diagnosis; the face bounding boxes returned by those algorithms are thus loose and not apt for further medical automation. Our algorithm significantly outperforms contemporary face detection algorithms in terms of commonly used metrics for evaluating face detection accuracy. Since our method has been tested on a challenging dataset consisting of both patients and normal subjects of diverse age groups, it can be seamlessly adapted to any DITI-guided facial healthcare or biometric application.
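
    The projection-profile localization described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it substitutes a simple global threshold (the image mean) for minimum error thresholding, and the function name and the 10% profile cutoff are illustrative assumptions.

```python
import numpy as np

def localize_face(thermal, frac=0.1):
    """Rough face localization in a thermal image: threshold warm pixels,
    then take row/column projection profiles of the binary mask and keep
    the span where each profile exceeds `frac` of its maximum."""
    mask = thermal > thermal.mean()   # stand-in for minimum error thresholding
    rows = mask.sum(axis=1)           # horizontal projection (per row)
    cols = mask.sum(axis=0)           # vertical projection (per column)

    def span(profile):
        keep = np.where(profile > frac * profile.max())[0]
        return int(keep[0]), int(keep[-1])

    top, bottom = span(rows)
    left, right = span(cols)
    return top, bottom, left, right
```

    For example, on a synthetic 100x100 "thermal" image containing a single warm rectangular region, the function recovers that region's bounding box.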

  8. The Development of Face Perception in Infancy: Intersensory Interference and Unimodal Visual Facilitation

    ERIC Educational Resources Information Center

    Bahrick, Lorraine E.; Lickliter, Robert; Castellanos, Irina

    2013-01-01

    Although research has demonstrated impressive face perception skills of young infants, little attention has focused on conditions that enhance versus impair infant face perception. The present studies tested the prediction, generated from the intersensory redundancy hypothesis (IRH), that face discrimination, which relies on detection of visual…

  9. Intersensory Redundancy Hinders Face Discrimination in Preschool Children: Evidence for Visual Facilitation

    ERIC Educational Resources Information Center

    Bahrick, Lorraine E.; Krogh-Jespersen, Sheila; Argumosa, Melissa A.; Lopez, Hassel

    2014-01-01

    Although infants and children show impressive face-processing skills, little research has focused on the conditions that facilitate versus impair face perception. According to the intersensory redundancy hypothesis (IRH), face discrimination, which relies on detection of visual featural information, should be impaired in the context of…

  10. The Cambridge Face Memory Test for Children (CFMT-C): a new tool for measuring face recognition skills in childhood.

    PubMed

    Croydon, Abigail; Pimperton, Hannah; Ewing, Louise; Duchaine, Brad C; Pellicano, Elizabeth

    2014-09-01

    Face recognition ability follows a lengthy developmental course, not reaching maturity until well into adulthood. Valid and reliable assessments of face recognition memory ability are necessary to examine patterns of ability and disability in face processing, yet there is a dearth of such assessments for children. We modified a well-known test of face memory in adults, the Cambridge Face Memory Test (Duchaine & Nakayama, 2006, Neuropsychologia, 44, 576-585), to make it developmentally appropriate for children. To establish its utility, we administered either the upright or inverted versions of the computerised Cambridge Face Memory Test - Children (CFMT-C) to 401 children aged between 5 and 12 years. Our results show that the CFMT-C is sufficiently sensitive to demonstrate age-related gains in the recognition of unfamiliar upright and inverted faces, does not suffer from ceiling or floor effects, generates robust inversion effects, and is capable of detecting difficulties in face memory in children diagnosed with autism. Together, these findings indicate that the CFMT-C constitutes a new valid assessment tool for children's face recognition skills. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Modeling Human Dynamics of Face-to-Face Interaction Networks

    NASA Astrophysics Data System (ADS)

    Starnini, Michele; Baronchelli, Andrea; Pastor-Satorras, Romualdo

    2013-04-01

    Face-to-face interaction networks describe social interactions in human gatherings, and are the substrate for processes such as epidemic spreading and gossip propagation. The bursty nature of human behavior characterizes many aspects of empirical data, such as the distribution of conversation lengths, of conversations per person, or of interconversation times. Despite several recent attempts, a general theoretical understanding of the global picture emerging from data is still lacking. Here we present a simple model that reproduces quantitatively most of the relevant features of empirical face-to-face interaction networks. The model describes agents that perform a random walk in a two-dimensional space and are characterized by an attractiveness whose effect is to slow down the motion of people around them. The proposed framework sheds light on the dynamics of human interactions and can improve the modeling of dynamical processes taking place on the ensuing dynamical social networks.
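
    The model's core mechanism, random-walking agents whose motion is slowed by attractive neighbors, can be sketched as follows. This is a toy illustration under assumed dynamics, not the authors' exact model: the update rule, the parameter names (`radius`, `v`), and the use of the maximum neighbor attractiveness are all illustrative choices.

```python
import numpy as np

def step(pos, attract, rng, box=1.0, radius=0.05, v=0.01):
    """One synchronous update of a toy attractiveness model: each agent
    takes a random-direction step of length v, reduced by the maximum
    attractiveness among agents within `radius` (periodic boundaries)."""
    new = pos.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        near = (d > 0) & (d < radius)            # neighbors, excluding self
        slow = attract[near].max() if near.any() else 0.0
        angle = rng.uniform(0.0, 2.0 * np.pi)
        move = v * (1.0 - slow) * np.array([np.cos(angle), np.sin(angle)])
        new[i] = (pos[i] + move) % box
    return new
```

    An agent next to a maximally attractive neighbor stops moving, while an isolated agent takes full-length random steps; this slowing-down near attractive individuals is the qualitative ingredient the model uses to reproduce the bursty contact statistics of empirical data.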

  12. Can Faces Prime a Language?

    PubMed

    Woumans, Evy; Martin, Clara D; Vanden Bulcke, Charlotte; Van Assche, Eva; Costa, Albert; Hartsuiker, Robert J; Duyck, Wouter

    2015-09-01

    Bilinguals have two languages that are activated in parallel. During speech production, one of these languages must be selected on the basis of some cue. The present study investigated whether the face of an interlocutor can serve as such a cue. Spanish-Catalan and Dutch-French bilinguals were first familiarized with certain faces, each of which was associated with only one language, during simulated Skype conversations. Afterward, these participants performed a language production task in which they generated words associated with the words produced by familiar and unfamiliar faces displayed on-screen. When responding to familiar faces, participants produced words faster if the faces were speaking the same language as in the previous Skype simulation than if the same faces were speaking a different language. Furthermore, this language priming effect disappeared when it became clear that the interlocutors were actually bilingual. These findings suggest that faces can prime a language, but their cuing effect disappears when it turns out that they are unreliable as language cues. © The Author(s) 2015.

  13. Visual adaptation and face perception

    PubMed Central

    Webster, Michael A.; MacLeod, Donald I. A.

    2011-01-01

    The appearance of faces can be strongly affected by the characteristics of faces viewed previously. These perceptual after-effects reflect processes of sensory adaptation that are found throughout the visual system, but which have been considered only relatively recently in the context of higher level perceptual judgements. In this review, we explore the consequences of adaptation for human face perception, and the implications of adaptation for understanding the neural-coding schemes underlying the visual representation of faces. The properties of face after-effects suggest that they, in part, reflect response changes at high and possibly face-specific levels of visual processing. Yet, the form of the after-effects and the norm-based codes that they point to show many parallels with the adaptations and functional organization that are thought to underlie the encoding of perceptual attributes like colour. The nature and basis for human colour vision have been studied extensively, and we draw on ideas and principles that have been developed to account for norms and normalization in colour vision to consider potential similarities and differences in the representation and adaptation of faces. PMID:21536555

  14. Visual adaptation and face perception.

    PubMed

    Webster, Michael A; MacLeod, Donald I A

    2011-06-12

    The appearance of faces can be strongly affected by the characteristics of faces viewed previously. These perceptual after-effects reflect processes of sensory adaptation that are found throughout the visual system, but which have been considered only relatively recently in the context of higher level perceptual judgements. In this review, we explore the consequences of adaptation for human face perception, and the implications of adaptation for understanding the neural-coding schemes underlying the visual representation of faces. The properties of face after-effects suggest that they, in part, reflect response changes at high and possibly face-specific levels of visual processing. Yet, the form of the after-effects and the norm-based codes that they point to show many parallels with the adaptations and functional organization that are thought to underlie the encoding of perceptual attributes like colour. The nature and basis for human colour vision have been studied extensively, and we draw on ideas and principles that have been developed to account for norms and normalization in colour vision to consider potential similarities and differences in the representation and adaptation of faces.

  15. SPACE: Vision and Reality: Face to Face. Proceedings Report

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The proceedings of the 11th National Space Symposium, entitled 'Vision and Reality: Face to Face,' are presented. Technological areas discussed include the following sections: Vision for the future; Positioning for the future; Remote sensing, the emerging era; Space opportunities, competitive vision with acquisition reality; National security requirements in space; The world is into space; and The outlook for space. An appendix is also attached.

  16. Is Beauty in the Face of the Beholder?

    PubMed Central

    Laeng, Bruno; Vermeer, Oddrun; Sulutvedt, Unni

    2013-01-01

    Opposing forces influence assortative mating so that one seeks a similar mate while at the same time avoiding inbreeding with close relatives. Thus, mate choice may be a balancing of phenotypic similarity and dissimilarity between partners. In the present study, we assessed the role of resemblance to Self’s facial traits in judgments of physical attractiveness. Participants chose the most attractive face image of their romantic partner among several variants, where the faces were morphed so as to include only 22% of another face. Participants distinctly preferred a “Self-based morph” (i.e., their partner’s face with a small amount of Self’s face blended into it) to other morphed images. The Self-based morph was also preferred to the morph of their partner’s face blended with the partner’s same-sex “prototype”, although the latter face was (“objectively”) judged more attractive by other individuals. When ranking morphs differing in level of amalgamation (i.e., 11% vs. 22% vs. 33%) of another face, the 22% was chosen consistently as the preferred morph and, in particular, when Self was blended in the partner’s face. A forced-choice signal-detection paradigm showed that the effect of self-resemblance operated at an unconscious level, since the same participants were unable to detect the presence of their own faces in the above morphs. We concluded that individuals, if given the opportunity, seek to promote “positive assortment” for Self’s phenotype, especially when the level of similarity approaches an optimal point that is similar to Self without causing a conscious acknowledgment of the similarity. PMID:23874608

  17. [Neural basis of self-face recognition: social aspects].

    PubMed

    Sugiura, Motoaki

    2012-07-01

    Considering the importance of the face in social survival, and evidence from evolutionary psychology of visual self-recognition, it is reasonable to expect neural mechanisms for higher social-cognitive processes to underlie self-face recognition. A decade of neuroimaging studies has, however, not provided an encouraging finding in this respect. Self-face-specific activation has typically been reported in areas for sensory-motor integration in the right lateral cortices. This observation appears to reflect the physical nature of the self-face, the representation of which is developed via the detection of contingency between one's own action and sensory feedback. We have recently revealed that the medial prefrontal cortex, implicated in socially nuanced self-referential processing, is activated during self-face recognition under a rich social context where multiple other faces are available for reference. The posterior cingulate cortex also exhibited this modulation of activation and, in a separate experiment, responded to an attractively manipulated self-face, suggesting its relevance to positive self-value. Furthermore, the regions in the right lateral cortices that typically show self-face-specific activation also responded to the face of a close friend under the rich social context. This observation is potentially explained by the fact that the contingency detection underlying physical self-recognition also plays a role in physical social interaction, which characterizes the representation of personally familiar people. These findings demonstrate that neuroscientific exploration reveals multiple facets of the relationship between self-face recognition and social-cognitive processes, and that, technically, the manipulation of social context is key to its success.

  18. Collaborative recall in face-to-face and electronic groups.

    PubMed

    Ekeocha, Justina Ohaeri; Brennan, Susan E

    2008-04-01

    When people remember shared experiences, the amount they recall as a collaborating group is less than the amount obtained by pooling their individual memories. We tested the hypothesis that reduced group productivity can be attributed, at least in part, to content filtering, where information is omitted from group products either because individuals fail to retrieve it or choose to withhold it (self-filtering), or because groups reject or fail to incorporate it (group-filtering). Three-person groups viewed a movie clip together and recalled it, first individually, then in face-to-face or electronic groups, and finally individually again. Although both kinds of groups recalled equal amounts, group-filtering occurred more often face-to-face, while self-filtering occurred more often electronically. This suggests that reduced group productivity is due not only to intrapersonal factors stemming from cognitive interference, but also to interpersonal costs of coordinating the group product. Finally, face-to-face group interaction facilitated subsequent individual recall.

  19. False match elimination for face recognition based on SIFT algorithm

    NASA Astrophysics Data System (ADS)

    Gu, Xuyuan; Shi, Ping; Shao, Meide

    2011-06-01

    The SIFT (Scale Invariant Feature Transform) is a well-known algorithm used to detect and describe local features in images. It is invariant to image scale and rotation, and robust to noise and illumination changes. In this paper, a novel method for face recognition based on SIFT is proposed, which combines an optimization of SIFT, mutual matching, and Progressive Sample Consensus (PROSAC), and can effectively eliminate false matches in face recognition. Experiments on the ORL face database show that many false matches are eliminated and a better recognition rate is achieved.
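
    Of the filters the paper combines, mutual matching is easy to illustrate on raw descriptor arrays. The sketch below pairs it with Lowe's ratio test as a second ambiguity filter; the PROSAC geometric-verification stage is omitted, and the `ratio` value is an illustrative assumption, not the paper's setting.

```python
import numpy as np

def mutual_matches(desc1, desc2, ratio=0.8):
    """Filter candidate matches between two descriptor sets: keep a pair
    only if each descriptor is the other's nearest neighbour AND the best
    distance passes Lowe's ratio test against the second-best distance."""
    # Pairwise Euclidean distances, shape (len(desc1), len(desc2))
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    nn12 = d.argmin(axis=1)   # best partner in set 2 for each in set 1
    nn21 = d.argmin(axis=0)   # best partner in set 1 for each in set 2
    matches = []
    for i, j in enumerate(nn12):
        if nn21[j] != i:
            continue          # not a mutual nearest neighbour: reject
        dists = np.sort(d[i])
        if len(dists) > 1 and dists[0] > ratio * dists[1]:
            continue          # ambiguous match: fails the ratio test
        matches.append((i, int(j)))
    return matches
```

    In practice the surviving pairs would then be handed to a robust estimator such as PROSAC, which fits a geometric model and discards the remaining outliers.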

  20. Holistic processing of static and moving faces.

    PubMed

    Zhao, Mintao; Bülthoff, Isabelle

    2017-07-01

    Humans' face ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces, which move most of the time. However, how facial movements affect one core aspect of face ability, holistic face processing, remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based face processing by manipulating the presence of facial motion during study and at test in a composite face task. The results showed that rigidly moving faces were processed as holistically as static faces (Experiment 1). Holistic processing of moving faces persisted whether facial motion was presented during study, at test, or both (Experiment 2). Moreover, when faces were inverted to eliminate the contributions of both an upright face template and observers' expertise with upright faces, rigid facial motion facilitated holistic face processing (Experiment 3). Thus, holistic processing represents a general principle of face perception that applies to both static and dynamic faces, rather than being limited to static faces. These results support an emerging view that both perceiver-based and face-based factors contribute to holistic face processing, and they offer new insights on what underlies holistic face processing, how the sources of information supporting holistic face processing interact with each other, and why facial motion may affect face recognition and holistic face processing differently. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. Face the Fats Quiz 2

    MedlinePlus

    ... heart? Ready to make informed choices about the foods you eat? From fish to French fries to fried chicken, test your knowledge about the fats in some familiar foods. Welcome to Face the Fats Quiz II - and ...

  2. Anthropometric Analysis of the Face.

    PubMed

    Zacharopoulos, Georgios V; Manios, Andreas; Kau, Chung H; Velagrakis, George; Tzanakakis, George N; de Bree, Eelco

    2016-01-01

    Facial anthropometric analysis is essential for planning cosmetic and reconstructive facial surgery, but has not been available in detail for modern Greeks. In this study, multiple measurements of the face were performed on young Greek males and females to provide a complete facial anthropometric profile of this population and to compare its facial morphology with that of North American Caucasians. Thirty-one direct facial anthropometric measurements were obtained from 152 Greek students. Moreover, the prevalence of the various face types was determined. The resulting data were compared with those published regarding North American Caucasians. A complete set of average anthropometric data was obtained for each sex. Greek males, when compared to Greek females, were found to have statistically significantly longer foreheads as well as greater values in morphologic face height, mandible width, maxillary surface arc distance, and mandibular surface arc distance. In both sexes, the most common face types were mesoprosop, leptoprosop, and hyperleptoprosop. Greek males had significantly wider faces and mandibles than the North American Caucasian males, whereas Greek females had only significantly wider mandibles than their North American counterparts. Differences of statistical significance were noted in the head and face regions among sexes as well as among Greek and North American Caucasians. With the establishment of facial norms for Greek adults, this study contributes to the preoperative planning as well as postoperative evaluation of Greek patients that are, respectively, scheduled for or are to be subjected to facial reconstructive and aesthetic surgery.

  3. Face verification with balanced thresholds.

    PubMed

    Yan, Shuicheng; Xu, Dong; Tang, Xiaoou

    2007-01-01

    The process of face verification is guided by a pre-learned global threshold, which, however, is often inconsistent with class-specific optimal thresholds. It is, hence, beneficial to pursue a balance of the class-specific thresholds in the model-learning stage. In this paper, we present a new dimensionality reduction algorithm tailored to the verification task that ensures threshold balance. This is achieved by the following aspects. First, feasibility is guaranteed by employing an affine transformation matrix, instead of the conventional projection matrix, for dimensionality reduction, and, hence, we call the proposed algorithm threshold balanced transformation (TBT). Then, the affine transformation matrix, constrained as the product of an orthogonal matrix and a diagonal matrix, is optimized to improve the threshold balance and classification capability in an iterative manner. Unlike most algorithms for face verification which are directly transplanted from face identification literature, TBT is specifically designed for face verification and clarifies the intrinsic distinction between these two tasks. Experiments on three benchmark face databases demonstrate that TBT significantly outperforms the state-of-the-art subspace techniques for face verification.

  4. Faces Do Not Capture Special Attention in Children with Autism Spectrum Disorder: A Change Blindness Study

    ERIC Educational Resources Information Center

    Kikuchi, Yukiko; Senju, Atsushi; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu

    2009-01-01

    Two experiments investigated attention of children with autism spectrum disorder (ASD) to faces and objects. In both experiments, children (7- to 15-year-olds) detected the difference between 2 visual scenes. Results in Experiment 1 revealed that typically developing children (n = 16) detected the change in faces faster than in objects, whereas…

  5. Carbon-Type Analysis and Comparison of Original and Reblended FACE Diesel Fuels (FACE 2, FACE 4, and FACE 7)

    SciTech Connect

    Bays, J. Timothy; King, David L.; O'Hagan, Molly J.

    This report summarizes the carbon-type analysis from 1H and 13C{1H} nuclear magnetic resonance spectroscopy (NMR) of Fuels for Advanced Combustion Engines (FACE) diesel blends, FD-2B, FD 4B, and FD-7B, and makes comparison of the new blends with the original FACE diesel blends, FD 2A, FD 4A, and FD-7A, respectively. Generally, FD-2A and FD-2B are more similar than the A and B blends of FD-4 and FD-7. The aromatic carbon content is roughly equivalent, although the new FACE blends have decreased monoaromatic content and increased di- and tri-cycloaromatic content, as well as a higher overall aromatic content, than the original FACE blends. The aromatic components of the new FACE blends generally have a higher alkyl substitution with longer alkyl substituents. The naphthenic and paraffinic contents remained relatively consistent. Based on aliphatic methyl and methylene carbon ratios, cetane numbers for FD-2A and -2B, and FD-7A and -7B are predicted to be consistent, while the cetane number for FD-4B is predicted to be higher than FD-4A. Overall, the new FACE fuel blends are fairly consistent with the original FACE fuel blends, but there are observable differences. In addition to providing important comparative compositional information on reformulated FACE diesel blends, this report also provides important information about the capabilities of the team at Pacific Northwest National Laboratory in the use of NMR spectroscopy for the detailed characterization and comparison of fuels and fuel blends.

  6. Visual search for faces by race: a cross-race study.

    PubMed

    Sun, Gang; Song, Luping; Bentin, Shlomo; Yang, Yanjie; Zhao, Lun

    2013-08-30

    Using a single averaged face of each race, a previous study indicated that the detection of one other-race face among an own-race background was faster than vice versa (Levin, 1996, 2000). However, employing a variable mapping of face pictures, one recent report found preferential detection of own-race over other-race faces (Lipp et al., 2009). Using a well-controlled design and a heterogeneous set of real face images, in the present study we explored visual search for own- and other-race faces in Chinese and Caucasian participants. Across both groups, the search for a face of one race among other-race faces was serial and self-terminating. In Chinese participants, the search was consistently faster for other-race than own-race faces, irrespective of upright or upside-down presentation; however, this search asymmetry was not evident in Caucasian participants. These characteristics suggest that the race of a face is not a basic visual feature, and that in Chinese participants the faster search for other-race than own-race faces also reflects perceptual factors. The possible mechanism underlying the other-race search effect is discussed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Smiling emphasizes perceived distinctiveness of faces.

    PubMed

    Kawamura, Satoru; Komori, Masashi

    2008-08-01

    In this study, 114 Japanese observers (56 men and 58 women) rated the distinctiveness of 48 neutral faces and 48 smiling faces. Analysis showed smiling faces were rated as significantly more distinctive than neutral ones. Greater perceived distinctiveness provides an explanation for previous results that smiling faces are better remembered than faces with neutral expressions.

  8. Semantic Learning Modifies Perceptual Face Processing

    ERIC Educational Resources Information Center

    Heisz, Jennifer J.; Shedden, Judith M.

    2009-01-01

    Face processing changes when a face is learned with personally relevant information. In a five-day learning paradigm, faces were presented with rich semantic stories that conveyed personal information about the faces. Event-related potentials were recorded before and after learning during a passive viewing task. When faces were novel, we observed…

  9. Clinical application of the FACES score for face transplantation.

    PubMed

    Chopra, Karan; Susarla, Srinivas M; Goodrich, Danielle; Bernard, Steven; Zins, James E; Papay, Frank; Lee, W P Andrew; Gordon, Chad R

    2014-01-01

    This study aimed to systematically evaluate all reported outcomes of facial allotransplantation (FT) using the previously described FACES scoring instrument. This was a retrospective study of all consecutive face transplants to date (January 2012). Candidates were identified using medical and general internet database searches. Medical literature and media reports were reviewed for details regarding demographic, operative, anatomic, and psychosocial data, which were then used to formulate FACES scores. Pre-transplant and post-transplant scores for "functional status", "aesthetic deformity", "co-morbidities", "exposed tissue", and "surgical history" were calculated. Scores were statistically compared using paired-samples analyses. Twenty consecutive patients were identified, with 18 surviving recipients. The sample was composed of 3 females and 17 males, with a mean age of 35.0 ± 11.0 years (range: 19-57 years). Overall, data reporting for functional parameters was poor. Six subjects had complete pre-transplant and post-transplant data available for all 5 FACES domains. The mean pre-transplant FACES score was 33.5 ± 8.8 (range: 23-44); the mean post-transplant score was 21.5 ± 5.9 (range: 14-32) and was statistically significantly lower than the pre-transplant score (P = 0.02). Among the individual domains, FT conferred a statistically significant improvement in aesthetic defect scores and exposed tissue scores (P ≤ 0.01) while, at the same time, it displayed no significant increases in co-morbidity (P = 0.17). There is a significant deficiency in functional outcome reports thus far. Moreover, FT resulted in improved overall FACES score, with the most dramatic improvements noted in aesthetic defect and exposed tissue scores.

  10. Ethnicity identification from face images

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoguang; Jain, Anil K.

    2004-08-01

    Human facial images provide demographic information, such as ethnicity and gender. Conversely, ethnicity and gender also play an important role in face-related applications. The image-based ethnicity identification problem is addressed in a machine learning framework. A Linear Discriminant Analysis (LDA) based scheme is presented for the two-class (Asian vs. non-Asian) ethnicity classification task. Multiscale analysis is applied to the input facial images. An ensemble framework, which integrates the LDA analysis of the input face images at different scales, is proposed to further improve classification performance. The product rule is used as the combination strategy in the ensemble. Experimental results based on a face database containing 263 subjects (2,630 face images, with equal balance between the two classes) are promising, indicating that LDA and the proposed ensemble framework have sufficient discriminative power for the ethnicity classification problem. The normalized ethnicity classification scores can be helpful in facial identity recognition: used as a "soft" biometric, face matching scores can be updated based on the output of the ethnicity classification module. In other words, the ethnicity classifier does not have to be perfect to be useful in practice.
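
    The two building blocks of the scheme, a two-class Fisher LDA per scale and product-rule fusion across scales, can be sketched as follows. This is a minimal illustration under assumed details: the regularization term, the logistic squashing of LDA scores into pseudo-probabilities, and the midpoint decision threshold are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def fisher_lda(X, y):
    """Two-class Fisher LDA: direction w = Sw^-1 (mu1 - mu0) and a
    midpoint decision threshold (regularized for numerical stability)."""
    X0, X1 = X[y == 0], X[y == 1]
    Sw = np.cov(X0.T) + np.cov(X1.T)             # within-class scatter
    w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]),
                        X1.mean(axis=0) - X0.mean(axis=0))
    thr = 0.5 * (X0.mean(axis=0) + X1.mean(axis=0)) @ w
    return w, thr

def ensemble_predict(views, models):
    """Product-rule fusion: per-scale LDA scores are squashed into
    pseudo-probabilities and multiplied across the scales ('views')."""
    p1 = np.ones(len(views[0]))                  # evidence for class 1
    p0 = np.ones(len(views[0]))                  # evidence for class 0
    for X, (w, thr) in zip(views, models):
        s = 1.0 / (1.0 + np.exp(-(X @ w - thr)))
        p1 *= s
        p0 *= 1.0 - s
    return (p1 > p0).astype(int)
```

    On well-separated synthetic classes, one LDA model per view combined by the product rule recovers essentially all labels; in the paper's setting each view would be the same face image analyzed at a different scale.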

  11. Venous drainage of the face.

    PubMed

    Onishi, S; Imanishi, N; Yoshimura, Y; Inoue, Y; Sakamoto, Y; Chang, H; Okumoto, T

    2017-04-01

    The venous anatomy of the face was examined in 12 fresh cadavers. Venograms and arteriovenograms were obtained after the injection of contrast medium. In 8 of the 12 cadavers, a large loop was formed by the facial vein, the supratrochlear vein, and the superficial temporal vein, which became the main trunk vein of the face. In 4 of the 12 cadavers, the superior lateral limb of the loop vein was less well developed. The loop vein generally did not accompany the arteries of the face. Cutaneous branches of the loop vein formed a polygonal venous network in the skin, while communicating branches ran toward deep veins. These findings suggest that blood from the dermis of the face is collected by the polygonal venous network and enters the loop vein through the cutaneous branches, after which blood flows away from the face through the superficial temporal vein, the facial vein, and the communicating branches and enters the deep veins. Copyright © 2016 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  12. Three-dimensional face model reproduction method using multiview images

    NASA Astrophysics Data System (ADS)

    Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio

    1991-11-01

This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence. The goal of this research, as an integral component of such a system, is to generate a three-dimensional face model from facial images and to synthesize images of the model as viewed virtually from different angles, with natural shading to suit the lighting conditions of the virtual space. The proposed method is as follows: first, front and side view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from the front and side views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image; the personal face model, representing the individual character, is then reproduced. Next, an oblique view image is taken by a TV camera. The feature points of the oblique view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image. The modified boundary of the personal face model is determined using the face direction, namely the rotation angle, which is detected from the extracted feature points. After the 3D model is established, new images are synthesized by mapping facial texture onto the model.

  13. [Treatment goals in FACE philosophy].

    PubMed

    Martin, Domingo; Maté, Amaia; Zabalegui, Paula; Valenzuela, Jaime

    2017-03-01

    The FACE philosophy is characterized by clearly defined treatment goals: facial esthetics, dental esthetics, periodontal health, functional occlusion, neuromuscular mechanism and joint function. The purpose is to establish ideal occlusion with good facial esthetics and an orthopedic stable joint position. The authors present all the concepts of FACE philosophy and illustrate them through one case report. Taking into account all the FACE philosophy concepts increases diagnostic ability and improves the quality and stability of treatment outcomes. The goal of this philosophy is to harmonize the facial profile, tooth alignment, periodontium, functional occlusion, neuromuscular mechanism and joint function. The evaluation and treatment approach to vertical problems are unique to the philosophy. © EDP Sciences, SFODF, 2017.

  14. Anatomically accurate individual face modeling.

    PubMed

    Zhang, Yu; Prakash, Edmond C; Sung, Eric

    2003-01-01

    This paper presents a new 3D face model of a specific person constructed from the anatomical perspective. By exploiting the laser range data, a 3D facial mesh precisely representing the skin geometry is reconstructed. Based on the geometric facial mesh, we develop a deformable multi-layer skin model. It takes into account the nonlinear stress-strain relationship and dynamically simulates the non-homogenous behavior of the real skin. The face model also incorporates a set of anatomically-motivated facial muscle actuators and underlying skull structure. Lagrangian mechanics governs the facial motion dynamics, dictating the dynamic deformation of facial skin in response to the muscle contraction.

  15. Penetrating injuries of the face.

    PubMed

    Gaboriau, H P; Kreutziger, K L

    1998-01-01

In dealing with gunshot wounds to the face, the emergency department physician should have a basic knowledge of ballistics. Securing an airway (either intubation or a surgical airway) should be the top priority. The location of the wound dictates which patients should be intubated. Plain x-ray films of the face and skull, as well as CT scans in certain situations, allow determination of the extent of damage to the skeleton as well as any intracranial injuries. Clinical symptoms suggesting an underlying vascular injury require an angiogram. After thorough debridement of the wounds, fractures are treated either with open reduction and internal fixation or with closed reduction and intermaxillary fixation.

  16. Attractiveness judgments and discrimination of mommies and grandmas: perceptual tuning for young adult faces.

    PubMed

    Short, Lindsey A; Mondloch, Catherine J; Hackland, Anne T

    2015-01-01

    Adults are more accurate in detecting deviations from normality in young adult faces than in older adult faces despite exhibiting comparable accuracy in discriminating both face ages. This deficit in judging the normality of older faces may be due to reliance on a face space optimized for the dimensions of young adult faces, perhaps because of early and continuous experience with young adult faces. Here we examined the emergence of this young adult face bias by testing 3- and 7-year-old children on a child-friendly version of the task used to test adults. In an attractiveness judgment task, children viewed young and older adult face pairs; each pair consisted of an unaltered face and a distorted face of the same identity. Children pointed to the prettiest face, which served as a measure of their sensitivity to the dimensions on which faces vary relative to a norm. To examine whether biases in the attractiveness task were specific to deficits in referencing a norm or extended to impaired discrimination, we tested children on a simultaneous match-to-sample task with the same stimuli. Both age groups were more accurate in judging the attractiveness of young faces relative to older faces; however, unlike adults, the young adult face bias extended to the match-to-sample task. These results suggest that by 3 years of age, children's perceptual system is more finely tuned for young adult faces than for older adult faces, which may support past findings of superior recognition for young adult faces. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Effective connectivities of cortical regions for top-down face processing: A Dynamic Causal Modeling study

    PubMed Central

    Li, Jun; Liu, Jiangang; Liang, Jimin; Zhang, Hongchuan; Zhao, Jizheng; Rieth, Cory A.; Huber, David E.; Li, Wu; Shi, Guangming; Ai, Lin; Tian, Jie; Lee, Kang

    2013-01-01

    To study top-down face processing, the present study used an experimental paradigm in which participants detected non-existent faces in pure noise images. Conventional BOLD signal analysis identified three regions involved in this illusory face detection. These regions included the left orbitofrontal cortex (OFC) in addition to the right fusiform face area (FFA) and right occipital face area (OFA), both of which were previously known to be involved in both top-down and bottom-up processing of faces. We used Dynamic Causal Modeling (DCM) and Bayesian model selection to further analyze the data, revealing both intrinsic and modulatory effective connectivities among these three cortical regions. Specifically, our results support the claim that the orbitofrontal cortex plays a crucial role in the top-down processing of faces by regulating the activities of the occipital face area, and the occipital face area in turn detects the illusory face features in the visual stimuli and then provides this information to the fusiform face area for further analysis. PMID:20423709

  18. The relationship between visual search and categorization of own- and other-age faces.

    PubMed

    Craig, Belinda M; Lipp, Ottmar V

    2018-03-13

    Young adult participants are faster to detect young adult faces in crowds of infant and child faces than vice versa. These findings have been interpreted as evidence for more efficient attentional capture by own-age than other-age faces, but could alternatively reflect faster rejection of other-age than own-age distractors, consistent with the previously reported other-age categorization advantage: faster categorization of other-age than own-age faces. Participants searched for own-age faces in other-age backgrounds or vice versa. Extending the finding to different other-age groups, young adult participants were faster to detect young adult faces in both early adolescent (Experiment 1) and older adult backgrounds (Experiment 2). To investigate whether the own-age detection advantage could be explained by faster categorization and rejection of other-age background faces, participants in experiments 3 and 4 also completed an age categorization task. Relatively faster categorization of other-age faces was related to relatively faster search through other-age backgrounds on target absent trials but not target present trials. These results confirm that other-age faces are more quickly categorized and searched through and that categorization and search processes are related; however, this correlational approach could not confirm or reject the contribution of background face processing to the own-age detection advantage. © 2018 The British Psychological Society.

  19. Explaining Sad People’s Memory Advantage for Faces

    PubMed Central

    Hills, Peter J.; Marquardt, Zoe; Young, Isabel; Goodenough, Imogen

    2017-01-01

Sad people recognize faces more accurately than happy people (Hills et al., 2011). We devised four hypotheses for this finding, which are tested against one another in the current study: (1) sad people engage in more of the expert processing associated with face perception; (2) sad people are motivated to be more accurate than happy people in an attempt to repair their mood; (3) sad people have a defocused attentional strategy that allows more information about a face to be encoded; and (4) sad people scan more of the face than happy people, leading to more facial features being encoded. In Experiment 1, we found that dysphoria (sad mood often associated with depression) was not correlated with the face-inversion effect (a measure of expert processing) nor with response times, but was correlated with defocused attention and recognition accuracy. Experiment 2 established that dysphoric participants detected changes made to more facial features than happy participants. In Experiment 3, using eye-tracking, we found that sad-induced participants sampled more of the face whilst avoiding the eyes. Experiment 4 showed that sad-induced people demonstrated a smaller own-ethnicity bias. These results indicate that sad people allocate attention to faces differently than happy and neutral people. PMID:28261138

  20. Error Rates in Users of Automatic Face Recognition Software

    PubMed Central

    White, David; Dunn, James D.; Schmid, Alexandra C.; Kemp, Richard I.

    2015-01-01

    In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated ‘candidate lists’ selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared performance of student participants to trained passport officers–who use the system in their daily work–and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced “facial examiners” outperformed these groups by 20 percentage points. We conclude that human performance curtails accuracy of face recognition systems–potentially reducing benchmark estimates by 50% in operational settings. Mere practise does not attenuate these limits, but superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems. PMID:26465631

  1. What makes a cell face-selective: the importance of contrast

    PubMed Central

    Ohayon, Shay; Freiwald, Winrich A; Tsao, Doris Y

    2012-01-01

Faces are robustly detected by computer vision algorithms that search for characteristic coarse contrast features. Here, we investigated whether face-selective cells in the primate brain exploit contrast features as well. We recorded from face-selective neurons in macaque inferotemporal cortex, while presenting a face-like collage of regions whose luminances were changed randomly. Modulating contrast combinations between regions induced activity changes ranging from no response to a response greater than that to a real face in 50% of cells. The critical stimulus factor determining response magnitude was contrast polarity, e.g., nose region brighter than left eye. Contrast polarity preferences were consistent across cells, suggesting a common computational strategy across the population, and matched features used by computer vision algorithms for face detection. Furthermore, most cells were tuned both for contrast polarity and for the geometry of facial features, suggesting cells encode information useful both for detection and recognition. PMID:22578507
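
    The contrast-polarity features described in this record are the same kind of coarse cues exploited by detection algorithms such as Viola-Jones. A minimal sketch of computing such features follows; the toy image, region boxes, and region pairs are all invented for illustration.

    ```python
    import numpy as np

    # Toy 12x12 grayscale "face": dark eye patches on a brighter background.
    img = np.full((12, 12), 180.0)
    img[2:5, 2:5] = 60.0    # left-eye region (dark)
    img[2:5, 7:10] = 60.0   # right-eye region (dark)

    # Illustrative region boxes: (y0, y1, x0, x1).
    regions = {
        "left_eye": (2, 5, 2, 5),
        "right_eye": (2, 5, 7, 10),
        "nose": (5, 8, 4, 8),
    }
    means = {name: img[y0:y1, x0:x1].mean()
             for name, (y0, y1, x0, x1) in regions.items()}

    # Contrast-polarity features: True if the first region is brighter than
    # the second, mirroring "nose region brighter than left eye" in the text.
    polarity = {
        ("nose", "left_eye"): means["nose"] > means["left_eye"],
        ("nose", "right_eye"): means["nose"] > means["right_eye"],
    }
    ```

    Note that only the sign of the luminance difference enters each feature, which is what makes such cues robust to overall illumination changes.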

  2. Encouraging Participation in Face-to-Face Lectures: The Index Card Technique

    ERIC Educational Resources Information Center

    Daws, Laura Beth

    2018-01-01

    Courses: This activity will work in any face-to-face communication lecture course. Objectives: By the end of the semester in a face-to-face lecture class, every student will have engaged in verbal discussion.

  3. 78 FR 52996 - Culturally Significant Objects Imported for Exhibition Determinations: “Face to Face: Flanders...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-27

    ... DEPARTMENT OF STATE [Public Notice 8441] Culturally Significant Objects Imported for Exhibition Determinations: ``Face to Face: Flanders, Florence, and Renaissance Painting'' Exhibition SUMMARY: Notice is... objects to be included in the exhibition ``Face to Face: Flanders, Florence, and Renaissance Painting...

  4. A randomized trial of face-to-face counselling versus telephone counselling versus bibliotherapy for occupational stress.

    PubMed

    Kilfedder, Catherine; Power, Kevin; Karatzias, Thanos; McCafferty, Aileen; Niven, Karen; Chouliara, Zoë; Galloway, Lisa; Sharp, Stephen

    2010-09-01

The aim of the present study was to compare the effectiveness and acceptability of three interventions for occupational stress. A total of 90 National Health Service employees were randomized to face-to-face counselling, telephone counselling, or bibliotherapy. Outcomes were assessed at post-intervention and 4-month follow-up. Clinical Outcomes in Routine Evaluation (CORE), the General Health Questionnaire (GHQ-12), and the Perceived Stress Scale (PSS-10) were used to evaluate intervention outcomes. An intention-to-treat analysis was performed. Repeated measures analysis revealed significant time effects on all measures with the exception of CORE Risk. No significant group effects were detected on any of the outcome measures. No significant time-by-group interaction effects were detected on any of the outcome measures, with the exception of CORE Functioning and GHQ total. With regard to acceptability of the interventions, participants expressed a preference for face-to-face counselling over the other two modalities. Overall, it was concluded that the three intervention groups are equally effective. Given that bibliotherapy is the least costly of the three, results from the present study might be considered in relation to a stepped care approach to occupational stress management, with bibliotherapy as the first line of intervention, followed by telephone and face-to-face counselling as required.

  5. "Just another pretty face": a multidimensional scaling approach to face attractiveness and variability.

    PubMed

Potter, Timothy; Corneille, Olivier; Ruys, Kirsten I; Rhodes, Gillian

    2007-04-01

    Findings on both attractiveness and memory for faces suggest that people should perceive more similarity among attractive than among unattractive faces. A multidimensional scaling approach was used to test this hypothesis in two studies. In Study 1, we derived a psychological face space from similarity ratings of attractive and unattractive Caucasian female faces. In Study 2, we derived a face space for attractive and unattractive male faces of Caucasians and non-Caucasians. Both studies confirm that attractive faces are indeed more tightly clustered than unattractive faces in people's psychological face spaces. These studies provide direct and original support for theoretical assumptions previously made in the face space and face memory literatures.
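
    The central claim here, that attractive faces cluster more tightly in a psychological face space, can be illustrated with synthetic coordinates of the kind a multidimensional scaling solution would produce. The 2-D points below are generated, not derived from the studies; the tighter spread of the "attractive" cluster is assumed for the sketch.

    ```python
    from itertools import combinations

    import numpy as np

    def mean_pairwise_distance(points):
        """Average Euclidean distance over all pairs of points in a cluster."""
        dists = [np.linalg.norm(a - b) for a, b in combinations(points, 2)]
        return float(np.mean(dists))

    rng = np.random.default_rng(0)
    # Synthetic 2-D face-space coordinates: the attractive cluster is drawn
    # from a tighter distribution than the unattractive one (by assumption).
    attractive = rng.normal(0.0, 0.5, size=(20, 2))
    unattractive = rng.normal(0.0, 2.0, size=(20, 2))

    tight = mean_pairwise_distance(attractive)
    loose = mean_pairwise_distance(unattractive)
    ```

    Mean pairwise distance is one simple way to quantify "tightness" of a cluster in an MDS-derived space; a smaller value for the attractive group corresponds to the paper's finding.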

  6. The Many Faces of Language.

    ERIC Educational Resources Information Center

    Werdmann, Anne M.

    In a sixth-grade unit, students learned about people's facial expressions through careful observation, recording, reporting, and generalizing. The students studied the faces of people of various ages; explored "masks" that people wear in different situations; learned about the use of ritual masks; made case studies of individuals to show…

  7. "Put on a Happy Face"

    ERIC Educational Resources Information Center

    Morris, Michael

    2004-01-01

    All evaluators face the challenge of striving to adhere to the highest possible standards of ethical conduct. Translating the AEA's Guiding Principles and the Joint Committee's Program Evaluation Standards into everyday practice, however, can be a complex, uncertain, and frustrating endeavor. Moreover, acting in an ethical fashion can require…

  8. Thermal to Visible Face Recognition

    DTIC Science & Technology

    2012-04-01

Thermal to Visible Face Recognition. Jonghyun Choi†, Shuowen Hu‡, S. Susan Young‡ and Larry S. Davis†. †University of Maryland, College Park, MD; ‡U.S...

  9. Mechanical Coal-Face Fracturer

    NASA Technical Reports Server (NTRS)

    Collins, E. R., Jr.

    1984-01-01

    Radial points on proposed drill bit take advantage of natural fracture planes of coal. Radial fracture points retracted during drilling and impacted by piston to fracture coal once drilling halts. Group of bits attached to array of pneumatic drivers to fracture large areas of coal face.

  10. Families Facing the Nuclear Taboo.

    ERIC Educational Resources Information Center

    Jacobs, Judith Bula

    1988-01-01

    Discusses attitudes of 12 families participating in group which was formed to focus on issues related to the possibility of a nuclear disaster. Why and how these families are facing the nuclear taboo plus various outcomes of doing so are discussed as well as the role of the professional in encouraging such openness about these difficult issues.…

  11. Face-Sealing Butterfly Valve

    NASA Technical Reports Server (NTRS)

    Tervo, John N.

    1992-01-01

    Valve plate made to translate as well as rotate. Valve opened and closed by turning shaft and lever. Interactions among lever, spring, valve plate, and face seal cause plate to undergo combination of translation and rotation so valve plate clears seal during parts of opening and closing motions.

  12. The Ontogeny of Face Identity

    ERIC Educational Resources Information Center

    Blass, Elliott M.; Camp, Carole Ann

    2004-01-01

    A paradigm was designed to study how infants identify live faces. Eight- to 21-week-old infants were seated comfortably and were presented an adult female, dressed in a white laboratory coat and a white turtle neck sweater, until habituation ensued. The adult then left the room. One minute later either she or an identically garbed confederate…

  13. Reading sadness beyond human faces.

    PubMed

    Chammat, Mariam; Foucher, Aurélie; Nadel, Jacqueline; Dubal, Stéphanie

    2010-08-12

    Human faces are the main emotion displayers. Knowing that emotional compared to neutral stimuli elicit enlarged ERPs components at the perceptual level, one may wonder whether this has led to an emotional facilitation bias toward human faces. To contribute to this question, we measured the P1 and N170 components of the ERPs elicited by human facial compared to artificial stimuli, namely non-humanoid robots. Fifteen healthy young adults were shown sad and neutral, upright and inverted expressions of human versus robotic displays. An increase in P1 amplitude in response to sad displays compared to neutral ones evidenced an early perceptual amplification for sadness information. P1 and N170 latencies were delayed in response to robotic stimuli compared to human ones, while N170 amplitude was not affected by media. Inverted human stimuli elicited a longer latency of P1 and a larger N170 amplitude while inverted robotic stimuli did not. As a whole, our results show that emotion facilitation is not biased to human faces but rather extend to non-human displays, thus suggesting our capacity to read emotion beyond faces. Copyright 2010 Elsevier B.V. All rights reserved.

  14. Optogenetic and pharmacological suppression of spatial clusters of face neurons reveal their causal role in face gender discrimination

    PubMed Central

    Afraz, Arash; Boyden, Edward S.; DiCarlo, James J.

    2015-01-01

    Neurons that respond more to images of faces over nonface objects were identified in the inferior temporal (IT) cortex of primates three decades ago. Although it is hypothesized that perceptual discrimination between faces depends on the neural activity of IT subregions enriched with “face neurons,” such a causal link has not been directly established. Here, using optogenetic and pharmacological methods, we reversibly suppressed the neural activity in small subregions of IT cortex of macaque monkeys performing a facial gender-discrimination task. Each type of intervention independently demonstrated that suppression of IT subregions enriched in face neurons induced a contralateral deficit in face gender-discrimination behavior. The same neural suppression of other IT subregions produced no detectable change in behavior. These results establish a causal link between the neural activity in IT face neuron subregions and face gender-discrimination behavior. Also, the demonstration that brief neural suppression of specific spatial subregions of IT induces behavioral effects opens the door for applying the technical advantages of optogenetics to a systematic attack on the causal relationship between IT cortex and high-level visual perception. PMID:25953336

  16. Incorporating Online Discussion in Face to Face Classroom Learning: A New Blended Learning Approach

    ERIC Educational Resources Information Center

    Chen, Wenli; Looi, Chee-Kit

    2007-01-01

This paper discusses an innovative blended learning strategy which incorporates online discussion in both in-class face-to-face and off-classroom settings. Online discussion in a face-to-face class is compared with its two counterparts, off-class online discussion and in-class face-to-face oral discussion, to examine the advantages and…

  17. A Comparison of Online and Face-to-Face Approaches to Teaching Introduction to American Government

    ERIC Educational Resources Information Center

    Bolsen, Toby; Evans, Michael; Fleming, Anna McCaghren

    2016-01-01

    This article reports results from a large study comparing four different approaches to teaching Introduction to American Government: (1) traditional, a paper textbook with 100% face-to-face lecture-style teaching; (2) breakout, a paper textbook with 50% face-to-face lecture-style teaching and 50% face-to-face small-group breakout discussion…

  18. The Online and Face-to-Face Counseling Attitudes Scales: A Validation Study

    ERIC Educational Resources Information Center

    Rochlen, Aaron B.; Beretvas, S. Natasha; Zack, Jason S.

    2004-01-01

    This article reports on the development of measures of attitudes toward online and face-to-face counseling. Overall, participants expressed more favorable evaluations of face-to-face counseling than of online counseling. Significant correlations were found between online and face-to-face counseling with traditional help-seeking attitudes, comfort…

  19. Developmental Changes in Mother-Infant Face-to-Face Communication: Birth to 3 Months.

    ERIC Educational Resources Information Center

    Lavelli, Manuela; Fogel, Alan

    2002-01-01

    Investigated development of face-to-face communication in infants between 1 and 14 weeks old and their mothers. Found a curvilinear development of early face-to-face communication, with increases occurring between weeks 4 and 9. When placed on a sofa, infants' face-to-face communication was longer than when they were held. Girls spent a longer…

  20. Face memory and face recognition in children and adolescents with attention deficit hyperactivity disorder: A systematic review.

    PubMed

    Romani, Maria; Vigliante, Miriam; Faedda, Noemi; Rossetti, Serena; Pezzuti, Lina; Guidetti, Vincenzo; Cardona, Francesco

    2018-06-01

This review focuses on facial recognition abilities in children and adolescents with attention deficit hyperactivity disorder (ADHD). A systematic review, using PRISMA guidelines, was conducted to identify original articles published prior to May 2017 pertaining to memory, face recognition, affect recognition, facial expression recognition and recall of faces in children and adolescents with ADHD. The qualitative synthesis based on different studies shows a particular research focus on facial affect recognition, without similar attention being paid to the structural encoding of faces. In this review, we further investigate facial recognition abilities in children and adolescents with ADHD, providing a synthesis of the results observed in the literature, examining the face recognition tasks used to assess face processing abilities in ADHD, and identifying aspects not yet explored. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Artificial faces are harder to remember

    PubMed Central

    Balas, Benjamin; Pacella, Jonathan

    2015-01-01

    Observers interact with artificial faces in a range of different settings and in many cases must remember and identify computer-generated faces. In general, however, most adults have heavily biased experience favoring real faces over synthetic faces. It is well known that face recognition abilities are affected by experience such that faces belonging to “out-groups” defined by race or age are more poorly remembered and harder to discriminate from one another than faces belonging to the “in-group.” Here, we examine the extent to which artificial faces form an “out-group” in this sense when other perceptual categories are matched. We rendered synthetic faces using photographs of real human faces and compared performance in a memory task and a discrimination task across real and artificial versions of the same faces. We found that real faces were easier to remember, but only slightly more discriminable than artificial faces. Artificial faces were also equally susceptible to the well-known face inversion effect, suggesting that while these patterns are still processed by the human visual system in a face-like manner, artificial appearance does compromise the efficiency of face processing. PMID:26195852

  2. A Smart Spoofing Face Detector by Display Features Analysis.

    PubMed

    Lai, ChinLun; Tai, ChiuYuan

    2016-07-21

In this paper, a smart face liveness detector is proposed to prevent a biometric system from being "deceived" by a video or picture of a valid user that a counterfeiter took with a high definition handheld device (e.g., an iPad with retina display). By analyzing the characteristics of the display platform and using an expert decision-making core, we can effectively detect whether a spoofing action comes from a fake face shown on a high definition display by verifying the chromaticity regions in the captured face. That is, a live or spoofed face can be distinguished precisely by the designed optical image sensor. In short, the proposed method upgrades a normal optical image sensor into a powerful version that can detect spoofing actions. The experimental results show that the proposed detection system achieves a very high detection rate compared to existing methods and is thus practical to implement directly in authentication systems.

  3. Audio-video feature correlation: faces and speech

    NASA Astrophysics Data System (ADS)

    Durand, Gwenael; Montacie, Claude; Caraty, Marie-Jose; Faudemay, Pascal

    1999-08-01

    This paper presents a study of the correlation between features automatically extracted from the audio stream and the video stream of audiovisual documents. In particular, we were interested in finding out whether speech analysis tools could be combined with face detection methods, and to what extent they should be combined. A generic audio signal partitioning algorithm was first used to detect Silence/Noise/Music/Speech segments in a full-length movie. A generic object detection method was applied to the keyframes extracted from the movie in order to detect the presence or absence of faces. The correlation between the presence of a face in the keyframes and of the corresponding voice in the audio stream was studied. A third stream, the script of the movie, is warped onto the speech channel in order to automatically label faces appearing in the keyframes with the name of the corresponding character. We found that the extracted audio and video features were related in many cases, and that significant benefits can be obtained from the joint use of audio and video analysis methods.

  4. Navon letters affect face learning and face retrieval.

    PubMed

    Lewis, Michael B; Mills, Claire; Hills, Peter J; Weston, Nicola

    2009-01-01

    Identifying the local letters of a Navon letter (a large letter made up of smaller different letters) prior to recognition impairs accuracy, while identifying the global letters of a Navon letter enhances recognition accuracy (Macrae & Lewis, 2002). This effect may result from a transfer-inappropriate processing shift (TIPS) (Schooler, 2002). The present experiments extend research on the underlying mechanism of this effect by exploring the Navon effect on face learning as well as face recognition. The results of the two experiments revealed that when the Navon task used at retrieval is the same as that used at encoding, recognition accuracy is enhanced, whereas when the processing operations mismatch between retrieval and encoding, recognition accuracy is impaired. These results provide support for the TIPS explanation of the Navon effect.

  5. Really Reaching the Public, Face-to-Face

    NASA Astrophysics Data System (ADS)

    Foukal, Peter

    2014-02-01

    This past summer I was able to provide a young couple with their first view of Saturn through a telescope, and afterward they told me what a profound experience this look into space had been for them. It wasn't the first time I'd seen such an emotional response since I opened the East Point Solar Observatory, a small public observatory in Nahant, Mass., in 1995. But listening to them reminded me how lucky we scientists are to pursue a career that brings out such warm feelings in our neighbors. It also made me wonder whether the effectiveness of our national approach to public outreach might be increased by more face-to-face contact between scientists and the public.

  6. Is the Thatcher Illusion Modulated by Face Familiarity? Evidence from an Eye Tracking Study

    PubMed Central

    2016-01-01

    Thompson (1980) first described the Thatcher Illusion, in which participants instantly perceive an upright face with inverted eyes and mouth as grotesque, but fail to do so when the same face is inverted. One prominent but controversial explanation is that the processing of configural information is disrupted in inverted faces. Studies investigating the Thatcher Illusion have used either famous or non-famous faces. Highly familiar faces are often thought to be processed in a pronounced configural mode, so they seem ideal candidates to be tested in a Thatcher study against unfamiliar faces, but this has never been addressed. In our study, participants evaluated 16 famous and 16 non-famous faces for their grotesqueness. We tested whether familiarity (famous/non-famous faces) modulates reaction times, correctness of grotesqueness assessments (accuracy), and eye movement patterns for the factors orientation (upright/inverted) and Thatcherisation (Thatcherised/non-Thatcherised). On a behavioural level, familiarity effects were only observable via face inversion (higher accuracy and sensitivity for famous compared to non-famous faces) but not via Thatcherisation. Regarding eye movements, however, Thatcherisation influenced the scanning of famous and non-famous faces, for instance in the scanning of the mouth region of the presented faces (higher number, duration and dwell time of fixations for famous compared to non-famous faces if Thatcherised). Altogether, famous faces seem to be processed in a more elaborate, more expertise-based way than non-famous faces, whereas non-famous, inverted faces seem to cause difficulties in accurate and sensitive processing. Results are further discussed in the light of existing studies of familiar vs. unfamiliar face processing. PMID:27776145

  7. Neural circuitry of emotional face processing in autism spectrum disorders.

    PubMed

    Monk, Christopher S; Weng, Shih-Jen; Wiggins, Jillian Lee; Kurapati, Nikhil; Louro, Hugo M C; Carrasco, Melisa; Maslowsky, Julie; Risi, Susan; Lord, Catherine

    2010-03-01

    Autism spectrum disorders (ASD) are associated with severe impairments in social functioning. Because faces provide nonverbal cues that support social interactions, many studies of ASD have examined neural structures that process faces, including the amygdala, ventromedial prefrontal cortex and superior and middle temporal gyri. However, increases or decreases in activation are often contingent on the cognitive task. Specifically, the cognitive domain of attention influences group differences in brain activation. We investigated brain function abnormalities in participants with ASD using a task that monitored attention bias to emotional faces. Twenty-four participants (12 with ASD, 12 controls) completed a functional magnetic resonance imaging study while performing an attention cuing task with emotional (happy, sad, angry) and neutral faces. In response to emotional faces, those in the ASD group showed greater right amygdala activation than those in the control group. A preliminary psychophysiological connectivity analysis showed that ASD participants had stronger positive right amygdala and ventromedial prefrontal cortex coupling and weaker positive right amygdala and temporal lobe coupling than controls. There were no group differences in the behavioural measure of attention bias to the emotional faces. The small sample size may have affected our ability to detect additional group differences. When attention bias to emotional faces was equivalent between ASD and control groups, ASD was associated with greater amygdala activation. Preliminary analyses showed that ASD participants had stronger connectivity between the amygdala and ventromedial prefrontal cortex (a network implicated in emotional modulation) and weaker connectivity between the amygdala and temporal lobe (a pathway involved in the identification of facial expressions, although areas of group differences were generally in a more anterior region of the temporal lobe than what is typically reported for

  8. Photogrammetric Analysis of Attractiveness in Indian Faces

    PubMed Central

    Duggal, Shveta; Kapoor, DN; Verma, Santosh; Sagar, Mahesh; Lee, Yung-Seop; Moon, Hyoungjin

    2016-01-01

    Background The objective of this study was to assess the attractive facial features of the Indian population. We tried to evaluate subjective ratings of facial attractiveness and identify which facial aesthetic subunits were important for facial attractiveness. Methods A cross-sectional study was conducted of 150 samples (referred to as candidates). Frontal photographs were analyzed. An orthodontist, a prosthodontist, an oral surgeon, a dentist, an artist, a photographer and two laymen (estimators) subjectively evaluated candidates' faces using visual analog scale (VAS) scores. As an objective method for facial analysis, we used balanced angular proportional analysis (BAPA). Using SAS 10.1 (SAS Institute Inc.), Tukey's studentized range test and Pearson correlation analysis were performed to detect between-group differences in VAS scores (Experiment 1), to identify correlations between VAS scores and BAPA scores (Experiment 2), and to analyze the characteristic features of facial attractiveness and gender differences (Experiment 3); the significance level was set at P=0.05. Results Experiment 1 revealed some differences in VAS scores according to professional characteristics. In Experiment 2, BAPA scores were found to behave similarly to subjective ratings of facial beauty, but showed a relatively weak correlation coefficient with the VAS scores. Experiment 3 found that the decisive factors for facial attractiveness were different for men and women. Composite images of attractive Indian male and female faces were constructed. Conclusions Our photogrammetric study, statistical analysis, and average composite faces of an Indian population provide valuable information about subjective perceptions of facial beauty and attractive facial structures in the Indian population. PMID:27019809
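
    The Pearson analysis used in Experiment 2 (relating subjective VAS ratings to objective BAPA scores) amounts to the standard sample correlation coefficient. The sketch below uses hypothetical scores for illustration, not the study's data:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical VAS ratings (0-10 scale) and BAPA scores for five candidates.
vas = [6.1, 7.4, 5.0, 8.2, 6.8]
bapa = [62, 70, 55, 77, 66]
print(round(pearson_r(vas, bapa), 3))
```

    A value near +1 would indicate that the objective BAPA measure tracks the subjective VAS ratings closely; the study reports a relatively weak correlation between the two.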

  9. Holistic Processing of Static and Moving Faces

    ERIC Educational Resources Information Center

    Zhao, Mintao; Bülthoff, Isabelle

    2017-01-01

    Humans' face ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces that move most of the time. However, how facial movements affect 1 core aspect of face ability--holistic face processing--remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based…

  10. Neural microgenesis of personally familiar face recognition

    PubMed Central

    Ramon, Meike; Vizioli, Luca; Liu-Shuang, Joan; Rossion, Bruno

    2015-01-01

    Despite a wealth of information provided by neuroimaging research, the neural basis of familiar face recognition in humans remains largely unknown. Here, we isolated the discriminative neural responses to unfamiliar and familiar faces by slowly increasing visual information (i.e., high-spatial frequencies) to progressively reveal faces of unfamiliar or personally familiar individuals. Activation in ventral occipitotemporal face-preferential regions increased with visual information, independently of long-term face familiarity. In contrast, medial temporal lobe structures (perirhinal cortex, amygdala, hippocampus) and anterior inferior temporal cortex responded abruptly when sufficient information for familiar face recognition was accumulated. These observations suggest that following detailed analysis of individual faces in core posterior areas of the face-processing network, familiar face recognition emerges categorically in medial temporal and anterior regions of the extended cortical face network. PMID:26283361

  11. Neural microgenesis of personally familiar face recognition.

    PubMed

    Ramon, Meike; Vizioli, Luca; Liu-Shuang, Joan; Rossion, Bruno

    2015-09-01

    Despite a wealth of information provided by neuroimaging research, the neural basis of familiar face recognition in humans remains largely unknown. Here, we isolated the discriminative neural responses to unfamiliar and familiar faces by slowly increasing visual information (i.e., high-spatial frequencies) to progressively reveal faces of unfamiliar or personally familiar individuals. Activation in ventral occipitotemporal face-preferential regions increased with visual information, independently of long-term face familiarity. In contrast, medial temporal lobe structures (perirhinal cortex, amygdala, hippocampus) and anterior inferior temporal cortex responded abruptly when sufficient information for familiar face recognition was accumulated. These observations suggest that following detailed analysis of individual faces in core posterior areas of the face-processing network, familiar face recognition emerges categorically in medial temporal and anterior regions of the extended cortical face network.

  12. Spatial Mechanisms within the Dorsal Visual Pathway Contribute to the Configural Processing of Faces.

    PubMed

    Zachariou, Valentinos; Nikas, Christine V; Safiullah, Zaid N; Gotts, Stephen J; Ungerleider, Leslie G

    2017-08-01

    Human face recognition is often attributed to configural processing; namely, processing the spatial relationships among the features of a face. If configural processing depends on fine-grained spatial information, do visuospatial mechanisms within the dorsal visual pathway contribute to this process? We explored this question in human adults using functional magnetic resonance imaging and transcranial magnetic stimulation (TMS) in a same-different face detection task. Within localized, spatial-processing regions of the posterior parietal cortex, configural face differences led to significantly stronger activation compared to featural face differences, and the magnitude of this activation correlated with behavioral performance. In addition, detection of configural relative to featural face differences led to significantly stronger functional connectivity between the right FFA and the spatial processing regions of the dorsal stream, whereas detection of featural relative to configural face differences led to stronger functional connectivity between the right FFA and left FFA. Critically, TMS centered on these parietal regions impaired performance on configural but not featural face difference detections. We conclude that spatial mechanisms within the dorsal visual pathway contribute to the configural processing of facial features and, more broadly, that the dorsal stream may contribute to the veridical perception of faces. Published by Oxford University Press 2016.

  13. Newborn preference for a new face vs. a previously seen communicative or motionless face.

    PubMed

    Cecchini, Marco; Baroni, Eleonora; Di Vito, Cinzia; Piccolo, Federica; Lai, Carlo

    2011-06-01

    Newborn infants prefer to look at a new face compared to a known face (still-face). This effect does not happen with the mother-face. The newborns could be attracted by the mother-face because, unlike the still-face, it confirms an expectation of communication. Fifty newborns were video-recorded. Sixteen of them were recruited in the final sample: nine were exposed to a communicative face and seven to a still-face. All the 16 newborns were successively exposed to two preference-tasks where a new face was compared with the known face. Only newborns previously exposed to a still-face preferred to look at a new face instead of the known face. The results suggest that the newborns are able to build a dynamic representation of faces. Copyright © 2011 Elsevier Inc. All rights reserved.

  14. Airway recovery after face transplantation.

    PubMed

    Fischer, Sebastian; Wallins, Joe S; Bueno, Ericka M; Kueckelhaus, Maximilian; Chandawarkar, Akash; Diaz-Siso, J Rodrigo; Larson, Allison; Murphy, George F; Annino, Donald J; Caterson, Edward J; Pomahac, Bohdan

    2014-12-01

    Severe facial injuries can compromise the upper airway by reducing airway volume, obstructing or obliterating the nasal passage, and interfering with oral airflow. Besides the significant impact on quality of life, upper airway impairments can have life-threatening or life-altering consequences. The authors evaluated improvements in functional airway after face transplantation. Between 2009 and 2011, four patients underwent face transplantation at the authors' institution, the Brigham and Women's Hospital. Patients were examined preoperatively and postoperatively and their records reviewed for upper airway infections and sleeping disorders. The nasal mucosa was biopsied after face transplantation and analyzed using scanning electron microscopy. Volumetric imaging software was used to evaluate computed tomographic scans of the upper airway and assess airway volume changes before and after transplantation. Before transplantation, two patients presented an exposed naked nasal cavity and two suffered from occlusion of the nasal passage. Two patients required tracheostomy tubes and one had a prosthetic nose. Sleeping disorders were seen in three patients, and chronic cough was diagnosed in one. After transplantation, there was no significant improvement in sleeping disorders. The incidence of sinusitis increased because of mechanical interference of the donor septum and disappeared after surgical correction. All patients were decannulated after transplantation and were capable of nose breathing. Scanning electron micrographs of the respiratory mucosa revealed viable tissue capable of mucin production. Airway volume significantly increased in all patients. Face transplantation successfully restored the upper airway in four patients. Unhindered nasal breathing, viable respiratory mucosa, and a significant increase in airway volume contributed to tracheostomy decannulation.

  15. Biometrics: Facing Up to Terrorism

    DTIC Science & Technology

    2001-10-01

    ment committee appointed by Secretary of Transportation Norman Y. Mineta to review airport security measures will recommend that facial recognition...on the Role Facial Recognition Technology Can Play in Enhancing Airport Security.” Joseph Atick, the CEO of Visionics, testified before the government...system at a U.S. airport. This deployment is believed to be the first-in-the-nation use of face-recognition technology for airport security. The sys

  16. The changing face of beauty.

    PubMed

    Romm, S

    1989-01-01

    Beautiful faces, like clothing and body conformation, go in and out of fashion. Yet, certain women in every era are considered truly beautiful. Who, then, sets standards of facial beauty and how are women chosen as representative of an ideal? Identifying great beauties is easier than explaining why they are chosen, but answers to these elusive questions are suggested in art, literature, and a review of past events.

  17. Two Strategic Decisions Facing Fusion

    NASA Astrophysics Data System (ADS)

    Baldwin, D. E.

    1998-06-01

    Two strategic decisions facing the U.S. fusion program are described. The first decision deals with the role and rationale of the tokamak within the U. S. fusion program, and it underlies the debate over our continuing role in the evolving ITER collaboration (mid-1998). The second decision concerns how to include Inertial Fusion Energy (IFE) as a viable part of the national effort to harness fusion energy.

  18. Mapping multisensory parietal face and body areas in humans.

    PubMed

    Huang, Ruey-Song; Chen, Ching-fu; Tran, Alyssa T; Holstein, Katie L; Sereno, Martin I

    2012-10-30

    Detection and avoidance of impending obstacles is crucial to preventing head and body injuries in daily life. To safely avoid obstacles, locations of objects approaching the body surface are usually detected via the visual system and then used by the motor system to guide defensive movements. Mediating between visual input and motor output, the posterior parietal cortex plays an important role in integrating multisensory information in peripersonal space. We used functional MRI to map parietal areas that see and feel multisensory stimuli near or on the face and body. Tactile experiments using full-body air-puff stimulation suits revealed somatotopic areas of the face and multiple body parts forming a higher-level homunculus in the superior posterior parietal cortex. Visual experiments using wide-field looming stimuli revealed retinotopic maps that overlap with the parietal face and body areas in the postcentral sulcus at the most anterior border of the dorsal visual pathway. Starting at the parietal face area and moving medially and posteriorly into the lower-body areas, the median of visual polar-angle representations in these somatotopic areas gradually shifts from near the horizontal meridian into the lower visual field. These results suggest the parietal face and body areas fuse multisensory information in peripersonal space to guard an individual from head to toe.

  19. Social Cognition in Williams Syndrome: Face Tuning

    PubMed Central

    Pavlova, Marina A.; Heiz, Julie; Sokolov, Alexander N.; Barisnikov, Koviljka

    2016-01-01

    Many neurological, neurodevelopmental, neuropsychiatric, and psychosomatic disorders are characterized by impairments in visual social cognition, body language reading, and facial assessment of a social counterpart. Yet a wealth of research indicates that individuals with Williams syndrome exhibit remarkable concern for social stimuli and face fascination. Here individuals with Williams syndrome were presented with a set of Face-n-Food images composed of food ingredients and to different degrees resembling a face (slightly bordering on the Giuseppe Arcimboldo style). The primary advantage of these images is that single components do not explicitly trigger face-specific processing, whereas in face images commonly used for investigating face perception (such as photographs or depictions), the mere occurrence of typical cues already implicates face presence. In a spontaneous recognition task, participants were shown a set of images in a predetermined order from the least to the most resembling a face. Strikingly, individuals with Williams syndrome exhibited profound deficits in recognition of the Face-n-Food images as a face: they did not report seeing a face in the images, which typically developing controls effortlessly recognized as a face, and gave overall fewer face responses. This suggests atypical face tuning in Williams syndrome. The outcome is discussed in the light of a general pattern of social cognition in Williams syndrome and brain mechanisms underpinning face processing. PMID:27531986

  20. Social Cognition in Williams Syndrome: Face Tuning.

    PubMed

    Pavlova, Marina A; Heiz, Julie; Sokolov, Alexander N; Barisnikov, Koviljka

    2016-01-01

    Many neurological, neurodevelopmental, neuropsychiatric, and psychosomatic disorders are characterized by impairments in visual social cognition, body language reading, and facial assessment of a social counterpart. Yet a wealth of research indicates that individuals with Williams syndrome exhibit remarkable concern for social stimuli and face fascination. Here individuals with Williams syndrome were presented with a set of Face-n-Food images composed of food ingredients and to different degrees resembling a face (slightly bordering on the Giuseppe Arcimboldo style). The primary advantage of these images is that single components do not explicitly trigger face-specific processing, whereas in face images commonly used for investigating face perception (such as photographs or depictions), the mere occurrence of typical cues already implicates face presence. In a spontaneous recognition task, participants were shown a set of images in a predetermined order from the least to the most resembling a face. Strikingly, individuals with Williams syndrome exhibited profound deficits in recognition of the Face-n-Food images as a face: they did not report seeing a face in the images, which typically developing controls effortlessly recognized as a face, and gave overall fewer face responses. This suggests atypical face tuning in Williams syndrome. The outcome is discussed in the light of a general pattern of social cognition in Williams syndrome and brain mechanisms underpinning face processing.

  1. The importance of internal facial features in learning new faces.

    PubMed

    Longmore, Christopher A; Liu, Chang Hong; Young, Andrew W

    2015-01-01

    For familiar faces, the internal features (eyes, nose, and mouth) are known to be differentially salient for recognition compared to external features such as hairstyle. Two experiments are reported that investigate how this internal feature advantage accrues as a face becomes familiar. In Experiment 1, we tested the contribution of internal and external features to the ability to generalize from a single studied photograph to different views of the same face. A recognition advantage for the internal features over the external features was found after a change of viewpoint, whereas there was no internal feature advantage when the same image was used at study and test. In Experiment 2, we removed the most salient external feature (hairstyle) from studied photographs and looked at how this affected generalization to a novel viewpoint. Removing the hair from images of the face assisted generalization to novel viewpoints, and this was especially the case when photographs showing more than one viewpoint were studied. The results suggest that the internal features play an important role in the generalization between different images of an individual's face by enabling the viewer to detect the common identity-diagnostic elements across non-identical instances of the face.

  2. Segmentation of human face using gradient-based approach

    NASA Astrophysics Data System (ADS)

    Baskan, Selin; Bulut, M. Mete; Atalay, Volkan

    2001-04-01

    This paper describes a method for automatic segmentation of facial features such as the eyebrows, eyes, nose, mouth and ears in color images. This work is an initial step for a wide range of feature-based applications, such as face recognition, lip-reading, gender estimation, facial expression analysis, etc. The human face can be characterized by its skin color and nearly elliptical shape. For this purpose, face detection is performed using color and shape information. Uniform illumination is assumed. No restrictions on glasses, make-up, beard, etc. are imposed. Facial features are extracted using vertically and horizontally oriented gradient projections. The gradient of a minimum with respect to its neighboring maxima gives the boundaries of a facial feature. Each facial feature has a different horizontal characteristic. These characteristics were derived by extensive experimentation with many face images. Using fuzzy set theory, the similarity between the candidate and the feature characteristic under consideration is calculated. The gradient-based method is accompanied by anthropometrical information for robustness. Ear detection is performed using contour-based shape descriptors. The method detects the facial features and circumscribes each facial feature with the smallest rectangle possible. The AR database is used for testing. The developed method is also suitable for real-time systems.
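
    The projection step described above can be illustrated with a short sketch. This is not the authors' implementation; it only shows how row and column sums of a simple gradient magnitude produce peaks at strong edges, which the method uses to bound facial features:

```python
def gradient_projections(image):
    """Vertical and horizontal projections of a simple gradient magnitude.

    `image` is a 2-D list of pixel intensities. Rows/columns containing
    strong edges (e.g. the eye or mouth line) produce peaks in the
    corresponding projection."""
    h, w = len(image), len(image[0])
    grad = [[abs(image[y][x + 1] - image[y][x])    # horizontal difference
             + abs(image[y + 1][x] - image[y][x])  # vertical difference
             for x in range(w - 1)]
            for y in range(h - 1)]
    row_proj = [sum(row) for row in grad]        # one value per image row
    col_proj = [sum(col) for col in zip(*grad)]  # one value per image column
    return row_proj, col_proj

# Toy 4x4 image with a dark horizontal band in row 2: the rows bordering
# the band dominate the row projection, while columns stay uniform.
img = [[200, 200, 200, 200],
       [200, 200, 200, 200],
       [50, 50, 50, 50],
       [200, 200, 200, 200]]
rp, cp = gradient_projections(img)
print(rp, cp)  # [0, 450, 450] [300, 300, 300]
```

    The paper's method then locates minima between neighboring maxima in such projections to delimit each feature's bounding rectangle.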

  3. Face-to-Face or Not-to-Face: A Technology Preference for Communication

    PubMed Central

    Darmawan, Bobby; Mohamed Ariffin, Mohd Yahya

    2014-01-01

    This study employed the Model of Technology Preference (MTP) to explain the relationship of the variables as the antecedents of behavioral intention to adopt a social networking site (SNS) for communication. Self-administered questionnaires were distributed to SNS account users using paper-based and web-based surveys that led to 514 valid responses. The data were analyzed using structural equation modeling (SEM). The results show that two out of three attributes of the attribute-based preference (ATRP) affect attitude-based preference (ATTP). The data support the hypotheses that perceived enjoyment and social presence are predictors of ATTP. In this study, the findings further indicated that ATTP has no relationship with the behavioral intention of using SNS, but it has a relationship with the attitude of using SNS. SNS development should provide features that ensure enjoyment and social presence for users to communicate instead of using the traditional face-to-face method of communication. PMID:25405782

  4. Face-to-face or not-to-face: A technology preference for communication.

    PubMed

    Jaafar, Noor Ismawati; Darmawan, Bobby; Mohamed Ariffin, Mohd Yahya

    2014-11-01

    This study employed the Model of Technology Preference (MTP) to explain the relationship of the variables as the antecedents of behavioral intention to adopt a social networking site (SNS) for communication. Self-administered questionnaires were distributed to SNS account users using paper-based and web-based surveys that led to 514 valid responses. The data were analyzed using structural equation modeling (SEM). The results show that two out of three attributes of the attribute-based preference (ATRP) affect attitude-based preference (ATTP). The data support the hypotheses that perceived enjoyment and social presence are predictors of ATTP. In this study, the findings further indicated that ATTP has no relationship with the behavioral intention of using SNS, but it has a relationship with the attitude of using SNS. SNS development should provide features that ensure enjoyment and social presence for users to communicate instead of using the traditional face-to-face method of communication.

  5. Embedded wavelet-based face recognition under variable position

    NASA Astrophysics Data System (ADS)

    Cotret, Pascal; Chevobbe, Stéphane; Darouich, Mehdi

    2015-02-01

    For several years, face recognition has been a hot topic in the image processing field: the technique is applied in domains such as CCTV and unlocking electronic devices. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of subject position robustness and performance on various systems. The use of the wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale face database B*), that subject position in a 3D space can vary up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on the approximation coefficients of the image wavelet transform: results are still satisfying after 3 levels of decomposition. Furthermore, the face database size can be divided by a factor of 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed, which is why compression techniques such as the wavelet transform are interesting. It also leads to a low-complexity face detection stage compliant with the limited computation resources available on such systems. The approach described in this work is tested on three platforms, from a standard x86-based computer to nanocomputers such as the RaspberryPi and SECO boards. For K = 3 and a database of 40 faces, the mean execution time for one frame is 0.64 ms on an x86-based computer, 9 ms on a SECO board and 26 ms on a RaspberryPi (model B).
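
    The factor-of-64 reduction follows from keeping only the approximation (low-pass) band of a 2-D wavelet decomposition: each level halves both image dimensions, so K levels reduce the pixel count by 2^(2K), i.e. 64 for K = 3. A minimal mean-normalized Haar sketch (illustrative only, not the paper's implementation):

```python
def haar_approximation(image, levels):
    """Repeatedly keep the approximation band of a mean-normalized Haar
    transform: each pass averages 2x2 blocks, halving width and height,
    so the result holds 1/4**levels of the original values."""
    for _ in range(levels):
        image = [[(image[y][x] + image[y][x + 1]
                   + image[y + 1][x] + image[y + 1][x + 1]) / 4.0
                  for x in range(0, len(image[0]) - 1, 2)]
                 for y in range(0, len(image) - 1, 2)]
    return image

# An 8x8 gradient image decomposed with K = 3 levels leaves a single
# coefficient: 64 times fewer values to store per face.
img = [[float(x + y) for x in range(8)] for y in range(8)]
approx = haar_approximation(img, 3)
print(len(approx), len(approx[0]))  # 1 1
```

    Running PCA-based recognition on these shrunken approximation images is what allows the paper's database to fit the tight memory budgets of embedded platforms.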

  6. Whole-face procedures for recovering facial images from memory.

    PubMed

    Frowd, Charlie D; Skelton, Faye; Hepton, Gemma; Holden, Laura; Minahil, Simra; Pitchford, Melanie; McIntyre, Alex; Brown, Charity; Hancock, Peter J B

    2013-06-01

    Research has indicated that traditional methods for accessing facial memories usually yield unidentifiable images. Recent research, however, has made important improvements in this area to the witness interview, the method used for constructing the face, and the recognition of finished composites. Here, we investigated whether three of these improvements would produce even more recognisable images when used in conjunction with each other. The techniques are holistic in nature: they involve processes which operate on an entire face. Forty participants first inspected an unfamiliar target face. Nominally 24 h later, they were interviewed using either a standard type of cognitive interview (CI) to recall the appearance of the target, or an enhanced 'holistic' interview in which the CI was followed by procedures for focussing on the target's character. Participants then constructed a composite using EvoFIT, a recognition-type system that requires repeatedly selecting items from face arrays, with 'breeding', to 'evolve' a composite. They either saw faces in these arrays with blurred external features, or with an enhanced method in which these faces were presented with masked external features. Further participants then attempted to name the composites, first by looking at the face front-on, the normal method, and then a second time by looking at the face side-on, which research demonstrates facilitates recognition. All techniques improved correct naming on their own, but together they promoted highly recognisable composites, with mean naming at 74% correct. The implication is that these techniques, if used together by practitioners, should substantially increase the detection of suspects using this forensic method of person identification. Copyright © 2013 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.

  7. Multi-stream face recognition for crime-fighting

    NASA Astrophysics Data System (ADS)

    Jassim, Sabah A.; Sellahewa, Harin

    2007-04-01

    Automatic face recognition (AFR) is a challenging task that is increasingly becoming the preferred biometric trait for identification and has the potential of becoming an essential tool in the fight against crime and terrorism. Closed-circuit television (CCTV) cameras have increasingly been used over the last few years for surveillance in public places such as airports, train stations and shopping centers. They are used to detect and prevent crime, shoplifting, public disorder and terrorism. The work of law-enforcement and intelligence agencies is becoming more reliant on the use of databases of biometric data for large sections of the population. The face is one of the most natural biometric traits that can be used for identification and surveillance. However, variations in lighting conditions, facial expressions, face size and pose are a great obstacle to AFR. This paper is concerned with using wavelet-based face recognition schemes in the presence of variations of expression and illumination. In particular, we will investigate the use of a combination of wavelet frequency channels for multi-stream face recognition, using various wavelet subbands as different face signal streams. The proposed schemes extend our recently developed face verification scheme for implementation on mobile devices. We shall present experimental results on the performance of our proposed schemes for a number of face databases, including a new AV database recorded on a PDA. By analyzing the various experimental data, we shall demonstrate that the multi-stream approach is more robust against variations in illumination and facial expressions than the previous single-stream approach.
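
    A minimal sketch of the multi-stream idea described above: match scores are computed independently in several wavelet subbands and fused by a weighted sum. The hand-rolled one-level 2D Haar decomposition, Euclidean distances, and fusion weights below are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar decomposition into LL, LH, HL, HH subbands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return {"LL": (a + b + c + d) / 4.0,
            "LH": (a + b - c - d) / 4.0,
            "HL": (a - b + c - d) / 4.0,
            "HH": (a - b - c + d) / 4.0}

def subband_score(f1, f2, band):
    """Distance between two faces within one wavelet stream (lower = closer)."""
    return float(np.linalg.norm(f1[band] - f2[band]))

def fused_score(img1, img2, weights):
    """Score-level fusion: weighted sum of the per-stream distances."""
    f1, f2 = haar2d(img1), haar2d(img2)
    return sum(w * subband_score(f1, f2, b) for b, w in weights.items())

# Toy "faces": a probe, a near-duplicate of it, and an unrelated image.
rng = np.random.default_rng(0)
probe = rng.random((64, 64))
same = probe + 0.01 * rng.random((64, 64))
diff = rng.random((64, 64))
w = {"LL": 0.5, "LH": 0.2, "HL": 0.2, "HH": 0.1}  # hypothetical weights
assert fused_score(probe, same, w) < fused_score(probe, diff, w)
```

    In a real system the per-stream weights would be tuned on a validation set, so that streams less affected by illumination change (the high-frequency subbands) can dominate under adverse lighting.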

  8. Crossing the “Uncanny Valley”: adaptation to cartoon faces can influence perception of human faces

    PubMed Central

    Chen, Haiwen; Russell, Richard; Nakayama, Ken; Livingstone, Margaret

    2013-01-01

    Adaptation can shift what individuals identify to be a prototypical or attractive face. Past work suggests that low-level shape adaptation can affect high-level face processing but is position dependent. Adaptation to distorted images of faces can also affect face processing but only within sub-categories of faces, such as gender, age, and race/ethnicity. This study assesses whether there is a representation of face that is specific to faces (as opposed to all shapes) but general to all kinds of faces (as opposed to subcategories) by testing whether adaptation to one type of face can affect perception of another. Participants were shown cartoon videos containing faces with abnormally large eyes. Using animated videos allowed us to simulate naturalistic exposure and avoid positional shape adaptation. Results suggest that adaptation to cartoon faces with large eyes shifts preferences for human faces toward larger eyes, supporting the existence of general face representations. PMID:20465173

  9. Face recognition increases during saccade preparation.

    PubMed

    Lin, Hai; Rizak, Joshua D; Ma, Yuan-ye; Yang, Shang-chuan; Chen, Lin; Hu, Xin-tian

    2014-01-01

    Face perception is integral to the human perceptual system as it underlies social interactions. Saccadic eye movements are frequently made to bring interesting visual information, such as faces, onto the fovea for detailed processing. Just before eye movement onset, the processing of some basic features of an object, such as its orientation, improves at the saccade landing point. Interestingly, there is also evidence indicating that faces are processed in early visual processing stages similarly to basic features. However, it is not known whether this early enhancement of processing includes face recognition. In this study, three experiments were performed that mapped the timing of face presentation to the beginning of the eye movement in order to evaluate pre-saccadic face recognition. Faces were found to be processed similarly to simple objects immediately prior to saccadic movements. Starting ∼120 ms before a saccade to a target face, independent of whether or not the face was surrounded by other faces, face recognition gradually improved and the critical spacing of the crowding decreased as saccade onset approached. These results suggest that an upcoming saccade prepares the visual system for new information about faces at the saccade landing site and may reduce the background in a crowd to target the intended face. This indicates an important role of pre-saccadic eye movement signals in human face recognition.

  10. The MUSE project face to face with reality

    NASA Astrophysics Data System (ADS)

    Caillier, P.; Accardo, M.; Adjali, L.; Anwand, H.; Bacon, Roland; Boudon, D.; Brotons, L.; Capoani, L.; Daguisé, E.; Dupieux, M.; Dupuy, C.; François, M.; Glindemann, A.; Gojak, D.; Hansali, G.; Hahn, T.; Jarno, A.; Kelz, A.; Koehler, C.; Kosmalski, J.; Laurent, F.; Le Floch, M.; Lizon, J.-L.; Loupias, M.; Manescau, A.; Migniau, J. E.; Monstein, C.; Nicklas, H.; Parès, L.; Pécontal-Rousset, A.; Piqueras, L.; Reiss, R.; Remillieux, A.; Renault, E.; Rupprecht, G.; Streicher, O.; Stuik, R.; Valentin, H.; Vernet, J.; Weilbacher, P.; Zins, G.

    2012-09-01

    MUSE (Multi Unit Spectroscopic Explorer) is a second-generation instrument built for ESO (European Southern Observatory) to be installed in Chile on the VLT (Very Large Telescope). The MUSE project is supported by a European consortium of 7 institutes. After the critical turning point of shifting from the design to the manufacturing phase, the MUSE project has now completed the realization of its different sub-systems and should finalize its global integration and test in Europe. To arrive at this point, many challenges had to be overcome: technical difficulties, non-compliances and procurement delays which seemed at the time overwhelming. Now is the time to face the results of our organization, of our strategy, of our choices. Now is the time to face the reality of the MUSE instrument. During the design phase a plan was provided by the project management in order to achieve the realization of the MUSE instrument within specification, time and cost. This critical moment in the project's life, when the instrument takes shape and becomes reality, is the opportunity to look not only at the outcome but also to see how well we followed the original plan, what had to be changed or adapted, and what should have been.

  11. Familiar Face Recognition in Children with Autism: The Differential Use of Inner and Outer Face Parts

    ERIC Educational Resources Information Center

    Wilson, Rebecca; Pascalis, Olivier; Blades, Mark

    2007-01-01

    We investigated whether children with autistic spectrum disorders (ASD) have a deficit in recognising familiar faces. Children with ASD were given a forced choice familiar face recognition task with three conditions: full faces, inner face parts and outer face parts. Control groups were children with developmental delay (DD) and typically…

  12. Online or Face to Face? A Comparison of Two Methods of Training Professionals

    ERIC Educational Resources Information Center

    Dillon, Kristin; Dworkin, Jodi; Gengler, Colleen; Olson, Kathleen

    2008-01-01

    Online courses offer benefits over face-to-face courses such as accessibility, affordability, and flexibility. Literature assessing the effectiveness of face-to-face and online courses is growing, but findings remain inconclusive. This study compared evaluations completed by professionals who had taken a research update short course either face to…

  13. Faces do not capture special attention in children with autism spectrum disorder: a change blindness study.

    PubMed

    Kikuchi, Yukiko; Senju, Atsushi; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu

    2009-01-01

    Two experiments investigated attention of children with autism spectrum disorder (ASD) to faces and objects. In both experiments, children (7- to 15-year-olds) detected the difference between 2 visual scenes. Results in Experiment 1 revealed that typically developing children (n = 16) detected the change in faces faster than in objects, whereas children with ASD (n = 16) were equally fast in detecting changes in faces and objects. These results were replicated in Experiment 2 (n = 16 in children with ASD and 22 in typically developing children), which does not require face recognition skill. Results suggest that children with ASD lack an attentional bias toward others' faces, which could contribute to their atypical social orienting.

  14. Emergency face-mask removal effectiveness: a comparison of traditional and nontraditional football helmet face-mask attachment systems.

    PubMed

    Swartz, Erik E; Belmore, Keith; Decoster, Laura C; Armstrong, Charles W

    2010-01-01

    Football helmet face-mask attachment design changes might affect the effectiveness of face-mask removal. To compare the efficiency of face-mask removal between newly designed and traditional football helmets. Controlled laboratory study. Applied biomechanics laboratory. Twenty-five certified athletic trainers. The independent variable was face-mask attachment system on 5 levels: (1) Revolution IQ with Quick Release (QR), (2) Revolution IQ with Quick Release hardware altered (QRAlt), (3) traditional (Trad), (4) traditional with hardware altered (TradAlt), and (5) ION 4D (ION). Participants removed face masks using a cordless screwdriver with a back-up cutting tool or only the cutting tool for the ION. Investigators altered face-mask hardware to unexpectedly challenge participants during removal for traditional and Revolution IQ helmets. Participants completed each condition twice in random order and were blinded to hardware alteration. Removal success, removal time, helmet motion, and rating of perceived exertion (RPE). Time and 3-dimensional helmet motion were recorded. If the face mask remained attached at 3 minutes, the trial was categorized as unsuccessful. Participants rated each trial for level of difficulty (RPE). We used repeated-measures analyses of variance (α  =  .05) with follow-up comparisons to test for differences. Removal success was 100% (48 of 48) for QR, Trad, and ION; 97.9% (47 of 48) for TradAlt; and 72.9% (35 of 48) for QRAlt. Differences in time for face-mask removal were detected (F(4,20)  =  48.87, P  =  .001), with times ranging from 33.96 ± 14.14 seconds for QR to 99.22 ± 20.53 seconds for QRAlt. Differences were found in range of motion during face-mask removal (F(4,20)  =  16.25, P  =  .001), with range of motion from 10.10° ± 3.07° for QR to 16.91° ± 5.36° for TradAlt. Differences also were detected in RPE during face-mask removal (F(4,20)  =  43.20, P  =  .001), with participants reporting average

  15. Emergency Face-Mask Removal Effectiveness: A Comparison of Traditional and Nontraditional Football Helmet Face-Mask Attachment Systems

    PubMed Central

    Swartz, Erik E.; Belmore, Keith; Decoster, Laura C.; Armstrong, Charles W.

    2010-01-01

    Abstract Context: Football helmet face-mask attachment design changes might affect the effectiveness of face-mask removal. Objective: To compare the efficiency of face-mask removal between newly designed and traditional football helmets. Design: Controlled laboratory study. Setting: Applied biomechanics laboratory. Participants: Twenty-five certified athletic trainers. Intervention(s): The independent variable was face-mask attachment system on 5 levels: (1) Revolution IQ with Quick Release (QR), (2) Revolution IQ with Quick Release hardware altered (QRAlt), (3) traditional (Trad), (4) traditional with hardware altered (TradAlt), and (5) ION 4D (ION). Participants removed face masks using a cordless screwdriver with a back-up cutting tool or only the cutting tool for the ION. Investigators altered face-mask hardware to unexpectedly challenge participants during removal for traditional and Revolution IQ helmets. Participants completed each condition twice in random order and were blinded to hardware alteration. Main Outcome Measure(s): Removal success, removal time, helmet motion, and rating of perceived exertion (RPE). Time and 3-dimensional helmet motion were recorded. If the face mask remained attached at 3 minutes, the trial was categorized as unsuccessful. Participants rated each trial for level of difficulty (RPE). We used repeated-measures analyses of variance (α  =  .05) with follow-up comparisons to test for differences. Results: Removal success was 100% (48 of 48) for QR, Trad, and ION; 97.9% (47 of 48) for TradAlt; and 72.9% (35 of 48) for QRAlt. Differences in time for face-mask removal were detected (F4,20  =  48.87, P  =  .001), with times ranging from 33.96 ± 14.14 seconds for QR to 99.22 ± 20.53 seconds for QRAlt. Differences were found in range of motion during face-mask removal (F4,20  =  16.25, P  =  .001), with range of motion from 10.10° ± 3.07° for QR to 16.91° ± 5.36° for TradAlt. Differences also were detected

  16. Advanced Face Gear Surface Durability Evaluations

    NASA Technical Reports Server (NTRS)

    Lewicki, David G.; Heath, Gregory F.

    2016-01-01

    The surface durability life of helical face gears and isotropic super-finished (ISF) face gears was investigated. Experimental fatigue tests were performed at the NASA Glenn Research Center. Endurance tests were performed on 10 sets of helical face gears in mesh with tapered involute helical pinions, and 10 sets of ISF-enhanced straight face gears in mesh with tapered involute spur pinions. The results were compared to previous tests on straight face gears. The life of the ISF configuration was slightly less than that of previous tests on straight face gears. The life of the ISF configuration was slightly greater than that of the helical configuration.

  17. JAMES RIVER FACE WILDERNESS, VIRGINIA.

    USGS Publications Warehouse

    Brown, C. Ervin; Gazdik, Gertrude C.

    1984-01-01

    A mineral survey concluded that the James River Face Wilderness, Virginia, had little promise for the occurrence of metallic mineral resources. Two major rock units in the area do contain large nonmetallic mineral resources of quartzite and shale that have been mined for silica products and for brick and expanded aggregate, respectively. Because large deposits of the same material are more easily available in nearby areas, demand for the deposits within the wilderness is highly unlikely. No energy resources were identified in the course of this study.

  18. Crystal face temperature determination means

    DOEpatents

    Nason, D.O.; Burger, A.

    1994-11-22

    An optically transparent furnace having a detection apparatus with a pedestal enclosed in an evacuated ampule for growing a crystal thereon is disclosed. Temperature differential is provided by a source heater, a base heater and a cold finger such that material migrates from a polycrystalline source material to grow the crystal. A quartz halogen lamp projects a collimated beam onto the crystal and a reflected beam is analyzed by a double monochromator and photomultiplier detection spectrometer and the detected peak position in the reflected energy spectrum of the reflected beam is interpreted to determine surface temperature of the crystal. 3 figs.

  19. Normal composite face effects in developmental prosopagnosia.

    PubMed

    Biotti, Federica; Wu, Esther; Yang, Hua; Jiahui, Guo; Duchaine, Bradley; Cook, Richard

    2017-10-01

    Upright face perception is thought to involve holistic processing, whereby local features are integrated into a unified whole. Consistent with this view, the top half of one face appears to fuse perceptually with the bottom half of another, when aligned spatially and presented upright. This 'composite face effect' reveals a tendency to integrate information from disparate regions when faces are presented canonically. In recent years, the relationship between susceptibility to the composite effect and face recognition ability has received extensive attention both in participants with normal face recognition and participants with developmental prosopagnosia. Previous results suggest that individuals with developmental prosopagnosia may show reduced susceptibility to the effect suggestive of diminished holistic face processing. Here we describe two studies that examine whether developmental prosopagnosia is associated with reduced composite face effects. Despite using independent samples of developmental prosopagnosics and different composite procedures, we find no evidence for reduced composite face effects. The experiments yielded similar results; highly significant composite effects in both prosopagnosic groups that were similar in magnitude to the effects found in participants with normal face processing. The composite face effects exhibited by both samples and the controls were greatly diminished when stimulus arrangements were inverted. Our finding that the whole-face binding process indexed by the composite effect is intact in developmental prosopagnosia indicates that other factors are responsible for developmental prosopagnosia. These results are also inconsistent with suggestions that susceptibility to the composite face effect and face recognition ability are tightly linked. While the holistic process revealed by the composite face effect may be necessary for typical face perception, it is not sufficient; individual differences in face recognition ability

  20. Face photo-sketch synthesis and recognition.

    PubMed

    Wang, Xiaogang; Tang, Xiaoou

    2009-11-01

    In this paper, we propose a novel face photo-sketch synthesis and recognition method using a multiscale Markov Random Fields (MRF) model. Our system has three components: 1) given a face photo, synthesizing a sketch drawing; 2) given a face sketch drawing, synthesizing a photo; and 3) searching for face photos in the database based on a query sketch drawn by an artist. It has useful applications for both digital entertainment and law enforcement. We assume that faces to be studied are in a frontal pose, with normal lighting and neutral expression, and have no occlusions. To synthesize sketch/photo images, the face region is divided into overlapping patches for learning. The size of the patches decides the scale of local face structures to be learned. From a training set which contains photo-sketch pairs, the joint photo-sketch model is learned at multiple scales using a multiscale MRF model. By transforming a face photo to a sketch (or transforming a sketch to a photo), the difference between photos and sketches is significantly reduced, thus allowing effective matching between the two in face sketch recognition. After the photo-sketch transformation, in principle, most of the proposed face photo recognition approaches can be applied to face sketch recognition in a straightforward way. Extensive experiments are conducted on a face sketch database including 606 faces, which can be downloaded from our Web site (http://mmlab.ie.cuhk.edu.hk/facesketch.html).
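
    The patch-based synthesis step described above can be sketched as follows, on a toy training set of photo-sketch pairs. This nearest-neighbour version substitutes, for each photo patch, the sketch patch paired with its closest training photo patch; it omits the paper's multiscale MRF compatibility term and is only an illustration of the patch-substitution idea.

```python
import numpy as np

def extract_patches(img, size, step):
    """Overlapping square patches, returned with their top-left coordinates."""
    out = []
    for y in range(0, img.shape[0] - size + 1, step):
        for x in range(0, img.shape[1] - size + 1, step):
            out.append(((y, x), img[y:y + size, x:x + size]))
    return out

def synthesize_sketch(photo, train_photos, train_sketches, size, step):
    """Paste, for each photo patch, the sketch patch paired with its nearest
    training photo patch; overlapping regions are averaged."""
    acc = np.zeros_like(photo, dtype=float)
    cnt = np.zeros_like(photo, dtype=float)
    for (y, x), p in extract_patches(photo, size, step):
        dists = [np.linalg.norm(p - tp) for tp in train_photos]
        best = int(np.argmin(dists))
        acc[y:y + size, x:x + size] += train_sketches[best]
        cnt[y:y + size, x:x + size] += 1.0
    cnt[cnt == 0] = 1.0
    return acc / cnt

# Toy training set in which sketches are inverted photos, so a correct
# synthesis of a training photo should reproduce its inverted counterpart.
rng = np.random.default_rng(1)
train_photos = [rng.random((8, 8)) for _ in range(20)]
train_sketches = [1.0 - p for p in train_photos]
photo = train_photos[0]
sketch = synthesize_sketch(photo, train_photos, train_sketches, 8, 8)
assert np.allclose(sketch, 1.0 - photo)
```

    The patch size plays the role described in the abstract: small patches capture fine local structure, large patches capture coarse structure, and the multiscale MRF in the paper combines both rather than using a single scale as here.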

  1. Simultaneous face and voice processing in schizophrenia.

    PubMed

    Liu, Taosheng; Pinheiro, Ana P; Zhao, Zhongxin; Nestor, Paul G; McCarley, Robert W; Niznikiewicz, Margaret

    2016-05-15

    While several studies have consistently demonstrated abnormalities in the unisensory processing of face and voice in schizophrenia (SZ), the extent of abnormalities in the simultaneous processing of both types of information remains unclear. To address this issue, we used event-related potentials (ERP) methodology to probe the multisensory integration of face and non-semantic sounds in schizophrenia. EEG was recorded from 18 schizophrenia patients and 19 healthy control (HC) subjects in three conditions: neutral faces (visual condition-VIS); neutral non-semantic sounds (auditory condition-AUD); neutral faces presented simultaneously with neutral non-semantic sounds (audiovisual condition-AUDVIS). When compared with HC, the schizophrenia group showed less negative N170 to both face and face-voice stimuli; later P270 peak latency in the multimodal condition of face-voice relative to unimodal condition of face (the reverse was true in HC); reduced P400 amplitude and earlier P400 peak latency in the face but not in the voice-face condition. Thus, the analysis of ERP components suggests that deficits in the encoding of facial information extend to multimodal face-voice stimuli and that delays exist in feature extraction from multimodal face-voice stimuli in schizophrenia. In contrast, categorization processes seem to benefit from the presentation of simultaneous face-voice information. Timepoint by timepoint tests of multimodal integration did not suggest impairment in the initial stages of processing in schizophrenia. Published by Elsevier B.V.

  2. Developmental origins of the face inversion effect.

    PubMed

    Cashon, Cara H; Holt, Nicholas A

    2015-01-01

    A hallmark of adults' expertise for faces is that they are better at recognizing, discriminating, and processing upright faces compared to inverted faces. We investigate the developmental origins of "the face inversion effect" by reviewing research on infants' perception of upright and inverted faces during the first year of life. We review the effects of inversion on infants' face preference, recognition, processing (holistic and second-order configural), and scanning as well as face-related neural responses. Particular attention is paid to the developmental patterns that emerge within and across these areas of face perception. We conclude that the developmental origins of the inversion effect begin in the first few months of life and grow stronger over the first year, culminating in effects that are commonly thought to indicate adult-like expertise. We posit that by the end of the first year, infants' face-processing system has become specialized to upright faces and a foundation for adults' upright-face expertise has been established. Developmental mechanisms that may facilitate the emergence of this upright-face specialization are discussed, including the roles that physical and social development may play in upright faces' becoming more meaningful to infants during the first year. © 2015 Elsevier Inc. All rights reserved.

  3. Emotional Expression and Heart Rate in High-Risk Infants during the Face-To-Face/Still-Face

    PubMed Central

    Mattson, Whitney I.; Ekas, Naomi V.; Lambert, Brittany; Tronick, Ed; Lester, Barry M.; Messinger, Daniel S.

    2013-01-01

    In infants, eye constriction—the Duchenne marker—and mouth opening appear to index the intensity of both positive and negative facial expressions. We combined eye constriction and mouth opening that co-occurred with smiles and cry-faces (respectively, the prototypic expressions of infant joy and distress) to measure emotional expression intensity. Expression intensity and heart rate were measured throughout the Face-to-Face/Still-Face (FFSF) in a sample of infants with prenatal cocaine exposure who were at risk for developmental difficulties. Smiles declined and cry-faces increased in the still-face episode, but the distribution of eye constriction and mouth opening in smiles and cry-faces did not differ across episodes of the FFSF. As time elapsed in the still-face episode, potential indices of intensity increased: cry-faces were more likely to be accompanied by eye constriction and mouth opening. During cry-faces there were also moderately stable individual differences in the quantity of eye constriction and mouth opening. Infant heart rate was higher during cry-faces and lower during smiles, but did not vary with intensity of expression or by episode. In sum, infants express more intense negative affect as the still-face progresses, but do not show clear differences in expressive intensity between episodes of the FFSF. PMID:24095807

  4. Emotional expression and heart rate in high-risk infants during the face-to-face/still-face.

    PubMed

    Mattson, Whitney I; Ekas, Naomi V; Lambert, Brittany; Tronick, Ed; Lester, Barry M; Messinger, Daniel S

    2013-12-01

    In infants, eye constriction-the Duchenne marker-and mouth opening appear to index the intensity of both positive and negative facial expressions. We combined eye constriction and mouth opening that co-occurred with smiles and cry-faces (respectively, the prototypic expressions of infant joy and distress) to measure emotional expression intensity. Expression intensity and heart rate were measured throughout the face-to-face/still-face (FFSF) in a sample of infants with prenatal cocaine exposure who were at risk for developmental difficulties. Smiles declined and cry-faces increased in the still-face episode, but the distribution of eye constriction and mouth opening in smiles and cry-faces did not differ across episodes of the FFSF. As time elapsed in the still-face episode, potential indices of intensity increased: cry-faces were more likely to be accompanied by eye constriction and mouth opening. During cry-faces there were also moderately stable individual differences in the quantity of eye constriction and mouth opening. Infant heart rate was higher during cry-faces and lower during smiles, but did not vary with intensity of expression or by episode. In sum, infants express more intense negative affect as the still-face progresses, but do not show clear differences in expressive intensity between episodes of the FFSF. Copyright © 2013 Elsevier Inc. All rights reserved.

  5. A decade of letrozole: FACE

    PubMed Central

    2007-01-01

    Third-generation nonsteroidal aromatase inhibitors (AIs), letrozole and anastrozole, are superior to tamoxifen as initial therapy for early breast cancer but have not been directly compared in a head-to-head adjuvant trial. Cumulative evidence suggests that AIs are not equivalent in terms of potency of estrogen suppression and that there may be differences in clinical efficacy. Thus, with no data from head-to-head comparisons of the AIs as adjuvant therapy yet available, the question of whether there are efficacy differences between the AIs remains. To help answer this question, the Femara versus Anastrozole Clinical Evaluation (FACE) is a phase IIIb open-label, randomized, multicenter trial designed to test whether letrozole or anastrozole has superior efficacy as adjuvant treatment of postmenopausal women with hormone receptor (HR)- and lymph node-positive breast cancer. Eligible patients (target accrual, N = 4,000) are randomized to receive either letrozole 2.5 mg or anastrozole 1 mg daily for up to 5 years. The primary objective is to compare disease-free survival at 5 years. Secondary end points include safety, overall survival, time to distant metastases, and time to contralateral breast cancer. The FACE trial will determine whether or not letrozole offers a greater clinical benefit to postmenopausal women with HR+ early breast cancer at increased risk of early recurrence compared with anastrozole. PMID:17912637

  6. Cidaroids spines facing ocean acidification.

    PubMed

    Dery, Aurélie; Tran, Phuong Dat; Compère, Philippe; Dubois, Philippe

    2018-07-01

    When facing seawater undersaturated towards calcium carbonates, spines of classical sea urchins (euechinoids) show traces of corrosion although they are covered by an epidermis. Cidaroids (a sister clade of euechinoids) have mature spines devoid of epidermis, which makes them, at first sight, more sensitive to dissolution when facing undersaturated seawater. A recent study showed that spines of a tropical cidaroid are resistant to dissolution due to the high density and the low magnesium concentration of the peculiar external spine layer, the cortex. The biofilm and epibionts covering the spines were also suggested to take part in the spine protection. Here, we investigate the protective role of these factors in different cidaroid species from a broad range of latitudes, temperatures and depths. The high density of the cortical layer and the cover of biofilm and epibionts were confirmed as key protections against dissolution. The low magnesium concentration of cidaroid spines compared to that of euechinoid ones makes them less soluble in general. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Seeing a haptically explored face: visual facial-expression aftereffect from haptic adaptation to a face.

    PubMed

    Matsumiya, Kazumichi

    2013-10-01

    Current views on face perception assume that the visual system receives only visual facial signals. However, I show that the visual perception of faces is systematically biased by adaptation to a haptically explored face. Recently, face aftereffects (FAEs; the altered perception of faces after adaptation to a face) have been demonstrated not only in visual perception but also in haptic perception; therefore, I combined the two FAEs to examine whether the visual system receives face-related signals from the haptic modality. I found that adaptation to a haptically explored facial expression on a face mask produced a visual FAE for facial expression. This cross-modal FAE was not due to explicitly imaging a face, response bias, or adaptation to local features. Furthermore, FAEs transferred from vision to haptics. These results indicate that visual face processing depends on substrates adapted by haptic faces, which suggests that face processing relies on shared representation underlying cross-modal interactions.

  8. The effect of face patch microstimulation on perception of faces and objects.

    PubMed

    Moeller, Sebastian; Crapse, Trinity; Chang, Le; Tsao, Doris Y

    2017-05-01

    What is the range of stimuli encoded by face-selective regions of the brain? We asked how electrical microstimulation of face patches in macaque inferotemporal cortex affects perception of faces and objects. We found that microstimulation strongly distorted face percepts and that this effect depended on precise targeting to the center of face patches. While microstimulation had no effect on the percept of many non-face objects, it did affect the percept of some, including non-face objects whose shape is consistent with a face (for example, apples) as well as somewhat facelike abstract images (for example, cartoon houses). Microstimulation even perturbed the percept of certain objects that did not activate the stimulated face patch at all. Overall, these results indicate that representation of facial identity is localized to face patches, but activity in these patches can also affect perception of face-compatible non-face objects, including objects normally represented in other parts of inferotemporal cortex.

  9. Attractive faces temporally modulate visual attention

    PubMed Central

    Nakamura, Koyo; Kawabata, Hideaki

    2014-01-01

    Facial attractiveness is an important biological and social signal in social interaction. Recent research has demonstrated that an attractive face captures greater spatial attention than an unattractive face does. Little is known, however, about the temporal characteristics of visual attention for facial attractiveness. In this study, we investigated the temporal modulation of visual attention induced by facial attractiveness by using a rapid serial visual presentation. Fourteen male faces and two female faces were presented successively for 160 ms each, and participants were asked to identify the two female faces embedded among the male distractor faces. Identification of the second female target (T2) was impaired when the first target (T1) was attractive compared to neutral or unattractive faces at 320 ms stimulus onset asynchrony (SOA); identification was improved when T1 was attractive compared to unattractive faces at 640 ms SOA. These findings suggest that the spontaneous appraisal of facial attractiveness modulates temporal attention. PMID:24994994

  10. Facing Diabetes: What You Need to Know

    MedlinePlus

    Feature: Facing Diabetes: What You Need to Know ... your loved ones. Photos: AP. The Faces of Diabetes: Diabetes strikes millions of Americans, young and old, ...

  11. Facial detection using deep learning

    NASA Astrophysics Data System (ADS)

    Sharma, Manik; Anuradha, J.; Manne, H. K.; Kashyap, G. S. C.

    2017-11-01

    In the recent past, we have observed that Facebook has developed an uncanny ability to recognize people in photographs. Previously, we had to tag people in photos by clicking on them and typing their name. Now, as soon as we upload a photo, Facebook tags everyone on its own. Facebook can recognize faces with 98% accuracy, which is pretty much as good as humans can do. This technology is called face detection. Face detection is a popular topic in biometrics. We have surveillance cameras in public places for video capture as well as security purposes. The main advantages of this algorithm over others are uniqueness and approval. We need both speed and accuracy to identify. But face detection is really a series of several related problems: First, look at a picture and find all the faces in it. Second, focus on each face and recognize that, even if a face is turned in a weird direction or seen in bad lighting, it is still the same person. Third, select features that can be used to identify each face uniquely, such as the size of the eyes or face. Finally, compare these features to the data we have to find the person's name. As a human, your brain is wired to do all of this automatically and instantly; in fact, humans are too good at recognizing faces. Computers are not capable of this kind of high-level generalization, so we must teach them how to do each step in the process separately. The growth of face detection is largely driven by growing applications such as credit card verification, surveillance video images, and authentication for banking and security system access.
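
    The detection step in this pipeline is commonly implemented as a boosted rejection cascade in the style of the Viola-Jones detector discussed elsewhere on this page. The sketch below shows only the cascade control flow; the stage features and thresholds are invented stand-ins, not trained Haar-feature stages.

```python
# Minimal sketch of a Viola-Jones-style rejection cascade: each stage is a
# cheap classifier, and a window is reported as a face only if it survives
# every stage. Most non-face windows are rejected by the first, cheapest
# stages, which is what makes the cascade fast.

def run_cascade(window, stages):
    """Return True if `window` passes every (score_fn, threshold) stage."""
    for score_fn, threshold in stages:
        if score_fn(window) < threshold:
            return False  # early rejection: most windows exit here
    return True

# Toy stages: pretend a "window" is a dict of precomputed feature values.
stages = [
    (lambda w: w["eye_band_contrast"], 0.3),    # cheap early stage
    (lambda w: w["nose_bridge_contrast"], 0.5), # more selective later stage
]

face_like = {"eye_band_contrast": 0.8, "nose_bridge_contrast": 0.7}
background = {"eye_band_contrast": 0.1, "nose_bridge_contrast": 0.9}

print(run_cascade(face_like, stages))   # True
print(run_cascade(background, stages))  # False
```

    A real detector slides such a cascade over the image at many positions and scales; only the handful of windows that pass all stages are passed on to the more expensive recognition steps.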

  12. Implicit conditioning of faces via the social regulation of emotion: ERP evidence of early attentional biases for security conditioned faces.

    PubMed

    Beckes, Lane; Coan, James A; Morris, James P

    2013-08-01

    Not much is known about the neural and psychological processes that promote the initial conditions necessary for positive social bonding. This study explores one method of conditioned bonding utilizing dynamics related to the social regulation of emotion and attachment theory. This form of conditioning involves repeated presentations of negative stimuli followed by images of warm, smiling faces. L. Beckes, J. Simpson, and A. Erickson (2010) found that this conditioning procedure results in positive associations with the faces measured via a lexical decision task, suggesting they are perceived as comforting. This study found that the P1 ERP was similarly modified by this conditioning procedure and the P1 amplitude predicted lexical decision times to insecure words primed by the faces. The findings have implications for understanding how the brain detects supportive people, the flexibility and modifiability of early ERP components, and social bonding more broadly. Copyright © 2013 Society for Psychophysiological Research.

  13. Integral Face Shield Concept for Firefighter's Helmet

    NASA Technical Reports Server (NTRS)

    Abeles, F.; Hansberry, E.; Himel, V.

    1982-01-01

    Stowable face shield could be made integral part of helmet worn by firefighters. Shield, made from same tough clear plastic as removable face shields presently used, would be pivoted at temples to slide up inside helmet when not needed. Stowable face shield, being stored in helmet, is always available, ready for use, and is protected when not being used.

  14. Erasing the face after-effect.

    PubMed

    Kiani, Ghazaleh; Davies-Thompson, Jodie; Barton, Jason J S

    2014-10-24

    Perceptual after-effects decay over time at a rate that depends on several factors, such as the duration of adaptation and the duration of the test stimuli. Whether this decay is accelerated by exposure to other faces after adaptation is not known. Our goal was to determine if the appearance of other faces during a delay period after adaptation affected the face identity after-effect. In the first experiment we investigated whether, in the perception of ambiguous stimuli created by morphing between two faces, the repulsive after-effects from adaptation to one face were reduced by brief presentation of the second face in a delay period. We found no effect; however, this may have been confounded by a small attractive after-effect from the interference face. In the second experiment, the interference stimuli were faces unrelated to those used as adaptation stimuli, and we examined after-effects at three different delay periods. This showed a decline in after-effects as the time since adaptation increased, and an enhancement of this decline by the presentation of intervening faces. An exponential model estimated that the intervening faces caused an 85% reduction in the time constant of the after-effect decay. In conclusion, we confirm that face after-effects decline rapidly after adaptation and that exposure to other faces hastens the re-setting of the system. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.

  15. Implicit face prototype learning from geometric information.

    PubMed

    Or, Charles C-F; Wilson, Hugh R

    2013-04-19

    There is evidence that humans implicitly learn an average or prototype of previously studied faces, as the unseen face prototype is falsely recognized as having been learned (Solso & McCarthy, 1981). Here we investigated the extent and nature of face prototype formation where observers' memory was tested after they studied synthetic faces defined purely in geometric terms in a multidimensional face space. We found a strong prototype effect: The basic results showed that the unseen prototype averaged from the studied faces was falsely identified as learned at a rate of 86.3%, whereas individual studied faces were identified correctly 66.3% of the time and the distractors were incorrectly identified as having been learned only 32.4% of the time. This prototype learning lasted at least 1 week. Face prototype learning occurred even when the studied faces were further from the unseen prototype than the median variation in the population. Prototype memory formation was evident in addition to memory formation of studied face exemplars as demonstrated in our models. Additional studies showed that the prototype effect can be generalized across viewpoints, and head shape and internal features separately contribute to prototype formation. Thus, implicit face prototype extraction in a multidimensional space is a very general aspect of geometric face learning. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Visual Search for Faces with Emotional Expressions

    ERIC Educational Resources Information Center

    Frischen, Alexandra; Eastwood, John D.; Smilek, Daniel

    2008-01-01

    The goal of this review is to critically examine contradictory findings in the study of visual search for emotionally expressive faces. Several key issues are addressed: Can emotional faces be processed preattentively and guide attention? What properties of these faces influence search efficiency? Is search moderated by the emotional state of the…

  17. Recognition memory of newly learned faces.

    PubMed

    Ishai, Alumit; Yago, Elena

    2006-12-11

    We used event-related fMRI to study recognition memory of newly learned faces. Caucasian subjects memorized unfamiliar, neutral and happy South Korean faces and 4 days later performed a memory retrieval task in the MR scanner. We predicted that previously seen faces would be recognized faster and more accurately and would elicit stronger neural activation than novel faces. Consistent with our hypothesis, novel faces were recognized more slowly and less accurately than previously seen faces. We found activation in a distributed cortical network that included face-responsive regions in the visual cortex, parietal and prefrontal regions, and the hippocampus. Within all regions, correctly recognized, previously seen faces evoked stronger activation than novel faces. Additionally, in parietal and prefrontal cortices, stronger activation was observed during correct than incorrect trials. Finally, in the hippocampus, false alarms to happy faces elicited stronger responses than false alarms to neutral faces. Our findings suggest that face recognition memory is mediated by stimulus-specific representations stored in extrastriate regions; parietal and prefrontal regions where old and new items are classified; and the hippocampus where veridical memory traces are recovered.

  18. 49 CFR 236.774 - Movement, facing.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Movement, facing. 236.774 Section 236.774 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... Movement, facing. The movement of a train over the points of a switch which face in a direction opposite to...

  19. Infants Experience Perceptual Narrowing for Nonprimate Faces

    ERIC Educational Resources Information Center

    Simpson, Elizabeth A.; Varga, Krisztina; Frick, Janet E.; Fragaszy, Dorothy

    2011-01-01

    Perceptual narrowing--a phenomenon in which perception is broad from birth, but narrows as a function of experience--has previously been tested with primate faces. In the first 6 months of life, infants can discriminate among individual human and monkey faces. Though the ability to discriminate monkey faces is lost after about 9 months, infants…

  20. Recognition Memory for Realistic Synthetic Faces

    PubMed Central

    Yotsumoto, Yuko; Kahana, Michael J.; Wilson, Hugh R.; Sekuler, Robert

    2006-01-01

    A series of experiments examined short-term recognition memory for trios of briefly-presented, synthetic human faces derived from three real human faces. The stimuli were graded series of faces, which differed by varying known amounts from the face of the average female. Faces based on each of the three real faces were transformed so as to lie along orthogonal axes in a 3-D face space. Experiment 1 showed that the synthetic faces' perceptual similarity structure strongly influenced recognition memory. Results were fit by NEMo, a noisy exemplar model of perceptual recognition memory. The fits revealed that recognition memory was influenced both by the similarity of the probe to series items, and by the similarities among the series items themselves. Non-metric multi-dimensional scaling (MDS) showed that faces' perceptual representations largely preserved the 3-D space in which the face stimuli were arrayed. NEMo gave a better account of the results when similarity was defined as perceptual, MDS similarity rather than physical proximity of one face to another. Experiment 2 confirmed the importance of within-list homogeneity directly, without mediation of a model. We discuss the affinities and differences between visual memory for synthetic faces and memory for simpler stimuli. PMID:17948069

  1. Orienting to Eye Gaze and Face Processing

    ERIC Educational Resources Information Center

    Tipples, Jason

    2005-01-01

    The author conducted 7 experiments to examine possible interactions between orienting to eye gaze and specific forms of face processing. Participants classified a letter following either an upright or inverted face with averted, uninformative eye gaze. Eye gaze orienting effects were recorded for upright and inverted faces, irrespective of whether…

  2. Unconscious Evaluation of Faces on Social Dimensions

    PubMed Central

    Stewart, Lorna H.; Ajina, Sara; Getov, Spas; Bahrami, Bahador; Todorov, Alexander; Rees, Geraint

    2012-01-01

    It has been proposed that two major axes, dominance and trustworthiness, characterize the social dimensions of face evaluation. Whether evaluation of faces on these social dimensions is restricted to conscious appraisal or happens at a preconscious level is unknown. Here we provide behavioral evidence that such preconscious evaluations exist and that they are likely to be interpretations arising from interactions between the face stimuli and observer-specific traits. Monocularly viewed faces that varied independently along two social dimensions of trust and dominance were rendered invisible by continuous flash suppression (CFS) when a flashing pattern was presented to the other eye. Participants pressed a button as soon as they saw the face emerge from suppression to indicate whether the previously hidden face was located slightly to the left or right of central fixation. Dominant and untrustworthy faces took significantly longer time to emerge (T2E) compared with neutral faces. A control experiment showed these findings could not reflect delayed motor responses to conscious faces. Finally, we showed that participants' self-reported propensity to trust was strongly predictive of untrust avoidance (i.e., difference in T2E for untrustworthy vs neutral faces) as well as dominance avoidance (i.e., difference in T2E for dominant vs neutral faces). Dominance avoidance was also correlated with submissive behavior. We suggest that such prolongation of suppression for threatening faces may result from a passive fear response, leading to slowed visual perception. PMID:22468670

  3. Unconscious evaluation of faces on social dimensions.

    PubMed

    Stewart, Lorna H; Ajina, Sara; Getov, Spas; Bahrami, Bahador; Todorov, Alexander; Rees, Geraint

    2012-11-01

    It has been proposed that two major axes, dominance and trustworthiness, characterize the social dimensions of face evaluation. Whether evaluation of faces on these social dimensions is restricted to conscious appraisal or happens at a preconscious level is unknown. Here we provide behavioral evidence that such preconscious evaluations exist and that they are likely to be interpretations arising from interactions between the face stimuli and observer-specific traits. Monocularly viewed faces that varied independently along two social dimensions of trust and dominance were rendered invisible by continuous flash suppression (CFS) when a flashing pattern was presented to the other eye. Participants pressed a button as soon as they saw the face emerge from suppression to indicate whether the previously hidden face was located slightly to the left or right of central fixation. Dominant and untrustworthy faces took significantly longer time to emerge (T2E) compared with neutral faces. A control experiment showed these findings could not reflect delayed motor responses to conscious faces. Finally, we showed that participants' self-reported propensity to trust was strongly predictive of untrust avoidance (i.e., difference in T2E for untrustworthy vs neutral faces) as well as dominance avoidance (i.e., difference in T2E for dominant vs neutral faces). Dominance avoidance was also correlated with submissive behavior. We suggest that such prolongation of suppression for threatening faces may result from a passive fear response, leading to slowed visual perception. (PsycINFO Database Record (c) 2012 APA, all rights reserved).

  4. Reading Faces: From Features to Recognition.

    PubMed

    Guntupalli, J Swaroop; Gobbini, M Ida

    2017-12-01

    Chang and Tsao recently reported that the monkey face patch system encodes facial identity in a space of facial features as opposed to exemplars. Here, we discuss how such coding might contribute to face recognition, emphasizing the critical role of learning and interactions with other brain areas for optimizing the recognition of familiar faces. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Anti Theft Mechanism Through Face recognition Using FPGA

    NASA Astrophysics Data System (ADS)

    Sundari, Y. B. T.; Laxminarayana, G.; Laxmi, G. Vijaya

    2012-11-01

    The use of a vehicle is a must for everyone; at the same time, protection from theft is also very important. Prevention of vehicle theft can be done remotely by an authorized person. The location of the car can be found using GPS and GSM, controlled by an FPGA. In this paper, face recognition is used to identify persons, and a comparison is made with preloaded faces for authorization. The vehicle will start only when the authorized person's face is identified. In the event of a theft attempt, or an unauthorized person's attempt to drive the vehicle, an MMS/SMS will be sent to the owner along with the location. The authorized person can then alert security personnel to track and catch the vehicle. For face recognition, a Principal Component Analysis (PCA) algorithm is developed using MATLAB. The control technique for GPS and GSM is developed in VHDL on a Spartan-3E FPGA. The MMS sending method is written in VB6.0. The proposed application can be implemented, with some modifications, wherever face recognition or detection is needed, such as at airports, international borders, and in banking applications.
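
    The recognition stage of a PCA (eigenfaces) system like the one described reduces to projecting a face image onto a small learned basis and matching the resulting weights against enrolled identities. The mean face, eigenfaces, gallery, and distance threshold below are toy values for illustration, not the paper's actual MATLAB implementation.

```python
# Sketch of eigenface-based matching: project a (flattened) face onto the
# principal-component basis, then find the nearest enrolled identity in
# weight space, rejecting the match if it is too far away.

def project(face, mean_face, eigenfaces):
    """Project a flattened face image onto the eigenface basis."""
    centered = [p - m for p, m in zip(face, mean_face)]
    return [sum(c * e for c, e in zip(centered, ef)) for ef in eigenfaces]

def identify(face, mean_face, eigenfaces, gallery, max_dist=1.0):
    """Return the enrolled name nearest in eigenface space, or None."""
    w = project(face, mean_face, eigenfaces)
    best_name, best_d2 = None, float("inf")
    for name, gw in gallery.items():
        d2 = sum((a - b) ** 2 for a, b in zip(w, gw))
        if d2 < best_d2:
            best_name, best_d2 = name, d2
    return best_name if best_d2 <= max_dist ** 2 else None  # reject strangers

# Toy 4-pixel "images" and a 2-component orthogonal basis.
mean_face = [0.5, 0.5, 0.5, 0.5]
eigenfaces = [[1, -1, 0, 0], [0, 0, 1, -1]]
gallery = {"owner": project([0.9, 0.1, 0.7, 0.3], mean_face, eigenfaces)}

print(identify([0.85, 0.15, 0.65, 0.35], mean_face, eigenfaces, gallery))  # owner
print(identify([0.1, 0.9, 0.2, 0.8], mean_face, eigenfaces, gallery))      # None
```

    In a real system the basis would be the leading eigenvectors of the training-set covariance, and the rejection threshold ("None" above) is what would keep an unauthorized driver's face from starting the vehicle.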

  6. En Face Optical Coherence Tomography for Visualization of the Choroid.

    PubMed

    Savastano, Maria Cristina; Rispoli, Marco; Savastano, Alfonso; Lumbroso, Bruno

    2015-05-01

    To assess posterior pole choroid patterns in healthy eyes using en face optical coherence tomography (OCT). This observational study included 154 healthy eyes of 77 patients who underwent en face OCT. The mean age of the patients was 31.2 years (standard deviation: 13 years); 40 patients were women, and 37 patients were men. En face imaging of the choroidal vasculature was assessed using an OCT Optovue RTVue (Optovue, Fremont, CA). To generate an appropriate choroid image, the best detectable vessels in Haller's layer below the retinal pigment epithelium surface parallel plane were selected. Images of diverse choroidal vessel patterns at the posterior pole were observed and recorded with en face OCT. Five different patterns of Haller's layer with different occurrences were assessed. Pattern 1 (temporal herringbone) represented 49.2%, pattern 2 (branched from below) and pattern 3 (laterally diagonal) represented 14.2%, pattern 4 (doubled arcuate) was observed in 11.9%, and pattern 5 (reticular feature) was observed in 10.5% of the reference plane. In vivo assessment of human choroid microvasculature in healthy eyes using en face OCT demonstrated five different patterns. The choroid vasculature pattern may play a role in the origin and development of neuroretinal pathologies, with potential importance in chorioretinal diseases and circulatory abnormalities. Copyright 2015, SLACK Incorporated.

  7. Face biometrics with renewable templates

    NASA Astrophysics Data System (ADS)

    van der Veen, Michiel; Kevenaar, Tom; Schrijen, Geert-Jan; Akkermans, Ton H.; Zuo, Fei

    2006-02-01

    In recent literature, privacy protection technologies for biometric templates were proposed. Among these is the so-called helper-data system (HDS) based on reliable component selection. In this paper we integrate this approach with face biometrics such that we achieve a system in which the templates are privacy protected, and multiple templates can be derived from the same facial image for the purpose of template renewability. Extracting binary feature vectors forms an essential step in this process. Using the FERET and Caltech databases, we show that this quantization step does not significantly degrade the classification performance compared to, for example, traditional correlation-based classifiers. The binary feature vectors are integrated in the HDS leading to a privacy protected facial recognition algorithm with acceptable FAR and FRR, provided that the intra-class variation is sufficiently small. This suggests that a controlled enrollment procedure with a sufficient number of enrollment measurements is required.
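
    The binary feature extraction this abstract describes can be sketched as quantizing each real-valued face feature against the population mean and keeping only "reliable" components that lie well away from it. The reliability threshold and feature values below are invented for illustration; they are not taken from the helper-data system itself.

```python
# Sketch of reliable-component binarization for a helper-data-style scheme:
# each feature becomes one bit (above/below the population mean), and
# components too close to the mean are dropped as unreliable. The indices of
# the kept components act as helper data stored alongside the template.

def binarize(features, pop_mean, reliability=0.2):
    """Return (bits, kept_indices) using reliable-component selection."""
    bits, kept = [], []
    for i, (f, m) in enumerate(zip(features, pop_mean)):
        if abs(f - m) >= reliability:        # component is reliably 0 or 1
            bits.append(1 if f > m else 0)
            kept.append(i)
    return bits, kept

pop_mean = [0.5, 0.5, 0.5, 0.5]
enroll = [0.9, 0.45, 0.1, 0.8]   # second component is too close to the mean
bits, kept = binarize(enroll, pop_mean)
print(bits, kept)  # [1, 0, 1] [0, 2, 3]
```

    Renewability comes from the fact that different component selections (or permutations) of the same facial image yield different binary templates, so a compromised template can be revoked and reissued.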

  8. Crystal face temperature determination means

    DOEpatents

    Nason, Donald O.; Burger, Arnold

    1994-01-01

    An optically transparent furnace (10) having a detection apparatus (29) with a pedestal (12) enclosed in an evacuated ampule (16) for growing a crystal (14) thereon. Temperature differential is provided by a source heater (20), a base heater (24) and a cold finger (26) such that material migrates from a polycrystalline source material (18) to grow the crystal (14). A quartz halogen lamp (32) projects a collimated beam (30) onto the crystal (14) and a reflected beam (34) is analyzed by a double monochromator and photomultiplier detection spectrometer (40) and the detected peak position (48) in the reflected energy spectrum (44) of the reflected beam (34) is interpreted to determine surface temperature of the crystal (14).

  9. The time course of face processing: startle eyeblink response modulation by face gender and expression.

    PubMed

    Duval, Elizabeth R; Lovelace, Christopher T; Aarant, Justin; Filion, Diane L

    2013-12-01

    The purpose of this study was to investigate the effects of both facial expression and face gender on startle eyeblink response patterns at varying lead intervals (300, 800, and 3500ms) indicative of attentional and emotional processes. We aimed to determine whether responses to affective faces map onto the Defense Cascade Model (Lang et al., 1997) to better understand the stages of processing during affective face viewing. At 300ms, there was an interaction between face expression and face gender with female happy and neutral faces and male angry faces producing inhibited startle. At 3500ms, there was a trend for facilitated startle during angry compared to neutral faces. These findings suggest that affective expressions are perceived differently in male and female faces, especially at short lead intervals. Future studies investigating face processing should take both face gender and expression into account. © 2013.

  10. In the face of fear: Anxiety sensitizes defensive responses to fearful faces

    PubMed Central

    Grillon, Christian; Charney, Danielle R.

    2011-01-01

    Fearful faces readily activate the amygdala. Yet, whether fearful faces evoke fear is unclear. Startle studies show no potentiation of startle by fearful faces, suggesting that such stimuli do not activate defense mechanisms. However, the response to biologically relevant stimuli may be sensitized by anxiety. The present study tested the hypothesis that startle would not be potentiated by fearful faces in a safe context, but that startle would be larger during fearful faces compared to neutral faces in a threat-of-shock context. Subjects viewed fearful and neutral faces in alternating periods of safety and threat of shock. Acoustic startle stimuli were presented in the presence and absence of the faces. Startle was transiently potentiated by fearful faces compared to neutral faces in the threat periods. This suggests that although fearful faces do not prompt behavioral mobilization in an innocuous context, they can do so in an anxiogenic one. PMID:21824155

  11. Visual Search Efficiency is Greater for Human Faces Compared to Animal Faces

    PubMed Central

    Simpson, Elizabeth A.; Mertins, Haley L.; Yee, Krysten; Fullerton, Alison; Jakobsen, Krisztina V.

    2015-01-01

    The Animate Monitoring Hypothesis proposes that humans and animals were the most important categories of visual stimuli for ancestral humans to monitor, as they presented important challenges and opportunities for survival and reproduction; however, it remains unknown whether animal faces are located as efficiently as human faces. We tested this hypothesis by examining whether human, primate, and mammal faces elicit similarly efficient searches, or whether human faces are privileged. In the first three experiments, participants located a target (human, primate, or mammal face) among distractors (non-face objects). We found fixations on human faces were faster and more accurate than primate faces, even when controlling for search category specificity. A final experiment revealed that, even when task-irrelevant, human faces slowed searches for non-faces, suggesting some bottom-up processing may be responsible for the human face search efficiency advantage. PMID:24962122

  12. Looking at My Own Face: Visual Processing Strategies in Self–Other Face Recognition

    PubMed Central

    Chakraborty, Anya; Chakrabarti, Bhismadev

    2018-01-01

    We live in an age of ‘selfies.’ Yet, how we look at our own faces has seldom been systematically investigated. In this study we test if the visual processing of the highly familiar self-face is different from other faces, using psychophysics and eye-tracking. This paradigm also enabled us to test the association between the psychophysical properties of self-face representation and visual processing strategies involved in self-face recognition. Thirty-three adults performed a self-face recognition task from a series of self-other face morphs with simultaneous eye-tracking. Participants were found to look longer at the lower part of the face for self-face compared to other-face. Participants with a more distinct self-face representation, as indexed by a steeper slope of the psychometric response curve for self-face recognition, were found to look longer at upper part of the faces identified as ‘self’ vs. those identified as ‘other’. This result indicates that self-face representation can influence where we look when we process our own vs. others’ faces. We also investigated the association of autism-related traits with self-face processing metrics since autism has previously been associated with atypical self-processing. The study did not find any self-face specific association with autistic traits, suggesting that autism-related features may be related to self-processing in a domain specific manner. PMID:29487554

  13. What’s in a Face? How Face Gender and Current Affect Influence Perceived Emotion

    PubMed Central

    Harris, Daniel A.; Hayes-Skelton, Sarah A.; Ciaramitaro, Vivian M.

    2016-01-01

    Faces drive our social interactions. A vast literature suggests an interaction between gender and emotional face perception, with studies using different methodologies demonstrating that the gender of a face can affect how emotions are processed. However, how different is our perception of affective male and female faces? Furthermore, how does our current affective state when viewing faces influence our perceptual biases? We presented participants with a series of faces morphed along an emotional continuum from happy to angry. Participants judged each face morph as either happy or angry. We determined each participant’s unique emotional ‘neutral’ point, defined as the face morph judged to be perceived equally happy and angry, separately for male and female faces. We also assessed how current state affect influenced these perceptual neutral points. Our results indicate that, for both male and female participants, the emotional neutral point for male faces is perceptually biased to be happier than for female faces. This bias suggests that more happiness is required to perceive a male face as emotionally neutral, i.e., we are biased to perceive a male face as more negative. Interestingly, we also find that perceptual biases in perceiving female faces are correlated with current mood, such that positive state affect correlates with perceiving female faces as happier, while we find no significant correlation between negative state affect and the perception of facial emotion. Furthermore, we find reaction time biases, with slower responses for angry male faces compared to angry female faces. PMID:27733839
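
    The per-participant emotional "neutral point" described above is the morph level at which "happy" and "angry" judgments are equally likely (the point of subjective equality). A minimal sketch, with invented response rates, estimates it by linear interpolation of the measured psychometric data:

```python
# Find the morph level where P(respond "happy") crosses 50% by linear
# interpolation between adjacent measured points on the psychometric curve.

def neutral_point(morph_levels, p_happy, target=0.5):
    """Interpolate the morph level where P('happy') crosses `target`."""
    pts = list(zip(morph_levels, p_happy))
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if (y0 - target) * (y1 - target) <= 0:  # crossing in this interval
            return x0 + (target - y0) * (x1 - x0) / (y1 - y0)
    return None  # curve never crosses the target rate

levels = [0, 25, 50, 75, 100]            # 0 = fully angry ... 100 = fully happy
p_happy = [0.02, 0.10, 0.40, 0.80, 0.98] # invented response rates
print(neutral_point(levels, p_happy))    # close to 56.25
```

    A neutral point above 50 on this scale means extra "happiness" is needed before the face is judged neutral, i.e., a bias toward perceiving the face as negative, which is the direction of the bias the study reports for male faces.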

  14. A survey of the dummy face and human face stimuli used in BCI paradigm.

    PubMed

    Chen, Long; Jin, Jing; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2015-01-15

    It has been shown that human face stimuli are superior to flash-only stimuli in BCI systems. However, human face stimuli may lead to copyright infringement problems and are hard to edit to the requirements of a BCI study. Recently, it was reported that facial expression changes could be produced by changing a curve in a dummy face, which obtained good performance when applied to visual P300-based BCI systems. In this paper, four different paradigms are presented, called the dummy face pattern, human face pattern, inverted dummy face pattern, and inverted human face pattern, to evaluate the performance of dummy face stimuli compared with human face stimuli. The key question determining the value of dummy faces in BCI systems is whether dummy face stimuli can obtain performance as good as that of human face stimuli. Online and offline results for the four paradigms were obtained and comparatively analyzed. They showed no significant difference between dummy faces and human faces in ERPs, classification accuracy, or information transfer rate when applied in BCI systems. Dummy face stimuli evoked large ERPs and obtained classification accuracy and information transfer rates as high as those of human face stimuli. Since dummy faces are easy to edit and carry no copyright infringement problems, they are a good choice for optimizing the stimuli of BCI systems. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Aesthetic strategies for the aging Asian face.

    PubMed

    Lam, Samuel M

    2007-08-01

    Aging that manifests in the Asian face is remarkably different, yet in many ways similar, to that of the white face. These dissimilarities and similarities are highlighted in this article along with overall strategies to approach the aging Asian face. This article focuses almost exclusively on the judgment and thinking that are required when approaching the Asian patient. More specifically, one issue that is covered is the cultural aspect that pertains to patient motivation and perspectives on cosmetic enhancement. The other equally important aspect that is addressed is elaboration of a new paradigm on what constitutes a youthful face, especially as that model relates to the Asian face.

  16. A Survey of nearby, nearly face-on spiral galaxies

    NASA Astrophysics Data System (ADS)

    Garmire, Gordon

    2014-09-01

    This is a continuation of a survey of nearby, nearly face-on spiral galaxies. The main purpose is to search for evidence of collisions with small galaxies, which show up in X-rays through the hot shocked gas generated by the collision. Secondary objectives include studying the spatial distribution of point sources in each galaxy and detecting evidence for a central massive black hole. These are alternate targets.

  18. The Hierarchical Brain Network for Face Recognition

    PubMed Central

    Zhen, Zonglei; Fang, Huizhen; Liu, Jia

    2013-01-01

    Numerous functional magnetic resonance imaging (fMRI) studies have identified multiple cortical regions that are involved in face processing in the human brain. However, few studies have characterized the face-processing network as a functioning whole. In this study, we used fMRI to identify face-selective regions in the entire brain and then explore the hierarchical structure of the face-processing network by analyzing functional connectivity among these regions. We identified twenty-five regions mainly in the occipital, temporal and frontal cortex that showed a reliable response selective to faces (versus objects) across participants and across scan sessions. Furthermore, these regions were clustered into three relatively independent sub-networks in a face-recognition task on the basis of the strength of functional connectivity among them. The functionality of the sub-networks likely corresponds to the recognition of individual identity, retrieval of semantic knowledge and representation of emotional information. Interestingly, when the task was switched to object recognition from face recognition, the functional connectivity between the inferior occipital gyrus and the rest of the face-selective regions were significantly reduced, suggesting that this region may serve as an entry node in the face-processing network. In sum, our study provides empirical evidence for cognitive and neural models of face recognition and helps elucidate the neural mechanisms underlying face recognition at the network level. PMID:23527282

  20. The Functional Neuroanatomy of Human Face Perception.

    PubMed

    Grill-Spector, Kalanit; Weiner, Kevin S; Kay, Kendrick; Gomez, Jesse

    2017-09-15

    Face perception is critical for normal social functioning and is mediated by a network of regions in the ventral visual stream. In this review, we describe recent neuroimaging findings regarding the macro- and microscopic anatomical features of the ventral face network, the characteristics of white matter connections, and basic computations performed by population receptive fields within face-selective regions composing this network. We emphasize the importance of the neural tissue properties and white matter connections of each region, as these anatomical properties may be tightly linked to the functional characteristics of the ventral face network. We end by considering how empirical investigations of the neural architecture of the face network may inform the development of computational models and shed light on how computations in the face network enable efficient face perception.

  1. Aging and attentional biases for emotional faces.

    PubMed

    Mather, Mara; Carstensen, Laura L

    2003-09-01

    We examined age differences in attention to and memory for faces expressing sadness, anger, and happiness. Participants saw a pair of faces, one emotional and one neutral, and then a dot probe that appeared in the location of one of the faces. In two experiments, older adults responded faster to the dot if it was presented on the same side as a neutral face than if it was presented on the same side as a negative face. Younger adults did not exhibit this attentional bias. Interactions of age and valence were also found for memory for the faces, with older adults remembering positive better than negative faces. These findings reveal that in their initial attention, older adults avoid negative information. This attentional bias is consistent with older adults' generally better emotional well-being and their tendency to remember negative less well than positive information.

  2. Meta-analytic review of the development of face discrimination in infancy: Face race, face gender, infant age, and methodology moderate face discrimination.

    PubMed

    Sugden, Nicole A; Marquis, Alexandra R

    2017-11-01

    Infants show facility for discriminating between individual faces within hours of birth. Over the first year of life, infants' face discrimination shows continued improvement with familiar face types, such as own-race faces, but not with unfamiliar face types, like other-race faces. The goal of this meta-analytic review is to provide an effect size for infants' face discrimination ability overall, with own-race faces, and with other-race faces within the first year of life, to examine how this ability differs with age, and to assess how it is influenced by task methodology. Inclusion criteria were (a) infant participants aged 0 to 12 months, (b) completing a human own- or other-race face discrimination task, (c) with discrimination being determined by infant looking. Our analysis included 30 works (165 samples; 1,926 participants completed 2,623 tasks). The effect size for infants' face discrimination was small, 6.53% greater than chance (i.e., equal looking to the novel and familiar). There was a significant difference in discrimination by race, overall (own-race, 8.18%; other-race, 3.18%) and between ages (own-race: 0- to 4.5-month-olds, 7.32%; 5- to 7.5-month-olds, 9.17%; and 8- to 12-month-olds, 7.68%; other-race: 0- to 4.5-month-olds, 6.12%; 5- to 7.5-month-olds, 3.70%; and 8- to 12-month-olds, 2.79%). Multilevel linear (mixed-effects) models were used to predict face discrimination; infants' capacity to discriminate faces is sensitive to face characteristics including race, gender, and emotion as well as the methods used, including task timing, coding method, and visual angle. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
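    The pooled effect sizes reported above come from multilevel mixed-effects models; the simplest fixed-effect version of such pooling is an inverse-variance weighted mean. A minimal sketch (the per-study effect sizes and standard errors below are made up for illustration, not taken from the meta-analysis):

```python
import math

# Hypothetical per-study effect sizes (% looking above chance) and
# standard errors -- illustrative values only.
studies = [
    (8.2, 2.0),
    (3.1, 1.5),
    (6.5, 2.5),
]

# Fixed-effect pooling: weight each study by the inverse of its variance,
# so more precise studies contribute more to the pooled estimate.
weights = [1.0 / se ** 2 for _, se in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"pooled effect = {pooled:.2f} +/- {1.96 * pooled_se:.2f}")
```

A random-effects or multilevel model, as used in the review, additionally accounts for between-study (and within-work, between-sample) variance, but the weighting idea is the same.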

  3. A smart technique for attendance system to recognize faces through parallelism

    NASA Astrophysics Data System (ADS)

    Prabhavathi, B.; Tanuja, V.; Madhu Viswanatham, V.; Rajashekhara Babu, M.

    2017-11-01

    A major part of recognizing a person is the face, and with image processing techniques we can exploit a person's physical features. In the traditional approach used in schools and colleges, the professor calls each student's name and marks the attendance manually. In this paper we deviate from that approach and instead use image processing techniques to mark attendance automatically. First, an image of the classroom is captured and stored in a data record. To the stored images we apply a processing pipeline that includes histogram classification, noise removal, face detection, and face recognition. The detected faces are then compared against the database, and attendance is marked automatically when the system recognizes a face.
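    Face-detection stages like the one in this pipeline are typically built on Viola-Jones-style Haar features, which can be evaluated in constant time using an integral image (summed-area table). A minimal pure-Python sketch of that building block (the tiny "image" is invented for illustration):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y][x] = row + (ii[y - 1][x] if y else 0)
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of pixels in the inclusive rectangle, computed in O(1)."""
    total = ii[bottom][right]
    if top:
        total -= ii[top - 1][right]
    if left:
        total -= ii[bottom][left - 1]
    if top and left:
        total += ii[top - 1][left - 1]
    return total

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 2, 2))  # 5 + 6 + 8 + 9 = 28
```

A Haar feature is just a difference of two or three such rectangle sums, which is why a cascade can scan a whole classroom image quickly.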

  4. Tracking the truth: the effect of face familiarity on eye fixations during deception.

    PubMed

    Millen, Ailsa E; Hope, Lorraine; Hillstrom, Anne P; Vrij, Aldert

    2017-05-01

    In forensic investigations, suspects sometimes conceal recognition of a familiar person to protect co-conspirators or hide knowledge of a victim. The current experiment sought to determine whether eye fixations could be used to identify memory of known persons when lying about recognition of faces. Participants' eye movements were monitored whilst they lied and told the truth about recognition of faces that varied in familiarity (newly learned, famous celebrities, personally known). Memory detection by eye movements during recognition of personally familiar and famous celebrity faces was negligibly affected by lying, thereby demonstrating that detection of memory during lies is influenced by the prior learning of the face. By contrast, eye movements did not reveal lies robustly for newly learned faces. These findings support the use of eye movements as markers of memory during concealed recognition but also suggest caution when familiarity is only a consequence of one brief exposure.

  5. Face-Likeness and Image Variability Drive Responses in Human Face-Selective Ventral Regions

    PubMed Central

    Davidenko, Nicolas; Remus, David A.; Grill-Spector, Kalanit

    2012-01-01

    The human ventral visual stream contains regions that respond selectively to faces over objects. However, it is unknown whether responses in these regions correlate with how face-like stimuli appear. Here, we use parameterized face silhouettes to manipulate the perceived face-likeness of stimuli and measure responses in face- and object-selective ventral regions with high-resolution fMRI. We first use “concentric hyper-sphere” (CH) sampling to define face silhouettes at different distances from the prototype face. Observers rate the stimuli as progressively more face-like the closer they are to the prototype face. Paradoxically, responses in both face- and object-selective regions decrease as face-likeness ratings increase. Because CH sampling produces blocks of stimuli whose variability is negatively correlated with face-likeness, this effect may be driven by more adaptation during high face-likeness (low-variability) blocks than during low face-likeness (high-variability) blocks. We tested this hypothesis by measuring responses to matched-variability (MV) blocks of stimuli with similar face-likeness ratings as with CH sampling. Critically, under MV sampling, we find a face-specific effect: responses in face-selective regions gradually increase with perceived face-likeness, but responses in object-selective regions are unchanged. Our studies provide novel evidence that face-selective responses correlate with the perceived face-likeness of stimuli, but this effect is revealed only when image variability is controlled across conditions. Finally, our data show that variability is a powerful factor that drives responses across the ventral stream. This indicates that controlling variability across conditions should be a critical tool in future neuroimaging studies of face and object representation. PMID:21823208

  6. Near-infrared face recognition utilizing OpenCV software

    NASA Astrophysics Data System (ADS)

    Sellami, Louiza; Ngo, Hau; Fowler, Chris J.; Kearney, Liam M.

    2014-06-01

    Commercially available hardware, freely available algorithms, and software developed by the authors are successfully combined to detect and recognize subjects in an environment without visible light. This project integrates three major components: an illumination device operating in the near-infrared (NIR) spectrum, a NIR-capable camera, and a software algorithm capable of performing image manipulation, facial detection and recognition. Focusing our efforts in the near-infrared spectrum allows the low-budget system to operate covertly while still allowing for accurate face recognition. In doing so, a valuable capability has been developed that presents potential benefits in future civilian and military security and surveillance operations.
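    Systems like this typically normalize the contrast of the grayscale NIR frame before running a cascade detector on it. A minimal pure-Python sketch of histogram equalization, assuming 8-bit grayscale input (the tiny low-contrast patch is invented; this illustrates the preprocessing idea, not the authors' actual software):

```python
def equalize(pixels, levels=256):
    """Histogram-equalize a flat list of 8-bit grayscale pixel values."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution, then remap each pixel through it.
    cdf, running = [0] * levels, 0
    for i, h in enumerate(hist):
        running += h
        cdf[i] = running
    cdf_min = next(c for c in cdf if c)  # first nonzero CDF value
    n = len(pixels)
    scale = (levels - 1) / (n - cdf_min) if n > cdf_min else 0
    return [round((cdf[p] - cdf_min) * scale) for p in pixels]

# Low-contrast patch: values clustered in a narrow band.
patch = [100, 100, 101, 102, 102, 103, 104, 104, 105]
print(equalize(patch))  # values spread across the full 0-255 range
```

In an OpenCV-based pipeline this step would usually be a single library call on the frame before face detection; the point here is what that normalization does.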

  7. Divided by a Common Degree Program? Profiling Online and Face-to-Face Information Science Students

    ERIC Educational Resources Information Center

    Haigh, Maria

    2007-01-01

    This study examines profiles of online and face-to-face students in a single information science school: the University of Wisconsin-Milwaukee School of Information Studies. A questionnaire was administered to 76 students enrolled in online course sections and 72 students enrolled in face-to-face course sections. The questionnaire examined student…

  8. Web-Based vs. Face-to-Face MBA Classes: A Comparative Assessment Study

    ERIC Educational Resources Information Center

    Brownstein, Barry; Brownstein, Deborah; Gerlowski, Daniel A.

    2008-01-01

    The challenges of online learning include ensuring that the learning outcomes are at least as robust as in the face-to-face sections of the same course. At the University of Baltimore, both online sections and face-to-face sections of core MBA courses are offered. Once admitted to the MBA, students are free to enroll in any combination of…

  9. Face Aftereffects Indicate Dissociable, but Not Distinct, Coding of Male and Female Faces

    ERIC Educational Resources Information Center

    Jaquet, Emma; Rhodes, Gillian

    2008-01-01

    It has been claimed that exposure to distorted faces of one sex induces perceptual aftereffects for test faces that are of the same sex, but not for test faces of the other sex (A. C. Little, L. M. DeBruine, & B. C. Jones, 2005). This result suggests that male and female faces have separate neural coding. Given the high degree of visual similarity…

  10. Faces forming traces: neurophysiological correlates of learning naturally distinctive and caricatured faces.

    PubMed

    Schulz, Claudia; Kaufmann, Jürgen M; Kurt, Alexander; Schweinberger, Stefan R

    2012-10-15

    Distinctive faces are easier to learn and recognise than typical faces. We investigated effects of natural vs. artificial distinctiveness on performance and neural correlates of face learning. Spatial caricatures of initially non-distinctive faces were created such that their rated distinctiveness matched a set of naturally distinctive faces. During learning, we presented naturally distinctive, caricatured, and non-distinctive faces for later recognition among novel faces, using different images of the same identities at learning and test. For learned faces, an advantage in performance was observed for naturally distinctive and caricatured over non-distinctive faces, with larger benefits for naturally distinctive faces. Distinctive and caricatured faces elicited more negative occipitotemporal ERPs (P200, N250) and larger centroparietal positivity (LPC) during learning. At test, earliest distinctiveness effects were again seen in the P200. In line with recent research, N250 and LPC were larger for learned than for novel faces overall. Importantly, whereas left hemispheric N250 was increased for learned naturally distinctive faces, right hemispheric N250 responded particularly to caricatured novel faces. We conclude that natural distinctiveness induces benefits to face recognition beyond those induced by exaggeration of a face's idiosyncratic shape, and that the left hemisphere in particular may mediate recognition across different images. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. Examining the Roles of the Facilitator in Online and Face-to-Face PD Contexts

    ERIC Educational Resources Information Center

    Park, Gina; Johnson, Heather; Vath, Richard; Kubitskey, Beth; Fishman, Barry

    2013-01-01

    Online teacher professional development has become an alternative to face-to-face professional development. Such a shift from face-to-face to online professional development, however, brings new challenges for professional development facilitators, whose roles are crucial in orchestrating teacher learning. This paper is motivated by the need to…

  12. Moodle: A Way for Blending VLE and Face-to-Face Instruction in the ELT Context?

    ERIC Educational Resources Information Center

    Ilin, Gulden

    2013-01-01

    This classroom research explores the probable consequences of a blended Teaching English to Young Learners (TEYLs) course comprised of Moodle applications and face to face instruction in the English Language Teaching (ELT) context. Contrary to previous face to face only procedure, the course was divided into two segments: traditional classroom…

  13. "No Significant Distance" between Face-to-Face and Online Instruction: Evidence from Principles of Economics

    ERIC Educational Resources Information Center

    Coates, Dennis; Humphreys, Brad, R.; Kane, John; Vachris, Michelle, A.

    2004-01-01

    This paper describes an experiment focused on measuring and explaining differences in student learning between online and face-to-face modes of instruction in college level principles of economics courses. Our results indicate that students in face-to-face sections scored better on the Test of Understanding College Economics (TUCE) than students…

  14. The Use of Computer-Mediated Communication To Enhance Subsequent Face-to-Face Discussions.

    ERIC Educational Resources Information Center

    Dietz-Uhler, Beth; Bishop-Clark, Cathy

    2001-01-01

    Describes a study of undergraduate students that assessed the effects of synchronous (Internet chat) and asynchronous (Internet discussion board) computer-mediated communication on subsequent face-to-face discussions. Results showed that face-to-face discussions preceded by computer-mediated communication were perceived to be more enjoyable.…

  15. Comparing Student Outcomes in Blended and Face-to-Face Courses

    ERIC Educational Resources Information Center

    Roscoe, Douglas D.

    2012-01-01

    This article reports on a study of student outcomes in a pair of matched courses, one taught face-to-face and one taught in a blended format, in which students completed most of the work online but met several times face-to-face. Learning objectives, course content, and pedagogical approaches were identical but the mode of instruction was…

  16. The complex duration perception of emotional faces: effects of face direction.

    PubMed

    Kliegl, Katrin M; Limbrecht-Ecklundt, Kerstin; Dürr, Lea; Traue, Harald C; Huckauf, Anke

    2015-01-01

    The perceived duration of emotional face stimuli strongly depends on the expressed emotion. But emotional faces also differ regarding a number of other features like gaze, face direction, or sex. Usually, these features have been controlled by only using pictures of female models with straight gaze and face direction. Doi and Shinohara (2009) reported that an overestimation of angry faces could only be found when the model's gaze was oriented toward the observer. We aimed at replicating this effect for face direction. Moreover, we explored the effect of face direction on the duration perception of sad faces. Controlling for the sex of the face model and the participant, female and male participants rated the duration of neutral, angry, and sad face stimuli of both sexes photographed from different perspectives in a bisection task. In line with current findings, we report a significant overestimation of angry compared to neutral face stimuli that was modulated by face direction. Moreover, the perceived duration of sad face stimuli did not differ from that of neutral faces and was not influenced by face direction. Furthermore, we found that faces of the opposite sex appeared to last longer than those of the same sex. This outcome is discussed with regard to stimulus parameters like the induced arousal, social relevance, and an evolutionary context.

  17. Mapping attractor fields in face space: the atypicality bias in face recognition.

    PubMed

    Tanaka, J; Giles, M; Kremen, S; Simon, V

    1998-09-01

    A familiar face can be recognized across many changes in the stimulus input. In this research, the many-to-one mapping of face stimuli to a single face memory is referred to as a face memory's 'attractor field'. According to the attractor field approach, a face memory will be activated by any stimuli falling within the boundaries of its attractor field. It was predicted that, by virtue of its location in a multi-dimensional face space, the attractor field of an atypical face will be larger than the attractor field of a typical face. To test this prediction, subjects made likeness judgments to morphed faces that contained a 50/50 contribution from an atypical and a typical parent face. The main result of four experiments was that the morph face was judged to bear a stronger resemblance to the atypical face parent than to the typical face parent. The computational basis of the atypicality bias was demonstrated in a neural network simulation where morph inputs of atypical and typical representations elicited stronger activation of atypical output units than of typical output units. Together, the behavioral and simulation evidence supports the view that the attractor fields of atypical faces span a broader region of face space than the attractor fields of typical faces.

  18. On the facilitative effects of face motion on face recognition and its development

    PubMed Central

    Xiao, Naiqi G.; Perrotta, Steve; Quinn, Paul C.; Wang, Zhe; Sun, Yu-Hao P.; Lee, Kang

    2014-01-01

    For the past century, researchers have extensively studied human face processing and its development. These studies have advanced our understanding of not only face processing, but also visual processing in general. However, most of what we know about face processing was investigated using static face images as stimuli. Therefore, an important question arises: to what extent does our understanding of static face processing generalize to face processing in real-life contexts in which faces are mostly moving? The present article addresses this question by examining recent studies on moving face processing to uncover the influence of facial movements on face processing and its development. First, we describe evidence on the facilitative effects of facial movements on face recognition and two related theoretical hypotheses: the supplementary information hypothesis and the representation enhancement hypothesis. We then highlight several recent studies suggesting that facial movements optimize face processing by activating specific face processing strategies that accommodate to task requirements. Lastly, we review the influence of facial movements on the development of face processing in the first year of life. We focus on infants' sensitivity to facial movements and explore the facilitative effects of facial movements on infants' face recognition performance. We conclude by outlining several future directions to investigate moving face processing and emphasize the importance of including dynamic aspects of facial information to further understand face processing in real-life contexts. PMID:25009517

  19. The Impact of Face-to-Face Orientation on Online Retention: A Pilot Study

    ERIC Educational Resources Information Center

    Ali, Radwan; Leeds, Elke M.

    2009-01-01

    Student retention in online education is a concern for students, faculty and administration. Retention rates are 20% lower in online courses than in traditional face-to-face courses. As part of an integration and engagement strategy, a face-to-face orientation was added to an online undergraduate business information systems course to examine its…

  20. Learning Management Systems in Traditional Face-to-Face Courses: A Narrative Inquiry Study

    ERIC Educational Resources Information Center

    Washington, Gloria

    2017-01-01

    The purpose of the qualitative narrative inquiry study was to explore accounts of individual higher education instructors' experiences utilizing LMSs as a potential platform for teaching and learning in the traditional face-to-face classroom environment. The pedagogical use of LMSs in traditional face-to-face courses from real life experiences of…

  1. An Adult Face Bias in Infants That is Modulated by Face Race

    ERIC Educational Resources Information Center

    Heron-Delaney, Michelle; Damon, Fabrice; Quinn, Paul C.; Méary, David; Xiao, Naiqi G.; Lee, Kang; Pascalis, Olivier

    2017-01-01

    The visual preferences of infants for adult versus infant faces were investigated. Caucasian 3.5- and 6-month-olds were presented with Caucasian adult vs. infant face pairs and Asian adult vs. infant face pairs, in both upright and inverted orientations. Both age groups showed a visual preference for upright adult over infant faces when the faces…

  2. Face to Face or E-Learning in Turkish EFL Context

    ERIC Educational Resources Information Center

    Solak, Ekrem; Cakir, Recep

    2014-01-01

    The purpose of this study was to understand e-learners' and face-to-face learners' views toward learning English through e-learning in a vocational higher school context and to determine the role of academic achievement and gender in e-learning and face-to-face learning. This study was conducted at a state-run university in the 2012-2013 academic year…

  3. Choosing between Online and Face-to-Face Courses: Community College Student Voices

    ERIC Educational Resources Information Center

    Jaggars, Shanna Smith

    2014-01-01

    In this study, community college students discussed their experiences with online and face-to-face learning as well as their reasons for selecting online (rather than face-to-face) sections of specific courses. Students reported lower levels of instructor presence in online courses and that they needed to "teach themselves." Accordingly,…

  4. Highlights from a Literature Review Prepared for the Face to Face Research Project

    ERIC Educational Resources Information Center

    National Literacy Trust, 2010

    2010-01-01

    Between March 2009 and March 2011, Talk To Your Baby has been engaged in a research project, under the title of Face to Face, to identify key messages for parents and carers in relation to communicating with babies and young children, and has examined the most effective ways to promote these messages to parents and carers. The Face to Face project…

  5. Do Young Infants Prefer an Infant-Directed Face or a Happy Face?

    ERIC Educational Resources Information Center

    Kim, Hojin I.; Johnson, Scott P.

    2013-01-01

    Infants' visual preference for infant-directed (ID) faces over adult-directed (AD) faces was examined in two experiments that introduced controls for emotion. Infants' eye movements were recorded as they viewed a series of side-by-side dynamic faces. When emotion was held constant, 6-month-old infants showed no preference for ID faces over AD…

  6. Sponge systematics facing new challenges.

    PubMed

    Cárdenas, P; Pérez, T; Boury-Esnault, N

    2012-01-01

    Systematics is nowadays facing new challenges with the introduction of new concepts and new techniques. Compared to most other phyla, phylogenetic relationships among sponges are still largely unresolved. In the past 10 years, the classical taxonomy has been completely overturned and a review of the state of the art appears necessary. The field of taxonomy remains a prominent discipline of sponge research, and studies related to sponge systematics were in greater number at the Eighth World Sponge Conference (Girona, Spain, September 2010) than at any previous world sponge conference. To understand the state of this rapidly growing field, this chapter proposes to review studies, mainly from the past decade, in sponge taxonomy, nomenclature and phylogeny. In the first part, we analyse the reasons for the current success of this field. In the second part, we establish the current theoretical framework of sponge systematics, with the use of (1) cladistics, (2) different codes of nomenclature (PhyloCode vs. Linnaean system) and (3) integrative taxonomy. Sponges are infamous for their lack of characters. However, by listing and discussing in the third part all characters available to taxonomists, we show how diverse characters are and that new ones are being used and tested, while old ones should be revisited. We then review the systematics of the four main classes of sponges (Hexactinellida, Calcispongiae, Homoscleromorpha and Demospongiae), each time focusing on current issues and case studies. We present a review of the taxonomic changes since the publication of the Systema Porifera (2002), and point to problems a sponge taxonomist is still faced with nowadays. To conclude, we make a series of proposals for the future of sponge systematics. In the light of recent studies, we establish a series of taxonomic changes that the sponge community may be ready to accept. We also propose a series of new sponge names and definitions following the PhyloCode.
The issue of phantom species

  7. Lost in Translation: Adapting a Face-to-Face Course Into an Online Learning Experience.

    PubMed

    Kenzig, Melissa J

    2015-09-01

    Online education has grown dramatically over the past decade. Instructors who teach face-to-face courses are being called on to adapt their courses to the online environment. Many instructors do not have sufficient training to be able to effectively move courses to an online format. This commentary discusses the growth of online learning, common challenges faced by instructors adapting courses from face-to-face to online, and best practices for translating face-to-face courses into online learning opportunities. © 2015 Society for Public Health Education.

  8. Image-Based 3D Face Modeling System

    NASA Astrophysics Data System (ADS)

    Park, In Kyu; Zhang, Hui; Vezhnevets, Vladimir

    2005-12-01

    This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems including frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eye, nose, mouth, and ear. The shape deformation module utilizes the detected features to deform the generic head mesh model such that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images and the synthesized texture and mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling, which is highly automated by aggregating, customizing, and optimizing a bunch of individual computer vision algorithms. The experimental results show a highly automated process of modeling, which is sufficiently robust to various imaging conditions. The whole model creation including all the optional manual corrections takes only 2 to 3 minutes.

  9. Differential involvement of episodic and face representations in ERP repetition effects.

    PubMed

    Jemel, Boutheina; Calabria, Marco; Delvenne, Jean-François; Crommelinck, Marc; Bruyer, Raymond

    2003-03-03

    The purpose of this study was to disentangle the contribution of episodic-perceptual representations from that of pre-existing memory representations of faces to repetition effects. ERPs were recorded to first and second presentations of same and different photos of famous and unfamiliar faces, in an incidental task where occasional non-targets had to be detected. Repetition of same and different photos of famous faces resulted in an N400 amplitude decrement. No such N400 repetition-induced attenuation was observed for unfamiliar faces. In addition, repetition of same photos of faces, and not different ones, gave rise to an early ERP repetition effect (starting at approximately 350 ms) with an occipito-temporal scalp distribution. Together, these results suggest that repetition effects depend on two temporally, and possibly neuro-functionally, distinct loci: an episode-based representation and face recognition units stored in long-term memory.

  10. Typical and atypical neurodevelopment for face specialization: An fMRI study

    PubMed Central

    Joseph, Jane E.; Zhu, Xun; Gundran, Andrew; Davies, Faraday; Clark, Jonathan D.; Ruble, Lisa; Glaser, Paul; Bhatt, Ramesh S.

    2014-01-01

    Individuals with Autism Spectrum Disorder (ASD) and their relatives process faces differently from typically developed (TD) individuals. In an fMRI face-viewing task, TD and undiagnosed sibling (SIB) children (5–18 years) showed face specialization in the right amygdala and ventromedial prefrontal cortex (vmPFC), with left fusiform and right amygdala face specialization increasing with age in TD subjects. SIBs showed extensive antero-medial temporal lobe activation for faces that was not present in any other group, suggesting a potential compensatory mechanism. In ASD, face specialization was minimal but increased with age in the right fusiform and decreased with age in the left amygdala, suggesting atypical development of a frontal-amygdala-fusiform system which is strongly linked to detecting salience and processing facial information. PMID:25479816

  11. A system for tracking and recognizing pedestrian faces using a network of loosely coupled cameras

    NASA Astrophysics Data System (ADS)

    Gagnon, L.; Laliberté, F.; Foucher, S.; Branzan Albu, A.; Laurendeau, D.

    2006-05-01

    A face recognition module has been developed for an intelligent multi-camera video surveillance system. The module can recognize a pedestrian face in terms of six basic emotions and the neutral state. Face and facial feature detection (eyes, nasal root, nose, and mouth) is first performed using cascades of boosted classifiers. These features are used to normalize the pose and dimensions of the face image. Gabor filters are then sampled on a regular grid covering the face image to build a facial feature vector that feeds a nearest neighbor classifier with a cosine distance similarity measure for facial expression interpretation and face model construction. A graphical user interface allows the user to adjust the module parameters.
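The feature-extraction and matching stages described here (Gabor responses sampled on a regular grid, fed to a nearest neighbor classifier with cosine similarity) can be illustrated with a minimal sketch. This is not the authors' code: the kernel parameters, grid step, image size, and synthetic stripe patterns below are assumptions, and the boosted-cascade detection and pose-normalization steps are omitted.

```python
import numpy as np

def gabor_kernel(size, theta, sigma=4.0, lambd=8.0, gamma=0.5):
    # Real part of a Gabor filter at orientation theta (illustrative parameters).
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / lambd)

def gabor_features(face, grid_step=8, ksize=9, n_orient=4):
    # Sample filter responses on a regular grid covering the face image.
    kernels = [gabor_kernel(ksize, np.pi * i / n_orient) for i in range(n_orient)]
    half = ksize // 2
    feats = []
    for k in kernels:
        for r in range(half, face.shape[0] - half, grid_step):
            for c in range(half, face.shape[1] - half, grid_step):
                patch = face[r - half:r + half + 1, c - half:c + half + 1]
                feats.append(float(np.sum(patch * k)))
    return np.asarray(feats)

def nearest_neighbor_cosine(query, gallery):
    # Index of the gallery feature vector most similar to the query (cosine).
    sims = [np.dot(query, g) / (np.linalg.norm(query) * np.linalg.norm(g) + 1e-12)
            for g in gallery]
    return int(np.argmax(sims))
```

With two synthetic striped "faces" as gallery entries, a noisy copy of the first is matched back to it by the cosine nearest neighbor.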

  12. Early (N170) activation of face-specific cortex by face-like objects

    PubMed Central

    Hadjikhani, Nouchine; Kveraga, Kestutis; Naik, Paulami; Ahlfors, Seppo P.

    2009-01-01

    The tendency to perceive faces in random patterns exhibiting the configural properties of faces is an example of pareidolia. Perception of ‘real’ faces has been associated with a cortical response signal arising at about 170 ms after stimulus onset; but what happens when non-face objects are perceived as faces? Using magnetoencephalography (MEG), we found that objects incidentally perceived as faces evoked an early (165 ms) activation in the ventral fusiform cortex, at a time and location similar to that evoked by faces, whereas common objects did not evoke such activation. An earlier peak at 130 ms was seen for images of real faces only. Our findings suggest that face perception evoked by face-like objects is a relatively early process, not a late cognitive reinterpretation. PMID:19218867

  13. Facebook and MySpace: complement or substitute for face-to-face interaction?

    PubMed

    Kujath, Carlyne L

    2011-01-01

    Previous studies have claimed that social-networking sites are used as a substitute for face-to-face interaction, resulting in deteriorating relationship quality and decreased intimacy among its users. The present study hypothesized that this type of communication is not a substitute for face-to-face interaction; rather, that it is an extension of communication with face-to-face partners. A survey was administered to examine the use of Facebook and MySpace in this regard among 183 college students. The study confirmed that Facebook and MySpace do act as an extension of face-to-face interaction, but that some users do tend to rely on Facebook and MySpace for interpersonal communication more than face-to-face interaction.

  14. Neonatal face-to-face interactions promote later social behaviour in infant rhesus monkeys

    PubMed Central

    Dettmer, Amanda M.; Kaburu, Stefano S. K.; Simpson, Elizabeth A.; Paukner, Annika; Sclafani, Valentina; Byers, Kristen L.; Murphy, Ashley M.; Miller, Michelle; Marquez, Neal; Miller, Grace M.; Suomi, Stephen J.; Ferrari, Pier F.

    2016-01-01

    In primates, including humans, mothers engage in face-to-face interactions with their infants, with frequencies varying both within and across species. However, the impact of this variation in face-to-face interactions on infant social development is unclear. Here we report that infant monkeys (Macaca mulatta) who engaged in more neonatal face-to-face interactions with mothers have increased social interactions at 2 and 5 months. In a controlled experiment, we show that this effect is not due to physical contact alone: monkeys randomly assigned to receive additional neonatal face-to-face interactions (mutual gaze and intermittent lip-smacking) with human caregivers display increased social interest at 2 months, compared with monkeys who received only additional handling. These studies suggest that face-to-face interactions from birth promote young primate social interest and competency. PMID:27300086

  15. What Faces Reveal: A Novel Method to Identify Patients at Risk of Deterioration Using Facial Expressions.

    PubMed

    Madrigal-Garcia, Maria Isabel; Rodrigues, Marcos; Shenfield, Alex; Singer, Mervyn; Moreno-Cuesta, Jeronimo

    2018-07-01

    To identify facial expressions occurring in patients at risk of deterioration in hospital wards. Prospective observational feasibility study. General ward patients in a London Community Hospital, United Kingdom. Thirty-four patients at risk of clinical deterioration. A 5-minute video (25 frames/s; 7,500 images) was recorded, encrypted, and subsequently analyzed for action units by a trained facial action coding system psychologist blinded to outcome. Action units of the upper face, head position, eyes position, lips and jaw position, and lower face were analyzed in conjunction with clinical measures collected within the National Early Warning Score. The most frequently detected action units were action unit 43 (73%) for upper face, action unit 51 (11.7%) for head position, action unit 62 (5.8%) for eyes position, action unit 25 (44.1%) for lips and jaw, and action unit 15 (67.6%) for lower face. The presence of certain combined face displays was increased in patients requiring admission to intensive care, namely, action units 43 + 15 + 25 (face display 1, p < 0.013), action units 43 + 15 + 51/52 (face display 2, p < 0.003), and action units 43 + 15 + 51 + 25 (face display 3, p < 0.002). Having face display 1, face display 2, and face display 3 increased the risk of being admitted to intensive care eight-fold, 18-fold, and as a sure event, respectively. A logistic regression model with face display 1, face display 2, face display 3, and National Early Warning Score as independent covariates described admission to intensive care with an average concordance statistic (C-index) of 0.71 (p = 0.009). Patterned facial expressions can be identified in deteriorating general ward patients. This tool may potentially augment risk prediction of current scoring systems.
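The reported model (logistic regression with the binary face displays and the National Early Warning Score as covariates, evaluated by the concordance statistic) can be sketched generically. The fitting routine and the toy data below are illustrative assumptions, not the study's data or statistical software.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    # Plain gradient-descent logistic regression with an intercept term.
    Xb = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30)))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict_risk(w, X):
    # Predicted probability of the outcome (e.g. ICU admission).
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30)))

def c_index(y, risk):
    # Concordance statistic: probability that a randomly chosen case
    # is assigned a higher risk than a randomly chosen control.
    pos, neg = risk[y == 1], risk[y == 0]
    pairs = len(pos) * len(neg)
    concordant = sum(float(p > n) + 0.5 * float(p == n) for p in pos for n in neg)
    return concordant / pairs
```

Here each row of `X` would hold the three face-display indicators plus the ward score, and `y` the admission outcome; the hypothetical rows in the test are for demonstration only.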

  16. Typical visual search performance and atypical gaze behaviors in response to faces in Williams syndrome.

    PubMed

    Hirai, Masahiro; Muramatsu, Yukako; Mizuno, Seiji; Kurahashi, Naoko; Kurahashi, Hirokazu; Nakamura, Miho

    2016-01-01

    Evidence indicates that individuals with Williams syndrome (WS) exhibit atypical attentional characteristics when viewing faces. However, the dynamics of visual attention captured by faces remain unclear, especially when explicit attentional forces are present. To clarify this, we introduced a visual search paradigm and assessed how the relative strength of visual attention captured by a face and explicit attentional control changes as search progresses. Participants (WS and controls) searched for a target (butterfly) within an array of distractors, which sometimes contained an upright face. We analyzed reaction time and location of the first fixation-which reflect the attentional profile at the initial stage-and fixation durations. These features represent aspects of attention at later stages of visual search. The strength of visual attention captured by faces and explicit attentional control (toward the butterfly) was characterized by the frequency of first fixations on a face or butterfly and on the duration of face or butterfly fixations. Although reaction time was longer in all groups when faces were present, and visual attention was not dominated by faces in any group during the initial stages of the search, when faces were present, attention to faces dominated in the WS group during the later search stages. Furthermore, for the WS group, reaction time correlated with eye-movement measures at different stages of searching such that longer reaction times were associated with longer face-fixations, specifically at the initial stage of searching. Moreover, longer reaction times were associated with longer face-fixations at the later stages of searching, while shorter reaction times were associated with longer butterfly fixations. The relative strength of attention captured by faces in people with WS is not observed at the initial stage of searching but becomes dominant as the search progresses. Furthermore, although behavioral responses are associated with some

  17. Enhancement of face recognition learning in patients with brain injury using three cognitive training procedures.

    PubMed

    Powell, Jane; Letson, Susan; Davidoff, Jules; Valentine, Tim; Greenwood, Richard

    2008-04-01

    Twenty patients with impairments of face recognition, in the context of a broader pattern of cognitive deficits, were administered three new training procedures derived from contemporary theories of face processing to enhance their learning of new faces: semantic association (being given additional verbal information about the to-be-learned faces); caricaturing (presentation of caricatured versions of the faces during training and veridical versions at recognition testing); and part recognition (focusing patients on distinctive features during the training phase). Using a within-subjects design, each training procedure was applied to a different set of 10 previously unfamiliar faces and entailed six presentations of each face. In a "simple exposure" control procedure (SE), participants were given six presentations of another set of faces using the same basic protocol but with no further elaboration. Order of the four procedures was counterbalanced, and each condition was administered on a different day. A control group of 12 patients with similar levels of face recognition impairment were trained on all four sets of faces under SE conditions. Compared to the SE condition, all three training procedures resulted in more accurate discrimination between the 10 studied faces and 10 distractor faces in a post-training recognition test. This did not reflect any intrinsic lesser memorability of the faces used in the SE condition, as evidenced by the comparable performance across face sets by the control group. At the group level, the three experimental procedures were of similar efficacy, and associated cognitive deficits did not predict which technique would be most beneficial to individual patients; however, there was limited power to detect such associations. Interestingly, a pure prosopagnosic patient who was tested separately showed benefit only from the part recognition technique. Possible mechanisms for the observed effects, and implications for rehabilitation, are

  18. Face classification using electronic synapses

    NASA Astrophysics Data System (ADS)

    Yao, Peng; Wu, Huaqiang; Gao, Bin; Eryilmaz, Sukru Burc; Huang, Xueyao; Zhang, Wenqiang; Zhang, Qingtian; Deng, Ning; Shi, Luping; Wong, H.-S. Philip; Qian, He

    2017-05-01

    Conventional hardware platforms consume a huge amount of energy for cognitive learning due to the data movement between the processor and the off-chip memory. Brain-inspired device technologies using analogue weight storage allow cognitive tasks to be completed more efficiently. Here we present an analogue non-volatile resistive memory (an electronic synapse) with foundry-friendly materials. The device shows bidirectional continuous weight-modulation behaviour. Grey-scale face classification is experimentally demonstrated using an integrated 1024-cell array with parallel online training. The energy consumption within the analogue synapses for each iteration is 1,000× (20×) lower compared with an implementation using an Intel Xeon Phi processor with off-chip memory (with hypothetical on-chip digital resistive random access memory). The accuracy on test sets is close to the result obtained using a central processing unit. These experimental results consolidate the feasibility of the analogue synaptic array and pave the way toward building an energy-efficient, large-scale neuromorphic system.
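The parallel online training on an analogue synaptic array can be mimicked in software by a one-layer network whose weights change only by fixed-size potentiation/depression steps clipped to a conductance window. This is a toy sketch, not the paper's method: the tanh readout, step size, and conductance window are illustrative assumptions, not device parameters.

```python
import numpy as np

def train_analogue_array(X, labels, n_classes, epochs=50, step=0.02,
                         g_min=-1.0, g_max=1.0, seed=0):
    # One-layer network; each synapse changes only by a fixed-magnitude
    # potentiation or depression pulse, clipped to [g_min, g_max],
    # mimicking bidirectional continuous analogue weight modulation.
    rng = np.random.default_rng(seed)
    W = rng.uniform(-0.1, 0.1, size=(n_classes, X.shape[1]))
    for _ in range(epochs):
        for x, y in zip(X, labels):
            scores = W @ x
            target = -np.ones(n_classes)
            target[y] = 1.0
            err = target - np.tanh(scores)
            # one pulse per synapse, direction set by the sign of the error
            W += step * np.sign(np.outer(err, x))
            np.clip(W, g_min, g_max, out=W)
    return W

def classify(W, x):
    # Predicted class: output line with the largest read-out current.
    return int(np.argmax(W @ x))
```

After training on two distinct grey-scale patterns, the array separates them despite the coarse, clipped updates.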

  20. Three Faces of Fragile X.

    PubMed

    Lieb-Lundell, Cornelia C E

    2016-11-01

    Fragile X syndrome (FXS) is the first of 3 syndromes identified as a health condition related to fragile X mental retardation (FMR1) gene dysfunction. The other 2 syndromes are fragile X-associated primary ovarian insufficiency syndrome (FXPOI) and fragile X-associated tremor/ataxia syndrome (FXTAS), which together are referred to as fragile X-associated disorders (FXDs). Collectively, this group comprises the 3 faces of fragile X. Even though the 3 conditions share a common genetic defect, each one is a separate health condition that results in a variety of body function impairments such as motor delay, musculoskeletal issues related to low muscle tone, coordination limitations, ataxia, tremor, undefined muscle aches and pains, and, for FXTAS, a late-onset neurodegeneration. Although each FXD condition may benefit from physical therapy intervention, available evidence as to the efficacy of intervention appropriate to FXDs is lacking. This perspective article will discuss the genetic basis of FMR1 gene dysfunction and describe health conditions related to this mutation, which have a range of expressions within a family. Physical therapy concerns and possible assessment and intervention strategies will be introduced. Understanding the intergenerational effect of the FMR1 mutation with potential life-span expression is a key component to identifying and treating the health conditions related to this specific genetic condition. © 2016 American Physical Therapy Association.

  1. Facing rim cavities fluctuation modes

    NASA Astrophysics Data System (ADS)

    Casalino, Damiano; Ribeiro, André F. P.; Fares, Ehab

    2014-06-01

    Cavity modes taking place in the rims of two opposite wheels are investigated through Lattice-Boltzmann CFD simulations. Based on previous observations carried out by the authors during the BANC-II/LAGOON landing gear aeroacoustic study, a resonance mode can take place in the volume between the wheels of a two-wheel landing gear, involving a coupling between shear-layer vortical fluctuations and acoustic modes resulting from the combination of round cavity modes and wheel-to-wheel transversal acoustic modes. As a result, side force fluctuations and tonal noise side radiation take place. A parametric study of the cavity mode properties is carried out in the present work by varying the distance between the wheels. Moreover, the effects due to the presence of the axle are investigated by removing the axle from the two-wheel assembly. The azimuthal properties of the modes are scrutinized by filtering the unsteady flow in narrow bands around the tonal frequencies and investigating the azimuthal structure of the filtered fluctuation modes. Estimation of the tone frequencies with an ad hoc proposed analytical formula confirms the observed modal properties of the filtered unsteady flow solutions. The present study constitutes a primary step in the description of facing rim cavity modes as a possible source of landing gear tonal noise.

  2. Faciotopy—A face-feature map with face-like topology in the human occipital face area

    PubMed Central

    Henriksson, Linda; Mur, Marieke; Kriegeskorte, Nikolaus

    2015-01-01

    The occipital face area (OFA) and fusiform face area (FFA) are brain regions thought to be specialized for face perception. However, their intrinsic functional organization and status as cortical areas with well-defined boundaries remains unclear. Here we test these regions for “faciotopy”, a particular hypothesis about their intrinsic functional organisation. A faciotopic area would contain a face-feature map on the cortical surface, where cortical patches represent face features and neighbouring patches represent features that are physically neighbouring in a face. The faciotopy hypothesis is motivated by the idea that face regions might develop from a retinotopic protomap and acquire their selectivity for face features through natural visual experience. Faces have a prototypical configuration of features, are usually perceived in a canonical upright orientation, and are frequently fixated in particular locations. To test the faciotopy hypothesis, we presented images of isolated face features at fixation to subjects during functional magnetic resonance imaging. The responses in V1 were best explained by low-level image properties of the stimuli. OFA, and to a lesser degree FFA, showed evidence for faciotopic organization. When a single patch of cortex was estimated for each face feature, the cortical distances between the feature patches reflected the physical distance between the features in a face. Faciotopy would be the first example, to our knowledge, of a cortical map reflecting the topology, not of a part of the organism itself (its retina in retinotopy, its body in somatotopy), but of an external object of particular perceptual significance. PMID:26235800
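The reported relationship (cortical distances between feature patches reflecting the physical distances between the features in a face) amounts to correlating two pairwise-distance matrices. A minimal sketch follows, assuming 2-D coordinates for both the face features and the cortical patches; the study's actual surface-based analysis is more involved.

```python
import numpy as np

def pairwise_dist(points):
    # Euclidean distance matrix between 2-D coordinates.
    d = points[:, None, :] - points[None, :, :]
    return np.sqrt((d ** 2).sum(axis=-1))

def upper_tri(m):
    # Condense a symmetric distance matrix to its upper triangle.
    i, j = np.triu_indices(len(m), k=1)
    return m[i, j]

def map_topology_corr(feature_xy_face, patch_xy_cortex):
    # Pearson correlation between inter-feature distances in the face
    # and inter-patch distances on the (flattened) cortical surface.
    a = upper_tri(pairwise_dist(feature_xy_face))
    b = upper_tri(pairwise_dist(patch_xy_cortex))
    return float(np.corrcoef(a, b)[0, 1])
```

A cortical map that is a scaled, shifted copy of the face layout correlates perfectly; scrambling the patch assignment breaks the correlation.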

  3. Face-to-face or face-to-screen? Undergraduates' opinions and test performance in classroom vs. online learning

    PubMed Central

    Kemp, Nenagh; Grieve, Rachel

    2014-01-01

    As electronic communication becomes increasingly common, and as students juggle study, work, and family life, many universities are offering their students more flexible learning opportunities. Classes once delivered face-to-face are often replaced by online activities and discussions. However, there is little research comparing students' experience and learning in these two modalities. The aim of this study was to compare undergraduates' preference for, and academic performance on, class material and assessment presented online vs. in traditional classrooms. Psychology students (N = 67) at an Australian university completed written exercises, a class discussion, and a written test on two academic topics. The activities for one topic were conducted face-to-face, and the other online, with topics counterbalanced across two groups. The results showed that students preferred to complete activities face-to-face rather than online, but there was no significant difference in their test performance in the two modalities. In their written responses, students expressed a strong preference for class discussions to be conducted face-to-face, reporting that they felt more engaged, and received more immediate feedback, than in online discussion. A follow-up study with a separate group (N = 37) confirmed that although students appreciated the convenience of completing written activities online in their own time, they also strongly preferred to discuss course content with peers in the classroom rather than online. It is concluded that online and face-to-face activities can lead to similar levels of academic performance, but that students would rather do written activities online but engage in discussion in person. Course developers could aim to structure classes so that students can benefit from both the flexibility of online learning, and the greater engagement experienced in face-to-face discussion. PMID:25429276

  6. Self-face recognition in social context.

    PubMed

    Sugiura, Motoaki; Sassa, Yuko; Jeong, Hyeonjeong; Wakusawa, Keisuke; Horie, Kaoru; Sato, Shigeru; Kawashima, Ryuta

    2012-06-01

    The concept of "social self" is often described as a representation of the self-reflected in the eyes or minds of others. Although the appearance of one's own face has substantial social significance for humans, neuroimaging studies have failed to link self-face recognition and the likely neural substrate of the social self, the medial prefrontal cortex (MPFC). We assumed that the social self is recruited during self-face recognition under a rich social context where multiple other faces are available for comparison of social values. Using functional magnetic resonance imaging (fMRI), we examined the modulation of neural responses to the faces of the self and of a close friend in a social context. We identified an enhanced response in the ventral MPFC and right occipitoparietal sulcus in the social context specifically for the self-face. Neural response in the right lateral parietal and inferior temporal cortices, previously claimed as self-face-specific, was unaffected for the self-face but unexpectedly enhanced for the friend's face in the social context. Self-face-specific activation in the pars triangularis of the inferior frontal gyrus, and self-face-specific reduction of activation in the left middle temporal gyrus and the right supramarginal gyrus, replicating a previous finding, were not subject to such modulation. Our results thus demonstrated the recruitment of a social self during self-face recognition in the social context. At least three brain networks for self-face-specific activation may be dissociated by different patterns of response-modulation in the social context, suggesting multiple dynamic self-other representations in the human brain. Copyright © 2011 Wiley-Liss, Inc.

  7. Face format at encoding affects the other-race effect in face memory.

    PubMed

    Zhao, Mintao; Hayward, William G; Bülthoff, Isabelle

    2014-08-07

    Memory of own-race faces is generally better than memory of other-race faces. This other-race effect (ORE) in face memory has been attributed to differences in contact, holistic processing, and motivation to individuate faces. Since most studies demonstrate the ORE with participants learning and recognizing static, single-view faces, it remains unclear whether the ORE can be generalized to different face learning conditions. Using an old/new recognition task, we tested whether face format at encoding modulates the ORE. The results showed a significant ORE when participants learned static, single-view faces (Experiment 1). In contrast, the ORE disappeared when participants learned rigidly moving faces (Experiment 2). Moreover, learning faces displayed from four discrete views produced the same results as learning rigidly moving faces (Experiment 3). Contact with other-race faces was correlated with the magnitude of the ORE. Nonetheless, the absence of the ORE in Experiments 2 and 3 cannot be readily explained by either more frequent contact with other-race faces or stronger motivation to individuate them. These results demonstrate that the ORE is sensitive to face format at encoding, supporting the hypothesis that relative involvement of holistic and featural processing at encoding mediates the ORE observed in face memory. © 2014 ARVO.

  8. Face processing pattern under top-down perception: a functional MRI study

    NASA Astrophysics Data System (ADS)

    Li, Jun; Liang, Jimin; Tian, Jie; Liu, Jiangang; Zhao, Jizheng; Zhang, Hui; Shi, Guangming

    2009-02-01

    Although the top-down perceptual process plays an important role in face processing, its neural substrate remains puzzling because the top-down stream is difficult to extract from activation patterns contaminated by bottom-up face perception input. In the present study, a novel paradigm in which participants are instructed to detect faces in pure noise images is employed, which efficiently eliminates the interference of bottom-up face perception in top-down face processing. By analyzing the map of functional connectivity with the right FFA, computed using conventional Pearson's correlation, a possible face processing pattern induced by top-down perception can be obtained. Apart from the brain areas of the bilateral fusiform gyrus (FG), left inferior occipital gyrus (IOG) and left superior temporal sulcus (STS), which are consistent with a core system in the distributed cortical network for face perception, activation induced by top-down face processing is also found in regions that include the anterior cingulate gyrus (ACC), right orbitofrontal cortex (OFC), left precuneus, right parahippocampal cortex, left dorsolateral prefrontal cortex (DLPFC), right frontal pole, bilateral premotor cortex, left inferior parietal cortex and bilateral thalamus. The results indicate that decision-making, attention, episodic memory retrieval and contextual associative processing networks cooperate with general face processing regions to process face information under top-down perception.

  9. Discrimination of human and dog faces and inversion responses in domestic dogs (Canis familiaris).

    PubMed

    Racca, Anaïs; Amadei, Eleonora; Ligout, Séverine; Guo, Kun; Meints, Kerstin; Mills, Daniel

    2010-05-01

    Although domestic dogs can respond to many facial cues displayed by other dogs and humans, it remains unclear whether they can differentiate individual dogs or humans based on facial cues alone and, if so, whether they would demonstrate the face inversion effect, a behavioural hallmark commonly used in primates to differentiate face processing from object processing. In this study, we first established the applicability of the visual paired comparison (VPC or preferential looking) procedure for dogs using a simple object discrimination task with 2D pictures. The animals demonstrated a clear looking preference for novel objects when simultaneously presented with prior-exposed familiar objects. We then adopted this VPC procedure to assess their face discrimination and inversion responses. Dogs showed a deviation from random behaviour, indicating discrimination capability when inspecting upright dog faces, human faces and object images; but the pattern of viewing preference was dependent upon image category. They directed longer viewing time at novel (vs. familiar) human faces and objects, but not at dog faces, instead, a longer viewing time at familiar (vs. novel) dog faces was observed. No significant looking preference was detected for inverted images regardless of image category. Our results indicate that domestic dogs can use facial cues alone to differentiate individual dogs and humans and that they exhibit a non-specific inversion response. In addition, the discrimination response by dogs of human and dog faces appears to differ with the type of face involved.

  10. Robust Selectivity for Faces in the Human Amygdala in the Absence of Expressions

    PubMed Central

    Mende-Siedlecki, Peter; Verosky, Sara C.; Turk-Browne, Nicholas B.; Todorov, Alexander

    2014-01-01

    There is a well-established posterior network of cortical regions that plays a central role in face processing and that has been investigated extensively. In contrast, although responsive to faces, the amygdala is not considered a core face-selective region, and its face selectivity has never been a topic of systematic research in human neuroimaging studies. Here, we conducted a large-scale group analysis of fMRI data from 215 participants. We replicated the posterior network observed in prior studies but found equally robust and reliable responses to faces in the amygdala. These responses were detectable in most individual participants, but they were also highly sensitive to the initial statistical threshold and habituated more rapidly than the responses in posterior face-selective regions. A multivariate analysis showed that the pattern of responses to faces across voxels in the amygdala had high reliability over time. Finally, functional connectivity analyses showed stronger coupling between the amygdala and posterior face-selective regions during the perception of faces than during the perception of control visual categories. These findings suggest that the amygdala should be considered a core face-selective region. PMID:23984945

  11. Finding a face in the crowd: testing the anger superiority effect in Asperger Syndrome.

    PubMed

    Ashwin, Chris; Wheelwright, Sally; Baron-Cohen, Simon

    2006-06-01

    Social threat captures attention and is processed rapidly and efficiently, with many lines of research showing involvement of the amygdala. Visual search paradigms looking at social threat have shown angry faces 'pop-out' in a crowd, compared to happy faces. Autism and Asperger Syndrome (AS) are neurodevelopmental conditions characterised by social deficits, abnormal face processing, and amygdala dysfunction. We tested adults with high-functioning autism (HFA) and AS using a facial visual search paradigm with schematic neutral and emotional faces. We found, contrary to predictions, that people with HFA/AS performed similarly to controls in many conditions. However, the effect was reduced in the HFA/AS group when using widely varying crowd sizes and when faces were inverted, suggesting a difference in face-processing style may be evident even with simple schematic faces. We conclude there are intact threat detection mechanisms in AS, under simple and predictable conditions, but that like other face-perception tasks, the visual search of threat faces task reveals atypical face-processing in HFA/AS.

  12. Individual differences in anxiety predict neural measures of visual working memory for untrustworthy faces.

    PubMed

    Meconi, Federica; Luria, Roy; Sessa, Paola

    2014-12-01

    When facing strangers, one of the first evaluations people perform is to implicitly assess their trustworthiness. However, the underlying processes supporting trustworthiness appraisal are poorly understood. We hypothesized that visual working memory (VWM) maintains online face representations that are sensitive to physical cues of trustworthiness, and that differences among individuals in representing untrustworthy faces are associated with individual differences in anxiety. Participants performed a change detection task that required encoding and maintaining for a short interval the identity of one face parametrically manipulated to be either trustworthy or untrustworthy. The sustained posterior contralateral negativity (SPCN), an event-related component (ERP) time-locked to the onset of the face, was used to index the resolution of face representations in VWM. Results revealed greater SPCN amplitudes for trustworthy faces when compared with untrustworthy faces, indicating that VWM is sensitive to physical cues of trustworthiness, even in the absence of explicit trustworthiness appraisal. In addition, differences in SPCN amplitude between trustworthy and untrustworthy faces correlated with participants' anxiety, indicating that healthy college students with sub-clinical high anxiety levels represented untrustworthy faces in greater detail compared with students with sub-clinical low anxiety levels. This pattern of findings is discussed in terms of the high flexibility of aversive/avoidance and appetitive/approach motivational systems.

  13. Blended CBT versus face-to-face CBT: a randomised non-inferiority trial.

    PubMed

    Mathiasen, Kim; Andersen, Tonny E; Riper, Heleen; Kleiboer, Annet A M; Roessler, Kirsten K

    2016-12-05

    Internet-based cognitive behavioural therapy (iCBT) has been demonstrated to be cost- and clinically effective. There is a need, however, for increased therapist contact for some patient groups. Combining iCBT with traditional face-to-face (ftf) consultations in a blended format (B-CBT) may produce a new treatment format with multiple benefits from both traditional CBT and iCBT, such as individual adaptation, lower costs than traditional therapy, wide geographical and temporal availability, and possibly a lower threshold to implementation. The primary aim of the present study is to compare directly the clinical effectiveness of B-CBT with face-to-face CBT for adult major depressive disorder. The study is designed as a two-arm randomised controlled non-inferiority trial comparing blended CBT for adult depression with treatment as usual (TAU). In the blended condition, six sessions of ftf CBT are alternated with six to eight online modules (NoDep). TAU is defined as 12 sessions of ftf CBT. The primary outcome is symptomatic change of depressive symptoms on the Patient Health Questionnaire (PHQ-9). Additionally, the study will include an economic evaluation. All participants must be 18 years of age or older and meet the diagnostic criteria for major depressive disorder according to the Diagnostic and Statistical Manual of Mental Disorders, 4th edition. Participants are randomised on an individual level by a researcher not involved in the project. The primary outcome is analysed by regressing the three-month follow-up PHQ-9 data on the baseline PHQ-9 score and a treatment group indicator using ANCOVA. A sample size of 130 in two balanced groups will yield a power of at least 80% to detect standardised mean differences above 0.5 on a normally distributed variable. This study design will compare B-CBT and ftf CBT in a concise and direct manner with only a minimal amount of the variance explained by differences in therapeutic content. On the other hand, while situated in routine care
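The sample-size statement in the trial abstract (130 participants, power of at least 80% for standardised mean differences above 0.5) can be sanity-checked with a textbook normal approximation for a two-sided, two-sample test. This is only a rough sketch; the trial protocol's own power analysis may use different assumptions, and the function name here is illustrative:

```python
import math

def normal_cdf(x):
    """Standard normal CDF computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def approx_power_two_sample(d, n_per_group, alpha=0.05):
    """Normal approximation to the power of a two-sided two-sample t-test.

    d: standardised mean difference (Cohen's d)
    n_per_group: participants per balanced group
    """
    # noncentrality parameter for equal group sizes
    ncp = d * math.sqrt(n_per_group / 2.0)
    z_crit = 1.959964  # two-sided 5% critical value of the standard normal
    return normal_cdf(ncp - z_crit)

# 130 participants in two balanced groups, standardised effect d = 0.5
power = approx_power_two_sample(0.5, 65)  # roughly 0.81, i.e. above 80%
```

The exact noncentral-t calculation gives a slightly lower value than the normal approximation, but it remains above 80% for these inputs, consistent with the abstract's claim.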

  14. ERPs reveal subliminal processing of fearful faces.

    PubMed

    Kiss, Monika; Eimer, Martin

    2008-03-01

    To investigate whether facial expression is processed in the absence of conscious awareness, ERPs were recorded in a task in which participants had to identify the expression of masked fearful and neutral target faces. On supraliminal trials (200 ms target duration), in which identification performance was high, a sustained positivity to fearful versus neutral target faces started 140 ms after target face onset. On subliminal trials (8 ms target duration), identification performance was at chance level, but ERPs still showed systematic fear-specific effects. An early positivity to fearful target faces was present but smaller than on supraliminal trials. A subsequent enhanced N2 to fearful faces was only present for subliminal trials. In contrast, a P3 enhancement to fearful faces was observed on supraliminal but not subliminal trials. Results demonstrate rapid emotional expression processing in the absence of awareness.

  15. ERPs reveal subliminal processing of fearful faces

    PubMed Central

    Kiss, Monika; Eimer, Martin

    2008-01-01

    To investigate whether facial expression is processed in the absence of conscious awareness, ERPs were recorded in a task where participants had to identify the expression of masked fearful and neutral target faces. On supraliminal trials (200 ms target duration), where identification performance was high, a sustained positivity to fearful versus neutral target faces started 140 ms after target face onset. On subliminal trials (8 ms target duration), identification performance was at chance level, but ERPs still showed systematic fear-specific effects. An early positivity to fearful target faces was present but smaller than on supraliminal trials. A subsequent enhanced N2 to fearful faces was only present for subliminal trials. In contrast, a P3 enhancement to fearful faces was observed on supraliminal but not subliminal trials. Results demonstrate rapid emotional expression processing in the absence of awareness. PMID:17995905

  16. A survey of real face modeling methods

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoyue; Dai, Yugang; He, Xiangzhen; Wan, Fucheng

    2017-09-01

    The face model has long been a research challenge in computer graphics, as it involves the coordination of multiple facial organs. This article explains two kinds of face modeling methods, one data-driven and one based on parameter control, analyses their content and background, summarises their advantages and disadvantages, and concludes that the muscle model, which is grounded in anatomical principles, achieves higher accuracy and is easier to drive.
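For illustration of the parameter-control family the survey mentions, a common parametric scheme is the blendshape model: a face is a neutral mesh plus a weighted sum of per-expression vertex offsets. This sketch is not from the article itself; the function and data layout are hypothetical:

```python
def blend_face(neutral, deltas, weights):
    """Parameter-controlled face: neutral mesh plus weighted blendshape offsets.

    neutral: list of (x, y, z) vertex positions
    deltas:  one list of per-vertex (dx, dy, dz) offsets per blendshape
    weights: one scalar parameter per blendshape
    """
    face = [list(v) for v in neutral]
    for delta, w in zip(deltas, weights):
        for i, (dx, dy, dz) in enumerate(delta):
            face[i][0] += w * dx
            face[i][1] += w * dy
            face[i][2] += w * dz
    return face

# One vertex, two blendshapes: half of the first offset plus all of the second
posed = blend_face([(0.0, 0.0, 0.0)],
                   [[(1.0, 0.0, 0.0)], [(0.0, 2.0, 0.0)]],
                   [0.5, 1.0])
```

Animating the face then reduces to varying the weight vector over time, which is what makes parametric models easy to drive compared with raw data-driven meshes.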

  17. Development of an Autonomous Face Recognition Machine.

    DTIC Science & Technology

    1986-12-08

    This approach, like Baron’s, would be a very time consuming task. The problem of locating a face in Bromley’s work was the least complex of the three...top level design and the development and design decisions that were made in developing the Autonomous Face Recognition Machine (AFRM). The chapter is...images within a digital image. The second section examines the algorithm used in performing face recognition. The decision to divide the development

  18. Contextual modulation of biases in face recognition.

    PubMed

    Felisberti, Fatima Maria; Pavey, Louisa

    2010-09-23

    The ability to recognize the faces of potential cooperators and cheaters is fundamental to social exchanges, given that cooperation for mutual benefit is expected. Studies addressing biases in face recognition have so far proved inconclusive, with reports of biases towards faces of cheaters, biases towards faces of cooperators, or no biases at all. This study attempts to uncover possible causes underlying such discrepancies. Four experiments were designed to investigate biases in face recognition during social exchanges when behavioral descriptors (prosocial, antisocial or neutral) embedded in different scenarios were tagged to faces during memorization. Face recognition, measured as accuracy and response latency, was tested with modified yes-no, forced-choice and recall tasks (N = 174). An enhanced recognition of faces tagged with prosocial descriptors was observed when the encoding scenario involved financial transactions and the rules of the social contract were not explicit (experiments 1 and 2). Such bias was eliminated or attenuated by making participants explicitly aware of "cooperative", "cheating" and "neutral/indifferent" behaviors via a pre-test questionnaire and then adding such tags to behavioral descriptors (experiment 3). Further, in a social judgment scenario with descriptors of salient moral behaviors, recognition of antisocial and prosocial faces was similar, but significantly better than for neutral faces (experiment 4). The results highlight the relevance of descriptors and scenarios of social exchange in face recognition when the frequency of prosocial and antisocial individuals in a group is similar. Recognition biases towards prosocial faces emerged when descriptors did not state the rules of a social contract or the moral status of a behavior, pointing to the existence of broad and flexible cognitive abilities finely tuned to minor changes in social context.

  19. Early (M170) activation of face-specific cortex by face-like objects.

    PubMed

    Hadjikhani, Nouchine; Kveraga, Kestutis; Naik, Paulami; Ahlfors, Seppo P

    2009-03-04

    The tendency to perceive faces in random patterns exhibiting configural properties of faces is an example of pareidolia. Perception of 'real' faces has been associated with a cortical response signal arising at approximately 170 ms after stimulus onset, but what happens when nonface objects are perceived as faces? Using magnetoencephalography, we found that objects incidentally perceived as faces evoked an early (165 ms) activation in the ventral fusiform cortex, at a time and location similar to that evoked by faces, whereas common objects did not evoke such activation. An earlier peak at 130 ms was also seen for images of real faces only. Our findings suggest that face perception evoked by face-like objects is a relatively early process, not a late cognitive reinterpretation phenomenon.

  20. Withholding response to self-face is faster than to other-face.

    PubMed

    Zhu, Min; Hu, Yinying; Tang, Xiaochen; Luo, Junlong; Gao, Xiangping

    2015-01-01

    Self-face advantage refers to the finding that adults respond faster to their own face than to another person's face. A stop-signal task was used to explore how the self-face advantage interacts with response inhibition. The results showed that reaction times to the self-face were faster than those to the other-face not in the go task but in the stop-response trials. The novel finding was that the self-face had a shorter stop-signal reaction time than the other-face in successful inhibition trials. These results indicate that the processing of the self-face may be characterized by both a strong response tendency and a correspondingly strong inhibitory control.
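For context on the measure this record relies on: the stop-signal reaction time (SSRT) cannot be observed directly and is usually estimated. A common approach is the integration method under the race model, sketched below; this is a standard estimator, not necessarily the authors' exact analysis, and the function name is illustrative:

```python
def ssrt_integration(go_rts, mean_ssd, p_respond_given_stop):
    """Estimate stop-signal reaction time via the integration method.

    go_rts: reaction times (ms) on go trials
    mean_ssd: mean stop-signal delay (ms) across stop trials
    p_respond_given_stop: proportion of stop trials with a failed stop
    """
    go_sorted = sorted(go_rts)
    # Find the go RT at the quantile matching the probability of failing
    # to stop; under the race model this marks the finishing time of the
    # stop process relative to go-trial responding.
    idx = min(int(p_respond_given_stop * len(go_sorted)), len(go_sorted) - 1)
    return go_sorted[idx] - mean_ssd
```

A shorter SSRT for the self-face, as reported above, would then mean the stop process finishes faster when the to-be-inhibited response is directed at one's own face.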