Sample records for face detection methods

  1. A Method of Face Detection with Bayesian Probability

    NASA Astrophysics Data System (ADS)

    Sarker, Goutam

    2010-10-01

    The objective of face detection is to identify all images that contain a face, irrespective of its orientation, illumination conditions, etc. This is a hard problem, because faces are highly variable in size, shape, lighting conditions, etc. Many methods have been designed and developed to detect faces in a single image. The present paper is based on an `Appearance Based Method' that relies on learning facial and non-facial features from image examples. This, in turn, is based on a statistical analysis of examples and counter-examples of facial images and employs a Bayesian conditional classification rule to estimate the probability that a region of an image frame belongs to a face (or non-face). The detection rate of the present system is very high, and the numbers of false positive and false negative detections are correspondingly low.
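
    Below is a minimal sketch of the appearance-based idea described above: learn class-conditional statistics from face and non-face example windows and apply a Bayesian decision rule to a candidate window. scikit-learn's GaussianNB stands in for the paper's statistical model, and the array shapes and threshold are assumptions for illustration.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def train_face_classifier(face_windows, nonface_windows):
    """face_windows, nonface_windows: arrays of shape (n, h, w), grayscale crops."""
    X = np.vstack([face_windows.reshape(len(face_windows), -1),
                   nonface_windows.reshape(len(nonface_windows), -1)]).astype(float)
    y = np.array([1] * len(face_windows) + [0] * len(nonface_windows))
    model = GaussianNB()   # class-conditional Gaussians + Bayes rule for the posterior
    model.fit(X, y)
    return model

def is_face(model, window, threshold=0.5):
    """Classify one window by its posterior probability of belonging to the face class."""
    p_face = model.predict_proba(window.reshape(1, -1).astype(float))[0, 1]
    return p_face > threshold
```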

  2. Novel face-detection method under various environments

    NASA Astrophysics Data System (ADS)

    Jing, Min-Quan; Chen, Ling-Hwei

    2009-06-01

    We propose a method to detect a face with different poses under various environments. On the basis of skin color information, skin regions are first extracted from an input image. Next, the shoulder part is cut out by using shape information and the head part is then identified as a face candidate. For a face candidate, a set of geometric features is applied to determine if it is a profile face. If not, then a set of eyelike rectangles extracted from the face candidate and the lighting distribution are used to determine if the face candidate is a nonprofile face. Experimental results show that the proposed method is robust under a wide range of lighting conditions, different poses, and races. The detection rate for the HHI face database is 93.68%. For the Champion face database, the detection rate is 95.15%.

  3. Adaboost multi-view face detection based on YCgCr skin color model

    NASA Astrophysics Data System (ADS)

    Lan, Qi; Xu, Zhiyong

    2016-09-01

    The traditional AdaBoost face detection algorithm trains face classifiers on Haar-like features and achieves a low error rate within face regions. Under complex backgrounds, however, the classifiers easily misdetect background regions whose gray-level distribution resembles that of faces, so the false detection rate of the traditional AdaBoost algorithm is high. As one of the most important facial cues, skin color clusters well in the YCgCr color space, so non-face areas can be quickly excluded with a skin color model. Combining the advantages of the AdaBoost algorithm and skin color detection, this paper therefore proposes an AdaBoost face detection method based on the YCgCr skin color model. Experiments show that, compared with the traditional algorithm, the proposed method significantly improves detection accuracy and reduces false detections.
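
    The sketch below illustrates the two-stage idea: a YCgCr skin mask first rules out non-skin regions, and a Haar cascade (AdaBoost) detector then runs on what remains. The YCgCr conversion coefficients and skin thresholds are common literature values, not taken from this paper, and the cascade file path is assumed.

```python
import cv2
import numpy as np

def skin_mask_ycgcr(bgr):
    b, g, r = cv2.split(bgr.astype(np.float32))
    cg = 128 - 0.318 * r + 0.439 * g - 0.121 * b   # approximate YCgCr conversion
    cr = 128 + 0.439 * r - 0.368 * g - 0.071 * b
    # illustrative skin ranges; tune per dataset
    return ((cg > 85) & (cg < 135) & (cr > 130) & (cr < 165)).astype(np.uint8) * 255

def detect_faces(bgr, cascade_path="haarcascade_frontalface_default.xml"):
    mask = skin_mask_ycgcr(bgr)
    masked = cv2.bitwise_and(bgr, bgr, mask=mask)      # suppress non-skin background
    gray = cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(cascade_path)      # assumed Haar cascade file
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```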

  4. Face liveness detection for face recognition based on cardiac features of skin color image

    NASA Astrophysics Data System (ADS)

    Suh, Kun Ha; Lee, Eui Chul

    2016-07-01

    With the growth of biometric technology, spoofing attacks have emerged as a threat to the security of such systems. The main spoofing scenarios in face recognition systems include the printing attack, replay attack, and 3D mask attack. To prevent such attacks, techniques that evaluate the liveness of the biometric data can be considered as a solution. In this paper, a novel face liveness detection method based on the cardiac signal extracted from the face is presented. The key point of the proposed method is that the cardiac characteristic is detected in live faces but not in non-live faces. Experimental results showed that the proposed method can be an effective way of detecting printing attacks and 3D mask attacks.

  5. Live face detection based on the analysis of Fourier spectra

    NASA Astrophysics Data System (ADS)

    Li, Jiangwei; Wang, Yunhong; Tan, Tieniu; Jain, Anil K.

    2004-08-01

    Biometrics is a rapidly developing technology that identifies a person based on his or her physiological or behavioral characteristics. To ensure correct authentication, a biometric system must be able to detect and reject the use of a copy of a biometric trait instead of the live trait. This function is usually termed "liveness detection". This paper describes a new method for live face detection. Using structure and movement information of a live face, an effective live face detection algorithm is presented. Compared to existing approaches, which concentrate on the measurement of 3D depth information, this method is based on the analysis of Fourier spectra of a single face image or of face image sequences. Experimental results show that the proposed method has an encouraging performance.
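
    A minimal sketch of the Fourier-spectrum cue: printed or replayed faces tend to contain less high-frequency energy than live faces, so the fraction of spectral energy outside a low-frequency disc can serve as a liveness score. The radius fraction and decision threshold below are illustrative assumptions, not the authors' values.

```python
import numpy as np

def high_frequency_ratio(gray, radius_frac=0.25):
    """Fraction of 2-D spectral energy outside a central low-frequency disc."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(gray.astype(float))))
    h, w = gray.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(yy - h / 2, xx - w / 2)
    low = mag[dist <= radius_frac * min(h, w) / 2].sum()
    return (mag.sum() - low) / mag.sum()

def looks_live(face_gray, threshold=0.35):
    return high_frequency_ratio(face_gray) > threshold
```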

  6. An Implementation of Privacy Protection for a Surveillance Camera Using ROI Coding of JPEG2000 with Face Detection

    NASA Astrophysics Data System (ADS)

    Muneyasu, Mitsuji; Odani, Shuhei; Kitaura, Yoshihiro; Namba, Hitoshi

    When a surveillance camera is used, privacy protection must sometimes be considered. This paper proposes a new privacy protection method that automatically degrades the face region in surveillance images. The proposed method consists of ROI coding of JPEG2000 and a face detection method based on template matching. The experimental results show that the face region can be detected and hidden correctly.

  7. Energy conservation using face detection

    NASA Astrophysics Data System (ADS)

    Deotale, Nilesh T.; Kalbande, Dhananjay R.; Mishra, Akassh A.

    2011-10-01

    Computerized face detection is concerned with the difficult task of converting a video signal of a person to written text. It has several applications, such as face recognition, simultaneous multiple-face processing, biometrics, security, video surveillance, human-computer interfaces, image database management, autofocus in digital cameras, and selecting regions of interest in pan-and-scale photo slideshows. The present paper deals with energy conservation using face detection. Automating the process on a computer requires the use of various image processing techniques. There are various methods that can be used for face detection, such as contour tracking, template matching, controlled background, model-based, motion-based, and color-based methods. The video of the subject is converted into images, which are then selected manually for processing. However, several factors, such as poor illumination, face movement, viewpoint-dependent physical appearance, acquisition geometry, imaging conditions, and compression artifacts, make face detection difficult. This paper reports an algorithm for energy conservation using face detection for various devices. It suggests that energy can be conserved by detecting the face, reducing the brightness of the complete image, and then adjusting the brightness of the particular image area where the face is located using histogram equalization.
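
    The sketch below follows the energy-saving recipe described in the abstract: detect the face, dim the whole frame, then restore only the face region with histogram equalization of its luminance. The dimming factor and cascade file are assumptions for illustration, not the authors' settings.

```python
import cv2

def dim_except_face(frame_bgr, dim=0.4,
                    cascade_path="haarcascade_frontalface_default.xml"):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cv2.CascadeClassifier(cascade_path).detectMultiScale(gray, 1.1, 5)
    out = (frame_bgr * dim).astype(frame_bgr.dtype)       # reduce overall brightness
    for (x, y, w, h) in faces:
        roi = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2YCrCb)
        roi[:, :, 0] = cv2.equalizeHist(roi[:, :, 0])     # equalize luminance only
        out[y:y + h, x:x + w] = cv2.cvtColor(roi, cv2.COLOR_YCrCb2BGR)
    return out
```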

  8. Face detection assisted auto exposure: supporting evidence from a psychophysical study

    NASA Astrophysics Data System (ADS)

    Jin, Elaine W.; Lin, Sheng; Dharumalingam, Dhandapani

    2010-01-01

    Face detection has been implemented in many digital still cameras and camera phones with the promise of enhancing existing camera functions (e.g. auto exposure) and adding new features to cameras (e.g. blink detection). In this study we examined the use of face detection algorithms in assisting auto exposure (AE). The set of 706 images used in this study was captured using Canon Digital Single Lens Reflex cameras and subsequently processed with an image processing pipeline. A psychophysical study was performed to obtain the optimal exposure along with the upper and lower bounds of exposure for all 706 images. Three methods of marking faces were utilized: manual marking, face detection algorithm A (FD-A), and face detection algorithm B (FD-B). The manual marking method found 751 faces in 426 images, which served as the ground truth for face regions of interest. The remaining images either contain no faces or faces too small to be considered detectable. The two face detection algorithms differ in resource requirements and in performance. FD-A uses less memory and fewer gate counts than FD-B, but FD-B detects more faces and has fewer false positives. A face detection assisted auto exposure algorithm was developed and tested against the evaluation results from the psychophysical study. The AE test results showed noticeable improvement when faces were detected and used in auto exposure. However, the presence of false positives would negatively impact the added benefit.

  9. An objective method for measuring face detection thresholds using the sweep steady-state visual evoked response

    PubMed Central

    Ales, Justin M.; Farzin, Faraz; Rossion, Bruno; Norcia, Anthony M.

    2012-01-01

    We introduce a sensitive method for measuring face detection thresholds rapidly, objectively, and independently of low-level visual cues. The method is based on the swept parameter steady-state visual evoked potential (ssVEP), in which a stimulus is presented at a specific temporal frequency while parametrically varying (“sweeping”) the detectability of the stimulus. Here, the visibility of a face image was increased by progressive derandomization of the phase spectra of the image in a series of equally spaced steps. Alternations between face and fully randomized images at a constant rate (3/s) elicit a robust first harmonic response at 3 Hz specific to the structure of the face. High-density EEG was recorded from 10 human adult participants, who were asked to respond with a button-press as soon as they detected a face. The majority of participants produced an evoked response at the first harmonic (3 Hz) that emerged abruptly between 30% and 35% phase-coherence of the face, which was most prominent on right occipito-temporal sites. Thresholds for face detection were estimated reliably in single participants from 15 trials, or on each of the 15 individual face trials. The ssVEP-derived thresholds correlated with the concurrently measured perceptual face detection thresholds. This first application of the sweep VEP approach to high-level vision provides a sensitive and objective method that could be used to measure and compare visual perception thresholds for various object shapes and levels of categorization in different human populations, including infants and individuals with developmental delay. PMID:23024355

  10. A Comparative Survey of Methods for Remote Heart Rate Detection From Frontal Face Videos

    PubMed Central

    Wang, Chen; Pun, Thierry; Chanel, Guillaume

    2018-01-01

    Remotely measuring physiological activity can provide substantial benefits for both medical and affective computing applications. Recent research has proposed different methodologies for the unobtrusive detection of heart rate (HR) using human face recordings. These methods are based on subtle color changes or motions of the face due to cardiovascular activity, which are invisible to human eyes but can be captured by digital cameras. Several approaches have been proposed, based on signal processing or machine learning. However, these methods have been evaluated on different datasets, and there is consequently no consensus on method performance. In this article, we describe and evaluate several methods defined in the literature, from 2008 until the present day, for the remote detection of HR using human face recordings. The general HR processing pipeline is divided into three stages: face video processing, face blood volume pulse (BVP) signal extraction, and HR computation. Approaches presented in the paper are classified and grouped according to each stage. At each stage, algorithms are analyzed and compared based on their performance using the public MAHNOB-HCI database; the results reported in this article are therefore limited to that dataset. Results show that the extracted face skin area contains more BVP information, and that blind source separation and peak detection methods are more robust to head motions when estimating HR. PMID:29765940
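
    As a concrete illustration of the last two pipeline stages (assuming the BVP extraction stage has already produced a 1-D trace), the sketch below band-passes the trace to a plausible heart-rate band and counts peaks. The sampling rate and band edges are typical values, not prescribed by the survey.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def heart_rate_bpm(bvp, fs=30.0, low=0.7, high=4.0):
    """bvp: 1-D face BVP trace (e.g. mean skin colour per frame) sampled at fs Hz."""
    b, a = butter(3, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, bvp - np.mean(bvp))
    peaks, _ = find_peaks(filtered, distance=fs / high)   # at most one peak per 1/high s
    return 60.0 * len(peaks) / (len(bvp) / fs)
```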

  11. Real-time detection with AdaBoost-svm combination in various face orientation

    NASA Astrophysics Data System (ADS)

    Fhonna, R. P.; Nasution, M. K. M.; Tulus

    2018-03-01

    Much previous research has used the AdaBoost-SVM algorithm for face detection. However, to our knowledge, no research has so far addressed face detection on real-time data with various orientations using the combination of AdaBoost and Support Vector Machine (SVM). The complex and diverse variations of faces, real-time data in various orientations, and a very complex application all slow down the performance of the face detection system; this is the challenge addressed in this research. The detection system considers face orientations of 90°, 45°, 0°, −45°, and −90°. This combined method is expected to be an effective and efficient solution for various face orientations. The results showed that the highest average detection rate is obtained for faces oriented at 0° and the lowest detection rate for faces oriented at 90°.

  12. Multiview face detection based on position estimation over multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    Huang, Ching-chun; Chou, Jay; Shiu, Jia-Hou; Wang, Sheng-Jyh

    2012-02-01

    In this paper, we propose a multi-view face detection system that locates head positions and indicates the direction of each face in 3-D space over a multi-camera surveillance system. To locate 3-D head positions, conventional methods relied on face detection in 2-D images and projected the face regions back to 3-D space for correspondence. However, the inevitable false face detections and rejections usually degrade system performance. Instead, our system searches for the heads and face directions over the 3-D space using a sliding cube. Each searched 3-D cube is projected onto the 2-D camera views to determine the existence and direction of human faces. Moreover, a pre-process that estimates the locations of candidate targets is introduced to speed up the search over the 3-D space. In summary, our proposed method can efficiently fuse multi-camera information and suppress the ambiguity caused by detection errors. Our evaluation shows that the proposed approach can efficiently indicate the head position and face direction on real video sequences even under serious occlusion.

  13. Face pose tracking using the four-point algorithm

    NASA Astrophysics Data System (ADS)

    Fung, Ho Yin; Wong, Kin Hong; Yu, Ying Kin; Tsui, Kwan Pang; Kam, Ho Chuen

    2017-06-01

    In this paper, we have developed an algorithm to track the pose of a human face robustly and efficiently. Face pose estimation is very useful in many applications such as building virtual reality systems and creating an alternative input method for the disabled. Firstly, we have modified a face detection toolbox called DLib for the detection of a face in front of a camera. The detected face features are passed to a pose estimation method, known as the four-point algorithm, for pose computation. The theory applied and the technical problems encountered during system development are discussed in the paper. It is demonstrated that the system is able to track the pose of a face in real time using a consumer grade laptop computer.
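
    The sketch below mirrors the described pipeline: DLib finds the face and its landmarks, and a pose is recovered from four of them. It uses OpenCV's solvePnP with the P3P solver rather than the paper's four-point algorithm, the landmark model file is assumed to be available, and the 3-D model coordinates are generic approximations rather than the authors' values.

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model file

# nose tip, chin, left eye outer corner, right eye outer corner (68-point indices)
LANDMARK_IDS = [30, 8, 36, 45]
MODEL_3D = np.array([[0.0, 0.0, 0.0],          # rough generic 3-D face model, in mm
                     [0.0, -63.6, -12.5],
                     [-43.3, 32.7, -26.0],
                     [43.3, 32.7, -26.0]], dtype=np.float64)

def estimate_pose(gray, focal_length):
    faces = detector(gray)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    pts_2d = np.array([[shape.part(i).x, shape.part(i).y] for i in LANDMARK_IDS],
                      dtype=np.float64)
    h, w = gray.shape
    cam = np.array([[focal_length, 0, w / 2],
                    [0, focal_length, h / 2],
                    [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(MODEL_3D, pts_2d, cam, None, flags=cv2.SOLVEPNP_P3P)
    return (rvec, tvec) if ok else None
```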

  14. A special purpose knowledge-based face localization method

    NASA Astrophysics Data System (ADS)

    Hassanat, Ahmad; Jassim, Sabah

    2008-04-01

    This paper is concerned with face localization for a visual speech recognition (VSR) system. Face detection and localization have received a great deal of attention in the last few years, because they are essential pre-processing steps in many techniques that handle or deal with faces (e.g. age, face, gender, race, and visual speech recognition). We present an efficient method for localizing human faces in video images captured on constrained mobile devices under a wide variation in lighting conditions. We use a multiphase method that may include all or some of the following steps, starting with image pre-processing, followed by special-purpose edge detection and an image refinement step. The output image is passed through a discrete wavelet decomposition procedure, and the computed LL sub-band at a certain level is transformed into a binary image that is scanned using a special template to select a number of possible candidate locations. Finally, we fuse the scores from the wavelet step with scores determined by color information for the candidate locations and employ a form of fuzzy logic to distinguish face from non-face locations. We present the results of a large number of experiments to demonstrate that the proposed face localization method is efficient and achieves a high level of accuracy, outperforming existing general-purpose face detection methods.
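
    The wavelet step of this pipeline can be sketched as below: decompose the refined image, keep the LL (approximation) sub-band at the chosen level, and binarize it before template scanning. The wavelet name, decomposition level, and Otsu-style threshold are assumptions for illustration, not the authors' choices.

```python
import numpy as np
import pywt
from skimage.filters import threshold_otsu

def binarized_ll_subband(gray, wavelet="haar", level=2):
    coeffs = pywt.wavedec2(gray.astype(float), wavelet, level=level)
    ll = coeffs[0]                                    # approximation (LL) sub-band
    return (ll > threshold_otsu(ll)).astype(np.uint8)
```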

  15. Sunglass detection method for automation of video surveillance system

    NASA Astrophysics Data System (ADS)

    Sikandar, Tasriva; Samsudin, Wan Nur Azhani W.; Hawari Ghazali, Kamarul; Mohd, Izzeldin I.; Fazle Rabbi, Mohammad

    2018-04-01

    Wearing sunglasses to hide the face from surveillance cameras is a common activity in criminal incidents. Therefore, sunglass detection from surveillance video has become a demanding issue in the automation of security systems. In this paper we propose an image processing method to detect sunglasses in surveillance images. Specifically, a unique feature using facial height and width has been employed to identify the covered region of the face. The presence of an area covered by sunglasses is evaluated using the facial height-width ratio, and a threshold on the covered-area percentage is used to classify glass-wearing faces. Two different types of glasses have been considered, i.e. eyeglasses and sunglasses. The results of this study demonstrate that the proposed method is able to detect sunglasses under two different illumination conditions: room illumination and sunlight. In addition, due to the multi-level checking in the facial region, this method has 100% accuracy in detecting sunglasses. However, in an exceptional case where fabric surrounding the face has a color similar to skin, the correct detection rate for eyeglasses was found to be 93.33%.

  16. High precision automated face localization in thermal images: oral cancer dataset as test case

    NASA Astrophysics Data System (ADS)

    Chakraborty, M.; Raman, S. K.; Mukhopadhyay, S.; Patsa, S.; Anjum, N.; Ray, J. G.

    2017-02-01

    Automated face detection is the pivotal step in computer-vision-aided facial medical diagnosis and biometrics. This paper presents an automatic, subject-adaptive framework for accurate face detection in the long-infrared spectrum on our database for oral cancer detection, consisting of malignant, precancerous, and normal subjects of varied age groups. Previous work on oral cancer detection using Digital Infrared Thermal Imaging (DITI) reveals that patients and normal subjects differ significantly in their facial thermal distribution. Therefore, it is a challenging task to formulate a completely adaptive framework to veraciously localize the face from such a subject-specific modality. Our model consists of first extracting the most probable facial regions by minimum error thresholding, followed by adaptive methods that leverage the horizontal and vertical projections of the segmented thermal image. Additionally, the model incorporates our domain knowledge of exploiting the temperature difference between strategic locations of the face. To the best of our knowledge, this is the pioneering work on detecting faces in thermal facial images comprising both patients and normal subjects. Previous work on face detection has not specifically targeted automated medical diagnosis; the face bounding boxes returned by those algorithms are thus loose and not apt for further medical automation. Our algorithm significantly outperforms contemporary face detection algorithms in terms of commonly used metrics for evaluating face detection accuracy. Since our method has been tested on a challenging dataset consisting of both patients and normal subjects of diverse age groups, it can be seamlessly adapted in any DITI-guided facial healthcare or biometric application.

  17. A multi-view face recognition system based on cascade face detector and improved Dlib

    NASA Astrophysics Data System (ADS)

    Zhou, Hongjun; Chen, Pei; Shen, Wei

    2018-03-01

    In this research, we present a framework for a multi-view face detection and recognition system based on a cascade face detector and improved Dlib. This method aims to solve the problems of low efficiency and low accuracy in multi-view face recognition, to build a multi-view face recognition system, and to discover a suitable monitoring scheme. For face detection, the cascade face detector extracts Haar-like features from the training samples, and these features are used to train a cascade classifier with the AdaBoost algorithm. Next, for face recognition, we propose an improved distance model based on Dlib to improve the accuracy of multi-view face recognition. Furthermore, we applied the proposed method to recognizing face images taken from different viewing directions, including horizontal, overhead, and looking-up views, and investigated a suitable monitoring scheme. The method works well for multi-view face recognition, and it has been simulated and tested, showing satisfactory experimental results.

  18. The biometric-based module of smart grid system

    NASA Astrophysics Data System (ADS)

    Engel, E.; Kovalev, I. V.; Ermoshkina, A.

    2015-10-01

    Within the Smart Grid concept, a flexible biometric-based module based on Principal Component Analysis (PCA) and a selective neural network is developed. To form the selective neural network, the biometric-based module uses a method that includes three main stages: preliminary processing of the image, face localization, and face recognition. Experiments on the Yale face database show that (i) the selective neural network exhibits promising classification capability for face detection and recognition problems; and (ii) the proposed biometric-based module achieves near real-time face detection and recognition speed with competitive performance compared to some existing subspace-based methods.

  19. Image preprocessing study on KPCA-based face recognition

    NASA Astrophysics Data System (ADS)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, an important biometric identification method with friendly, natural, and convenient advantages, has attracted more and more attention. This paper studies a face recognition system comprising face detection, feature extraction, and face recognition. It examines the related theory and key technology of various preprocessing methods in the face detection process and, using the KPCA method, focuses on the recognition results obtained with different preprocessing methods. We choose the YCbCr color space for skin segmentation and integral projection for face location. We use erosion and dilation (opening and closing operations) and an illumination compensation method to preprocess the face images, and then apply a face recognition method based on kernel principal component analysis; the experiments were carried out on a typical face database. The algorithms were implemented on the MATLAB platform. Experimental results show that, by using a nonlinear feature extraction method, the kernel-based extension of the PCA algorithm can, under certain conditions, make the extracted features represent the original image information better and thus obtain a higher recognition rate. In the image preprocessing stage, we found that different operations on the images may produce different results, and thus different recognition rates in the recognition stage. At the same time, in kernel principal component analysis, the degree of the polynomial kernel function can affect the recognition result.
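
    A minimal sketch of the KPCA recognition stage discussed above, using scikit-learn in place of the authors' MATLAB implementation. The polynomial degree (the "power of the polynomial function"), the number of components, and the nearest-neighbour classifier are illustrative choices.

```python
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def build_kpca_recognizer(train_images, train_labels, n_components=50, degree=2):
    """train_images: array of shape (n, h, w) of preprocessed face crops."""
    X = train_images.reshape(len(train_images), -1).astype(float)
    model = make_pipeline(
        KernelPCA(n_components=n_components, kernel="poly", degree=degree),
        KNeighborsClassifier(n_neighbors=1),
    )
    model.fit(X, train_labels)
    return model
```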

  20. Varying face occlusion detection and iterative recovery for face recognition

    NASA Astrophysics Data System (ADS)

    Wang, Meng; Hu, Zhengping; Sun, Zhe; Zhao, Shuhuan; Sun, Mei

    2017-05-01

    In most sparse representation methods for face recognition (FR), occlusion problems were usually solved via removing the occlusion part of both query samples and training samples to perform the recognition process. This practice ignores the global feature of facial image and may lead to unsatisfactory results due to the limitation of local features. Considering the aforementioned drawback, we propose a method called varying occlusion detection and iterative recovery for FR. The main contributions of our method are as follows: (1) to detect an accurate occlusion area of facial images, an image processing and intersection-based clustering combination method is used for occlusion FR; (2) according to an accurate occlusion map, the new integrated facial images are recovered iteratively and put into a recognition process; and (3) the effectiveness on recognition accuracy of our method is verified by comparing it with three typical occlusion map detection methods. Experiments show that the proposed method has a highly accurate detection and recovery performance and that it outperforms several similar state-of-the-art methods against partial contiguous occlusion.

  1. Audio-video feature correlation: faces and speech

    NASA Astrophysics Data System (ADS)

    Durand, Gwenael; Montacie, Claude; Caraty, Marie-Jose; Faudemay, Pascal

    1999-08-01

    This paper presents a study of the correlation between features automatically extracted from the audio stream and the video stream of audiovisual documents. In particular, we were interested in finding out whether speech analysis tools could be combined with face detection methods, and to what extent they should be combined. A generic audio signal partitioning algorithm was first used to detect Silence/Noise/Music/Speech segments in a full-length movie. A generic object detection method was applied to the keyframes extracted from the movie in order to detect the presence or absence of faces. The correlation between the presence of a face in the keyframes and of the corresponding voice in the audio stream was studied. A third stream, the script of the movie, is warped onto the speech channel in order to automatically label faces appearing in the keyframes with the name of the corresponding character. We naturally found that the extracted audio and video features were related in many cases, and that significant benefits can be obtained from the joint use of audio and video analysis methods.

  2. Face Detection Technique as Interactive Audio/Video Controller for a Mother-Tongue-Based Instructional Material

    NASA Astrophysics Data System (ADS)

    Guidang, Excel Philip B.; Llanda, Christopher John R.; Palaoag, Thelma D.

    2018-03-01

    Face Detection Technique as a strategy in controlling a multimedia instructional material was implemented in this study. Specifically, it achieved the following objectives: 1) developed a face detection application that controls an embedded mother-tongue-based instructional material for face-recognition configuration using Python; 2) determined the perceptions of the students using the Mutt Susan’s student app review rubric. The study concludes that face detection technique is effective in controlling an electronic instructional material. It can be used to change the method of interaction of the student with an instructional material. 90% of the students perceived the application to be a great app and 10% rated the application to be good.

  3. Moving human full body and body parts detection, tracking, and applications on human activity estimation, walking pattern and face recognition

    NASA Astrophysics Data System (ADS)

    Chen, Hai-Wen; McGurr, Mike

    2016-05-01

    We have developed a new way for detection and tracking of human full-body and body-parts with color (intensity) patch morphological segmentation and adaptive thresholding for security surveillance cameras. An adaptive threshold scheme has been developed for dealing with body size changes, illumination condition changes, and cross-camera parameter changes. Tests with the PETS 2009 and 2014 datasets show that we can obtain a high probability of detection and a low probability of false alarm for the full body. Test results indicate that our human full-body detection method can considerably outperform the current state-of-the-art methods in both detection performance and computational complexity. Furthermore, in this paper, we have developed several methods using color features for detection and tracking of human body-parts (arms, legs, torso, and head, etc.). For example, we have developed a human skin color sub-patch segmentation algorithm by first conducting an RGB to YIQ transformation and then applying a Subtractive I/Q image Fusion with morphological operations. With this method, we can reliably detect and track skin-color-related body-parts such as the face, neck, arms, and legs. Reliable body-part (e.g. head) detection allows us to continuously track the individual person even when multiple closely spaced persons are merged. Accordingly, we have developed a new algorithm to split a merged detection blob back into individual detections based on the detected head positions. Detected body-parts also allow us to extract important local constellation features of the body-part positions and angles relative to the full body. These features are useful for human walking gait pattern recognition and human pose (e.g. standing or falling down) estimation for potential abnormal behavior and accidental event detection, as evidenced by our experimental tests. Furthermore, based on the reliable head (face) tracking, we have applied a super-resolution algorithm to enhance the face resolution for improved human face recognition performance.

  4. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors

    PubMed Central

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-01-01

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases. PMID:29495417
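
    The hybrid-feature idea can be sketched as below: concatenate handcrafted multi-level LBP histograms with a deep embedding and train an SVM on the result. The deep branch is left as an assumed `cnn_embed` callable (any pretrained CNN feature extractor); the LBP radii and SVM settings are illustrative, not the authors' configuration.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def mlbp_histogram(gray, radii=(1, 2, 3)):
    """Multi-level LBP: uniform LBP histograms at several radii, concatenated."""
    feats = []
    for r in radii:
        n_points = 8 * r
        lbp = local_binary_pattern(gray, n_points, r, method="uniform")
        hist, _ = np.histogram(lbp, bins=n_points + 2, range=(0, n_points + 2),
                               density=True)
        feats.append(hist)
    return np.concatenate(feats)

def hybrid_feature(gray, cnn_embed):
    """cnn_embed: assumed callable returning a fixed-length deep feature vector."""
    return np.concatenate([mlbp_histogram(gray), cnn_embed(gray)])

def train_pad_classifier(face_grays, labels, cnn_embed):
    X = np.stack([hybrid_feature(g, cnn_embed) for g in face_grays])
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(X, labels)                     # labels: 1 = real, 0 = presentation attack
    return clf
```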

  5. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors.

    PubMed

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-02-26

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases.

  6. Detecting gear tooth fracture in a high contact ratio face gear mesh

    NASA Technical Reports Server (NTRS)

    Zakrajsek, James J.; Handschuh, Robert F.; Lewicki, David G.; Decker, Harry J.

    1995-01-01

    This paper summarized the results of a study in which three different vibration diagnostic methods were used to detect gear tooth fracture in a high contact ratio face gear mesh. The NASA spiral bevel gear fatigue test rig was used to produce unseeded fault, natural failures of four face gear specimens. During the fatigue tests, which were run to determine load capacity and primary failure mechanisms for face gears, vibration signals were monitored and recorded for gear diagnostic purposes. Gear tooth bending fatigue and surface pitting were the primary failure modes found in the tests. The damage ranged from partial tooth fracture on a single tooth in one test to heavy wear, severe pitting, and complete tooth fracture of several teeth on another test. Three gear fault detection techniques, FM4, NA4*, and NB4, were applied to the experimental data. These methods use the signal average in both the time and frequency domain. Method NA4* was able to conclusively detect the gear tooth fractures in three out of the four fatigue tests, along with gear tooth surface pitting and heavy wear. For multiple tooth fractures, all of the methods gave a clear indication of the damage. It was also found that due to the high contact ratio of the face gear mesh, single tooth fractures did not significantly affect the vibration signal, making this type of failure difficult to detect.

  7. A Smart Spoofing Face Detector by Display Features Analysis.

    PubMed

    Lai, ChinLun; Tai, ChiuYuan

    2016-07-21

    In this paper, a smart face liveness detector is proposed to prevent the biometric system from being "deceived" by a video or picture of a valid user that the counterfeiter took with a high definition handheld device (e.g., iPad with retina display). By analyzing the characteristics of the display platform and using an expert decision-making core, we can effectively detect whether a spoofing action comes from a fake face displayed on a high definition display by verifying the chromaticity regions in the captured face. That is, a live or spoof face can be distinguished precisely by the designed optical image sensor. To sum up, with the proposed method/system, a normal optical image sensor can be upgraded to a powerful version that detects spoofing actions. The experimental results prove that the proposed detection system can achieve a very high detection rate compared to existing methods and is thus practical to implement directly in authentication systems.

  8. Automatic Detection of Acromegaly From Facial Photographs Using Machine Learning Methods.

    PubMed

    Kong, Xiangyi; Gong, Shun; Su, Lijuan; Howard, Newton; Kong, Yanguo

    2018-01-01

    Automatic early detection of acromegaly is theoretically possible from facial photographs, which can lessen the prevalence and increase the cure probability. In this study, several popular machine learning algorithms were used to train a retrospective development dataset consisting of 527 acromegaly patients and 596 normal subjects. We firstly used OpenCV to detect the face bounding rectangle box, and then cropped and resized it to the same pixel dimensions. From the detected faces, locations of facial landmarks which were the potential clinical indicators were extracted. Frontalization was then adopted to synthesize frontal facing views to improve the performance. Several popular machine learning methods including LM, KNN, SVM, RT, CNN, and EM were used to automatically identify acromegaly from the detected facial photographs, extracted facial landmarks, and synthesized frontal faces. The trained models were evaluated using a separate dataset, of which half were diagnosed as acromegaly by growth hormone suppression test. The best result of our proposed methods showed a PPV of 96%, a NPV of 95%, a sensitivity of 96% and a specificity of 96%. Artificial intelligence can automatically early detect acromegaly with a high sensitivity and specificity. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  9. Face Liveness Detection Using Defocus

    PubMed Central

    Kim, Sooyeon; Ban, Yuseok; Lee, Sangyoun

    2015-01-01

    In order to develop security systems for identity authentication, face recognition (FR) technology has been applied. One of the main problems of applying FR technology is that the systems are especially vulnerable to attacks with spoofing faces (e.g., 2D pictures). To defend from these attacks and to enhance the reliability of FR systems, many anti-spoofing approaches have been recently developed. In this paper, we propose a method for face liveness detection using the effect of defocus. From two images sequentially taken at different focuses, three features, focus, power histogram and gradient location and orientation histogram (GLOH), are extracted. Afterwards, we detect forged faces through the feature-level fusion approach. For reliable performance verification, we develop two databases with a handheld digital camera and a webcam. The proposed method achieves a 3.29% half total error rate (HTER) at a given depth of field (DoF) and can be extended to camera-equipped devices, like smartphones. PMID:25594594

  10. Pornographic information of Internet views detection method based on the connected areas

    NASA Astrophysics Data System (ADS)

    Wang, Huibai; Fan, Ajie

    2017-01-01

    Nowadays, online porn video broadcasting and downloading is very popular. In view of the widespread phenomenon of Internet pornography, this paper proposes a new method of pornographic video detection based on connected areas. Firstly, the video is decoded into a series of static images and skin color is detected on the extracted key frames. If the skin-color area reaches a certain threshold, the AdaBoost algorithm is used to detect the human face. Finally, the connectivity between the human face and the large skin-color area is judged to determine whether a sensitive region is present. The experimental results show that the method can effectively exclude non-pornographic videos containing people wearing little clothing. This method can improve the efficiency and reduce the workload of detection.

  11. Segmentation of human face using gradient-based approach

    NASA Astrophysics Data System (ADS)

    Baskan, Selin; Bulut, M. Mete; Atalay, Volkan

    2001-04-01

    This paper describes a method for automatic segmentation of facial features such as eyebrows, eyes, nose, mouth, and ears in color images. This work is an initial step for a wide range of applications based on feature-based approaches, such as face recognition, lip-reading, gender estimation, facial expression analysis, etc. The human face can be characterized by its skin color and nearly elliptical shape. For this purpose, face detection is performed using color and shape information. Uniform illumination is assumed. No restrictions on glasses, make-up, beard, etc. are imposed. Facial features are extracted using vertically and horizontally oriented gradient projections. The gradient of a minimum with respect to its neighboring maxima gives the boundaries of a facial feature. Each facial feature has a different horizontal characteristic. These characteristics were derived by extensive experimentation with many face images. Using fuzzy set theory, the similarity between the candidate and the feature characteristic under consideration is calculated. The gradient-based method is accompanied by anthropometric information for robustness. Ear detection is performed using contour-based shape descriptors. This method detects the facial features and circumscribes each facial feature with the smallest rectangle possible. The AR database is used for testing. The developed method is also suitable for real-time systems.
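
    The projection idea can be sketched as follows: the horizontal projection of the gradient magnitude of a face crop peaks at rows containing strong horizontal structure (eyebrows, eyes, mouth). The Sobel gradient and the simple peak picking below are illustrative stand-ins for the paper's projections.

```python
import cv2
import numpy as np
from scipy.signal import find_peaks

def candidate_feature_rows(face_gray, n_rows=3):
    gx = cv2.Sobel(face_gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(face_gray, cv2.CV_32F, 0, 1)
    mag = cv2.magnitude(gx, gy)
    horizontal_projection = mag.sum(axis=1)            # one value per image row
    peaks, _ = find_peaks(horizontal_projection,
                          distance=max(face_gray.shape[0] // 10, 1))
    strongest = peaks[np.argsort(horizontal_projection[peaks])[::-1][:n_rows]]
    return np.sort(strongest)                          # e.g. eyebrow/eye/mouth rows
```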

  12. Facial recognition in education system

    NASA Astrophysics Data System (ADS)

    Krithika, L. B.; Venkatesh, K.; Rathore, S.; Kumar, M. Harish

    2017-11-01

    Human beings make comprehensive use of emotions to convey messages and to resolve them. Emotion detection and face recognition can provide an interface between individuals and technologies. The most successful applications of recognition analysis involve the recognition of faces. Many different techniques have been used to recognize facial expressions and to detect emotions under varying poses. In this paper, we present an efficient method that recognizes facial expressions by tracking face points and distances. It can automatically identify the observed face movements and facial expressions in images, capturing different aspects of emotion and facial expression.

  13. Privacy protection in surveillance systems based on JPEG DCT baseline compression and spectral domain watermarking

    NASA Astrophysics Data System (ADS)

    Sablik, Thomas; Velten, Jörg; Kummert, Anton

    2015-03-01

    A novel system for automatic privacy protection in digital media, based on spectral-domain watermarking and JPEG compression, is described in the present paper. In a first step, private areas are detected; a detection method is presented for this purpose. The implemented method uses Haar cascades to detect faces, with integral images used to speed up the calculations and the detection. Multiple detections of one face are combined. Succeeding steps comprise embedding the data into the image as part of JPEG compression using spectral-domain methods and protecting the area of privacy. The embedding process is integrated into and adapted to JPEG compression. A spread spectrum watermarking method is used to embed the size and position of the private areas into the cover image. Different embedding methods are compared with regard to their robustness. Moreover, the performance of the method on tampered images is presented.

  14. Automated face detection for occurrence and occupancy estimation in chimpanzees.

    PubMed

    Crunchant, Anne-Sophie; Egerer, Monika; Loos, Alexander; Burghardt, Tilo; Zuberbühler, Klaus; Corogenes, Katherine; Leinert, Vera; Kulik, Lars; Kühl, Hjalmar S

    2017-03-01

    Surveying endangered species is necessary to evaluate conservation effectiveness. Camera trapping and biometric computer vision are recent technological advances. They have had an impact on the methods applicable to field surveys, and these methods have gained significant momentum over the last decade. Yet, most researchers inspect footage manually and few studies have used automated semantic processing of video trap data from the field. The particular aim of this study is to evaluate methods that incorporate automated face detection technology as an aid to estimate site use of two chimpanzee communities based on camera trapping. As a comparative baseline we employ traditional manual inspection of footage. Our analysis focuses specifically on the basic parameter of occurrence, where we assess the performance and practical value of chimpanzee face detection software. We found that the semi-automated data processing required only 2-4% of the time compared to the purely manual analysis. This is a non-negligible increase in efficiency that is critical when assessing the feasibility of camera trap occupancy surveys. Our evaluations suggest that our methodology estimates the proportion of sites used relatively reliably. Chimpanzees are mostly detected when they are present and when videos are filmed in high resolution: the highest recall rate was 77%, for a false alarm rate of 2.8%, for videos containing only chimpanzee frontal face views. Certainly, our study is only a first step for transferring face detection software from the lab into field application. Our results are promising and indicate that the current limitation of detecting chimpanzees in camera trap footage due to lack of suitable face views can be easily overcome at the level of field data collection, that is, by the combined placement of multiple high-resolution cameras facing opposite directions. This will make it possible to routinely conduct chimpanzee occupancy surveys based on camera trapping and semi-automated processing of footage. Using semi-automated ape face detection technology for processing camera trap footage requires only 2-4% of the time compared to manual analysis and allows site use by chimpanzees to be estimated relatively reliably. © 2017 Wiley Periodicals, Inc.

  15. The Face-to-Face Light Detection Paradigm: A New Methodology for Investigating Visuospatial Attention Across Different Face Regions in Live Face-to-Face Communication Settings.

    PubMed

    Thompson, Laura A; Malloy, Daniel M; Cone, John M; Hendrickson, David L

    2010-01-01

    We introduce a novel paradigm for studying the cognitive processes used by listeners within interactive settings. This paradigm places the talker and the listener in the same physical space, creating opportunities for investigations of attention and comprehension processes taking place during interactive discourse situations. An experiment was conducted to compare results from previous research using videotaped stimuli to those obtained within the live face-to-face task paradigm. A headworn apparatus is used to briefly display LEDs on the talker's face in four locations as the talker communicates with the participant. In addition to the primary task of comprehending speeches, participants make a secondary task light detection response. In the present experiment, the talker gave non-emotionally-expressive speeches that were used in past research with videotaped stimuli. Signal detection analysis was employed to determine which areas of the face received the greatest focus of attention. Results replicate previous findings using videotaped methods.

  16. The Face-to-Face Light Detection Paradigm: A New Methodology for Investigating Visuospatial Attention Across Different Face Regions in Live Face-to-Face Communication Settings

    PubMed Central

    Thompson, Laura A.; Malloy, Daniel M.; Cone, John M.; Hendrickson, David L.

    2009-01-01

    We introduce a novel paradigm for studying the cognitive processes used by listeners within interactive settings. This paradigm places the talker and the listener in the same physical space, creating opportunities for investigations of attention and comprehension processes taking place during interactive discourse situations. An experiment was conducted to compare results from previous research using videotaped stimuli to those obtained within the live face-to-face task paradigm. A headworn apparatus is used to briefly display LEDs on the talker’s face in four locations as the talker communicates with the participant. In addition to the primary task of comprehending speeches, participants make a secondary task light detection response. In the present experiment, the talker gave non-emotionally-expressive speeches that were used in past research with videotaped stimuli. Signal detection analysis was employed to determine which areas of the face received the greatest focus of attention. Results replicate previous findings using videotaped methods. PMID:21113354

  17. Automated Inspection of Defects in Optical Fiber Connector End Face Using Novel Morphology Approaches.

    PubMed

    Mei, Shuang; Wang, Yudan; Wen, Guojun; Hu, Yang

    2018-05-03

    Increasing deployment of optical fiber networks and the need for reliable high bandwidth make the task of inspecting optical fiber connector end faces a crucial process that must not be neglected. Traditional end face inspections are usually performed by manual visual methods, which are low in efficiency and poor in precision for long-term industrial applications. More seriously, the inspection results cannot be quantified for subsequent analysis. Aiming at the characteristics of typical defects in the inspection process for optical fiber end faces, we propose a novel method, “difference of min-max ranking filtering” (DO2MR), for detection of region-based defects, e.g., dirt, oil, contamination, pits, and chips, and a special model, a “linear enhancement inspector” (LEI), for the detection of scratches. The DO2MR is a morphology method that intends to determine whether a pixel belongs to a defective region by comparing the difference of gray values of pixels in the neighborhood around the pixel. The LEI is also a morphology method that is designed to search for scratches at different orientations with a special linear detector. These two approaches can be easily integrated into optical inspection equipment for automatic quality verification. As far as we know, this is the first time that complete defect detection methods for optical fiber end faces are available in the literature. Experimental results demonstrate that the proposed DO2MR and LEI models yield good comprehensive performance with high precision and accepted recall rates, and the image-level detection accuracies reach 96.0 and 89.3%, respectively.
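
    One plausible reading of the DO2MR idea is sketched below: a pixel whose local max-min gray-level spread is unusually large is flagged as part of a region-based defect. The window size and the threshold (expressed in standard deviations of the residual) are assumptions for illustration, not the published parameters.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def do2mr_defect_mask(gray, window=5, k=3.0):
    g = gray.astype(float)
    residual = maximum_filter(g, size=window) - minimum_filter(g, size=window)
    threshold = residual.mean() + k * residual.std()
    return residual > threshold        # True where a region-based defect is suspected
```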

  18. Internal representations for face detection: an application of noise-based image classification to BOLD responses.

    PubMed

    Nestor, Adrian; Vettel, Jean M; Tarr, Michael J

    2013-11-01

    What basic visual structures underlie human face detection and how can we extract such structures directly from the amplitude of neural responses elicited by face processing? Here, we address these issues by investigating an extension of noise-based image classification to BOLD responses recorded in high-level visual areas. First, we assess the applicability of this classification method to such data and, second, we explore its results in connection with the neural processing of faces. To this end, we construct luminance templates from white noise fields based on the response of face-selective areas in the human ventral cortex. Using behaviorally and neurally-derived classification images, our results reveal a family of simple but robust image structures subserving face representation and detection. Thus, we confirm the role played by classical face selective regions in face detection and we help clarify the representational basis of this perceptual function. From a theory standpoint, our findings support the idea of simple but highly diagnostic neurally-coded features for face detection. At the same time, from a methodological perspective, our work demonstrates the ability of noise-based image classification in conjunction with fMRI to help uncover the structure of high-level perceptual representations. Copyright © 2012 Wiley Periodicals, Inc.

  19. The review and results of different methods for facial recognition

    NASA Astrophysics Data System (ADS)

    Le, Yifan

    2017-09-01

    In recent years, facial recognition has drawn much attention due to its wide potential applications. As a unique technology in biometric identification, facial recognition represents a significant improvement because it can operate without the cooperation of the people under detection. Hence, facial recognition can be applied to defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method with more accurate localization on specific databases; (2) a statistical face frontalization method that outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm that handles images with severe occlusion and large head poses; and (4) three face alignment methods, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method that regresses local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and their performance on various databases. In addition, some improvement measures and suggestions for potential applications are put forward.

  20. Skin Color Segmentation Using Coarse-to-Fine Region on Normalized RGB Chromaticity Diagram for Face Detection

    NASA Astrophysics Data System (ADS)

    Soetedjo, Aryuanto; Yamada, Koichi

    This paper describes a new color segmentation based on a normalized RGB chromaticity diagram for face detection. Face skin is extracted from color images using a coarse skin region with fixed boundaries followed by a fine skin region with variable boundaries. Two newly developed histograms that have prominent peaks of skin color and non-skin colors are employed to adjust the boundaries of the skin region. The proposed approach does not need a skin color model, which depends on a specific camera parameter and is usually limited to a particular environment condition, and no sample images are required. The experimental results using color face images of various races under varying lighting conditions and complex backgrounds, obtained from four different resources on the Internet, show a high detection rate of 87%. The results of the detection rate and computation time are comparable to the well known real-time face detection method proposed by Viola-Jones [11], [12].
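
    A sketch of the coarse stage on the normalized RGB (r, g) chromaticity plane is given below; the fixed coarse boundaries are illustrative values, not the paper's, and the fine, histogram-adjusted stage is omitted.

```python
import numpy as np

def coarse_skin_mask(rgb):
    """rgb: H x W x 3 image; returns a boolean mask of coarse skin pixels."""
    rgb = rgb.astype(float)
    s = rgb.sum(axis=2) + 1e-6
    r = rgb[..., 0] / s            # normalized red chromaticity
    g = rgb[..., 1] / s            # normalized green chromaticity
    return (r > 0.36) & (r < 0.47) & (g > 0.28) & (g < 0.35)
```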

  1. Deficient cortical face-sensitive N170 responses and basic visual processing in schizophrenia.

    PubMed

    Maher, S; Mashhoon, Y; Ekstrom, T; Lukas, S; Chen, Y

    2016-01-01

    Face detection, an ability to identify a visual stimulus as a face, is impaired in patients with schizophrenia. It is unclear whether impaired face processing in this psychiatric disorder results from face-specific domains or stems from more basic visual domains. In this study, we examined cortical face-sensitive N170 response in schizophrenia, taking into account deficient basic visual contrast processing. We equalized visual contrast signals among patients (n=20) and controls (n=20) and between face and tree images, based on their individual perceptual capacities (determined using psychophysical methods). We measured N170, a putative temporal marker of face processing, during face detection and tree detection. In controls, N170 amplitudes were significantly greater for faces than trees across all three visual contrast levels tested (perceptual threshold, two times perceptual threshold and 100%). In patients, however, N170 amplitudes did not differ between faces and trees, indicating diminished face selectivity (indexed by the differential responses to face vs. tree). These results indicate a lack of face-selectivity in temporal responses of brain machinery putatively responsible for face processing in schizophrenia. This neuroimaging finding suggests that face-specific processing is compromised in this psychiatric disorder. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Directional templates for real-time detection of coronal axis rotated faces

    NASA Astrophysics Data System (ADS)

    Perez, Claudio A.; Estevez, Pablo A.; Garate, Patricio

    2004-10-01

    Real-time face and iris detection on video images has gained renewed attention because of multiple possible applications in studying eye function, drowsiness detection, virtual keyboard interfaces, face recognition, video processing and multimedia retrieval. In this paper, a study is presented on using directional templates for the detection of faces rotated about the coronal axis. The templates are built by extracting the directional image information from the regions of the eyes, nose and mouth. The face position is determined by computing a line integral of the face directional image over the template; the line integral reaches a maximum when the template coincides with the face position. The directional template increases the value of the line integral at the true face position, improving localization selectivity, and similar improvements were found across face sizes and face rotation angles. Based on these results, the new templates should improve selectivity and hence make it possible to restrict computation to fewer templates and to a smaller search region during face and eye tracking. The proposed method is real time and completely non-invasive, and was applied with no background limitation under normal indoor illumination conditions.
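    The following sketch illustrates the general idea of scoring a directional template against a gradient-orientation image; the template format, the Sobel-based orientation estimate and the cosine scoring are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np
import cv2

def directional_image(gray):
    """Gradient-orientation (directional) image via Sobel derivatives."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return np.arctan2(gy, gx)          # orientation in radians

def template_line_integral(theta_img, template, cx, cy):
    """Sum of orientation agreement along a template placed at (cx, cy).

    `template` is a list of (dx, dy, expected_theta) samples around the
    eyes/nose/mouth regions; the cosine of the doubled angle difference
    ignores gradient polarity. All names here are illustrative.
    """
    h, w = theta_img.shape
    score = 0.0
    for dx, dy, expected in template:
        x, y = cx + dx, cy + dy
        if 0 <= x < w and 0 <= y < h:
            score += np.cos(2.0 * (theta_img[y, x] - expected))
    return score
```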

  3. Automatic Detection of Frontal Face Midline by Chain-coded Merlin-Farber Hough Transform

    NASA Astrophysics Data System (ADS)

    Okamoto, Daichi; Ohyama, Wataru; Wakabayashi, Tetsushi; Kimura, Fumitaka

    We propose a novel approach for detection of the facial midline (facial symmetry axis) from a frontal face image. The facial midline has several applications, for instance reducing the computational cost of facial feature extraction (FFE) and postoperative assessment for cosmetic or dental surgery. The proposed method detects the facial midline of a frontal face from an edge image as the symmetry axis using the Merlin-Farber Hough transform (MFHT). A new performance improvement scheme for midline detection by MFHT is also presented. The main concept of the proposed scheme is suppression of redundant votes in the Hough parameter space by introducing a chain code representation of the binary edge image. Experimental results on an image dataset containing 2409 images from the FERET database indicate that the proposed algorithm improves the accuracy of midline detection from 89.9% to 95.1% for face images with different scales and rotations.
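    A much-simplified illustration of symmetry-axis voting is sketched below: edge-pixel pairs on the same row vote for the x-coordinate of their midpoint, and the accumulator peak is taken as a near-vertical midline. The full chain-coded Merlin-Farber Hough transform is more general; the Canny parameters and the vertical-axis restriction are assumptions made here for brevity.

```python
import numpy as np
import cv2

def vertical_midline(gray, canny_lo=50, canny_hi=150):
    """Estimate a near-vertical facial midline by symmetry voting.

    Every pair of edge pixels on the same row votes for the x-coordinate
    of its midpoint; the accumulator peak is returned as the symmetry
    axis. Parameters are illustrative.
    """
    edges = cv2.Canny(gray, canny_lo, canny_hi)
    acc = np.zeros(gray.shape[1], dtype=np.int64)
    for row in edges:
        xs = np.flatnonzero(row)
        if len(xs) < 2:
            continue
        mids = (xs[:, None] + xs[None, :]) // 2   # midpoints of all pairs
        acc += np.bincount(mids.ravel(), minlength=len(acc))
    return int(np.argmax(acc))
```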

  4. Adapting Local Features for Face Detection in Thermal Image.

    PubMed

    Ma, Chao; Trung, Ngo Thanh; Uchiyama, Hideaki; Nagahara, Hajime; Shimada, Atsushi; Taniguchi, Rin-Ichiro

    2017-11-27

    A thermal camera captures the temperature distribution of a scene as a thermal image. In thermal images, facial appearances of different people under different lighting conditions are similar. This is because facial temperature distribution is generally constant and not affected by lighting conditions. This similarity in face appearance is advantageous for face detection. To detect faces in thermal images, cascade classifiers with Haar-like features are generally used. However, there are few studies exploring local features for face detection in thermal images. In this paper, we introduce two approaches that rely on local features for face detection in thermal images. First, we create new feature types by extending Multi-Block LBP. We consider a margin around the reference and the generally constant distribution of facial temperature. In this way, we make the features more robust to image noise and more effective for face detection in thermal images. Second, we propose an AdaBoost-based training method to obtain cascade classifiers with multiple types of local features. These feature types have different advantages, and combining them enhances the descriptive power of the local features. We performed a hold-out validation experiment and a field experiment. In the hold-out validation experiment, we captured a dataset from 20 participants, comprising 14 males and 6 females. For each participant, we captured 420 images with 10 variations in camera distance, 21 poses, and 2 appearances (participant with/without glasses). We compared the performance of cascade classifiers trained with different sets of features. The experimental results showed that the proposed approaches effectively improve the performance of face detection in thermal images. In the field experiment, we compared the face detection performance in realistic scenes using thermal and RGB images, and discussed the results.
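    A minimal sketch of the standard Multi-Block LBP code is given below; the `margin` parameter only hints at the noise margin described in the paper, and its exact use there is an assumption.

```python
import numpy as np

def mb_lbp_code(img, x, y, bw, bh, margin=0.0):
    """Multi-Block LBP code for the 3x3 block grid whose top-left is (x, y).

    Each of the 8 neighbouring blocks of size (bw, bh) is compared against
    the centre block's mean intensity; `margin` is an illustrative noise
    margin added to the comparison.
    """
    means = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            block = img[y + i * bh: y + (i + 1) * bh,
                        x + j * bw: x + (j + 1) * bw]
            means[i, j] = block.mean()
    centre = means[1, 1]
    # clockwise neighbour order starting at the top-left block
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (i, j) in enumerate(order):
        if means[i, j] >= centre + margin:
            code |= 1 << bit
    return code
```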

  5. Automated Detection of Actinic Keratoses in Clinical Photographs

    PubMed Central

    Hames, Samuel C.; Sinnya, Sudipta; Tan, Jean-Marie; Morze, Conrad; Sahebian, Azadeh; Soyer, H. Peter; Prow, Tarl W.

    2015-01-01

    Background Clinical diagnosis of actinic keratosis is known to have intra- and inter-observer variability, and there is currently no non-invasive and objective measure to diagnose these lesions. Objective The aim of this pilot study was to determine if automatically detecting and circumscribing actinic keratoses in clinical photographs is feasible. Methods Photographs of the face and dorsal forearms were acquired in 20 volunteers from two groups: the first with at least one actinic keratosis present on the face and each arm, the second with no actinic keratoses. The photographs were automatically analysed using colour space transforms and morphological features to detect erythema. The automated output was compared with a senior consultant dermatologist's assessment of the photographs, including the intra-observer variability. Performance was assessed by the correlation between the total lesions detected by the automated method and the dermatologist, and by whether the individual lesions detected were in the same location as the dermatologist-identified lesions. Additionally, the ability to limit false positives was assessed by automatic assessment of the photographs from the no actinic keratosis group in comparison to the high actinic keratosis group. Results The correlation between the automatic and dermatologist counts was 0.62 on the face and 0.51 on the arms, compared to the dermatologist's intra-observer variation of 0.83 and 0.93 for the same. Sensitivity of automatic detection was 39.5% on the face, 53.1% on the arms. Positive predictive values were 13.9% on the face and 39.8% on the arms. Significantly more lesions (p<0.0001) were detected in the high actinic keratosis group compared to the no actinic keratosis group. Conclusions The proposed method was inferior to assessment by the dermatologist in terms of sensitivity and positive predictive value. However, this pilot study used only a single simple feature and was still able to achieve a detection sensitivity of 53.1% on the arms. This suggests that image analysis is a feasible avenue of investigation for overcoming variability in clinical assessment. Future studies should focus on more sophisticated features to improve sensitivity for actinic keratoses without erythema and limit false positives associated with the anatomical structures on the face. PMID:25615930

  6. Enhancing the performance of cooperative face detector by NFGS

    NASA Astrophysics Data System (ADS)

    Yesugade, Snehal; Dave, Palak; Srivastava, Srinkhala; Das, Apurba

    2015-07-01

    Computerized human face detection is an important task of deformable pattern recognition in today's world. Especially in cooperative authentication scenarios such as ATM fraud detection, attendance recording, video tracking and video surveillance, the accuracy, memory utilization and speed of the face detection engine have been active areas of research for the last decade. Haar-based face detection and SIFT- or EBGM-based face recognition systems are fairly reliable in this regard, but their features are extracted from gray textures, and when the input is a high-resolution online video with a fairly large viewing area, a Haar detector must search for faces everywhere (say 352×250 pixels) and all the time (e.g., at a 30 FPS capture rate). In the current paper we propose to address both of the aforementioned scenarios with a neuro-visually inspired method of figure-ground segregation (NFGS) [5] that produces a two-dimensional binary array from the gray face image. The NFGS identifies the reference video frame at a low sampling rate and updates it upon significant changes of the environment, such as illumination. The proposed algorithm triggers the face detector only when a new entity enters the viewing area. To improve detection accuracy, the classical face detector is enabled only in a narrowed-down region of interest (RoI) supplied by the NFGS. The RoI is updated online in each frame to follow the moving entity, which in turn improves both the FR (False Rejection) and FA (False Acceptance) rates of the face detection system.
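    The gating idea can be illustrated with a simple frame-difference mask standing in for the NFGS output; the cascade file, thresholds and RoI handling below are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

# Illustrative stand-in for the NFGS trigger: a frame-difference
# figure/ground mask decides whether (and where) the Haar detector runs.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_if_changed(reference, frame, diff_thresh=25, min_changed=500):
    """Run the face detector only inside the region that changed."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, reference)
    changed = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)[1]
    ys, xs = np.nonzero(changed)
    if len(xs) < min_changed:
        return []                      # no new entity: detector stays idle
    x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()
    roi = gray[y0:y1 + 1, x0:x1 + 1]   # narrowed-down region of interest
    faces = face_cascade.detectMultiScale(roi, 1.1, 5)
    return [(x + x0, y + y0, w, h) for (x, y, w, h) in faces]
```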

  7. A smart technique for attendance system to recognize faces through parallelism

    NASA Astrophysics Data System (ADS)

    Prabhavathi, B.; Tanuja, V.; Madhu Viswanatham, V.; Rajashekhara Babu, M.

    2017-11-01

    A major part of recognising a person is the face, and with the help of image processing techniques we can exploit the physical features of a person. In the old approach used in schools and colleges, the professor calls each student's name and then marks the attendance. In this paper we deviate from that old approach and instead use image processing techniques. We present a system for automatically recording the presence of students in a classroom. First, an image of the classroom is captured and stored in a data record. To the images stored in the database we apply an algorithm that includes steps such as histogram classification, noise removal, face detection and face recognition. Using these steps we detect the faces and then compare them with the database. The attendance is marked automatically if the system recognizes the faces.

  8. The safety helmet detection technology and its application to the surveillance system.

    PubMed

    Wen, Che-Yen

    2004-07-01

    The Automatic Teller Machine (ATM) plays an important role in the modern economy. It provides a fast and convenient way to process transactions between banks and their customers. Unfortunately, it also provides a convenient way for criminals to get illegal money or use stolen ATM cards to extract money from their victims' accounts. For safety reasons, each ATM has a surveillance system to record customers' face information. However, when criminals use an ATM to withdraw money illegally, they usually hide their faces with something (in Taiwan, criminals usually use safety helmets to block their faces) to avoid the surveillance system recording their face information, which decreases the efficiency of the surveillance system. In this paper, we propose a circle/circular arc detection method based upon the modified Hough transform, and apply it to the detection of safety helmets for the surveillance system of ATMs. Since the safety helmet location will be within the set of the obtainable circles/circular arcs (if any exist), we use geometric features to verify if any safety helmet exists in the set. The proposed method can be used to help the surveillance systems record a customer's face information more precisely. If customers wear safety helmets to block their faces, the system can send a message to remind them to take off their helmets. Besides this, the method can be applied to the surveillance systems of banks by providing an early warning safeguard when any "customer" or "intruder" uses a safety helmet to prevent his/her face information from being recorded by the surveillance system. This will make the surveillance system more useful. Real images are used to analyze the performance of the proposed method.
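    A hedged sketch of the circle-candidate stage is shown below, using OpenCV's standard Hough circle transform in place of the paper's modified Hough transform; radii and thresholds are illustrative.

```python
import cv2

def detect_helmet_candidates(bgr_frame, min_r=40, max_r=150):
    """Find circle/circular-arc candidates that may correspond to a helmet.

    Uses OpenCV's standard Hough circle transform as a stand-in for the
    paper's modified Hough transform; radii and thresholds are illustrative.
    """
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=80, param1=120, param2=40,
                               minRadius=min_r, maxRadius=max_r)
    if circles is None:
        return []
    # each candidate is (centre_x, centre_y, radius); geometric checks on
    # position and size relative to the frame would follow in the paper
    return [tuple(map(int, c)) for c in circles[0]]
```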

  9. Research on facial expression simulation based on depth image

    NASA Astrophysics Data System (ADS)

    Ding, Sha-sha; Duan, Jin; Zhao, Yi-wu; Xiao, Bo; Wang, Hao

    2017-11-01

    Nowadays, facial expression simulation is widely used in film and television special effects, human-computer interaction and many other fields. The facial expression is captured with a Kinect camera. The AAM algorithm, based on statistical information, is employed to detect and track faces, and a 2D regression algorithm is applied to align the feature points. The facial feature points are detected automatically, while the feature points of the 3D cartoon model are marked manually. The aligned feature points are mapped using keyframe techniques. To improve the animation effect, non-feature points are interpolated based on empirical models, and the mapping and interpolation are performed under the constraint of Bézier curves. Thus the feature points on the cartoon face model can be driven as the facial expression varies, achieving real-time cartoon facial expression simulation. The experimental results show that the method proposed in this paper can accurately simulate facial expressions. Finally, our method is compared with a previous method, and the data show that it greatly improves implementation efficiency.

  10. Global Binary Continuity for Color Face Detection With Complex Background

    NASA Astrophysics Data System (ADS)

    Belavadi, Bhaskar; Mahendra Prashanth, K. V.; Joshi, Sujay S.; Suprathik, N.

    2017-08-01

    In this paper, we propose a method to detect human faces in color images with complex backgrounds. The proposed algorithm makes use of two color space models, HSV and YCgCr. The color-segmented image is filled uniformly with a single color (binary) and then all unwanted discontinuous lines are removed to obtain the final image. Experimental results on the Caltech database show that the proposed model accomplishes far better segmentation for faces of varying orientations, skin colors and background environments.
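    A minimal sketch of combining HSV and YCgCr masks is shown below; the YCgCr conversion follows one common definition (a Cg channel replacing the Cb of BT.601 YCbCr), and all threshold ranges are assumptions rather than the paper's values.

```python
import cv2
import numpy as np

def skin_mask_hsv_ycgcr(bgr):
    """Combine an HSV mask and a YCgCr mask into a binary skin image.

    The YCgCr coefficients follow one common definition; every threshold
    range below is an illustrative assumption.
    """
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hsv_mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))

    b, g, r = [c.astype(np.float64) for c in cv2.split(bgr)]
    cg = 128.0 + (-81.085 * r + 112.0 * g - 30.915 * b) / 255.0
    cr = 128.0 + (112.0 * r - 93.786 * g - 18.214 * b) / 255.0
    ycgcr_mask = ((cg > 110) & (cg < 135) & (cr > 130) & (cr < 165))
    ycgcr_mask = ycgcr_mask.astype(np.uint8) * 255

    return cv2.bitwise_and(hsv_mask, ycgcr_mask)
```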

  11. Face detection in color images using skin color, Laplacian of Gaussian, and Euler number

    NASA Astrophysics Data System (ADS)

    Saligrama Sundara Raman, Shylaja; Kannanedhi Narasimha Sastry, Balasubramanya Murthy; Subramanyam, Natarajan; Senkutuvan, Ramya; Srikanth, Radhika; John, Nikita; Rao, Prateek

    2010-02-01

    In this paper, a feature-based approach to face detection is proposed using an ensemble of algorithms. The method uses chrominance values and edge features to classify the image into skin and non-skin regions. The edge detector used for this purpose is the Laplacian of Gaussian (LoG), which is found to be appropriate for images containing multiple faces and noise. Eight-connectivity analysis of these regions segregates them into probable face or non-face regions. The procedure is made more robust by identifying local features within the skin regions, including the number of holes, the percentage of skin and the golden ratio. The proposed method has been tested on color face images of various races obtained from different sources, and its performance is found to be encouraging, as the color segmentation cleans up almost all the complex facial features. The result obtained has a calculated accuracy of 86.5% on a test set of 230 images.
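    Two of the building blocks, the LoG edge map and the hole count used as a local feature, can be sketched as follows; the thresholds and the OpenCV-based hole counting are illustrative assumptions.

```python
import cv2
import numpy as np

def log_edge_map(gray, sigma=1.4, edge_thresh=8):
    """Laplacian-of-Gaussian edge map (sigma and threshold are illustrative)."""
    blurred = cv2.GaussianBlur(gray, (0, 0), sigma)
    return np.abs(cv2.Laplacian(blurred, cv2.CV_64F)) > edge_thresh

def hole_count(region_mask):
    """Number of holes in a binary skin region (eyes/mouth appear as holes).

    Contours with a parent in the RETR_CCOMP hierarchy are inner
    boundaries; this equals the number of connected components minus the
    Euler number.
    """
    mask = (region_mask > 0).astype(np.uint8) * 255
    contours, hierarchy = cv2.findContours(mask, cv2.RETR_CCOMP,
                                           cv2.CHAIN_APPROX_SIMPLE)[-2:]
    if hierarchy is None:
        return 0
    return int(np.sum(hierarchy[0][:, 3] >= 0))
```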

  12. A brain-computer interface for potential non-verbal facial communication based on EEG signals related to specific emotions

    PubMed Central

    Kashihara, Koji

    2014-01-01

    Unlike assistive technology for verbal communication, the brain-machine or brain-computer interface (BMI/BCI) has not been established as a non-verbal communication tool for amyotrophic lateral sclerosis (ALS) patients. Face-to-face communication enables access to rich emotional information, but individuals suffering from neurological disorders, such as ALS and autism, may not express their emotions or communicate their negative feelings. Although emotions may be inferred by looking at facial expressions, emotional prediction for neutral faces necessitates advanced judgment. The process that underlies brain neuronal responses to neutral faces and causes emotional changes remains unknown. To address this problem, therefore, this study attempted to decode conditioned emotional reactions to neutral face stimuli. This direction was motivated by the assumption that if electroencephalogram (EEG) signals can be used to detect patients' emotional responses to specific inexpressive faces, the results could be incorporated into the design and development of BMI/BCI-based non-verbal communication tools. To these ends, this study investigated how a neutral face associated with a negative emotion modulates rapid central responses in face processing and then identified cortical activities. The conditioned neutral face-triggered event-related potentials that originated from the posterior temporal lobe statistically significantly changed during late face processing (600–700 ms) after stimulus, rather than in early face processing activities, such as P1 and N170 responses. Source localization revealed that the conditioned neutral faces increased activity in the right fusiform gyrus (FG). This study also developed an efficient method for detecting implicit negative emotional responses to specific faces by using EEG signals. A classification method based on a support vector machine enables the easy classification of neutral faces that trigger specific individual emotions. In accordance with this classification, a face on a computer morphs into a sad or displeased countenance. The proposed method could be incorporated as a part of non-verbal communication tools to enable emotional expression. PMID:25206321

  13. A brain-computer interface for potential non-verbal facial communication based on EEG signals related to specific emotions.

    PubMed

    Kashihara, Koji

    2014-01-01

    Unlike assistive technology for verbal communication, the brain-machine or brain-computer interface (BMI/BCI) has not been established as a non-verbal communication tool for amyotrophic lateral sclerosis (ALS) patients. Face-to-face communication enables access to rich emotional information, but individuals suffering from neurological disorders, such as ALS and autism, may not express their emotions or communicate their negative feelings. Although emotions may be inferred by looking at facial expressions, emotional prediction for neutral faces necessitates advanced judgment. The process that underlies brain neuronal responses to neutral faces and causes emotional changes remains unknown. To address this problem, therefore, this study attempted to decode conditioned emotional reactions to neutral face stimuli. This direction was motivated by the assumption that if electroencephalogram (EEG) signals can be used to detect patients' emotional responses to specific inexpressive faces, the results could be incorporated into the design and development of BMI/BCI-based non-verbal communication tools. To these ends, this study investigated how a neutral face associated with a negative emotion modulates rapid central responses in face processing and then identified cortical activities. The conditioned neutral face-triggered event-related potentials that originated from the posterior temporal lobe statistically significantly changed during late face processing (600-700 ms) after stimulus, rather than in early face processing activities, such as P1 and N170 responses. Source localization revealed that the conditioned neutral faces increased activity in the right fusiform gyrus (FG). This study also developed an efficient method for detecting implicit negative emotional responses to specific faces by using EEG signals. A classification method based on a support vector machine enables the easy classification of neutral faces that trigger specific individual emotions. In accordance with this classification, a face on a computer morphs into a sad or displeased countenance. The proposed method could be incorporated as a part of non-verbal communication tools to enable emotional expression.

  14. A causal relationship between face-patch activity and face-detection behavior.

    PubMed

    Sadagopan, Srivatsun; Zarco, Wilbert; Freiwald, Winrich A

    2017-04-04

    The primate brain contains distinct areas densely populated by face-selective neurons. One of these, face-patch ML, contains neurons selective for contrast relationships between face parts. Such contrast-relationships can serve as powerful heuristics for face detection. However, it is unknown whether neurons with such selectivity actually support face-detection behavior. Here, we devised a naturalistic face-detection task and combined it with fMRI-guided pharmacological inactivation of ML to test whether ML is of critical importance for real-world face detection. We found that inactivation of ML impairs face detection. The effect was anatomically specific, as inactivation of areas outside ML did not affect face detection, and it was categorically specific, as inactivation of ML impaired face detection while sparing body and object detection. These results establish that ML function is crucial for detection of faces in natural scenes, performing a critical first step on which other face processing operations can build.

  15. An efficient method for facial component detection in thermal images

    NASA Astrophysics Data System (ADS)

    Paul, Michael; Blanik, Nikolai; Blazek, Vladimir; Leonhardt, Steffen

    2015-04-01

    A method to detect certain regions in thermal images of human faces is presented. In this approach, the following steps are necessary to locate the periorbital and the nose regions: First, the face is segmented from the background by thresholding and morphological filtering. Subsequently, a search region within the face, around its center of mass, is evaluated. Automatically computed temperature thresholds are used per subject and image or image sequence to generate binary images, in which the periorbital regions are located by integral projections. Then, the located positions are used to approximate the nose position. It is possible to track features in the located regions. Therefore, these regions are interesting for different applications like human-machine interaction, biometrics and biomedical imaging. The method is easy to implement and does not rely on any training images or templates. Furthermore, the approach saves processing resources due to simple computations and restricted search regions.
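    The projection idea can be sketched as below, with a fixed fraction of the face's maximum temperature standing in for the automatically computed per-subject threshold described in the paper.

```python
import numpy as np

def locate_periorbital_rows(thermal, face_mask, rel_thresh=0.9):
    """Locate candidate periorbital rows in a thermal face by projections.

    Pixels warmer than `rel_thresh` times the face's maximum value are
    kept (a fixed fraction stands in for the paper's automatic per-subject
    threshold), then the horizontal integral projection is used to find
    the row band with the most warm pixels.
    """
    face_vals = thermal[face_mask > 0]
    warm = (thermal >= rel_thresh * face_vals.max()) & (face_mask > 0)
    horizontal_proj = warm.sum(axis=1)          # warm-pixel count per row
    peak_row = int(np.argmax(horizontal_proj))
    return peak_row, horizontal_proj
```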

  16. A study on facial expressions recognition

    NASA Astrophysics Data System (ADS)

    Xu, Jingjing

    2017-09-01

    In terms of communication, postures and facial expressions of feelings such as happiness, anger and sadness play important roles in conveying information. With the development of the technology, a number of algorithms dealing with face alignment, face landmark detection, classification, facial landmark localization and pose estimation have recently been put forward. However, many challenges and problems still need to be addressed. In this paper, a few technologies are summarized and analyzed, all of which relate to facial expression recognition and pose handling: a pose-indexed multi-view method for face alignment, robust facial landmark detection under significant head pose and occlusion, partitioning the input domain for classification, and robust statistical face frontalization.

  17. Adaptive skin detection based on online training

    NASA Astrophysics Data System (ADS)

    Zhang, Ming; Tang, Liang; Zhou, Jie; Rong, Gang

    2007-11-01

    Skin is a widely used cue for porn image classification. Most conventional methods are off-line training schemes. They usually use a fixed boundary to segment skin regions in the images and are effective only in restricted conditions, e.g. good lighting and a single ethnicity. This paper presents an adaptive online training scheme for skin detection which can handle these tough cases. In our approach, skin detection is considered as a classification problem on a Gaussian mixture model. For each image, the human face is detected and the face color is used to establish a primary estimate of the skin color distribution. Then an adaptive online training algorithm is used to find the real boundary between skin color and background color in the current image. Experimental results on 450 images showed that the proposed method is more robust in general situations than the conventional ones.
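    A minimal sketch of seeding a Gaussian mixture skin model from a detected face is shown below (using scikit-learn); the subsequent online boundary refinement from the paper is not reproduced, and the cascade file and component count are assumptions.

```python
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

def adaptive_skin_probability(bgr, n_components=3):
    """Per-pixel skin likelihood from a GMM seeded by the detected face.

    Only the per-image initialisation from face pixels is sketched here;
    the paper's online boundary refinement is not reproduced.
    """
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face_pixels = bgr[y:y + h, x:x + w].reshape(-1, 3).astype(np.float64)

    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(face_pixels)
    log_prob = gmm.score_samples(bgr.reshape(-1, 3).astype(np.float64))
    return log_prob.reshape(bgr.shape[:2])      # higher = more skin-like
```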

  18. Real-time driver fatigue detection based on face alignment

    NASA Astrophysics Data System (ADS)

    Tao, Huanhuan; Zhang, Guiying; Zhao, Yong; Zhou, Yi

    2017-07-01

    The performance and robustness of fatigue detection decrease considerably if the driver wears glasses. To address this issue, this paper proposes a practical driver fatigue detection method based on the Face Alignment at 3000 FPS algorithm. Firstly, the eye regions of the driver are localized by exploiting 6 landmarks surrounding each eye. Secondly, the HOG features of the extracted eye regions are calculated and fed into an SVM classifier to recognize the eye state. Finally, the value of PERCLOS is calculated to determine whether the driver is drowsy or not. An alarm is generated if the eyes remain closed for a specified period of time. Accuracy and real-time performance on test videos with different drivers demonstrate that the proposed algorithm is robust and obtains better accuracy for driver fatigue detection than some previous methods.
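    The PERCLOS step can be sketched as a sliding-window ratio of eye-closed frames; the window length and alarm threshold below are illustrative assumptions, not the paper's values.

```python
from collections import deque

class PerclosMonitor:
    """Sliding-window PERCLOS: fraction of recent frames with closed eyes.

    `window` is the number of frames considered and `alarm_level` the
    drowsiness threshold; both values are illustrative assumptions.
    """
    def __init__(self, window=900, alarm_level=0.4):
        self.states = deque(maxlen=window)   # 1 = eyes closed, 0 = open
        self.alarm_level = alarm_level

    def update(self, eyes_closed):
        """Feed the per-frame eye-state decision; return (perclos, alarm)."""
        self.states.append(1 if eyes_closed else 0)
        perclos = sum(self.states) / len(self.states)
        return perclos, perclos >= self.alarm_level
```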

  19. Brain Signals of Face Processing as Revealed by Event-Related Potentials

    PubMed Central

    Olivares, Ela I.; Iglesias, Jaime; Saavedra, Cristina; Trujillo-Barreto, Nelson J.; Valdés-Sosa, Mitchell

    2015-01-01

    We analyze the functional significance of different event-related potentials (ERPs) as electrophysiological indices of face perception and face recognition, according to cognitive and neurofunctional models of face processing. Initially, the processing of faces seems to be supported by early extrastriate occipital cortices and revealed by modulations of the occipital P1. This early response is thought to reflect the detection of certain primary structural aspects indicating the presence grosso modo of a face within the visual field. The posterior-temporal N170 is more sensitive to the detection of faces as complex-structured stimuli and, therefore, to the presence of its distinctive organizational characteristics prior to within-category identification. In turn, the relatively late and probably more rostrally generated N250r and N400-like responses might respectively indicate processes of access and retrieval of face-related information, which is stored in long-term memory (LTM). New methods of analysis of electrophysiological and neuroanatomical data, namely, dynamic causal modeling, single-trial and time-frequency analyses, are highly recommended to advance in the knowledge of those brain mechanisms concerning face processing. PMID:26160999

  20. Robust Point Set Matching for Partial Face Recognition.

    PubMed

    Weng, Renliang; Lu, Jiwen; Tan, Yap-Peng

    2016-03-01

    Over the past three decades, a number of face recognition methods have been proposed in computer vision, and most of them use holistic face images for person identification. In many real-world scenarios especially some unconstrained environments, human faces might be occluded by other objects, and it is difficult to obtain fully holistic face images for recognition. To address this, we propose a new partial face recognition approach to recognize persons of interest from their partial faces. Given a pair of gallery image and probe face patch, we first detect keypoints and extract their local textural features. Then, we propose a robust point set matching method to discriminatively match these two extracted local feature sets, where both the textural information and geometrical information of local features are explicitly used for matching simultaneously. Finally, the similarity of two faces is converted as the distance between these two aligned feature sets. Experimental results on four public face data sets show the effectiveness of the proposed approach.

  1. Impaired face detection may explain some but not all cases of developmental prosopagnosia.

    PubMed

    Dalrymple, Kirsten A; Duchaine, Brad

    2016-05-01

    Developmental prosopagnosia (DP) is defined by severe face recognition difficulties due to the failure to develop the visual mechanisms for processing faces. The two-process theory of face recognition (Morton & Johnson, 1991) implies that DP could result from a failure of an innate face detection system; this failure could prevent an individual from then tuning higher-level processes for face recognition (Johnson, 2005). Work with adults indicates that some individuals with DP have normal face detection whereas others are impaired. However, face detection has not been addressed in children with DP, even though their results may be especially informative because they have had less opportunity to develop strategies that could mask detection deficits. We tested the face detection abilities of seven children with DP. Four were impaired at face detection to some degree (i.e. abnormally slow, or failed to find faces) while the remaining three children had normal face detection. Hence, the cases with impaired detection are consistent with the two-process account suggesting that DP could result from a failure of face detection. However, the cases with normal detection implicate a higher-level origin. The dissociation between normal face detection and impaired identity perception also indicates that these abilities depend on different neurocognitive processes. © 2015 John Wiley & Sons Ltd.

  2. Human ear detection in the thermal infrared spectrum

    NASA Astrophysics Data System (ADS)

    Abaza, Ayman; Bourlai, Thirimachos

    2012-06-01

    In this paper the problem of human ear detection in the thermal infrared (IR) spectrum is studied in order to illustrate the advantages and limitations of the most important steps of ear-based biometrics that can operate in day and night time environments. The main contributions of this work are two-fold: First, a dual-band database is assembled that consists of visible and thermal profile face images. The thermal data was collected using a high definition middle-wave infrared (3-5 microns) camera that is capable of acquiring thermal imprints of human skin. Second, a fully automated, thermal imaging based ear detection method is developed for real-time segmentation of human ears in either day or night time environments. The proposed method is based on Haar features forming a cascaded AdaBoost classifier (our modified version of the original Viola-Jones approach [1], which was designed to be applied mainly to visible-band images). The main advantage of the proposed method, applied to our profile face image data set collected in the thermal band, is that it is designed to reduce the learning time required by the original Viola-Jones method from several weeks to several hours. Unlike other approaches reported in the literature, which have been tested but not designed to operate in the thermal band, our method yields a high detection accuracy that reaches ~91.5%. Further analysis of our data set showed that: (a) photometric normalization techniques do not directly improve ear detection performance; however, when using a certain photometric normalization technique (CLAHE) on falsely detected images, the detection rate improved by ~4%; (b) the high detection accuracy of our method did not degrade when we lowered the original spatial resolution of the thermal ear images. For example, even after using one third of the original spatial resolution (i.e. ~20% of the original computational time) of the thermal profile face images, the high ear detection accuracy of our method remained unaffected. This also sped up the detection time for an ear image from 265 to 17 milliseconds per image. To the best of our knowledge, this is the first time that the problem of human ear detection in the thermal band has been investigated in the open literature.

  3. Greater sensitivity of the cortical face processing system to perceptually-equated face detection

    PubMed Central

    Maher, S.; Ekstrom, T.; Tong, Y.; Nickerson, L.D.; Frederick, B.; Chen, Y.

    2015-01-01

    Face detection, the perceptual capacity to identify a visual stimulus as a face before probing deeper into specific attributes (such as its identity or emotion), is essential for social functioning. Despite the importance of this functional capacity, face detection and its underlying brain mechanisms are not well understood. This study evaluated the role that the cortical face processing system, which has been identified largely through studying other aspects of face perception, plays in face detection. Specifically, we used functional magnetic resonance imaging (fMRI) to examine the activations of the fusiform face area (FFA), occipital face area (OFA) and superior temporal sulcus (STS) when face detection was isolated from other aspects of face perception and when face detection was perceptually equated across individual human participants (n=20). During face detection, FFA and OFA were significantly activated, even for stimuli presented at perceptual-threshold levels, whereas STS was not. During tree detection, however, FFA and OFA were responsive only for highly salient (i.e., high contrast) stimuli. Moreover, activation of FFA during face detection predicted a significant portion of the perceptual performance levels that were determined psychophysically for each participant. This pattern of results indicates that FFA and OFA have a greater sensitivity to face detection signals and selectively support the initial process of face vs. non-face object perception. PMID:26592952

  4. Manipulation Detection and Preference Alterations in a Choice Blindness Paradigm

    PubMed Central

    Taya, Fumihiko; Gupta, Swati; Farber, Ilya; Mullette-Gillman, O'Dhaniel A.

    2014-01-01

    Objectives It is commonly believed that individuals make choices based upon their preferences and have access to the reasons for their choices. Recent studies in several areas suggest that this is not always the case. In choice blindness paradigms, two-alternative forced-choice in which chosen-options are later replaced by the unselected option, individuals often fail to notice replacement of their chosen option, confabulate explanations for why they chose the unselected option, and even show increased preferences for the unselected-but-replaced options immediately after choice (seconds). Although choice blindness has been replicated across a variety of domains, there are numerous outstanding questions. Firstly, we sought to investigate how individual- or trial-factors modulated detection of the manipulations. Secondly, we examined the nature and temporal duration (minutes vs. days) of the preference alterations induced by these manipulations. Methods Participants performed a computerized choice blindness task, selecting the more attractive face between presented pairs of female faces, and providing a typewritten explanation for their choice on half of the trials. Chosen-face cue manipulations were produced on a subset of trials by presenting the unselected face during the choice explanation as if it had been selected. Following all choice trials, participants rated the attractiveness of each face individually, and rated the similarity of each face pair. After approximately two weeks, participants re-rated the attractiveness of each individual face online. Results Participants detected manipulations on only a small proportion of trials, with detections by fewer than half of participants. Detection rates increased with the number of prior detections, and detection rates subsequent to first detection were modulated by the choice certainty. We show clear short-term modulation of preferences in both manipulated and non-manipulated explanation trials compared to choice-only trials (with opposite directions of effect). Preferences were altered in the direction that subjects were led to believe they selected. PMID:25247886

  5. Automatic Fatigue Detection of Drivers through Yawning Analysis

    NASA Astrophysics Data System (ADS)

    Azim, Tayyaba; Jaffar, M. Arfan; Ramzan, M.; Mirza, Anwar M.

    This paper presents a non-intrusive fatigue detection system based on the video analysis of drivers. The focus of the paper is on how to detect yawning which is an important cue for determining driver's fatigue. Initially, the face is located through Viola-Jones face detection method in a video frame. Then, a mouth window is extracted from the face region, in which lips are searched through spatial fuzzy c-means (s-FCM) clustering. The degree of mouth openness is extracted on the basis of mouth features, to determine driver's yawning state. If the yawning state of the driver persists for several consecutive frames, the system concludes that the driver is non-vigilant due to fatigue and is thus warned through an alarm. The system reinitializes when occlusion or misdetection occurs. Experiments were carried out using real data, recorded in day and night lighting conditions, and with users belonging to different race and gender.

  6. The wide window of face detection.

    PubMed

    Hershler, Orit; Golan, Tal; Bentin, Shlomo; Hochstein, Shaul

    2010-08-20

    Faces are detected more rapidly than other objects in visual scenes and search arrays, but the cause for this face advantage has been contested. In the present study, we found that under conditions of spatial uncertainty, faces were easier to detect than control targets (dog faces, clocks and cars) even in the absence of surrounding stimuli, making an explanation based only on low-level differences unlikely. This advantage improved with eccentricity in the visual field, enabling face detection in wider visual windows, and pointing to selective sparing of face detection at greater eccentricities. This face advantage might be due to perceptual factors favoring face detection. In addition, the relative face advantage is greater under flanked than non-flanked conditions, suggesting an additional, possibly attention-related benefit enabling face detection in groups of distracters.

  7. Detecting Visually Observable Disease Symptoms from Faces.

    PubMed

    Wang, Kuan; Luo, Jiebo

    2016-12-01

    Recent years have witnessed an increasing interest in the application of machine learning to clinical informatics and healthcare systems. A significant amount of research has been done on healthcare systems based on supervised learning. In this study, we present a generalized solution to detect visually observable symptoms on faces using semi-supervised anomaly detection combined with machine vision algorithms. We rely on disease-related statistical facts to detect abnormalities and classify them into multiple categories, narrowing down the possible medical causes of what is detected. Our method is in contrast with most existing approaches, which are limited by the availability of labeled training data required for supervised learning, and therefore offers the major advantage of flagging any unusual and visually observable symptoms.
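    As a hedged illustration, an Isolation Forest trained only on features from healthy faces can stand in for the semi-supervised anomaly detector; the feature representation and contamination rate are assumptions, not the authors' design.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def train_symptom_detector(healthy_features, contamination=0.01):
    """Fit an anomaly detector on features from healthy faces only.

    `healthy_features` is an (n_samples, n_features) array of visual
    descriptors (e.g. colour statistics of facial regions); Isolation
    Forest is used here as a generic stand-in for the paper's
    semi-supervised anomaly detector.
    """
    model = IsolationForest(contamination=contamination, random_state=0)
    model.fit(healthy_features)
    return model

def flag_unusual_faces(model, features):
    """Return a boolean mask of faces flagged as visually unusual."""
    return model.predict(np.asarray(features)) == -1
```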

  8. The shape of the face template: geometric distortions of faces and their detection in natural scenes.

    PubMed

    Pongakkasira, Kaewmart; Bindemann, Markus

    2015-04-01

    Human face detection might be driven by skin-coloured face-shaped templates. To explore this idea, this study compared the detection of faces for which the natural height-to-width ratios were preserved with distorted faces that were stretched vertically or horizontally. The impact of stretching on detection performance was not obvious when faces were equated to their unstretched counterparts in terms of their height or width dimension (Experiment 1). However, stretching impaired detection when the original and distorted faces were matched for their surface area (Experiment 2), and this was found with both vertically and horizontally stretched faces (Experiment 3). This effect was evident in accuracy, response times, and also observers' eye movements to faces. These findings demonstrate that height-to-width ratios are an important component of the cognitive template for face detection. The results also highlight important differences between face detection and face recognition. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Face detection and eyeglasses detection for thermal face recognition

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng

    2012-01-01

    Thermal face recognition has become an active research direction in human identification because it does not rely on illumination conditions. Face detection and eyeglasses detection are necessary steps prior to face recognition using thermal images. Infrared light cannot pass through glasses, so glasses appear as dark areas in a thermal image. One possible solution is to detect eyeglasses and to exclude the eyeglasses areas before face matching. For thermal face detection, a projection profile analysis algorithm is proposed, in which region growing and morphology operations are used to segment the body of a subject; the derivatives of two projections (horizontal and vertical) are then calculated and analyzed to locate a minimal rectangle containing the face area. The search region for a pair of eyeglasses is, of course, within the detected face area. The eyeglasses detection algorithm should produce either a binary mask if eyeglasses are present, or an empty set if there are no eyeglasses at all. The proposed eyeglasses detection algorithm employs block processing, region growing, and a priori knowledge (i.e., low mean and variance within glasses areas, and the shapes and locations of eyeglasses). The results of face detection and eyeglasses detection are quantitatively measured and analyzed using manually defined ground truths (for both face and eyeglasses). Our experimental results show that the proposed face detection and eyeglasses detection algorithms performed very well when compared against the predefined ground truths.

  10. A novel approach for the quantitation of carbohydrates in mash, wort, and beer with RP-HPLC using 1-naphthylamine for precolumn derivatization.

    PubMed

    Rakete, Stefan; Glomb, Marcus A

    2013-04-24

    A novel universal method for the determination of reducing mono-, di-, and oligosaccharides in complex matrices on RP-HPLC using 1-naphthylamine for precolumn derivatization with sodium cyanoborhydride was established to study changes in the carbohydrate profile during beer brewing. Fluorescence and mass spectrometric detection enabled very sensitive analyses of beer-relevant carbohydrates. Mass spectrometry additionally allowed the identification of the molecular weight and thereby the degree of polymerization of unknown carbohydrates. Thus, carbohydrates with up to 16 glucose units were detected. Comparison demonstrated that the novel method was superior to fluorophore-assisted carbohydrate electrophoresis (FACE). The results proved the HPLC method clearly to be more powerful in regard to sensitivity and resolution. Analogous to FACE, this method was designated fluorophore-assisted carbohydrate HPLC (FAC-HPLC).

  11. A robust human face detection algorithm

    NASA Astrophysics Data System (ADS)

    Raviteja, Thaluru; Karanam, Srikrishna; Yeduguru, Dinesh Reddy V.

    2012-01-01

    Human face detection plays a vital role in many applications such as video surveillance, managing a face image database, and human-computer interfaces, among others. This paper proposes a robust algorithm for face detection in still color images that works well even in a crowded environment. The algorithm uses a conjunction of skin color histogram, morphological processing and geometrical analysis for detecting human faces. To reinforce the accuracy of face detection, we further identify mouth and eye regions to establish the presence or absence of a face in a particular region of interest.

  12. Familiarity facilitates feature-based face processing.

    PubMed

    Visconti di Oleggio Castello, Matteo; Wheeler, Kelsey G; Cipolli, Carlo; Gobbini, M Ida

    2017-01-01

    Recognition of personally familiar faces is remarkably efficient, effortless and robust. We asked if feature-based face processing facilitates detection of familiar faces by testing the effect of face inversion on a visual search task for familiar and unfamiliar faces. Because face inversion disrupts configural and holistic face processing, we hypothesized that inversion would diminish the familiarity advantage to the extent that it is mediated by such processing. Subjects detected personally familiar and stranger target faces in arrays of two, four, or six face images. Subjects showed significant facilitation of personally familiar face detection for both upright and inverted faces. The effect of familiarity on target absent trials, which involved only rejection of unfamiliar face distractors, suggests that familiarity facilitates rejection of unfamiliar distractors as well as detection of familiar targets. The preserved familiarity effect for inverted faces suggests that facilitation of face detection afforded by familiarity reflects mostly feature-based processes.

  13. Pick on someone your own size: the detection of threatening facial expressions posed by both child and adult models.

    PubMed

    LoBue, Vanessa; Matthews, Kaleigh; Harvey, Teresa; Thrasher, Cat

    2014-02-01

    For decades, researchers have documented a bias for the rapid detection of angry faces in adult, child, and even infant participants. However, despite the age of the participant, the facial stimuli used in all of these experiments were schematic drawings or photographs of adult faces. The current research is the first to examine the detection of both child and adult emotional facial expressions. In our study, 3- to 5-year-old children and adults detected angry, sad, and happy faces among neutral distracters. The depicted faces were of adults or of other children. As in previous work, children detected angry faces more quickly than happy and neutral faces overall, and they tended to detect the faces of other children more quickly than the faces of adults. Adults also detected angry faces more quickly than happy and sad faces even when the faces depicted child models. The results are discussed in terms of theoretical implications for the development of a bias for threat in detection. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. False match elimination for face recognition based on SIFT algorithm

    NASA Astrophysics Data System (ADS)

    Gu, Xuyuan; Shi, Ping; Shao, Meide

    2011-06-01

    The SIFT (Scale Invariant Feature Transform) is a well-known algorithm used to detect and describe local features in images. It is invariant to image scale and rotation and robust to noise and illumination changes. In this paper, a novel method for face recognition based on SIFT is proposed, which combines the optimization of SIFT, mutual matching and Progressive Sample Consensus (PROSAC), and can eliminate the false matches of face recognition effectively. Experiments on the ORL face database show that many false matches can be eliminated and a better recognition rate is achieved.
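    A rough OpenCV sketch of such a matching pipeline follows, assuming a build that provides SIFT; cross-check matching stands in for the mutual-matching stage and RANSAC is substituted for PROSAC, so this is an approximation of the paper's method rather than its implementation.

```python
import cv2
import numpy as np

def match_faces_sift(img1, img2, ransac_thresh=5.0):
    """Match two face images with SIFT and prune false matches.

    Cross-check (mutual) matching approximates the mutual-matching stage,
    and RANSAC is used here in place of PROSAC for the geometric
    verification; the thresholds are illustrative.
    """
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)  # mutual matches
    matches = matcher.match(des1, des2)
    if len(matches) < 4:
        return []

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    _, inliers = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    if inliers is None:
        return []
    return [m for m, keep in zip(matches, inliers.ravel()) if keep]
```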

  15. Friends with Faces: How Social Networks Can Enhance Face Recognition and Vice Versa

    NASA Astrophysics Data System (ADS)

    Mavridis, Nikolaos; Kazmi, Wajahat; Toulis, Panos

    The "friendship" relation, a social relation among individuals, is one of the primary relations modeled in some of the world's largest online social networking sites, such as "FaceBook." On the other hand, the "co-occurrence" relation, as a relation among faces appearing in pictures, is one that is easily detectable using modern face detection techniques. These two relations, though appearing in different realms (social vs. visual sensory), have a strong correlation: faces that co-occur in photos often belong to individuals who are friends. Using real-world data gathered from "Facebook," which were gathered as part of the "FaceBots" project, the world's first physical face-recognizing and conversing robot that can utilize and publish information on "Facebook" was established. We present here methods as well as results for utilizing this correlation in both directions. Both algorithms for utilizing knowledge of the social context for faster and better face recognition are given, as well as algorithms for estimating the friendship network of a number of individuals given photos containing their faces. The results are quite encouraging. In the primary example, doubling of the recognition accuracy as well as a sixfold improvement in speed is demonstrated. Various improvements, interesting statistics, as well as an empirical investigation leading to predictions of scalability to much bigger data sets are discussed.

  16. Microfluidic Analysis with Front-Face Fluorometric Detection for the Determination of Total Inorganic Iodine in Drinking Water.

    PubMed

    Inpota, Prawpan; Strzelak, Kamil; Koncki, Robert; Sripumkhai, Wisaroot; Jeamsaksiri, Wutthinan; Ratanawimarnwong, Nuanlaor; Wilairat, Prapin; Choengchan, Nathawut; Chantiwas, Rattikan; Nacapricha, Duangjai

    2018-01-01

    A microfluidic method with front-face fluorometric detection was developed for the determination of total inorganic iodine in drinking water. A polydimethylsiloxane (PDMS) microfluidic device was employed in conjunction with the Sandell-Kolthoff reaction, in which iodide catalyzed the redox reaction between Ce(IV) and As(III). Direct alignment of an optical fiber attached to a spectrofluorometer was used as a convenient detector for remote front-face fluorometric detection. Trace inorganic iodine (IO3- and I-) present naturally in drinking water was measured by on-line conversion of iodate to iodide for determination of total inorganic iodine. On-line conversion efficiency of iodate to iodide using the microfluidic device was investigated. Excellent conversion efficiency of 93 - 103% (%RSD = 1.6 - 11%) was obtained. Inorganic iodine concentrations in drinking water samples were measured, and the results obtained were in good agreement with those obtained by an ICP-MS method. Spiked sample recoveries were in the range of 86%(±5) - 128%(±8) (n = 12). Interference of various anions and cations was investigated, with tolerance limit concentrations ranging from 10^-6 to 2.5 M depending on the type of ions. The developed method is simple and convenient, and it is a green method for iodine analysis, as it greatly reduces the amount of toxic reagent consumed with reagent volumes in the microfluidic scale.

  17. Atypical face shape and genomic structural variants in epilepsy

    PubMed Central

    Chinthapalli, Krishna; Bartolini, Emanuele; Novy, Jan; Suttie, Michael; Marini, Carla; Falchi, Melania; Fox, Zoe; Clayton, Lisa M. S.; Sander, Josemir W.; Guerrini, Renzo; Depondt, Chantal; Hennekam, Raoul; Hammond, Peter

    2012-01-01

    Many pathogenic structural variants of the human genome are known to cause facial dysmorphism. During the past decade, pathogenic structural variants have also been found to be an important class of genetic risk factor for epilepsy. In other fields, face shape has been assessed objectively using 3D stereophotogrammetry and dense surface models. We hypothesized that computer-based analysis of 3D face images would detect subtle facial abnormality in people with epilepsy who carry pathogenic structural variants as determined by chromosome microarray. In 118 children and adults attending three European epilepsy clinics, we used an objective measure called Face Shape Difference to show that those with pathogenic structural variants have a significantly more atypical face shape than those without such variants. This is true when analysing the whole face, or the periorbital region or the perinasal region alone. We then tested the predictive accuracy of our measure in a second group of 63 patients. Using a minimum threshold to detect face shape abnormalities with pathogenic structural variants, we found high sensitivity (4/5, 80% for whole face; 3/5, 60% for periorbital and perinasal regions) and specificity (45/58, 78% for whole face and perinasal regions; 40/58, 69% for periorbital region). We show that the results do not seem to be affected by facial injury, facial expression, intellectual disability, drug history or demographic differences. Finally, we use bioinformatics tools to explore relationships between facial shape and gene expression within the developing forebrain. Stereophotogrammetry and dense surface models are powerful, objective, non-contact methods of detecting relevant face shape abnormalities. We demonstrate that they are useful in identifying atypical face shape in adults or children with structural variants, and they may give insights into the molecular genetics of facial development. PMID:22975390

  18. Implementing psychophysiology in clinical assessments of adolescent social anxiety: use of rater judgments based on graphical representations of psychophysiology.

    PubMed

    De Los Reyes, Andres; Augenstein, Tara M; Aldao, Amelia; Thomas, Sarah A; Daruwala, Samantha; Kline, Kathryn; Regan, Timothy

    2015-01-01

    Social stressor tasks induce adolescents' social distress as indexed by low-cost psychophysiological methods. Unknown is how to incorporate these methods within clinical assessments. Having assessors judge graphical depictions of psychophysiological data may facilitate detections of data patterns that may be difficult to identify using judgments about numerical depictions of psychophysiological data. Specifically, the Chernoff Face method involves graphically representing data using features on the human face (eyes, nose, mouth, and face shape). This method capitalizes on humans' abilities to discern subtle variations in facial features. Using adolescent heart rate norms and Chernoff Faces, we illustrated a method for implementing psychophysiology within clinical assessments of adolescent social anxiety. Twenty-two clinic-referred adolescents completed a social anxiety self-report and provided psychophysiological data using wireless heart rate monitors during a social stressor task. We graphically represented participants' psychophysiological data and normative adolescent heart rates. For each participant, two undergraduate coders made comparative judgments between the dimensions (eyes, nose, mouth, and face shape) of two Chernoff Faces. One Chernoff Face represented a participant's heart rate within a context (baseline, speech preparation, or speech-giving). The second Chernoff Face represented normative heart rate data matched to the participant's age. Using Chernoff Faces, coders reliably and accurately identified contextual variation in participants' heart rate responses to social stress. Further, adolescents' self-reported social anxiety symptoms predicted Chernoff Face judgments, and judgments could be differentiated by social stress context. Our findings have important implications for implementing psychophysiology within clinical assessments of adolescent social anxiety.

  19. Efficient search for a face by chimpanzees (Pan troglodytes).

    PubMed

    Tomonaga, Masaki; Imura, Tomoko

    2015-07-16

    The face is quite an important stimulus category for human and nonhuman primates in their social lives. Recent advances in comparative-cognitive research clearly indicate that chimpanzees and humans process faces in a special manner; that is, using holistic or configural processing. Both species exhibit the face-inversion effect in which the inverted presentation of a face deteriorates their perception and recognition. Furthermore, recent studies have shown that humans detect human faces among non-facial objects rapidly. We report that chimpanzees detected chimpanzee faces among non-facial objects quite efficiently. This efficient search was not limited to own-species faces. They also found human adult and baby faces--but not monkey faces--efficiently. Additional testing showed that a front-view face was more readily detected than a profile, suggesting the important role of eye-to-eye contact. Chimpanzees also detected a photograph of a banana as efficiently as a face, but a further examination clearly indicated that the banana was detected mainly due to a low-level feature (i.e., color). Efficient face detection was hampered by an inverted presentation, suggesting that configural processing of faces is a critical element of efficient face detection in both species. This conclusion was supported by a simple simulation experiment using the saliency model.

  20. Efficient search for a face by chimpanzees (Pan troglodytes)

    PubMed Central

    Tomonaga, Masaki; Imura, Tomoko

    2015-01-01

    The face is quite an important stimulus category for human and nonhuman primates in their social lives. Recent advances in comparative-cognitive research clearly indicate that chimpanzees and humans process faces in a special manner; that is, using holistic or configural processing. Both species exhibit the face-inversion effect in which the inverted presentation of a face deteriorates their perception and recognition. Furthermore, recent studies have shown that humans detect human faces among non-facial objects rapidly. We report that chimpanzees detected chimpanzee faces among non-facial objects quite efficiently. This efficient search was not limited to own-species faces. They also found human adult and baby faces-but not monkey faces-efficiently. Additional testing showed that a front-view face was more readily detected than a profile, suggesting the important role of eye-to-eye contact. Chimpanzees also detected a photograph of a banana as efficiently as a face, but a further examination clearly indicated that the banana was detected mainly due to a low-level feature (i.e., color). Efficient face detection was hampered by an inverted presentation, suggesting that configural processing of faces is a critical element of efficient face detection in both species. This conclusion was supported by a simple simulation experiment using the saliency model. PMID:26180944

  1. Less is more? Detecting lies in veiled witnesses.

    PubMed

    Leach, Amy-May; Ammar, Nawal; England, D Nicole; Remigio, Laura M; Kleinberg, Bennett; Verschuere, Bruno J

    2016-08-01

    Judges in the United States, the United Kingdom, and Canada have ruled that witnesses may not wear the niqab-a type of face veil-when testifying, in part because they believed that it was necessary to see a person's face to detect deception (Muhammad v. Enterprise Rent-A-Car, 2006; R. v. N. S., 2010; The Queen v. D(R), 2013). In two studies, we used conventional research methods and safeguards to empirically examine the assumption that niqabs interfere with lie detection. Female witnesses were randomly assigned to lie or tell the truth while remaining unveiled or while wearing a hijab (i.e., a head veil) or a niqab (i.e., a face veil). In Study 1, laypersons in Canada (N = 232) were more accurate at detecting deception in witnesses who wore niqabs or hijabs than in those who did not wear veils. Concealing portions of witnesses' faces led laypersons to change their decision-making strategies without eliciting negative biases. Lie detection results were partially replicated in Study 2, with laypersons in Canada, the United Kingdom, and the Netherlands (N = 291): observers' performance was better when witnesses wore either niqabs or hijabs than when witnesses did not wear veils. These findings suggest that, contrary to judicial opinion, niqabs do not interfere with-and may, in fact, improve-the ability to detect deception. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  2. Iterative nonlinear joint transform correlation for the detection of objects in cluttered scenes

    NASA Astrophysics Data System (ADS)

    Haist, Tobias; Tiziani, Hans J.

    1999-03-01

    An iterative correlation technique with digital image processing in the feedback loop for the detection of small objects in cluttered scenes is proposed. A scanning aperture is combined with the method in order to improve the immunity against noise and clutter. Multiple reference objects or different views of one object are processed in parallel. We demonstrate the method by detecting a noisy and distorted face in a crowd with a nonlinear joint transform correlator.
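
    A joint transform correlation with a power-law nonlinearity can be sketched in a few lines of numpy. The nonlinearity exponent and the toy images below are assumptions for illustration; the iterative feedback loop and scanning aperture described in the abstract are not reproduced.

```python
# A minimal numpy sketch of a kth-law nonlinear joint transform correlation:
# place the reference next to the scene, take the joint power spectrum, apply a
# power-law nonlinearity, and inverse-transform to get the correlation plane.
import numpy as np

def nonlinear_jtc(reference, scene, k=0.3):
    """reference and scene are 2-D arrays of equal shape; returns the correlation plane."""
    h, w = scene.shape
    joint = np.zeros((h, 2 * w))
    joint[:, :w] = reference                      # reference and scene side by side
    joint[:, w:] = scene
    jps = np.abs(np.fft.fft2(joint)) ** 2         # joint power spectrum
    corr = np.abs(np.fft.ifft2(jps ** k))         # kth-law nonlinearity, then inverse FFT
    return np.fft.fftshift(corr)

# toy usage: find a bright square "target" in a noisy scene
rng = np.random.default_rng(0)
ref = np.zeros((64, 64)); ref[24:40, 24:40] = 1.0
scene = 0.2 * rng.random((64, 64)); scene[10:26, 30:46] += 1.0
plane = nonlinear_jtc(ref, scene)
cy, cx = plane.shape[0] // 2, plane.shape[1] // 2
plane[cy - 8:cy + 8, cx - 8:cx + 8] = 0           # suppress the zero-order term
print("cross-correlation peak at", np.unravel_index(plane.argmax(), plane.shape))
```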

  3. Efficient human face detection in infancy.

    PubMed

    Jakobsen, Krisztina V; Umstead, Lindsey; Simpson, Elizabeth A

    2016-01-01

    Adults detect conspecific faces more efficiently than heterospecific faces; however, the development of this own-species bias (OSB) remains unexplored. We tested whether 6- and 11-month-olds exhibit OSB in their attention to human and animal faces in complex visual displays with high perceptual load (25 images competing for attention). Infants (n = 48) and adults (n = 43) passively viewed arrays containing a face among 24 non-face distractors while we measured their gaze with remote eye tracking. While OSB is typically not observed until about 9 months, we found that, already by 6 months, human faces were more likely to be detected, were detected more quickly (attention capture), and received longer looks (attention holding) than animal faces. These data suggest that 6-month-olds already exhibit OSB in face detection efficiency, consistent with perceptual attunement. This specialization may reflect the biological importance of detecting conspecific faces, a foundational ability for early social interactions. © 2015 Wiley Periodicals, Inc.

  4. Nondestructive Evaluation (NDE) for Inspection of Composite Sandwich Structures

    NASA Technical Reports Server (NTRS)

    Zalameda, Joseph N.; Parker, F. Raymond

    2014-01-01

    Composite honeycomb structures are widely used in aerospace applications due to their low weight and high strength. Developing nondestructive evaluation (NDE) inspection methods is essential for their safe performance. Flash thermography is a commonly used technique for composite honeycomb structure inspections due to its large-area and rapid inspection capability. Flash thermography is shown to be sensitive for detection of face sheet impact damage and face sheet to core disbond. Data processing techniques, using principal component analysis to improve the defect contrast, are discussed. Limitations to the thermal detection of the core are investigated. In addition to flash thermography, X-ray computed tomography is used. The aluminum honeycomb core provides excellent X-ray contrast compared to the composite face sheet. The X-ray CT technique was used to detect impact damage, core crushing, and skin to core disbonds. Additionally, the X-ray CT technique is used to validate the thermography results.
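
    The contrast-improvement step can be sketched generically as principal component thermography: treating the cooling sequence as a (time x pixels) matrix and inspecting its leading spatial eigen-images. This is only an illustration of the idea under assumed synthetic data; it is not the exact processing chain used in the paper.

```python
# Generic principal-component-thermography sketch: the synthetic "disbond" and
# frame counts below are assumptions for illustration.
import numpy as np

def principal_component_images(frames, n_components=3):
    """frames: (n_frames, H, W) cooling sequence -> (n_components, H, W) eigen-images."""
    n, h, w = frames.shape
    X = frames.reshape(n, -1).astype(float)
    X -= X.mean(axis=0)                                   # remove the mean decay per pixel
    _, _, Vt = np.linalg.svd(X, full_matrices=False)      # rows of Vt are spatial eigen-images
    return Vt[:n_components].reshape(n_components, h, w)

# toy sequence: uniform 1/sqrt(t) cooling plus a slightly slower-cooling patch
t = np.arange(1, 41, dtype=float)[:, None, None]
seq = np.ones((40, 32, 32)) / np.sqrt(t)
seq[:, 10:16, 10:16] += 0.05 / t
pcs = principal_component_images(seq)
print(pcs.shape)   # (3, 32, 32); a leading component typically highlights the patch
```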

  5. Searching for differences in race: is there evidence for preferential detection of other-race faces?

    PubMed

    Lipp, Ottmar V; Terry, Deborah J; Smith, Joanne R; Tellegen, Cassandra L; Kuebbeler, Jennifer; Newey, Mareka

    2009-06-01

    Previous research has suggested that like animal and social fear-relevant stimuli, other-race faces (African American) are detected preferentially in visual search. Three experiments using Chinese or Indonesian faces as other-race faces yielded the opposite pattern of results: faster detection of same-race faces among other-race faces. This apparently inconsistent pattern of results was resolved by showing that Asian and African American faces are detected preferentially in tasks that have small stimulus sets and employ fixed target searches. Asian and African American other-race faces are found slower among Caucasian face backgrounds if larger stimulus sets are used in tasks with a variable mapping of stimulus to background or target. Thus, preferential detection of other-race faces was not found under task conditions in which preferential detection of animal and social fear-relevant stimuli is evident. Although consistent with the view that same-race faces are processed in more detail than other-race faces, the current findings suggest that other-race faces do not draw attention preferentially.

  6. Simulation and visualization of face seal motion stability by means of computer generated movies

    NASA Technical Reports Server (NTRS)

    Etsion, I.; Auer, B. M.

    1980-01-01

    A computer aided design method for mechanical face seals is described. Based on computer simulation, the actual motion of the flexibly mounted element of the seal can be visualized. This is achieved by solving the equations of motion of this element, calculating the displacements in its various degrees of freedom vs. time, and displaying the transient behavior in the form of a motion picture. Incorporating such a method in the design phase allows one to detect instabilities and to correct undesirable behavior of the seal. A theoretical background is presented. Details of the motion display technique are described, and the usefulness of the method is demonstrated by an example of a noncontacting conical face seal.

  7. Simulation and visualization of face seal motion stability by means of computer generated movies

    NASA Technical Reports Server (NTRS)

    Etsion, I.; Auer, B. M.

    1981-01-01

    A computer aided design method for mechanical face seals is described. Based on computer simulation, the actual motion of the flexibly mounted element of the seal can be visualized. This is achieved by solving the equations of motion of this element, calculating the displacements in its various degrees of freedom vs. time, and displaying the transient behavior in the form of a motion picture. Incorporating such a method in the design phase allows one to detect instabilities and to correct undesirable behavior of the seal. A theoretical background is presented. Details of the motion display technique are described, and the usefulness of the method is demonstrated by an example of a noncontacting conical face seal.

  8. Is Your Avatar Ethical? On-Line Course Tools that Are Methods for Student Identity and Verification

    ERIC Educational Resources Information Center

    Semple, Mid; Hatala, Jeffrey; Franks, Patricia; Rossi, Margherita A.

    2011-01-01

    On-line college courses present a mandate for student identity verification for accreditation and funding sources. Student authentication requires course modification to detect fraud and misrepresentation of authorship in assignment submissions. The reality is that some college students cheat in face-to-face classrooms; however, the potential for…

  9. [Review of driver fatigue/drowsiness detection methods].

    PubMed

    Wang, Lei; Wu, Xiaojuan; Yu, Mengsun

    2007-02-01

    Driver fatigue/drowsiness is one of the important causes of serious traffic accidents; it results in many deaths and injuries, as well as substantial direct and indirect economic losses. Therefore, many countries devote great effort to detecting drowsiness during driving. In this paper, we review recent worldwide developments in driver fatigue/drowsiness detection technology and classify the existing methods into several kinds according to the features they measure and analyze. Finally, the challenges facing fatigue/drowsiness detection technology and its development trends are presented.

  10. Robust 3D face landmark localization based on local coordinate coding.

    PubMed

    Song, Mingli; Tao, Dacheng; Sun, Shengpeng; Chen, Chun; Maybank, Stephen J

    2014-12-01

    In the 3D facial animation and synthesis community, input faces are usually required to be labeled by a set of landmarks for parameterization. Because of the variations in pose, expression and resolution, automatic 3D face landmark localization remains a challenge. In this paper, a novel landmark localization approach is presented. The approach is based on local coordinate coding (LCC) and consists of two stages. In the first stage, we perform nose detection, relying on the fact that the nose shape is usually invariant under the variations in the pose, expression, and resolution. Then, we use the iterative closest points algorithm to find a 3D affine transformation that aligns the input face to a reference face. In the second stage, we perform resampling to build correspondences between the input 3D face and the training faces. Then, an LCC-based localization algorithm is proposed to obtain the positions of the landmarks in the input face. Experimental results show that the proposed method is comparable to state of the art methods in terms of its robustness, flexibility, and accuracy.

  11. Automatic textual annotation of video news based on semantic visual object extraction

    NASA Astrophysics Data System (ADS)

    Boujemaa, Nozha; Fleuret, Francois; Gouet, Valerie; Sahbi, Hichem

    2003-12-01

    In this paper, we present our work on the automatic generation of textual metadata based on visual content analysis of video news. We present two methods for semantic object detection and recognition from a cross-modal image-text thesaurus. These thesauri represent a supervised association between models and semantic labels. This paper is concerned with two semantic objects: faces and TV logos. In the first part, we present our work on efficient face detection and recognition with automatic name generation. This method also allows us to suggest textual annotations of shots through close-up estimation. In addition, we were interested in automatically detecting and recognizing the different TV logos present in incoming news from different TV channels. This work was done jointly with the French TV channel TF1 within the "MediaWorks" project, which consists of a hybrid text-image indexing and retrieval platform for video news.

  12. Gender classification system in uncontrolled environments

    NASA Astrophysics Data System (ADS)

    Zeng, Pingping; Zhang, Yu-Jin; Duan, Fei

    2011-01-01

    Most face analysis systems available today perform mainly on restricted image databases in terms of size, age, and illumination. In addition, it is frequently assumed that all images are frontal and unconcealed. In practice, in non-guided real-time surveillance, the face pictures taken may often be partially covered and show varying degrees of head rotation. In this paper, a system intended for real-time surveillance with un-calibrated cameras and non-guided photography is described. It mainly consists of five parts: face detection, non-face filtering, best-angle face selection, texture normalization, and gender classification. Emphasis is placed on the non-face filtering and best-angle face selection parts as well as on texture normalization. Best-angle faces are identified by PCA reconstruction, which amounts to an implicit face alignment and results in a large increase in gender classification accuracy. A dynamic skin model and a masked PCA reconstruction algorithm are applied to filter out faces detected in error. In order to fully include facial-texture and shape-outline features, a hybrid feature combining Gabor wavelets and PHoG (pyramid histogram of gradients) is proposed to balance inner texture and outer contour. A comparative study of the effects of different non-face filtering and texture masking methods in the context of gender classification by SVM is reported through experiments on a set of UT (a company name) face images, a large number of internet images, and the CAS (Chinese Academy of Sciences) face database. Some encouraging results are obtained.

  13. Automated detection of pain from facial expressions: a rule-based approach using AAM

    NASA Astrophysics Data System (ADS)

    Chen, Zhanli; Ansari, Rashid; Wilkie, Diana J.

    2012-02-01

    In this paper, we examine the problem of using video analysis to assess pain, an important problem especially for critically ill, non-communicative patients, and people with dementia. We propose and evaluate an automated method to detect the presence of pain manifested in patient videos using a unique and large collection of cancer patient videos captured in patient homes. The method is based on detecting pain-related facial action units defined in the Facial Action Coding System (FACS) that is widely used for objective assessment in pain analysis. In our research, a person-specific Active Appearance Model (AAM) based on Project-Out Inverse Compositional Method is trained for each patient individually for the modeling purpose. A flexible representation of the shape model is used in a rule-based method that is better suited than the more commonly used classifier-based methods for application to the cancer patient videos in which pain-related facial actions occur infrequently and more subtly. The rule-based method relies on the feature points that provide facial action cues and is extracted from the shape vertices of AAM, which have a natural correspondence to face muscular movement. In this paper, we investigate the detection of a commonly used set of pain-related action units in both the upper and lower face. Our detection results show good agreement with the results obtained by three trained FACS coders who independently reviewed and scored the action units in the cancer patient videos.

  14. Whole-face procedures for recovering facial images from memory.

    PubMed

    Frowd, Charlie D; Skelton, Faye; Hepton, Gemma; Holden, Laura; Minahil, Simra; Pitchford, Melanie; McIntyre, Alex; Brown, Charity; Hancock, Peter J B

    2013-06-01

    Research has indicated that traditional methods for accessing facial memories usually yield unidentifiable images. Recent research, however, has made important improvements in this area to the witness interview, the method used for constructing the face, and the recognition of finished composites. Here, we investigated whether three of these improvements would produce even more recognisable images when used in conjunction with each other. The techniques are holistic in nature: they involve processes which operate on an entire face. Forty participants first inspected an unfamiliar target face. Nominally 24 h later, they were interviewed using a standard type of cognitive interview (CI) to recall the appearance of the target, or an enhanced 'holistic' interview where the CI was followed by procedures for focussing on the target's character. Participants then constructed a composite using EvoFIT, a recognition-type system that requires repeatedly selecting items from face arrays, with 'breeding', to 'evolve' a composite. They either saw faces in these arrays with blurred external features, or an enhanced method where these faces were presented with masked external features. Then, further participants attempted to name the composites, first by looking at the face front-on, the normal method, and then for a second time by looking at the face side-on, which research demonstrates facilitates recognition. All techniques improved correct naming on their own, but together promoted highly recognisable composites with mean naming at 74% correct. The implication is that these techniques, if used together by practitioners, should substantially increase the detection of suspects using this forensic method of person identification. Copyright © 2013 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polese, Luigi Gentile; Brackney, Larry

    An image-based occupancy sensor includes a motion detection module that receives and processes an image signal to generate a motion detection signal, a people detection module that receives the image signal and processes the image signal to generate a people detection signal, a face detection module that receives the image signal and processes the image signal to generate a face detection signal, and a sensor integration module that receives the motion detection signal from the motion detection module, receives the people detection signal from the people detection module, receives the face detection signal from the face detection module, and generates an occupancy signal using the motion detection signal, the people detection signal, and the face detection signal, with the occupancy signal indicating vacancy or occupancy, with an occupancy indication specifying that one or more people are detected within the monitored volume.
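
    The sensor-integration step can be sketched as combining the three detection signals into a single occupancy flag. The OR-with-hold-time logic and the hold duration below are assumptions for illustration, not the patented integration method.

```python
# Minimal sketch of fusing motion, people, and face detection signals into an
# occupancy signal. The OR-with-hold-time rule is an assumption.
from dataclasses import dataclass

@dataclass
class OccupancySensor:
    hold_frames: int = 30          # keep "occupied" for this many frames after last detection
    _countdown: int = 0

    def update(self, motion: bool, people: bool, face: bool) -> bool:
        """Return True (occupied) or False (vacant) for the current frame."""
        if motion or people or face:
            self._countdown = self.hold_frames
        elif self._countdown > 0:
            self._countdown -= 1
        return self._countdown > 0

sensor = OccupancySensor()
for m, p, f in [(True, False, False), (False, False, True), (False, False, False)]:
    print(sensor.update(motion=m, people=p, face=f))
```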

  16. Reverse engineering the face space: Discovering the critical features for face identification.

    PubMed

    Abudarham, Naphtali; Yovel, Galit

    2016-01-01

    How do we identify people? What are the critical facial features that define an identity and determine whether two faces belong to the same person or different people? To answer these questions, we applied the face space framework, according to which faces are represented as points in a multidimensional feature space, such that face space distances are correlated with perceptual similarities between faces. In particular, we developed a novel method that allowed us to reveal the critical dimensions (i.e., critical features) of the face space. To that end, we constructed a concrete face space, which included 20 facial features of natural face images, and asked human observers to evaluate feature values (e.g., how thick are the lips). Next, we systematically and quantitatively changed facial features, and measured the perceptual effects of these manipulations. We found that critical features were those for which participants have high perceptual sensitivity (PS) for detecting differences across identities (e.g., which of two faces has thicker lips). Furthermore, these high PS features vary minimally across different views of the same identity, suggesting high PS features support face recognition across different images of the same face. The methods described here set an infrastructure for discovering the critical features of other face categories not studied here (e.g., Asians, familiar) as well as other aspects of face processing, such as attractiveness or trait inferences.

  17. Evaluation of a processing scheme for calcified atheromatous carotid artery detection in face/neck CBCT images

    NASA Astrophysics Data System (ADS)

    Matheus, B. R. N.; Centurion, B. S.; Rubira-Bullen, I. R. F.; Schiabel, H.

    2017-03-01

    Cone beam computed tomography (CBCT), a type of face and neck examination, can provide an opportunity to identify calcifications of the carotid artery (CACA) as an incidental finding. Given the similarity of CACA to calcifications found in several x-ray exams, this work suggests that a technique designed to detect breast calcifications in mammography images could be applied to detect such calcifications in CBCT. The method used a 3D version of the calcification detection technique [1], based on signal enhancement by convolution with a 3D Laplacian of Gaussian (LoG) function, followed by removal of the high-contrast bone structure from the image. Initial promising results show 71% sensitivity with 0.48 false positives per exam.
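
    The signal-enhancement step, convolving a volume with a 3D Laplacian of Gaussian to highlight small bright blobs, can be sketched with scipy. The sigma, threshold, and synthetic volume below are assumptions; the bone-removal step is not reproduced.

```python
# Sketch of 3D LoG blob enhancement for bright calcification-like structures.
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_candidates(volume, sigma=1.5, threshold=0.1):
    """Return a boolean mask of bright blob candidates in a 3-D volume."""
    response = -gaussian_laplace(volume.astype(float), sigma=sigma)  # bright blobs give large negative LoG
    return response > threshold * response.max()

# toy usage: a small bright sphere inside a noisy volume
rng = np.random.default_rng(1)
vol = 0.05 * rng.random((48, 48, 48))
zz, yy, xx = np.ogrid[:48, :48, :48]
vol[(zz - 24) ** 2 + (yy - 24) ** 2 + (xx - 24) ** 2 <= 9] = 1.0
mask = log_candidates(vol)
print("candidate voxels:", int(mask.sum()))
```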

  18. Simple Common Plane contact detection algorithm for FE/FD methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vorobiev, O

    2006-07-19

    The common-plane (CP) algorithm is widely used in the Discrete Element Method (DEM) to model contact forces between interacting particles or blocks. A new, simple contact detection algorithm, similar to the CP algorithm, is proposed to model contacts in FE/FD methods. The CP is defined as a plane separating interacting faces of the FE/FD mesh, instead of the blocks or particles used in the original CP method. The method does not require iterations. It is very robust and easy to implement in both the 2D and 3D cases.

  19. Seeing Objects as Faces Enhances Object Detection.

    PubMed

    Takahashi, Kohske; Watanabe, Katsumi

    2015-10-01

    The face is a special visual stimulus. Both bottom-up processes for low-level facial features and top-down modulation by face expectations contribute to the advantages of face perception. However, it is hard to dissociate the top-down factors from the bottom-up processes, since facial stimuli mandatorily lead to face awareness. In the present study, using the face pareidolia phenomenon, we demonstrated that face awareness, namely seeing an object as a face, enhances object detection performance. In face pareidolia, some people see a visual stimulus, for example, three dots arranged in V shape, as a face, while others do not. This phenomenon allows us to investigate the effect of face awareness leaving the stimulus per se unchanged. Participants were asked to detect a face target or a triangle target. While target per se was identical between the two tasks, the detection sensitivity was higher when the participants recognized the target as a face. This was the case irrespective of the stimulus eccentricity or the vertical orientation of the stimulus. These results demonstrate that seeing an object as a face facilitates object detection via top-down modulation. The advantages of face perception are, therefore, at least partly, due to face awareness.

  20. Seeing Objects as Faces Enhances Object Detection

    PubMed Central

    Takahashi, Kohske; Watanabe, Katsumi

    2015-01-01

    The face is a special visual stimulus. Both bottom-up processes for low-level facial features and top-down modulation by face expectations contribute to the advantages of face perception. However, it is hard to dissociate the top-down factors from the bottom-up processes, since facial stimuli mandatorily lead to face awareness. In the present study, using the face pareidolia phenomenon, we demonstrated that face awareness, namely seeing an object as a face, enhances object detection performance. In face pareidolia, some people see a visual stimulus, for example, three dots arranged in V shape, as a face, while others do not. This phenomenon allows us to investigate the effect of face awareness leaving the stimulus per se unchanged. Participants were asked to detect a face target or a triangle target. While target per se was identical between the two tasks, the detection sensitivity was higher when the participants recognized the target as a face. This was the case irrespective of the stimulus eccentricity or the vertical orientation of the stimulus. These results demonstrate that seeing an object as a face facilitates object detection via top-down modulation. The advantages of face perception are, therefore, at least partly, due to face awareness. PMID:27648219
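
    Detection sensitivity in such tasks is commonly summarized as d' from hit and false-alarm rates. The small sketch below uses a standard log-linear correction and illustrative counts; it is not necessarily the exact procedure or data from the study.

```python
# Computing detection sensitivity (d') from hit and false-alarm counts.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # log-linear correction avoids infinite z-scores for perfect rates
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# toy usage: the same stimuli judged once as "faces" and once as "triangles"
print(round(d_prime(hits=45, misses=5, false_alarms=8, correct_rejections=42), 2))
print(round(d_prime(hits=38, misses=12, false_alarms=8, correct_rejections=42), 2))
```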

  1. A Robust Shape Reconstruction Method for Facial Feature Point Detection.

    PubMed

    Tan, Shuqiu; Chen, Dongyi; Guo, Chenggang; Huang, Zhiqi

    2017-01-01

    Facial feature point detection has seen great research advances in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it is still a quite challenging task because of the large variability in expression and gestures and the existence of occlusions in real-world photo shoots. In this paper, we present a robust sparse reconstruction method for face alignment problems. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries, termed the shape increment dictionary and the local appearance dictionary, are learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalized, we select the best matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than state-of-the-art methods.

  2. A fast image retrieval method based on SVM and imbalanced samples in filtering multimedia message spam

    NASA Astrophysics Data System (ADS)

    Chen, Zhang; Peng, Zhenming; Peng, Lingbing; Liao, Dongyi; He, Xin

    2011-11-01

    With the rapid development of the Multimedia Messaging Service (MMS), filtering Multimedia Message (MM) spam effectively in real time has become an urgent task. Because most MMs contain images or videos, this paper presents an image-retrieval-based method for filtering MM spam. The detection method is a combination of skin-color detection, texture detection, and face detection, and the classifier for this imbalanced problem is a fast multi-class scheme combining a support vector machine (SVM) with a unilateral binary decision tree. Experiments on three test sets show that the proposed method is effective, with an interception rate up to 60% and an average detection time of less than 1 second per image.
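
    One common way to handle class imbalance in the SVM stage is class weighting, sketched below with scikit-learn on synthetic feature vectors. This illustrates the imbalance issue only; the feature names and the class-weighting choice are assumptions, and the paper's coupling of the SVM with a unilateral binary decision tree is not reproduced.

```python
# Class-weighted SVM on an imbalanced spam/ham toy dataset.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
# hypothetical per-image features (e.g., skin-colour ratio, texture energy, face count)
X_ham = rng.normal(0.0, 1.0, size=(950, 3))
X_spam = rng.normal(1.5, 1.0, size=(50, 3))           # minority class
X = np.vstack([X_ham, X_spam])
y = np.array([0] * 950 + [1] * 50)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = SVC(kernel="rbf", class_weight="balanced")      # up-weight the rare class
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```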

  3. Automated macromolecular crystal detection system and method

    DOEpatents

    Christian, Allen T [Tracy, CA; Segelke, Brent [San Ramon, CA; Rupp, Bernard [Livermore, CA; Toppani, Dominique [Fontainebleau, FR

    2007-06-05

    An automated macromolecular method and system for detecting crystals in two-dimensional images, such as light microscopy images obtained from an array of crystallization screens. Edges are detected from the images by identifying local maxima of a phase congruency-based function associated with each image. The detected edges are segmented into discrete line segments, which are subsequently geometrically evaluated with respect to each other to identify any crystal-like qualities such as, for example, parallel lines, facing each other, similarity in length, and relative proximity. And from the evaluation a determination is made as to whether crystals are present in each image.

  4. A Pulse Rate Detection Method for Mouse Application Based on Multi-PPG Sensors

    PubMed Central

    Chen, Wei-Hao

    2017-01-01

    Heart rate is an important physiological parameter for healthcare. Among measurement methods, photoplethysmography (PPG) is an easy and convenient method for pulse rate detection. However, because the PPG signal faces the challenge of motion artifacts and is constrained by the position chosen, the purpose of this paper is to implement a comfortable and easy-to-use multi-PPG sensor module, combined with a stable and accurate real-time pulse rate detection method, on a computer mouse. A weighted average method for multi-PPG sensors is used to adjust the weight of each signal channel in order to raise the accuracy and stability of the detected signal, thereby effectively and efficiently reducing noise disturbance under movement. According to the experimental results, the proposed method can increase the usability and probability of successful PPG signal detection on palms. PMID:28708112
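
    The weighted-average idea can be sketched as weighting each channel by an estimate of its signal quality before estimating the pulse rate from the fused signal. Using the inverse variance of the high-frequency residual as the weight, and a spectral-peak rate estimate, are assumptions; they are not the paper's exact weighting rule.

```python
# Quality-weighted fusion of multiple PPG channels, then pulse rate from the
# dominant spectral peak in the plausible heart-rate band.
import numpy as np

def fuse_ppg(channels, fs):
    """channels: (n_channels, n_samples) -> (fused signal, pulse rate in bpm)."""
    channels = channels - channels.mean(axis=1, keepdims=True)
    noise = np.var(np.diff(channels, axis=1), axis=1)     # crude per-channel quality estimate
    weights = 1.0 / (noise + 1e-12)
    weights /= weights.sum()
    fused = weights @ channels
    freqs = np.fft.rfftfreq(fused.size, d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(fused))
    band = (freqs >= 0.7) & (freqs <= 3.5)                # 42-210 bpm
    return fused, 60.0 * freqs[band][spectrum[band].argmax()]

# toy usage: two clean channels plus one very noisy channel at ~72 bpm
fs, t = 100, np.arange(0, 10, 1 / 100)
rng = np.random.default_rng(2)
clean = np.sin(2 * np.pi * 1.2 * t)
chans = np.vstack([clean + 0.05 * rng.standard_normal(t.size),
                   clean + 0.05 * rng.standard_normal(t.size),
                   clean + 1.50 * rng.standard_normal(t.size)])
_, bpm = fuse_ppg(chans, fs)
print(f"estimated pulse rate: {bpm:.1f} bpm")   # ~72 bpm
```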

  5. Differences in Looking at Own- and Other-Race Faces Are Subtle and Analysis-Dependent: An Account of Discrepant Reports.

    PubMed

    Arizpe, Joseph; Kravitz, Dwight J; Walsh, Vincent; Yovel, Galit; Baker, Chris I

    2016-01-01

    The Other-Race Effect (ORE) is the robust and well-established finding that people are generally poorer at facial recognition of individuals of another race than of their own race. Over the past four decades, much research has focused on the ORE because understanding this phenomenon is expected to elucidate fundamental face processing mechanisms and the influence of experience on such mechanisms. Several recent studies of the ORE in which the eye-movements of participants viewing own- and other-race faces were tracked have, however, reported highly conflicting results regarding the presence or absence of differential patterns of eye-movements to own- versus other-race faces. This discrepancy, of course, leads to conflicting theoretical interpretations of the perceptual basis for the ORE. Here we investigate fixation patterns to own- versus other-race (African and Chinese) faces for Caucasian participants using different analysis methods. While we detect statistically significant, though subtle, differences in fixation pattern using an Area of Interest (AOI) approach, we fail to detect significant differences when applying a spatial density map approach. Though there were no significant differences in the spatial density maps, the qualitative patterns matched the results from the AOI analyses reflecting how, in certain contexts, Area of Interest (AOI) analyses can be more sensitive in detecting the differential fixation patterns than spatial density analyses, due to spatial pooling of data with AOIs. AOI analyses, however, also come with the limitation of requiring a priori specification. These findings provide evidence that the conflicting reports in the prior literature may be at least partially accounted for by the differences in the statistical sensitivity associated with the different analysis methods employed across studies. Overall, our results suggest that detection of differences in eye-movement patterns can be analysis-dependent and rests on the assumptions inherent in the given analysis.

  6. Differences in Looking at Own- and Other-Race Faces Are Subtle and Analysis-Dependent: An Account of Discrepant Reports

    PubMed Central

    Arizpe, Joseph; Kravitz, Dwight J.; Walsh, Vincent; Yovel, Galit; Baker, Chris I.

    2016-01-01

    The Other-Race Effect (ORE) is the robust and well-established finding that people are generally poorer at facial recognition of individuals of another race than of their own race. Over the past four decades, much research has focused on the ORE because understanding this phenomenon is expected to elucidate fundamental face processing mechanisms and the influence of experience on such mechanisms. Several recent studies of the ORE in which the eye-movements of participants viewing own- and other-race faces were tracked have, however, reported highly conflicting results regarding the presence or absence of differential patterns of eye-movements to own- versus other-race faces. This discrepancy, of course, leads to conflicting theoretical interpretations of the perceptual basis for the ORE. Here we investigate fixation patterns to own- versus other-race (African and Chinese) faces for Caucasian participants using different analysis methods. While we detect statistically significant, though subtle, differences in fixation pattern using an Area of Interest (AOI) approach, we fail to detect significant differences when applying a spatial density map approach. Though there were no significant differences in the spatial density maps, the qualitative patterns matched the results from the AOI analyses reflecting how, in certain contexts, Area of Interest (AOI) analyses can be more sensitive in detecting the differential fixation patterns than spatial density analyses, due to spatial pooling of data with AOIs. AOI analyses, however, also come with the limitation of requiring a priori specification. These findings provide evidence that the conflicting reports in the prior literature may be at least partially accounted for by the differences in the statistical sensitivity associated with the different analysis methods employed across studies. Overall, our results suggest that detection of differences in eye-movement patterns can be analysis-dependent and rests on the assumptions inherent in the given analysis. PMID:26849447

  7. Three-dimensional face model reproduction method using multiview images

    NASA Astrophysics Data System (ADS)

    Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio

    1991-11-01

    This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence for teleconferencing. The goal of this research, as an integral component of a virtual space teleconferencing system, is to generate a three-dimensional face model from facial images, synthesize images of the model virtually viewed from different angles, and with natural shadow to suit the lighting conditions of the virtual space. The proposed method is as follows: first, front and side view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from front- and side-views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, the prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image. The personal face model, representing the individual character, is then reproduced. Next, an oblique view image is taken by TV camera. The feature points of the oblique view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image. The modified boundary of the personal face model is determined by using face direction, namely rotation angle, which is detected based on the extracted feature points. After the 3D model is established, the new images are synthesized by mapping facial texture onto the model.

  8. Back-Face Strain for Monitoring Stable Crack Extension in Precracked Flexure Specimens

    NASA Technical Reports Server (NTRS)

    Salem, Jonathan A.; Ghosn, Louis J.

    2010-01-01

    Calibrations relating back-face strain to crack length in precracked flexure specimens were developed for different strain gage sizes. The functions were verified via experimental compliance measurements of notched and precracked ceramic beams. Good agreement between the functions and experiments occurred, and fracture toughness was calculated via several operational methods: maximum test load and optically measured precrack length; load at 2 percent crack extension and optical precrack length; and maximum load and back-face strain crack length. All the methods gave very comparable results. The initiation toughness, K_Ii, was also estimated from the initial compliance and load. The results demonstrate that stability of precracked ceramic specimens tested in four-point flexure is a common occurrence, and that methods such as remotely monitored load-point displacement are only adequate for detecting stable extension of relatively deep cracks.

  9. A new method for skin color enhancement

    NASA Astrophysics Data System (ADS)

    Zeng, Huanzhao; Luo, Ronnier

    2012-01-01

    Skin tone is the most important color category among memory colors. Reproducing it pleasingly is an important factor in photographic color reproduction. Moving skin colors toward their preferred skin color center improves skin color preference in photographic color reproduction. Two key factors for successfully enhancing skin colors are: a method to detect original skin colors effectively even if they are shifted far away from the regular skin color region, and a method to morph skin colors toward a preferred skin color region properly without introducing artifacts. A method for skin color enhancement presented by the authors at the same conference last year applies a static skin color model for skin color detection, which may fail to detect skin colors that are far away from regular skin tones. In this paper, a new method using the combination of face detection and statistical skin color modeling is proposed to detect skin pixels and enhance skin colors more effectively.
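
    The enhancement idea, detecting skin-like pixels and pulling their chroma toward a preferred center, can be sketched with OpenCV. The coarse skin-color box, the target center, and the blend strength below are assumptions, and the paper's face-detection-assisted statistical skin model is not reproduced.

```python
# Pull skin-like chroma part-way toward a preferred skin-colour centre (YCrCb).
import numpy as np
import cv2

def enhance_skin(bgr, target_cb=110.0, target_cr=150.0, strength=0.3):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    y, cr, cb = cv2.split(ycrcb)                              # OpenCV stores Y, Cr, Cb
    mask = (cb > 77) & (cb < 127) & (cr > 133) & (cr < 173)   # coarse skin-colour box
    cb = np.where(mask, cb + strength * (target_cb - cb), cb)
    cr = np.where(mask, cr + strength * (target_cr - cr), cr)
    out = cv2.merge([y, cr, cb]).clip(0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_YCrCb2BGR)

# usage (file names are placeholders):
# cv2.imwrite("enhanced.jpg", enhance_skin(cv2.imread("portrait.jpg")))
```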

  10. Detection of Antibiotics and Evaluation of Antibacterial Activity with Screen-Printed Electrodes

    PubMed Central

    Titoiu, Ana Maria; Marty, Jean-Louis

    2018-01-01

    This review provides a brief overview of the fabrication and properties of screen-printed electrodes and details the different opportunities to apply them for the detection of antibiotics, detection of bacteria and antibiotic susceptibility. Among the alternative approaches to costly chromatographic or ELISA methods for antibiotics detection and to lengthy culture methods for bacteria detection, electrochemical biosensors based on screen-printed electrodes present some distinctive advantages. Chemical and (bio)sensors for the detection of antibiotics and assays coupling detection with screen-printed electrodes with immunomagnetic separation are described. With regards to detection of bacteria, the emphasis is placed on applications targeting viable bacterial cells. While the electrochemical sensors and biosensors face many challenges before replacing standard analysis methods, the potential of screen-printed electrodes is increasingly exploited and more applications are anticipated to advance towards commercial analytical tools. PMID:29562637

  11. Detection of emotional faces: salient physical features guide effective visual search.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2008-08-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features--especially the smiling mouth--is responsible for facilitated initial orienting, which thus shortens detection. (PsycINFO Database Record (c) 2008 APA, all rights reserved).

  12. Beauty hinders attention switch in change detection: the role of facial attractiveness and distinctiveness.

    PubMed

    Chen, Wenfeng; Liu, Chang Hong; Nakabayashi, Kazuyo

    2012-01-01

    Recent research has shown that the presence of a task-irrelevant attractive face can induce a transient diversion of attention from a perceptual task that requires covert deployment of attention to one of the two locations. However, it is not known whether this spontaneous appraisal for facial beauty also modulates attention in change detection among multiple locations, where a slower, and more controlled search process is simultaneously affected by the magnitude of a change and the facial distinctiveness. Using the flicker paradigm, this study examines how spontaneous appraisal for facial beauty affects the detection of identity change among multiple faces. Participants viewed a display consisting of two alternating frames of four faces separated by a blank frame. In half of the trials, one of the faces (target face) changed to a different person. The task of the participant was to indicate whether a change of face identity had occurred. The results showed that (1) observers were less efficient at detecting identity change among multiple attractive faces relative to unattractive faces when the target and distractor faces were not highly distinctive from one another; and (2) it is difficult to detect a change if the new face is similar to the old. The findings suggest that attractive faces may interfere with the attention-switch process in change detection. The results also show that attention in change detection was strongly modulated by physical similarity between the alternating faces. Although facial beauty is a powerful stimulus that has well-demonstrated priority, its influence on change detection is easily superseded by low-level image similarity. The visual system appears to take a different approach to facial beauty when a task requires resource-demanding feature comparisons.

  13. Rogue athletes and recombinant DNA technology: challenges for doping control.

    PubMed

    Azzazy, Hassan M E; Mansour, Mai M H

    2007-10-01

    The quest for athletic excellence holds no limit for some athletes, and the advances in recombinant DNA technology have handed these athletes the ultimate doping weapons: recombinant proteins and gene doping. Some detection methods are now available for several recombinant proteins that are commercially available as pharmaceuticals and being abused by dopers. However, researchers are struggling to come up with efficient detection methods in preparation for the imminent threat of gene doping, expected in the 2008 Olympics. This Forum article presents the main detection strategies for recombinant proteins and the forthcoming detection strategies for gene doping as well as the prime analytical challenges facing them.

  14. Face liveness detection using shearlet-based feature descriptors

    NASA Astrophysics Data System (ADS)

    Feng, Litong; Po, Lai-Man; Li, Yuming; Yuan, Fang

    2016-07-01

    Face recognition is a widely used biometric technology due to its convenience but it is vulnerable to spoofing attacks made by nonreal faces such as photographs or videos of valid users. The antispoof problem must be well resolved before widely applying face recognition in our daily life. Face liveness detection is a core technology to make sure that the input face is a live person. However, this is still very challenging using conventional liveness detection approaches of texture analysis and motion detection. The aim of this paper is to propose a feature descriptor and an efficient framework that can be used to effectively deal with the face liveness detection problem. In this framework, new feature descriptors are defined using a multiscale directional transform (shearlet transform). Then, stacked autoencoders and a softmax classifier are concatenated to detect face liveness. We evaluated this approach using the CASIA Face antispoofing database and replay-attack database. The experimental results show that our approach performs better than the state-of-the-art techniques following the provided protocols of these databases, and it is possible to significantly enhance the security of the face recognition biometric system. In addition, the experimental results also demonstrate that this framework can be easily extended to classify different spoofing attacks.

  15. An effective method on pornographic images realtime recognition

    NASA Astrophysics Data System (ADS)

    Wang, Baosong; Lv, Xueqiang; Wang, Tao; Wang, Chengrui

    2013-03-01

    In this paper, skin detection, texture filtering, and face detection are used to extract features from an image library; these features are used to train a decision tree whose rules serve as a classifier for distinguishing unknown images. In experiments based on more than twenty thousand images, the precision rate reaches 76.21% when testing on 13,025 pornographic images, with an elapsed time of less than 0.2 s per image. This experiment shows that the method has good general applicability. Among the steps mentioned above, a new skin detection model, called the irregular polygon region skin detection model and based on the YCbCr color space, is proposed. This skin detection model lowers the false detection rate of skin detection. A new method, called sequential region labeling of binary connected areas, computes features of connected areas; it is faster and needs less memory than other, recursive methods.

  16. Joint Transform Correlation for face tracking: elderly fall detection application

    NASA Astrophysics Data System (ADS)

    Katz, Philippe; Aron, Michael; Alfalou, Ayman

    2013-03-01

    In this paper, an iterative tracking algorithm based on a non-linear JTC (Joint Transform Correlator) architecture and enhanced by a digital image processing method is proposed and validated. This algorithm is based on the computation of a correlation plane in which the reference image is updated at each frame. For that purpose, we use the JTC technique in real time to track a patient (target image) in a room fitted with a video camera. The correlation plane is used to localize the target image in the current video frame (frame i). Then, the reference image to be used in the next frame (frame i+1) is updated according to the previous one (frame i). To validate our algorithm, our work is divided into two parts: (i) a large study based on different sequences with several situations and different JTC parameters is carried out in order to quantify their effects on the tracking performance (decimation, non-linearity coefficient, size of the correlation plane, size of the region of interest...); (ii) the tracking algorithm is integrated into an application for elderly fall detection. The first reference image is a face detected by means of Haar descriptors, which is then localized in each new video image by our tracking method. In order to avoid a bad update of the reference frame, a method based on a comparison of image intensity histograms is proposed and integrated into our algorithm. This step ensures robust tracking of the reference frame. This article focuses on the optimisation and evaluation of the face tracking step. A supplementary step of fall detection, based on vertical acceleration and position, will be added and studied in further work.
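
    The histogram-comparison safeguard for the reference update can be sketched with OpenCV. The correlation metric and the 0.8 threshold are assumptions; the JTC tracking itself is not reproduced here.

```python
# Only accept the newly tracked patch as the next reference if its intensity
# histogram stays close to the current reference's histogram.
import numpy as np
import cv2

def histogram_similarity(patch_a, patch_b, bins=64):
    ha = cv2.calcHist([patch_a], [0], None, [bins], [0, 256])
    hb = cv2.calcHist([patch_b], [0], None, [bins], [0, 256])
    cv2.normalize(ha, ha); cv2.normalize(hb, hb)
    return cv2.compareHist(ha, hb, cv2.HISTCMP_CORREL)

def maybe_update_reference(reference, tracked_patch, threshold=0.8):
    """Return the patch to use as the next JTC reference."""
    if histogram_similarity(reference, tracked_patch) >= threshold:
        return tracked_patch        # tracking looks consistent: follow the face
    return reference                # suspicious change (e.g., occlusion): keep old reference

# toy usage with grayscale uint8 patches
rng = np.random.default_rng(3)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
new = cv2.GaussianBlur(ref, (5, 5), 1)
print(histogram_similarity(ref, new) >= 0.8)
```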

  17. Cancer knowledge and misconceptions: a survey of immigrant Salvadorean women.

    PubMed

    Shankar, S; Figueroa-Valles, N

    1999-01-01

    This study assessed cancer knowledge, beliefs, and awareness of signs, symptoms, and early detection methods in immigrant Salvadorean women in the Washington, D.C. metropolitan area (DCMA). A face-to-face survey sampled 843 females aged 20 and above. Descriptive statistics were used to compute frequency of response for sociodemographic characteristics, beliefs, and awareness of signs, symptoms, and early detection methods. The sample's mean age was 34.5 years; 10% had no schooling, and 7.4% had more than a high school education. Sixty-six percent of the women worked full- or part-time; 16% had an annual income of $20,000 or more; and 26% reported having medical insurance. Thirty percent of the sample lacked knowledge of the etiology and spread of cancer. The statement, "Bumps on your body can cause cancer" was endorsed by 61.6%. Beliefs that "destiny cannot be changed" or "just about anything can cause cancer" were prevalent among 18.5%. "Cancer is a punishment from God" was believed by 10.9%. A general physical examination was the most frequent (82%) early detection method mentioned. The Pap test was identified by 24.2%, and mammography by 14%; 5.6% mentioned breast self examination. Similar to other Hispanics, immigrant Salvadorean women in DCMA demonstrated a lack of knowledge of cancer's signs and symptoms, and early detection methods of and beliefs about cancer. Educational programs designed specifically for immigrant Salvadorean women to increase their knowledge of cancer and prevention methods are essential.

  18. Cell boundary fault detection system

    DOEpatents

    Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN

    2009-05-05

    A method determines a nodal fault along the boundary, or face, of a computing cell. Nodes on adjacent cell boundaries communicate with each other, and the communications are analyzed to determine if a node or connection is faulty.

  19. Toward automated face detection in thermal and polarimetric thermal imagery

    NASA Astrophysics Data System (ADS)

    Gordon, Christopher; Acosta, Mark; Short, Nathan; Hu, Shuowen; Chan, Alex L.

    2016-05-01

    Visible spectrum face detection algorithms perform reliably under controlled lighting conditions. However, variations in illumination and the application of cosmetics can distort the features used by common face detectors, thereby degrading their detection performance. Thermal and polarimetric thermal facial imaging are relatively invariant to illumination and robust to the application of makeup, because they measure emitted radiation instead of reflected light. The objective of this work is to evaluate a government off-the-shelf wavelet-based naïve-Bayes face detection algorithm and a commercial off-the-shelf Viola-Jones cascade face detection algorithm on face imagery acquired in different spectral bands. New classifiers were trained using the Viola-Jones cascade object detection framework with preprocessed facial imagery. Preprocessing with Difference of Gaussians (DoG) filtering reduces the modality gap between facial signatures across the different spectral bands, thus enabling more correlated histogram of oriented gradients (HOG) features to be extracted from the preprocessed thermal and visible face images. Since the availability of training data is much more limited in the thermal spectrum than in the visible spectrum, it is not feasible to train a robust multi-modal face detector using thermal imagery alone. A large training dataset was therefore constituted from DoG-filtered visible and thermal imagery, which was subsequently used to generate a custom-trained Viola-Jones detector. A 40% increase in face detection rate was achieved on a testing dataset, as compared to the performance of a pre-trained/baseline face detector. Insights gained in this research are valuable for the development of more robust multi-modal face detectors.
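
    The DoG-preprocessing-plus-cascade idea can be sketched with OpenCV. The sigma values are assumptions, and OpenCV's stock frontal-face cascade stands in here for the custom-trained multi-modal detector described in the abstract.

```python
# Band-pass the image with a Difference of Gaussians, then run a Viola-Jones cascade.
import numpy as np
import cv2

def dog_filter(gray, sigma1=1.0, sigma2=2.0):
    g1 = cv2.GaussianBlur(gray.astype(np.float32), (0, 0), sigma1)
    g2 = cv2.GaussianBlur(gray.astype(np.float32), (0, 0), sigma2)
    dog = g1 - g2
    return cv2.normalize(dog, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def detect_faces(gray):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    return cascade.detectMultiScale(dog_filter(gray), scaleFactor=1.1, minNeighbors=5)

# usage (file name is a placeholder):
# faces = detect_faces(cv2.imread("face_image.png", cv2.IMREAD_GRAYSCALE))
# print(faces)  # array of (x, y, w, h) boxes
```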

  20. A comparison of moving object detection methods for real-time moving object detection

    NASA Astrophysics Data System (ADS)

    Roshan, Aditya; Zhang, Yun

    2014-06-01

    Moving object detection has a wide variety of applications, ranging from traffic monitoring, site monitoring, automatic theft identification, and face detection to military surveillance. Many methods have been developed for moving object detection, but it is very difficult to find one which works in all situations and with different types of videos. The purpose of this paper is to evaluate existing moving object detection methods, implementable in software on a desktop or laptop, for real-time object detection. Several moving object detection methods are noted in the literature, but few of them are suitable for real-time moving object detection, and most of those that are remain limited by the number of objects and the scene complexity. This paper evaluates the four most commonly used moving object detection methods: the background subtraction technique, the Gaussian mixture model, and wavelet-based and optical-flow-based methods. The work is based on an evaluation of these four methods using two different sets of cameras and two different scenes. The methods were implemented in MATLAB, and results are compared based on completeness of detected objects, noise, sensitivity to light changes, processing time, etc. The comparison shows that the optical-flow-based method took the least processing time and successfully detected the boundaries of moving objects, which implies that it can be implemented for real-time moving object detection.
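
    One of the compared approaches, Gaussian-mixture background subtraction, can be sketched with OpenCV's MOG2 subtractor. The parameters, the minimum blob area, and the file name below are assumptions, not the settings evaluated in the paper.

```python
# Frame-by-frame Gaussian-mixture background subtraction with simple blob extraction.
import cv2

def detect_moving_objects(video_path, min_area=500):
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                      # per-pixel foreground mask
        mask = cv2.medianBlur(mask, 5)                      # suppress isolated noise
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]
        yield frame, boxes
    cap.release()

# usage (path is a placeholder):
# for frame, boxes in detect_moving_objects("traffic.avi"):
#     print(len(boxes), "moving objects in this frame")
```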

  1. Automated facial attendance logger for students

    NASA Astrophysics Data System (ADS)

    Krithika, L. B.; Kshitish, S.; Kishore, M. R.

    2017-11-01

    Over the past two decades, face recognition has become an essential tool in various spheres of activity. The complete face recognition pipeline is composed of three stages: face detection, feature extraction, and recognition. In this paper, we put forth a new application of face recognition and detection in education. The proposed system scans the classroom, detects the faces of the students in class, matches each detected face against the templates available in a database, and updates the attendance of the respective students.

  2. Standoff imaging of a masked human face using a 670 GHz high resolution radar

    NASA Astrophysics Data System (ADS)

    Kjellgren, Jan; Svedin, Jan; Cooper, Ken B.

    2011-11-01

    This paper presents an exploratory attempt to use high-resolution radar measurements for face identification in forensic applications. An imaging radar system developed by JPL was used to measure a human face at 670 GHz. Frontal views of the face were measured both with and without a ski mask at a range of 25 m. The realized spatial resolution was roughly 1 cm in all three dimensions. The surfaces of the ski mask and the face were detected by using the two dominating reflections from amplitude data. Various methods for visualization of these surfaces are presented. The possibility to use radar data to determine certain face distance measures between well-defined face landmarks, typically used for anthropometric statistics, was explored. The measures used here were face length, frontal breadth and interpupillary distance. In many cases the radar system seems to provide sufficient information to exclude an innocent subject from suspicion. For an accurate identification it is believed that a system must provide significantly more information.

  3. The Effect of Early Visual Deprivation on the Development of Face Detection

    ERIC Educational Resources Information Center

    Mondloch, Catherine J.; Segalowitz, Sidney J.; Lewis, Terri L.; Dywan, Jane; Le Grand, Richard; Maurer, Daphne

    2013-01-01

    The expertise of adults in face perception is facilitated by their ability to rapidly detect that a stimulus is a face. In two experiments, we examined the role of early visual input in the development of face detection by testing patients who had been treated as infants for bilateral congenital cataract. Experiment 1 indicated that, at age 9 to…

  4. Using constrained information entropy to detect rare adverse drug reactions from medical forums.

    PubMed

    Yi Zheng; Chaowang Lan; Hui Peng; Jinyan Li

    2016-08-01

    Adverse drug reaction (ADR) detection is critical to avoid malpractice, yet challenging due to uncertainty in pre-marketing review and underreporting in post-marketing surveillance. To address this problem, social media based ADR detection methods have been proposed recently. However, existing studies are mostly co-occurrence based and face several issues, in particular missing rare ADRs and being unable to distinguish irrelevant ADRs. In this work, we introduce a constrained information entropy (CIE) method to solve these problems. CIE first recognizes the drug-related adverse reactions using a predefined keyword dictionary and then captures high- and low-frequency (rare) ADRs by information entropy. Extensive experiments on a medical forums dataset demonstrate that CIE outperforms the state-of-the-art co-occurrence based methods, especially in rare ADR detection.
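
    The abstract does not spell out the CIE formulation; the sketch below only illustrates the information-entropy ingredient, computing the Shannon entropy of a candidate ADR term's co-occurrence distribution across forum contexts. The grouping into threads and the counts are illustrative assumptions, and the paper's constraints and thresholds are not reproduced.

```python
# Shannon entropy of a candidate ADR term's co-occurrence distribution across
# forum posts -- only the entropy ingredient of the CIE method; the paper's
# constraints and decision rules are not reproduced here.
import math
from collections import Counter

def shannon_entropy(counts):
    """H = -sum p*log2(p) over raw co-occurrence counts."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

# Hypothetical counts: posts mentioning drug X together with the candidate
# term, grouped by context (e.g. per thread).
cooccurrence = Counter({"thread_1": 5, "thread_2": 1, "thread_3": 1})
print(round(shannon_entropy(cooccurrence.values()), 3))
```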

  5. Real-time traffic sign recognition based on a general purpose GPU and deep-learning.

    PubMed

    Lim, Kwangyong; Hong, Yongwon; Choi, Yeongwoo; Byun, Hyeran

    2017-01-01

    We present a General Purpose Graphics Processing Unit (GPGPU) based real-time traffic sign detection and recognition method that is robust against illumination changes. There have been many approaches to traffic sign recognition in various research fields; however, previous approaches faced several limitations under low illumination or a wide variance of lighting conditions. To overcome these drawbacks and improve processing speeds, we propose a method that 1) is robust against illumination changes, 2) uses GPGPU-based real-time traffic sign detection, and 3) performs region detection and recognition using a hierarchical model. This method produces stable results in low-illumination environments. Both detection and hierarchical recognition are performed in real time, and the proposed method achieves a 0.97 F1-score on our collected dataset, which uses the Vienna Convention traffic rules (Germany and South Korea).

  6. Methods to increase reporting of childhood sexual abuse in surveys: the sensitivity and specificity of face-to-face interviews versus a sealed envelope method in Ugandan primary school children.

    PubMed

    Barr, Anna Louise; Knight, Louise; Franҫa-Junior, Ivan; Allen, Elizabeth; Naker, Dipak; Devries, Karen M

    2017-02-23

    Underreporting of childhood sexual abuse is a major barrier to obtaining reliable prevalence estimates. We tested the sensitivity and specificity of the face-to-face interview (FTFI) method by comparing the number of disclosures of forced sex against a more confidential mode of data collection, the sealed-envelope method (SEM). We also report on characteristics of individuals associated with non-disclosure in FTFIs. Secondary analysis of data from a cross-sectional survey conducted in 2014, with n = 3843 children attending primary school in Luwero District, Uganda. Sensitivity and specificity were calculated, and mixed effects logistic regression models tested factors associated with disclosure in one or both modes. In the FTFI, 1.1% (n = 42) of children reported ever experiencing forced sex, compared to 7.0% (n = 268) in the SEM. The FTFI method demonstrated low sensitivity (13.1%, 95%CI 9.3-17.7%) and high specificity (99.8%, 95%CI 99.6-99.9%) in detecting cases of forced sex, when compared to the SEM. Boys were less likely than girls to disclose in the FTFI; however, there was no difference in prevalence by sex using the SEM (aOR = 0.91, 95%CI 0.7-1.2; P = 0.532). Disclosing experience of other forms of sexual violence was associated with experience of forced sex for both modes of disclosure. The SEM was superior to the FTFI in identifying cases of forced sex among primary school children, particularly for boys. Reporting of other forms of sexual violence in FTFIs may indicate experience of forced sex. Future survey research, and efforts to estimate prevalence of sexual violence, should make use of more confidential disclosure methods to detect childhood sexual abuse.
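
    For reference, sensitivity and specificity here follow the usual 2x2-table definitions; the short sketch below shows the arithmetic with placeholder counts (not the study's raw data).

```python
# Sensitivity and specificity from a 2x2 confusion table, as used to compare
# FTFI disclosures against the SEM reference. Counts below are placeholders,
# not the study's data.
def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

tp, fn, fp, tn = 12, 80, 3, 500   # hypothetical counts
print(f"sensitivity = {sensitivity(tp, fn):.1%}")
print(f"specificity = {specificity(tn, fp):.1%}")
```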

  7. A simple method for detection of gunshot residue particles from hands, hair, face, and clothing using scanning electron microscopy/wavelength dispersive X-ray (SEM/WDX).

    PubMed

    Kage, S; Kudo, K; Kaizoji, A; Ryumoto, J; Ikeda, H; Ikeda, N

    2001-07-01

    We devised a simple and rapid method for detection of gunshot residue (GSR) particles, using scanning electron microscopy/wavelength dispersive X-ray (SEM/WDX) analysis. Experiments were done on samples containing GSR particles obtained from hands, hair, face, and clothing, using double-sided adhesive coated aluminum stubs (tape-lift method). SEM/WDX analyses for GSR were carried out in three steps: the first step was map analysis for barium (Ba) to search for GSR particles from lead styphnate primed ammunition, or tin (Sn) to search for GSR particles from mercury fulminate primed ammunition. The second step was determination of the location of GSR particles by X-ray imaging of Ba or Sn at a magnification of ×1000-2000 in the SEM, using the map analysis data, and the third step was identification of GSR particles using WDX spectrometers. Analysis of samples from each primer type on a stub took about 3 h. Practical applications demonstrated the utility of this method.

  8. Thermal Inspection of Composite Honeycomb Structures

    NASA Technical Reports Server (NTRS)

    Zalameda, Joseph N.; Parker, F. Raymond

    2014-01-01

    Composite honeycomb structures continue to be widely used in aerospace applications due to their low weight and high strength advantages. Developing nondestructive evaluation (NDE) inspection methods is essential for their safe performance. Pulsed thermography is a commonly used technique for composite honeycomb structure inspections due to its large-area and rapid inspection capability. Pulsed thermography is shown to be sensitive for detection of face sheet impact damage and face sheet to core disbonds. Data processing techniques, using principal component analysis to improve the defect contrast, are presented. In addition, limitations to the thermal detection of the core are investigated. Other NDE techniques, such as computed tomography X-ray and ultrasound, are used for comparison to the thermography results.
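
    The principal-component contrast enhancement mentioned above is commonly implemented by decomposing the frame sequence over time; the following generic numpy sketch (synthetic data, not the authors' processing chain) reprojects a pulsed-thermography sequence onto its empirical orthogonal functions.

```python
# Principal-component reprojection of a thermography frame sequence to boost
# defect contrast (a generic sketch, not the authors' processing chain).
import numpy as np

frames = np.random.rand(200, 64, 64)          # placeholder: time x height x width
t, h, w = frames.shape
X = frames.reshape(t, h * w)                  # each column = one pixel's cooling curve
X = X - X.mean(axis=0, keepdims=True)         # remove the mean cooling behaviour

# SVD over time: rows of Vt are spatial empirical orthogonal functions.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
eof_images = Vt.reshape(-1, h, w)

second_component = eof_images[1]              # often highlights subsurface defects
```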

  9. Association and dissociation between detection and discrimination of objects of expertise: Evidence from visual search.

    PubMed

    Golan, Tal; Bentin, Shlomo; DeGutis, Joseph M; Robertson, Lynn C; Harel, Assaf

    2014-02-01

    Expertise in face recognition is characterized by high proficiency in distinguishing between individual faces. However, faces also enjoy an advantage at the early stage of basic-level detection, as demonstrated by efficient visual search for faces among nonface objects. In the present study, we asked (1) whether the face advantage in detection is a unique signature of face expertise, or whether it generalizes to other objects of expertise, and (2) whether expertise in face detection is intrinsically linked to expertise in face individuation. We compared how groups with varying degrees of object and face expertise (typical adults, developmental prosopagnosics [DP], and car experts) search for objects within and outside their domains of expertise (faces, cars, airplanes, and butterflies) among a variable set of object distractors. Across all three groups, search efficiency (indexed by reaction time slopes) was higher for faces and airplanes than for cars and butterflies. Notably, the search slope for car targets was considerably shallower in the car experts than in nonexperts. Although the mean face slope was slightly steeper among the DPs than in the other two groups, most of the DPs' search slopes were well within the normative range. This pattern of results suggests that expertise in object detection is indeed associated with expertise at the subordinate level, that it is not specific to faces, and that the two types of expertise are distinct facilities. We discuss the potential role of experience in bridging between low-level discriminative features and high-level naturalistic categories.

  10. Robust and Blind 3D Mesh Watermarking in Spatial Domain Based on Faces Categorization and Sorting

    NASA Astrophysics Data System (ADS)

    Molaei, Amir Masoud; Ebrahimnezhad, Hossein; Sedaaghi, Mohammad Hossein

    2016-06-01

    In this paper, a 3D watermarking algorithm in the spatial domain with blind detection is presented. In the proposed method, a negligible visual distortion is observed in the host model. Initially, a preprocessing step is applied to the 3D model to make it robust against geometric transformation attacks. Then, a number of triangle faces are determined as mark triangles using a novel systematic approach in which faces are categorized and sorted robustly. In order to enhance the capability of information retrieval under attacks, block watermarks are encoded using a Reed-Solomon block error-correcting code before embedding into the mark triangles. Next, the encoded watermarks are embedded in spherical coordinates. The proposed method is robust against additive noise, mesh smoothing, and quantization attacks. It also withstands geometric transformation and vertex/face reordering attacks. Moreover, the proposed algorithm is designed so that it is robust against the cropping attack. Simulation results confirm that the watermarked models exhibit very low distortion if the control parameters are selected properly. Comparison with other methods demonstrates that the proposed method performs well against mesh smoothing attacks.
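
    Two generic ingredients of such a scheme can be sketched briefly: Reed-Solomon coding of the watermark bytes (here via the third-party reedsolo package, an assumption, not the authors' code) and conversion of a vertex to spherical coordinates about the model center. The face categorization/sorting and the actual embedding rule are omitted.

```python
# Reed-Solomon coding of the watermark plus a Cartesian-to-spherical helper;
# only illustrative ingredients, not the paper's full watermarking algorithm.
import numpy as np
from reedsolo import RSCodec

rsc = RSCodec(8)                          # 8 parity bytes per block
watermark = b"owner-id-2016"
encoded = bytes(rsc.encode(watermark))    # bytes that would go into mark triangles

def to_spherical(v, center):
    """Cartesian vertex -> (r, theta, phi) about the model center."""
    x, y, z = np.asarray(v, dtype=float) - np.asarray(center, dtype=float)
    r = np.sqrt(x * x + y * y + z * z)
    theta = np.arccos(z / r) if r > 0 else 0.0   # polar angle
    phi = np.arctan2(y, x)                       # azimuth
    return r, theta, phi

print(len(encoded), to_spherical([1.0, 1.0, 1.0], [0.0, 0.0, 0.0]))
```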

  11. Finger tips detection for two handed gesture recognition

    NASA Astrophysics Data System (ADS)

    Bhuyan, M. K.; Kar, Mithun Kumar; Neog, Debanga Raj

    2011-10-01

    In this paper, a novel algorithm is proposed for fingertip detection in view of two-handed static hand pose recognition. In our method, the fingertips of both hands are detected after detecting the hand regions by skin color-based segmentation. At first, the face is removed from the image by using a Haar classifier, and subsequently, the regions corresponding to the gesturing hands are isolated by a region labeling technique. Next, the key geometric features characterizing the gesturing hands are extracted for the two hands. Finally, for all possible/allowable finger movements, a probabilistic model is developed for pose recognition. The proposed method can be employed in a variety of applications such as sign language recognition and human-robot interaction.
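
    As a rough sketch of the first stages only (face removal with a Haar classifier, then skin-color segmentation and region labeling), the code below uses OpenCV; the color thresholds and file name are illustrative assumptions, and the fingertip geometry and probabilistic pose model are not reproduced.

```python
# Sketch of the first stages only: remove the face with a Haar classifier,
# then isolate skin-colored hand regions by thresholding and region labeling.
import cv2
import numpy as np

img = cv2.imread("gesture.jpg")                       # hypothetical input
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
    img[y:y + h, x:x + w] = 0                         # blank out the face

ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
skin = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))   # rough skin band

# Region labeling: keep the two largest skin blobs as the gesturing hands.
n, labels, stats, _ = cv2.connectedComponentsWithStats(skin)
areas = stats[1:, cv2.CC_STAT_AREA]
hands = 1 + np.argsort(areas)[-2:]                    # label ids of two biggest blobs
hand_mask = np.isin(labels, hands).astype(np.uint8) * 255
```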

  12. Recovery of facial expressions using functional electrical stimulation after full-face transplantation.

    PubMed

    Topçu, Çağdaş; Uysal, Hilmi; Özkan, Ömer; Özkan, Özlenen; Polat, Övünç; Bedeloğlu, Merve; Akgül, Arzu; Döğer, Ela Naz; Sever, Refik; Çolak, Ömer Halil

    2018-03-06

    We assessed the recovery of 2 face transplantation patients with measures of complexity during neuromuscular rehabilitation. Cognitive rehabilitation methods and functional electrical stimulation were used to improve facial emotional expressions of full-face transplantation patients for 5 months. Rehabilitation and analyses were conducted at approximately 3 years after full facial transplantation in the patient group. We report complexity analysis of surface electromyography signals of these two patients in comparison to the results of 10 healthy individuals. Facial surface electromyography data were collected during 6 basic emotional expressions and 4 primary facial movements from 2 full-face transplantation patients and 10 healthy individuals to determine a strategy of functional electrical stimulation and understand the mechanisms of rehabilitation. A new personalized rehabilitation technique was developed using the wavelet packet method. Rehabilitation sessions were applied twice a month for 5 months. Subsequently, motor and functional progress was assessed by comparing the fuzzy entropy of surface electromyography data against the results obtained from patients before rehabilitation and the mean results obtained from 10 healthy subjects. At the end of personalized rehabilitation, the patient group showed improvements in their facial symmetry and their ability to perform basic facial expressions and primary facial movements. Similarity in the pattern of fuzzy entropy for facial expressions between the patient group and healthy individuals increased. Synkinesis was detected during primary facial movements in the patient group, and one patient showed synkinesis during the happiness expression. Synkinesis in the lower face region of one of the patients was eliminated for the lid tightening movement. The recovery of emotional expressions after personalized rehabilitation was satisfactory to the patients. The assessment with complexity analysis of sEMG data can be used for developing new neurorehabilitation techniques and detecting synkinesis after full-face transplantation.

  13. Drowsy driver mobile application: Development of a novel scleral-area detection method.

    PubMed

    Mohammad, Faisal; Mahadas, Kausalendra; Hung, George K

    2017-10-01

    A reliable and practical app for mobile devices was developed to detect driver drowsiness. It consisted of two main components: a Haar cascade classifier, provided by a computer vision framework called OpenCV, for face/eye detection; and a dedicated JAVA software code for image processing that was applied over a masked region circumscribing the eye. A binary threshold was performed over the masked region to provide a quantitative measure of the number of white pixels in the sclera, which represented the state of eye opening. A continuously low white-pixel count would indicate drowsiness, thereby triggering an alarm to alert the driver. This system was successfully implemented on: (1) a static face image, (2) two subjects under laboratory conditions, and (3) a subject in a vehicle environment. Copyright © 2017 Elsevier Ltd. All rights reserved.
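
    The white-pixel (sclera) measure described above can be sketched as follows: detect an eye region with a Haar cascade, binarize it, and count bright pixels; a persistently low count would trigger the alarm. The thresholds and file name are assumptions, not the paper's Java implementation.

```python
# Sketch of the sclera white-pixel measure, not the paper's Java code.
import cv2

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def sclera_pixel_count(frame_gray):
    eyes = eye_cascade.detectMultiScale(frame_gray, 1.1, 5)
    if len(eyes) == 0:
        return 0
    x, y, w, h = eyes[0]
    eye = frame_gray[y:y + h, x:x + w]
    _, binary = cv2.threshold(eye, 170, 255, cv2.THRESH_BINARY)  # bright sclera
    return int(cv2.countNonZero(binary))

# Example: flag drowsiness when the count stays below a limit (hypothetical).
frame = cv2.imread("driver.jpg", cv2.IMREAD_GRAYSCALE)
drowsy = sclera_pixel_count(frame) < 50
```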

  14. Effects of boundary-layer separation controllers on a desktop fume hood.

    PubMed

    Huang, Rong Fung; Chen, Jia-Kun; Hsu, Ching Min; Hung, Shuo-Fu

    2016-10-02

    A desktop fume hood installed with an innovative design of flow boundary-layer separation controllers on the leading edges of the side plates, work surface, and corners was developed and characterized for its flow and containment leakage characteristics. The geometric features of the developed desktop fume hood included a rearward offset suction slot, two side plates, two side-plate boundary-layer separation controllers on the leading edges of the side plates, a slanted surface on the leading edge of the work surface, and two small triangular plates on the upper left and right corners of the hood face. The flow characteristics were examined using the laser-assisted smoke flow visualization technique. The containment leakages were measured by the tracer gas (sulphur hexafluoride) detection method on the hood face plane with a mannequin installed in front of the hood. The results of flow visualization showed that the smoke dispersions induced by the boundary-layer separations on the leading edges of the side plates and work surface, as well as the three-dimensional complex flows on the upper-left and -right corners of the hood face, were effectively alleviated by the boundary-layer separation controllers. The results of the tracer gas detection method with a mannequin standing in front of the hood showed that the leakage levels were negligibly small (≤0.003 ppm) at low face velocities (≥0.19 m/s).

  15. Optogenetic and pharmacological suppression of spatial clusters of face neurons reveal their causal role in face gender discrimination.

    PubMed

    Afraz, Arash; Boyden, Edward S; DiCarlo, James J

    2015-05-26

    Neurons that respond more to images of faces over nonface objects were identified in the inferior temporal (IT) cortex of primates three decades ago. Although it is hypothesized that perceptual discrimination between faces depends on the neural activity of IT subregions enriched with "face neurons," such a causal link has not been directly established. Here, using optogenetic and pharmacological methods, we reversibly suppressed the neural activity in small subregions of IT cortex of macaque monkeys performing a facial gender-discrimination task. Each type of intervention independently demonstrated that suppression of IT subregions enriched in face neurons induced a contralateral deficit in face gender-discrimination behavior. The same neural suppression of other IT subregions produced no detectable change in behavior. These results establish a causal link between the neural activity in IT face neuron subregions and face gender-discrimination behavior. Also, the demonstration that brief neural suppression of specific spatial subregions of IT induces behavioral effects opens the door for applying the technical advantages of optogenetics to a systematic attack on the causal relationship between IT cortex and high-level visual perception.

  16. Effective connectivities of cortical regions for top-down face processing: A Dynamic Causal Modeling study

    PubMed Central

    Li, Jun; Liu, Jiangang; Liang, Jimin; Zhang, Hongchuan; Zhao, Jizheng; Rieth, Cory A.; Huber, David E.; Li, Wu; Shi, Guangming; Ai, Lin; Tian, Jie; Lee, Kang

    2013-01-01

    To study top-down face processing, the present study used an experimental paradigm in which participants detected non-existent faces in pure noise images. Conventional BOLD signal analysis identified three regions involved in this illusory face detection. These regions included the left orbitofrontal cortex (OFC) in addition to the right fusiform face area (FFA) and right occipital face area (OFA), both of which were previously known to be involved in both top-down and bottom-up processing of faces. We used Dynamic Causal Modeling (DCM) and Bayesian model selection to further analyze the data, revealing both intrinsic and modulatory effective connectivities among these three cortical regions. Specifically, our results support the claim that the orbitofrontal cortex plays a crucial role in the top-down processing of faces by regulating the activities of the occipital face area, and the occipital face area in turn detects the illusory face features in the visual stimuli and then provides this information to the fusiform face area for further analysis. PMID:20423709

  17. A Viola-Jones based hybrid face detection framework

    NASA Astrophysics Data System (ADS)

    Murphy, Thomas M.; Broussard, Randy; Schultz, Robert; Rakvic, Ryan; Ngo, Hau

    2013-12-01

    Improvements in face detection performance would benefit many applications. The OpenCV library implements a standard solution, the Viola-Jones detector, with a statistically boosted rejection cascade of binary classifiers. Empirical evidence has shown that Viola-Jones underdetects in some instances. This research shows that a truncated cascade augmented by a neural network can recover these undetected faces. A hybrid framework is constructed, with a truncated Viola-Jones cascade followed by an artificial neural network used to refine the face decision. A truncation stage that captured all faces and allowed the neural network to remove the false alarms was selected. A feedforward backpropagation network with one hidden layer is trained to discriminate faces based upon the thresholding (detection) values of intermediate stages of the full rejection cascade. A clustering algorithm is used as a precursor to the neural network, to group significant overlappings. Evaluated on the CMU/VASC Image Database, comparison with an unmodified OpenCV approach shows: (1) a 37% increase in detection rates if constrained by the requirement of no increase in false alarms, (2) a 48% increase in detection rates if some additional false alarms are tolerated, and (3) an 82% reduction in false alarms with no reduction in detection rates. These results demonstrate improved face detection and could address the need for such improvement in various applications.
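
    OpenCV's Python API does not expose cascade truncation or per-stage thresholds directly, so the sketch below only approximates the hybrid idea: a permissive Viola-Jones pass produces face candidates, and a small neural network (scikit-learn MLP, trained here on placeholder data) re-scores each candidate patch to reject false alarms. It is not the paper's framework.

```python
# Approximation of the hybrid idea: permissive Viola-Jones candidates,
# then a small neural network filters false alarms on resized patches.
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def candidate_patches(gray):
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=1)
    for (x, y, w, h) in boxes:
        patch = cv2.resize(gray[y:y + h, x:x + w], (24, 24)).ravel() / 255.0
        yield (x, y, w, h), patch

# The verifier would be trained offline on labeled face / non-face patches;
# random placeholder data is used here only so the sketch runs end to end.
verifier = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
X_train = np.random.rand(200, 24 * 24)
y_train = np.random.randint(0, 2, 200)
verifier.fit(X_train, y_train)

gray = cv2.imread("group_photo.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical image
faces = [box for box, patch in candidate_patches(gray)
         if verifier.predict([patch])[0] == 1]
```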

  18. Real-time traffic sign recognition based on a general purpose GPU and deep-learning

    PubMed Central

    Hong, Yongwon; Choi, Yeongwoo; Byun, Hyeran

    2017-01-01

    We present a General Purpose Graphics Processing Unit (GPGPU) based real-time traffic sign detection and recognition method that is robust against illumination changes. There have been many approaches to traffic sign recognition in various research fields; however, previous approaches faced several limitations under low illumination or a wide variance of lighting conditions. To overcome these drawbacks and improve processing speeds, we propose a method that 1) is robust against illumination changes, 2) uses GPGPU-based real-time traffic sign detection, and 3) performs region detection and recognition using a hierarchical model. This method produces stable results in low-illumination environments. Both detection and hierarchical recognition are performed in real time, and the proposed method achieves a 0.97 F1-score on our collected dataset, which uses the Vienna Convention traffic rules (Germany and South Korea). PMID:28264011

  19. Tweaked residual convolutional network for face alignment

    NASA Astrophysics Data System (ADS)

    Du, Wenchao; Li, Ke; Zhao, Qijun; Zhang, Yi; Chen, Hu

    2017-08-01

    We propose a novel Tweaked Residual Convolutional Network approach for face alignment with a two-level convolutional network architecture. Specifically, the first-level Tweaked Convolutional Network (TCN) module quickly predicts a preliminary but sufficiently accurate landmark estimate by taking a low-resolution version of the detected face holistically as input. The following Residual Convolutional Network (RCN) module progressively refines each landmark by taking as input the local patch extracted around the predicted landmark, which allows the Convolutional Neural Network (CNN) to extract local shape-indexed features and fine-tune the landmark position. Extensive evaluations show that the proposed Tweaked Residual Convolutional Network approach outperforms existing methods.

  20. Noseleaf pit in Egyptian slit-faced bat as a doubly curved reflector

    NASA Astrophysics Data System (ADS)

    Zhuang, Qiao; Wang, Xiao-Min; Li, Ming-Xuan; Mao, Jie; Wang, Fu-Xun

    2012-02-01

    Noseleaves in slit-faced bats have been hypothesized to affect the sonar beam. Using numerical methods, we show that the pit in the noseleaf of an Egyptian slit-faced bat has an effect on focusing the acoustic near field as well as shaping the radiation patterns and hence enhancing the directionality. The underlying physical mechanism suggested by the properties of the effect is that the pit acts as a doubly curved reflector. Thanks to the pit the beam shape is overall directional and more selectively widened at the high end of the biosonar frequency range to improve spatial coverage and detectability of targets.

  1. A quick eye to anger: An investigation of a differential effect of facial features in detecting angry and happy expressions.

    PubMed

    Lo, L Y; Cheng, M Y

    2017-06-01

    Detection of angry and happy faces is generally found to be easier and faster than that of faces expressing emotions other than anger or happiness. This can be explained by the threatening account and the feature account. Few empirical studies have explored the interaction between these two accounts which are seemingly, but not necessarily, mutually exclusive. The present studies hypothesised that prominent facial features are important in facilitating the detection process of both angry and happy expressions; yet the detection of happy faces was more facilitated by the prominent features than angry faces. Results confirmed the hypotheses and indicated that participants reacted faster to the emotional expressions with prominent features (in Study 1) and the detection of happy faces was more facilitated by the prominent feature than angry faces (in Study 2). The findings are compatible with evolutionary speculation which suggests that the angry expression is an alarming signal of potential threats to survival. Compared to the angry faces, the happy faces need more salient physical features to obtain a similar level of processing efficiency. © 2015 International Union of Psychological Science.

  2. Eye coding mechanisms in early human face event-related potentials.

    PubMed

    Rousselet, Guillaume A; Ince, Robin A A; van Rijsbergen, Nicola J; Schyns, Philippe G

    2014-11-10

    In humans, the N170 event-related potential (ERP) is an integrated measure of cortical activity that varies in amplitude and latency across trials. Researchers often conjecture that N170 variations reflect cortical mechanisms of stimulus coding for recognition. Here, to settle the conjecture and understand cortical information processing mechanisms, we unraveled the coding function of N170 latency and amplitude variations in possibly the simplest socially important natural visual task: face detection. On each experimental trial, 16 observers saw face and noise pictures sparsely sampled with small Gaussian apertures. Reverse-correlation methods coupled with information theory revealed that the presence of the eye specifically covaries with behavioral and neural measurements: the left eye strongly modulates reaction times and lateral electrodes represent mainly the presence of the contralateral eye during the rising part of the N170, with maximum sensitivity before the N170 peak. Furthermore, single-trial N170 latencies code more about the presence of the contralateral eye than N170 amplitudes and early latencies are associated with faster reaction times. The absence of these effects in control images that did not contain a face refutes alternative accounts based on retinal biases or allocation of attention to the eye location on the face. We conclude that the rising part of the N170, roughly 120-170 ms post-stimulus, is a critical time-window in human face processing mechanisms, reflecting predominantly, in a face detection task, the encoding of a single feature: the contralateral eye. © 2014 ARVO.

  3. Fast hierarchical knowledge-based approach for human face detection in color images

    NASA Astrophysics Data System (ADS)

    Jiang, Jun; Gong, Jie; Zhang, Guilin; Hu, Ruolan

    2001-09-01

    This paper presents a fast hierarchical knowledge-based approach for automatically detecting multi-scale upright faces in still color images. The approach consists of three levels. At the highest level, skin-like regions are determined by a skin model based on the color attributes hue and saturation in HSV color space, as well as the color attributes red and green in normalized color space. In level 2, a new eye model is devised to select human face candidates in the segmented skin-like regions. An important feature of the eye model is that it is independent of face scale, so it is possible to find faces at different scales while scanning the image only once, which greatly reduces the computation time of face detection. In level 3, a human face mosaic image model, which is well matched to the physical structure of the human face, is applied to judge whether faces are present in the candidate regions. This model includes edge and gray rules. Experimental results show that the approach is robust and fast, with broad application prospects in human-computer interaction, video telephony, etc.
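
    Only the level-1 skin model lends itself to a compact sketch: a hue/saturation band in HSV combined with a red-greater-than-green rule in normalized rgb. The numeric bounds below are illustrative assumptions, not the paper's tuned values, and the eye and mosaic models are omitted.

```python
# Level-1 skin segmentation only: HSV hue/saturation band combined with a
# normalized-rgb rule; bounds are illustrative assumptions.
import cv2
import numpy as np

img = cv2.imread("portrait.jpg").astype(np.float32)      # hypothetical input
hsv = cv2.cvtColor(img.astype(np.uint8), cv2.COLOR_BGR2HSV)

h, s, _ = cv2.split(hsv)
hsv_mask = ((h < 25) | (h > 165)) & (s > 40)             # reddish hues, non-gray

b, g, r = cv2.split(img)
total = b + g + r + 1e-6
r_norm, g_norm = r / total, g / total
rg_mask = (r_norm > g_norm) & (r_norm > 0.35)            # red dominates green

skin_like = (hsv_mask & rg_mask).astype(np.uint8) * 255
```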

  4. Calculation of stresses in a rock mass and lining in stagewise face drivage

    NASA Astrophysics Data System (ADS)

    Seryakov, VM; Zhamalova, BR

    2018-03-01

    Using a method for calculating the mechanical state of a rock mass under the conditions of stagewise drivage of a production face in large cross-section excavations, the specific features of stress redistribution in the excavation lining are found. The zones of tensile stresses in the lining are detected. The authors discuss the influence of the initial stress state of the rocks on the tensile stress zones induced in the lining in the course of the heading advance.

  5. The Face in the Crowd Effect Unconfounded: Happy Faces, Not Angry Faces, Are More Efficiently Detected in Single- and Multiple-Target Visual Search Tasks

    ERIC Educational Resources Information Center

    Becker, D. Vaughn; Anderson, Uriah S.; Mortensen, Chad R.; Neufeld, Samantha L.; Neel, Rebecca

    2011-01-01

    Is it easier to detect angry or happy facial expressions in crowds of faces? The present studies used several variations of the visual search task to assess whether people selectively attend to expressive faces. Contrary to widely cited studies (e.g., Ohman, Lundqvist, & Esteves, 2001) that suggest angry faces "pop out" of crowds, our review of…

  6. Multivoxel patterns in face-sensitive temporal regions reveal an encoding schema based on detecting life in a face.

    PubMed

    Looser, Christine E; Guntupalli, Jyothi S; Wheatley, Thalia

    2013-10-01

    More than a decade of research has demonstrated that faces evoke prioritized processing in a 'core face network' of three brain regions. However, whether these regions prioritize the detection of global facial form (shared by humans and mannequins) or the detection of life in a face has remained unclear. Here, we dissociate form-based and animacy-based encoding of faces by using animate and inanimate faces with human form (humans, mannequins) and dog form (real dogs, toy dogs). We used multivariate pattern analysis of BOLD responses to uncover the representational similarity space for each area in the core face network. Here, we show that only responses in the inferior occipital gyrus are organized by global facial form alone (human vs dog) while animacy becomes an additional organizational priority in later face-processing regions: the lateral fusiform gyri (latFG) and right superior temporal sulcus. Additionally, patterns evoked by human faces were maximally distinct from all other face categories in the latFG and parts of the extended face perception system. These results suggest that once a face configuration is perceived, faces are further scrutinized for whether the face is alive and worthy of social cognitive resources.

  7. High-emulation mask recognition with high-resolution hyperspectral video capture system

    NASA Astrophysics Data System (ADS)

    Feng, Jiao; Fang, Xiaojing; Li, Shoufeng; Wang, Yongjin

    2014-11-01

    We present a method for distinguishing a human face from a high-emulation mask, which is increasingly used by criminals for activities such as stealing card numbers and passwords at ATMs. Traditional facial recognition techniques have difficulty detecting such camouflaged criminals. In this paper, we use a high-resolution hyperspectral video capture system to detect high-emulation masks. An RGB camera is used for traditional facial recognition. A prism and a grayscale camera are used to capture spectral information of the observed face. Experiments show that a mask made of silica gel has a different spectral reflectance compared with human skin. As a multispectral image offers additional spectral information about physical characteristics, a high-emulation mask can be easily recognized.

  8. Technology survey on video face tracking

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Gomes, Herman Martins

    2014-03-01

    With the pervasiveness of monitoring cameras installed in public areas, schools, hospitals, work places and homes, video analytics technologies for interpreting these video contents are becoming increasingly relevant to people's lives. Among such technologies, human face detection and tracking (and face identification in many cases) are particularly useful in various application scenarios. While plenty of research has been conducted on face tracking and many promising approaches have been proposed, there are still significant challenges in recognizing and tracking people in videos with uncontrolled capturing conditions, largely due to pose and illumination variations, as well as occlusions and cluttered backgrounds. It is especially complex to track and identify multiple people simultaneously in real time due to the large amount of computation involved. In this paper, we present a survey of the literature and software published or developed in recent years on the face tracking topic. The survey covers the following topics: 1) mainstream and state-of-the-art face tracking methods, including features used to model the targets and metrics used for tracking; 2) face identification and face clustering from face sequences; and 3) software packages or demonstrations that are available for algorithm development or trial. A number of publicly available databases for face tracking are also introduced.

  9. Image Classification for Web Genre Identification

    DTIC Science & Technology

    2012-01-01

    recognition and landscape detection using the computer vision toolkit OpenCV. For facial recognition, we researched the possibilities of using the...method for connecting these names with a face/personal photo and logo respectively. [2] METHODOLOGY For this project, we focused primarily on facial...

  10. Detection of Emotional Faces: Salient Physical Features Guide Effective Visual Search

    ERIC Educational Resources Information Center

    Calvo, Manuel G.; Nummenmaa, Lauri

    2008-01-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent,…

  11. Right wing authoritarianism is associated with race bias in face detection

    PubMed Central

    Bret, Amélie; Beffara, Brice; McFadyen, Jessica; Mermillod, Martial

    2017-01-01

    Racial discrimination can be observed in a wide range of psychological processes, including even the earliest phases of face detection. It remains unclear, however, whether racially-biased low-level face processing is influenced by ideologies, such as right wing authoritarianism or social dominance orientation. In the current study, we hypothesized that socio-political ideologies such as these can substantially predict perceptive racial bias during early perception. To test this hypothesis, 67 participants detected faces within arrays of neutral objects. The faces were either Caucasian (in-group) or North African (out-group) and had either a neutral or an angry expression. Results showed that participants with higher self-reported right-wing authoritarianism were more likely to show slower response times for detecting out-group vs. in-group faces. We interpreted our results according to the Dual Process Motivational Model and suggest that socio-political ideologies may foster early racial bias via attentional disengagement. PMID:28692705

  12. Facial detection using deep learning

    NASA Astrophysics Data System (ADS)

    Sharma, Manik; Anuradha, J.; Manne, H. K.; Kashyap, G. S. C.

    2017-11-01

    In the recent past, we have observed that Facebook has developed an uncanny ability to recognize people in photographs. Previously, we had to tag people in photos by clicking on them and typing their name. Now, as soon as we upload a photo, Facebook tags everyone on its own. Facebook can recognize faces with 98% accuracy, which is pretty much as good as humans can do. This technology is called face detection. Face detection is a popular topic in biometrics. We have surveillance cameras in public places for video capture as well as security purposes. The main advantages of this algorithm over others are uniqueness and approval. Speed and accuracy are needed for identification. But face detection is really a series of several related problems: First, look at a picture and find all the faces in it. Second, focus on each face and understand that even if a face is turned in a weird direction or in bad lighting, it is still the same person. Third, select features that can be used to identify each face uniquely, such as the size of the eyes, the face, etc. Finally, compare these features to the data we have to find the person's name. As a human, your brain is wired to do all of this automatically and instantly. In fact, humans are too good at recognizing faces. Computers are not capable of this kind of high-level generalization, so we must teach them how to do each step in this process separately. The growth of face detection is largely driven by growing applications such as credit card verification, surveillance video images, and authentication for banking and security system access.

  13. Three-dimensional face pose detection and tracking using monocular videos: tool and application.

    PubMed

    Dornaika, Fadi; Raducanu, Bogdan

    2009-08-01

    Recently, we have proposed a real-time tracker that simultaneously tracks the 3-D head pose and facial actions in monocular video sequences that can be provided by low quality cameras. This paper has two main contributions. First, we propose an automatic 3-D face pose initialization scheme for the real-time tracker by adopting a 2-D face detector and an eigenface system. Second, we use the proposed methods-the initialization and tracking-for enhancing the human-machine interaction functionality of an AIBO robot. More precisely, we show how the orientation of the robot's camera (or any active vision system) can be controlled through the estimation of the user's head pose. Applications based on head-pose imitation such as telepresence, virtual reality, and video games can directly exploit the proposed techniques. Experiments on real videos confirm the robustness and usefulness of the proposed methods.

  14. Spatial Mechanisms within the Dorsal Visual Pathway Contribute to the Configural Processing of Faces.

    PubMed

    Zachariou, Valentinos; Nikas, Christine V; Safiullah, Zaid N; Gotts, Stephen J; Ungerleider, Leslie G

    2017-08-01

    Human face recognition is often attributed to configural processing; namely, processing the spatial relationships among the features of a face. If configural processing depends on fine-grained spatial information, do visuospatial mechanisms within the dorsal visual pathway contribute to this process? We explored this question in human adults using functional magnetic resonance imaging and transcranial magnetic stimulation (TMS) in a same-different face detection task. Within localized, spatial-processing regions of the posterior parietal cortex, configural face differences led to significantly stronger activation compared to featural face differences, and the magnitude of this activation correlated with behavioral performance. In addition, detection of configural relative to featural face differences led to significantly stronger functional connectivity between the right FFA and the spatial processing regions of the dorsal stream, whereas detection of featural relative to configural face differences led to stronger functional connectivity between the right FFA and left FFA. Critically, TMS centered on these parietal regions impaired performance on configural but not featural face difference detections. We conclude that spatial mechanisms within the dorsal visual pathway contribute to the configural processing of facial features and, more broadly, that the dorsal stream may contribute to the veridical perception of faces. Published by Oxford University Press 2016.

  15. A level-set method for pathology segmentation in fluorescein angiograms and en face retinal images of patients with age-related macular degeneration

    NASA Astrophysics Data System (ADS)

    Mohammad, Fatimah; Ansari, Rashid; Shahidi, Mahnaz

    2013-03-01

    The visibility and continuity of the inner segment outer segment (ISOS) junction layer of the photoreceptors on spectral domain optical coherence tomography images is known to be related to visual acuity in patients with age-related macular degeneration (AMD). Automatic detection and segmentation of lesions and pathologies in retinal images is crucial for the screening, diagnosis, and follow-up of patients with retinal diseases. One of the challenges of using the classical level-set algorithms for segmentation involves the placement of the initial contour. Manually defining the contour or randomly placing it in the image may lead to segmentation of erroneous structures. It is important to be able to automatically define the contour by using information provided by image features. We explored a level-set method which is based on the classical Chan-Vese model and which utilizes image feature information for automatic contour placement for the segmentation of pathologies in fluorescein angiograms and en face retinal images of the ISOS layer. This was accomplished by exploiting a priori knowledge of the shape and intensity distribution, allowing the use of projection profiles to detect the presence of pathologies that are characterized by intensity differences with surrounding areas in retinal images. We first tested our method by applying it to fluorescein angiograms. We then applied our method to en face retinal images of patients with AMD. The experimental results demonstrate that the proposed method provided a quick and improved outcome as compared to the classical Chan-Vese method in which the initial contour is randomly placed, thus indicating the potential to provide a more accurate and detailed view of changes in pathologies due to disease progression and treatment.
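
    A minimal sketch of level-set segmentation in this spirit is given below, using scikit-image's morphological Chan-Vese variant as a stand-in for the classical Chan-Vese model used by the authors; their projection-profile contour placement is replaced by a crude bright/dark initialization, and the file name is hypothetical.

```python
# Morphological Chan-Vese segmentation with a data-driven initial level set;
# a stand-in sketch, not the authors' feature-based initialization.
import numpy as np
from skimage import io, img_as_float
from skimage.segmentation import morphological_chan_vese

image = img_as_float(io.imread("en_face_isos.png", as_gray=True))  # hypothetical

# Crude initialization: start from pixels darker than the image mean, where
# ISOS disruption or leakage typically shows up as an intensity change.
init = (image < image.mean()).astype(np.int8)

segmentation = morphological_chan_vese(image, 120, init_level_set=init,
                                       smoothing=3)
```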

  16. Low-complexity object detection with deep convolutional neural network for embedded systems

    NASA Astrophysics Data System (ADS)

    Tripathi, Subarna; Kang, Byeongkeun; Dane, Gokce; Nguyen, Truong

    2017-09-01

    We investigate low-complexity convolutional neural networks (CNNs) for object detection for embedded vision applications. It is well known that building an embedded system for CNN-based object detection is more challenging than for problems like image classification because of its computation and memory requirements. To achieve these requirements, we design and develop an end-to-end TensorFlow (TF)-based fully convolutional deep neural network for the generic object detection task, inspired by one of the fastest frameworks, YOLO. The proposed network predicts the localization of every object by regressing the coordinates of the corresponding bounding box, as in YOLO. Hence, the network is able to detect objects without any limitation on their size. However, unlike YOLO, all the layers in the proposed network are fully convolutional. Thus, it is able to take input images of any size. We pick face detection as a use case. We evaluate the proposed model for face detection on the FDDB dataset and the Widerface dataset. As another use case of generic object detection, we evaluate its performance on the PASCAL VOC dataset. The experimental results demonstrate that the proposed network can predict object instances of different sizes and poses in a single frame. Moreover, the results show that the proposed method achieves accuracy comparable to the state-of-the-art CNN-based object detection methods while reducing the model size by 3× and memory bandwidth by 3-4× compared with one of the best real-time CNN-based object detectors, YOLO. Our 8-bit fixed-point TF model provides an additional 4× memory reduction while keeping the accuracy nearly as good as that of the floating-point model. Moreover, the fixed-point model is capable of achieving 20× faster inference compared with the floating-point model. Thus, the proposed method is promising for embedded implementations.

  17. Method and apparatus for monitoring the flow of mercury in a system

    DOEpatents

    Grossman, Mark W.

    1987-01-01

    An apparatus and method for monitoring the flow of mercury in a system. The equipment enables the entrainment of the mercury in a carrier gas e.g., an inert gas, which passes as mercury vapor between a pair of optically transparent windows. The attenuation of the emission is indicative of the quantity of mercury (and its isotopes) in the system. A 253.7 nm light is shone through one of the windows and the unabsorbed light is detected through the other window. The absorption of the 253.7 nm light is thereby measured whereby the quantity of mercury passing between the windows can be determined. The apparatus includes an in-line sensor for measuring the quantity of mercury. It includes a conduit together with a pair of apertures disposed in a face to face relationship and arranged on opposite sides of the conduit. A pair of optically transparent windows are disposed upon a pair of viewing tubes. A portion of each of the tubes is disposed inside of the conduit and within each of the apertures. The two windows are disposed in a face to face relationship on the ends of the viewing tubes and the entire assembly is hermetically sealed from the atmosphere whereby when 253.7 nm ultraviolet light is shone through one of the windows and detected through the other, the quantity of mercury which is passing by can be continuously monitored due to absorption which is indicated by attenuation of the amplitude of the observed emission.

  18. Efficient live face detection to counter spoof attack in face recognition systems

    NASA Astrophysics Data System (ADS)

    Biswas, Bikram Kumar; Alam, Mohammad S.

    2015-03-01

    Face recognition is a critical tool used in almost all major biometrics based security systems. But recognition, authentication and liveness detection of the face of an actual user is a major challenge because an imposter or a non-live face of the actual user can be used to spoof the security system. In this research, a robust technique is proposed which detects liveness of faces in order to counter spoof attacks. The proposed technique uses a three-dimensional (3D) fast Fourier transform to compare spectral energies of a live face and a fake face in a mathematically selective manner. The mathematical model involves evaluation of energies of selective high frequency bands of average power spectra of both live and non-live faces. It also carries out proper recognition and authentication of the face of the actual user using the fringe-adjusted joint transform correlation technique, which has been found to yield the highest correlation output for a match. Experimental tests show that the proposed technique yields excellent results for identifying live faces.
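
    The spectral-energy cue described above can be sketched generically: stack a short sequence of face frames into a 3-D volume, take its 3-D FFT, and compare the energy in a high-frequency band against a threshold. The band limits and threshold below are assumptions, and the fringe-adjusted joint transform correlation stage is not shown.

```python
# Sketch of the 3-D FFT spectral-energy cue only; band limits and the
# decision threshold are assumptions, not the paper's values.
import numpy as np

def high_band_energy(volume, cutoff=0.25):
    """Fraction of spectral energy above `cutoff` of the normalized radius."""
    spectrum = np.fft.fftshift(np.fft.fftn(volume))
    power = np.abs(spectrum) ** 2
    grids = np.meshgrid(*[np.linspace(-1, 1, n) for n in volume.shape],
                        indexing="ij")
    radius = np.sqrt(sum(g ** 2 for g in grids))
    return power[radius > cutoff].sum() / power.sum()

frames = np.random.rand(16, 64, 64)          # placeholder face-frame sequence
live = high_band_energy(frames) > 0.1        # hypothetical liveness threshold
```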

  19. Identifying children with autism spectrum disorder based on their face processing abnormality: A machine learning framework.

    PubMed

    Liu, Wenbo; Li, Ming; Yi, Li

    2016-08-01

    The atypical face scanning patterns in individuals with Autism Spectrum Disorder (ASD) have been repeatedly reported in previous research. The present study examined whether these face scanning patterns could be potentially useful for identifying children with ASD by adopting a machine learning algorithm for the classification purpose. Particularly, we applied the machine learning method to analyze an eye movement dataset from a face recognition task [Yi et al., 2016], to classify children with and without ASD. We evaluated the performance of our model in terms of its accuracy, sensitivity, and specificity in classifying ASD. Results indicated promising evidence for applying the machine learning algorithm based on the face scanning patterns to identify children with ASD, with a maximum classification accuracy of 88.51%. Nevertheless, our study is still preliminary, with some constraints that may apply in clinical practice. Future research should shed light on further validation of our method and contribute to the development of a multitask and multimodel approach to aid the process of early detection and diagnosis of ASD. Autism Res 2016, 9: 888-898. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.

  20. Improving flow patterns and spillage characteristics of a box-type commercial kitchen hood.

    PubMed

    Huang, Rong Fung; Chen, Jia-Kun; Han, Meng-Ji; Priyambodo, Yusuf

    2014-01-01

    A conventional box-type commercial kitchen hood and its improved version (termed the "IQV commercial kitchen hood") were studied using the laser-assisted smoke flow visualization technique and a tracer-gas (sulfur hexafluoride) detection method. The laser-assisted smoke flow visualization technique qualitatively revealed the flow field of the hood and the areas prone to leakage of the hood containment. The tracer-gas concentration detection method measured the quantitative leakage levels of the hood containment. The oil mists generated in the conventional box-type commercial kitchen hood leaked significantly into the environment from the areas near the front edges of the ceiling and side walls. Around these areas, boundary-layer separation occurred, inducing highly unsteady and turbulent recirculating flow and leading to spillage of the hood containment due to inappropriate aerodynamic design at the front edges of the ceiling and side walls. The tracer-gas concentration measurements on the conventional box-type commercial kitchen hood showed that the sulfur hexafluoride concentrations detected at the hood face attained very large values, on the order of 10^3-10^4 ppb. By combining the backward-offset narrow suction slot, deflection plates, and quarter-circular arcs at the hood entrance, the IQV commercial kitchen hood presented a flow field containing four backward-inclined cyclone flow structures. The oil mists generated by cooking were coherently confined in these upward-rising cyclone flow structures and finally exhausted through the narrow suction slot. The tracer-gas concentration measurements on the IQV commercial kitchen hood showed that the sulfur hexafluoride concentrations detected at the hood face were negligibly small, on the order of only 10^0 ppb (about 1 ppb) across the whole hood face.

  1. The feasibility test of state-of-the-art face detection algorithms for vehicle occupant detection

    NASA Astrophysics Data System (ADS)

    Makrushin, Andrey; Dittmann, Jana; Vielhauer, Claus; Langnickel, Mirko; Kraetzer, Christian

    2010-01-01

    Vehicle seat occupancy detection systems are designed to prevent the deployment of airbags at unoccupied seats, thus avoiding the considerable cost imposed by the replacement of airbags. Occupancy detection can also improve passenger comfort, e.g. by activating air-conditioning systems. The most promising development perspectives are seen in optical sensing systems, which have become cheaper and smaller in recent years. The most plausible way to check seat occupancy is to detect the presence and location of heads, or more precisely, faces. This paper compares the detection performance of the three most commonly used and widely available face detection algorithms: Viola-Jones, Kienzle et al. and Nilsson et al. The main objective of this work is to identify whether one of these systems is suitable for use in a vehicle environment with variable and mostly non-uniform illumination conditions, and whether any one face detection system can be sufficient for seat occupancy detection. The evaluation of detection performance is based on a large database comprising 53,928 video frames containing proprietary data collected from 39 persons of both sexes, different ages, and different body heights, as well as different objects such as bags and rearward/forward facing child restraint systems.

  2. The relationship between visual search and categorization of own- and other-age faces.

    PubMed

    Craig, Belinda M; Lipp, Ottmar V

    2018-03-13

    Young adult participants are faster to detect young adult faces in crowds of infant and child faces than vice versa. These findings have been interpreted as evidence for more efficient attentional capture by own-age than other-age faces, but could alternatively reflect faster rejection of other-age than own-age distractors, consistent with the previously reported other-age categorization advantage: faster categorization of other-age than own-age faces. Participants searched for own-age faces in other-age backgrounds or vice versa. Extending the finding to different other-age groups, young adult participants were faster to detect young adult faces in both early adolescent (Experiment 1) and older adult backgrounds (Experiment 2). To investigate whether the own-age detection advantage could be explained by faster categorization and rejection of other-age background faces, participants in experiments 3 and 4 also completed an age categorization task. Relatively faster categorization of other-age faces was related to relatively faster search through other-age backgrounds on target absent trials but not target present trials. These results confirm that other-age faces are more quickly categorized and searched through and that categorization and search processes are related; however, this correlational approach could not confirm or reject the contribution of background face processing to the own-age detection advantage. © 2018 The British Psychological Society.

  3. Faces do not capture special attention in children with autism spectrum disorder: a change blindness study.

    PubMed

    Kikuchi, Yukiko; Senju, Atsushi; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu

    2009-01-01

    Two experiments investigated attention of children with autism spectrum disorder (ASD) to faces and objects. In both experiments, children (7- to 15-year-olds) detected the difference between 2 visual scenes. Results in Experiment 1 revealed that typically developing children (n = 16) detected the change in faces faster than in objects, whereas children with ASD (n = 16) were equally fast in detecting changes in faces and objects. These results were replicated in Experiment 2 (n = 16 in children with ASD and 22 in typically developing children), which does not require face recognition skill. Results suggest that children with ASD lack an attentional bias toward others' faces, which could contribute to their atypical social orienting.

  4. Optogenetic and pharmacological suppression of spatial clusters of face neurons reveal their causal role in face gender discrimination

    PubMed Central

    Afraz, Arash; Boyden, Edward S.; DiCarlo, James J.

    2015-01-01

    Neurons that respond more to images of faces over nonface objects were identified in the inferior temporal (IT) cortex of primates three decades ago. Although it is hypothesized that perceptual discrimination between faces depends on the neural activity of IT subregions enriched with “face neurons,” such a causal link has not been directly established. Here, using optogenetic and pharmacological methods, we reversibly suppressed the neural activity in small subregions of IT cortex of macaque monkeys performing a facial gender-discrimination task. Each type of intervention independently demonstrated that suppression of IT subregions enriched in face neurons induced a contralateral deficit in face gender-discrimination behavior. The same neural suppression of other IT subregions produced no detectable change in behavior. These results establish a causal link between the neural activity in IT face neuron subregions and face gender-discrimination behavior. Also, the demonstration that brief neural suppression of specific spatial subregions of IT induces behavioral effects opens the door for applying the technical advantages of optogenetics to a systematic attack on the causal relationship between IT cortex and high-level visual perception. PMID:25953336

  5. Face Alignment via Regressing Local Binary Features.

    PubMed

    Ren, Shaoqing; Cao, Xudong; Wei, Yichen; Sun, Jian

    2016-03-01

    This paper presents a highly efficient and accurate regression approach for face alignment. Our approach has two novel components: 1) a set of local binary features and 2) a locality principle for learning those features. The locality principle guides us to learn a set of highly discriminative local binary features for each facial landmark independently. The obtained local binary features are used to jointly learn a linear regression for the final output. This approach achieves state-of-the-art results when tested on the most challenging benchmarks to date. Furthermore, because extracting and regressing local binary features is computationally very cheap, our system is much faster than previous methods. It achieves over 3000 frames per second (FPS) on a desktop or 300 FPS on a mobile phone for locating a few dozen landmarks. We also study a key issue that is important but has received little attention in previous research: the face detector used to initialize alignment. We investigate several face detectors and perform a quantitative evaluation of how they affect alignment accuracy. We find that an alignment-friendly detector can further greatly boost the accuracy of our alignment method, reducing the error by up to 16% in relative terms. To facilitate practical usage of face detection/alignment methods, we also propose a convenient metric to measure how good a detector is for alignment initialization.
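
    The abstract describes two pieces: per-landmark local binary features and a jointly learned global linear regression. The sketch below illustrates one cascade stage of that idea under stated assumptions; the per-landmark feature extraction (shape-indexed pixel differences in the paper) is abstracted into precomputed local feature arrays, and random forests plus ridge regression stand in for the authors' exact learners and hyperparameters.

```python
# Hedged sketch of one stage of shape regression with local binary features:
# per-landmark random forests produce sparse leaf-index codes, and a global
# linear regressor maps the concatenated codes to a shape update.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import Ridge

def fit_stage(local_feats, shape_residuals, n_landmarks):
    """local_feats: list of (n_samples, d) arrays, one per landmark (assumed given).
    shape_residuals: (n_samples, 2 * n_landmarks) target shape increments."""
    forests, encoders, codes = [], [], []
    for l in range(n_landmarks):
        # Small forest regressing this landmark's own 2D offset; only its
        # leaf indices are kept as the "local binary feature".
        f = RandomForestRegressor(n_estimators=10, max_depth=5, random_state=0)
        f.fit(local_feats[l], shape_residuals[:, 2 * l:2 * l + 2])
        leaves = f.apply(local_feats[l])                  # (n_samples, n_trees)
        enc = OneHotEncoder(handle_unknown="ignore").fit(leaves)
        forests.append(f)
        encoders.append(enc)
        codes.append(enc.transform(leaves).toarray())     # sparse binary code
    X = np.hstack(codes)                                  # global binary feature
    W = Ridge(alpha=1.0).fit(X, shape_residuals)          # joint linear regression
    return forests, encoders, W
```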

  6. Automated detection of photoreceptor disruption in mild diabetic retinopathy on volumetric optical coherence tomography

    PubMed Central

    Wang, Zhuo; Camino, Acner; Zhang, Miao; Wang, Jie; Hwang, Thomas S.; Wilson, David J.; Huang, David; Li, Dengwang; Jia, Yali

    2017-01-01

    Diabetic retinopathy is a pathology where microvascular circulation abnormalities ultimately result in photoreceptor disruption and, consequently, permanent loss of vision. Here, we developed a method that automatically detects photoreceptor disruption in mild diabetic retinopathy by mapping ellipsoid zone reflectance abnormalities from en face optical coherence tomography images. The algorithm uses a fuzzy c-means scheme with a redefined membership function to assign a defect severity level on each pixel and generate a probability map of defect category affiliation. A novel scheme of unsupervised clustering optimization allows accurate detection of the affected area. The achieved accuracy, sensitivity and specificity were about 90% on a population of thirteen diseased subjects. This method shows potential for accurate and fast detection of early biomarkers in diabetic retinopathy evolution. PMID:29296475
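
    The defect-severity mapping described above rests on fuzzy c-means clustering of en face reflectance values. The following is a minimal, generic fuzzy c-means sketch for intuition only; the paper's redefined membership function and unsupervised clustering optimization are not reproduced, and the cluster count and fuzzifier m are illustrative assumptions.

```python
# Minimal fuzzy c-means sketch for grouping en face OCT reflectance pixels
# into severity levels; the membership matrix U doubles as a per-pixel
# probability map of defect-category affiliation.
import numpy as np

def fuzzy_cmeans(x, n_clusters=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """x: (n_pixels, n_features) reflectance values; returns (centers, U)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(x), n_clusters))
    U /= U.sum(axis=1, keepdims=True)                    # fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ x) / Um.sum(axis=0)[:, None]   # weighted cluster centers
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))           # standard FCM update
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```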

  7. Automated detection of photoreceptor disruption in mild diabetic retinopathy on volumetric optical coherence tomography.

    PubMed

    Wang, Zhuo; Camino, Acner; Zhang, Miao; Wang, Jie; Hwang, Thomas S; Wilson, David J; Huang, David; Li, Dengwang; Jia, Yali

    2017-12-01

    Diabetic retinopathy is a pathology where microvascular circulation abnormalities ultimately result in photoreceptor disruption and, consequently, permanent loss of vision. Here, we developed a method that automatically detects photoreceptor disruption in mild diabetic retinopathy by mapping ellipsoid zone reflectance abnormalities from en face optical coherence tomography images. The algorithm uses a fuzzy c-means scheme with a redefined membership function to assign a defect severity level on each pixel and generate a probability map of defect category affiliation. A novel scheme of unsupervised clustering optimization allows accurate detection of the affected area. The achieved accuracy, sensitivity and specificity were about 90% on a population of thirteen diseased subjects. This method shows potential for accurate and fast detection of early biomarkers in diabetic retinopathy evolution.

  8. Face detection on distorted images using perceptual quality-aware features

    NASA Astrophysics Data System (ADS)

    Gunasekar, Suriya; Ghosh, Joydeep; Bovik, Alan C.

    2014-02-01

    We quantify the degradation in performance of a popular and effective face detector when human-perceived image quality is degraded by distortions due to additive white Gaussian noise, Gaussian blur, or JPEG compression. It is observed that, within a certain range of perceived image quality, a modest increase in image quality can drastically improve face detection performance. These results can be used to guide resource or bandwidth allocation in a communication/delivery system that is associated with face detection tasks. A new face detector based on QualHOG features is also proposed that augments face-indicative HOG features with perceptual quality-aware spatial Natural Scene Statistics (NSS) features, yielding improved tolerance against image distortions. The new detector provides statistically significant improvements over a strong baseline on a large database of face images representing a wide range of distortions. To facilitate this study, we created a new Distorted Face Database, containing face and non-face patches from images impaired by a variety of common distortion types and levels. This new dataset is available for download and further experimentation at www.ideal.ece.utexas.edu/~suriya/DFD/.
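
    QualHOG augments HOG features with perceptual quality-aware natural scene statistics (NSS) features before classification. The sketch below shows the general flavor under stated assumptions: standard HOG from skimage concatenated with a few simple statistics of mean-subtracted contrast-normalized (MSCN) coefficients, fed to a linear SVM. The paper's exact NSS feature set, patch size, and training protocol are not reproduced.

```python
# Hedged sketch of quality-aware face/non-face patch classification in the
# spirit of QualHOG: HOG features plus simple NSS-style statistics.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import hog
from sklearn.svm import LinearSVC

def mscn(patch, sigma=7 / 6, eps=1e-8):
    """Mean-subtracted contrast-normalized coefficients of a grayscale patch."""
    mu = gaussian_filter(patch, sigma)
    var = gaussian_filter(patch * patch, sigma) - mu * mu
    return (patch - mu) / (np.sqrt(np.clip(var, 0, None)) + eps)

def qual_hog_features(patch):
    """patch: 2D grayscale array (e.g., a 64x64 crop)."""
    h = hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    c = mscn(patch.astype(float))
    nss = np.array([c.mean(), c.std(), np.mean(np.abs(c)), np.mean(c ** 4)])
    return np.concatenate([h, nss])

# Usage on pre-cropped grayscale patches with labels in {0, 1}:
# X = np.stack([qual_hog_features(p) for p in patches])
# clf = LinearSVC(C=1.0).fit(X, labels)
```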

  9. DDDAMS-based Urban Surveillance and Crowd Control via UAVs and UGVs

    DTIC Science & Technology

    2015-12-04

    [Fragmented excerpt] Crowd dynamics modeling incorporates multi-resolution data, where a grid-based method models crowd motion from the UAVs' low-resolution information; higher-fidelity information is more computationally intensive (and time-consuming), so fidelity selection is deployed in the simulation. The remainder of the excerpt is table residue from "Table 1: Parameters for UAV and UGV for their detection" (FOV and DR entries) and is omitted.

  10. Neutral face classification using personalized appearance models for fast and robust emotion detection.

    PubMed

    Chiranjeevi, Pojala; Gopalakrishnan, Viswanath; Moogi, Pratibha

    2015-09-01

    Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised learning-based facial expression recognition methods. This is due to the fact that supervised methods cannot accommodate all appearance variability across faces with respect to race, pose, lighting, facial biases, and so on, in the limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in usual applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage, and thereby bypassing those frames in emotion classification, would save computational power. In this paper, we propose a light-weight neutral versus emotion classification engine, which acts as a pre-processor to traditional supervised emotion classification approaches. It dynamically learns neutral appearance at key emotion (KE) points using a statistical texture model, constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motion by accounting for affine distortions based on a statistical texture model. Robustness to dynamic shift of KE points is achieved by evaluating the similarities on a subset of neighborhood patches around each KE point, using prior information regarding the directionality of specific facial action units acting on the respective KE point. The proposed method, as a result, improves emotion recognition (ER) accuracy and simultaneously reduces the computational complexity of the ER system, as validated on multiple databases.

  11. Using false colors to protect visual privacy of sensitive content

    NASA Astrophysics Data System (ADS)

    Ćiftçi, Serdar; Korshunov, Pavel; Akyüz, Ahmet O.; Ebrahimi, Touradj

    2015-03-01

    Many tools have been proposed for preserving visual privacy, but those available today lack all or some of the important properties expected from such tools. Therefore, in this paper, we propose a simple yet effective method for privacy protection based on false color visualization, which maps the color palette of an image into a different color palette, possibly after a compressive point transformation of the original pixel data, distorting the details of the original image. This method does not require any prior face detection or other sensitive-region detection and, hence, unlike typical privacy protection methods, it is less sensitive to inaccurate computer vision algorithms. It is also secure, as the look-up tables can be encrypted; reversible, as table look-ups can be inverted; flexible, as it is independent of format or encoding; adjustable, as the final result can be computed by interpolating the false color image with the original using different degrees of interpolation; less distracting, as it does not create visually unpleasant artifacts; and selective, as it better preserves the semantic structure of the input. Four different color scales and four different compression functions, one of which the proposed method relies on, are evaluated via objective (three face recognition algorithms) and subjective (50 human subjects in an online-based study) assessments using faces from the public FERET dataset. The evaluations demonstrate that the DEF and RBS color scales lead to the strongest privacy protection, while the compression functions add little to the strength of privacy protection. Statistical analysis also shows that recognition algorithms and human subjects perceive the proposed protection similarly.
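
    The core operation is a pixel-wise look-up-table mapping, optionally preceded by a compressive point transform and followed by interpolation with the original for adjustability. A minimal sketch, assuming a generic 256-entry LUT in place of the DEF/RBS color scales and a simple power-law compression:

```python
# Minimal false-color privacy protection sketch: compressive point transform,
# LUT mapping, and optional interpolation back toward the original.
import numpy as np

def false_color(gray, lut, gamma=0.5, alpha=1.0):
    """gray: (H, W) uint8 image; lut: (256, 3) uint8 color look-up table.
    gamma: compressive point transform exponent (illustrative);
    alpha: 1.0 = full protection, smaller values blend in the original."""
    compressed = (255.0 * (gray / 255.0) ** gamma).astype(np.uint8)
    protected = lut[compressed]                          # table look-up, (H, W, 3)
    original_rgb = np.repeat(gray[..., None], 3, axis=2)
    out = alpha * protected + (1.0 - alpha) * original_rgb
    return out.astype(np.uint8)

# Reversibility: with the (decrypted) LUT and gamma known, the mapping can be
# inverted per pixel as long as the LUT entries are unique.
```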

  12. Preserved search asymmetry in the detection of fearful faces among neutral faces in individuals with Williams syndrome revealed by measurement of both manual responses and eye tracking.

    PubMed

    Hirai, Masahiro; Muramatsu, Yukako; Mizuno, Seiji; Kurahashi, Naoko; Kurahashi, Hirokazu; Nakamura, Miho

    2017-01-01

    Individuals with Williams syndrome (WS) exhibit an atypical social phenotype termed hypersociability. One theory accounting for hypersociability presumes an atypical function of the amygdala, which processes fear-related information. However, evidence is lacking regarding the detection mechanisms of fearful faces for individuals with WS. Here, we introduce a visual search paradigm to elucidate the mechanisms for detecting fearful faces by evaluating the search asymmetry; the reaction time when both the target and distractors were swapped was asymmetrical. Eye movements reflect subtle atypical attentional properties, whereas, manual responses are unable to capture atypical attentional profiles toward faces in individuals with WS. Therefore, we measured both eye movements and manual responses of individuals with WS and typically developed children and adults in visual searching for a fearful face among neutral faces or a neutral face among fearful faces. Two task measures, namely reaction time and performance accuracy, were analyzed for each stimulus as well as gaze behavior and the initial fixation onset latency. Overall, reaction times in the WS group and the mentally age-matched control group were significantly longer than those in the chronologically age-matched group. We observed a search asymmetry effect in all groups: when a neutral target facial expression was presented among fearful faces, the reaction times were significantly prolonged in comparison with when a fearful target facial expression was displayed among neutral distractor faces. Furthermore, the first fixation onset latency of eye movement toward a target facial expression showed a similar tendency for manual responses. Although overall responses in detecting fearful faces for individuals with WS are slower than those for control groups, search asymmetry was observed. Therefore, cognitive mechanisms underlying the detection of fearful faces seem to be typical in individuals with WS. This finding is discussed with reference to the amygdala account explaining hypersociability in individuals with WS.

  13. What makes a cell face-selective: the importance of contrast

    PubMed Central

    Ohayon, Shay; Freiwald, Winrich A; Tsao, Doris Y

    2012-01-01

    Faces are robustly detected by computer vision algorithms that search for characteristic coarse contrast features. Here, we investigated whether face-selective cells in the primate brain exploit contrast features as well. We recorded from face-selective neurons in macaque inferotemporal cortex, while presenting a face-like collage of regions whose luminances were changed randomly. Modulating contrast combinations between regions induced activity changes ranging from no response to a response greater than that to a real face in 50% of cells. The critical stimulus factor determining response magnitude was contrast polarity, e.g., nose region brighter than left eye. Contrast polarity preferences were consistent across cells, suggesting a common computational strategy across the population, and matched features used by computer vision algorithms for face detection. Furthermore, most cells were tuned both for contrast polarity and for the geometry of facial features, suggesting cells encode information useful both for detection and recognition. PMID:22578507

  14. Effects of color information on face processing using event-related potentials and gamma oscillations.

    PubMed

    Minami, T; Goto, K; Kitazaki, M; Nakauchi, S

    2011-03-10

    In humans, face configuration, contour and color may affect face perception, which is important for social interactions. This study aimed to determine the effect of color information on face perception by measuring event-related potentials (ERPs) during the presentation of natural- and bluish-colored faces. Our results demonstrated that the amplitude of the N170 event-related potential, which correlates strongly with face processing, was higher in response to a bluish-colored face than to a natural-colored face. However, gamma-band activity was insensitive to the deviation from a natural face color. These results indicated that color information affects the N170 associated with a face detection mechanism, which suggests that face color is important for face detection. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.

  15. Innovation in Weight Loss Programs: A 3-Dimensional Virtual-World Approach

    PubMed Central

    Massey, Anne P; DeVaneaux, Celeste A

    2012-01-01

    Background The rising trend in obesity calls for innovative weight loss programs. While behavioral-based face-to-face programs have proven to be the most effective, they are expensive and often inaccessible. Internet or Web-based weight loss programs have expanded reach but may lack qualities critical to weight loss and maintenance such as human interaction, social support, and engagement. In contrast to Web technologies, virtual reality technologies offer unique affordances as a behavioral intervention by directly supporting engagement and active learning. Objective To explore the effectiveness of a virtual-world weight loss program relative to weight loss and behavior change. Methods We collected data from overweight people (N = 54) participating in a face-to-face or a virtual-world weight loss program. Weight, body mass index (BMI), percentage weight change, and health behaviors (ie, weight loss self-efficacy, physical activity self-efficacy, self-reported physical activity, and fruit and vegetable consumption) were assessed before and after the 12-week program. Repeated measures analysis was used to detect differences between groups and across time. Results A total of 54 participants with a BMI of 32 (SD 6.05) kg/m2 enrolled in the study, with a 13% dropout rate for each group (virtual world group: 5/38; face-to-face group: 3/24). Both groups lost a significant amount of weight (virtual world: 3.9 kg, P < .001; face-to-face: 2.8 kg, P = .002); however, no significant differences between groups were detected (P = .29). Compared with baseline, the virtual-world group lost an average of 4.2%, with 33% (11/33) of the participants losing a clinically significant (≥5%) amount of baseline weight. The face-to-face group lost an average of 3.0% of their baseline weight, with 29% (6/21) losing a clinically significant amount. We detected a significant group × time interaction for moderate (P = .006) and vigorous physical activity (P = .008), physical activity self-efficacy (P = .04), fruit and vegetable consumption (P = .007), and weight loss self-efficacy (P < .001). Post hoc paired t tests indicated significant improvements across all of the variables for the virtual-world group. Conclusions Overall, these results offer positive early evidence that a virtual-world-based weight loss program can be as effective as a face-to-face one relative to biometric changes. In addition, our results suggest that a virtual world may be a more effective platform to influence meaningful behavioral changes and improve self-efficacy. PMID:22995535

  16. Dissociation between recognition and detection advantage for facial expressions: a meta-analysis.

    PubMed

    Nummenmaa, Lauri; Calvo, Manuel G

    2015-04-01

    Happy facial expressions are recognized faster and more accurately than other expressions in categorization tasks, whereas detection in visual search tasks is widely believed to be faster for angry than happy faces. We used meta-analytic techniques to resolve this categorization versus detection advantage discrepancy for positive versus negative facial expressions. Effect sizes were computed on the basis of the r statistic for a total of 34 recognition studies with 3,561 participants and 37 visual search studies with 2,455 participants, yielding a total of 41 effect sizes for recognition accuracy, 25 for recognition speed, and 125 for visual search speed. Random effects meta-analysis was conducted to estimate effect sizes at the population level. For recognition tasks, an advantage in recognition accuracy and speed for happy expressions was found for all stimulus types. In contrast, for visual search tasks, moderator analysis revealed that a happy face detection advantage was restricted to photographic faces, whereas a clear angry face advantage was found for schematic and "smiley" faces. A robust detection advantage for nonhappy faces was observed even when stimulus emotionality was distorted by inversion or rearrangement of the facial features, suggesting that visual features primarily drive the search. We conclude that the recognition advantage for happy faces is a genuine phenomenon related to processing of facial expression category and affective valence. In contrast, detection advantages toward either happy (photographic stimuli) or nonhappy (schematic) faces are contingent on visual stimulus features rather than facial expression, and may not involve categorical or affective processing. (c) 2015 APA, all rights reserved.

  17. Effects of configural processing on the perceptual spatial resolution for face features.

    PubMed

    Namdar, Gal; Avidan, Galia; Ganel, Tzvi

    2015-11-01

    Configural processing governs human perception across various domains, including face perception. An established marker of configural face perception is the face inversion effect, in which performance is typically better for upright compared to inverted faces. In two experiments, we tested whether configural processing could influence basic visual abilities such as perceptual spatial resolution (i.e., the ability to detect spatial visual changes). Face-related perceptual spatial resolution was assessed by measuring the just noticeable difference (JND) to subtle positional changes between specific features in upright and inverted faces. The results revealed robust inversion effect for spatial sensitivity to configural-based changes, such as the distance between the mouth and the nose, or the distance between the eyes and the nose. Critically, spatial resolution for face features within the region of the eyes (e.g., the interocular distance between the eyes) was not affected by inversion, suggesting that the eye region operates as a separate 'gestalt' unit which is relatively immune to manipulations that would normally hamper configural processing. Together these findings suggest that face orientation modulates fundamental psychophysical abilities including spatial resolution. Furthermore, they indicate that classic psychophysical methods can be used as a valid measure of configural face processing. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Measurement of H2S in vivo and in vitro by the monobromobimane method

    PubMed Central

    Shen, Xinggui; Kolluru, Gopi K.; Yuan, Shuai; Kevil, Christopher

    2015-01-01

    The gasotransmitter hydrogen sulfide (H2S) is known as an important regulator in several physiological and pathological responses. Among the challenges facing the field is the accurate and reliable measurement of hydrogen sulfide bioavailability. We have reported an approach to discretely measure sulfide and sulfide pools using the monobromobimane (MBB) method coupled with RP-HPLC. The method involves the derivatization of sulfide with excess MBB under precise reaction conditions at room temperature to form sulfide-dibimane. The resultant fluorescent sulfide-dibimane (SDB) is analyzed by RP-HPLC using fluorescence detection with the limit of detection for SDB (2 nM). Care must be taken to avoid conditions that may confound H2S measurement with this method. Overall, RP-HPLC with fluorescence detection of SDB is a useful and powerful tool to measure biological sulfide levels. PMID:25725514

  19. Measurement of H2S in vivo and in vitro by the monobromobimane method.

    PubMed

    Shen, Xinggui; Kolluru, Gopi K; Yuan, Shuai; Kevil, Christopher G

    2015-01-01

    The gasotransmitter hydrogen sulfide (H2S) is known as an important regulator in several physiological and pathological responses. Among the challenges facing the field is the accurate and reliable measurement of hydrogen sulfide bioavailability. We have reported an approach to discretely measure sulfide and sulfide pools using the monobromobimane (MBB) method coupled with reversed phase high-performance liquid chromatography (RP-HPLC). The method involves the derivatization of sulfide with excess MBB under precise reaction conditions at room temperature to form sulfide dibimane (SDB). The resultant fluorescent SDB is analyzed by RP-HPLC using fluorescence detection with the limit of detection for SDB (2 nM). Care must be taken to avoid conditions that may confound H2S measurement with this method. Overall, RP-HPLC with fluorescence detection of SDB is a useful and powerful tool to measure biological sulfide levels. © 2015 Elsevier Inc. All rights reserved.

  20. Early detection of ecosystem regime shifts: a multiple method evaluation for management application.

    PubMed

    Lindegren, Martin; Dakos, Vasilis; Gröger, Joachim P; Gårdmark, Anna; Kornilovs, Georgs; Otto, Saskia A; Möllmann, Christian

    2012-01-01

    Critical transitions between alternative stable states have been shown to occur across an array of complex systems. While our ability to identify abrupt regime shifts in natural ecosystems has improved, detection of potential early-warning signals previous to such shifts is still very limited. Using real monitoring data of a key ecosystem component, we here apply multiple early-warning indicators in order to assess their ability to forewarn a major ecosystem regime shift in the Central Baltic Sea. We show that some indicators and methods can result in clear early-warning signals, while other methods may have limited utility in ecosystem-based management as they show no or weak potential for early-warning. We therefore propose a multiple method approach for early detection of ecosystem regime shifts in monitoring data that may be useful in informing timely management actions in the face of ecosystem change.
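
    For intuition, two of the most commonly used early-warning indicators, rolling-window variance and lag-1 autocorrelation, can be computed from a monitoring time series as sketched below; this is a generic illustration, not the specific indicator set evaluated in the study.

```python
# Generic rolling-window early-warning indicators for a monitoring time series.
import numpy as np

def rolling_indicators(series, window=20):
    """Returns (variance, lag-1 autocorrelation) per rolling window."""
    var, ac1 = [], []
    for i in range(window, len(series) + 1):
        w = np.asarray(series[i - window:i], dtype=float)
        var.append(w.var())
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(var), np.array(ac1)

# A sustained rise in both indicators ahead of a transition is the kind of
# early-warning signal such analyses look for.
```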

  1. Early Detection of Ecosystem Regime Shifts: A Multiple Method Evaluation for Management Application

    PubMed Central

    Lindegren, Martin; Dakos, Vasilis; Gröger, Joachim P.; Gårdmark, Anna; Kornilovs, Georgs; Otto, Saskia A.; Möllmann, Christian

    2012-01-01

    Critical transitions between alternative stable states have been shown to occur across an array of complex systems. While our ability to identify abrupt regime shifts in natural ecosystems has improved, detection of potential early-warning signals previous to such shifts is still very limited. Using real monitoring data of a key ecosystem component, we here apply multiple early-warning indicators in order to assess their ability to forewarn a major ecosystem regime shift in the Central Baltic Sea. We show that some indicators and methods can result in clear early-warning signals, while other methods may have limited utility in ecosystem-based management as they show no or weak potential for early-warning. We therefore propose a multiple method approach for early detection of ecosystem regime shifts in monitoring data that may be useful in informing timely management actions in the face of ecosystem change. PMID:22808007

  2. The processing of social stimuli in early infancy: from faces to biological motion perception.

    PubMed

    Simion, Francesca; Di Giorgio, Elisa; Leo, Irene; Bardi, Lara

    2011-01-01

    There are several lines of evidence which suggest that, since birth, the human system detects social agents on the basis of at least two properties: the presence of a face and the way they move. This chapter reviews the infant research on the origin of brain specialization for social stimuli and on the role of innate mechanisms and perceptual experience in shaping the development of the social brain. Two lines of convergent evidence, on face detection and biological motion detection, will be presented to demonstrate the innate predispositions of the human system to detect social stimuli at birth. As for face detection, experiments will be presented to demonstrate that, by virtue of nonspecific attentional biases, a very coarse template of faces becomes active at birth. As for biological motion detection, studies will be presented to demonstrate that, since birth, the human system is able to detect social stimuli on the basis of properties such as the presence of a semi-rigid motion named biological motion. Overall, the empirical evidence converges in supporting the notion that the human system begins life broadly tuned to detect social stimuli and that progressive specialization will narrow the system for social stimuli as a function of experience. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. Enhancement of Fast Face Detection Algorithm Based on a Cascade of Decision Trees

    NASA Astrophysics Data System (ADS)

    Khryashchev, V. V.; Lebedev, A. A.; Priorov, A. L.

    2017-05-01

    A face detection algorithm based on a cascade of ensembles of decision trees (CEDT) is presented. The new approach allows detecting faces in poses other than frontal through the use of multiple classifiers, each trained for a specific range of head-rotation angles. The results showed a high rate of productivity for CEDT on images of standard size. The algorithm increases the area under the ROC curve by 13% compared to a standard Viola-Jones face detection algorithm. The final realization of the algorithm consists of five different cascades for frontal and non-frontal faces. The simulation results also show the low computational complexity of the CEDT algorithm in comparison with the standard Viola-Jones approach. This could prove important in the embedded-system and mobile-device industries because it can reduce hardware cost and extend battery life.
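
    The detector described above combines several pose-specific cascades, each a sequence of decision-tree ensembles with stage-wise rejection. The sketch below illustrates that structure under stated assumptions: gradient-boosted trees stand in for the paper's ensembles, the stage thresholds and pose ranges are placeholders, and feature extraction from image windows is assumed to happen elsewhere.

```python
# Hedged sketch of multi-view detection with pose-specific cascades of tree
# ensembles: a window is accepted if any cascade passes it through all stages.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

POSE_RANGES = ["frontal", "left-30", "right-30", "left-60", "right-60"]  # placeholders

def train_cascade(X, y, n_stages=3):
    """One cascade = list of (ensemble, rejection threshold) stages."""
    stages = []
    for _ in range(n_stages):
        if len(np.unique(y)) < 2:               # all surviving windows share one label
            break
        clf = GradientBoostingClassifier(n_estimators=50, max_depth=2).fit(X, y)
        thr = 0.3                                # illustrative stage rejection threshold
        stages.append((clf, thr))
        keep = clf.predict_proba(X)[:, 1] >= thr  # only survivors reach the next stage
        X, y = X[keep], y[keep]
    return stages

def detect(window_feats, cascades):
    """cascades: dict mapping pose name -> list of (classifier, threshold) stages."""
    for pose, stages in cascades.items():
        if all(clf.predict_proba(window_feats[None])[0, 1] >= thr
               for clf, thr in stages):
            return pose                          # accepted by this view's cascade
    return None                                  # rejected by every cascade
```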

  4. Automatic Processing of Changes in Facial Emotions in Dysphoria: A Magnetoencephalography Study.

    PubMed

    Xu, Qianru; Ruohonen, Elisa M; Ye, Chaoxiong; Li, Xueqiao; Kreegipuu, Kairi; Stefanics, Gabor; Luo, Wenbo; Astikainen, Piia

    2018-01-01

    It is not known to what extent the automatic encoding and change detection of peripherally presented facial emotion is altered in dysphoria. The negative bias in automatic face processing in particular has rarely been studied. We used magnetoencephalography (MEG) to record automatic brain responses to happy and sad faces in dysphoric (Beck's Depression Inventory ≥ 13) and control participants. Stimuli were presented in a passive oddball condition, which allowed potential negative bias in dysphoria at different stages of face processing (M100, M170, and M300) and alterations of change detection (visual mismatch negativity, vMMN) to be investigated. The magnetic counterpart of the vMMN was elicited at all stages of face processing, indexing automatic deviance detection in facial emotions. The M170 amplitude was modulated by emotion, response amplitudes being larger for sad faces than happy faces. Group differences were found for the M300, and they were indexed by two different interaction effects. At the left occipital region of interest, the dysphoric group had larger amplitudes for sad than happy deviant faces, reflecting negative bias in deviance detection, which was not found in the control group. On the other hand, the dysphoric group showed no vMMN to changes in facial emotions, while the vMMN was observed in the control group at the right occipital region of interest. Our results indicate that there is a negative bias in automatic visual deviance detection, but also a general change detection deficit in dysphoria.

  5. Investigating the Causal Role of rOFA in Holistic Detection of Mooney Faces and Objects: An fMRI-guided TMS Study.

    PubMed

    Bona, Silvia; Cattaneo, Zaira; Silvanto, Juha

    2016-01-01

    The right occipital face area (rOFA) is known to be involved in face discrimination based on local featural information. Whether this region is also involved in global, holistic stimulus processing is not known. We used fMRI-guided transcranial magnetic stimulation (TMS) to investigate whether rOFA is causally implicated in stimulus detection based on holistic processing, by the use of Mooney stimuli. Two studies were carried out: In Experiment 1, participants performed a detection task involving Mooney faces and Mooney objects; Mooney stimuli lack distinguishable local features and can be detected solely via holistic processing (i.e. at a global level) with top-down guidance from previously stored representations. Experiment 2 required participants to detect shapes which are recognized via bottom-up integration of local (collinear) Gabor elements and was performed to control for specificity of rOFA's implication in holistic detection. In Experiment 1, TMS over rOFA and rLO impaired detection of all stimulus categories, with no category-specific effect. In Experiment 2, shape detection was impaired when TMS was applied over rLO but not over rOFA. Our results demonstrate that rOFA is causally implicated in the type of top-down holistic detection required by Mooney stimuli and that such role is not face-selective. In contrast, rOFA does not appear to play a causal role in detection of shapes based on bottom-up integration of local components, demonstrating that its involvement in processing non-face stimuli is specific for holistic processing. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  6. Change detection on LOD 2 building models with very high resolution spaceborne stereo imagery

    NASA Astrophysics Data System (ADS)

    Qin, Rongjun

    2014-10-01

    Due to the fast development of the urban environment, the need for efficient maintenance and updating of 3D building models is ever increasing. Change detection is an essential step to spot the changed areas for data (map/3D model) updating and urban monitoring. Traditional methods based on 2D images are no longer suitable for change detection at the building scale, owing to the increased spectral variability of building roofs and the larger perspective distortion of very high resolution (VHR) imagery. Change detection in 3D is increasingly being investigated using airborne laser scanning data or matched Digital Surface Models (DSM), but few studies have addressed change detection on 3D city models with VHR images, which is more informative but also more complicated. This is due to the fact that the 3D models are abstracted geometric representations of the urban reality, while the VHR images record everything. In this paper, a novel method is proposed to detect changes directly on LOD (Level of Detail) 2 building models with VHR spaceborne stereo images from a different date, with particular focus on addressing the special characteristics of the 3D models. In the first step, the 3D building models are projected onto a raster grid, encoded with building object, terrain object, and planar faces. The DSM is extracted from the stereo imagery by hierarchical semi-global matching (SGM). In the second step, a multi-channel change indicator is extracted between the 3D models and the stereo images, considering the inherent geometric consistency (IGC), height difference, and texture similarity for each planar face. Each channel of the indicator is then clustered with the Self-Organizing Map (SOM), with "change", "non-change" and "uncertain change" status labeled through a voting strategy. The "uncertain changes" are then resolved with a Markov Random Field (MRF) analysis considering the geometric relationship between faces. In the third step, buildings are extracted by combining the multispectral images and the DSM with morphological operators, and the new buildings are determined by excluding the verified unchanged buildings from the second step. Both a synthetic experiment with WorldView-2 stereo imagery and a real experiment with IKONOS stereo imagery are carried out to demonstrate the effectiveness of the proposed method. It is shown that the proposed method can be applied as an effective way to monitor building changes, as well as to update 3D models from one epoch to the next.
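
    To make the second step concrete, the sketch below computes a simplified two-channel per-face change indicator (height difference against the DSM and texture dissimilarity) and groups faces into change, non-change, and uncertain classes. KMeans is used here as a stand-in for the self-organizing map, the inherent geometric consistency channel and the MRF refinement are omitted, and all inputs are assumed to be pre-sampled per planar face.

```python
# Simplified per-face change indicator and clustering (KMeans stands in for SOM).
import numpy as np
from sklearn.cluster import KMeans

def face_indicators(model_heights, dsm_heights, model_tex, image_tex):
    """Each argument: list of per-face 1D sample arrays of equal length per face."""
    rows = []
    for mh, dh, mt, it in zip(model_heights, dsm_heights, model_tex, image_tex):
        height_diff = np.mean(np.abs(np.asarray(mh) - np.asarray(dh)))
        tex_dissim = 1.0 - np.corrcoef(np.asarray(mt), np.asarray(it))[0, 1]
        rows.append([height_diff, tex_dissim])
    return np.asarray(rows)

def label_faces(indicators, n_groups=3):
    km = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit(indicators)
    # Order clusters by mean height difference: low = non-change, high = change.
    order = np.argsort(km.cluster_centers_[:, 0])
    names = {order[0]: "non-change", order[1]: "uncertain", order[2]: "change"}
    return [names[c] for c in km.labels_]
```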

  7. Component-based subspace linear discriminant analysis method for face recognition with one training sample

    NASA Astrophysics Data System (ADS)

    Huang, Jian; Yuen, Pong C.; Chen, Wen-Sheng; Lai, J. H.

    2005-05-01

    Many face recognition algorithms/systems have been developed in the last decade, and excellent performance has been reported when there is a sufficient number of representative training samples. In many real-life applications such as passport identification, however, only one well-controlled frontal sample image is available for training. Under this situation, the performance of existing algorithms degrades dramatically, or the algorithms may not even be applicable. We propose a component-based linear discriminant analysis (LDA) method to solve the one-training-sample problem. The basic idea of the proposed method is to construct local facial feature component bunches by moving each local feature region in four directions. In this way, we not only generate more samples of lower dimension than the original image, but also account for face detection localization error during training. After that, we propose a subspace LDA method, which is tailor-made for a small number of training samples, for the local feature projection to maximize the discrimination power. Theoretical analysis and experimental results show that our proposed subspace LDA is efficient and overcomes the limitations of existing LDA methods. Finally, we combine the contributions of each local component bunch with a weighted combination scheme to reach the recognition decision. The FERET database is used for evaluating the proposed method, and the results are encouraging.
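
    The sketch below illustrates the one-sample idea described above under stated assumptions: each local facial component is shifted in four directions to manufacture extra training samples per person, after which a PCA-then-LDA subspace is fit per component. Component locations, patch sizes, shift amounts, and the use of scikit-learn's PCA/LDA in place of the authors' tailor-made subspace LDA are all illustrative.

```python
# Hedged sketch: shifted component bunches plus a per-component PCA+LDA subspace.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

SHIFTS = [(0, 0), (-2, 0), (2, 0), (0, -2), (0, 2)]   # original + 4 directions

def component_bunch(image, top, left, h, w):
    """Extract one component region plus its four shifted copies.
    Assumes the region plus a 2-pixel margin lies inside the image."""
    return [image[top + dy:top + dy + h, left + dx:left + dx + w].ravel()
            for dy, dx in SHIFTS]

def fit_component_subspace(images, labels, top, left, h, w, n_pca=20):
    X, y = [], []
    for img, lab in zip(images, labels):               # one image per person
        patches = component_bunch(img, top, left, h, w)
        X.extend(patches)
        y.extend([lab] * len(patches))
    X, y = np.asarray(X, float), np.asarray(y)
    pca = PCA(n_components=min(n_pca, X.shape[0] - 1)).fit(X)
    lda = LDA().fit(pca.transform(X), y)
    return pca, lda   # scores from several components are later combined with weights
```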

  8. Effects of threshold on single-target detection by using modified amplitude-modulated joint transform correlator

    NASA Astrophysics Data System (ADS)

    Kaewkasi, Pitchaya; Widjaja, Joewono; Uozumi, Jun

    2007-03-01

    The effects of the threshold value on the detection performance of the modified amplitude-modulated joint transform correlator are quantitatively studied using computer simulation. Fingerprint and human face images are used as test scenes in the presence of noise and a contrast difference. Simulation results demonstrate that this correlator improves detection performance for both types of image used, but more so for human face images. Optimal detection of low-contrast human face images obscured by strong noise can be obtained by selecting an appropriate threshold value.
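
    A joint transform correlator can be simulated in a few lines, which makes the role of the threshold easy to probe: the reference and scene are placed side by side, the joint power spectrum is modified by a threshold, and a second Fourier transform yields the correlation plane. The hard-limiting below is a simplification; the exact amplitude modulation of the paper's correlator is not reproduced.

```python
# Minimal joint transform correlator simulation with a thresholded power spectrum.
import numpy as np

def jtc_correlation(reference, scene, threshold):
    """reference, scene: equally sized 2D float arrays; returns the correlation plane."""
    joint = np.concatenate([reference, scene], axis=1)   # joint input plane
    jps = np.abs(np.fft.fft2(joint)) ** 2                # joint power spectrum
    jps_mod = np.where(jps > threshold, jps, 0.0)        # hard threshold (simplification)
    corr = np.abs(np.fft.ifft2(jps_mod))                 # correlation plane
    return np.fft.fftshift(corr)                         # cross-correlation peaks sit off-axis
```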

  9. Fast 3D NIR systems for facial measurement and lip-reading

    NASA Astrophysics Data System (ADS)

    Brahm, Anika; Ramm, Roland; Heist, Stefan; Rulff, Christian; Kühmstedt, Peter; Notni, Gunther

    2017-05-01

    Structured-light projection is a well-established optical method for the non-destructive contactless three-dimensional (3D) measurement of object surfaces. In particular, there is a great demand for accurate and fast 3D scans of human faces or facial regions of interest in medicine, safety, face modeling, games, virtual life, or entertainment. New developments of facial expression detection and machine lip-reading can be used for communication tasks, future machine control, or human-machine interactions. In such cases, 3D information may offer more detailed information than 2D images which can help to increase the power of current facial analysis algorithms. In this contribution, we present new 3D sensor technologies based on three different methods of near-infrared projection technologies in combination with a stereo vision setup of two cameras. We explain the optical principles of an NIR GOBO projector, an array projector and a modified multi-aperture projection method and compare their performance parameters to each other. Further, we show some experimental measurement results of applications where we realized fast, accurate, and irritation-free measurements of human faces.

  10. Method and apparatus for monitoring the flow of mercury in a system

    DOEpatents

    Grossman, M.W.

    1987-12-15

    An apparatus and method for monitoring the flow of mercury in a system are disclosed. The equipment enables the entrainment of the mercury in a carrier gas e.g., an inert gas, which passes as mercury vapor between a pair of optically transparent windows. The attenuation of the emission is indicative of the quantity of mercury (and its isotopes) in the system. A 253.7 nm light is shone through one of the windows and the unabsorbed light is detected through the other window. The absorption of the 253.7 nm light is thereby measured whereby the quantity of mercury passing between the windows can be determined. The apparatus includes an in-line sensor for measuring the quantity of mercury. It includes a conduit together with a pair of apertures disposed in a face to face relationship and arranged on opposite sides of the conduit. A pair of optically transparent windows are disposed upon a pair of viewing tubes. A portion of each of the tubes is disposed inside of the conduit and within each of the apertures. The two windows are disposed in a face to face relationship on the ends of the viewing tubes and the entire assembly is hermetically sealed from the atmosphere whereby when 253.7 nm ultraviolet light is shone through one of the windows and detected through the other, the quantity of mercury which is passing by can be continuously monitored due to absorption which is indicated by attenuation of the amplitude of the observed emission. 4 figs.
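
    The concentration measurement follows Beer-Lambert-type absorption of the 253.7 nm line over the window-to-window path. A minimal sketch of that calculation, in which the absorption cross-section and path length are placeholder values rather than figures from the patent:

```python
# Beer-Lambert style estimate of mercury number density from the attenuation
# of the 253.7 nm line between the two windows.
import numpy as np

SIGMA_254 = 3.3e-14   # assumed absorption cross-section, cm^2 (placeholder)
PATH_CM = 2.0         # assumed window-to-window path length, cm (placeholder)

def mercury_number_density(intensity_detected, intensity_incident):
    """Number density (atoms/cm^3) from transmitted vs incident 253.7 nm intensity."""
    transmittance = intensity_detected / intensity_incident
    return -np.log(transmittance) / (SIGMA_254 * PATH_CM)
```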

  11. Geophysical examination of coal deposits

    NASA Astrophysics Data System (ADS)

    Jackson, L. J.

    1981-04-01

    Geophysical techniques for the solution of mining problems and as an aid to mine planning are reviewed. Techniques of geophysical borehole logging are discussed. The responses of the coal seams to logging tools are easily recognized on the logging records. Cores for laboratory analysis are cut from selected sections of the borehole. In addition, information about the density and chemical composition of the coal may be obtained. Surface seismic reflection surveys using two-dimensional arrays of seismic sources and detectors detect faults with throws as small as 3 m at depths of 800 m. In geologically disturbed areas, good results have been obtained from three-dimensional surveys. Smaller faults as far as 500 m in advance of the working face may be detected using in-seam seismic surveying conducted from a roadway or working face. Small disturbances are detected by pulse radar and continuous-wave electromagnetic methods, either from within boreholes or from underground. Other geophysical techniques, which exploit the electrical, magnetic, gravitational, and geothermal properties of rocks, are described.

  12. Implicit conditioning of faces via the social regulation of emotion: ERP evidence of early attentional biases for security conditioned faces.

    PubMed

    Beckes, Lane; Coan, James A; Morris, James P

    2013-08-01

    Not much is known about the neural and psychological processes that promote the initial conditions necessary for positive social bonding. This study explores one method of conditioned bonding utilizing dynamics related to the social regulation of emotion and attachment theory. This form of conditioning involves repeated presentations of negative stimuli followed by images of warm, smiling faces. L. Beckes, J. Simpson, and A. Erickson (2010) found that this conditioning procedure results in positive associations with the faces measured via a lexical decision task, suggesting they are perceived as comforting. This study found that the P1 ERP was similarly modified by this conditioning procedure and the P1 amplitude predicted lexical decision times to insecure words primed by the faces. The findings have implications for understanding how the brain detects supportive people, the flexibility and modifiability of early ERP components, and social bonding more broadly. Copyright © 2013 Society for Psychophysiological Research.

  13. Brain Activity Related to the Judgment of Face-Likeness: Correlation between EEG and Face-Like Evaluation.

    PubMed

    Nihei, Yuji; Minami, Tetsuto; Nakauchi, Shigeki

    2018-01-01

    Faces represent important information for social communication, because social information, such as face-color, expression, and gender, is obtained from faces. Therefore, individuals tend to find faces unconsciously, even in objects. Why is face-likeness perceived in non-face objects? Previous event-related potential (ERP) studies showed that the P1 component (early visual processing), the N170 component (face detection), and the N250 component (personal detection) reflect the neural processing of faces. Inverted faces were reported to enhance the amplitude and delay the latency of P1 and N170. To investigate face-likeness processing in the brain, we explored the face-related components of the ERP through a face-like evaluation task using natural faces, cars, insects, and Arcimboldo paintings presented upright or inverted. We found a significant correlation between the inversion effect index and face-like scores in P1 in both hemispheres and in N170 in the right hemisphere. These results suggest that judgment of face-likeness occurs in a relatively early stage of face processing.

  14. Brain Activity Related to the Judgment of Face-Likeness: Correlation between EEG and Face-Like Evaluation

    PubMed Central

    Nihei, Yuji; Minami, Tetsuto; Nakauchi, Shigeki

    2018-01-01

    Faces represent important information for social communication, because social information, such as face-color, expression, and gender, is obtained from faces. Therefore, individuals tend to find faces unconsciously, even in objects. Why is face-likeness perceived in non-face objects? Previous event-related potential (ERP) studies showed that the P1 component (early visual processing), the N170 component (face detection), and the N250 component (personal detection) reflect the neural processing of faces. Inverted faces were reported to enhance the amplitude and delay the latency of P1 and N170. To investigate face-likeness processing in the brain, we explored the face-related components of the ERP through a face-like evaluation task using natural faces, cars, insects, and Arcimboldo paintings presented upright or inverted. We found a significant correlation between the inversion effect index and face-like scores in P1 in both hemispheres and in N170 in the right hemisphere. These results suggest that judgment of face-likeness occurs in a relatively early stage of face processing. PMID:29503612

  15. Pose invariant face recognition: 3D model from single photo

    NASA Astrophysics Data System (ADS)

    Napoléon, Thibault; Alfalou, Ayman

    2017-02-01

    Face recognition is widely studied in the literature for its possibilities in surveillance and security. In this paper, we report a novel algorithm for the identification task. This technique is based on optimized 3D modeling that allows reconstructing faces in different poses from a limited number of references (i.e., one image per class/person). In particular, we propose to use an active shape model to detect a set of keypoints on the face, which are needed to deform our synthetic model with our optimized finite element method. To improve the deformation, we propose a regularization by distances on a graph. To perform the identification we use the VanderLugt correlator, well known to effectively address this task. In addition, we add a difference-of-Gaussians filtering step to highlight edges and a description step based on local binary patterns. The experiments are performed on the PHPID database, enhanced with our 3D reconstructed faces of each person with azimuth and elevation ranging from -30° to +30°. The obtained results prove the robustness of our new method, with 88.76% correct identification, whereas the classic 2D approach (based on the VLC) obtains just 44.97%.
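
    The description stage mentioned above (difference-of-Gaussians filtering to highlight edges, followed by local binary patterns) can be sketched as below; the filter sigmas and LBP parameters are illustrative assumptions, and the 3D reconstruction and VanderLugt correlation stages are not shown.

```python
# Hedged sketch of a DoG + LBP description step for a face image.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import local_binary_pattern

def dog_lbp_descriptor(gray, sigma1=1.0, sigma2=2.0, P=8, R=1):
    """gray: 2D float array in [0, 1]; returns an LBP histogram of the DoG image."""
    dog = gaussian_filter(gray, sigma1) - gaussian_filter(gray, sigma2)  # edge emphasis
    lbp = local_binary_pattern(dog, P=P, R=R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist
```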

  16. Dissociation of face-selective cortical responses by attention.

    PubMed

    Furey, Maura L; Tanskanen, Topi; Beauchamp, Michael S; Avikainen, Sari; Uutela, Kimmo; Hari, Riitta; Haxby, James V

    2006-01-24

    We studied attentional modulation of cortical processing of faces and houses with functional MRI and magnetoencephalography (MEG). MEG detected an early, transient face-selective response. Directing attention to houses in "double-exposure" pictures of superimposed faces and houses strongly suppressed the characteristic, face-selective functional MRI response in the fusiform gyrus. By contrast, attention had no effect on the M170, the early, face-selective response detected with MEG. Late (>190 ms) category-related MEG responses elicited by faces and houses, however, were strongly modulated by attention. These results indicate that hemodynamic and electrophysiological measures of face-selective cortical processing complement each other. The hemodynamic signals reflect primarily late responses that can be modulated by feedback connections. By contrast, the early, face-specific M170 that was not modulated by attention likely reflects a rapid, feed-forward phase of face-selective processing.

  17. Sensitivity and Specificity of OCT Angiography to Detect Choroidal Neovascularization.

    PubMed

    Faridi, Ambar; Jia, Yali; Gao, Simon S; Huang, David; Bhavsar, Kavita V; Wilson, David J; Sill, Andrew; Flaxel, Christina J; Hwang, Thomas S; Lauer, Andreas K; Bailey, Steven T

    2017-01-01

    To determine the sensitivity and specificity of optical coherence tomography angiography (OCTA) in the detection of choroidal neovascularization (CNV) in age-related macular degeneration (AMD). In this prospective case series, seventy-two eyes were studied, including eyes with treatment-naive CNV due to AMD, eyes with non-neovascular AMD, and normal controls. All eyes underwent OCTA with a spectral domain (SD) OCT (Optovue, Inc.). The 3D angiogram was segmented into separate en face views including the inner retinal angiogram, outer retinal angiogram, and choriocapillaris angiogram. Detection of abnormal flow in the outer retina served as candidate CNV with OCTA. Masked graders reviewed structural OCT alone, en face OCTA alone, and en face OCTA combined with cross-sectional OCTA for the presence of CNV. The sensitivity and specificity of CNV detection compared to the gold standard of fluorescein angiography (FA) and OCT were determined for structural SD-OCT alone, en face OCTA alone, and en face OCTA combined with cross-sectional OCTA. Of 32 eyes with CNV, both graders identified 26 true positives with en face OCTA alone, resulting in a sensitivity of 81.3%. Four of the 6 false negatives had large subretinal hemorrhage (SRH) and sensitivity improved to 94% for both graders if eyes with SRH were excluded. The addition of cross-sectional OCTA along with en face OCTA improved the sensitivity to 100% for both graders. Structural OCT alone also had a sensitivity of 100%. The specificity of en face OCTA alone was 92.5% for grader A and 97.5% for grader B. The specificity of structural OCT alone was 97.5% for grader A and 85% for grader B. Cross-sectional OCTA combined with en face OCTA had a specificity of 97.5% for grader A and 100% for grader B. Sensitivity and specificity for CNV detection with en face OCTA combined with cross-sectional OCTA approaches that of the gold standard of FA with OCT, and it is better than en face OCTA alone. Structural OCT alone has excellent sensitivity for CNV detection. False positives from structural OCT can be mitigated with the addition of flow information with OCTA.
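
    As a worked check of the detection figures quoted above (an illustration of the definitions, not additional data from the study):

```python
# Sensitivity and specificity as used above; counts are taken from the abstract.
tp, fn = 26, 6                   # 32 CNV eyes, 26 detected with en face OCTA alone
sensitivity = tp / (tp + fn)     # 0.8125, quoted as 81.3%

n_non_cnv = 72 - 32              # remaining eyes without CNV
# specificity = tn / (tn + fp) over these 40 eyes (e.g., 92.5% for grader A)
```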

  18. Making great leaps forward: Accounting for detectability in herpetological field studies

    USGS Publications Warehouse

    Mazerolle, Marc J.; Bailey, Larissa L.; Kendall, William L.; Royle, J. Andrew; Converse, Sarah J.; Nichols, James D.

    2007-01-01

    Detecting individuals of amphibian and reptile species can be a daunting task. Detection can be hindered by various factors such as cryptic behavior, color patterns, or observer experience. These factors complicate the estimation of state variables of interest (e.g., abundance, occupancy, species richness) as well as the vital rates that induce changes in these state variables (e.g., survival probabilities for abundance; extinction probabilities for occupancy). Although ad hoc methods (e.g., counts uncorrected for detection, return rates) typically perform poorly in the face of no detection, they continue to be used extensively in various fields, including herpetology. However, formal approaches that estimate and account for the probability of detection, such as capture-mark-recapture (CMR) methods and distance sampling, are available. In this paper, we present classical approaches and recent advances in methods accounting for detectability that are particularly pertinent for herpetological data sets. Through examples, we illustrate the use of several methods, discuss their performance compared to that of ad hoc methods, and we suggest available software to perform these analyses. The methods we discuss control for imperfect detection and reduce bias in estimates of demographic parameters such as population size, survival, or, at other levels of biological organization, species occurrence. Among these methods, recently developed approaches that no longer require marked or resighted individuals should be particularly of interest to field herpetologists. We hope that our effort will encourage practitioners to implement some of the estimation methods presented herein instead of relying on ad hoc methods that make more limiting assumptions.

  19. GOM-Face: GKP, EOG, and EMG-based multimodal interface with application to humanoid robot control.

    PubMed

    Nam, Yunjun; Koo, Bonkon; Cichocki, Andrzej; Choi, Seungjin

    2014-02-01

    We present a novel human-machine interface, called GOM-Face, and its application to humanoid robot control. The GOM-Face bases its interfacing on three electric potentials measured on the face: 1) the glossokinetic potential (GKP), which involves tongue movement; 2) the electrooculogram (EOG), which involves eye movement; and 3) the electromyogram (EMG), which involves teeth clenching. Each potential has been individually used for assistive interfacing to provide persons with limb motor disabilities or even complete quadriplegia an alternative communication channel. However, to the best of our knowledge, GOM-Face is the first interface that exploits all of these potentials together. We resolved the interference between GKP and EOG by extracting discriminative features from two covariance matrices: a tongue-movement-only data matrix and an eye-movement-only data matrix. With this feature extraction method, GOM-Face can detect four kinds of horizontal tongue or eye movements with an accuracy of 86.7% within 2.77 s. We demonstrated the applicability of GOM-Face to humanoid robot control: users were able to communicate with the robot by selecting from a predefined menu using eye and tongue movements.

  20. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    NASA Astrophysics Data System (ADS)

    Morishima, Shigeo; Nakamura, Satoshi

    2004-12-01

    We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.

  1. Spoof Detection for Finger-Vein Recognition System Using NIR Camera.

    PubMed

    Nguyen, Dat Tien; Yoon, Hyo Sik; Pham, Tuyen Danh; Park, Kang Ryoung

    2017-10-01

    Finger-vein recognition, a new and advanced biometrics recognition method, is attracting the attention of researchers because of its advantages such as high recognition performance and lesser likelihood of theft and inaccuracies occurring on account of skin condition defects. However, as reported by previous researchers, it is possible to attack a finger-vein recognition system by using presentation attack (fake) finger-vein images. As a result, spoof detection, named as presentation attack detection (PAD), is necessary in such recognition systems. Previous attempts to establish PAD methods primarily focused on designing feature extractors by hand (handcrafted feature extractor) based on the observations of the researchers about the difference between real (live) and presentation attack finger-vein images. Therefore, the detection performance was limited. Recently, the deep learning framework has been successfully applied in computer vision and delivered superior results compared to traditional handcrafted methods on various computer vision applications such as image-based face recognition, gender recognition and image classification. In this paper, we propose a PAD method for near-infrared (NIR) camera-based finger-vein recognition system using convolutional neural network (CNN) to enhance the detection ability of previous handcrafted methods. Using the CNN method, we can derive a more suitable feature extractor for PAD than the other handcrafted methods using a training procedure. We further process the extracted image features to enhance the presentation attack finger-vein image detection ability of the CNN method using principal component analysis method (PCA) for dimensionality reduction of feature space and support vector machine (SVM) for classification. Through extensive experimental results, we confirm that our proposed method is adequate for presentation attack finger-vein image detection and it can deliver superior detection results compared to CNN-based methods and other previous handcrafted methods.
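
    The pipeline described above (CNN feature extraction, PCA dimensionality reduction, SVM classification) can be sketched as follows. A generic ImageNet-pretrained ResNet-18 from torchvision stands in for the paper's network, all hyperparameters are illustrative, and inputs are assumed to be finger-vein images already resized and normalized to the backbone's expected format.

```python
# Hedged sketch of a presentation-attack-detection pipeline: CNN features -> PCA -> SVM.
import numpy as np
import torch
from torchvision import models
from sklearn.decomposition import PCA
from sklearn.svm import SVC

backbone = models.resnet18(weights="IMAGENET1K_V1")   # generic stand-in backbone
backbone.fc = torch.nn.Identity()                     # 512-D feature extractor
backbone.eval()

def cnn_features(batch):
    """batch: float tensor of shape (N, 3, 224, 224), already normalized."""
    with torch.no_grad():
        return backbone(batch).numpy()

def fit_pad_classifier(train_batch, labels, n_components=128):
    """labels: 1 for real (live) images, 0 for presentation-attack images."""
    feats = cnn_features(train_batch)
    pca = PCA(n_components=min(n_components, len(feats) - 1)).fit(feats)
    svm = SVC(kernel="rbf", C=1.0).fit(pca.transform(feats), labels)
    return pca, svm
```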

  2. Spoof Detection for Finger-Vein Recognition System Using NIR Camera

    PubMed Central

    Nguyen, Dat Tien; Yoon, Hyo Sik; Pham, Tuyen Danh; Park, Kang Ryoung

    2017-01-01

    Finger-vein recognition, a new and advanced biometrics recognition method, is attracting the attention of researchers because of its advantages such as high recognition performance and lesser likelihood of theft and inaccuracies occurring on account of skin condition defects. However, as reported by previous researchers, it is possible to attack a finger-vein recognition system by using presentation attack (fake) finger-vein images. As a result, spoof detection, named as presentation attack detection (PAD), is necessary in such recognition systems. Previous attempts to establish PAD methods primarily focused on designing feature extractors by hand (handcrafted feature extractor) based on the observations of the researchers about the difference between real (live) and presentation attack finger-vein images. Therefore, the detection performance was limited. Recently, the deep learning framework has been successfully applied in computer vision and delivered superior results compared to traditional handcrafted methods on various computer vision applications such as image-based face recognition, gender recognition and image classification. In this paper, we propose a PAD method for near-infrared (NIR) camera-based finger-vein recognition system using convolutional neural network (CNN) to enhance the detection ability of previous handcrafted methods. Using the CNN method, we can derive a more suitable feature extractor for PAD than the other handcrafted methods using a training procedure. We further process the extracted image features to enhance the presentation attack finger-vein image detection ability of the CNN method using principal component analysis method (PCA) for dimensionality reduction of feature space and support vector machine (SVM) for classification. Through extensive experimental results, we confirm that our proposed method is adequate for presentation attack finger-vein image detection and it can deliver superior detection results compared to CNN-based methods and other previous handcrafted methods. PMID:28974031

  3. Sequential Probability Ratio Test for Collision Avoidance Maneuver Decisions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis

    2010-01-01

    When facing a conjunction between space objects, decision makers must choose whether or not to maneuver for collision avoidance. We apply a well-known decision procedure, the sequential probability ratio test, to this problem. We propose two approaches to the problem solution, one based on a frequentist method and the other on a Bayesian method. The frequentist method does not require any prior knowledge concerning the conjunction, while the Bayesian method assumes knowledge of prior probability densities. Our results show that both methods achieve the desired missed-detection rates, but the frequentist method's false-alarm performance is inferior to that of the Bayesian method.
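
    For readers unfamiliar with the sequential probability ratio test, the sketch below shows Wald's classical form for a Gaussian observation stream. It is illustrative only: the likelihood models, thresholds, and the function name sprt are generic placeholders, not the conjunction-specific formulation used in the paper.

      import math

      def sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
          """Return 'H1', 'H0', or 'continue' after processing the samples in order."""
          upper = math.log((1 - beta) / alpha)   # accept H1 above this threshold
          lower = math.log(beta / (1 - alpha))   # accept H0 below this threshold
          llr = 0.0
          for x in samples:
              # log-likelihood ratio of N(mu1, sigma) vs. N(mu0, sigma) for one sample
              llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
              if llr >= upper:
                  return 'H1'
              if llr <= lower:
                  return 'H0'
          return 'continue'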

  4. Sad Facial Expressions Increase Choice Blindness

    PubMed Central

    Wang, Yajie; Zhao, Song; Zhang, Zhijie; Feng, Wenfeng

    2018-01-01

    Previous studies have discovered a fascinating phenomenon known as choice blindness—individuals fail to detect mismatches between the face they choose and the face replaced by the experimenter. Although previous studies have reported a couple of factors that can modulate the magnitude of choice blindness, the potential effect of facial expression on choice blindness has not yet been explored. Using faces with sad and neutral expressions (Experiment 1) and faces with happy and neutral expressions (Experiment 2) in the classic choice blindness paradigm, the present study investigated the effects of facial expressions on choice blindness. The results showed that the detection rate was significantly lower on sad faces than neutral faces, whereas no significant difference was observed between happy faces and neutral faces. The exploratory analysis of verbal reports found that participants who reported less facial features for sad (as compared to neutral) expressions also tended to show a lower detection rate of sad (as compared to neutral) faces. These findings indicated that sad facial expressions increased choice blindness, which might have resulted from inhibition of further processing of the detailed facial features by the less attractive sad expressions (as compared to neutral expressions). PMID:29358926

  5. Sad Facial Expressions Increase Choice Blindness.

    PubMed

    Wang, Yajie; Zhao, Song; Zhang, Zhijie; Feng, Wenfeng

    2017-01-01

    Previous studies have discovered a fascinating phenomenon known as choice blindness-individuals fail to detect mismatches between the face they choose and the face replaced by the experimenter. Although previous studies have reported a couple of factors that can modulate the magnitude of choice blindness, the potential effect of facial expression on choice blindness has not yet been explored. Using faces with sad and neutral expressions (Experiment 1) and faces with happy and neutral expressions (Experiment 2) in the classic choice blindness paradigm, the present study investigated the effects of facial expressions on choice blindness. The results showed that the detection rate was significantly lower on sad faces than neutral faces, whereas no significant difference was observed between happy faces and neutral faces. The exploratory analysis of verbal reports found that participants who reported less facial features for sad (as compared to neutral) expressions also tended to show a lower detection rate of sad (as compared to neutral) faces. These findings indicated that sad facial expressions increased choice blindness, which might have resulted from inhibition of further processing of the detailed facial features by the less attractive sad expressions (as compared to neutral expressions).

  6. Image Quality Assessment for Fake Biometric Detection: Application to Iris, Fingerprint, and Face Recognition.

    PubMed

    Galbally, Javier; Marcel, Sébastien; Fierrez, Julian

    2014-02-01

    Ensuring the actual presence of a real legitimate trait, in contrast to a fake self-manufactured synthetic or reconstructed sample, is a significant problem in biometric authentication that requires the development of new and efficient protection measures. In this paper, we present a novel software-based fake detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts. The objective of the proposed system is to enhance the security of biometric recognition frameworks by adding liveness assessment in a fast, user-friendly, and non-intrusive manner, through the use of image quality assessment. The proposed approach presents a very low degree of complexity, which makes it suitable for real-time applications, using 25 general image quality features extracted from one image (i.e., the same acquired for authentication purposes) to distinguish between legitimate and impostor samples. The experimental results, obtained on publicly available data sets of fingerprint, iris, and 2D face, show that the proposed method is highly competitive compared with other state-of-the-art approaches and that the analysis of the general image quality of real biometric samples reveals highly valuable information that can be used very efficiently to discriminate them from fake traits.
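
    The idea of feeding general image-quality measures to a liveness classifier can be illustrated with a small feature extractor. The sketch below computes three simple full-reference measures against a Gaussian-smoothed copy of the input; it is only a stand-in for the 25 features used in the paper, and the function name quality_features is hypothetical (assumes NumPy and SciPy).

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def quality_features(gray):
          """gray: 2-D float array in [0, 255]. Returns a small quality feature vector."""
          smoothed = gaussian_filter(gray, sigma=1.0)       # low-pass reference copy
          mse = np.mean((gray - smoothed) ** 2)             # distortion vs. the smoothed copy
          psnr = 10 * np.log10(255.0 ** 2 / (mse + 1e-12))  # peak signal-to-noise ratio
          gy, gx = np.gradient(gray)
          grad_energy = np.mean(gx ** 2 + gy ** 2)          # crude sharpness proxy
          return np.array([mse, psnr, grad_energy])

    A vector of such features per image would then be passed to any standard classifier (e.g., an SVM or LDA) trained on labeled real and fake samples.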

  7. Hole Feature on Conical Face Recognition for Turning Part Model

    NASA Astrophysics Data System (ADS)

    Zubair, A. F.; Abu Mansor, M. S.

    2018-03-01

    Computer Aided Process Planning (CAPP) is the bridge between CAD and CAM and pre-processing of the CAD data in the CAPP system is essential. For CNC turning part, conical faces of part model is inevitable to be recognised beside cylindrical and planar faces. As the sinus cosines of the cone radius structure differ according to different models, face identification in automatic feature recognition of the part model need special intention. This paper intends to focus hole on feature on conical faces that can be detected by CAD solid modeller ACIS via. SAT file. Detection algorithm of face topology were generated and compared. The study shows different faces setup for similar conical part models with different hole type features. Three types of holes were compared and different between merge faces and unmerge faces were studied.

  8. The UK Sport perspective on detecting growth hormone abuse.

    PubMed

    Stow, M R; Wojek, N; Marshall, J

    2009-08-01

    Human growth hormone (hGH) is seen as a doping risk in sport because of its possible anabolic and lipolytic effects. As a result of this hGH is prohibited for athletes to use both in and out-of-competition by the World Anti-Doping Agency (WADA) requiring Anti-Doping Organisations to work with research teams in identifying ways to detect hGH abuse. This paper reviews and discusses the UK Sport perspective on the challenges faced in detecting hGH and in particular draws upon the experiences gained during the collaborative efforts with the GH-2004 research team in achieving the implementation of the Marker Method for hGH detection. In 2008 significant progress has been made; there is one test for detecting HGH approved for use in anti-doping and a second detection method pending. This is a strong reflection of the ongoing research efforts in anti-doping and the progress being made by the Anti-Doping Organisations in reducing the risk that doping poses to sport.

  9. A Lack of Sexual Dimorphism in Width-to-Height Ratio in White European Faces Using 2D Photographs, 3D Scans, and Anthropometry

    PubMed Central

    Kramer, Robin S. S.; Jones, Alex L.; Ward, Robert

    2012-01-01

    Facial width-to-height ratio has received a great deal of attention in recent research. Evidence from human skulls suggests that males have a larger relative facial width than females, and that this sexual dimorphism is an honest signal of masculinity, aggression, and related traits. However, evidence that this measure is sexually dimorphic in faces, rather than skulls, is surprisingly weak. We therefore investigated facial width-to-height ratio in three White European samples using three different methods of measurement: 2D photographs, 3D scans, and anthropometry. By measuring the same individuals with multiple methods, we demonstrated high agreement across all measures. However, we found no evidence of sexual dimorphism in the face. In our third study, we also found a link between facial width-to-height ratio and body mass index for both males and females, although this relationship did not account for the lack of dimorphism in our sample. While we showed sufficient power to detect differences between male and female width-to-height ratio, our results failed to support the general hypothesis of sexual dimorphism in the face. PMID:22880088

  10. Category search speeds up face-selective fMRI responses in a non-hierarchical cortical face network.

    PubMed

    Jiang, Fang; Badler, Jeremy B; Righi, Giulia; Rossion, Bruno

    2015-05-01

    The human brain is extremely efficient at detecting faces in complex visual scenes, but the spatio-temporal dynamics of this remarkable ability, and how it is influenced by category-search, remain largely unknown. In the present study, human subjects were shown gradually-emerging images of faces or cars in visual scenes, while neural activity was recorded using functional magnetic resonance imaging (fMRI). Category search was manipulated by the instruction to indicate the presence of either a face or a car, in different blocks, as soon as an exemplar of the target category was detected in the visual scene. The category selectivity of most face-selective areas was enhanced when participants were instructed to report the presence of faces in gradually decreasing noise stimuli. Conversely, the same regions showed much less selectivity when participants were instructed instead to detect cars. When "face" was the target category, the fusiform face area (FFA) showed consistently earlier differentiation of face versus car stimuli than did the "occipital face area" (OFA). When "car" was the target category, only the FFA showed differentiation of face versus car stimuli. These observations provide further challenges for hierarchical models of cortical face processing and show that during gradual revealing of information, selective category-search may decrease the required amount of information, enhancing and speeding up category-selective responses in the human brain. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Sky Detection in Hazy Image.

    PubMed

    Song, Yingchao; Luo, Haibo; Ma, Junkai; Hui, Bin; Chang, Zheng

    2018-04-01

    Sky detection plays an essential role in various computer vision applications. Most existing sky detection approaches, being trained on ideal datasets, may lose efficacy when facing unfavorable conditions such as the effects of weather and lighting. In this paper, a novel algorithm for sky detection in hazy images is proposed from the perspective of probing the density of haze. We address the problem by an image segmentation and a region-level classification. To characterize the sky of hazy scenes, we introduce several haze-relevant features that reflect the perceptual haze density and the scene depth. Based on these features, the sky is separated by two imbalanced SVM classifiers and a similarity measurement. Moreover, a sky dataset (named HazySky) with 500 annotated hazy images is built for model training and performance evaluation. To evaluate the performance of our method, we conducted extensive experiments on both our HazySky dataset and the SkyFinder dataset. The results demonstrate that our method achieves better detection accuracy than previous methods, not only under hazy scenes but also under other weather conditions.
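
    One widely used haze-density cue is the dark channel of an image patch; the sketch below computes it naively with NumPy. The paper's actual feature set is richer and is not reproduced here, so this is an illustrative assumption rather than the authors' method; the function name dark_channel and the patch size are placeholders.

      import numpy as np

      def dark_channel(rgb, patch=15):
          """rgb: (H, W, 3) float array in [0, 1]. Returns the per-pixel dark channel."""
          min_rgb = rgb.min(axis=2)                      # per-pixel minimum over channels
          h, w = min_rgb.shape
          pad = patch // 2
          padded = np.pad(min_rgb, pad, mode='edge')
          dark = np.empty_like(min_rgb)
          for i in range(h):                             # sliding-window minimum (naive loop)
              for j in range(w):
                  dark[i, j] = padded[i:i + patch, j:j + patch].min()
          return dark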

  12. Sky Detection in Hazy Image

    PubMed Central

    Song, Yingchao; Luo, Haibo; Ma, Junkai; Hui, Bin; Chang, Zheng

    2018-01-01

    Sky detection plays an essential role in various computer vision applications. Most existing sky detection approaches, being trained on ideal datasets, may lose efficacy when facing unfavorable conditions such as the effects of weather and lighting. In this paper, a novel algorithm for sky detection in hazy images is proposed from the perspective of probing the density of haze. We address the problem by an image segmentation and a region-level classification. To characterize the sky of hazy scenes, we introduce several haze-relevant features that reflect the perceptual haze density and the scene depth. Based on these features, the sky is separated by two imbalanced SVM classifiers and a similarity measurement. Moreover, a sky dataset (named HazySky) with 500 annotated hazy images is built for model training and performance evaluation. To evaluate the performance of our method, we conducted extensive experiments on both our HazySky dataset and the SkyFinder dataset. The results demonstrate that our method achieves better detection accuracy than previous methods, not only under hazy scenes but also under other weather conditions. PMID:29614778

  13. A new method for automatic discontinuity traces sampling on rock mass 3D model

    NASA Astrophysics Data System (ADS)

    Umili, G.; Ferrero, A.; Einstein, H. H.

    2013-02-01

    A new automatic method for discontinuity trace mapping and sampling on a rock mass digital model is described in this work. The implemented procedure allows one to automatically identify discontinuity traces on a Digital Surface Model: traces are detected directly as surface breaklines, by means of the maximum and minimum principal curvature values of the vertices that constitute the model surface. Color influence and user errors, which usually characterize trace mapping on images, are eliminated. Trace sampling procedures based on circular windows and circular scanlines have also been implemented: they are used to infer trace data and to calculate values of mean trace length, expected discontinuity diameter, and intensity of rock discontinuities. The method is tested on a case study: results obtained by applying the automatic procedure on the DSM of a rock face are compared to those obtained by performing a manual sampling on the orthophotograph of the same rock face.

  14. Determination of the action modes of cellulases from hydrolytic profiles over a time course using fluorescence-assisted carbohydrate electrophoresis.

    PubMed

    Zhang, Qing; Zhang, Xiaomei; Wang, Peipei; Li, Dandan; Chen, Guanjun; Gao, Peiji; Wang, Lushan

    2015-03-01

    Fluorescence-assisted carbohydrate electrophoresis (FACE) is a sensitive and simple method for the separation of oligosaccharides. It relies on labeling the reducing ends of oligosaccharides with a fluorophore, followed by PAGE. Concentration changes of oligosaccharides following hydrolysis of a carbohydrate polymer can be measured quantitatively and continuously over time using the FACE method. Based on this quantitative analysis, we suggest that FACE is a relatively high-throughput, repeatable, and suitable method for the analysis of the action modes of cellulases. Based on the time courses of their hydrolytic profiles, the apparent processivity was used to distinguish the different action modes of cellulases. Cellulases could be easily differentiated as exoglucanases, β-glucosidases, or endoglucanases. Moreover, endoglucanases from the same glycoside hydrolase family had a variety of apparent processivities, indicating different modes of action. Endoglucanases with the same binding capacities and hydrolytic activities had similar oligosaccharide profiles, which aided in their classification. The hydrolytic profile of Trichoderma reesei Cel12A, an endoglucanase from T. reesei, contained glucose, cellobiose, and cellotriose, which revealed that it may have a new glucosidase activity, corresponding to that of EC 3.2.1.74. A hydrolysate study of a T. reesei Cel12A-N20A mutant demonstrated that the FACE method was sufficiently sensitive to detect the influence of a single-site mutation on enzymatic activity. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Non-contact detection of cardiac rate based on visible light imaging device

    NASA Astrophysics Data System (ADS)

    Zhu, Huishi; Zhao, Yuejin; Dong, Liquan

    2012-10-01

    We have developed a non-contact method to detect the human cardiac rate at a distance. The detection works under general lighting conditions. Using the video signal of the human face region captured by a webcam, we acquire the cardiac rate based on the PhotoPlethysmoGraphy principle. In this paper, the cardiac rate detection method mainly exploits blood's different absorptivity at various light wavelengths. First, we decompose the video signal into the three RGB color channels and choose the face region as the region of interest over which the average gray value is taken. Then, we plot a gray-mean curve for each color channel as a function of time. When the imaging device has good color fidelity, the green channel signal shows the PhotoPlethysmoGraphy information most clearly, while the red and blue channel signals can provide additional physiological information on account of blood's absorptive characteristics at those wavelengths. We divide the red channel signal by the green channel signal to acquire the pulse wave. Filtering the pulse wave signal with a passband from 0.67 Hz to 3 Hz and applying a frequency-spectrum superposition algorithm, we design a frequency extraction algorithm to obtain the cardiac rate. Finally, we ran experiments with 30 volunteers of different genders and ages. The results show good agreement, with a difference of about 2 bpm. From these experiments, we conclude that PhotoPlethysmoGraphy based on visible light can also be used to detect other physiological information.
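
    The processing chain described above (per-channel means, red/green ratio, 0.67-3 Hz band-pass, spectral peak) can be sketched as follows. This is a minimal sketch assuming NumPy and SciPy; cardiac_rate and its arguments are hypothetical names, and the paper's spectrum-superposition step is reduced here to a single FFT peak pick.

      import numpy as np
      from scipy.signal import butter, filtfilt

      def cardiac_rate(frames, fps):
          """frames: list of (H, W, 3) RGB face-region arrays. Returns rate in beats/min."""
          red = np.array([f[..., 0].mean() for f in frames])
          green = np.array([f[..., 1].mean() for f in frames])
          pulse = red / (green + 1e-9)                      # red/green ratio signal
          b, a = butter(3, [0.67, 3.0], btype='bandpass', fs=fps)
          filtered = filtfilt(b, a, pulse - pulse.mean())   # keep the 0.67-3 Hz band
          spectrum = np.abs(np.fft.rfft(filtered))
          freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
          band = (freqs >= 0.67) & (freqs <= 3.0)
          peak = freqs[band][np.argmax(spectrum[band])]     # dominant cardiac frequency
          return peak * 60.0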

  16. Faces Do Not Capture Special Attention in Children with Autism Spectrum Disorder: A Change Blindness Study

    ERIC Educational Resources Information Center

    Kikuchi, Yukiko; Senju, Atsushi; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu

    2009-01-01

    Two experiments investigated attention of children with autism spectrum disorder (ASD) to faces and objects. In both experiments, children (7- to 15-year-olds) detected the difference between 2 visual scenes. Results in Experiment 1 revealed that typically developing children (n = 16) detected the change in faces faster than in objects, whereas…

  17. Preliminary evidence that different mechanisms underlie the anger superiority effect in children with and without Autism Spectrum Disorders

    PubMed Central

    Isomura, Tomoko; Ogawa, Shino; Yamada, Satoko; Shibasaki, Masahiro; Masataka, Nobuo

    2014-01-01

    Previous studies have demonstrated that angry faces capture humans' attention more rapidly than emotionally positive faces. This phenomenon is referred to as the anger superiority effect (ASE). Despite atypical emotional processing, adults and children with Autism Spectrum Disorders (ASD) have been reported to show ASE as well as typically developed (TD) individuals. So far, however, few studies have clarified whether or not the mechanisms underlying ASE are the same for both TD and ASD individuals. Here, we tested how TD and ASD children process schematic emotional faces during detection by employing a recognition task in combination with a face-in-the-crowd task. Results of the face-in-the-crowd task revealed the prevalence of ASE both in TD and ASD children. However, the results of the recognition task revealed group differences: In TD children, detection of angry faces required more configural face processing and disrupted the processing of local features. In ASD children, on the other hand, it required more feature-based processing rather than configural processing. Despite the small sample sizes, these findings provide preliminary evidence that children with ASD, in contrast to TD children, show quick detection of angry faces by extracting local features in faces. PMID:24904477

  18. Hardware-software face detection system based on multi-block local binary patterns

    NASA Astrophysics Data System (ADS)

    Acasandrei, Laurentiu; Barriga, Angel

    2015-03-01

    Face detection is an important aspect of biometrics, video surveillance, and human-computer interaction. Due to the complexity of the detection algorithms, any face detection system requires a huge amount of computational and memory resources. In this communication, an accelerated implementation of the MB-LBP face detection algorithm targeting low-frequency, low-memory, and low-power embedded systems is presented. The resulting implementation is time-deterministic and uses a customizable AMBA IP hardware accelerator. The IP implements the kernel operations of the MB-LBP algorithm and can be used as a universal accelerator for MB-LBP-based applications. The IP employs 8 parallel MB-LBP feature evaluator cores, uses a deterministic bandwidth, has a low area profile, and its power consumption is ~95 mW on a Virtex5 XC5VLX50T. The acceleration gain of the resulting implementation is between 5 and 8 times, while the hardware MB-LBP feature evaluation gain is between 69 and 139 times.
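
    To make the kernel operation concrete, the sketch below computes a single multi-block LBP code in software: the mean intensities of the eight blocks surrounding a centre block are thresholded against the centre mean and packed into an 8-bit code, with an integral image giving each block sum in constant time. It is a software illustration only (NumPy), not the hardware IP; the function mb_lbp and the clockwise neighbour ordering are assumptions.

      import numpy as np

      def block_sum(ii, x, y, w, h):
          """Sum of pixels in the w x h block at (x, y), using the padded integral image ii."""
          return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

      def mb_lbp(gray, x, y, bw, bh):
          """8-bit MB-LBP code for the 3x3 block grid whose top-left corner is (x, y)."""
          # integral image padded with a leading row/column of zeros
          ii = np.pad(np.cumsum(np.cumsum(gray, axis=0), axis=1), ((1, 0), (1, 0)))
          means = [[block_sum(ii, x + j * bw, y + i * bh, bw, bh) / (bw * bh)
                    for j in range(3)] for i in range(3)]
          centre = means[1][1]
          # clockwise neighbour order starting at the top-left block (assumed ordering)
          order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
          code = 0
          for bit, (i, j) in enumerate(order):
              if means[i][j] >= centre:
                  code |= 1 << bit
          return code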

  19. External and internal facial features modulate processing of vertical but not horizontal spatial relations.

    PubMed

    Meinhardt, Günter; Kurbel, David; Meinhardt-Injac, Bozana; Persike, Malte

    2018-03-22

    Some years ago an asymmetry was reported for the inversion effect for horizontal (H) and vertical (V) relational face manipulations (Goffaux & Rossion, 2007). Subsequent research examined whether a specific disruption of long-range relations underlies the H/V inversion asymmetry (Sekunova & Barton, 2008). Here, we tested how detection of changes in interocular distance (H) and eye height (V) depends on cardinal internal features and external feature surround. Results replicated the H/V inversion asymmetry. Moreover, we found very different face cue dependencies for both change types. Performance and inversion effects did not depend on the presence of other face cues for detecting H changes. In contrast, accuracy for detecting V changes strongly depended on internal and external features, showing cumulative improvement when more cues were added. Inversion effects were generally large, and larger with external feature surround. The cue independence in detecting H relational changes indicates specialized local processing tightly tuned to the eyes region, while the strong cue dependency in detecting V relational changes indicates a global mechanism of cue integration across different face regions. These findings suggest that the H/V asymmetry of the inversion effect rests on an H/V anisotropy of face cue dependency, since only the global V mechanism suffers from disruption of cue integration as the major effect of face inversion. Copyright © 2018. Published by Elsevier Ltd.

  20. Optimized face recognition algorithm using radial basis function neural networks and its practical applications.

    PubMed

    Yoo, Sung-Hoon; Oh, Sung-Kwun; Pedrycz, Witold

    2015-09-01

    In this study, we propose a hybrid method of face recognition by using face region information extracted from the detected face region. In the preprocessing part, we develop a hybrid approach based on the Active Shape Model (ASM) and the Principal Component Analysis (PCA) algorithm. At this step, we use a CCD (Charge Coupled Device) camera to acquire a facial image by using AdaBoost and then Histogram Equalization (HE) is employed to improve the quality of the image. ASM extracts the face contour and image shape to produce a personal profile. Then we use a PCA method to reduce dimensionality of face images. In the recognition part, we consider the improved Radial Basis Function Neural Networks (RBF NNs) to identify a unique pattern associated with each person. The proposed RBF NN architecture consists of three functional modules realizing the condition phase, the conclusion phase, and the inference phase completed with the help of fuzzy rules coming in the standard 'if-then' format. In the formation of the condition part of the fuzzy rules, the input space is partitioned with the use of Fuzzy C-Means (FCM) clustering. In the conclusion part of the fuzzy rules, the connections (weights) of the RBF NNs are represented by four kinds of polynomials such as constant, linear, quadratic, and reduced quadratic. The values of the coefficients are determined by running a gradient descent method. The output of the RBF NNs model is obtained by running a fuzzy inference method. The essential design parameters of the network (including learning rate, momentum coefficient and fuzzification coefficient used by the FCM) are optimized by means of Differential Evolution (DE). The proposed P-RBF NNs (Polynomial based RBF NNs) are applied to facial recognition and its performance is quantified from the viewpoint of the output performance and recognition rate. Copyright © 2015 Elsevier Ltd. All rights reserved.
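
    A greatly simplified software sketch of the recognition stage is given below: PCA-reduced face vectors are mapped through Gaussian radial basis functions centred on cluster prototypes, and a linear read-out is fit on top. K-means clustering and a ridge classifier are substituted here for the paper's fuzzy C-means partitioning, polynomial conclusions, and differential-evolution tuning, so this is an approximation rather than the proposed P-RBF NN; all names and parameter values are placeholders (assumes scikit-learn).

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.decomposition import PCA
      from sklearn.linear_model import RidgeClassifier

      def fit_rbf_recognizer(face_vectors, labels, n_components=50, n_centres=20, gamma=1e-3):
          """face_vectors: (n_samples, n_pixels) array; labels: person identities."""
          pca = PCA(n_components=n_components).fit(face_vectors)
          reduced = pca.transform(face_vectors)
          centres = KMeans(n_clusters=n_centres, n_init=10).fit(reduced).cluster_centers_
          # Gaussian RBF activations with respect to each prototype centre
          dists = ((reduced[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
          activations = np.exp(-gamma * dists)
          readout = RidgeClassifier().fit(activations, labels)  # linear read-out layer
          return pca, centres, readout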

  1. Monitoring of facial stress during space flight: Optical computer recognition combining discriminative and generative methods

    NASA Astrophysics Data System (ADS)

    Dinges, David F.; Venkataraman, Sundara; McGlinchey, Eleanor L.; Metaxas, Dimitris N.

    2007-02-01

    Astronauts are required to perform mission-critical tasks at a high level of functional capability throughout spaceflight. Stressors can compromise their ability to do so, making early objective detection of neurobehavioral problems in spaceflight a priority. Computer optical approaches offer a completely unobtrusive way to detect distress during critical operations in space flight. A methodology was developed and a study completed to determine whether optical computer recognition algorithms could be used to discriminate facial expressions during stress induced by performance demands. Stress recognition from a facial image sequence is a subject that has not received much attention, although it is an important problem for many applications beyond space flight (security, human-computer interaction, etc.). This paper proposes a comprehensive method to detect stress from facial image sequences by using a model-based tracker. The image sequences were captured as subjects underwent a battery of psychological tests under high- and low-stress conditions. A cue integration-based tracking system accurately captured the rigid and non-rigid parameters of different parts of the face (eyebrows, lips). The labeled sequences were used to train the recognition system, which consisted of generative (hidden Markov model) and discriminative (support vector machine) parts that yield results superior to using either approach individually. The current optical algorithm methods performed at a 68% accuracy rate in an experimental study of 60 healthy adults undergoing periods of high-stress versus low-stress performance demands. The accuracy and practical feasibility of the technique are being improved further with automatic multi-resolution selection for the discretization of the mask, and automated face detection and mask initialization algorithms.

  2. Horizontal transfer of GM DNA - why is almost no one looking? Open letter to Kaare Nielsen in his capacity as a member of the European Food Safety Authority GMO panel.

    PubMed

    Ho, Mae-Wan

    2014-01-01

    A culture of denial over the horizontal spread of genetically modified nucleic acids prevails in the face of direct evidence that it has occurred widely when appropriate methods and molecular probes are used for detection.

  3. Presentation Attack Detection for Iris Recognition System Using NIR Camera Sensor

    PubMed Central

    Nguyen, Dat Tien; Baek, Na Rae; Pham, Tuyen Danh; Park, Kang Ryoung

    2018-01-01

    Among biometric recognition systems such as fingerprint, finger-vein, or face, the iris recognition system has proven to be effective for achieving a high recognition accuracy and security level. However, several recent studies have indicated that an iris recognition system can be fooled by using presentation attack images that are recaptured using high-quality printed images or by contact lenses with printed iris patterns. As a result, this potential threat can reduce the security level of an iris recognition system. In this study, we propose a new presentation attack detection (PAD) method for an iris recognition system (iPAD) using a near infrared light (NIR) camera image. To detect presentation attack images, we first localized the iris region of the input iris image using circular edge detection (CED). Based on the result of iris localization, we extracted the image features using deep learning-based and handcrafted-based methods. The input iris images were then classified into real and presentation attack categories using support vector machines (SVM). Through extensive experiments with two public datasets, we show that our proposed method effectively solves the iris recognition presentation attack detection problem and produces detection accuracy superior to previous studies. PMID:29695113
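
    A rough localisation step of the kind described above can be sketched with a Hough circle transform standing in for the paper's circular edge detection (CED); this substitution and the parameter values are assumptions, and locate_iris is a hypothetical name (assumes OpenCV).

      import cv2

      def locate_iris(nir_gray):
          """Return (x, y, r) of the most salient circular boundary in an NIR image, or None."""
          blurred = cv2.GaussianBlur(nir_gray, (7, 7), 0)     # suppress sensor noise
          circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5, minDist=100,
                                     param1=100, param2=40, minRadius=30, maxRadius=150)
          if circles is None:
              return None
          x, y, r = circles[0][0]                             # strongest circle candidate
          return int(x), int(y), int(r)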

  4. Presentation Attack Detection for Iris Recognition System Using NIR Camera Sensor.

    PubMed

    Nguyen, Dat Tien; Baek, Na Rae; Pham, Tuyen Danh; Park, Kang Ryoung

    2018-04-24

    Among biometric recognition systems such as fingerprint, finger-vein, or face, the iris recognition system has proven to be effective for achieving a high recognition accuracy and security level. However, several recent studies have indicated that an iris recognition system can be fooled by using presentation attack images that are recaptured using high-quality printed images or by contact lenses with printed iris patterns. As a result, this potential threat can reduce the security level of an iris recognition system. In this study, we propose a new presentation attack detection (PAD) method for an iris recognition system (iPAD) using a near infrared light (NIR) camera image. To detect presentation attack images, we first localized the iris region of the input iris image using circular edge detection (CED). Based on the result of iris localization, we extracted the image features using deep learning-based and handcrafted-based methods. The input iris images were then classified into real and presentation attack categories using support vector machines (SVM). Through extensive experiments with two public datasets, we show that our proposed method effectively solves the iris recognition presentation attack detection problem and produces detection accuracy superior to previous studies.

  5. Reconstruction of lower face defect or deformity with submental artery perforator flaps.

    PubMed

    Shi, Cheng-li; Wang, Xian-cheng

    2012-07-01

    Reconstruction of lower face defects or deformity often presents a challenge for plastic surgeons. Many methods, including skin grafts, tissue expanders, and free flaps, have been introduced. Submental artery perforator flaps have been used in the reconstruction of defects or deformities of the lower face. Between August 2006 and December 2008, 22 patients with lower face defects or deformity underwent reconstruction with pedicled submental artery perforator flaps. Their ages ranged between 14 and 36 years. The perforator arteries were detected and labeled with a hand-held Doppler flowmeter. The size of the flaps ranged from 4 × 6 to 6 × 7 cm, and the designed flaps included the perforator artery. All the flaps survived well, except one flap that developed partial necrosis in the distal region and healed after conservative therapy. No other complications occurred, and the aesthetic appearance of the donor site was satisfactory. The submental artery perforator flap is a thin and reliable flap with a robust blood supply. This flap can reduce donor-site morbidity significantly and is a good choice for reconstructive surgery of the lower face.

  6. Anti Theft Mechanism Through Face recognition Using FPGA

    NASA Astrophysics Data System (ADS)

    Sundari, Y. B. T.; Laxminarayana, G.; Laxmi, G. Vijaya

    2012-11-01

    The use of a vehicle is a must for everyone. At the same time, protection from theft is also very important. Prevention of vehicle theft can be done remotely by an authorized person. The location of the car can be found by using GPS and GSM controlled by an FPGA. In this paper, face recognition is used to identify persons, and a comparison is done with the preloaded faces for authorization. The vehicle will start only when the authorized person's face is identified. In the event of a theft attempt or an unauthorized person's attempt to drive the vehicle, an MMS/SMS will be sent to the owner along with the location. The authorized person can then alert security personnel to track and catch the vehicle. For face recognition, a Principal Component Analysis (PCA) algorithm is developed using MATLAB. The control technique for GPS and GSM is developed using VHDL on a Spartan 3E FPGA. The MMS sending method is written in VB6.0. The proposed application can be implemented, with some modifications, in systems wherever face recognition or detection is needed, such as airports, international borders, and banking applications.

  7. Observed touch on a non-human face is not remapped onto the human observer's own face.

    PubMed

    Beck, Brianna; Bertini, Caterina; Scarpazza, Cristina; Làdavas, Elisabetta

    2013-01-01

    Visual remapping of touch (VRT) is a phenomenon in which seeing a human face being touched enhances detection of tactile stimuli on the observer's own face, especially when the observed face expresses fear. This study tested whether VRT would occur when seeing touch on monkey faces and whether it would be similarly modulated by facial expressions. Human participants detected near-threshold tactile stimulation on their own cheeks while watching fearful, happy, and neutral human or monkey faces being concurrently touched or merely approached by fingers. We predicted minimal VRT for neutral and happy monkey faces but greater VRT for fearful monkey faces. The results with human faces replicated previous findings, demonstrating stronger VRT for fearful expressions than for happy or neutral expressions. However, there was no VRT (i.e. no difference between accuracy in touch and no-touch trials) for any of the monkey faces, regardless of facial expression, suggesting that touch on a non-human face is not remapped onto the somatosensory system of the human observer.

  8. Observed Touch on a Non-Human Face Is Not Remapped onto the Human Observer's Own Face

    PubMed Central

    Beck, Brianna; Bertini, Caterina; Scarpazza, Cristina; Làdavas, Elisabetta

    2013-01-01

    Visual remapping of touch (VRT) is a phenomenon in which seeing a human face being touched enhances detection of tactile stimuli on the observer's own face, especially when the observed face expresses fear. This study tested whether VRT would occur when seeing touch on monkey faces and whether it would be similarly modulated by facial expressions. Human participants detected near-threshold tactile stimulation on their own cheeks while watching fearful, happy, and neutral human or monkey faces being concurrently touched or merely approached by fingers. We predicted minimal VRT for neutral and happy monkey faces but greater VRT for fearful monkey faces. The results with human faces replicated previous findings, demonstrating stronger VRT for fearful expressions than for happy or neutral expressions. However, there was no VRT (i.e. no difference between accuracy in touch and no-touch trials) for any of the monkey faces, regardless of facial expression, suggesting that touch on a non-human face is not remapped onto the somatosensory system of the human observer. PMID:24250781

  9. Eye pupil detection system using an ensemble of regression forest and fast radial symmetry transform with a near infrared camera

    NASA Astrophysics Data System (ADS)

    Jeong, Mira; Nam, Jae-Yeal; Ko, Byoung Chul

    2017-09-01

    In this paper, we focus on pupil center detection in various video sequences that include head poses and changes in illumination. To detect the pupil center, we first find four eye landmarks in each eye by using cascaded local regression based on a regression forest. Based on the rough location of the pupil, a fast radial symmetry transform is applied using the previously found pupil location to refine the pupil center. As the final step, the pupil displacement between the previous frame and the current frame is estimated to maintain accuracy against false localization results occurring in particular frames. We generated a new face dataset, called Keimyung University pupil detection (KMUPD), with an infrared camera. The proposed method was successfully applied to the KMUPD dataset, and the results indicate that its pupil center detection capability is better than that of other methods, with a shorter processing time.

  10. Hyper-realistic face masks: a new challenge in person identification.

    PubMed

    Sanders, Jet Gabrielle; Ueda, Yoshiyuki; Minemoto, Kazusa; Noyes, Eilidh; Yoshikawa, Sakiko; Jenkins, Rob

    2017-01-01

    We often identify people using face images. This is true in occupational settings such as passport control as well as in everyday social environments. Mapping between images and identities assumes that facial appearance is stable within certain bounds. For example, a person's apparent age, gender and ethnicity change slowly, if at all. It also assumes that deliberate changes beyond these bounds (i.e., disguises) would be easy to spot. Hyper-realistic face masks overturn these assumptions by allowing the wearer to look like an entirely different person. If unnoticed, these masks break the link between facial appearance and personal identity, with clear implications for applied face recognition. However, to date, no one has assessed the realism of these masks, or specified conditions under which they may be accepted as real faces. Herein, we examined incidental detection of unexpected but attended hyper-realistic masks in both photographic and live presentations. Experiment 1 (UK; n = 60) revealed no evidence for overt detection of hyper-realistic masks among real face photos, and little evidence of covert detection. Experiment 2 (Japan; n = 60) extended these findings to different masks, mask-wearers and participant pools. In Experiment 3 (UK and Japan; n = 407), passers-by failed to notice that a live confederate was wearing a hyper-realistic mask and showed limited evidence of covert detection, even at close viewing distance (5 vs. 20 m). Across all of these studies, viewers accepted hyper-realistic masks as real faces. Specific countermeasures will be required if detection rates are to be improved.

  11. Visual search for faces by race: a cross-race study.

    PubMed

    Sun, Gang; Song, Luping; Bentin, Shlomo; Yang, Yanjie; Zhao, Lun

    2013-08-30

    Using a single averaged face of each race, a previous study indicated that the detection of one other-race face among own-race background faces was faster than vice versa (Levin, 1996, 2000). However, employing a variable mapping of face pictures, one recent report found preferential detection of own-race faces versus other-race faces (Lipp et al., 2009). Using a well-controlled design and a heterogeneous set of real face images, in the present study we explored visual search for own- and other-race faces in Chinese and Caucasian participants. Across both groups, the search for a face of one race among other-race faces was serial and self-terminating. In Chinese participants, the search was consistently faster for other-race than own-race faces, irrespective of upright or upside-down presentation; however, this search asymmetry was not evident in Caucasian participants. These characteristics suggest that the race of a face is not a basic visual feature, and that in Chinese participants the faster search for other-race than own-race faces also reflects perceptual factors. The possible mechanism underlying other-race search effects is discussed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Neural Correlates of Face and Object Perception in an Awake Chimpanzee (Pan Troglodytes) Examined by Scalp-Surface Event-Related Potentials

    PubMed Central

    Fukushima, Hirokata; Hirata, Satoshi; Ueno, Ari; Matsuda, Goh; Fuwa, Kohki; Sugama, Keiko; Kusunoki, Kiyo; Hirai, Masahiro; Hiraki, Kazuo; Tomonaga, Masaki; Hasegawa, Toshikazu

    2010-01-01

    Background: The neural system of our closest living relative, the chimpanzee, is a topic of increasing research interest. However, electrophysiological examinations of neural activity during visual processing in awake chimpanzees are currently lacking. Methodology/Principal Findings: In the present report, skin-surface event-related brain potentials (ERPs) were measured while a fully awake chimpanzee observed photographs of faces and objects in two experiments. In Experiment 1, human faces and stimuli composed of scrambled face images were displayed. In Experiment 2, three types of pictures (faces, flowers, and cars) were presented. The waveforms evoked by face stimuli were distinguished from other stimulus types, as reflected by an enhanced early positivity appearing before 200 ms post stimulus, and an enhanced late negativity after 200 ms, around posterior and occipito-temporal sites. Face-sensitive activity was clearly observed in both experiments. However, in contrast to the robustly observed face-evoked N170 component in humans, we found that faces did not elicit a peak in the latency range of 150–200 ms in either experiment. Conclusions/Significance: Although this pilot study examined a single subject and requires further examination, the observed scalp voltage patterns suggest that selective processing of faces in the chimpanzee brain can be detected by recording surface ERPs. In addition, this non-invasive method for examining an awake chimpanzee can be used to extend our knowledge of the characteristics of visual cognition in other primate species. PMID:20967284

  13. Assessment of Emotional Expressions after Full-Face Transplantation.

    PubMed

    Topçu, Çağdaş; Uysal, Hilmi; Özkan, Ömer; Özkan, Özlenen; Polat, Övünç; Bedeloğlu, Merve; Akgül, Arzu; Döğer, Ela Naz; Sever, Refik; Barçın, Nur Ebru; Tombak, Kadriye; Çolak, Ömer Halil

    2017-01-01

    We assessed clinical features as well as sensory and motor recoveries in 3 full-face transplantation patients. A frequency analysis was performed on facial surface electromyography data collected during 6 basic emotional expressions and 4 primary facial movements. Motor progress was assessed using the wavelet packet method by comparison against the mean results obtained from 10 healthy subjects. Analyses were conducted on 1 patient at approximately 1 year after face transplantation and at 2 years after transplantation in the remaining 2 patients. Motor recovery was observed following sensory recovery in all 3 patients; however, the 3 cases had different backgrounds and exhibited different degrees and rates of sensory and motor improvements after transplant. Wavelet packet energy was detected in all patients during emotional expressions and primary movements; however, there were fewer active channels during expressions in transplant patients compared to healthy individuals, and patterns of wavelet packet energy were different for each patient. Finally, high-frequency components were typically detected in patients during emotional expressions, but fewer channels demonstrated these high-frequency components in patients compared to healthy individuals. Our data suggest that the posttransplantation recovery of emotional facial expression requires neural plasticity.
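
    The wavelet-packet energy comparison can be illustrated for a single EMG channel as below. The wavelet family, decomposition level, and the function name wavelet_packet_energy are illustrative assumptions; the sketch uses the PyWavelets package and is not the authors' analysis pipeline.

      import numpy as np
      import pywt

      def wavelet_packet_energy(signal, wavelet='db4', level=4):
          """Return the energy of each terminal wavelet-packet node of a 1-D EMG signal."""
          wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
          nodes = wp.get_level(level, order='natural')          # terminal sub-bands
          return np.array([np.sum(np.square(node.data)) for node in nodes])

    Comparing such per-channel energy vectors between patients and healthy controls is one way to quantify which channels are active during a given expression.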

  14. 77 FR 8328 - Open Meeting of the Taxpayer Advocacy Panel Face-to-Face Service Methods Project Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-14

    ... Face-to-Face Service Methods Project Committee AGENCY: Internal Revenue Service (IRS), Treasury. ACTION: Notice of meeting. SUMMARY: An open meeting of the Taxpayer Advocacy Panel Face-to-Face Service Methods... Panel Face-to-Face Service Methods Project Committee will be held Tuesday, March 13, 2012, at 2 p.m...

  15. Face Mask Sampling for the Detection of Mycobacterium tuberculosis in Expelled Aerosols

    PubMed Central

    Malkin, Joanne; Patel, Hemu; Otu, Jacob; Mlaga, Kodjovi; Sutherland, Jayne S.; Antonio, Martin; Perera, Nelun; Woltmann, Gerrit; Haldar, Pranabashis; Garton, Natalie J.; Barer, Michael R.

    2014-01-01

    Background: Although tuberculosis is transmitted by the airborne route, direct information on the natural output of bacilli into air by source cases is very limited. We sought to address this through sampling of expelled aerosols in face masks that were subsequently analyzed for mycobacterial contamination. Methods: In series 1, 17 smear microscopy positive patients wore standard surgical face masks once or twice for periods between 10 minutes and 5 hours; mycobacterial contamination was detected using a bacteriophage assay. In series 2, 19 patients with suspected tuberculosis were studied in Leicester UK and 10 patients with at least one positive smear were studied in The Gambia. These subjects wore one FFP30 mask modified to contain a gelatin filter for one hour; this was subsequently analyzed by the Xpert MTB/RIF system. Results: In series 1, the bacteriophage assay detected live mycobacteria in 11/17 patients with wearing times between 10 and 120 minutes. Variation was seen in mask positivity and the level of contamination detected in multiple samples from the same patient. Two patients had non-tuberculous mycobacterial infections. In series 2, 13/20 patients with pulmonary tuberculosis produced positive masks and 0/9 patients with extrapulmonary or non-tuberculous diagnoses were mask positive. Overall, 65% of patients with confirmed pulmonary mycobacterial infection gave positive masks and this included 3/6 patients who received diagnostic bronchoalveolar lavages. Conclusion: Mask sampling provides a simple means of assessing mycobacterial output in non-sputum expectorant. The approach shows potential for application to the study of airborne transmission and to diagnosis. PMID:25122163

  16. Interactive display system having a scaled virtual target zone

    DOEpatents

    Veligdan, James T.; DeSanto, Leonard

    2006-06-13

    A display system includes a waveguide optical panel having an inlet face and an opposite outlet face. A projector and imaging device cooperate with the panel for projecting a video image thereon. An optical detector bridges at least a portion of the waveguides for detecting a location on the outlet face within a target zone of an inbound light spot. A controller is operatively coupled to the imaging device and detector for displaying a cursor on the outlet face corresponding with the detected location of the spot within the target zone.

  17. Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions

    NASA Astrophysics Data System (ADS)

    Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.

    2005-03-01

    The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system also exhibits the capacity to focus its attention on the faces of detected pedestrians, collecting snapshot frames of face images by segmenting and tracking them over time at different resolutions. The system is designed to employ two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects the moving objects using change detection techniques. The detected objects are tracked over time and their positions are indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor in order to point its zooming optics towards the moving object. The second camera tracks the objects at high resolution. Like the client camera, this sensor is calibrated, and the position of the object detected in the image-plane reference system is translated into coordinates referred to the same area map. In the map's common reference system, data fusion techniques are applied to achieve a more precise and robust estimation of the objects' tracks and to perform face detection and tracking. The work's novelty and strength reside in the cooperative multi-sensor approach, in the high-resolution long-distance tracking, and in the automatic collection of biometric data such as a person's face clip for recognition purposes.

  18. Segmentation of the Speaker's Face Region with Audiovisual Correlation

    NASA Astrophysics Data System (ADS)

    Liu, Yuyu; Sato, Yoichi

    The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows, which is robust against changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph cut-based video segmentation to obtain a globally optimal extraction of the speaker's face region. The setting of any heuristic threshold in this segmentation is avoided by learning the correlation distributions of speaker and background by expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.

  19. Structural transformations in hull material clad by nitrogen stainless steel using various methods

    NASA Astrophysics Data System (ADS)

    Sagaradze, V. V.; Kataeva, N. V.; Mushnikova, S. Yu.; Khar'kov, O. A.; Kalinin, G. Yu.; Yampol'skii, V. D.

    2014-02-01

    Specimens of a 10N3KhDMBF shipbuilding hull steel were clad by a 04Kh20N6G11M2AFB nitrogen austenitic steel using various treatment conditions, which included hot rolling, austenitic facing, and explosive welding followed by hot rolling and heat treatment. Between the base and cladding materials, an intermediate layer with variable concentrations of chromium, manganese, and nickel was found, in which a martensitic structure was formed. In all the cases, the strength of bonding of the cladding layer to the hull steel (determined in tests for shear to fracture) was fairly high (σsh = 437-520 MPa). The only exception was the specimen produced by unidirectional facing without subsequent hot rolling (σsh = 308 MPa), in which nonfusions between the faced beads of stainless steel were detected.

  20. An ERP study of famous face incongruity detection in middle age.

    PubMed

    Chaby, L; Jemel, B; George, N; Renault, B; Fiori, N

    2001-04-01

    Age-related changes in famous face incongruity detection were examined in middle-aged (mean = 50.6) and young (mean = 24.8) subjects. Behavioral and ERP responses were recorded while subjects, after a presentation of a "prime face" (a famous person with the eyes masked), had to decide whether the following "test face" was completed with its authentic eyes (congruent) or with other eyes (incongruent). The principal effects of advancing age were (1) behavioral difficulties in discriminating between incongruent and congruent faces; (2) a reduced N400 effect due to N400 enhancement for both congruent and incongruent faces; (3) a latency increase of both N400 and P600 components. ERPs to primes (face encoding) were not affected by aging. These results are interpreted in terms of early signs of aging. Copyright 2001 Academic Press.

  1. Detection of Lysosomal Exocytosis by Surface Exposure of Lamp1 Luminal Epitopes.

    PubMed

    Andrews, Norma W

    2017-01-01

    Elevation in the cytosolic Ca2+ concentration triggers exocytosis of lysosomes in many cell types. This chapter describes a method to detect lysosomal exocytosis in mammalian cells, which takes advantage of the presence of an abundant glycoprotein, Lamp1, on the membrane of lysosomes. Lamp1 is a transmembrane protein with a large, heavily glycosylated region that faces the lumen of lysosomes. When lysosomes fuse with the plasma membrane, epitopes present on the luminal domain of Lamp1 are exposed on the cell surface. The Lamp1 luminal epitopes can then be detected on the surface of live, unfixed cells using highly specific monoclonal antibodies and fluorescence microscopy. The main advantage of this method is its sensitivity, and the fact that it provides spatial information on lysosomal exocytosis at the single cell level.

  2. 77 FR 40411 - Open Meeting of the Taxpayer Advocacy Panel Face-to-Face Service Methods Project Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-09

    ... Face-to-Face Service Methods Project Committee AGENCY: Internal Revenue Service (IRS), Treasury. ACTION: Notice of meeting. SUMMARY: An open meeting of the Taxpayer Advocacy Panel Face-to-Face Service Methods... Act, 5 U.S.C. App. (1988) that a meeting of the Taxpayer Advocacy Panel Face-to-Face Service Methods...

  3. 77 FR 37101 - Open Meeting of the Taxpayer Advocacy Panel Face-to-Face Service Methods Project Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-20

    ... Face-to-Face Service Methods Project Committee AGENCY: Internal Revenue Service (IRS), Treasury. ACTION: Notice of meeting. SUMMARY: An open meeting of the Taxpayer Advocacy Panel Face-to-Face Service Methods... Act, 5 U.S.C. App. (1988) that a meeting of the Taxpayer Advocacy Panel Face-to-Face Service Methods...

  4. 77 FR 21157 - Open Meeting of the Taxpayer Advocacy Panel Face-to-Face Service Methods Project Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-09

    ... Face-to-Face Service Methods Project Committee AGENCY: Internal Revenue Service (IRS) Treasury. ACTION: Notice of meeting. SUMMARY: An open meeting of the Taxpayer Advocacy Panel Face-to-Face Service Methods... Act, 5 U.S.C. App. (1988) that a meeting of the Taxpayer Advocacy Panel Face-to-Face Service Methods...

  5. Parametric Representation of the Speaker's Lips for Multimodal Sign Language and Speech Recognition

    NASA Astrophysics Data System (ADS)

    Ryumin, D.; Karpov, A. A.

    2017-05-01

    In this article, we propose a new method for parametric representation of the human lip region. The functional diagram of the method is described, and implementation details are given with an explanation of its key stages and features. The results of automatic detection of the regions of interest are illustrated. The processing speed of the method on several computers with different performance levels is reported. This universal method allows the parametric representation of the speaker's lips to be applied to tasks in biometrics, computer vision, machine learning, and automatic recognition of faces, sign language elements, and audio-visual speech, including lip-reading.

  6. A novel weld seam detection method for space weld seam of narrow butt joint in laser welding

    NASA Astrophysics Data System (ADS)

    Shao, Wen Jun; Huang, Yu; Zhang, Yong

    2018-02-01

    Structured light measurement is widely used for weld seam detection owing to its high measurement precision and robustness. However, for a narrow butt joint whose seam width is less than 0.1 mm and which has no misalignment, the stripe projected onto the weld face shows almost no geometrical deformation, so it is very difficult to ensure exact retrieval of the seam feature. This issue has become prominent as laser welding of butt joints in thin metal plates is widely applied. Moreover, simultaneous measurement of the seam width, the seam center, and the normal vector of the weld face during the welding process is of great importance to welding quality but is rarely reported. Consequently, a seam measurement method based on a vision sensor for the space weld seam of a narrow butt joint is proposed in this article. Three laser stripes with different wavelengths are projected onto the weldment: two red laser stripes are used to measure the three-dimensional profile of the weld face by the principle of optical triangulation, and the third, green laser stripe is used as the light source to measure the edge and the centerline of the seam by the principle of passive vision sensing. A corresponding image processing algorithm is proposed to extract the centerline of the red laser stripes as well as the seam feature. All three laser stripes are captured and processed in a single image so that the three-dimensional position of the space weld seam can be obtained simultaneously. Finally, experimental results reveal that the proposed method can meet the precision demand of space narrow butt joints.

  7. Collision induced unfolding of isolated proteins in the gas phase: past, present, and future.

    PubMed

    Dixit, Sugyan M; Polasky, Daniel A; Ruotolo, Brandon T

    2018-02-01

    Rapidly characterizing the three-dimensional structures of proteins and the multimeric machines they form remains one of the great challenges facing modern biological and medical sciences. Ion mobility-mass spectrometry based techniques are playing an expanding role in characterizing these functional complexes, especially in drug discovery and development workflows. Despite this expansion, ion mobility-mass spectrometry faces many challenges, especially in the context of detecting small differences in protein tertiary structure that bear functional consequences. Collision induced unfolding is an ion mobility-mass spectrometry method that enables the rapid differentiation of subtly-different protein isoforms based on their unfolding patterns and stabilities. In this review, we summarize the modern implementation of such gas-phase unfolding experiments and provide an overview of recent developments in both methods and applications. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Preliminary Investigation of Skull Fracture Patterns Using an Impactor Representative of Helmet Back-Face Deformation.

    PubMed

    Weisenbach, Charles A; Logsdon, Katie; Salzar, Robert S; Chancey, Valeta Carol; Brozoski, Fredrick

    2018-03-01

    Military combat helmets protect the wearer from a variety of battlefield threats, including projectiles. Helmet back-face deformation (BFD) is the result of the helmet defeating a projectile and deforming inward. Back-face deformation can result in localized blunt impacts to the head. A method was developed to investigate skull injury due to BFD behind-armor blunt trauma. A representative impactor was designed from the BFD profiles of modern combat helmets subjected to ballistic impacts. Three post-mortem human subject head specimens were each impacted using the representative impactor at three anatomical regions (frontal bone, right/left temporo-parietal regions) using a pneumatic projectile launcher. Thirty-six impacts were conducted at energy levels between 5 J and 25 J. Fractures were detected in two specimens. Two of the specimens experienced temporo-parietal fractures while the third specimen experienced no fractures. Biomechanical metrics, including impactor acceleration, were obtained for all tests. The work presented herein describes initial research utilizing a test method enabling the collection of dynamic exposure and biomechanical response data for the skull at the BFD-head interface.

  9. Searching for emotion or race: task-irrelevant facial cues have asymmetrical effects.

    PubMed

    Lipp, Ottmar V; Craig, Belinda M; Frost, Mareka J; Terry, Deborah J; Smith, Joanne R

    2014-01-01

    Facial cues of threat such as anger and other race membership are detected preferentially in visual search tasks. However, it remains unclear whether these facial cues interact in visual search. If both cues equally facilitate search, a symmetrical interaction would be predicted; anger cues should facilitate detection of other race faces and cues of other race membership should facilitate detection of anger. Past research investigating this race by emotional expression interaction in categorisation tasks revealed an asymmetrical interaction. This suggests that cues of other race membership may facilitate the detection of angry faces but not vice versa. Utilising the same stimuli and procedures across two search tasks, participants were asked to search for targets defined by either race or emotional expression. Contrary to the results revealed in the categorisation paradigm, cues of anger facilitated detection of other race faces whereas differences in race did not differentially influence detection of emotion targets.

  10. Automatic temporal segment detection via bilateral long short-term memory recurrent neural networks

    NASA Astrophysics Data System (ADS)

    Sun, Bo; Cao, Siming; He, Jun; Yu, Lejun; Li, Liandong

    2017-03-01

    Constrained by physiology, the temporal factors associated with human behavior, whether facial movement or body gesture, are described by four phases: neutral, onset, apex, and offset. Although these phases may benefit related recognition tasks, it is not easy to detect such temporal segments accurately. An automatic temporal segment detection framework is presented that uses bilateral long short-term memory recurrent neural networks (BLSTM-RNN) to learn high-level temporal-spatial features and to synthesize local and global temporal-spatial information more efficiently. The framework is evaluated in detail on the face and body database (FABO). The comparison shows that the proposed framework outperforms state-of-the-art methods for temporal segment detection.
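
    For illustration only, the sketch below shows a bidirectional LSTM per-frame phase labeller in PyTorch; the feature dimension (e.g. stacked landmark coordinates), hidden size, and the absence of a training loop are assumptions and do not reproduce the paper's BLSTM-RNN configuration.

    ```python
    import torch
    import torch.nn as nn

    # Sketch of a bidirectional LSTM sequence labeller for the four temporal
    # phases; sizes below are illustrative assumptions.
    class TemporalSegmenter(nn.Module):
        def __init__(self, feat_dim=136, hidden=128, n_phases=4):
            super().__init__()
            self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
            self.head = nn.Linear(2 * hidden, n_phases)   # neutral/onset/apex/offset

        def forward(self, x):                  # x: (batch, frames, feat_dim)
            out, _ = self.lstm(x)
            return self.head(out)              # per-frame phase logits

    logits = TemporalSegmenter()(torch.randn(2, 100, 136))   # -> shape (2, 100, 4)
    ```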

  11. Face, Body, and Center of Gravity Mediate Person Detection in Natural Scenes

    ERIC Educational Resources Information Center

    Bindemann, Markus; Scheepers, Christoph; Ferguson, Heather J.; Burton, A. Mike

    2010-01-01

    Person detection is an important prerequisite of social interaction, but is not well understood. Following suggestions that people in the visual field can capture a viewer's attention, this study examines the role of the face and the body for person detection in natural scenes. We observed that viewers tend first to look at the center of a scene,…

  12. Application of the SNoW machine learning paradigm to a set of transportation imaging problems

    NASA Astrophysics Data System (ADS)

    Paul, Peter; Burry, Aaron M.; Wang, Yuheng; Kozitsky, Vladimir

    2012-01-01

    Machine learning methods have been successfully applied to image object classification problems where there is clear distinction between classes and where a comprehensive set of training samples and ground truth are readily available. The transportation domain is an area where machine learning methods are particularly applicable, since the classification problems typically have well defined class boundaries and, due to high traffic volumes in most applications, massive roadway data is available. Though these classes tend to be well defined, the particular image noises and variations can be challenging. Another challenge is the extremely high accuracy typically required in most traffic applications. Incorrect assignment of fines or tolls due to imaging mistakes is not acceptable in most applications. For the front seat vehicle occupancy detection problem, classification amounts to determining whether one face (driver only) or two faces (driver + passenger) are detected in the front seat of a vehicle on a roadway. For automatic license plate recognition, the classification problem is a type of optical character recognition problem encompassing multiple class classification. The SNoW machine learning classifier using local SMQT features is shown to be successful in these two transportation imaging applications.

  13. Facelock: familiarity-based graphical authentication.

    PubMed

    Jenkins, Rob; McLachlan, Jane L; Renaud, Karen

    2014-01-01

    Authentication codes such as passwords and PIN numbers are widely used to control access to resources. One major drawback of these codes is that they are difficult to remember. Account holders are often faced with a choice between forgetting a code, which can be inconvenient, or writing it down, which compromises security. In two studies, we test a new knowledge-based authentication method that does not impose memory load on the user. Psychological research on face recognition has revealed an important distinction between familiar and unfamiliar face perception: When a face is familiar to the observer, it can be identified across a wide range of images. However, when the face is unfamiliar, generalisation across images is poor. This contrast can be used as the basis for a personalised 'facelock', in which authentication succeeds or fails based on image-invariant recognition of faces that are familiar to the account holder. In Study 1, account holders authenticated easily by detecting familiar targets among other faces (97.5% success rate), even after a one-year delay (86.1% success rate). Zero-acquaintance attackers were reduced to guessing (<1% success rate). Even personal attackers who knew the account holder well were rarely able to authenticate (6.6% success rate). In Study 2, we found that shoulder-surfing attacks by strangers could be defeated by presenting different photos of the same target faces in observed and attacked grids (1.9% success rate). Our findings suggest that the contrast between familiar and unfamiliar face recognition may be useful for developers of graphical authentication systems.

  14. Driver fatigue detection based on eye state.

    PubMed

    Lin, Lizong; Huang, Chao; Ni, Xiaopeng; Wang, Jiawen; Zhang, Hao; Li, Xiao; Qian, Zhiqin

    2015-01-01

    Nowadays, more and more traffic accidents occur because of driver fatigue. In order to reduce and prevent such accidents, a calculation method based on machine vision and the PERCLOS (percentage of eye closure time) parameter was developed in this study. It determines whether a driver's eyes are in a fatigued state according to the PERCLOS value. The overall workflow includes face detection and tracking, detection and location of the human eye, eye tracking, eye state recognition, and driver fatigue testing. The key aspects of the detection system are the detection and location of the eyes and the fatigue test itself. The simplified way of measuring the driver's PERCLOS value is to compute the ratio of frames in which the eyes are closed to the total number of frames in a given period. If the eyes are closed in more than a set fraction of the total number of frames, the system alerts the driver. Many experiments showed that, besides the simplicity of the detection algorithm, the rapid computing speed, and the high detection and recognition accuracy, the system meets the real-time requirements of a driver fatigue detection system.
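
    As a rough illustration of the PERCLOS idea, the sketch below assumes a per-frame eye-state classifier already exists upstream and simply computes the closed-eye ratio over a window; the 0.4 threshold and the frame counts are made-up example values, not the study's calibrated settings.

    ```python
    # Sketch of the PERCLOS computation over a window of per-frame eye states.
    def perclos(eye_closed_flags):
        """Fraction of frames in a window in which the eyes are closed."""
        if not eye_closed_flags:
            return 0.0
        return sum(eye_closed_flags) / len(eye_closed_flags)

    # Example: a 30 s window at 25 fps (750 frames), 300 of them with eyes closed.
    window = [False] * 450 + [True] * 300
    if perclos(window) > 0.4:
        print("fatigue warning")
    ```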

  15. Heart rate measurement based on face video sequence

    NASA Astrophysics Data System (ADS)

    Xu, Fang; Zhou, Qin-Wu; Wu, Peng; Chen, Xing; Yang, Xiaofeng; Yan, Hong-jian

    2015-03-01

    This paper proposes a new non-contact heart rate measurement method based on photoplethysmography (PPG) theory. With this method we can measure heart rate remotely with a camera and ambient light. We collected video sequences of subjects and detected remote PPG signals from the video sequences. The remote PPG signals were analyzed with two methods, Blind Source Separation Technology (BSST) and Cross Spectral Power Technology (CSPT). BSST is a commonly used method, while CSPT is applied to remote PPG signals for the first time in this paper. Both methods can recover the heart rate, but compared with BSST, CSPT has a clearer physical meaning and a lower computational complexity. Our work shows that heart rates detected by the CSPT method agree well with those measured by a finger-clip oximeter. With good accuracy and low computational complexity, the CSPT method has good prospects for application in home medical devices and mobile health devices.
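
    A greatly simplified illustration of remote PPG heart-rate estimation follows: it takes the mean green-channel value of the face region in each frame and picks the dominant spectral peak in the physiological band. This is neither the BSST nor the CSPT implementation used in the paper, only the underlying idea.

    ```python
    import numpy as np

    def estimate_heart_rate(green_means, fps):
        """Estimate heart rate (bpm) from the mean green-channel value of the face
        region in each frame, via the dominant spectral peak in 0.7-3.0 Hz."""
        x = np.asarray(green_means, dtype=float)
        x = x - x.mean()                                   # remove the DC component
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
        power = np.abs(np.fft.rfft(x)) ** 2
        band = (freqs >= 0.7) & (freqs <= 3.0)             # roughly 42-180 bpm
        return 60.0 * freqs[band][np.argmax(power[band])]
    ```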

  16. A Fuzzy Approach For Facial Emotion Recognition

    NASA Astrophysics Data System (ADS)

    Gîlcă, Gheorghe; Bîzdoacă, Nicu-George

    2015-09-01

    This article deals with an emotion recognition system based on fuzzy sets. Human faces are detected in images with the Viola-Jones algorithm, and the Camshift algorithm is used to track them in video sequences. The detected faces are passed to the fuzzy decision system, which is based on fuzzified measurements of facial variables: eyebrow, eyelid and mouth. The system can easily determine the emotional state of a person.
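
    The detection-and-tracking front end described above can be sketched with standard OpenCV calls, as below; the video file name is hypothetical, a face is assumed to be present in the first frame, and the fuzzy decision stage that follows is omitted.

    ```python
    import cv2

    # Sketch: Viola-Jones detection on the first frame, CamShift tracking afterwards.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture("input.mp4")
    _, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    x, y, w, h = cascade.detectMultiScale(gray, 1.1, 5)[0]   # first detected face

    hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    track_window = (x, y, w, h)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        back_proj = cv2.calcBackProject(
            [cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)], [0], hist, [0, 180], 1)
        _, track_window = cv2.CamShift(back_proj, track_window, criteria)
        # track_window now bounds the face; its region feeds the fuzzy classifier
    ```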

  17. Vision-based in-line fabric defect detection using yarn-specific shape features

    NASA Astrophysics Data System (ADS)

    Schneider, Dorian; Aach, Til

    2012-01-01

    We develop a methodology for automatic in-line flaw detection in industrial woven fabrics. Where state-of-the-art detection algorithms apply texture analysis methods to operate on low-resolved (~200 ppi) image data, we describe here a process flow to segment single yarns in high-resolved (~1000 ppi) textile images. Four yarn shape features are extracted, allowing a precise detection and measurement of defects. The degree of precision reached allows a classification of detected defects according to their nature, providing an innovation in the field of automatic fabric flaw detection. The design has been carried out to meet real time requirements and face adverse conditions caused by loom vibrations and dirt. The entire process flow is discussed followed by an evaluation using a database with real-life industrial fabric images. This work pertains to the construction of an on-loom defect detection system to be used in manufacturing practice.

  18. Detection of vehicle parts based on Faster R-CNN and relative position information

    NASA Astrophysics Data System (ADS)

    Zhang, Mingwen; Sang, Nong; Chen, Youbin; Gao, Changxin; Wang, Yongzhong

    2018-03-01

    Detection and recognition of vehicles are two essential tasks in intelligent transportation systems (ITS). Currently, a prevalent approach is to first detect the vehicle body, logo, or license plate and then recognize it, so the detection task is the most basic and also the most important step. Besides the logo and license plate, other parts, such as the vehicle face, lamps, windshield, and rearview mirrors, are also key parts that reflect the characteristics of a vehicle and can be used to improve the accuracy of the recognition task. In this paper, the detection of vehicle parts is studied, which is a novel task. We choose Faster R-CNN as the basic algorithm and take as input the local area of an image where the vehicle body is located, obtaining multiple bounding boxes with their own scores. If the box with the maximum score is chosen directly as the final result, it is often not the best one, especially for small objects. This paper presents a method that corrects the original score with relative position information between two parts; the box with the maximum comprehensive score is then chosen as the final result. Compared with the original output strategy, the proposed method performs better.
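
    The score-correction idea can be illustrated with a simple position prior, as in the hedged sketch below; the Gaussian prior and the variable names are assumptions for illustration, not the authors' exact formulation.

    ```python
    import numpy as np

    # Re-weight part-detection scores by how well each box's position relative
    # to a reference part matches a learned prior (assumed Gaussian here).
    def corrected_scores(scores, offsets, prior_mean, prior_cov):
        """scores: (N,) Faster R-CNN scores for candidate boxes of one part.
        offsets: (N, 2) box-centre offsets from a reference part, normalised
        by the vehicle-body size."""
        diff = offsets - prior_mean
        maha = np.einsum("ni,ij,nj->n", diff, np.linalg.inv(prior_cov), diff)
        return scores * np.exp(-0.5 * maha)    # comprehensive score; take argmax
    ```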

  19. A Method for Counting Moving People in Video Surveillance Videos

    NASA Astrophysics Data System (ADS)

    Conte, Donatello; Foggia, Pasquale; Percannella, Gennaro; Tufano, Francesco; Vento, Mario

    2010-12-01

    People counting is an important problem in video surveillance applications. This problem has been addressed either by trying to detect people in the scene and then counting them, or by establishing a mapping between some scene feature and the number of people (avoiding the complex detection problem). This paper presents a novel method, following the second approach, that is based on the use of SURF features and an ε-SVR regressor to provide an estimate of this count. The algorithm specifically takes into account problems due to partial occlusions and to perspective. In the experimental evaluation, the proposed method has been compared with the algorithm by Albiol et al., winner of the PETS 2009 contest on people counting, using the same PETS 2009 database. The results confirm that the proposed method yields improved accuracy, while retaining the robustness of Albiol's algorithm.
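
    A minimal sketch of the feature-to-count regression idea follows, using scikit-learn's epsilon-SVR; the single feature (a raw keypoint count) and the training numbers are placeholders, not the paper's occlusion- and perspective-corrected SURF features.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    # Map a per-frame scene feature to a people count with an epsilon-SVR.
    X_train = np.array([[120], [340], [560], [900]])   # keypoints detected per frame
    y_train = np.array([2, 5, 9, 15])                  # annotated people counts

    model = SVR(kernel="rbf", epsilon=0.5).fit(X_train, y_train)
    print(model.predict([[430]]))                      # estimated count for a new frame
    ```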

  20. Application of cabin atmosphere monitors to rapid screening of breath samples for the early detection of disease states

    NASA Technical Reports Server (NTRS)

    Valentine, J. L.; Bryant, P. J.

    1975-01-01

    Analysis of human breath is a nonintrusive method to monitor both endogenous and exogenous chemicals found in the body. Several technologies applicable to monitoring organic molecules important in both physiological and pathological states were investigated and developed. Two methods were developed for enriching the organic molecules exhaled in human breath. One device is based on a respiratory face mask fitted with a polyethylene foam wafer, while the other is a cryogenic trap utilizing an organic solvent. Using laboratory workers as controls, two organic molecules which occurred in the enriched breath of all subjects were tentatively identified as lactic acid and cortisol. Both of these substances occurred in breath in sufficient amounts that the conventional method of gas-liquid chromatography was adequate for detection and quantification. To detect and quantitate trace amounts of chemicals in breath, another type of technology was developed in which analysis was conducted using high pressure liquid chromatography and mass spectrometry.

  1. A face in a (temporal) crowd.

    PubMed

    Hacker, Catrina M; Meschke, Emily X; Biederman, Irving

    2018-03-20

    Familiar objects, specified by name, can be identified with high accuracy when embedded in a rapidly presented sequence of images at rates exceeding 10 images/s. Not only can target objects be detected at such brief presentation rates, they can also be detected under high uncertainty, where their classification is defined negatively, e.g., "Not a Tool." The identification of a familiar speaker's voice declines precipitously when uncertainty is increased from one to a mere handful of possible speakers. Is the limitation imposed by uncertainty, i.e., the number of possible individuals, a general characteristic of processes for person individuation such that the identifiability of a familiar face would undergo a similar decline with uncertainty? Specifically, could the presence of an unnamed celebrity, thus any celebrity, be detected when presented in a rapid sequence of unfamiliar faces? If so, could the celebrity be identified? Despite the markedly greater physical similarity of faces compared to objects that are, say, not tools, the presence of a celebrity could be detected with moderately high accuracy (∼75%) at rates exceeding 7 faces/s. False alarms were exceedingly rare as almost all the errors were misses. Detection accuracy by moderate congenital prosopagnosics was lower than controls, but still well above chance. Given the detection of the presence of a celebrity, all subjects were almost always able to identify that celebrity, providing no role for a covert familiarity signal outside of awareness. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Quality labeled faces in the wild (QLFW): a database for studying face recognition in real-world environments

    NASA Astrophysics Data System (ADS)

    Karam, Lina J.; Zhu, Tong

    2015-03-01

    The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.
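
    A rough sketch of generating quality-degraded variants of a face image with the distortion types listed above is shown below using OpenCV; the input file name and the distortion levels are arbitrary and do not reproduce the QLFW settings.

    ```python
    import cv2
    import numpy as np

    # Produce blurred, noisy, low-contrast and heavily compressed variants.
    img = cv2.imread("face.jpg")                                          # hypothetical input

    blurred = cv2.GaussianBlur(img, (9, 9), sigmaX=3)                     # Gaussian blur
    noisy = np.clip(img + np.random.normal(0, 15, img.shape), 0, 255).astype(np.uint8)
    low_contrast = cv2.convertScaleAbs(img, alpha=0.5, beta=40)           # contrast change

    cv2.imwrite("face_blur.jpg", blurred)
    cv2.imwrite("face_noise.jpg", noisy)
    cv2.imwrite("face_contrast.jpg", low_contrast)
    cv2.imwrite("face_jpeg.jpg", img, [cv2.IMWRITE_JPEG_QUALITY, 10])     # JPEG compression
    ```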

  3. Object recognition of ladar with support vector machine

    NASA Astrophysics Data System (ADS)

    Sun, Jian-Feng; Li, Qi; Wang, Qi

    2005-01-01

    Intensity, range, and Doppler images can be obtained using laser radar. Laser radar can capture much more object information than other sensors, such as passive infrared imaging and synthetic aperture radar (SAR), so it is well suited as a sensor for object recognition. The traditional approach to laser radar object recognition is to extract target features, which can be influenced by noise. In this paper, a laser radar recognition method based on the Support Vector Machine (SVM) is introduced. The SVM became a focus of recognition research after neural networks and performs well on handwritten digit and face recognition. Two series of SVM experiments, designed for preprocessed and non-preprocessed samples, are performed on real laser radar images, and the results are compared.
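
    As a hedged sketch of the classification stage only, the code below trains an SVM on flattened image patches with scikit-learn; the arrays are random placeholders standing in for laser radar intensity or range images.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 32 * 32))     # 200 image patches, 32x32 pixels each
    y = rng.integers(0, 2, size=200)        # two object classes

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))
    ```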

  4. NAIMA as a solution for future GMO diagnostics challenges.

    PubMed

    Dobnik, David; Morisset, Dany; Gruden, Kristina

    2010-03-01

    In the field of genetically modified organism (GMO) diagnostics, real-time PCR has been the method of choice for target detection and quantification in most laboratories. Despite its numerous advantages, however, the lack of a true multiplexing option may render real-time PCR less practical in the face of future GMO detection challenges such as the multiplicity and increasing complexity of new transgenic events, as well as the repeated occurrence of unauthorized GMOs on the market. In this context, we recently reported the development of a novel multiplex quantitative DNA-based target amplification method, named NASBA implemented microarray analysis (NAIMA), which is suitable for sensitive, specific and quantitative detection of GMOs on a microarray. In this article, the performance of NAIMA is compared with that of real-time PCR, the focus being their performances in view of the upcoming challenge to detect/quantify an increasing number of possible GMOs at a sustainable cost and affordable staff effort. Finally, we present our conclusions concerning the applicability of NAIMA for future use in GMO diagnostics.

  5. Automated night/day standoff detection, tracking, and identification of personnel for installation protection

    NASA Astrophysics Data System (ADS)

    Lemoff, Brian E.; Martin, Robert B.; Sluch, Mikhail; Kafka, Kristopher M.; McCormick, William; Ice, Robert

    2013-06-01

    The capability to positively and covertly identify people at a safe distance, 24-hours per day, could provide a valuable advantage in protecting installations, both domestically and in an asymmetric warfare environment. This capability would enable installation security officers to identify known bad actors from a safe distance, even if they are approaching under cover of darkness. We will describe an active-SWIR imaging system being developed to automatically detect, track, and identify people at long range using computer face recognition. The system illuminates the target with an eye-safe and invisible SWIR laser beam, to provide consistent high-resolution imagery night and day. SWIR facial imagery produced by the system is matched against a watch-list of mug shots using computer face recognition algorithms. The current system relies on an operator to point the camera and to review and interpret the face recognition results. Automation software is being developed that will allow the system to be cued to a location by an external system, automatically detect a person, track the person as they move, zoom in on the face, select good facial images, and process the face recognition results, producing alarms and sharing data with other systems when people are detected and identified. Progress on the automation of this system will be presented along with experimental night-time face recognition results at distance.

  6. Reconstructing Face Image from the Thermal Infrared Spectrum to the Visible Spectrum †

    PubMed Central

    Kresnaraman, Brahmastro; Deguchi, Daisuke; Takahashi, Tomokazu; Mekada, Yoshito; Ide, Ichiro; Murase, Hiroshi

    2016-01-01

    During the night or in poorly lit areas, thermal cameras are a better choice instead of normal cameras for security surveillance because they do not rely on illumination. A thermal camera is able to detect a person within its view, but identification from only thermal information is not an easy task. The purpose of this paper is to reconstruct the face image of a person from the thermal spectrum to the visible spectrum. After the reconstruction, further image processing can be employed, including identification/recognition. Concretely, we propose a two-step thermal-to-visible-spectrum reconstruction method based on Canonical Correlation Analysis (CCA). The reconstruction is done by utilizing the relationship between images in both thermal infrared and visible spectra obtained by CCA. The whole image is processed in the first step while the second step processes patches in an image. Results show that the proposed method gives satisfying results with the two-step approach and outperforms comparative methods in both quality and recognition evaluations. PMID:27110781
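
    The core of a CCA-based cross-spectrum mapping can be sketched with scikit-learn as below; the data are random placeholders for flattened thermal/visible patches, and the simple least-squares back-projection is an assumption rather than the paper's two-step whole-image/patch procedure.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA

    # Learn correlated projections of paired thermal/visible face vectors, then
    # map a new thermal face into visible space via the shared components.
    rng = np.random.default_rng(0)
    thermal = rng.normal(size=(100, 256))       # 100 paired training faces (flattened)
    visible = rng.normal(size=(100, 256))

    cca = CCA(n_components=20).fit(thermal, visible)
    t_scores = cca.transform(thermal)           # thermal canonical components
    W, *_ = np.linalg.lstsq(t_scores, visible, rcond=None)

    new_thermal = rng.normal(size=(1, 256))
    reconstructed_visible = cca.transform(new_thermal) @ W   # approximate visible face
    ```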

  7. Detection of foreign body using fast thermoacoustic tomography with a multielement linear transducer array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nie Liming; Xing Da; Yang Diwu

    2007-04-23

    Current imaging modalities face challenges in clinical applications due to limitations in resolution or contrast. Microwave-induced thermoacoustic imaging may provide a complementary modality for medical imaging, particularly for detecting foreign objects due to their different absorption of electromagnetic radiation at specific frequencies. A thermoacoustic tomography system with a multielement linear transducer array was developed and used to detect foreign objects in tissue. Radiography and thermoacoustic images of objects with different electromagnetic properties, including glass, sand, and iron, were compared. The authors' results demonstrate that thermoacoustic imaging has the potential to become a fast method for surgical localization of occult foreign objects.

  8. A video-based real-time adaptive vehicle-counting system for urban roads.

    PubMed

    Liu, Fei; Zeng, Zhiyuan; Jiang, Rong

    2017-01-01

    In developing nations, many expanding cities are facing challenges that result from the overwhelming numbers of people and vehicles. Collecting real-time, reliable and precise traffic flow information is crucial for urban traffic management. The main purpose of this paper is to develop an adaptive model that can assess the real-time vehicle counts on urban roads using computer vision technologies. This paper proposes an automatic real-time background update algorithm for vehicle detection and an adaptive pattern for vehicle counting based on the virtual loop and detection line methods. In addition, a new robust detection method is introduced to monitor the real-time traffic congestion state of road section. A prototype system has been developed and installed on an urban road for testing. The results show that the system is robust, with a real-time counting accuracy exceeding 99% in most field scenarios.

  9. A video-based real-time adaptive vehicle-counting system for urban roads

    PubMed Central

    2017-01-01

    In developing nations, many expanding cities are facing challenges that result from the overwhelming numbers of people and vehicles. Collecting real-time, reliable and precise traffic flow information is crucial for urban traffic management. The main purpose of this paper is to develop an adaptive model that can assess the real-time vehicle counts on urban roads using computer vision technologies. This paper proposes an automatic real-time background update algorithm for vehicle detection and an adaptive pattern for vehicle counting based on the virtual loop and detection line methods. In addition, a new robust detection method is introduced to monitor the real-time traffic congestion state of road section. A prototype system has been developed and installed on an urban road for testing. The results show that the system is robust, with a real-time counting accuracy exceeding 99% in most field scenarios. PMID:29135984
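
    One plausible building block for such a system, adaptive background subtraction plus a virtual detection line, is sketched below with OpenCV; the file name, thresholds, and the naive per-frame counting (no track identity, so slow vehicles may be over-counted) are illustrative assumptions, not the paper's adaptive algorithm.

    ```python
    import cv2

    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
    cap = cv2.VideoCapture("road.mp4")          # hypothetical input
    line_y, count = 400, 0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                      # updated background model
        mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            if w * h > 1500 and y <= line_y <= y + h:       # blob overlaps the line
                count += 1

    print("vehicles counted:", count)
    ```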

  10. 77 FR 61053 - Open Meeting of the Taxpayer Advocacy Panel Face-to-Face Service Methods Project Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-05

    ... Face-to-Face Service Methods Project Committee AGENCY: Internal Revenue Service (IRS), Treasury. ACTION: Notice of meeting. SUMMARY: An open meeting of the Taxpayer Advocacy Panel Face-to-Face Service Methods... Service Methods Project Committee will be held Tuesday, November 13, 2012, at 2:00 p.m. Eastern Time via...

  11. 77 FR 47166 - Open Meeting of the Taxpayer Advocacy Panel Face-to-Face Service Methods Project Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-07

    ... Face-to-Face Service Methods Project Committee AGENCY: Internal Revenue Service (IRS) Treasury. ACTION: Notice of meeting. SUMMARY: An open meeting of the Taxpayer Advocacy Panel Face-to-Face Service Methods... Service Methods Project Committee will be held Tuesday, September 11, 2012, at 2 p.m. Eastern Time via...

  12. 76 FR 78342 - Open Meeting of the Taxpayer Advocacy Panel Face-to-Face Service Methods Project Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-16

    ... Face-to-Face Service Methods Project Committee AGENCY: Internal Revenue Service (IRS), Treasury. ACTION: Notice of Meeting. SUMMARY: An open meeting of the Taxpayer Advocacy Panel Face-to-Face Service Methods... Service Methods Project Committee will be held Tuesday, January 10, 2012, at 2 p.m. Eastern Time via...

  13. 77 FR 2611 - Open Meeting of the Taxpayer Advocacy Panel Face-to-Face Service Methods Project Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-18

    ... Face-to-Face Service Methods Project Committee AGENCY: Internal Revenue Service (IRS) Treasury. ACTION: Notice of meeting. SUMMARY: An open meeting of the Taxpayer Advocacy Panel Face-to-Face Service Methods... Service Methods Project Committee will be held Tuesday, February 14, 2012, at 2 p.m. Eastern Time via...

  14. Perception-based road hazard identification with Internet support.

    PubMed

    Tarko, Andrew P; DeSalle, Brian R

    2003-01-01

    One of the most important tasks faced by highway agencies is identifying road hazards. Agencies use crash statistics to detect road intersections and segments where the frequency of crashes is excessive. With the crash-based method, a dangerous intersection or segment can be pointed out only after a sufficient number of crashes occur. A more proactive method is needed, and motorist complaints may be able to assist agencies in detecting road hazards before crashes occur. This paper investigates the quality of safety information reported by motorists and the effectiveness of hazard identification based on motorist reports, which were collected with an experimental Internet website. It demonstrates that the intersections pointed out by motorists tended to have more crashes than other intersections. The safety information collected through the website was comparable to 2-3 months of crash data. It was concluded that although the Internet-based method could not substitute for the traditional crash-based methods, its joint use with crash statistics might be useful in detecting new hazards where crash data had been collected for a short time.

  15. Automated diagnosis of fetal alcohol syndrome using 3D facial image analysis

    PubMed Central

    Fang, Shiaofen; McLaughlin, Jason; Fang, Jiandong; Huang, Jeffrey; Autti-Rämö, Ilona; Fagerlund, Åse; Jacobson, Sandra W.; Robinson, Luther K.; Hoyme, H. Eugene; Mattson, Sarah N.; Riley, Edward; Zhou, Feng; Ward, Richard; Moore, Elizabeth S.; Foroud, Tatiana

    2012-01-01

    Objectives Use three-dimensional (3D) facial laser scanned images from children with fetal alcohol syndrome (FAS) and controls to develop an automated diagnosis technique that can reliably and accurately identify individuals prenatally exposed to alcohol. Methods A detailed dysmorphology evaluation, history of prenatal alcohol exposure, and 3D facial laser scans were obtained from 149 individuals (86 FAS; 63 Control) recruited from two study sites (Cape Town, South Africa and Helsinki, Finland). Computer graphics, machine learning, and pattern recognition techniques were used to automatically identify a set of facial features that best discriminated individuals with FAS from controls in each sample. Results An automated feature detection and analysis technique was developed and applied to the two study populations. A unique set of facial regions and features were identified for each population that accurately discriminated FAS and control faces without any human intervention. Conclusion Our results demonstrate that computer algorithms can be used to automatically detect facial features that can discriminate FAS and control faces. PMID:18713153

  16. Method and system for sensing and identifying foreign particles in a gaseous environment

    NASA Technical Reports Server (NTRS)

    Choi, Sang H. (Inventor); Park, Yeonjoon (Inventor)

    2008-01-01

    An optical method and system sense and identify a foreign particle in a gaseous environment. A light source generates light. An electrically-conductive sheet has an array of holes formed through the sheet. Each hole has a diameter that is less than one quarter of the light's wavelength. The sheet is positioned relative to the light source such that the light is incident on one face of the sheet. An optical detector is positioned adjacent the sheet's opposing face and is spaced apart therefrom such that a gaseous environment is adapted to be disposed there between. Alterations in the light pattern detected by the optical detector indicate the presence of a foreign particle in the holes or on the sheet, while a laser induced fluorescence (LIF) signature associated with the foreign particle indicates the identity of the foreign particle.

  17. Blending Face-to-Face and Distance Learning Methods in Adult and Career-Technical Education. Practice Application Brief No. 23.

    ERIC Educational Resources Information Center

    Wonacott, Michael E.

    Both face-to-face and distance learning methods are currently being used in adult education and career and technical education. In theory, the advantages of face-to-face and distance learning methods complement each other. In practice, however, both face-to-face and information and communications technology (ICT)-based distance programs often rely…

  18. Microwave studies of weak localization and antilocalization in epitaxial graphene

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drabińska, Aneta; Kamińska, Maria; Wołoś, Agnieszka

    2013-12-04

    A microwave detection method was applied to study weak localization and antilocalization in epitaxial graphene sheets grown on both polarities of SiC substrates. Both coherence and scattering length values were obtained. The scattering lengths were found to be smaller for graphene grown on C-face of SiC. The decoherence rate was found to depend linearly on temperature, showing the electron-electron scattering mechanism.

  19. The Development of Face Perception in Infancy: Intersensory Interference and Unimodal Visual Facilitation

    ERIC Educational Resources Information Center

    Bahrick, Lorraine E.; Lickliter, Robert; Castellanos, Irina

    2013-01-01

    Although research has demonstrated impressive face perception skills of young infants, little attention has focused on conditions that enhance versus impair infant face perception. The present studies tested the prediction, generated from the intersensory redundancy hypothesis (IRH), that face discrimination, which relies on detection of visual…

  20. Intersensory Redundancy Hinders Face Discrimination in Preschool Children: Evidence for Visual Facilitation

    ERIC Educational Resources Information Center

    Bahrick, Lorraine E.; Krogh-Jespersen, Sheila; Argumosa, Melissa A.; Lopez, Hassel

    2014-01-01

    Although infants and children show impressive face-processing skills, little research has focused on the conditions that facilitate versus impair face perception. According to the intersensory redundancy hypothesis (IRH), face discrimination, which relies on detection of visual featural information, should be impaired in the context of…

  1. Interactive display system having a matrix optical detector

    DOEpatents

    Veligdan, James T.; DeSanto, Leonard

    2007-01-23

    A display system includes a waveguide optical panel having an inlet face and an opposite outlet face. An image beam is projected across the inlet face laterally and transversely for display on the outlet face. An optical detector including a matrix of detector elements is optically aligned with the inlet face for detecting a corresponding lateral and transverse position of an inbound light spot on the outlet face.

  2. On the Comparison of Wearable Sensor Data Fusion to a Single Sensor Machine Learning Technique in Fall Detection.

    PubMed

    Tsinganos, Panagiotis; Skodras, Athanassios

    2018-02-14

    In the context of the ageing global population, researchers and scientists have tried to find solutions to many challenges faced by older people. Falls, the leading cause of injury among elderly, are usually severe enough to require immediate medical attention; thus, their detection is of primary importance. To this effect, many fall detection systems that utilize wearable and ambient sensors have been proposed. In this study, we compare three newly proposed data fusion schemes that have been applied in human activity recognition and fall detection. Furthermore, these algorithms are compared to our recent work regarding fall detection in which only one type of sensor is used. The results show that fusion algorithms differ in their performance, whereas a machine learning strategy should be preferred. In conclusion, the methods presented and the comparison of their performance provide useful insights into the problem of fall detection.

  3. In situ temperature measurement of α-mercuric iodide by reflection spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nason, D.; Burger, A.

    1991-12-30

    Crystal face temperatures of single crystals of α-HgI2 growing in transparent ampules by physical vapor transport have been measured, in situ, by a novel, noncontact method which may be called reflectance spectroscopy thermometry. The method is based on the temperature dependence of the energy of the free-exciton peak as detected with a low-energy reflected beam. As presently configured, the accuracy is ±1.5 °C for a slowly varying surface temperature. The method has potential for noncontact temperature measurement in some systems for which pyrometry is unsatisfactory.

  4. SENSITIVITY AND SPECIFICITY OF DETECTING POLYPOIDAL CHOROIDAL VASCULOPATHY WITH EN FACE OPTICAL COHERENCE TOMOGRAPHY AND OPTICAL COHERENCE TOMOGRAPHY ANGIOGRAPHY.

    PubMed

    de Carlo, Talisa E; Kokame, Gregg T; Kaneko, Kyle N; Lian, Rebecca; Lai, James C; Wee, Raymond

    2018-03-20

    Determine sensitivity and specificity of polypoidal choroidal vasculopathy (PCV) diagnosis with structural en face optical coherence tomography (OCT) and OCT angiography (OCTA). Retrospective review of the medical records of eyes diagnosed with PCV by indocyanine green angiography with review of diagnostic testing with structural en face OCT and OCTA by a trained reader. Structural en face OCT, cross-sectional OCT angiograms alone, and OCTA in its entirety were reviewed blinded to the findings of indocyanine green angiography and each other to determine if they could demonstrate the PCV complex. Sensitivity and specificity of PCV diagnosis was determined for each imaging technique using indocyanine green angiography as the ground truth. Sensitivity and specificity of structural en face OCT were 30.0% and 85.7%, of OCT angiograms alone were 26.8% and 96.8%, and of the entire OCTA were 43.9% and 87.1%, respectively. Sensitivity and specificity were improved for OCT angiograms and OCTA when looking at images taken within 1 month of PCV diagnosis. Sensitivity of detecting PCV was low using structural en face OCT and OCTA but specificity was high. Indocyanine green angiography remains the gold standard for PCV detection.
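
    For reference, the sensitivity and specificity values quoted above follow the usual definitions over the confusion counts against the indocyanine green angiography ground truth; a minimal sketch:

    ```python
    # Standard definitions of the two measures reported above.
    def sensitivity(true_pos, false_neg):
        return true_pos / (true_pos + false_neg)      # fraction of PCV eyes detected

    def specificity(true_neg, false_pos):
        return true_neg / (true_neg + false_pos)      # fraction of non-PCV eyes correctly excluded
    ```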

  5. What Faces Reveal: A Novel Method to Identify Patients at Risk of Deterioration Using Facial Expressions.

    PubMed

    Madrigal-Garcia, Maria Isabel; Rodrigues, Marcos; Shenfield, Alex; Singer, Mervyn; Moreno-Cuesta, Jeronimo

    2018-07-01

    To identify facial expressions occurring in patients at risk of deterioration in hospital wards. Prospective observational feasibility study. General ward patients in a London Community Hospital, United Kingdom. Thirty-four patients at risk of clinical deterioration. A 5-minute video (25 frames/s; 7,500 images) was recorded, encrypted, and subsequently analyzed for action units by a trained facial action coding system psychologist blinded to outcome. Action units of the upper face, head position, eyes position, lips and jaw position, and lower face were analyzed in conjunction with clinical measures collected within the National Early Warning Score. The most frequently detected action units were action unit 43 (73%) for upper face, action unit 51 (11.7%) for head position, action unit 62 (5.8%) for eyes position, action unit 25 (44.1%) for lips and jaw, and action unit 15 (67.6%) for lower face. The presence of certain combined face displays was increased in patients requiring admission to intensive care, namely, action units 43 + 15 + 25 (face display 1, p < 0.013), action units 43 + 15 + 51/52 (face display 2, p < 0.003), and action units 43 + 15 + 51 + 25 (face display 3, p < 0.002). Having face display 1, face display 2, and face display 3 increased the risk of being admitted to intensive care eight-fold, 18-fold, and as a sure event, respectively. A logistic regression model with face display 1, face display 2, face display 3, and National Early Warning Score as independent covariates described admission to intensive care with an average concordance statistic (C-index) of 0.71 (p = 0.009). Patterned facial expressions can be identified in deteriorating general ward patients. This tool may potentially augment risk prediction of current scoring systems.

  6. Long-Term Exposure to American and European Movies and Television Series Facilitates Caucasian Face Perception in Young Chinese Watchers.

    PubMed

    Wang, Yamin; Zhou, Lu

    2016-10-01

    Most young Chinese people now learn about Caucasian individuals via media, especially American and European movies and television series (AEMT). The current study aimed to explore whether long-term exposure to AEMT facilitates Caucasian face perception in young Chinese watchers. Before the experiment, we created Chinese, Caucasian, and generic average faces (generic average face was created from both Chinese and Caucasian faces) and tested participants' ability to identify them. In the experiment, we asked AEMT watchers and Chinese movie and television series (CMT) watchers to complete a facial norm detection task. This task was developed recently to detect norms used in facial perception. The results indicated that AEMT watchers coded Caucasian faces relative to a Caucasian face norm better than they did to a generic face norm, whereas no such difference was found among CMT watchers. All watchers coded Chinese faces by referencing a Chinese norm better than they did relative to a generic norm. The results suggested that long-term exposure to AEMT has the same effect as daily other-race face contact in shaping facial perception. © The Author(s) 2016.

  7. Facial expression system on video using widrow hoff

    NASA Astrophysics Data System (ADS)

    Jannah, M.; Zarlis, M.; Mawengkang, H.

    2018-03-01

    Facial expression recognition is an interesting research area. It connects human feelings to computer applications such as human-computer interaction, data compression, facial animation, and face detection from video. The purpose of this research is to create a facial expression system that captures images from a video camera. The system uses the Widrow-Hoff learning method for training and testing images with an Adaptive Linear Neuron (ADALINE) approach. The system performance is evaluated by two parameters, detection rate and false positive rate. The system accuracy depends on good technique and on the face positions used for training and testing.
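
    A minimal sketch of the Widrow-Hoff (least-mean-squares) rule underlying ADALINE is given below; the feature extraction from face images and the evaluation protocol are not shown, and the learning rate and epoch count are arbitrary assumptions.

    ```python
    import numpy as np

    def train_adaline(X, y, lr=0.01, epochs=50):
        """X: (n_samples, n_features) image feature vectors; y: targets in {-1, +1}."""
        Xb = np.hstack([X, np.ones((len(X), 1))])      # append a bias input
        w = np.zeros(Xb.shape[1])
        for _ in range(epochs):
            for xi, target in zip(Xb, y):
                output = xi @ w                        # linear activation
                w += lr * (target - output) * xi       # Widrow-Hoff update
        return w

    def predict(X, w):
        Xb = np.hstack([X, np.ones((len(X), 1))])
        return np.where(Xb @ w >= 0.0, 1, -1)          # threshold only at test time
    ```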

  8. On-site Rapid Detection of Trace Non-volatile Inorganic Explosives by Stand-alone Ion Mobility Spectrometry via Acid-enhanced Evaporization

    PubMed Central

    Peng, Liying; Hua, Lei; Wang, Weiguo; Zhou, Qinghua; Li, Haiyang

    2014-01-01

    New techniques for the field detection of inorganic improvised explosive devices (IEDs) are urgently needed. Although ion mobility spectrometry (IMS) has proved to be the most effective method for screening organic explosives, it still faces a major challenge in detecting inorganic explosives owing to their low volatility. Herein, we propose a strategy for detecting trace inorganic explosives by thermal desorption ion mobility spectrometry (TD-IMS), with a sample-to-sample analysis time of less than 5 s, based on in-situ acidification of the sampling swabs. The responses for typical oxidizers in inorganic explosives, such as KNO3, KClO3 and KClO4, were enhanced by a factor of at least 3000, and their limits of detection were found to be subnanogram. Common organic explosives and their mixtures with inorganic oxidizers were also detected, indicating that the acidification process did not affect the detection of organic explosives. Moreover, typical inorganic explosive materials such as black powder, firecrackers and match heads could be sensitively detected as well. These results demonstrate that this method could easily be employed in currently deployed IMS instruments for on-site sensitive detection of either inorganic or organic explosives. PMID:25318960

  9. The impact of web-based and face-to-face simulation on patient deterioration and patient safety: protocol for a multi-site multi-method design.

    PubMed

    Cooper, Simon J; Kinsman, Leigh; Chung, Catherine; Cant, Robyn; Boyle, Jayne; Bull, Loretta; Cameron, Amanda; Connell, Cliff; Kim, Jeong-Ah; McInnes, Denise; McKay, Angela; Nankervis, Katrina; Penz, Erika; Rotter, Thomas

    2016-09-07

    There are international concerns in relation to the management of patient deterioration which has led to a body of evidence known as the 'failure to rescue' literature. Nursing staff are known to miss cues of deterioration and often fail to call for assistance. Medical Emergency Teams (Rapid Response Teams) do improve the management of acutely deteriorating patients, but first responders need the requisite skills to impact on patient safety. In this study we aim to address these issues in a mixed methods interventional trial with the objective of measuring and comparing the cost and clinical impact of face-to-face and web-based simulation programs on the management of patient deterioration and related patient outcomes. The education programs, known as 'FIRST(2)ACT', have been found to have an impact on education and will be tested in four hospitals in the State of Victoria, Australia. Nursing staff will be trained in primary (the first 8 min) responses to emergencies in two medical wards using a face-to-face approach and in two medical wards using a web-based version FIRST(2)ACTWeb. The impact of these interventions will be determined through quantitative and qualitative approaches, cost analyses and patient notes review (time series analyses) to measure quality of care and patient outcomes. In this 18 month study it is hypothesised that both simulation programs will improve the detection and management of deteriorating patients but that the web-based program will have lower total costs. The study will also add to our overall understanding of the utility of simulation approaches in the preparation of nurses working in hospital wards. (ACTRN12616000468426, retrospectively registered 8.4.2016).

  10. Development of three-dimensional patient face model that enables real-time collision detection and cutting operation for a dental simulator.

    PubMed

    Yamaguchi, Satoshi; Yamada, Yuya; Yoshida, Yoshinori; Noborio, Hiroshi; Imazato, Satoshi

    2012-01-01

    The virtual reality (VR) simulator is a useful tool for developing dental hand skills. However, VR simulations that include patient reactions leave only limited computational time for reproducing a face model. Our aim was to develop a patient face model that enables real-time collision detection and cutting operations by using stereolithography (STL) and deterministic finite automaton (DFA) data files. We evaluated how the computational cost depends on the way STL and DFA data files are combined, constructed the patient face model under the optimal condition, and assessed the computational costs of four operations: do-nothing, collision, cutting, and combined collision and cutting. The face model was successfully constructed, with low computational costs of 11.3, 18.3, 30.3, and 33.5 ms for do-nothing, collision, cutting, and combined collision and cutting, respectively. The patient face model could be useful for developing dental hand skills with VR.

  11. Increasing the power for detecting impairment in older adults with the Faces subtest from Wechsler Memory Scale-III: an empirical trial.

    PubMed

    Levy, Boaz

    2006-10-01

    Empirical studies have questioned the validity of the Faces subtest from the WMS-III for detecting impairment in visual memory, particularly among the elderly. A recent examination of the test norms revealed a significant age related floor effect already emerging on Faces I (immediate recall), implying excessive difficulty in the acquisition phase among unimpaired older adults. The current study compared the concurrent validity of the Faces subtest with an alternative measure between 16 Alzheimer's patients and 16 controls. The alternative measure was designed to facilitate acquisition by reducing the sequence of item presentation. Other changes aimed at increasing the retrieval challenge, decreasing error due to guessing and standardizing the administration. Analyses converged to indicate that the alternative measure provided a considerably greater differentiation than the Faces subtest between Alzheimer's patients and controls. Steps for revising the Faces subtest are discussed.

  12. Tracking the truth: the effect of face familiarity on eye fixations during deception.

    PubMed

    Millen, Ailsa E; Hope, Lorraine; Hillstrom, Anne P; Vrij, Aldert

    2017-05-01

    In forensic investigations, suspects sometimes conceal recognition of a familiar person to protect co-conspirators or hide knowledge of a victim. The current experiment sought to determine whether eye fixations could be used to identify memory of known persons when lying about recognition of faces. Participants' eye movements were monitored whilst they lied and told the truth about recognition of faces that varied in familiarity (newly learned, famous celebrities, personally known). Memory detection by eye movements during recognition of personally familiar and famous celebrity faces was negligibly affected by lying, thereby demonstrating that detection of memory during lies is influenced by the prior learning of the face. By contrast, eye movements did not reveal lies robustly for newly learned faces. These findings support the use of eye movements as markers of memory during concealed recognition but also suggest caution when familiarity is only a consequence of one brief exposure.

  13. Operational analysis for the drug detection problem

    NASA Astrophysics Data System (ADS)

    Hoopengardner, Roger L.; Smith, Michael C.

    1994-10-01

    New techniques and sensors to identify the molecular, chemical, or elemental structures unique to drugs are being developed under several national programs. However, the challenge faced by U.S. drug enforcement and Customs officials goes far beyond the simple technical capability to detect an illegal drug. Entry points into the U.S. include ports, border crossings, and airports where cargo ships, vehicles, and aircraft move huge volumes of freight. Current technology and personnel are able to physically inspect only a small fraction of the entering cargo containers. The complexity of how best to utilize new technology to aid the detection process, and yet not adversely affect the processing of vehicles and time-sensitive cargo, is the challenge faced by these officials. This paper describes an ARPA-sponsored initiative to develop a simple, yet useful, method for examining the operational consequences of utilizing various procedures and technologies in combination to achieve an 'acceptable' level of detection probability. Since Customs entry points into the U.S. vary from huge seaports to a one-lane highway checkpoint on the Canadian or Mexican border, no one system can possibly be right for all points. This approach can examine alternative concepts for using different techniques/systems for different types of entry points. Operational measures reported include the average time to process vehicles and containers, the average and maximum numbers in the system at any time, and the utilization of inspection teams. The method is implemented via a PC-based simulation written in the GPSS-PC language. Inputs to the simulation model are (1) the individual detection probabilities and false positive rates for each detection technology or procedure, (2) the inspection time for each procedure, (3) the system configuration, and (4) the physical distance between inspection stations. The model offers on-line graphics to examine effects as the model runs.
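
    A toy Monte Carlo version of this kind of inspection model is sketched below in Python (rather than GPSS); the detection probability, false-alarm rate, service time, and contraband prevalence are illustrative placeholders, not the study's inputs.

    ```python
    import random

    P_DETECT, P_FALSE_ALARM, MEAN_SERVICE_MIN = 0.85, 0.02, 4.0

    def simulate(n_vehicles=10_000, p_contraband=0.01):
        detections = false_alarms = 0
        total_service = 0.0
        for _ in range(n_vehicles):
            carrying = random.random() < p_contraband
            total_service += random.expovariate(1.0 / MEAN_SERVICE_MIN)
            if carrying and random.random() < P_DETECT:
                detections += 1
            elif not carrying and random.random() < P_FALSE_ALARM:
                false_alarms += 1
        return detections, false_alarms, total_service / n_vehicles

    print(simulate())   # (detections, false alarms, mean inspection time in minutes)
    ```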

  14. Embedded wavelet-based face recognition under variable position

    NASA Astrophysics Data System (ADS)

    Cotret, Pascal; Chevobbe, Stéphane; Darouich, Mehdi

    2015-02-01

    For several years, face recognition has been a hot topic in the image processing field: this technique is applied in several domains such as CCTV, electronic device unlocking and so on. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of subject position robustness and performance on various systems. The use of the wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale face database B*), that subject position in a 3D space can vary up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on approximation coefficients of the image wavelet transform: results are still satisfying after 3 levels of decomposition. Furthermore, face database size can be divided by a factor of 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed; that is the reason why compression techniques such as the wavelet transform are interesting. Furthermore, it leads to a low-complexity face detection stage compliant with the limited computation resources available on such systems. The approach described in this work is tested on three platforms, from a standard x86-based computer to nanocomputers such as RaspberryPi and SECO boards. For K = 3 and a database with 40 faces, the execution mean time for one frame is 0.64 ms on an x86-based computer, 9 ms on a SECO board and 26 ms on a RaspberryPi (B model).
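
    A hedged sketch of the general idea only (level-K wavelet approximation coefficients feeding a PCA nearest-neighbour matcher), not the paper's implementation. It assumes equally sized grayscale face crops and requires numpy and PyWavelets; the component count and image size are arbitrary choices.

        import numpy as np
        import pywt

        K = 3  # decomposition levels: keeping only the approximation shrinks storage by ~2^(2K)

        def wavelet_features(img, levels=K):
            """Return the level-K approximation sub-band as a flat feature vector."""
            coeffs = pywt.wavedec2(img.astype(float), 'haar', level=levels)
            return coeffs[0].ravel()

        def build_eigenspace(gallery, n_components=20):
            """gallery: (n_faces, H, W) array of enrolled faces."""
            X = np.stack([wavelet_features(g) for g in gallery])
            mean = X.mean(axis=0)
            # PCA via SVD of the mean-centred feature matrix
            _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
            basis = Vt[:n_components]
            return mean, basis, (X - mean) @ basis.T   # projected gallery

        def identify(probe, mean, basis, gallery_proj):
            p = (wavelet_features(probe) - mean) @ basis.T
            dists = np.linalg.norm(gallery_proj - p, axis=1)
            return int(np.argmin(dists))               # index of best-matching gallery face

        # Usage with random stand-in images (a real run would load aligned face crops):
        gallery = np.random.rand(40, 128, 128)
        mean, basis, proj = build_eigenspace(gallery)
        print("best match:", identify(gallery[7] + 0.01 * np.random.rand(128, 128), mean, basis, proj))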

  15. Facelock: familiarity-based graphical authentication

    PubMed Central

    McLachlan, Jane L.; Renaud, Karen

    2014-01-01

    Authentication codes such as passwords and PIN numbers are widely used to control access to resources. One major drawback of these codes is that they are difficult to remember. Account holders are often faced with a choice between forgetting a code, which can be inconvenient, or writing it down, which compromises security. In two studies, we test a new knowledge-based authentication method that does not impose memory load on the user. Psychological research on face recognition has revealed an important distinction between familiar and unfamiliar face perception: When a face is familiar to the observer, it can be identified across a wide range of images. However, when the face is unfamiliar, generalisation across images is poor. This contrast can be used as the basis for a personalised ‘facelock’, in which authentication succeeds or fails based on image-invariant recognition of faces that are familiar to the account holder. In Study 1, account holders authenticated easily by detecting familiar targets among other faces (97.5% success rate), even after a one-year delay (86.1% success rate). Zero-acquaintance attackers were reduced to guessing (<1% success rate). Even personal attackers who knew the account holder well were rarely able to authenticate (6.6% success rate). In Study 2, we found that shoulder-surfing attacks by strangers could be defeated by presenting different photos of the same target faces in observed and attacked grids (1.9% success rate). Our findings suggest that the contrast between familiar and unfamiliar face recognition may be useful for developers of graphical authentication systems. PMID:25024913

  16. Enhanced attention amplifies face adaptation.

    PubMed

    Rhodes, Gillian; Jeffery, Linda; Evangelista, Emma; Ewing, Louise; Peters, Marianne; Taylor, Libby

    2011-08-15

    Perceptual adaptation not only produces striking perceptual aftereffects, but also enhances coding efficiency and discrimination by calibrating coding mechanisms to prevailing inputs. Attention to simple stimuli increases adaptation, potentially enhancing its functional benefits. Here we show that attention also increases adaptation to faces. In Experiment 1, face identity aftereffects increased when attention to adapting faces was increased using a change detection task. In Experiment 2, figural (distortion) face aftereffects increased when attention was increased using a snap game (detecting immediate repeats) during adaptation. Both were large effects. Contributions of low-level adaptation were reduced using free viewing (both experiments) and a size change between adapt and test faces (Experiment 2). We suggest that attention may enhance adaptation throughout the entire cortical visual pathway, with functional benefits well beyond the immediate advantages of selective processing of potentially important stimuli. These results highlight the potential to facilitate adaptive updating of face-coding mechanisms by strategic deployment of attentional resources. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Amygdala activation in response to facial expressions in pediatric obsessive-compulsive disorder

    PubMed Central

    Britton, Jennifer C.; Stewart, S. Evelyn; Killgore, William D.S.; Rosso, Isabelle M.; Price, Lauren M.; Gold, Andrea L.; Pine, Daniel S.; Wilhelm, Sabine; Jenike, Michael A.; Rauch, Scott L.

    2010-01-01

    Background Exaggerated amygdala activation to threatening faces has been detected in adults and children with anxiety disorders, compared to healthy comparison subjects. However, the profile of amygdala activation in response to facial expressions in obsessive-compulsive disorder (OCD) may be a distinguishing feature; a prior study found that compared with healthy adults, adults with OCD exhibited less amygdala activation to emotional and neutral faces, relative to fixation (Cannistraro et al., 2004). Methods In the current event-related functional magnetic resonance imaging (fMRI) study, a pediatric OCD sample (N=12) and a healthy comparison sample (HC, N=17) performed a gender discrimination task while viewing emotional faces (happy, fear, disgust) and neutral faces. Results Compared to the HC group, the OCD group showed less amygdala/hippocampus activation in all emotion and neutral conditions relative to fixation. Conclusions Like previous reports in adult OCD, pediatric OCD may have a distinct neural profile from other anxiety disorders, with respect to amygdala activation in response to emotional stimuli that are not disorder-specific. PMID:20602430

  18. Assessment of electrical resistivity imaging for pre-tunneling geological characterization - A case study of the Qingdao R3 metro line tunnel

    NASA Astrophysics Data System (ADS)

    Li, Shucai; Xu, Shan; Nie, Lichao; Liu, Bin; Liu, Rentai; Zhang, Qingsong; Zhao, Yan; Liu, Quanwei; Wang, Houtong; Liu, Haidong; Guo, Qin

    2018-06-01

    Water inrush during tunneling is a significant problem in underground infrastructure construction. Electrical resistivity imaging (ERI) is a technique that can detect and characterize a water body in an open fracture or fault by exploiting the resistivity contrast between the water body and the surrounding materials. ERI is an efficient method for pre-tunneling geological characterization. Here, a case study is presented in which tunnel-face and borehole ERI (TBERI) is performed by using the probe hole to detect a water body during tunnel construction. The construction site is a metro line site in the city of Qingdao, China. Unlike the traditional cross-hole observation mode, TBERI uses only a single borehole. The installation of injection electrodes inside the probe hole and of measuring electrodes on the tunnel face is proposed as the observation mode. Furthermore, a numerical simulation carried out before the field experiment shows that TBERI is capable of detecting a deeply buried water body. The water body in the field case is also identified by TBERI, appearing as a strongly conductive anomaly relative to the background materials. This study highlights the respective strengths and weaknesses of TBERI for pre-tunneling geological characterization. The method is a relatively rapid means of investigating the studied area, and the study clearly demonstrates the suitability of TBERI in a tunneling scenario.

  19. Ultrasensitive Single Fluorescence-Labeled Probe-Mediated Single Universal Primer-Multiplex-Droplet Digital Polymerase Chain Reaction for High-Throughput Genetically Modified Organism Screening.

    PubMed

    Niu, Chenqi; Xu, Yuancong; Zhang, Chao; Zhu, Pengyu; Huang, Kunlun; Luo, Yunbo; Xu, Wentao

    2018-05-01

    As genetically modified (GM) technology develops and genetically modified organisms (GMOs) become more available, GMOs face increasing regulations and pressure to adhere to strict labeling guidelines. A singleplex detection method cannot perform the high-throughput analysis necessary for optimal GMO detection. Combining the advantages of multiplex detection and droplet digital polymerase chain reaction (ddPCR), a single universal primer-multiplex-ddPCR (SUP-M-ddPCR) strategy was proposed for accurate broad-spectrum screening and quantification. The SUP increases the efficiency of the primers in PCR and plays an important role in establishing a high-throughput, multiplex detection method. Emerging ddPCR technology has been used for accurate quantification of nucleic acid molecules without a standard curve. Using maize as a reference point, four heterologous sequences (35S, NOS, NPTII, and PAT) were selected to evaluate the feasibility and applicability of this strategy. Surprisingly, these four genes cover more than 93% of the transgenic maize lines and serve as preliminary screening sequences. All screening probes were labeled with FAM fluorescence, which allows the signals from the samples with GMO content and those without to be easily differentiated. This fiveplex screening method is a new development in GMO screening. Utilizing an optimal amplification assay, the specificity, limit of detection (LOD), and limit of quantitation (LOQ) were validated. The LOD and LOQ of this GMO screening method were 0.1% and 0.01%, respectively, with a relative standard deviation (RSD) < 25%. This method could serve as an important tool for the detection of GM maize from different processed, commercially available products. Further, this screening method could be applied to other fields that require reliable and sensitive detection of DNA targets.

  20. Face shape and face identity processing in behavioral variant fronto-temporal dementia: A specific deficit for familiarity and name recognition of famous faces.

    PubMed

    De Winter, François-Laurent; Timmers, Dorien; de Gelder, Beatrice; Van Orshoven, Marc; Vieren, Marleen; Bouckaert, Miriam; Cypers, Gert; Caekebeke, Jo; Van de Vliet, Laura; Goffin, Karolien; Van Laere, Koen; Sunaert, Stefan; Vandenberghe, Rik; Vandenbulcke, Mathieu; Van den Stock, Jan

    2016-01-01

    Deficits in face processing have been described in the behavioral variant of fronto-temporal dementia (bvFTD), primarily regarding the recognition of facial expressions. Less is known about face shape and face identity processing. Here we used a hierarchical strategy targeting face shape and face identity recognition in bvFTD and matched healthy controls. Participants performed 3 psychophysical experiments targeting face shape detection (Experiment 1), unfamiliar face identity matching (Experiment 2), familiarity categorization and famous face-name matching (Experiment 3). The results revealed group differences only in Experiment 3, with a deficit in the bvFTD group for both familiarity categorization and famous face-name matching. Voxel-based morphometry regression analyses in the bvFTD group revealed an association between grey matter volume of the left ventral anterior temporal lobe and familiarity recognition, while face-name matching correlated with grey matter volume of the bilateral ventral anterior temporal lobes. Subsequently, we quantified familiarity-specific and name-specific recognition deficits as the number of celebrities for which, respectively, only the name or only the familiarity was accurately recognized. Both indices were associated with grey matter volume of the bilateral anterior temporal cortices. These findings extend previous results by documenting the involvement of the left anterior temporal lobe (ATL) in familiarity detection and the right ATL in name recognition deficits in fronto-temporal lobar degeneration.

  1. Tracking and Counting Motion for Monitoring Food Intake Based-On Depth Sensor and UDOO Board: A Comprehensive Review

    NASA Astrophysics Data System (ADS)

    Kassim, Muhammad Fuad bin; Norzali Haji Mohd, Mohd

    2017-08-01

    Technology is all about helping people, and it creates new opportunities for them to take serious action in managing their health care. Obesity continues to be a serious public health concern in Malaysia and is continuing to rise; nearly half of Malaysians are overweight. Most dietary approaches do not track and detect the actual calorie intake needed for weight loss, and currently used tools such as food diaries require users to manually record and track food calories, making them difficult for daily use. We are developing a new tool that counts food-intake bites by monitoring the hand gestures and jaw motion involved in caloric intake. Bite counting has been shown to support successful weight loss by simply monitoring the bites taken during eating. The device used is the Kinect for Xbox One, whose depth camera detects the motion of a person's hand and face during food intake. Previous studies show that most bite-counting devices are of the worn type; the recent trend is towards non-wearable devices, because worn devices are inconvenient and have a high false-alarm ratio. The proposed system obtains data from the Kinect, which monitors the hand and face gestures of the user while eating. The gesture data are then sent to the microcontroller board, which recognizes and counts the bites taken by the user. The system recognizes bite patterns by following an algorithm for basic eating types, whether the user eats by hand or with chopsticks. This system can help people trying to reduce overweight or manage eating disorders by monitoring their meal intake and controlling their eating rate.

  2. 77 FR 55525 - Open Meeting of the Taxpayer Advocacy Panel Face-to-Face Service Methods Project Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-10

    ... Face-to-Face Service Methods Project Committee AGENCY: Internal Revenue Service (IRS), Treasury. ACTION: Notice of meeting. SUMMARY: An open meeting of the Taxpayer Advocacy Panel Face-to-Face Service Methods Project Committee will be conducted. The Taxpayer Advocacy Panel is soliciting public comments, ideas, and...

  3. SU-C-201-01: Investigation of the Effects of Scintillator Surface Treatment On Light Output Measurements with SiPM Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valenciaga, Y; Prout, D; Chatziioannou, A

    2015-06-15

    Purpose: To examine the effect of different scintillator surface treatments (BGO crystals) on the fraction of scintillation photons that exit the crystal and reach the photodetector (SiPM). Methods: Positron Emission Tomography is based on the detection of light that exits scintillator crystals after annihilation photons deposit energy inside these crystals. A considerable fraction of the scintillation light gets trapped or absorbed after going through multiple internal reflections on the interfaces surrounding the crystals. BGO scintillator crystals generate considerably less scintillation light than crystals made of LSO and its variants. Therefore, it is crucial that the small amount of light produced by BGO exits towards the light detector. The surface treatment of scintillator crystals is among the factors affecting the ability of scintillation light to reach the detectors. In this study, we analyze the effect of different crystal surface treatments on the fraction of scintillation light that is detected by the solid state photodetector (SiPM) once energy is deposited inside a BGO crystal. Simulations were performed with the Monte Carlo based software GATE and validated by measurements from individual BGO crystals coupled to a Philips digital-SiPM sensor (DPC-3200). Results: The results showed an increment in light collection of about 4 percent when only the exit face of the BGO crystal is unpolished, compared to when all the faces are polished. However, leaving several faces unpolished caused a reduction of at least 10 percent in light output when the interaction occurs as far from the exit face of the crystal as possible, compared to when it occurs very close to the exit face. Conclusion: This work demonstrates the advantages for light collection of leaving the exit face of BGO crystals unpolished. The configuration with the best light output will be used to obtain flood images from BGO crystal arrays coupled to SiPM sensors.

  4. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions

    PubMed Central

    Maruthapillai, Vasanthan; Murugappan, Murugappan

    2016-01-01

    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject’s face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject’s face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network. PMID:26859884
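
    An illustrative sketch (not the authors' code) of the feature-and-classifier stage described above: per-frame marker distances are reduced to the three summary statistics named in the abstract (mean, variance, root mean square) and classified with k-nearest neighbours. The optical-flow marker tracking is omitted and the arrays below are synthetic stand-ins; requires numpy and scikit-learn.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        def sequence_features(marker_dists):
            """marker_dists: (n_frames, 8) distances of the 8 virtual markers to the face centre."""
            d = np.asarray(marker_dists, dtype=float)
            delta = d - d[0]                       # change relative to the initial marker positions
            feats = []
            for series in (d, delta):
                feats += [series.mean(axis=0), series.var(axis=0),
                          np.sqrt((series ** 2).mean(axis=0))]   # mean, variance, RMS per marker
            return np.concatenate(feats)           # 6 statistics x 8 markers = 48 features

        # Synthetic training data: 60 sequences of 30 frames, 6 emotion labels (0..5)
        rng = np.random.default_rng(0)
        labels = rng.integers(0, 6, 60)
        X = np.stack([sequence_features(rng.normal(50 + y, 2 + y, size=(30, 8))) for y in labels])

        clf = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
        print(clf.predict(X[:5]), labels[:5])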

  5. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.

    PubMed

    Maruthapillai, Vasanthan; Murugappan, Murugappan

    2016-01-01

    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.

  6. Determination of the Ecological and Geographic Distributions of Armillaria Species in Missouri Ozark Forest Ecosystems

    Treesearch

    Johann N. Bruhn; James J. Wetteroff; Jeanne D. Mihail; Susan Burks

    1997-01-01

    Armillaria root rot contributes to oak decline in the Ozarks. Three Armillaria species were detected in Ecological Landtypes (ELT's) representing south- to west-facing side slopes (ELT 17), north- to east-facing side slopes (ELT 18), and ridge tops (ELT 11). Armillaria mellea was detected in 91 percent...

  7. Hidden Covariation Detection Produces Faster, Not Slower, Social Judgments

    ERIC Educational Resources Information Center

    Barker, Lynne A.; Andrade, Jackie

    2006-01-01

    In P. Lewicki's (1986b) demonstration of hidden covariation detection (HCD), responses of participants were slower to faces that corresponded with a covariation encountered previously than to faces with novel covariations. This slowing contrasts with the typical finding that priming leads to faster responding and suggests that HCD is a unique type…

  8. Quantitative determination of methamphetamine in oral fluid by liquid-liquid extraction and gas chromatography/mass spectrometry.

    PubMed

    Bahmanabadi, L; Akhgari, M; Jokar, F; Sadeghi, H B

    2017-02-01

    Methamphetamine abuse is one of the most serious medical and social problems many countries face. In spite of the ban on the use of methamphetamine, it is widely available in Iran's drug black market. There are many analytical methods for the detection of methamphetamine in biological specimens. Oral fluid has become a popular specimen to test for the presence of methamphetamine. The purpose of the present study was to develop a method for the extraction and detection of methamphetamine in oral fluid samples using liquid-liquid extraction (LLE) and gas chromatography/mass spectrometry (GC/MS). An analytical study was designed in which blank and 50 authentic oral fluid samples were collected, first extracted by LLE and subsequently analysed by GC/MS. The method was fully validated and showed excellent intra- and inter-assay precision (relative standard deviation < 10%) for external quality control samples. Recovery with the LLE method was 96%. The limit of detection and limit of quantitation were 5 and 15 ng/mL, respectively. The method showed high selectivity; no additional peaks due to interfering substances in the samples were observed. The introduced method was sensitive, accurate and precise enough for the extraction of methamphetamine from oral fluid samples in forensic toxicology laboratories.

  9. Effects of Facial Symmetry and Gaze Direction on Perception of Social Attributes: A Study in Experimental Art History.

    PubMed

    Folgerø, Per O; Hodne, Lasse; Johansson, Christer; Andresen, Alf E; Sætren, Lill C; Specht, Karsten; Skaar, Øystein O; Reber, Rolf

    2016-01-01

    This article explores the possibility of testing hypotheses about art production in the past by collecting data in the present. We call this enterprise "experimental art history". Why did medieval artists prefer to paint Christ with his face directed towards the beholder, while profane faces were noticeably more often painted in different degrees of profile? Is a preference for frontal faces motivated by deeper evolutionary and biological considerations? Head and gaze direction is a significant factor for detecting the intentions of others, and accurate detection of gaze direction depends on strong contrast between a dark iris and a bright sclera, a combination that is only found in humans among the primates. One uniquely human capacity is language acquisition, where the detection of shared or joint attention, for example through detection of gaze direction, contributes significantly to the ease of acquisition. The perceived face and gaze direction is also related to fundamental emotional reactions such as fear, aggression, empathy and sympathy. The fast-track modulator model presents a related fast and unconscious subcortical route that involves many central brain areas. Activity in this pathway mediates the affective valence of the stimulus. In particular, different sub-regions of the amygdala show specific activation as response to gaze direction, head orientation and the valence of facial expression. We present three experiments on the effects of face orientation and gaze direction on the judgments of social attributes. We observed that frontal faces with direct gaze were more highly associated with positive adjectives. Does this help to associate positive values to the Holy Face in a Western context? The formal result indicates that the Holy Face is perceived more positively than profiles with both direct and averted gaze. Two control studies, using a Brazilian and a Dutch database of photographs, showed a similar but weaker effect with a larger contrast between the gaze directions for profiles. Our findings indicate that many factors affect the impression of a face, and that eye contact in combination with face direction reinforce the general impression of portraits, rather than determine it.

  10. Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors.

    PubMed

    Lee, Kwan Woo; Yoon, Hyo Sik; Song, Jong Min; Park, Kang Ryoung

    2018-03-23

    Because aggressive driving often causes large-scale loss of life and property, techniques for advance detection of adverse driver emotional states have become important for the prevention of aggressive driving behaviors. Previous studies have primarily focused on systems for detecting aggressive driver emotion via smart-phone accelerometers and gyro-sensors, or they focused on methods of detecting physiological signals using electroencephalography (EEG) or electrocardiogram (ECG) sensors. Because EEG and ECG sensors cause discomfort to drivers and can be detached from the driver's body, it becomes difficult to focus on bio-signals to determine their emotional state. Gyro-sensors and accelerometers depend on the performance of GPS receivers and cannot be used in areas where GPS signals are blocked. Moreover, if driving on a mountain road with many quick turns, a driver's emotional state can easily be misrecognized as that of an aggressive driver. To resolve these problems, we propose a convolutional neural network (CNN)-based method of detecting emotion to identify aggressive driving using input images of the driver's face, obtained using near-infrared (NIR) light and thermal camera sensors. In this research, we conducted an experiment using our own database, which provides a high classification accuracy for detecting driver emotion leading to either aggressive or smooth (i.e., relaxed) driving. Our proposed method demonstrates better performance than existing methods.
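
    A purely architectural sketch, since the abstract above does not specify the network: NIR and thermal face crops are stacked as a two-channel input to a small CNN that outputs aggressive-vs-smooth logits. Written in PyTorch; every layer size and the 96x96 crop size are assumptions, not the authors' design.

        import torch
        import torch.nn as nn

        class DriverEmotionCNN(nn.Module):
            def __init__(self, n_classes=2):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 2 channels: NIR + thermal
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                )
                self.classifier = nn.Linear(64, n_classes)

            def forward(self, x):                     # x: (batch, 2, 96, 96) aligned face crops
                return self.classifier(self.features(x).flatten(1))

        model = DriverEmotionCNN()
        nir_thermal = torch.rand(4, 2, 96, 96)        # dummy batch standing in for camera data
        logits = model(nir_thermal)
        print(logits.shape)                           # torch.Size([4, 2])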

  11. Hemispheric metacontrol and cerebral dominance in healthy individuals investigated by means of chimeric faces.

    PubMed

    Urgesi, Cosimo; Bricolo, Emanuela; Aglioti, Salvatore M

    2005-08-01

    Cerebral dominance and hemispheric metacontrol were investigated by testing the ability of healthy participants to match chimeric, entire, or half faces presented tachistoscopically. The two hemi-faces compounding chimeric or entire stimuli were presented simultaneously or asynchronously at different exposure times. Participants did not consciously detect chimeric faces for simultaneous presentations lasting up to 40 ms. Interestingly, a 20 ms separation between each half-chimera was sufficient to induce detection of conflicts at a conscious level. Although the presence of chimeric faces was not consciously perceived, performance on chimeric faces was poorer than on entire- and half-faces stimuli, thus indicating an implicit processing of perceptual conflicts. Moreover, the precedence of hemispheric stimulation over-ruled the right hemisphere dominance for face processing, insofar as the hemisphere stimulated last appeared to influence the response. This dynamic reversal of cerebral dominance, however, was not caused by a shift in hemispheric specialization, since the level of performance always reflected the right hemisphere specialization for face recognition. Thus, the dissociation between hemispheric dominance and specialization found in the present study hints at the existence of hemispheric metacontrol in healthy individuals.

  12. A novel BCI based on ERP components sensitive to configural processing of human faces

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Zhao, Qibin; Jing, Jin; Wang, Xingyu; Cichocki, Andrzej

    2012-04-01

    This study introduces a novel brain-computer interface (BCI) based on an oddball paradigm using stimuli of facial images with loss of configural face information (e.g., inversion of face). To the best of our knowledge, the configural processing of human faces has so far not been applied to BCI, although it has been widely studied in cognitive neuroscience research. Our experiments confirm that the face-sensitive event-related potential (ERP) components N170 and vertex positive potential (VPP) reflect early structural encoding of faces and can be modulated by the configural processing of faces. With the proposed novel paradigm, we investigate the effects of ERP components N170, VPP and P300 on target detection for BCI. An eight-class BCI platform is developed to analyze ERPs and evaluate the target detection performance using linear discriminant analysis without complicated feature extraction processing. The online classification accuracy of 88.7% and information transfer rate of 38.7 bits min(-1) using stimuli of inverted faces with only a single trial suggest that the proposed paradigm based on the configural processing of faces is very promising for visual stimuli-driven BCI applications.

  13. A novel BCI based on ERP components sensitive to configural processing of human faces.

    PubMed

    Zhang, Yu; Zhao, Qibin; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej

    2012-04-01

    This study introduces a novel brain-computer interface (BCI) based on an oddball paradigm using stimuli of facial images with loss of configural face information (e.g., inversion of face). To the best of our knowledge, the configural processing of human faces has so far not been applied to BCI, although it has been widely studied in cognitive neuroscience research. Our experiments confirm that the face-sensitive event-related potential (ERP) components N170 and vertex positive potential (VPP) reflect early structural encoding of faces and can be modulated by the configural processing of faces. With the proposed novel paradigm, we investigate the effects of ERP components N170, VPP and P300 on target detection for BCI. An eight-class BCI platform is developed to analyze ERPs and evaluate the target detection performance using linear discriminant analysis without complicated feature extraction processing. The online classification accuracy of 88.7% and information transfer rate of 38.7 bits min(-1) using stimuli of inverted faces with only a single trial suggest that the proposed paradigm based on the configural processing of faces is very promising for visual stimuli-driven BCI applications.

  14. Rare cancer cell analyzer for whole blood applications: automated nucleic acid purification in a microfluidic disposable card.

    PubMed

    Kokoris, M; Nabavi, M; Lancaster, C; Clemmens, J; Maloney, P; Capadanno, J; Gerdes, J; Battrell, C F

    2005-09-01

    One current challenge facing point-of-care cancer detection is that existing methods make it difficult, time consuming and too costly to (1) collect relevant cell types directly from a patient sample, such as blood, and (2) rapidly assay those cell types to determine the presence or absence of a particular type of cancer. We present a proof of principle method for an integrated, sample-to-result, point-of-care detection device that employs microfluidics technology, accepted assays, and a silica membrane for total RNA purification on a disposable, credit card sized laboratory-on-card ("lab card") device in which results are obtained in minutes. Both yield and quality of on-card purified total RNA, as determined by both LightCycler and standard reverse transcriptase amplification of G6PDH and BCR-ABL transcripts, were found to be better than or equal to accepted standard purification methods.

  15. Neuromagnetic evidence that the right fusiform face area is essential for human face awareness: An intermittent binocular rivalry study.

    PubMed

    Kume, Yuko; Maekawa, Toshihiko; Urakawa, Tomokazu; Hironaga, Naruhito; Ogata, Katsuya; Shigyo, Maki; Tobimatsu, Shozo

    2016-08-01

    When and where the awareness of faces is consciously initiated is unclear. We used magnetoencephalography to probe the brain responses associated with face awareness under intermittent pseudo-rivalry (PR) and binocular rivalry (BR) conditions. The stimuli comprised three pictures: a human face, a monkey face and a house. In the PR condition, we detected the M130 component, which has been minimally characterized in previous research. We obtained a clear recording of the M170 component in the fusiform face area (FFA), and found that this component had an earlier response time to faces compared with other objects. The M170 occurred predominantly in the right hemisphere in both conditions. In the BR condition, the amplitude of the M130 significantly increased in the right hemisphere irrespective of the physical characteristics of the visual stimuli. Conversely, we did not detect the M170 when the face image was suppressed in the BR condition, although this component was clearly present when awareness for the face was initiated. We also found a significant difference in the latency of the M170 (human

  16. Magnetic resonance imaging for the detection, localisation, and characterisation of prostate cancer: recommendations from a European consensus meeting.

    PubMed

    Dickinson, Louise; Ahmed, Hashim U; Allen, Clare; Barentsz, Jelle O; Carey, Brendan; Futterer, Jurgen J; Heijmink, Stijn W; Hoskin, Peter J; Kirkham, Alex; Padhani, Anwar R; Persad, Raj; Puech, Philippe; Punwani, Shonit; Sohaib, Aslam S; Tombal, Bertrand; Villers, Arnauld; van der Meulen, Jan; Emberton, Mark

    2011-04-01

    Multiparametric magnetic resonance imaging (mpMRI) may have a role in detecting clinically significant prostate cancer in men with raised serum prostate-specific antigen levels. Variations in technique and the interpretation of images have contributed to inconsistency in its reported performance characteristics. Our aim was to make recommendations on a standardised method for the conduct, interpretation, and reporting of prostate mpMRI for prostate cancer detection and localisation. A consensus meeting of 16 European prostate cancer experts was held that followed the UCLA-RAND Appropriateness Method and facilitated by an independent chair. Before the meeting, 520 items were scored for "appropriateness" by panel members, discussed face to face, and rescored. Agreement was reached in 67% of 260 items related to imaging sequence parameters. T2-weighted, dynamic contrast-enhanced, and diffusion-weighted MRI were the key sequences incorporated into the minimum requirements. Consensus was also reached on 54% of 260 items related to image interpretation and reporting, including features of malignancy on individual sequences. A 5-point scale was agreed on for communicating the probability of malignancy, with a minimum of 16 prostatic regions of interest, to include a pictorial representation of suspicious foci. Limitations relate to consensus methodology. Dominant personalities are known to affect the opinions of the group and were countered by a neutral chairperson. Consensus was reached on a number of areas related to the conduct, interpretation, and reporting of mpMRI for the detection, localisation, and characterisation of prostate cancer. Before optimal dissemination of this technology, these outcomes will require formal validation in prospective trials. Copyright © 2010 European Association of Urology. Published by Elsevier B.V. All rights reserved.

  17. Fisheye-Based Method for GPS Localization Improvement in Unknown Semi-Obstructed Areas

    PubMed Central

    Moreau, Julien; Ambellouis, Sébastien; Ruichek, Yassine

    2017-01-01

    A precise GNSS (Global Navigation Satellite System) localization is vital for autonomous road vehicles, especially in cluttered or urban environments where satellites are occluded, preventing accurate positioning. We propose to fuse GPS (Global Positioning System) data with fisheye stereovision to face this problem independently of additional data, which may be outdated, unavailable, or in need of correlation with reality. Our stereoscope is sky-facing, with 360° × 180° fisheye cameras to observe surrounding obstacles. We propose 3D modelling and plane extraction through the following steps: stereoscope self-calibration for robustness to decalibration, stereo matching that considers neighbouring epipolar curves to compute 3D, and robust plane fitting based on the generated cartography and a Hough transform. We use these 3D data together with GPS raw data to estimate the pseudorange delay of NLOS (Non-Line-Of-Sight) reflected signals. We exploit the extracted planes to build a visibility mask for NLOS detection. A simplified 3D canyon model allows the computation of reflection pseudorange delays. In the end, GPS positioning is computed considering the corrected pseudoranges. With experiments on real fixed scenes, we show generated 3D models reaching metric accuracy and an improvement of horizontal GPS positioning accuracy by more than 50%. The proposed procedure is effective, and the proposed NLOS detection outperforms CN0-based methods (Carrier-to-receiver Noise density). PMID:28106746
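
    A simplified sketch of the masking step only (not the paper's pipeline): the extracted 3D planes are reduced to a per-azimuth minimum visible elevation, and each satellite is labelled LOS or NLOS by comparing its elevation to that mask. The pseudorange-delay model and the final positioning are not reproduced; all obstruction and satellite angles below are invented for illustration.

        def build_mask(building_edges, n_bins=36):
            """building_edges: list of (azimuth_deg, elevation_deg) of obstruction tops."""
            mask = [0.0] * n_bins
            for az, el in building_edges:
                b = int(az % 360 // (360 / n_bins))
                mask[b] = max(mask[b], el)           # highest obstruction per azimuth bin
            return mask

        def classify(satellites, mask):
            """satellites: dict prn -> (azimuth_deg, elevation_deg); returns LOS/NLOS labels."""
            n_bins = len(mask)
            out = {}
            for prn, (az, el) in satellites.items():
                blocked = el < mask[int(az % 360 // (360 / n_bins))]
                out[prn] = "NLOS" if blocked else "LOS"
            return out

        mask = build_mask([(40, 55), (45, 60), (220, 35)])       # assumed urban-canyon edges
        sats = {"G05": (42, 30), "G12": (42, 75), "G20": (300, 20)}
        print(classify(sats, mask))   # G05 sits below the 60-degree obstruction -> NLOS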

  18. Face Pareidolia in the Rhesus Monkey.

    PubMed

    Taubert, Jessica; Wardle, Susan G; Flessert, Molly; Leopold, David A; Ungerleider, Leslie G

    2017-08-21

    Face perception in humans and nonhuman primates is rapid and accurate [1-4]. In the human brain, a network of visual-processing regions is specialized for faces [5-7]. Although face processing is a priority of the primate visual system, face detection is not infallible. Face pareidolia is the compelling illusion of perceiving facial features on inanimate objects, such as the illusory face on the surface of the moon. Although face pareidolia is commonly experienced by humans, its presence in other species is unknown. Here we provide evidence for face pareidolia in a species known to possess a complex face-processing system [8-10]: the rhesus monkey (Macaca mulatta). In a visual preference task [11, 12], monkeys looked longer at photographs of objects that elicited face pareidolia in human observers than at photographs of similar objects that did not elicit illusory faces. Examination of eye movements revealed that monkeys fixated the illusory internal facial features in a pattern consistent with how they view photographs of faces [13]. Although the specialized response to faces observed in humans [1, 3, 5-7, 14] is often argued to be continuous across primates [4, 15], it was previously unclear whether face pareidolia arose from a uniquely human capacity. For example, pareidolia could be a product of the human aptitude for perceptual abstraction or result from frequent exposure to cartoons and illustrations that anthropomorphize inanimate objects. Instead, our results indicate that the perception of illusory facial features on inanimate objects is driven by a broadly tuned face-detection mechanism that we share with other species. Published by Elsevier Ltd.

  19. Combinatorial clustering and Its Application to 3D Polygonal Traffic Sign Reconstruction From Multiple Images

    NASA Astrophysics Data System (ADS)

    Vallet, B.; Soheilian, B.; Brédif, M.

    2014-08-01

    The 3D reconstruction of similar 3D objects detected in 2D faces a major issue when it comes to grouping the 2D detections into clusters to be used to reconstruct the individual 3D objects. Simple clustering heuristics fail as soon as similar objects are close. This paper formulates a framework to use the geometric quality of the reconstruction as a hint to do a proper clustering. We present a methodology to solve the resulting combinatorial optimization problem with some simplifications and approximations in order to make it tractable. The proposed method is applied to the reconstruction of 3D traffic signs from their 2D detections to demonstrate its capacity to solve ambiguities.

  20. Computational optimisation of targeted DNA sequencing for cancer detection

    NASA Astrophysics Data System (ADS)

    Martinez, Pierre; McGranahan, Nicholas; Birkbak, Nicolai Juul; Gerlinger, Marco; Swanton, Charles

    2013-12-01

    Despite recent progress thanks to next-generation sequencing technologies, personalised cancer medicine is still hampered by intra-tumour heterogeneity and drug resistance. As most patients with advanced metastatic disease face poor survival, there is a need to improve early diagnosis. Analysing circulating tumour DNA (ctDNA) might represent a non-invasive method to detect mutations in patients, facilitating early detection. In this article, we define reduced gene panels from publicly available datasets as a first step to assess and optimise the potential of targeted ctDNA scans for early tumour detection. Dividing 4,467 samples into one discovery and two independent validation cohorts, we show that up to 76% of 10 cancer types harbour at least one mutation in a panel of only 25 genes, with high sensitivity across most tumour types. Our analyses demonstrate that targeting "hotspot" regions would introduce biases towards in-frame mutations and would compromise the reproducibility of tumour detection.
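
    A minimal greedy set-cover sketch of the panel-reduction idea (not the authors' method): repeatedly add the gene that covers the most not-yet-covered samples until the panel reaches the target size. The mutation table below is randomly generated, so the printed coverage is purely illustrative.

        import random

        random.seed(0)
        genes = [f"GENE{i}" for i in range(200)]
        samples = [f"S{i}" for i in range(1000)]

        # mutated[g] = set of samples carrying at least one mutation in gene g (synthetic)
        mutated = {}
        for g in genes:
            rate = random.choice([0.001, 0.02, 0.08])   # assumed per-gene mutation frequency
            mutated[g] = {s for s in samples if random.random() < rate}

        def greedy_panel(mutated, panel_size=25):
            covered, panel = set(), []
            for _ in range(panel_size):
                # pick the unused gene adding the most newly covered samples
                gene = max((g for g in mutated if g not in panel),
                           key=lambda g: len(mutated[g] - covered))
                panel.append(gene)
                covered |= mutated[gene]
            return panel, covered

        panel, covered = greedy_panel(mutated)
        print(f"{len(panel)}-gene panel covers {len(covered) / len(samples):.0%} of samples")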

  1. Technological advances for improving adenoma detection rates: The changing face of colonoscopy.

    PubMed

    Ishaq, Sauid; Siau, Keith; Harrison, Elizabeth; Tontini, Gian Eugenio; Hoffman, Arthur; Gross, Seth; Kiesslich, Ralf; Neumann, Helmut

    2017-07-01

    Worldwide, colorectal cancer is the third commonest cancer. Over 90% follow an adenoma-to-cancer sequence over many years. Colonoscopy is the gold standard method for cancer screening and early adenoma detection. However, considerable variation exists between endoscopists' detection rates. This review considers the effects of different endoscopic techniques on adenoma detection. Two areas of technological interest were considered: (1) optical technologies and (2) mechanical technologies. Optical solutions, including FICE, NBI, i-SCAN and high-definition colonoscopy, showed mixed results. In contrast, mechanical advances, such as cap-assisted colonoscopy, FUSE, EndoCuff and G-EYE™, showed promise, with reported detection rates of up to 69%. However, before definitive recommendations can be made for their incorporation into daily practice, further studies and comparison trials are required. Copyright © 2017 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.

  2. "Utilizing" signal detection theory.

    PubMed

    Lynn, Spencer K; Barrett, Lisa Feldman

    2014-09-01

    What do inferring what a person is thinking or feeling, judging a defendant's guilt, and navigating a dimly lit room have in common? They involve perceptual uncertainty (e.g., a scowling face might indicate anger or concentration, for which different responses are appropriate) and behavioral risk (e.g., a cost to making the wrong response). Signal detection theory describes these types of decisions. In this tutorial, we show how incorporating the economic concept of utility allows signal detection theory to serve as a model of optimal decision making, going beyond its common use as an analytic method. This utility approach to signal detection theory clarifies otherwise enigmatic influences of perceptual uncertainty on measures of decision-making performance (accuracy and optimality) and on behavior (an inverse relationship between bias magnitude and sensitivity optimizes utility). A "utilized" signal detection theory offers the possibility of expanding the phenomena that can be understood within a decision-making framework. © The Author(s) 2014.
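
    A worked sketch of the utility idea (illustrative, not taken from the article): the expected utility of a yes/no decision is written as a function of the response criterion c for a given sensitivity d', prior, and payoff matrix, and the criterion is swept to find the utility-maximising bias. All payoff values and priors are arbitrary assumptions.

        import math

        def phi(x):                                   # standard normal CDF
            return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

        def expected_utility(c, d_prime=1.0, p_signal=0.5,
                             u_hit=1.0, u_miss=-2.0, u_fa=-1.0, u_cr=0.5):
            hit = 1.0 - phi(c - d_prime)              # P(respond "yes" | signal present)
            fa = 1.0 - phi(c)                         # P(respond "yes" | noise only)
            return (p_signal * (hit * u_hit + (1 - hit) * u_miss)
                    + (1 - p_signal) * (fa * u_fa + (1 - fa) * u_cr))

        # Sweep the criterion to find the utility-maximising bias for this payoff structure.
        cs = [i / 100 for i in range(-300, 301)]
        best_c = max(cs, key=expected_utility)
        print(f"optimal criterion c* = {best_c:.2f}, EU = {expected_utility(best_c):.3f}")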

  3. “UTILIZING” SIGNAL DETECTION THEORY

    PubMed Central

    Lynn, Spencer K.; Barrett, Lisa Feldman

    2014-01-01

    What do inferring what a person is thinking or feeling, deciding to report a symptom to your doctor, judging a defendant’s guilt, and navigating a dimly lit room have in common? They involve perceptual uncertainty (e.g., a scowling face might indicate anger or concentration, which engender different appropriate responses), and behavioral risk (e.g., a cost to making the wrong response). Signal detection theory describes these types of decisions. In this tutorial we show how, by incorporating the economic concept of utility, signal detection theory serves as a model of optimal decision making, beyond its common use as an analytic method. This utility approach to signal detection theory highlights potentially enigmatic influences of perceptual uncertainty on measures of decision-making performance (accuracy and optimality) and on behavior (a functional relationship between bias and sensitivity). A “utilized” signal detection theory offers the possibility of expanding the phenomena that can be understood within a decision-making framework. PMID:25097061

  4. Face recognition system for set-top box-based intelligent TV.

    PubMed

    Lee, Won Oh; Kim, Yeong Gon; Hong, Hyung Gil; Park, Kang Ryoung

    2014-11-18

    Despite the prevalence of smart TVs, many consumers continue to use conventional TVs with supplementary set-top boxes (STBs) because of the high cost of smart TVs. However, because the processing power of an STB is quite low, the smart TV functionalities that can be implemented in an STB are very limited. Because of this, negligible research has been conducted regarding face recognition for conventional TVs with supplementary STBs, even though many such studies have been conducted with smart TVs. In terms of camera sensors, previous face recognition systems have used high-resolution cameras, cameras with high magnification zoom lenses, or camera systems with panning and tilting devices that can be used for face recognition from various positions. However, these cameras and devices cannot be used in intelligent TV environments because of limitations related to size and cost, and only small, low-cost web-cameras can be used. The resulting face recognition performance is degraded because of the limited resolution and quality levels of the images. Therefore, we propose a new face recognition system for intelligent TVs in order to overcome the limitations associated with low-resource set-top boxes and low-cost web-cameras. We implement the face recognition system using a software algorithm that does not require special devices or cameras. Our research has the following four novelties: first, the candidate regions in a viewer's face are detected in an image captured by a camera connected to the STB via low-processing background subtraction and face color filtering; second, the detected candidate face regions are transmitted to a server that has high processing power in order to detect face regions accurately; third, in-plane rotations of the face regions are compensated based on similarities between the left and right half sub-regions of the face regions; fourth, various poses of the viewer's face region are identified using five templates obtained during the initial user registration stage and multi-level local binary pattern matching. Experimental results indicate that the recall, precision, and genuine acceptance rate were about 95.7%, 96.2%, and 90.2%, respectively.
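
    A minimal sketch of the matching stage only (the fourth step above), under assumptions not stated in the abstract: basic 3x3 LBP histograms are computed at several image scales ("multi-level"), concatenated, and compared against each enrolled pose template with a chi-square distance. Requires numpy only; the templates below are random stand-ins for the five registration templates, and a real system would use aligned face crops.

        import numpy as np

        def lbp_image(img):
            """Basic 3x3 local binary pattern codes for a grayscale image."""
            c = img[1:-1, 1:-1]
            neigh = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
                     img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
            codes = np.zeros_like(c, dtype=np.uint8)
            for bit, n in enumerate(neigh):
                codes |= ((n >= c).astype(np.uint8) << bit)
            return codes

        def multilevel_lbp(img, levels=3):
            feats = []
            cur = img.astype(float)
            for _ in range(levels):
                hist, _ = np.histogram(lbp_image(cur), bins=256, range=(0, 256), density=True)
                feats.append(hist)
                cur = cur[::2, ::2]                   # crude down-scaling for the next level
            return np.concatenate(feats)

        def chi_square(a, b, eps=1e-9):
            return 0.5 * np.sum((a - b) ** 2 / (a + b + eps))

        def identify(probe, templates):
            """templates: dict pose_name -> feature vector stored at registration."""
            f = multilevel_lbp(probe)
            return min(templates, key=lambda k: chi_square(f, templates[k]))

        templates = {f"pose{i}": multilevel_lbp(np.random.rand(64, 64) * 255) for i in range(5)}
        print(identify(np.random.rand(64, 64) * 255, templates))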

  5. Detecting "Infant-Directedness" in Face and Voice

    ERIC Educational Resources Information Center

    Kim, Hojin I.; Johnson, Scott P.

    2014-01-01

    Five- and 3-month-old infants' perception of infant-directed (ID) faces and the role of speech in perceiving faces were examined. Infants' eye movements were recorded as they viewed a series of two side-by-side talking faces, one infant-directed and one adult-directed (AD), while listening to ID speech, AD speech, or in silence. Infants…

  6. VidCat: an image and video analysis service for personal media management

    NASA Astrophysics Data System (ADS)

    Begeja, Lee; Zavesky, Eric; Liu, Zhu; Gibbon, David; Gopalan, Raghuraman; Shahraray, Behzad

    2013-03-01

    Cloud-based storage and consumption of personal photos and videos provides increased accessibility, functionality, and satisfaction for mobile users. One cloud service frontier that is currently growing is that of personal media management. This work presents a system called VidCat that assists users in the tagging, organization, and retrieval of their personal media by faces and visual content similarity, time, and date information. Evaluations of the effectiveness of the copy detection and face recognition algorithms on standard datasets are also discussed. Finally, the system includes a set of application programming interfaces (APIs) allowing content to be uploaded, analyzed, and retrieved on any client with simple HTTP-based methods, as demonstrated with a prototype developed on the iOS and Android mobile platforms.

  7. Detecting Emotional Expression in Face-to-Face and Online Breast Cancer Support Groups

    ERIC Educational Resources Information Center

    Liess, Anna; Simon, Wendy; Yutsis, Maya; Owen, Jason E.; Piemme, Karen Altree; Golant, Mitch; Giese-Davis, Janine

    2008-01-01

    Accurately detecting emotional expression in women with primary breast cancer participating in support groups may be important for therapists and researchers. In 2 small studies (N = 20 and N = 16), the authors examined whether video coding, human text coding, and automated text analysis provided consistent estimates of the level of emotional…

  8. Rigid particulate matter sensor

    DOEpatents

    Hall, Matthew [Austin, TX

    2011-02-22

    A sensor to detect particulate matter. The sensor includes a first rigid tube, a second rigid tube, a detection surface electrode, and a bias surface electrode. The second rigid tube is mounted substantially parallel to the first rigid tube. The detection surface electrode is disposed on an outer surface of the first rigid tube. The detection surface electrode is disposed to face the second rigid tube. The bias surface electrode is disposed on an outer surface of the second rigid tube. The bias surface electrode is disposed to face the detection surface electrode on the first rigid tube. An air gap exists between the detection surface electrode and the bias surface electrode to allow particulate matter within an exhaust stream to flow between the detection and bias surface electrodes.

  9. Early detection of tooth wear by en-face optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Mărcăuteanu, Corina; Negrutiu, Meda; Sinescu, Cosmin; Demjan, Eniko; Hughes, Mike; Bradu, Adrian; Dobre, George; Podoleanu, Adrian G.

    2009-02-01

    Excessive dental wear (pathological attrition and/or abfractions) is a frequent complication in bruxing patients. The parafunction causes heavy occlusal loads. The aim of this study is the early detection and monitoring of occlusal overload in bruxing patients. En-face optical coherence tomography was used for investigating and imaging several extracted teeth with normal morphology, derived from patients with active bruxism and from subjects without parafunction. We found a characteristic pattern of enamel cracks in patients with first-degree bruxism and normal tooth morphology. We conclude that en-face optical coherence tomography is a promising non-invasive alternative technique for the early detection of occlusal overload, before it becomes clinically evident as tooth wear.

  10. Implicit Binding of Facial Features During Change Blindness

    PubMed Central

    Lyyra, Pessi; Mäkelä, Hanna; Hietanen, Jari K.; Astikainen, Piia

    2014-01-01

    Change blindness refers to the inability to detect visual changes if introduced together with an eye-movement, blink, flash of light, or with distracting stimuli. Evidence of implicit detection of changed visual features during change blindness has been reported in a number of studies using both behavioral and neurophysiological measurements. However, it is not known whether implicit detection occurs only at the level of single features or whether complex organizations of features can be implicitly detected as well. We tested this in adult humans using intact and scrambled versions of schematic faces as stimuli in a change blindness paradigm while recording event-related potentials (ERPs). An enlargement of the face-sensitive N170 ERP component was observed at the right temporal electrode site to changes from scrambled to intact faces, even if the participants were not consciously able to report such changes (change blindness). Similarly, the disintegration of an intact face to scrambled features resulted in attenuated N170 responses during change blindness. Other ERP deflections were modulated by changes, but unlike the N170 component, they were indifferent to the direction of the change. The bidirectional modulation of the N170 component during change blindness suggests that implicit change detection can also occur at the level of complex features in the case of facial stimuli. PMID:24498165

  11. Implicit binding of facial features during change blindness.

    PubMed

    Lyyra, Pessi; Mäkelä, Hanna; Hietanen, Jari K; Astikainen, Piia

    2014-01-01

    Change blindness refers to the inability to detect visual changes if introduced together with an eye-movement, blink, flash of light, or with distracting stimuli. Evidence of implicit detection of changed visual features during change blindness has been reported in a number of studies using both behavioral and neurophysiological measurements. However, it is not known whether implicit detection occurs only at the level of single features or whether complex organizations of features can be implicitly detected as well. We tested this in adult humans using intact and scrambled versions of schematic faces as stimuli in a change blindness paradigm while recording event-related potentials (ERPs). An enlargement of the face-sensitive N170 ERP component was observed at the right temporal electrode site to changes from scrambled to intact faces, even if the participants were not consciously able to report such changes (change blindness). Similarly, the disintegration of an intact face to scrambled features resulted in attenuated N170 responses during change blindness. Other ERP deflections were modulated by changes, but unlike the N170 component, they were indifferent to the direction of the change. The bidirectional modulation of the N170 component during change blindness suggests that implicit change detection can also occur at the level of complex features in the case of facial stimuli.

  12. Adaptive skin segmentation via feature-based face detection

    NASA Astrophysics Data System (ADS)

    Taylor, Michael J.; Morris, Tim

    2014-05-01

    Variations in illumination can have significant effects on the apparent colour of skin, which can be damaging to the efficacy of any colour-based segmentation approach. We attempt to overcome this issue by presenting a new adaptive approach, capable of generating skin colour models at run-time. Our approach adopts a Viola-Jones feature-based face detector, in a moderate-recall, high-precision configuration, to sample faces within an image, with an emphasis on avoiding potentially detrimental false positives. From these samples, we extract a set of pixels that are likely to be from skin regions, filter them according to their relative luma values in an attempt to eliminate typical non-skin facial features (eyes, mouths, nostrils, etc.), and hence establish a set of pixels that we can be confident represent skin. Using this representative set, we train a unimodal Gaussian function to model the skin colour in the given image in the normalised rg colour space - a combination of modelling approach and colour space that benefits us in a number of ways. A generated function can subsequently be applied to every pixel in the given image, and, hence, the probability that any given pixel represents skin can be determined. Segmentation of the skin, therefore, can be as simple as applying a binary threshold to the calculated probabilities. In this paper, we touch upon a number of existing approaches, describe the methods behind our new system, present the results of its application to arbitrary images of people with detectable faces, which we have found to be extremely encouraging, and investigate its potential to be used as part of real-time systems.
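
    A hedged end-to-end sketch of the pipeline described above, using OpenCV's stock Haar cascade as the feature-based detector: skin pixels are sampled from the first detected face, extreme-luma pixels are discarded, a single Gaussian is fitted in normalised rg space, and per-pixel likelihoods are thresholded. The percentile cut-offs and the probability threshold are assumptions, not the paper's values.

        import cv2
        import numpy as np

        def skin_mask(bgr, prob_threshold=0.3):
            cascade = cv2.CascadeClassifier(
                cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
            gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
            # high minNeighbors favours precision over recall, as the abstract suggests
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=8)
            if len(faces) == 0:
                return None
            x, y, w, h = faces[0]
            patch = bgr[y:y + h, x:x + w].reshape(-1, 3).astype(float)
            luma = patch @ np.array([0.114, 0.587, 0.299])        # BGR luma weights
            keep = (luma > np.percentile(luma, 20)) & (luma < np.percentile(luma, 95))
            skin = patch[keep]
            # normalised rg chromaticity of the sampled skin pixels
            s = skin.sum(axis=1, keepdims=True) + 1e-6
            rg = np.stack([skin[:, 2] / s[:, 0], skin[:, 1] / s[:, 0]], axis=1)
            mu, cov = rg.mean(axis=0), np.cov(rg.T) + 1e-6 * np.eye(2)
            inv, norm = np.linalg.inv(cov), 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(cov)))
            # evaluate the fitted Gaussian for every pixel in the image
            allpix = bgr.reshape(-1, 3).astype(float)
            s_all = allpix.sum(axis=1, keepdims=True) + 1e-6
            rg_all = np.stack([allpix[:, 2] / s_all[:, 0], allpix[:, 1] / s_all[:, 0]], axis=1)
            d = rg_all - mu
            prob = norm * np.exp(-0.5 * np.einsum('ij,jk,ik->i', d, inv, d))
            prob /= prob.max() + 1e-12
            return (prob > prob_threshold).reshape(bgr.shape[:2]).astype(np.uint8) * 255

        # Usage: mask = skin_mask(cv2.imread("people.jpg"))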

  13. Does vigilance to pain make individuals experts in facial recognition of pain?

    PubMed Central

    Baum, Corinna; Kappesser, Judith; Schneider, Raphaela; Lautenbacher, Stefan

    2013-01-01

    BACKGROUND: It is well known that individual factors are important in the facial recognition of pain. However, it is unclear whether vigilance to pain as a pain-related attentional mechanism is among these relevant factors. OBJECTIVES: Vigilance to pain may have two different effects on the recognition of facial pain expressions: pain-vigilant individuals may detect pain faces better but overinclude other facial displays, misinterpreting them as expressing pain; or they may be true experts in discriminating between pain and other facial expressions. The present study aimed to test these two hypotheses. Furthermore, pain vigilance was assumed to be a distinct predictor, the impact of which on recognition cannot be completely replaced by related concepts such as pain catastrophizing and fear of pain. METHODS: Photographs of neutral, happy, angry and pain facial expressions were presented to 40 healthy participants, who were asked to classify them into the appropriate emotion categories and provide a confidence rating for each classification. Additionally, potential predictors of the discrimination performance for pain and anger faces – pain vigilance, pain-related catastrophizing, fear of pain – were assessed using self-report questionnaires. RESULTS: Pain-vigilant participants classified pain faces more accurately and did not misclassify anger as pain faces more frequently. However, vigilance to pain was not related to the confidence of recognition ratings. Pain catastrophizing and fear of pain did not account for the recognition performance. CONCLUSIONS: Moderate pain vigilance, as assessed in the present study, appears to be associated with appropriate detection of pain-related cues and not necessarily with the overinclusion of other negative cues. PMID:23717826

  14. Oxygen investigation in the Galileian satellites using AFOSC

    NASA Astrophysics Data System (ADS)

    Migliorini, A.; Barbieri, M.; Piccioni, G.; Barbieri, C.; Altieri, F.

    Spectroscopy in the visible range of the Galilean satellites is a suitable way to investigate the surface properties of these objects. In recent years, several species, like O_2, O_3, and SO_2, have been detected on the surfaces of these satellites, which were thought to be completely covered only by water ice. The recent detection of the O_2 absorption bands in the Ganymede trailing face \citep{spencer_1995} led to laboratory experiments in order to better constrain the O_2 phases trapped in the water ice surface \citep{vidal_1997}. The same features were also observed on the surfaces of Europa and Callisto \citep{spencer_2002}, although a better investigation of their properties and their variability with time is still not fully addressed. We proposed ground-based observations with the AFOSC instrument on the 1.8-m telescope in Asiago, to investigate the Galilean satellites' surface properties, focusing both on the leading and trailing faces of the satellites. We used the Volume Phase Holographic grism covering the spectral range 400-1000 nm, with a spectral resolution of about 5000. In this work, we show results of the observations acquired in November 2014, focusing on the leading faces of the satellites. Data were treated using standard methods of data reduction. Further observations with the same setup, scheduled for February 2015 to observe the trailing faces of the Galilean satellites, will complement the program. These observations are in preparation for the future science we will be able to perform with the MAJIS spectrometer on the European JUICE mission.

  15. Face recognition system and method using face pattern words and face pattern bytes

    DOEpatents

    Zheng, Yufeng

    2014-12-23

    The present invention provides a novel system and method for identifying individuals and for face recognition utilizing facial features for face identification. The system and method of the invention comprise creating facial features or face patterns called face pattern words and face pattern bytes for face identification. The invention also provides for pattern recognitions for identification other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals by utilizing computer software implemented by instructions on a computer or computer system and a computer readable medium containing instructions on a computer system for face recognition and identification.

  16. From tiger to panda: animal head detection.

    PubMed

    Zhang, Weiwei; Sun, Jian; Tang, Xiaoou

    2011-06-01

    Robust object detection has many important applications in real-world online photo processing. For example, both Google image search and MSN live image search have integrated human face detectors to retrieve face or portrait photos. Inspired by the success of such a face filtering approach, in this paper, we focus on another popular online photo category--animal, which is one of the top five categories in the MSN live image search query log. As a first attempt, we focus on the problem of animal head detection for a set of relatively large land animals that are popular on the internet, such as cat, tiger, panda, fox, and cheetah. First, we propose a new set of gradient-oriented features, Haar of Oriented Gradients (HOOG), to effectively capture the shape and texture features of animal heads. Then, we propose two detection algorithms, namely Bruteforce detection and Deformable detection, to effectively exploit the shape feature and texture feature simultaneously. Experimental results on 14,379 well-labeled animal images validate the superiority of the proposed approach. Additionally, we apply the animal head detector to improve image search results through text-based online photo search result filtering.

  17. Lip boundary detection techniques using color and depth information

    NASA Astrophysics Data System (ADS)

    Kim, Gwang-Myung; Yoon, Sung H.; Kim, Jung H.; Hur, Gi Taek

    2002-01-01

    This paper presents our approach to using a stereo camera to obtain 3-D image data to be used to improve existing lip boundary detection techniques. We show that depth information as provided by our approach can be used to significantly improve boundary detection systems. Our system detects the face and mouth area in the image by using color, geometric location, and additional depth information for the face. Initially, color and depth information can be used to localize the face. Then we can determine the lip region from the intensity information and the detected eye locations. The system has successfully been used to extract approximate lip regions using RGB color information of the mouth area. Merely using color information is not robust because the quality of the results may vary depending on lighting conditions, background, and the subject's ethnicity. To overcome this problem, we used a stereo camera to obtain 3-D facial images. 3-D data constructed from the depth information along with color information can provide more accurate lip boundary detection results as compared to color-only based techniques.

  18. Subject independent facial expression recognition with robust face detection using a convolutional neural network.

    PubMed

    Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji

    2003-01-01

    Reliable detection of ordinary facial expressions (e.g. smile) despite the variability among individuals as well as face appearance is an important step toward the realization of a perceptual user interface with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expression. The results show reliable detection of smiles with a recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.

  19. Measurement of radon progenies using the Timepix detector.

    PubMed

    Bulanek, Boris; Jilek, Karel; Cermak, Pavel

    2014-07-01

    After an introduction to the Timepix detector, results obtained with detectors having silicon and cadmium telluride detection layers in the assessment of the activity of short-lived radon decay products are presented. Samples were collected on an open-face filter by means of a one-grab sampling method from the NRPI radon chamber. The activity of short-lived radon decay products was estimated from measured alpha decays of (218)Po and (214)Po. The results indicate very good agreement between both Timepix detectors and an NRPI reference instrument, the continuous monitor Fritra 4. The low-level detection limit for EEC was estimated to be 41 Bq m(-3) for the silicon detection layer and 184 Bq m(-3) for the CdTe detection layer, respectively. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  20. Vibration characteristics and damage detection in a suspension bridge

    NASA Astrophysics Data System (ADS)

    Wickramasinghe, Wasanthi R.; Thambiratnam, David P.; Chan, Tommy H. T.; Nguyen, Theanh

    2016-08-01

    Suspension bridges are flexible and vibration sensitive structures that exhibit complex and multi-modal vibration. Due to this, the usual vibration based methods could face a challenge when used for damage detection in these structures. This paper develops and applies a mode shape component specific damage index (DI) to detect and locate damage in a suspension bridge with pre-tensioned cables. This is important as suspension bridges are large structures and damage in them during their long service lives could easily go unnoticed. The capability of the proposed vibration based DI is demonstrated through its application to detect and locate single and multiple damages with varied locations and severity in the cables of the suspension bridge. The outcome of this research will enhance the safety and performance of these bridges, which play an important role in the transport network.

  1. Change detection of medical images using dictionary learning techniques and PCA

    NASA Astrophysics Data System (ADS)

    Nika, Varvara; Babyn, Paul; Zhu, Hongmei

    2014-03-01

    Automatic change detection methods for identifying the changes of serial MR images taken at different times are of great interest to radiologists. The majority of existing change detection methods in medical imaging, and those of brain images in particular, include many preprocessing steps and rely mostly on statistical analysis of MRI scans. Although most methods utilize registration software, tissue classification remains a difficult and overwhelming task. Recently, dictionary learning techniques have been used in many areas of image processing, such as image surveillance, face recognition, remote sensing, and medical imaging. In this paper we present the Eigen-Block Change Detection algorithm (EigenBlockCD). It performs local registration and identifies the changes between consecutive MR images of the brain. Blocks of pixels from the baseline scan are used to train local dictionaries that are then used to detect changes in the follow-up scan. We use PCA to reduce the dimensionality of the local dictionaries and the redundancy of the data. Choosing the appropriate distance measure significantly affects the performance of our algorithm. We examine the differences between the L1 and L2 norms as two possible similarity measures in the EigenBlockCD. We show the advantages of the L2 norm over the L1 norm theoretically and numerically. We also demonstrate the performance of the EigenBlockCD algorithm for detecting changes in MR images and compare our results with those provided in recent literature. Experimental results with both simulated and real MRI scans show that the EigenBlockCD outperforms the previous methods. It detects clinical changes while ignoring changes due to the patient's position and other acquisition artifacts.
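
    As a rough illustration of the block-wise idea described above (local dictionaries built from baseline blocks, PCA for dimensionality reduction, and an L2 residual as the change score), a minimal sketch follows. Block size, search radius, and number of components are illustrative assumptions; the actual EigenBlockCD also performs local registration, which is omitted here.

```python
# Rough sketch of block-wise PCA change scoring between two co-registered
# scans; block size, search radius, and number of components are illustrative.
import numpy as np

def change_map(baseline, followup, block=8, search=2, k=4):
    h, w = baseline.shape
    score = np.zeros((h // block, w // block))
    for bi in range(h // block):
        for bj in range(w // block):
            y, x = bi * block, bj * block
            target = followup[y:y + block, x:x + block].ravel().astype(float)

            # Local dictionary: baseline blocks shifted around (y, x).
            atoms = []
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy and yy + block <= h and 0 <= xx and xx + block <= w:
                        atoms.append(baseline[yy:yy + block, xx:xx + block].ravel())
            D = np.array(atoms, dtype=float)

            # PCA of the local dictionary, then L2 reconstruction residual.
            mean = D.mean(axis=0)
            _, _, vt = np.linalg.svd(D - mean, full_matrices=False)
            basis = vt[:k]
            resid = (target - mean) - basis.T @ (basis @ (target - mean))
            score[bi, bj] = np.linalg.norm(resid)          # L2 change score
    return score
```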

  2. Expectation and Surprise Determine Neural Population Responses in the Ventral Visual Stream

    PubMed Central

    Egner, Tobias; Monti, Jim M.; Summerfield, Christopher

    2014-01-01

    Visual cortex is traditionally viewed as a hierarchy of neural feature detectors, with neural population responses being driven by bottom-up stimulus features. Conversely, “predictive coding” models propose that each stage of the visual hierarchy harbors two computationally distinct classes of processing unit: representational units that encode the conditional probability of a stimulus and provide predictions to the next lower level; and error units that encode the mismatch between predictions and bottom-up evidence, and forward prediction error to the next higher level. Predictive coding therefore suggests that neural population responses in category-selective visual regions, like the fusiform face area (FFA), reflect a summation of activity related to prediction (“face expectation”) and prediction error (“face surprise”), rather than a homogenous feature detection response. We tested the rival hypotheses of the feature detection and predictive coding models by collecting functional magnetic resonance imaging data from the FFA while independently varying both stimulus features (faces vs houses) and subjects’ perceptual expectations regarding those features (low vs medium vs high face expectation). The effects of stimulus and expectation factors interacted, whereby FFA activity elicited by face and house stimuli was indistinguishable under high face expectation and maximally differentiated under low face expectation. Using computational modeling, we show that these data can be explained by predictive coding but not by feature detection models, even when the latter are augmented with attentional mechanisms. Thus, population responses in the ventral visual stream appear to be determined by feature expectation and surprise rather than by stimulus features per se. PMID:21147999

  3. Hepatitis B Core Antigen in Hepatocytes of Chronic Hepatitis B: Comparison between Indirect Immunofluorescence and Immunoperoxidase Method

    PubMed Central

    Tabassum, Shahina; Al-Mahtab, Mamun; Nessa, Afzalun; Jahan, Munira; Shamim Kabir, Chowdhury Mohammad; Kamal, Mohammad; Cesar Aguilar, Julio

    2015-01-01

    Background Hepatitis B virus (HBV) infection has many faces. Precore and core promoter mutants resemble inactive carrier status. The identification of hepatitis B core antigen (HBcAg) in hepatocytes may have variable clinical significance. The present study was undertaken to detect HBcAg in chronic hepatitis B (CHB) patients and to assess the efficacy of detection system by indirect immunofluorescence (IIF) and indirect immunoperoxidase (IIP). Materials and methods The study was done in 70 chronic HBV-infected patients. Out of 70 patients, eight (11.4%) were hepatitis B e antigen (HBeAg) positive and 62 (88.57%) were HBeAg negative. Hepatitis B core antigen was detected by indirect immunofluorescence (IIF) and indirect immunoperoxidase (IIP) methods in liver tissue. Results All HBeAg positive patients expressed HBcAg by both IIF and IIP methods. Out of 62 patients with HBeAg-negative CHB, HBcAg was detected by IIF in 55 (88.7%) patients and by IIP in 51 (82.26%) patients. A positive relation among viral load and HBcAg detection was also found. This was more evident in the case of HBeAg negative patients and showed a positive relation with HBV DNA levels. Conclusion Hepatitis B core antigen can be detected using the IIF from formalin fixed paraffin block preparation and also by IIP method. This seems to reflect the magnitudes of HBV replication in CHB. How to cite this article Raihan R, Tabassum S, Al-Mahtab M, Nessa A, Jahan M, Kabir CMS, Kamal M, Aguilar JC. Hepatitis B Core Antigen in Hepatocytes of Chronic Hepatitis B: Comparison between Indirect Immunofluorescence and Immunoperoxidase Method. Euroasian J Hepato-Gastroenterol 2015;5(1):7-10. PMID:29201677

  4. Fraudulent ID using face morphs: Experiments on human and automatic recognition

    PubMed Central

    Robertson, David J.; Kramer, Robin S. S.

    2017-01-01

    Matching unfamiliar faces is known to be difficult, and this can give an opportunity to those engaged in identity fraud. Here we examine a relatively new form of fraud, the use of photo-ID containing a graphical morph between two faces. Such a document may look sufficiently like two people to serve as ID for both. We present two experiments with human viewers, and a third with a smartphone face recognition system. In Experiment 1, viewers were asked to match pairs of faces, without being warned that one of the pair could be a morph. They very commonly accepted a morphed face as a match. However, in Experiment 2, following very short training on morph detection, their acceptance rate fell considerably. Nevertheless, there remained large individual differences in people’s ability to detect a morph. In Experiment 3 we show that a smartphone makes errors at a similar rate to ‘trained’ human viewers—i.e. accepting a small number of morphs as genuine ID. We discuss these results in reference to the use of face photos for security. PMID:28328928

  5. Fraudulent ID using face morphs: Experiments on human and automatic recognition.

    PubMed

    Robertson, David J; Kramer, Robin S S; Burton, A Mike

    2017-01-01

    Matching unfamiliar faces is known to be difficult, and this can give an opportunity to those engaged in identity fraud. Here we examine a relatively new form of fraud, the use of photo-ID containing a graphical morph between two faces. Such a document may look sufficiently like two people to serve as ID for both. We present two experiments with human viewers, and a third with a smartphone face recognition system. In Experiment 1, viewers were asked to match pairs of faces, without being warned that one of the pair could be a morph. They very commonly accepted a morphed face as a match. However, in Experiment 2, following very short training on morph detection, their acceptance rate fell considerably. Nevertheless, there remained large individual differences in people's ability to detect a morph. In Experiment 3 we show that a smartphone makes errors at a similar rate to 'trained' human viewers-i.e. accepting a small number of morphs as genuine ID. We discuss these results in reference to the use of face photos for security.

  6. Developing the Own-Race Advantage in 4-, 6-, and 9-Month-Old Taiwanese Infants: A Perceptual Learning Perspective

    PubMed Central

    Chien, Sarina Hui-Lin; Wang, Jing-Fong; Huang, Tsung-Ren

    2016-01-01

    Previous infant studies on the other-race effect have favored the perceptual narrowing view, or declined sensitivities to rarely exposed other-race faces. Here we wish to provide an alternative possibility, perceptual learning, manifested by improved sensitivity for frequently exposed own-race faces in the first year of life. Using the familiarization/visual-paired comparison paradigm, we presented 4-, 6-, and 9-month-old Taiwanese infants with oval-cropped Taiwanese, Caucasian, and Filipino faces, each with three different manipulations of increasing task difficulty (i.e., change identity, change eyes, and widen eye spacing). An adult experiment was first conducted to verify the task difficulty. Our results showed that, with oval-cropped faces, the 4-month-old infants could only discriminate the Taiwanese “change identity” condition and not any others, suggesting an early own-race advantage at 4 months. The 6-month-old infants demonstrated novelty preferences in both Taiwanese and Caucasian “change identity” conditions, and proceeded to the Taiwanese “change eyes” condition. The 9-month-old infants demonstrated novelty preferences in the “change identity” condition of all three ethnic faces. They also passed the Taiwanese “change eyes” condition but could not extend this refined ability of detecting a change in the eyes to the Caucasian or Filipino faces. Taken together, we interpret the pattern of results as evidence supporting perceptual learning during the first year: the ability to discriminate own-race faces emerges at 4 months and continues to refine, while the ability to discriminate other-race faces emerges between 6 and 9 months and is retained at 9 months. Additionally, the discrepancies in the face stimuli and methods between studies advocating the narrowing view and those supporting the learning view were discussed.

  7. Adaptive Integration and Optimization of Automated and Neural Processing Systems - Establishing Neural and Behavioral Benchmarks of Optimized Performance

    DTIC Science & Technology

    2012-07-01

    detection only condition followed either face detection only or dual task, thus ensuring that participants were practiced in face detection before...

  8. A cloud shadow detection method combined with cloud height iteration and spectral analysis for Landsat 8 OLI data

    NASA Astrophysics Data System (ADS)

    Sun, Lin; Liu, Xinyan; Yang, Yikun; Chen, TingTing; Wang, Quan; Zhou, Xueying

    2018-04-01

    Although enhanced over prior Landsat instruments, Landsat 8 OLI can achieve very high cloud detection precision; the detection of cloud shadows, however, still faces great challenges. Geometry-based cloud shadow detection methods are considered the most effective and are being improved constantly. The Function of Mask (Fmask) cloud shadow detection method is one of the most representative geometry-based methods that has been used for cloud shadow detection with Landsat 8 OLI. However, the Fmask method estimates cloud height employing fixed temperature rates, which are highly uncertain, and errors in large-area cloud shadow detection can be caused by errors in the estimation of cloud height. This article improves the geometry-based cloud shadow detection method for Landsat OLI in the following two respects. (1) Cloud height no longer depends on the brightness temperature of the thermal infrared band but uses a possible dynamic range from 200 m to 12,000 m. In this case, the cloud shadow is not a specific location but a possible range. Further analysis is then carried out within this range, based on the spectrum, to determine the cloud shadow location. This effectively avoids the cloud shadow leakage caused by errors in the determination of cloud height. (2) Object-based and pixel spectral analyses are combined to detect cloud shadows, which realizes cloud shadow detection at both the target scale and the pixel scale. Based on the analysis of the spectral differences between cloud shadows and typical ground objects, the best cloud shadow detection bands of Landsat 8 OLI were determined. The combined use of spectrum and shape can effectively improve the detection precision of cloud shadows produced by thin clouds. Several cloud shadow detection experiments were carried out, and the results were verified against manual (visual) recognition. The results of these experiments indicated that this method can identify cloud shadows in different regions with a correct accuracy exceeding 80%; approximately 5% of the areas were wrongly identified, and approximately 10% of the cloud shadow areas were missed. The accuracy of this method is clearly higher than that of Fmask, which has a correct accuracy lower than 60% and misses approximately 40% of cloud shadow areas.
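
    The geometric part of the improvement, sweeping a dynamic range of cloud heights instead of relying on a brightness-temperature estimate, can be sketched as below. The sweep produces the range of candidate shadow positions that the spectral analysis would then examine; the height step and the azimuth convention are assumptions, not values from the paper.

```python
# Minimal sketch of the height sweep: for one cloud pixel, list candidate
# shadow offsets for heights from 200 m to 12,000 m.  The spectral test
# described in the paper would then pick the true shadow within this range.
import numpy as np

def candidate_shadow_offsets(sun_zenith_deg, sun_azimuth_deg,
                             h_min=200.0, h_max=12000.0, step=200.0):
    """Return (height, east_offset_m, north_offset_m): the shadow lies on the
    anti-solar side at a ground distance of height * tan(solar zenith)."""
    zen = np.radians(sun_zenith_deg)
    azi = np.radians(sun_azimuth_deg)        # azimuth clockwise from north
    out = []
    for h in np.arange(h_min, h_max + step, step):
        d = h * np.tan(zen)                  # horizontal cloud-to-shadow distance
        out.append((h, -d * np.sin(azi), -d * np.cos(azi)))
    return out

# Converting (east, north) metres to row/column offsets depends on the image's
# pixel size and orientation (for Landsat, roughly 30 m per pixel, north-up).
```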

  9. Learning optimal embedded cascades.

    PubMed

    Saberian, Mohammad Javad; Vasconcelos, Nuno

    2012-10-01

    The problem of automatic and optimal design of embedded object detector cascades is considered. Two main challenges are identified: optimization of the cascade configuration and optimization of individual cascade stages, so as to achieve the best tradeoff between classification accuracy and speed, under a detection rate constraint. Two novel boosting algorithms are proposed to address these problems. The first, RCBoost, formulates boosting as a constrained optimization problem which is solved with a barrier penalty method. The constraint is the target detection rate, which is met at all iterations of the boosting process. This enables the design of embedded cascades of known configuration without extensive cross validation or heuristics. The second, ECBoost, searches over cascade configurations to achieve the optimal tradeoff between classification risk and speed. The two algorithms are combined into an overall boosting procedure, RCECBoost, which optimizes both the cascade configuration and its stages under a detection rate constraint, in a fully automated manner. Extensive experiments in face, car, pedestrian, and panda detection show that the resulting detectors achieve an accuracy versus speed tradeoff superior to those of previous methods.
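
    The accuracy/speed trade-off that an embedded cascade exploits can be illustrated with a generic detection-time sketch: weak learners are evaluated in order, and a window is rejected as soon as the running score falls below that stage's threshold. This shows the cascade structure only; it is not the RCBoost/ECBoost training procedure described above.

```python
# Generic evaluation of an embedded (soft) cascade at detection time.
def cascade_score(window, weak_learners, stage_thresholds):
    """weak_learners: callables mapping a window to a real-valued vote;
    stage_thresholds: rejection threshold applied after each weak learner."""
    score = 0.0
    for h, theta in zip(weak_learners, stage_thresholds):
        score += h(window)
        if score < theta:          # early rejection keeps detection fast
            return None
    return score                   # survivors are object (e.g. face) candidates
```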

  10. [Quartz-enhanced photoacoustic spectroscopy trace gas detection system based on the Fabry-Perot demodulation].

    PubMed

    Lin, Cheng; Zhu, Yong; Wei, Wei; Zhang, Jie; Tian, Li; Xu, Zu-Wen

    2013-05-01

    An all-optical quartz-enhanced photoacoustic spectroscopy system, based on F-P demodulation, for trace gas detection in the open environment was proposed. In quartz-enhanced photoacoustic spectroscopy (QEPAS), an optical fiber Fabry-Perot method was used to replace the conventional electronic demodulation method. The photoacoustic signal was obtained by demodulating the variation of the Fabry-Perot cavity between the quartz tuning fork side and the fiber face. An experimental system was set up, and an experiment for the detection of water vapour in the open environment was carried out. A normalized noise equivalent absorption coefficient of 2.80 x 10(-7) cm(-1) x W x Hz(-1/2) was achieved. The result demonstrated that the sensitivity of the all-optical quartz-enhanced photoacoustic spectroscopy system is about 2.6 times higher than that of the conventional QEPAS system. The all-optical quartz-enhanced photoacoustic spectroscopy system is immune to electromagnetic interference, safe in flammable and explosive gas detection, suitable for high temperature and high humidity environments and realizable for long distance, multi-point and network sensing.

  11. Face recognition for criminal identification: An implementation of principal component analysis for face recognition

    NASA Astrophysics Data System (ADS)

    Abdullah, Nurul Azma; Saidi, Md. Jamri; Rahman, Nurul Hidayah Ab; Wen, Chuah Chai; Hamid, Isredza Rahmi A.

    2017-10-01

    In practice, identification of criminals in Malaysia is done through thumbprint identification. However, this type of identification is constrained, as most criminals nowadays are careful not to leave their thumbprints at the scene. With the advent of security technology, cameras, especially CCTV, have been installed in many public and private areas to provide surveillance. CCTV footage can be used to identify suspects at the scene. However, because little software has been developed to automatically match a photo from the footage against recorded photos of criminals, law enforcement still relies on thumbprint identification. In this paper, an automated facial recognition system for a criminal database is proposed using the well-known Principal Component Analysis approach. This system is able to detect and recognize faces automatically, which will help law enforcement to detect or recognize a suspect when no thumbprint is present at the scene. The results show that about 80% of input photos can be matched with the template data.
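
    A minimal eigenfaces-style sketch of the Principal Component Analysis matching step is given below, assuming flattened and aligned face images; the number of components and the acceptance threshold are illustrative, not values from the paper.

```python
# Eigenfaces-style PCA matching (a sketch, not the authors' code).
import numpy as np

def train_eigenfaces(gallery, k=20):
    """gallery: (n_subjects, h*w) array of flattened, aligned face images."""
    mean = gallery.mean(axis=0)
    _, _, vt = np.linalg.svd(gallery - mean, full_matrices=False)
    basis = vt[:k]                                # top-k eigenfaces
    coords = (gallery - mean) @ basis.T           # gallery projections
    return mean, basis, coords

def match(probe, mean, basis, coords, labels, threshold=2500.0):
    """labels: subject identifiers aligned with the gallery rows."""
    p = (probe - mean) @ basis.T
    dists = np.linalg.norm(coords - p, axis=1)
    i = int(np.argmin(dists))
    return labels[i] if dists[i] < threshold else None   # None: no match found
```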

  12. Correlation based efficient face recognition and color change detection

    NASA Astrophysics Data System (ADS)

    Elbouz, M.; Alfalou, A.; Brosseau, C.; Alam, M. S.; Qasmi, S.

    2013-01-01

    Identifying the human face via correlation is a topic attracting widespread interest. At the heart of this technique lies the comparison of an unknown target image to a known reference database of images. However, the color information in the target image remains notoriously difficult to interpret. In this paper, we report a new technique which: (i) is robust against illumination change, (ii) offers discrimination ability to detect color change between faces having similar shape, and (iii) is specifically designed to detect red colored stains (i.e. facial bleeding). We adopt the Vanderlugt correlator (VLC) architecture with a segmented phase filter and we decompose the color target image using normalized red, green, and blue (RGB), and hue, saturation, and value (HSV) scales. We propose a new strategy to effectively utilize color information in signatures for further increasing the discrimination ability. The proposed algorithm has been found to be very efficient for discriminating face subjects with different skin colors, and those having color stains in different areas of the facial image.
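
    The colour-discrimination idea (detecting red stains such as facial bleeding) can be illustrated in isolation with a simple HSV threshold; the hue and saturation bounds below are assumptions, and the VLC correlation stage of the actual system is not reproduced here.

```python
# Simple red-stain mask in HSV; bounds are illustrative only.
import cv2

def red_stain_mask(face_bgr):
    hsv = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around hue 0 in OpenCV's 0-179 hue range, so use two bands.
    lower = cv2.inRange(hsv, (0, 80, 50), (10, 255, 255))
    upper = cv2.inRange(hsv, (170, 80, 50), (179, 255, 255))
    return cv2.bitwise_or(lower, upper)
```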

  13. Face verification system for Android mobile devices using histogram based features

    NASA Astrophysics Data System (ADS)

    Sato, Sho; Kobayashi, Kazuhiro; Chen, Qiu

    2016-07-01

    This paper proposes a face verification system that runs on Android mobile devices. In this system, a facial image is first captured by a built-in camera on the Android device, and then face detection is implemented using Haar-like features and the AdaBoost learning algorithm. The proposed system verifies the detected face using histogram-based features, which are generated by a binary Vector Quantization (VQ) histogram using DCT coefficients in low frequency domains, as well as an Improved Local Binary Pattern (Improved LBP) histogram in the spatial domain. Verification results with different types of histogram-based features are first obtained separately and then combined by weighted averaging. We evaluate our proposed algorithm by using the publicly available ORL database and facial images captured by an Android tablet.
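
    The Improved-LBP part of the feature can be illustrated with a basic 8-neighbour LBP histogram, shown below; the VQ histogram of DCT coefficients and the weighted score combination described in the abstract are omitted, so this is only a partial sketch.

```python
# Basic 8-neighbour LBP code per pixel followed by a normalised 256-bin
# histogram (a sketch of the spatial-domain feature only).
import numpy as np

def lbp_histogram(gray):
    """gray: 2-D uint8 array (a detected, cropped face)."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]
    neighbours = [g[:-2, :-2], g[:-2, 1:-1], g[:-2, 2:],
                  g[1:-1, 2:], g[2:, 2:], g[2:, 1:-1],
                  g[2:, :-2], g[1:-1, :-2]]
    codes = np.zeros_like(c)
    for bit, n in enumerate(neighbours):
        codes |= ((n >= c).astype(np.int32) << bit)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)

# Two faces can then be compared with, e.g., histogram intersection:
# similarity = np.minimum(h1, h2).sum()
```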

  14. Support vector machine for automatic pain recognition

    NASA Astrophysics Data System (ADS)

    Monwar, Md Maruf; Rezaei, Siamak

    2009-02-01

    Facial expressions are a key index of emotion and the interpretation of such expressions of emotion is critical to everyday social functioning. In this paper, we present an efficient video analysis technique for recognition of a specific expression, pain, from human faces. We employ an automatic face detector which detects the face from the stored video frame using a skin color modeling technique. For pain recognition, location and shape features of the detected faces are computed. These features are then used as inputs to a support vector machine (SVM) for classification. We compare the results with neural network based and eigenimage based automatic pain recognition systems. The experimental results indicate that using a support vector machine as the classifier can certainly improve the performance of the automatic pain recognition system.
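
    The classification stage can be sketched with a standard SVM, assuming the location and shape features have already been extracted; scikit-learn is used purely for illustration, and the kernel and train/test split are arbitrary choices.

```python
# Sketch of the SVM classification stage on pre-extracted features.
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def train_pain_classifier(features, labels):
    """features: (n_frames, n_features) array; labels: 1 = pain, 0 = no pain."""
    X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.25,
                                              random_state=0, stratify=labels)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    return clf
```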

  15. Validity, Sensitivity, and Responsiveness of the 11-Face Faces Pain Scale to Postoperative Pain in Adult Orthopedic Surgery Patients.

    PubMed

    Van Giang, Nguyen; Chiu, Hsiao-Yean; Thai, Duong Hong; Kuo, Shu-Yu; Tsai, Pei-Shan

    2015-10-01

    Pain is common in patients after orthopedic surgery. The 11-face Faces Pain Scale has not been validated for use in adult patients with postoperative pain. This study aimed to assess the validity of the 11-face Faces Pain Scale and its ability to detect responses to pain medications, and to determine whether the sensitivity of the 11-face Faces Pain Scale for detecting changes in pain intensity over time is associated with gender differences in adult postorthopedic surgery patients. The 11-face Faces Pain Scale was translated into Vietnamese using forward and back translation. Postoperative pain was assessed using an 11-point numerical rating scale and the 11-face Faces Pain Scale on the day of surgery, and before (Time 1) and every 30 minutes after (Times 2-5) the patients had taken pain medications on the first postoperative day. The 11-face Faces Pain Scale highly correlated with the numerical rating scale (r = 0.78, p < .001). When the scores from each follow-up test (Times 2-5) were compared with those from the baseline test (Time 1), the effect sizes were -0.70, -1.05, -1.20, and -1.31, and the standardized response means were -1.17, -1.59, -1.66, and -1.82, respectively. The mean change in pain intensity, but not the gender-time interaction effect, over the five time points was significant (F = 182.03, p < .001). Our results support that the 11-face Faces Pain Scale is appropriate for measuring acute postoperative pain in adults. Copyright © 2015 American Society for Pain Management Nursing. Published by Elsevier Inc. All rights reserved.
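
    The responsiveness statistics quoted above follow standard definitions: the effect size divides the mean change by the baseline standard deviation, while the standardized response mean divides it by the standard deviation of the change scores. A small sketch (with invented placeholder ratings, not the study's data) is shown below.

```python
# Standard responsiveness statistics for paired scores.
import numpy as np

def responsiveness(baseline, followup):
    baseline, followup = np.asarray(baseline, float), np.asarray(followup, float)
    change = followup - baseline
    effect_size = change.mean() / baseline.std(ddof=1)   # mean change / SD at baseline
    srm = change.mean() / change.std(ddof=1)             # mean change / SD of change
    return effect_size, srm

# Example with made-up pain ratings on a 0-10 scale (placeholder data):
# es, srm = responsiveness([6, 7, 5, 8, 6], [4, 5, 3, 5, 4])
```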

  16. Observing real-time social interaction via telecommunication methods in budgerigars (Melopsittacus undulatus).

    PubMed

    Ikkatai, Yuko; Okanoya, Kazuo; Seki, Yoshimasa

    2016-07-01

    Humans communicate with one another not only face-to-face but also via modern telecommunication methods such as television and video conferencing. We readily detect the difference between people actively communicating with us and people merely acting via a broadcasting system. We developed an animal model of this novel communication method seen in humans to determine whether animals also make this distinction. We built a system for two animals to interact via audio-visual equipment in real-time, to compare behavioral differences between two conditions, an "interactive two-way condition" and a "non-interactive (one-way) condition." We measured birds' responses to stimuli which appeared in these two conditions. We used budgerigars, which are small, gregarious birds, and found that the frequency of vocal interaction with other individuals did not differ between the two conditions. However, body synchrony between the two birds was observed more often in the interactive condition, suggesting budgerigars recognized the difference between these interactive and non-interactive conditions on some level. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. The Deceptively Simple N170 Reflects Network Information Processing Mechanisms Involving Visual Feature Coding and Transfer Across Hemispheres

    PubMed Central

    Ince, Robin A. A.; Jaworska, Katarzyna; Gross, Joachim; Panzeri, Stefano; van Rijsbergen, Nicola J.; Rousselet, Guillaume A.; Schyns, Philippe G.

    2016-01-01

    A key to understanding visual cognition is to determine “where”, “when”, and “how” brain responses reflect the processing of the specific visual features that modulate categorization behavior—the “what”. The N170 is the earliest Event-Related Potential (ERP) that preferentially responds to faces. Here, we demonstrate that a paradigmatic shift is necessary to interpret the N170 as the product of an information processing network that dynamically codes and transfers face features across hemispheres, rather than as a local stimulus-driven event. Reverse-correlation methods coupled with information-theoretic analyses revealed that visibility of the eyes influences face detection behavior. The N170 initially reflects coding of the behaviorally relevant eye contralateral to the sensor, followed by a causal communication of the other eye from the other hemisphere. These findings demonstrate that the deceptively simple N170 ERP hides a complex network information processing mechanism involving initial coding and subsequent cross-hemispheric transfer of visual features. PMID:27550865

  18. Context-Aware Local Binary Feature Learning for Face Recognition.

    PubMed

    Duan, Yueqi; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie

    2018-05-01

    In this paper, we propose a context-aware local binary feature learning (CA-LBFL) method for face recognition. Unlike existing learning-based local face descriptors such as discriminant face descriptor (DFD) and compact binary face descriptor (CBFD) which learn each feature code individually, our CA-LBFL exploits the contextual information of adjacent bits by constraining the number of shifts from different binary bits, so that more robust information can be exploited for face representation. Given a face image, we first extract pixel difference vectors (PDV) in local patches, and learn a discriminative mapping in an unsupervised manner to project each pixel difference vector into a context-aware binary vector. Then, we perform clustering on the learned binary codes to construct a codebook, and extract a histogram feature for each face image with the learned codebook as the final representation. In order to exploit local information from different scales, we propose a context-aware local binary multi-scale feature learning (CA-LBMFL) method to jointly learn multiple projection matrices for face representation. To make the proposed methods applicable for heterogeneous face recognition, we present a coupled CA-LBFL (C-CA-LBFL) method and a coupled CA-LBMFL (C-CA-LBMFL) method to reduce the modality gap of corresponding heterogeneous faces in the feature level, respectively. Extensive experimental results on four widely used face datasets clearly show that our methods outperform most state-of-the-art face descriptors.

  19. Fraud in a population-based study of headache: prevention, detection and correction

    PubMed Central

    2014-01-01

    Background In medicine, research misconduct is historically associated with laboratory or pharmaceutical research, but the vulnerability of epidemiological surveys should be recognized. As these surveys underpin health policy and allocation of limited resources, misreporting can have far-reaching implications. We report how fraud in a nationwide headache survey occurred and how it was discovered and rectified before it could cause harm. Methods The context was a door-to-door survey to estimate the prevalence and burden of headache disorders in Pakistan. Data were collected from all four provinces of Pakistan by non-medical interviewers and collated centrally. Measures to ensure data integrity were preventative, detective and corrective. We carefully selected and trained the interviewers, set rules of conduct and gave specific warnings regarding the consequences of falsification. We employed two-fold fraud detection methods: comparative data analysis, and face-to-face re-contact with randomly selected participants. When fabrication was detected, data shown to be unreliable were replaced by repeating the survey in new samples according to the original protocol. Results Comparative analysis of datasets from the regions revealed unfeasible prevalences and gender ratios in one (Multan). Data fabrication was suspected. During a surprise-visit to Multan, of a random sample of addresses selected for verification, all but one had been falsely reported. The data (from 840 cases) were discarded, and the survey repeated with new interviewers. The new sample of 800 cases was demographically and diagnostically consistent with other regions. Conclusion Fraud in community-based surveys is seldom reported, but no less likely to occur than in other fields of medical research. Measures should be put in place to prevent, detect and, where necessary, correct it. In this instance, had the data from Multan been pooled with those from other regions before analysis, a damaging fraud might have escaped notice. PMID:24916996

  20. Scrambling for anonymous visual communications

    NASA Astrophysics Data System (ADS)

    Dufaux, Frederic; Ebrahimi, Touradj

    2005-08-01

    In this paper, we present a system for anonymous visual communications. The target application is anonymous video chat. The system identifies faces in the video sequence by means of face detection or skin detection. The corresponding regions are subsequently scrambled. We investigate several approaches for scrambling, either in the image domain or in the transform domain. Experimental results show the effectiveness of the proposed system.
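
    One of the image-domain scrambling variants can be sketched as follows: detect face regions with a stock OpenCV cascade and shuffle the pixel blocks inside each region. The cascade, block size, and random seed are assumptions for illustration only.

```python
# Sketch of image-domain face scrambling by block shuffling.
import cv2
import numpy as np

def scramble_faces(frame, block=8, seed=0):
    rng = np.random.default_rng(seed)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        w, h = (w // block) * block, (h // block) * block      # whole blocks only
        roi = frame[y:y + h, x:x + w]
        blocks = [roi[r:r + block, c:c + block].copy()
                  for r in range(0, h, block) for c in range(0, w, block)]
        rng.shuffle(blocks)                                    # scramble the region
        i = 0
        for r in range(0, h, block):
            for c in range(0, w, block):
                roi[r:r + block, c:c + block] = blocks[i]
                i += 1
    return frame
```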

  1. Improving face image extraction by using deep learning technique

    NASA Astrophysics Data System (ADS)

    Xue, Zhiyun; Antani, Sameer; Long, L. R.; Demner-Fushman, Dina; Thoma, George R.

    2016-03-01

    The National Library of Medicine (NLM) has made a collection of over 1.2 million research articles containing 3.2 million figure images searchable using the Open-i multimodal (text+image) search engine. Many images are visible light photographs, some of which are images containing faces ("face images"). Some of these face images are acquired in unconstrained settings, while others are studio photos. To extract the face regions in the images, we first applied one of the most widely used face detectors, a pre-trained Viola-Jones detector implemented in Matlab and OpenCV. The Viola-Jones detector was trained for unconstrained face image detection, but the results for the NLM database included many false positives, which resulted in a very low precision. To improve this performance, we applied a deep learning technique, which reduced the number of false positives and, as a result, the detection precision was improved significantly. (For example, the classification accuracy for identifying whether the face regions output by this Viola-Jones detector are true positives or not in a test set is about 96%.) By combining these two techniques (Viola-Jones and deep learning) we were able to increase the system precision considerably, while avoiding the need to manually construct a large training set by manual delineation of the face regions.
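
    The two-stage idea (Viola-Jones proposals filtered by a deep classifier) can be sketched as below. The cnn_model object stands for any separately trained binary face/non-face classifier, and its predict call is a placeholder API, not the authors' model.

```python
# Sketch: Haar-cascade proposals, then a CNN rejects false positives.
import cv2
import numpy as np

def extract_faces(image_bgr, cnn_model, accept=0.5):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    kept = []
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 4):
        crop = cv2.resize(image_bgr[y:y + h, x:x + w], (64, 64))
        # Placeholder classifier call: any model returning P(face) would do.
        p_face = float(cnn_model.predict(crop[np.newaxis] / 255.0).ravel()[0])
        if p_face >= accept:                 # keep only CNN-confirmed detections
            kept.append((x, y, w, h))
    return kept
```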

  2. Monkeys and Humans Share a Common Computation for Face/Voice Integration

    PubMed Central

    Chandrasekaran, Chandramouli; Lemus, Luis; Trubanova, Andrea; Gondan, Matthias; Ghazanfar, Asif A.

    2011-01-01

    Speech production involves the movement of the mouth and other regions of the face resulting in visual motion cues. These visual cues enhance intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, it should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share with humans vocal production biomechanics and communicate face-to-face with vocalizations. It is unknown, however, if they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations or models such as the principle of inverse effectiveness and a “race” model failed to account for their behavior patterns. Conversely, a “superposition model”, positing the linear summation of activity patterns in response to visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates. PMID:21998576

  3. Association of impaired facial affect recognition with basic facial and visual processing deficits in schizophrenia.

    PubMed

    Norton, Daniel; McBain, Ryan; Holt, Daphne J; Ongur, Dost; Chen, Yue

    2009-06-15

    Impaired emotion recognition has been reported in schizophrenia, yet the nature of this impairment is not completely understood. Recognition of facial emotion depends on processing affective and nonaffective facial signals, as well as basic visual attributes. We examined whether and how poor facial emotion recognition in schizophrenia is related to basic visual processing and nonaffective face recognition. Schizophrenia patients (n = 32) and healthy control subjects (n = 29) performed emotion discrimination, identity discrimination, and visual contrast detection tasks, where the emotionality, distinctiveness of identity, or visual contrast was systematically manipulated. Subjects determined which of two presentations in a trial contained the target: the emotional face for emotion discrimination, a specific individual for identity discrimination, and a sinusoidal grating for contrast detection. Patients had significantly higher thresholds (worse performance) than control subjects for discriminating both fearful and happy faces. Furthermore, patients' poor performance in fear discrimination was predicted by performance in visual detection and face identity discrimination. Schizophrenia patients require greater emotional signal strength to discriminate fearful or happy face images from neutral ones. Deficient emotion recognition in schizophrenia does not appear to be determined solely by affective processing but is also linked to the processing of basic visual and facial information.

  4. Area X-ray or UV camera system for high-intensity beams

    DOEpatents

    Chapman, Henry N.; Bajt, Sasa; Spiller, Eberhard A.; Hau-Riege, Stefan; Marchesini, Stefano

    2010-03-02

    A system in one embodiment includes a source for directing a beam of radiation at a sample; a multilayer mirror having a face oriented at an angle of less than 90 degrees from an axis of the beam from the source, the mirror reflecting at least a portion of the radiation after the beam encounters a sample; and a pixellated detector for detecting radiation reflected by the mirror. A method in a further embodiment includes directing a beam of radiation at a sample; reflecting at least some of the radiation diffracted by the sample; not reflecting at least a majority of the radiation that is not diffracted by the sample; and detecting at least some of the reflected radiation. A method in yet another embodiment includes directing a beam of radiation at a sample; reflecting at least some of the radiation diffracted by the sample using a multilayer mirror; and detecting at least some of the reflected radiation.

  5. Technologies and methods used for the detection, enrichment and characterization of cancer stem cells.

    PubMed

    Williams, Anthony; Datar, Ram; Cote, Richard

    2010-01-01

    Cancer stem cells (CSCs) represent a subclass of tumour cells with the ability for self-renewal, production of differentiated progeny, prolonged survival, resistance to damaging therapeutic agents, and anchorage-independent survival, which together make this population effectively equipped to metastasize, invade and colonize secondary tissues in the face of therapeutic intervention. In recent years, investigators have increasingly focused on the characterization of CSCs to better understand the mechanisms that govern malignant disease progression in an effort to develop more effective, targeted therapeutic agents. The primary obstacle to the study of CSCs, however, is their rarity. Thus, the study of CSCs requires the use of sensitive and efficient technologies for their enrichment and detection. This review discusses technologies and methods that have been adapted and used to isolate and characterize CSCs to date, as well as new potential directions for the enhanced enrichment and detection of CSCs. While the technologies used for CSC enrichment and detection have been useful thus far for their characterization, each approach is not without limitations. Future studies of CSCs will depend on the enhanced sensitivity and specificity of currently available technologies, and the development of novel technologies for increased detection and enrichment of CSCs.

  6. Ultrasonic inspection of studs (bolts) using dynamic predictive deconvolution and wave shaping.

    PubMed

    Suh, D M; Kim, W W; Chung, J G

    1999-01-01

    Bolt degradation has become a major issue in the nuclear industry since the 1980s. If small cracks in stud bolts are not detected early enough, they grow rapidly and cause catastrophic disasters. Their detection, despite its importance, is known to be a very difficult problem due to the complicated structures of the stud bolts. This paper presents a method of detecting and sizing a small crack in the root between two adjacent crests in threads. The key idea comes from the fact that the mode-converted Rayleigh wave travels slowly down the face of the crack and turns from the intersection of the crack and the root of the thread toward the transducer. Thus, when a crack exists, a small delayed pulse due to the Rayleigh wave is detected between large regularly spaced pulses from the thread. The delay time is the same as the propagation delay time of the slow Rayleigh wave and is proportional to the size of the crack. To efficiently detect the slow Rayleigh wave, three methods based on digital signal processing are proposed: wave shaping, dynamic predictive deconvolution, and dynamic predictive deconvolution combined with wave shaping.

  7. Detecting and Categorizing Fleeting Emotions in Faces

    PubMed Central

    Sweeny, Timothy D.; Suzuki, Satoru; Grabowecky, Marcia; Paller, Ken A.

    2013-01-01

    Expressions of emotion are often brief, providing only fleeting images from which to base important social judgments. We sought to characterize the sensitivity and mechanisms of emotion detection and expression categorization when exposure to faces is very brief, and to determine whether these processes dissociate. Observers viewed 2 backward-masked facial expressions in quick succession, 1 neutral and the other emotional (happy, fearful, or angry), in a 2-interval forced-choice task. On each trial, observers attempted to detect the emotional expression (emotion detection) and to classify the expression (expression categorization). Above-chance emotion detection was possible with extremely brief exposures of 10 ms and was most accurate for happy expressions. We compared categorization among expressions using a d′ analysis, and found that categorization was usually above chance for angry versus happy and fearful versus happy, but consistently poor for fearful versus angry expressions. Fearful versus angry categorization was poor even when only negative emotions (fearful, angry, or disgusted) were used, suggesting that this categorization is poor independent of decision context. Inverting faces impaired angry versus happy categorization, but not emotion detection, suggesting that information from facial features is used differently for emotion detection and expression categorizations. Emotion detection often occurred without expression categorization, and expression categorization sometimes occurred without emotion detection. These results are consistent with the notion that emotion detection and expression categorization involve separate mechanisms. PMID:22866885

  8. System for face recognition under expression variations of neutral-sampled individuals using recognized expression warping and a virtual expression-face database

    NASA Astrophysics Data System (ADS)

    Petpairote, Chayanut; Madarasmi, Suthep; Chamnongthai, Kosin

    2018-01-01

    The practical identification of individuals using facial recognition techniques requires the matching of faces with specific expressions to faces from a neutral face database. A method for facial recognition under varied expressions against neutral face samples of individuals via recognition of expression warping and the use of a virtual expression-face database is proposed. In this method, facial expressions are recognized and the input expression faces are classified into facial expression groups. To aid facial recognition, the virtual expression-face database is sorted into average facial-expression shapes and by coarse- and fine-featured facial textures. Wrinkle information is also employed in classification by using a process of masking to adjust input faces to match the expression-face database. We evaluate the performance of the proposed method using the CMU multi-PIE, Cohn-Kanade, and AR expression-face databases, and we find that it provides significantly improved results in terms of face recognition accuracy compared to conventional methods and is acceptable for facial recognition under expression variation.

  9. 3D Face Modeling Using the Multi-Deformable Method

    PubMed Central

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-01-01

    In this paper, we focus on the problem of the accuracy performance of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model and texture mapping using seamless cloning that is a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: accuracy test and robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. By using this texture map, we generate realistic 3D face for individuals at the end of the paper. PMID:23201976

  10. The role of the amygdala and the basal ganglia in visual processing of central vs. peripheral emotional content.

    PubMed

    Almeida, Inês; van Asselen, Marieke; Castelo-Branco, Miguel

    2013-09-01

    In human cognition, most relevant stimuli, such as faces, are processed in central vision. However, it is widely believed that recognition of relevant stimuli (e.g. threatening animal faces) at peripheral locations is also important due to their survival value. Moreover, task instructions have been shown to modulate brain regions involved in threat recognition (e.g. the amygdala). In this respect it is also controversial whether tasks requiring explicit focus on stimulus threat content vs. implicit processing differently engage primitive subcortical structures involved in emotional appraisal. Here we have addressed the role of central vs. peripheral processing in the human amygdala using threatening vs. non-threatening animal face stimuli. First, a simple animal face recognition task with threatening and non-threatening animal faces, as well as non-face control stimuli, was employed in naïve subjects (implicit task). A subsequent task was then performed with the same stimulus categories (but different stimuli) in which subjects were told to explicitly detect threat signals. We found lateralized amygdala responses both to the spatial location of stimuli and to the threatening content of faces depending on the task performed: the right amygdala showed increased responses to central compared to left presented stimuli specifically during the threat detection task, while the left amygdala was better able to discriminate threatening faces from non-facial displays during the animal face recognition task. Additionally, the right amygdala responded to faces during the threat detection task but only when centrally presented. Moreover, we have found no evidence for superior responses of the amygdala to peripheral stimuli. Importantly, we have found that striatal regions activate differentially depending on peripheral vs. central processing of threatening faces. Accordingly, peripheral processing of these stimuli activated the putaminal region more strongly, while central processing engaged mainly the caudate nucleus. We conclude that the human amygdala has a central bias for face stimuli, and that visual processing recruits different striatal regions, putaminal or caudate based, depending on the task and on whether peripheral or central visual processing is involved. © 2013 Elsevier Ltd. All rights reserved.

  11. Detecting 'infant-directedness' in face and voice.

    PubMed

    Kim, Hojin I; Johnson, Scott P

    2014-07-01

    Five- and 3-month-old infants' perception of infant-directed (ID) faces and the role of speech in perceiving faces were examined. Infants' eye movements were recorded as they viewed a series of two side-by-side talking faces, one infant-directed and one adult-directed (AD), while listening to ID speech, AD speech, or in silence. Infants showed consistently greater dwell time on ID faces vs. AD faces, and this ID face preference was consistent across all three sound conditions. ID speech resulted in higher looking overall, but it did not increase looking at the ID face per se. Together, these findings demonstrate that infants' preferences for ID speech extend to ID faces. © 2014 John Wiley & Sons Ltd.

  12. Familiarity Enhances Visual Working Memory for Faces

    ERIC Educational Resources Information Center

    Jackson, Margaret C.; Raymond, Jane E.

    2008-01-01

    Although it is intuitive that familiarity with complex visual objects should aid their preservation in visual working memory (WM), empirical evidence for this is lacking. This study used a conventional change-detection procedure to assess visual WM for unfamiliar and famous faces in healthy adults. Across experiments, faces were upright or…

  13. Event-Related Brain Potential Correlates of Emotional Face Processing

    ERIC Educational Resources Information Center

    Eimer, Martin; Holmes, Amanda

    2007-01-01

    Results from recent event-related brain potential (ERP) studies investigating brain processes involved in the detection and analysis of emotional facial expression are reviewed. In all experiments, emotional faces were found to trigger an increased ERP positivity relative to neutral faces. The onset of this emotional expression effect was…

  14. Exploring the Role of Spatial Frequency Information during Neural Emotion Processing in Human Infants.

    PubMed

    Jessen, Sarah; Grossmann, Tobias

    2017-01-01

    Enhanced attention to fear expressions in adults is primarily driven by information from low as opposed to high spatial frequencies contained in faces. However, little is known about the role of spatial frequency information in emotion processing during infancy. In the present study, we examined the role of low compared to high spatial frequencies in the processing of happy and fearful facial expressions by using filtered face stimuli and measuring event-related brain potentials (ERPs) in 7-month-old infants ( N = 26). Our results revealed that infants' brains discriminated between emotional facial expressions containing high but not between expressions containing low spatial frequencies. Specifically, happy faces containing high spatial frequencies elicited a smaller Nc amplitude than fearful faces containing high spatial frequencies and happy and fearful faces containing low spatial frequencies. Our results demonstrate that already in infancy spatial frequency content influences the processing of facial emotions. Furthermore, we observed that fearful facial expressions elicited a comparable Nc response for high and low spatial frequencies, suggesting a robust detection of fearful faces irrespective of spatial frequency content, whereas the detection of happy facial expressions was contingent upon frequency content. In summary, these data provide new insights into the neural processing of facial emotions in early development by highlighting the differential role played by spatial frequencies in the detection of fear and happiness.

  15. Tools for Protecting the Privacy of Specific Individuals in Video

    NASA Astrophysics Data System (ADS)

    Chen, Datong; Chang, Yi; Yan, Rong; Yang, Jie

    2007-12-01

    This paper presents a system for protecting the privacy of specific individuals in video recordings. We address the following two problems: automatic people identification with limited labeled data, and human body obscuring with preserved structure and motion information. In order to address the first problem, we propose a new discriminative learning algorithm to improve people identification accuracy using limited training data labeled from the original video and imperfect pairwise constraints labeled from face obscured video data. We employ a robust face detection and tracking algorithm to obscure human faces in the video. Our experiments in a nursing home environment show that the system can obtain a high accuracy of people identification using limited labeled data and noisy pairwise constraints. The study result indicates that human subjects can perform reasonably well in labeling pairwise constraints with the face masked data. For the second problem, we propose a novel method of body obscuring, which removes the appearance information of the people while preserving rich structure and motion information. The proposed approach provides a way to minimize the risk of exposing the identities of the protected people while maximizing the use of the captured data for activity/behavior analysis.
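
    A minimal sketch of the face-obscuring step, assuming OpenCV's stock Haar cascade as the detector and a Gaussian blur as the obscuring operation; the paper's discriminative identification and structure-preserving body obscuring are not reproduced, and the input file name is a placeholder.

    ```python
    # Hedged sketch: detect faces in a frame and blur them to hide identity.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    frame = cv2.imread("frame.png")                      # a single video frame (assumed file)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)  # obscure the face

    cv2.imwrite("frame_obscured.png", frame)
    ```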

  16. Optical filter for highlighting spectral features part I: design and development of the filter for discrimination of human skin with and without an application of cosmetic foundation.

    PubMed

    Nishino, Ken; Nakamura, Mutsuko; Matsumoto, Masayuki; Tanno, Osamu; Nakauchi, Shigeki

    2011-03-28

    Light reflected from an object's surface contains much information about its physical and chemical properties. Changes in the physical properties of an object are barely detectable in spectra. Conventional trichromatic systems, on the other hand, cannot detect most spectral features because spectral information is compressively represented as trichromatic signals forming a three-dimensional subspace. We propose a method for designing a filter that optically modulates a camera's spectral sensitivity to find an alternative subspace that highlights an object's spectral features more effectively than the original trichromatic space. We designed and developed a filter that detects cosmetic foundations on the human face. The results confirmed that the filter can visualize and nondestructively inspect the foundation distribution.

  17. Automated detection of preserved photoreceptor on optical coherence tomography in choroideremia based on machine learning.

    PubMed

    Wang, Zhuo; Camino, Acner; Hagag, Ahmed M; Wang, Jie; Weleber, Richard G; Yang, Paul; Pennesi, Mark E; Huang, David; Li, Dengwang; Jia, Yali

    2018-05-01

    Optical coherence tomography (OCT) can demonstrate early deterioration of the photoreceptor integrity caused by inherited retinal degeneration diseases (IRDs). A machine learning method based on random forests was developed to automatically detect continuous areas of preserved ellipsoid zone structure (an easily recognizable part of the photoreceptors on OCT) in 16 eyes of patients with choroideremia (a type of IRD). Pseudopodial extensions protruding from the preserved ellipsoid zone areas are detected separately by a local active contour routine. The algorithm is implemented on en face images with minimum segmentation requirements, only needing delineation of the Bruch's membrane, thus evading the inaccuracies and technical challenges associated with automatic segmentation of the ellipsoid zone in eyes with severe retinal degeneration. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
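
    A minimal sketch of the classification step only: a random forest over per-pixel features, of the kind named above. The features and labels below are synthetic placeholders; the paper's actual feature set, Bruch's membrane delineation, and active contour refinement are not shown.

    ```python
    # Hedged sketch: random-forest labeling of pixels as preserved ellipsoid zone or not.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 8))                        # per-pixel features (placeholder)
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)         # synthetic 0/1 labels for the sketch

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X[:4000], y[:4000])
    print("held-out accuracy:", clf.score(X[4000:], y[4000:]))
    ```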

  18. Methods for Using Durable Adhesively Bonded Joints for Sandwich Structures

    NASA Technical Reports Server (NTRS)

    Smeltzer, Stanley S., III (Inventor); Lundgren, Eric C. (Inventor)

    2016-01-01

    Systems, methods, and apparatus for increasing the durability of adhesively bonded joints in a sandwich structure. Such systems, methods, and apparatus include a first face sheet and a second face sheet as well as an insert structure, the insert structure having a first insert face sheet, a second insert face sheet, and an insert core material. In addition, sandwich core material is arranged between the first face sheet and the second face sheet. A primary bondline may be coupled to the face sheet(s) and the splice. Further, systems, methods, and apparatus of the present disclosure advantageously reduce the load, provide a redundant path, reduce structural fatigue, and/or increase fatigue life.

  19. Systems, Apparatuses, and Methods for Using Durable Adhesively Bonded Joints for Sandwich Structures

    NASA Technical Reports Server (NTRS)

    Smeltzer, III, Stanley S. (Inventor); Lundgren, Eric C. (Inventor)

    2014-01-01

    Systems, methods, and apparatus for increasing the durability of adhesively bonded joints in a sandwich structure. Such systems, methods, and apparatus include a first face sheet and a second face sheet as well as an insert structure, the insert structure having a first insert face sheet, a second insert face sheet, and an insert core material. In addition, sandwich core material is arranged between the first face sheet and the second face sheet. A primary bondline may be coupled to the face sheet(s) and the splice. Further, systems, methods, and apparatus of the present disclosure advantageously reduce the load, provide a redundant path, reduce structural fatigue, and/or increase fatigue life.

  20. A Comparison Between Optical Coherence Tomography Angiography and Fluorescein Angiography for the Imaging of Type 1 Neovascularization.

    PubMed

    Inoue, Maiko; Jung, Jesse J; Balaratnasingam, Chandrakumar; Dansingani, Kunal K; Dhrami-Gavazi, Elona; Suzuki, Mihoko; de Carlo, Talisa E; Shahlaee, Abtin; Klufas, Michael A; El Maftouhi, Adil; Duker, Jay S; Ho, Allen C; Maftouhi, Maddalena Quaranta-El; Sarraf, David; Freund, K Bailey

    2016-07-01

    To determine the sensitivity of the combination of optical coherence tomography angiography (OCTA) and structural optical coherence tomography (OCT) for detecting type 1 neovascularization (NV) and to determine significant factors that preclude visualization of type 1 NV using OCTA. Multicenter, retrospective cohort study of 115 eyes from 100 patients with type 1 NV. A retrospective review of fluorescein angiography (FA), OCT, and OCTA imaging was performed on a consecutive series of eyes with type 1 NV from five institutions. Unmasked graders utilized FA and structural OCT data to determine the diagnosis of type 1 NV. Masked graders evaluated FA data alone, en face OCTA data alone, and combined en face OCTA and structural OCT data to determine the presence of type 1 NV. Sensitivity analyses were performed using combined FA and OCT data as the reference standard. A total of 105 eyes were diagnosed with type 1 NV using the reference standard. Of these, 90 (85.7%) could be detected using en face OCTA and structural OCT. The sensitivities of FA data alone and en face OCTA data alone for visualizing type 1 NV were the same (66.7%). Significant factors that precluded visualization of NV using en face OCTA included the height of pigment epithelial detachment, low signal strength, and treatment-naïve disease (P < 0.05 for each). En face OCTA and structural OCT showed better detection of type 1 NV than either FA alone or en face OCTA alone. Combining en face OCTA and structural OCT information may therefore be a useful way to noninvasively diagnose and monitor the treatment of type 1 NV.
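
    For reference, the sensitivity figures above follow the usual definition (true positives divided by all reference-positive cases); a minimal sketch using the counts reported in this abstract:

    ```python
    # Hedged sketch of the sensitivity calculation against the reference standard.
    def sensitivity(true_positives: int, false_negatives: int) -> float:
        return true_positives / (true_positives + false_negatives)

    # 90 of 105 reference-positive eyes detected by en face OCTA plus structural OCT:
    print(round(sensitivity(90, 105 - 90), 3))   # 0.857, i.e., 85.7%
    ```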

  1. A fast response hydrogen sensor with Pd metallic grating onto a fiber's end-face

    NASA Astrophysics Data System (ADS)

    Yan, Haitao; Zhao, Xiaoyan; Zhang, Chao; Li, Qiu-Ze; Cao, Jingxiao; Han, Dao-Fu; Hao, Hui; Wang, Ming

    2016-01-01

    We demonstrated an integrated hydrogen sensor with a Pd metallic grating fabricated on a fiber end-face. The grating consists of three stacked thin metal layers: Au, WO3, and Pd. The WO3 serves as a waveguide layer between the Pd and Au layers. The Pd layer is etched using a focused ion beam (FIB) method, forming a Pd metallic grating with a period of 450 nm. The sensor was experimentally exposed to a hydrogen gas environment. As the concentration was changed from 0% to 4%, the lower explosive limit (LEL), the resonant wavelength measured from the reflection spectrum shifted by 28.10 nm in the visible range. The results demonstrate that the sensor is sensitive for hydrogen detection, with a fast response and a low temperature effect.

  2. Efficient Mining and Detection of Sequential Intrusion Patterns for Network Intrusion Detection Systems

    NASA Astrophysics Data System (ADS)

    Shyu, Mei-Ling; Huang, Zifang; Luo, Hongli

    In recent years, pervasive computing infrastructures have greatly improved the interaction between humans and systems. As we put more reliance on these computing infrastructures, we also face threats of network intrusion and other new forms of undesirable IT-based activities. Hence, network security has become an extremely important issue, closely connected with homeland security, business transactions, and people's daily life. Accurate and efficient intrusion detection technologies are required to safeguard network systems and the critical information transmitted through them. In this chapter, a novel network intrusion detection framework for mining and detecting sequential intrusion patterns is proposed. The proposed framework consists of a Collateral Representative Subspace Projection Modeling (C-RSPM) component for supervised classification, and an inter-transactional association rule mining method based on Layer Divided Modeling (LDM) for temporal pattern analysis. Experiments on the KDD99 data set and a traffic data set generated on a private LAN testbed show promising results, with high detection rates, low processing time, and low false alarm rates in mining and detecting sequential intrusion patterns.

  3. Steganography anomaly detection using simple one-class classification

    NASA Astrophysics Data System (ADS)

    Rodriguez, Benjamin M.; Peterson, Gilbert L.; Agaian, Sos S.

    2007-04-01

    There are several security issues tied to multimedia when implementing various applications in the cellular phone and wireless industry. One primary concern is the potential ease of implementing a steganography system. Traditionally, the only mechanism to embed information into a media file has been a desktop computer. However, as the cellular phone and wireless industry matures, it becomes much simpler for the same techniques to be performed on a cell phone. In this paper, two methods are compared that classify cell phone images as either anomalous or clean, where a clean image is one in which no alterations have been made and an anomalous image is one in which information has been hidden within the image. An image in which information has been hidden is known as a stego image. The main difficulty in detecting steganographic content in cell phone images with machine learning is that classifiers are trained on specific embedding procedures to determine whether a given method has been used to generate a stego image. This leads to a possible flaw in the system when the learned stego model is faced with a new stego method that does not match the existing model. The proposed solution to this problem is to develop systems that detect steganography as an anomaly, making the embedding method irrelevant to detection. Two applicable classification methods for solving the anomaly detection of steganographic content problem are single-class support vector machines (SVM) and Parzen-window. Empirical comparison of the two approaches shows that Parzen-window outperforms the single-class SVM, most likely because Parzen-window generalizes less.
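
    A minimal sketch of the two anomaly detectors being compared, each fit on features of clean images only; the feature vectors here are synthetic placeholders rather than real steganalysis features.

    ```python
    # Hedged sketch: one-class SVM vs. Parzen-window (kernel density) anomaly detection.
    import numpy as np
    from sklearn.svm import OneClassSVM
    from sklearn.neighbors import KernelDensity

    rng = np.random.default_rng(1)
    clean = rng.normal(0.0, 1.0, size=(500, 10))      # features of known-clean images
    test = rng.normal(0.5, 1.0, size=(100, 10))       # features of images under test

    ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(clean)
    svm_flags = ocsvm.predict(test) == -1             # True = flagged as anomalous (possible stego)

    kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(clean)
    threshold = np.quantile(kde.score_samples(clean), 0.05)
    kde_flags = kde.score_samples(test) < threshold   # low density under clean model = anomaly

    print("fraction flagged:", svm_flags.mean(), kde_flags.mean())
    ```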

  4. Image-Based 3D Face Modeling System

    NASA Astrophysics Data System (ADS)

    Park, In Kyu; Zhang, Hui; Vezhnevets, Vladimir

    2005-12-01

    This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems including frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eye, nose, mouth, and ear. The shape deformation module utilizes the detected features to deform the generic head mesh model such that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images and the synthesized texture and mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling, which is highly automated by aggregating, customizing, and optimizing a bunch of individual computer vision algorithms. The experimental results show a highly automated process of modeling, which is sufficiently robust to various imaging conditions. The whole model creation including all the optional manual corrections takes only 2-3 minutes.

  5. The Potential of Automatic Word Comparison for Historical Linguistics.

    PubMed

    List, Johann-Mattis; Greenhill, Simon J; Gray, Russell D

    2017-01-01

    The amount of data from languages spoken all over the world is rapidly increasing. Traditional manual methods in historical linguistics need to face the challenges brought by this influx of data. Automatic approaches to word comparison could provide invaluable help to pre-analyze data which can be later enhanced by experts. In this way, computational approaches can take care of the repetitive and schematic tasks leaving experts to concentrate on answering interesting questions. Here we test the potential of automatic methods to detect etymologically related words (cognates) in cross-linguistic data. Using a newly compiled database of expert cognate judgments across five different language families, we compare how well different automatic approaches distinguish related from unrelated words. Our results show that automatic methods can identify cognates with a very high degree of accuracy, reaching 89% for the best-performing method Infomap. We identify the specific strengths and weaknesses of these different methods and point to major challenges for future approaches. Current automatic approaches for cognate detection-although not perfect-could become an important component of future research in historical linguistics.

  6. Prevalence of subretinal drusenoid deposits in older persons with and without age-related macular degeneration, by multimodal imaging

    PubMed Central

    Zarubina, Anna V.; Neely, David C.; Clark, Mark E.; Huisingh, Carrie E.; Samuels, Brian C.; Zhang, Yuhua; McGwin, Gerald; Owsley, Cynthia; Curcio, Christine A.

    2015-01-01

    Purpose To assess the prevalence of subretinal drusenoid deposits (SDD) in older adults with healthy maculas and early and intermediate age-related macular degeneration (AMD) using multimodal imaging. Design Cross-sectional study. Participants A total of 651 subjects aged ≥60 years enrolled in the Alabama Study of Early Age-Related Macular Degeneration from primary care ophthalmology clinics. Methods Subjects were imaged using spectral domain optical coherence tomography (SD-OCT) of the macula and optic nerve head (ONH), infrared reflectance, fundus autofluorescence, and color fundus photographs (CFP). Eyes were assessed for AMD presence and severity using the AREDS 9-step scale. Criteria for SDD presence were identification on ≥1 en-face modality plus SD-OCT or on ≥2 en-face modalities if absent on SD-OCT. SDD were considered present at the person-level if present in 1 or both eyes. Main outcome measures Prevalence of SDD in participants with and without AMD. Results Overall prevalence of SDD was 32% (197/611), with 62% (122/197) affected in both eyes. Persons with SDD were older than those without SDD (70.6 vs. 68.7 years, p = 0.0002). Prevalence of SDD was 23% in subjects without AMD and 52% in subjects with AMD (p < 0.0001). Among those with early and intermediate AMD, SDD prevalence was 49% and 79%, respectively. After age adjustment, those with SDD were 3.4 times more likely to have AMD than those without SDD (95% CI 2.3–4.9). By using CFP only for SDD detection per the AREDS protocol, the prevalence of SDD was 2% (12/610). Of persons with SDD detected by SD-OCT and confirmed by at least one en-face modality, 47% (89/190) were detected exclusively on the ONH SD-OCT volume. Conclusion SDD are present in approximately one quarter of older adults with healthy maculae and in more than half of persons with early to intermediate AMD, even by stringent criteria. The prevalence of SDD is strongly associated with AMD presence and severity and increases with age, and its retinal topography, including peripapillary involvement, resembles that of rod photoreceptors. Consensus on SDD detection methods is recommended to advance our knowledge of this lesion and its clinical and biologic significance. PMID:26875000

  7. Analyzing Interactions by an IIS-Map-Based Method in Face-to-Face Collaborative Learning: An Empirical Study

    ERIC Educational Resources Information Center

    Zheng, Lanqin; Yang, Kaicheng; Huang, Ronghuai

    2012-01-01

    This study proposes a new method named the IIS-map-based method for analyzing interactions in face-to-face collaborative learning settings. This analysis method is conducted in three steps: firstly, drawing an initial IIS-map according to collaborative tasks; secondly, coding and segmenting information flows into information items of IIS; thirdly,…

  8. Unconstrained face detection and recognition based on RGB-D camera for the visually impaired

    NASA Astrophysics Data System (ADS)

    Zhao, Xiangdong; Wang, Kaiwei; Yang, Kailun; Hu, Weijian

    2017-02-01

    It is highly important for visually impaired people (VIP) to be aware of the human beings around them, so correctly recognizing people in VIP-assisting apparatus provides great convenience. However, in classical face recognition technology, the faces used in training and prediction are usually frontal, and acquiring face images requires subjects to get close to the camera so that a frontal pose and adequate illumination are guaranteed. Meanwhile, face labels are defined manually rather than automatically; most of the time, labels belonging to different classes need to be input one by one. These constraints hinder practical assistive applications for VIP. In this article, a face recognition system for unconstrained environments is proposed. Specifically, it requires neither the frontal pose nor the uniform illumination required by previous algorithms. The contributions of this work lie in three aspects. First, real-time frontal-face synthesis is implemented, and the synthesized frontal faces help to increase the recognition rate, as confirmed by the experimental results. Second, an RGB-D camera plays a significant role in our system: both color and depth information are used to achieve real-time face tracking, which not only raises the detection rate but also provides a way to label faces automatically. Finally, we propose to train the face recognition system with neural networks, applying Principal Component Analysis (PCA) to pre-refine the input data. This system is expected to help VIP become familiar with others and enable them to recognize people once the system is sufficiently trained.
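
    A minimal sketch of the recognition stage described in the third contribution (PCA pre-refinement followed by a neural-network classifier); the face vectors and identity labels below are synthetic stand-ins for the tracked, automatically labeled crops.

    ```python
    # Hedged sketch: PCA dimensionality reduction feeding a small neural-network classifier.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(2)
    faces = rng.normal(size=(300, 64 * 64))           # flattened face crops (placeholder)
    labels = rng.integers(0, 5, size=300)             # five automatically assigned identities

    model = make_pipeline(
        PCA(n_components=40),
        MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0))
    model.fit(faces[:250], labels[:250])
    print("accuracy on held-out crops:", model.score(faces[250:], labels[250:]))
    ```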

  9. Real-Time Lane Region Detection Using a Combination of Geometrical and Image Features

    PubMed Central

    Cáceres Hernández, Danilo; Kurnianggoro, Laksono; Filonenko, Alexander; Jo, Kang Hyun

    2016-01-01

    Over the past few decades, pavement markings have played a key role in intelligent vehicle applications such as guidance, navigation, and control. However, there are still serious issues facing the problem of lane marking detection. For example, problems include excessive processing time and false detection due to similarities in color and edges between traffic signs (channeling lines, stop lines, crosswalk, arrows, etc.). This paper proposes a strategy to extract the lane marking information taking into consideration its features such as color, edge, and width, as well as the vehicle speed. Firstly, defining the region of interest is a critical task to achieve real-time performance. In this sense, the region of interest is dependent on vehicle speed. Secondly, the lane markings are detected by using a hybrid color-edge feature method along with a probabilistic method, based on distance-color dependence and a hierarchical fitting model. Thirdly, the following lane marking information is extracted: the number of lane markings to both sides of the vehicle, the respective fitting model, and the centroid information of the lane. Using these parameters, the region is computed by using a road geometric model. To evaluate the proposed method, a set of consecutive frames was used in order to validate the performance. PMID:27869657
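
    A minimal sketch of a hybrid color-edge lane-marking extraction over a speed-dependent region of interest, loosely following the steps above; the thresholds, ROI rule, and file name are illustrative assumptions, not the paper's values.

    ```python
    # Hedged sketch: combine a color mask and an edge map inside a speed-dependent ROI,
    # then fit candidate lane-marking segments.
    import cv2
    import numpy as np

    frame = cv2.imread("road.png")                     # assumed input frame
    speed_kmh = 60.0
    roi_rows = int(frame.shape[0] * (0.5 if speed_kmh < 80 else 0.4))
    roi = frame[-roi_rows:, :]                         # lower part of the image, nearer the vehicle

    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    color_mask = cv2.inRange(hsv, (0, 0, 180), (180, 60, 255))   # bright, low-saturation pixels

    edges = cv2.Canny(cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY), 50, 150)

    candidates = cv2.bitwise_and(color_mask, edges)    # keep pixels supported by both cues
    lines = cv2.HoughLinesP(candidates, 1, np.pi / 180, threshold=30,
                            minLineLength=20, maxLineGap=10)
    print(0 if lines is None else len(lines), "lane-marking segments found")
    ```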

  10. Electronic Properties of Synthetic Shrimp Pathogens-derived DNA Schottky Diodes.

    PubMed

    Rizan, Nastaran; Yew, Chan Yen; Niknam, Maryam Rajabpour; Krishnasamy, Jegenathan; Bhassu, Subha; Hong, Goh Zee; Devadas, Sridevi; Din, Mohamed Shariff Mohd; Tajuddin, Hairul Anuar; Othman, Rofina Yasmin; Phang, Siew Moi; Iwamoto, Mitsumasa; Periasamy, Vengadesh

    2018-01-17

    The exciting discovery of the semiconducting-like properties of deoxyribonucleic acid (DNA) and its potential applications in molecular genetics and diagnostics in recent times has resulted in a paradigm shift in biophysics research. Recent studies in our laboratory provide a platform for detecting charge transfer mechanisms and understanding the electronic properties of DNA based on the sequence-specific electronic response, which can be applied as an alternative way to identify or detect DNA. In this study, we demonstrate a novel method for identification of DNA from different shrimp viruses and bacteria using electronic properties of DNA obtained from both negative and positive bias regions in current-voltage (I-V) profiles. Characteristic electronic properties were calculated and used for quantification and further understanding in the identification process. Shrimp aquaculture is a fast-growing food sector throughout the world. However, shrimp culture in many Asian countries has faced huge economic losses due to disease outbreaks. Scientists have established specific methods for detecting shrimp infection, but those methods have significant drawbacks due to many inherent factors. As such, we believe that this simple, rapid, sensitive and cost-effective tool can be used for detection and identification of DNA from different shrimp viruses and bacteria.

  11. Attention and memory bias to facial emotions underlying negative symptoms of schizophrenia.

    PubMed

    Jang, Seon-Kyeong; Park, Seon-Cheol; Lee, Seung-Hwan; Cho, Yang Seok; Choi, Kee-Hong

    2016-01-01

    This study assessed bias in selective attention to facial emotions in negative symptoms of schizophrenia and its influence on subsequent memory for facial emotions. Thirty people with schizophrenia who had high and low levels of negative symptoms (n = 15, respectively) and 21 healthy controls completed a visual probe detection task investigating selective attention bias (happy, sad, and angry faces randomly presented for 50, 500, or 1000 ms). A yes/no incidental facial memory task was then completed. Attention bias scores and recognition errors were calculated. Those with high negative symptoms exhibited reduced attention to emotional faces relative to neutral faces; those with low negative symptoms showed the opposite pattern when faces were presented for 500 ms regardless of the valence. Compared to healthy controls, those with high negative symptoms made more errors for happy faces in the memory task. Reduced attention to emotional faces in the probe detection task was significantly associated with less pleasure and motivation and more recognition errors for happy faces in schizophrenia group only. Attention bias away from emotional information relatively early in the attentional process and associated diminished positive memory may relate to pathological mechanisms for negative symptoms.
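
    For context, attention bias scores in dot-probe paradigms are conventionally computed as in the sketch below (mean reaction time when the probe replaces the neutral face minus mean RT when it replaces the emotional face, so positive values indicate attention toward the emotional face); the study's exact scoring may differ, and the values are illustrative.

    ```python
    # Hedged sketch of a conventional dot-probe attention bias score (milliseconds).
    from statistics import mean

    def attention_bias(rt_probe_at_neutral, rt_probe_at_emotional):
        return mean(rt_probe_at_neutral) - mean(rt_probe_at_emotional)

    print(attention_bias([520, 540, 510], [505, 500, 515]))   # illustrative RTs
    ```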

  12. Applying face identification to detecting hijacking of airplane

    NASA Astrophysics Data System (ADS)

    Luo, Xuanwen; Cheng, Qiang

    2004-09-01

    The hijacking of airplanes by terrorists and the crash into the World Trade Center was a disaster for civilization, and preventing hijackings is critical to homeland security. Reporting a hijacking in time, limiting the hijackers' ability to operate the plane, and landing the plane at the nearest airport could be an efficient way to avert such a tragedy. Image processing techniques for human face recognition or identification could be used for this task. Before the plane takes off, the face images of the pilots are entered into a face identification system installed in the airplane. A camera in front of the pilot's seat keeps capturing the pilot's face during the flight and compares it with the pre-entered pilot face images. If a different face is detected, a warning signal is sent to the ground automatically. At the same time, the automatic cruise system is started or the plane is controlled from the ground; the hijackers will have no control over the plane, which is landed at the nearest or most appropriate airport under the control of the ground or the cruise system. This technique could also be used in the automobile industry as an image key to prevent car theft.

  13. iFER: facial expression recognition using automatically selected geometric eye and eyebrow features

    NASA Astrophysics Data System (ADS)

    Oztel, Ismail; Yolcu, Gozde; Oz, Cemil; Kazan, Serap; Bunyak, Filiz

    2018-03-01

    Facial expressions have an important role in interpersonal communications and estimation of emotional states or intentions. Automatic recognition of facial expressions has led to many practical applications and has become one of the important topics in computer vision. We present a facial expression recognition system that relies on geometry-based features extracted from eye and eyebrow regions of the face. The proposed system detects keypoints on frontal face images and forms a feature set using geometric relationships among groups of detected keypoints. The obtained feature set is refined and reduced using the sequential forward selection (SFS) algorithm and fed to a support vector machine classifier to recognize five facial expression classes. The proposed system, iFER (eye-eyebrow only facial expression recognition), is robust to lower face occlusions that may be caused by beards, mustaches, scarves, etc. and lower face motion during speech production. Preliminary experiments on benchmark datasets produced promising results, outperforming previous facial expression recognition studies that use partial face features and yielding results comparable to studies that use whole-face information, only ~2.5% lower than the best whole-face system while using only ~1/3 of the facial region.
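
    A minimal sketch of the selection-plus-classification stage (sequential forward selection feeding a support vector classifier), with synthetic geometric features and expression labels standing in for the detected keypoint relationships.

    ```python
    # Hedged sketch: SFS feature selection followed by an SVM with an RBF kernel.
    import numpy as np
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)
    X = rng.normal(size=(400, 30))                     # geometric eye/eyebrow features (placeholder)
    y = rng.integers(0, 5, size=400)                   # five expression classes

    svc = SVC(kernel="rbf", gamma="scale")
    sfs = SequentialFeatureSelector(svc, n_features_to_select=10, direction="forward")
    X_sel = sfs.fit_transform(X, y)

    svc.fit(X_sel[:320], y[:320])
    print("held-out accuracy:", svc.score(X_sel[320:], y[320:]))
    ```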

  14. Neural Computation as a Tool to Differentiate Perceptual from Emotional Processes: The Case of Anger Superiority Effect

    ERIC Educational Resources Information Center

    Mermillod, Martial; Vermeulen, Nicolas; Lundqvist, Daniel; Niedenthal, Paula M.

    2009-01-01

    Research findings in social and cognitive psychology imply that it is easier to detect angry faces than happy faces in a crowd of neutral faces [Hansen, C. H., & Hansen, R. D. (1988). Finding the face in the crowd--An anger superiority effect. "Journal of Personality and Social Psychology," 54(6), 917-924]. This phenomenon has been held to have…

  15. Expectations about person identity modulate the face-sensitive N170.

    PubMed

    Johnston, Patrick; Overell, Anne; Kaufman, Jordy; Robinson, Jonathan; Young, Andrew W

    2016-12-01

    Identifying familiar faces is a fundamentally important aspect of social perception that requires the ability to assign very different (ambient) images of a face to a common identity. The current consensus is that the brain processes face identity at approximately 250-300 msec following stimulus onset, as indexed by the N250 event related potential. However, using two experiments we show compelling evidence that where experimental paradigms induce expectations about person identity, changes in famous face identity are in fact detected at an earlier latency corresponding to the face-sensitive N170. In Experiment 1, using a rapid periodic stimulation paradigm presenting highly variable ambient images, we demonstrate robust effects of low frequency, periodic face-identity changes in N170 amplitude. In Experiment 2, we added infrequent aperiodic identity changes to show that the N170 was larger to both infrequent periodic and infrequent aperiodic identity changes than to high frequency identities. Our use of ambient stimulus images makes it unlikely that these effects are due to adaptation of low-level stimulus features. In line with current ideas about predictive coding, we therefore suggest that when expectations about the identity of a face exist, the visual system is capable of detecting identity mismatches at a latency consistent with the N170. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. On the flexibility of social source memory: a test of the emotional incongruity hypothesis.

    PubMed

    Bell, Raoul; Buchner, Axel; Kroneisen, Meike; Giang, Trang

    2012-11-01

    A popular hypothesis in evolutionary psychology posits that reciprocal altruism is supported by a cognitive module that helps cooperative individuals to detect and remember cheaters. Consistent with this hypothesis, a source memory advantage for faces of cheaters (better memory for the cheating context in which these faces were encountered) was observed in previous studies. Here, we examined whether positive or negative expectancies would influence source memory for cheaters and cooperators. A cooperation task with virtual opponents was used in Experiments 1 and 2. Source memory for the emotionally incongruent information was enhanced relative to the congruent information: In Experiment 1, source memory was best for cheaters with likable faces and for cooperators with unlikable faces; in Experiment 2, source memory was better for smiling cheater faces than for smiling cooperator faces, and descriptively better for angry cooperator faces than for angry cheater faces. Experiments 3 and 4 showed that the emotional incongruity effect generalizes to 3rd-party reputational information (descriptions of cheating and trustworthy behavior). The results are inconsistent with the assumption of a highly specific cheater detection module. Focusing on expectancy-incongruent information may represent a more efficient, general, and hence more adaptive memory strategy for remembering exchange-relevant information than focusing only on cheaters.

  17. A randomized trial of face-to-face counselling versus telephone counselling versus bibliotherapy for occupational stress.

    PubMed

    Kilfedder, Catherine; Power, Kevin; Karatzias, Thanos; McCafferty, Aileen; Niven, Karen; Chouliara, Zoë; Galloway, Lisa; Sharp, Stephen

    2010-09-01

    The aim of the present study was to compare the effectiveness and acceptability of three interventions for occupational stress. A total of 90 National Health Service employees were randomized to face-to-face counselling, telephone counselling, or bibliotherapy. Outcomes were assessed at post-intervention and 4-month follow-up. Clinical Outcomes in Routine Evaluation (CORE), General Health Questionnaire (GHQ-12), and Perceived Stress Scale (PSS-10) were used to evaluate intervention outcomes. An intention-to-treat analysis was performed. Repeated measures analysis revealed significant time effects on all measures with the exception of CORE Risk. No significant group effects were detected on any outcome measure. No significant time-by-group interaction effects were detected on any of the outcome measures with the exception of CORE Functioning and GHQ total. With regard to acceptability of interventions, participants expressed a preference for face-to-face counselling over the other two modalities. Overall, it was concluded that the three intervention groups are equally effective. Given that bibliotherapy is the least costly of the three, results from the present study might be considered in relation to a stepped care approach to occupational stress management with bibliotherapy as the first line of intervention, followed by telephone and face-to-face counselling as required.

  18. Biosensors for spatiotemporal detection of reactive oxygen species in cells and tissues.

    PubMed

    Erard, Marie; Dupré-Crochet, Sophie; Nüße, Oliver

    2018-05-01

    Redox biology has become a major issue in numerous areas of physiology. Reactive oxygen species (ROS) have a broad range of roles from signal transduction to growth control and cell death. To understand the nature of these roles, accurate measurement of the reactive compounds is required. An increasing number of tools for ROS detection is available; however, the specificity and sensitivity of these tools are often insufficient. Furthermore, their specificity has been rarely evaluated in complex physiological conditions. Many ROS probes are sensitive to environmental conditions in particular pH, which may interfere with ROS detection and cause misleading results. Accurate detection of ROS in physiology and pathophysiology faces additional challenges concerning the precise localization of the ROS and the timing of their production and disappearance. Certain ROS are membrane permeable, and certain ROS probes move across cells and organelles. Targetable ROS probes such as fluorescent protein-based biosensors are required for accurate localization. Here we analyze these challenges in more detail, provide indications on the strength and weakness of current tools for ROS detection, and point out developments that will provide improved ROS detection methods in the future. There is no universal method that fits all situations in physiology and cell biology. A detailed knowledge of the ROS probes is required to choose the appropriate method for a given biological problem. The knowledge of the shortcomings of these probes should also guide the development of new sensors.

  19. Newborns' Mooney-Face Perception

    ERIC Educational Resources Information Center

    Leo, Irene; Simion, Francesca

    2009-01-01

    The aim of this study is to investigate whether newborns detect a face on the basis of a Gestalt representation based on first-order relational information (i.e., the basic arrangement of face features) by using Mooney stimuli. The incomplete 2-tone Mooney stimuli were used because they preclude focusing both on the local features (i.e., the fine…

  20. Infant Face Preferences after Binocular Visual Deprivation

    ERIC Educational Resources Information Center

    Mondloch, Catherine J.; Lewis, Terri L.; Levin, Alex V.; Maurer, Daphne

    2013-01-01

    Early visual deprivation impairs some, but not all, aspects of face perception. We investigated the possible developmental roots of later abnormalities by using a face detection task to test infants treated for bilateral congenital cataract within 1 hour of their first focused visual input. The seven patients were between 5 and 12 weeks old…

  1. Neural evidence for the subliminal processing of facial trustworthiness in infancy.

    PubMed

    Jessen, Sarah; Grossmann, Tobias

    2017-04-22

    Face evaluation is thought to play a vital role in human social interactions. One prominent aspect is the evaluation of facial signs of trustworthiness, which has been shown to occur reliably, rapidly, and without conscious awareness in adults. Recent developmental work indicates that the sensitivity to facial trustworthiness has early ontogenetic origins as it can already be observed in infancy. However, it is unclear whether infants' sensitivity to facial signs of trustworthiness relies upon conscious processing of a face or, similar to adults, occurs also in response to subliminal faces. To investigate this question, we conducted an event-related brain potential (ERP) study, in which we presented 7-month-old infants with faces varying in trustworthiness. Facial stimuli were presented subliminally (below infants' face visibility threshold) for only 50 ms and then masked by presenting a scrambled face image. Our data revealed that infants' ERP responses to subliminally presented faces differed as a function of trustworthiness. Specifically, untrustworthy faces elicited an enhanced negative slow wave (800-1000 ms) at frontal and central electrodes. The current findings critically extend prior work by showing that, similar to adults, infants' neural detection of facial signs of trustworthiness also occurs in response to subliminal faces. This supports the view that detecting facial trustworthiness is an early developing and automatic process in humans. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Electrophysiological evidence for attentional capture by irrelevant angry facial expressions: Naturalistic faces.

    PubMed

    Burra, Nicolas; Coll, Sélim Yahia; Barras, Caroline; Kerzel, Dirk

    2017-01-10

    Recently, research on lateralized event-related potentials (ERPs) in response to irrelevant distractors has revealed that angry but not happy schematic distractors capture spatial attention. Whether this effect occurs in the context of the natural expression of emotions is unknown. To fill this gap, observers were asked to judge the gender of a natural face surrounded by a color singleton among five other face identities. In contrast to previous studies, the similarity between the task-relevant feature (color) and the distractor features was low. On some trials, the target was displayed concurrently with an irrelevant angry or happy face. The lateralized ERPs to these distractors were measured as a marker of spatial attention. Our results revealed that angry face distractors, but not happy face distractors, triggered a PD, which is a marker of distractor suppression. Subsequent to the PD, angry distractors elicited a larger N450 component, which is associated with conflict detection. We conclude that threatening expressions have a high attentional priority because of their emotional value, resulting in early suppression and late conflict detection. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  3. Real-time camera-based face detection using a modified LAMSTAR neural network system

    NASA Astrophysics Data System (ADS)

    Girado, Javier I.; Sandin, Daniel J.; DeFanti, Thomas A.; Wolf, Laura K.

    2003-03-01

    This paper describes a cost-effective, real-time (640×480 at 30 Hz) upright frontal face detector as part of an ongoing project to develop a video-based, tetherless 3D head position and orientation tracking system. The work is specifically targeted for auto-stereoscopic displays and projection-based virtual reality systems. The proposed face detector is based on a modified LAMSTAR neural network system. At the input stage, after achieving image normalization and equalization, a sub-window analyzes facial features using a neural network. The sub-window is segmented, and each part is fed to a neural network layer consisting of a Kohonen Self-Organizing Map (SOM). The outputs of the SOM neural networks are interconnected and related by correlation links, and can hence determine the presence of a face with enough redundancy to provide a high detection rate. To avoid tracking multiple faces simultaneously, the system is initially trained to track only the face centered in a box superimposed on the display. The system is also rotationally and size invariant to a certain degree.

  4. Integrated display scanner

    DOEpatents

    Veligdan, James T.

    2004-12-21

    A display scanner includes an optical panel having a plurality of stacked optical waveguides. The waveguides define an inlet face at one end and a screen at an opposite end, with each waveguide having a core laminated between cladding. A projector projects a scan beam of light into the panel inlet face for transmission from the screen as a scan line to scan a barcode. A light sensor at the inlet face detects a return beam reflected from the barcode into the screen. A decoder decodes the return beam detected by the sensor for reading the barcode. In an exemplary embodiment, the optical panel also displays a visual image thereon.

  5. Automatic image enhancement based on multi-scale image decomposition

    NASA Astrophysics Data System (ADS)

    Feng, Lu; Wu, Zhuangzhi; Pei, Luo; Long, Xiong

    2014-01-01

    In image processing and computational photography, automatic image enhancement is one of the long-range objectives. Recent automatic image enhancement methods take into account not only global semantics, such as correcting color hue and brightness imbalances, but also the local content of the image, such as human faces or the sky in a landscape. In this paper we describe a new scheme for automatic image enhancement that considers both the global semantics and the local content of the image. Our method employs a multi-scale edge-aware image decomposition approach to detect underexposed regions and enhance the detail of the salient content. The experimental results demonstrate the effectiveness of our approach compared to existing automatic enhancement methods.
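
    A minimal two-scale sketch of the general idea (split the image into a coarse base layer and a detail layer, lift dark regions, boost detail); it is a generic stand-in rather than the paper's edge-aware multi-scale decomposition, and the file name and gains are assumptions.

    ```python
    # Hedged sketch: base/detail decomposition with a dark-region lift and detail boost.
    import cv2
    import numpy as np

    img = cv2.imread("photo.png").astype(np.float32) / 255.0   # assumed input image
    base = cv2.GaussianBlur(img, (0, 0), sigmaX=8)              # coarse (base) layer
    detail = img - base                                         # fine (detail) layer

    gamma = 0.8                                                 # <1 brightens underexposed regions
    enhanced = np.clip(np.power(base, gamma) + 1.5 * detail, 0.0, 1.0)
    cv2.imwrite("photo_enhanced.png", (enhanced * 255).astype(np.uint8))
    ```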

  6. Pediatricians’ and health visitors’ views towards detection and management of maternal depression in the context of a weak primary health care system: a qualitative study

    PubMed Central

    2014-01-01

    Background The aim of the present study was to investigate, identify, and interpret the views of pediatric primary healthcare providers on the recognition and management of maternal depression in the context of a weak primary healthcare system. Methods Twenty-six pediatricians and health visitors were selected using purposive sampling. Face-to-face in-depth interviews of approximately 45 minutes' duration were conducted. The data were analyzed using the framework analysis approach, which includes five main steps: familiarization, identifying a thematic framework, indexing, charting, and mapping and interpretation. Results Fear of stigmatization came across as a key barrier to the detection and management of maternal depression. Pediatric primary health care providers linked their hesitation to start a conversation about depression with stigma. They highlighted that mothers were not receptive to discussing depression or accepting a referral. It was also revealed that the fragmented primary health care system and the lack of collaboration between health and mental health services have resulted in an unfavorable situation for maternal mental health. Conclusions Even though pediatricians and health visitors are aware of maternal depression and the importance of maternal mental health, they fail to implement detection and management practices successfully. Inefficiently decentralized psychiatric services, together with stigmatization and misconceptions about maternal depression, have impeded the integration of maternal mental health into primary care and prevent pediatric primary health care providers from implementing detection and management practices. PMID:24725738

  7. The effect of computer-assisted learning versus conventional teaching methods on the acquisition and retention of handwashing theory and skills in pre-qualification nursing students: a randomised controlled trial.

    PubMed

    Bloomfield, Jacqueline; Roberts, Julia; While, Alison

    2010-03-01

    High-quality health care demands a nursing workforce with sound clinical skills. However, the clinical competency of newly qualified nurses continues to stimulate debate about the adequacy of current methods of clinical skills education and emphasises the need for innovative teaching strategies. Despite the increasing use of e-learning within nurse education, evidence to support its use for clinical skills teaching is limited and inconclusive. This study tested whether nursing students could learn and retain the theory and skill of handwashing more effectively when taught using computer-assisted learning compared with conventional face-to-face methods. The study employed a two-group randomised controlled design. The intervention group used an interactive, multimedia, self-directed computer-assisted learning module. The control group was taught by an experienced lecturer in a clinical skills room. Data were collected over a 5-month period between October 2004 and February 2005. Knowledge was tested at four time points and handwashing skills were assessed twice. Two hundred and forty-two first-year nursing students of mixed gender, age, educational background, and first language, studying at one British university, were recruited to the study. Participant attrition increased during the study. Knowledge scores increased significantly from baseline in both groups and no significant differences were detected between the scores of the two groups. Skill performance scores were similar in both groups at the 2-week follow-up, with significant differences emerging at the 8-week follow-up in favour of the intervention group; however, this finding must be interpreted with caution in light of sample size and attrition rates. The computer-assisted learning module was an effective strategy for teaching both the theory and practice of handwashing to nursing students and in this study was found to be at least as effective as conventional face-to-face teaching methods. Copyright 2009 Elsevier Ltd. All rights reserved.

  8. The Body That Speaks: Recombining Bodies and Speech Sources in Unscripted Face-to-Face Communication.

    PubMed

    Gillespie, Alex; Corti, Kevin

    2016-01-01

    This article examines advances in research methods that enable experimental substitution of the speaking body in unscripted face-to-face communication. A taxonomy of six hybrid social agents is presented by combining three types of bodies (mechanical, virtual, and human) with either an artificial or human speech source. Our contribution is to introduce and explore the significance of two particular hybrids: (1) the cyranoid method that enables humans to converse face-to-face through the medium of another person's body, and (2) the echoborg method that enables artificial intelligence to converse face-to-face through the medium of a human body. These two methods are distinct in being able to parse the unique influence of the human body when combined with various speech sources. We also introduce a new framework for conceptualizing the body's role in communication, distinguishing three levels: self's perspective on the body, other's perspective on the body, and self's perspective of other's perspective on the body. Within each level the cyranoid and echoborg methodologies make important research questions tractable. By conceptualizing and synthesizing these methods, we outline a novel paradigm of research on the role of the body in unscripted face-to-face communication.

  9. The Body That Speaks: Recombining Bodies and Speech Sources in Unscripted Face-to-Face Communication

    PubMed Central

    Gillespie, Alex; Corti, Kevin

    2016-01-01

    This article examines advances in research methods that enable experimental substitution of the speaking body in unscripted face-to-face communication. A taxonomy of six hybrid social agents is presented by combining three types of bodies (mechanical, virtual, and human) with either an artificial or human speech source. Our contribution is to introduce and explore the significance of two particular hybrids: (1) the cyranoid method that enables humans to converse face-to-face through the medium of another person's body, and (2) the echoborg method that enables artificial intelligence to converse face-to-face through the medium of a human body. These two methods are distinct in being able to parse the unique influence of the human body when combined with various speech sources. We also introduce a new framework for conceptualizing the body's role in communication, distinguishing three levels: self's perspective on the body, other's perspective on the body, and self's perspective of other's perspective on the body. Within each level the cyranoid and echoborg methodologies make important research questions tractable. By conceptualizing and synthesizing these methods, we outline a novel paradigm of research on the role of the body in unscripted face-to-face communication. PMID:27660616

  10. Face Processing: Models For Recognition

    NASA Astrophysics Data System (ADS)

    Turk, Matthew A.; Pentland, Alexander P.

    1990-03-01

    The human ability to process faces is remarkable. We can identify perhaps thousands of faces learned throughout our lifetime and read facial expression to understand such subtle qualities as emotion. These skills are quite robust, despite sometimes large changes in the visual stimulus due to expression, aging, and distractions such as glasses or changes in hairstyle or facial hair. Computers which model and recognize faces will be useful in a variety of applications, including criminal identification, human-computer interface, and animation. We discuss models for representing faces and their applicability to the task of recognition, and present techniques for identifying faces and detecting eye blinks.

  11. Airway Tree Segmentation in Serial Block-Face Cryomicrotome Images of Rat Lungs

    PubMed Central

    Bauer, Christian; Krueger, Melissa A.; Lamm, Wayne J.; Smith, Brian J.; Glenny, Robb W.; Beichel, Reinhard R.

    2014-01-01

    A highly-automated method for the segmentation of airways in serial block-face cryomicrotome images of rat lungs is presented. First, a point inside of the trachea is manually specified. Then, a set of candidate airway centerline points is automatically identified. By utilizing a novel path extraction method, a centerline path between the root of the airway tree and each point in the set of candidate centerline points is obtained. Local disturbances are robustly handled by a novel path extraction approach, which avoids the shortcut problem of standard minimum cost path algorithms. The union of all centerline paths is utilized to generate an initial airway tree structure, and a pruning algorithm is applied to automatically remove erroneous subtrees or branches. Finally, a surface segmentation method is used to obtain the airway lumen. The method was validated on five image volumes of Sprague-Dawley rats. Based on an expert-generated independent standard, an assessment of airway identification and lumen segmentation performance was conducted. The average of airway detection sensitivity was 87.4% with a 95% confidence interval (CI) of (84.9, 88.6)%. A plot of sensitivity as a function of airway radius is provided. The combined estimate of airway detection specificity was 100% with a 95% CI of (99.4, 100)%. The average number and diameter of terminal airway branches was 1179 and 159 μm, respectively. Segmentation results include airways up to 31 generations. The regression intercept and slope of airway radius measurements derived from final segmentations were estimated to be 7.22 μm and 1.005, respectively. The developed approach enables quantitative studies of physiology and lung diseases in rats, requiring detailed geometric airway models. PMID:23955692
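
    A minimal sketch of extracting a minimum-cost centerline path between the airway root and one candidate centerline point, using a generic shortest-path routine from scikit-image over a toy 2D cost image; the paper's modified path extraction, which avoids the shortcut problem, is not reproduced here.

    ```python
    # Hedged sketch: minimum-cost path through a cost image (low cost along airway-like voxels).
    import numpy as np
    from skimage.graph import route_through_array

    rng = np.random.default_rng(4)
    cost = rng.uniform(5.0, 10.0, size=(100, 100))     # background: expensive to traverse
    cost[50, :] = 1.0                                  # a cheap horizontal "airway"

    root = (50, 0)
    candidate = (50, 99)
    path, total_cost = route_through_array(cost, root, candidate,
                                           fully_connected=True, geometric=True)
    print(len(path), "pixels on the centerline path, cost", round(total_cost, 1))
    ```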

  12. From face processing to face recognition: Comparing three different processing levels.

    PubMed

    Besson, G; Barragan-Jason, G; Thorpe, S J; Fabre-Thorpe, M; Puma, S; Ceccaldi, M; Barbeau, E J

    2017-01-01

    Verifying that a face is from a target person (e.g. finding someone in the crowd) is a critical ability of the human face processing system. Yet how fast this can be performed is unknown. The 'entry-level shift due to expertise' hypothesis suggests that - since humans are face experts - processing faces should be as fast - or even faster - at the individual than at superordinate levels. In contrast, the 'superordinate advantage' hypothesis suggests that faces are processed from coarse to fine, so that the opposite pattern should be observed. To clarify this debate, three different face processing levels were compared: (1) a superordinate face categorization level (i.e. detecting human faces among animal faces), (2) a face familiarity level (i.e. recognizing famous faces among unfamiliar ones) and (3) verifying that a face is from a target person, our condition of interest. The minimal speed at which faces can be categorized (∼260ms) or recognized as familiar (∼360ms) has largely been documented in previous studies, and thus provides boundaries to compare our condition of interest to. Twenty-seven participants were included. The recent Speed and Accuracy Boosting procedure paradigm (SAB) was used since it constrains participants to use their fastest strategy. Stimuli were presented either upright or inverted. Results revealed that verifying that a face is from a target person (minimal RT at ∼260ms) was remarkably fast but longer than the face categorization level (∼240ms) and was more sensitive to face inversion. In contrast, it was much faster than recognizing a face as familiar (∼380ms), a level severely affected by face inversion. Face recognition corresponding to finding a specific person in a crowd thus appears achievable in only a quarter of a second. In favor of the 'superordinate advantage' hypothesis or coarse-to-fine account of the face visual hierarchy, these results suggest a graded engagement of the face processing system across processing levels as reflected by the face inversion effects. Furthermore, they underline how verifying that a face is from a target person and detecting a face as familiar - both often referred to as "Face Recognition" - in fact differs. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Tick-Borne Pathogen – Reversed and Conventional Discovery of Disease

    PubMed Central

    Tijsse-Klasen, Ellen; Koopmans, Marion P. G.; Sprong, Hein

    2014-01-01

    Molecular methods have increased the number of known microorganisms associated with ticks significantly. Some of these newly identified microorganisms are readily linked to human disease while others are yet unknown to cause human disease. The face of tick-borne disease discovery has changed with more diseases now being discovered in a “reversed way,” detecting disease cases only years after the tick-borne microorganism was first discovered. Compared to the conventional discovery of infectious diseases, reverse order discovery presents researchers with new challenges. Estimating public health risks of such agents is especially challenging, as case definitions and diagnostic procedures may initially be missing. We discuss the advantages and shortcomings of molecular methods, serology, and epidemiological studies that might be used to study some fundamental questions regarding newly identified tick-borne diseases. With increased tick-exposure and improved detection methods, more tick-borne microorganisms will be added to the list of pathogens causing disease in humans in the future. PMID:25072045

  14. A Prospective, Randomized, Double-blind, Split-face Clinical Trial Comparing the Efficacy of Two Topical Human Growth Factors for the Rejuvenation of the Aging Face

    PubMed Central

    Goldman, Mitchel P.

    2017-01-01

    Background: Cosmeceutical products represent an increasingly important therapeutic option for anti-aging and rejuvenation, either used alone or in combination with dermatologic surgical procedures. Among this group of products, topical growth factors have demonstrated efficacy in randomized, controlled clinical trials. However, comparisons between different products remain uncommon. Objective: The objective of this randomized, double-blind, split-face clinical trial was to compare two different topical growth factor formulations derived from either human fibroblasts or human adipose tissue derived mesenchymal stem cells. Methods: This was an institutional review board-approved, randomized, double-blind, split-face clinical trial involving 20 healthy subjects with moderate-to-severe facial wrinkling secondary to photodamage. One half of the face was randomized to receive topical human fibroblast growth factors and the other topical human mesenchymal stem cell growth factors. Treatment was continued for three months, and evaluations were performed in a double-blind fashion. Results: Both growth factor formulations achieved significant improvement in facial wrinkling. Blinded investigator and subject evaluations did not detect any significant differences between the two formulations in terms of efficacy, safety, or tolerability. Conclusion: Both human fibroblast growth factors and human mesenchymal stem cell growth factors are effective at facial rejuvenation. Topical growth factors represent a useful therapeutic modality. PMID:28670356

  15. Stress resistance strategy in an arid land shrub: interactions between developmental instability and fractal dimension

    USGS Publications Warehouse

    Escos, J.; Alados, C.L.; Pugnaire, F. I.; Puigdefábregas, J.; Emlen, J.

    2000-01-01

    This paper investigates allocation of energy to mechanisms that generate and preserve architectural forms (i.e. developmental stability, complexity of branching patterns) and productivity (growth and reproduction) in response to environmental disturbances (i.e. grazing and resource availability). The statistical error in translational symmetry was used to detect random intra-individual variability during development. This can be thought of as a measure of developmental instability caused by stress. Additionally, we use changes in fractal complexity and shoot distribution of branch structures as an alternate indicator of stress. These methods were applied to Anthyllis cytisoides L., a semi-arid environment shrub, to ascertain the effect of grazing and slope exposure on developmental traits in a 2×2 factorial design. The results show that A. cytisoides maintains developmental stability at the expense of productivity. Anthyllis cytisoides was developmentally more stable when grazed and when on south-facing, as opposed to north-facing slopes. On the contrary, shoot length, leaf area, fractal dimension and reproductive-to-vegetative allocation ratio were larger in north- than in south-facing slopes. As a consequence, under extreme xeric conditions, shrub mortality increased in north-facing slopes, especially when not grazed. The removal of transpiring area and the reduction of plant competition favoured developmental stability and survival in grazed plants. Differences between grazed and ungrazed plants were most evident in more mesic (north-facing) areas.

  16. A Prospective, Randomized, Double-blind, Split-face Clinical Trial Comparing the Efficacy of Two Topical Human Growth Factors for the Rejuvenation of the Aging Face.

    PubMed

    Wu, Douglas C; Goldman, Mitchel P

    2017-05-01

    Background: Cosmeceutical products represent an increasingly important therapeutic option for anti-aging and rejuvenation, either used alone or in combination with dermatologic surgical procedures. Among this group of products, topical growth factors have demonstrated efficacy in randomized, controlled clinical trials. However, comparisons between different products remain uncommon. Objective: The objective of this randomized, double-blind, split-face clinical trial was to compare two different topical growth factor formulations derived from either human fibroblasts or human adipose tissue derived mesenchymal stem cells. Methods: This was an institutional review board-approved, randomized, double-blind, split-face clinical trial involving 20 healthy subjects with moderate-to-severe facial wrinkling secondary to photodamage. One half of the face was randomized to receive topical human fibroblast growth factors and the other topical human mesenchymal stem cell growth factors. Treatment was continued for three months, and evaluations were performed in a double-blind fashion. Results: Both growth factor formulations achieved significant improvement in facial wrinkling. Blinded investigator and subject evaluations did not detect any significant differences between the two formulations in terms of efficacy, safety, or tolerability. Conclusion: Both human fibroblast growth factors and human mesenchymal stem cell growth factors are effective at facial rejuvenation. Topical growth factors represent a useful therapeutic modality.

  17. Is Beauty in the Face of the Beholder?

    PubMed Central

    Laeng, Bruno; Vermeer, Oddrun; Sulutvedt, Unni

    2013-01-01

    Opposing forces influence assortative mating so that one seeks a similar mate while at the same time avoiding inbreeding with close relatives. Thus, mate choice may be a balancing of phenotypic similarity and dissimilarity between partners. In the present study, we assessed the role of resemblance to Self’s facial traits in judgments of physical attractiveness. Participants chose the most attractive face image of their romantic partner among several variants, where the faces were morphed so as to include only 22% of another face. Participants distinctly preferred a “Self-based morph” (i.e., their partner’s face with a small amount of Self’s face blended into it) to other morphed images. The Self-based morph was also preferred to the morph of their partner’s face blended with the partner’s same-sex “prototype”, although the latter face was (“objectively”) judged more attractive by other individuals. When ranking morphs differing in level of amalgamation (i.e., 11% vs. 22% vs. 33%) of another face, the 22% was chosen consistently as the preferred morph and, in particular, when Self was blended in the partner’s face. A forced-choice signal-detection paradigm showed that the effect of self-resemblance operated at an unconscious level, since the same participants were unable to detect the presence of their own faces in the above morphs. We concluded that individuals, if given the opportunity, seek to promote “positive assortment” for Self’s phenotype, especially when the level of similarity approaches an optimal point that is similar to Self without causing a conscious acknowledgment of the similarity. PMID:23874608

  18. [Neural basis of self-face recognition: social aspects].

    PubMed

    Sugiura, Motoaki

    2012-07-01

    Considering the importance of the face in social survival and evidence from evolutionary psychology of visual self-recognition, it is reasonable to expect neural mechanisms for higher social-cognitive processes to underlie self-face recognition. A decade of neuroimaging studies so far has, however, not provided an encouraging finding in this respect. Self-face specific activation has typically been reported in the areas for sensory-motor integration in the right lateral cortices. This observation appears to reflect the physical nature of the self-face, whose representation is developed via the detection of contingency between one's own action and sensory feedback. We have recently revealed that the medial prefrontal cortex, implicated in socially nuanced self-referential processes, is activated during self-face recognition under a rich social context where multiple other faces are available for reference. The posterior cingulate cortex has also exhibited this activation modulation, and in a separate experiment showed a response to an attractively manipulated self-face, suggesting its relevance to positive self-value. Furthermore, the regions in the right lateral cortices typically showing self-face-specific activation have responded also to the face of one's close friend under the rich social context. This observation is potentially explained by the fact that the contingency detection for physical self-recognition also plays a role in physical social interaction, which characterizes the representation of personally familiar people. These findings demonstrate that neuroscientific exploration reveals multiple facets of the relationship between self-face recognition and social-cognitive processes, and that, technically, the manipulation of social context is key to its success.

  19. Simple method for self-referenced and label-free biosensing by using a capillary sensing element.

    PubMed

    Liu, Yun; Chen, Shimeng; Liu, Qiang; Liu, Zigeng; Wei, Peng

    2017-05-15

    We demonstrated a simple method for self-referenced and label-free biosensing based on a capillary sensing element and common optoelectronic devices. The capillary sensing element is illuminated by a light-emitting diode (LED) light source and detected by a webcam. Part of the gold film deposited on the tubing wall is functionalized to carry the biological information in the excited SPR modes. The end face of the capillary was monitored, and separate regions of interest (ROIs) were selected as the measurement channel and the reference channel. In the ROIs, the biological information can be accurately extracted from the image by simple image processing. Moreover, temperature fluctuation, bulk RI fluctuation, light source fluctuation and other factors can be effectively compensated during detection. Our biosensing device has a sensitivity of 1145%/RIU and a resolution better than 5.287 × 10⁻⁴ RIU, considering a 0.79% noise level. We applied it to concanavalin A (Con A) biological measurement, which showed an approximately linear response to the specific analyte concentration. This simple method provides a new approach for multichannel SPR sensing and reference-compensated calibration of the SPR signal for label-free detection.
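
    As a toy illustration of the self-referencing idea, the sketch below extracts mean intensities from a measurement ROI and a reference ROI of a synthetic end-face image and normalizes one against the other to suppress common-mode drift; the ROI positions and the ratio-based compensation are assumptions for illustration, not the paper's exact image processing.

      # Toy sketch of self-referenced readout: mean intensities from a measurement
      # ROI and a reference ROI are combined so that common-mode fluctuations
      # (light source, temperature, bulk RI) cancel. The image, ROI positions and
      # ratio-based compensation are illustrative assumptions.
      import numpy as np

      rng = np.random.default_rng(0)
      frame = 100.0 + rng.normal(0, 1.0, size=(240, 400))   # synthetic end-face image
      frame[100:140, 200:240] += 12.0                        # response in the functionalized region

      meas = frame[100:140, 200:240].mean()   # measurement channel ROI
      ref = frame[100:140, 300:340].mean()    # reference channel ROI
      print("compensated signal (meas/ref):", meas / ref)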

  20. A real time mobile-based face recognition with fisherface methods

    NASA Astrophysics Data System (ADS)

    Arisandi, D.; Syahputra, M. F.; Putri, I. L.; Purnamawati, S.; Rahmat, R. F.; Sari, P. P.

    2018-03-01

    Face recognition is a research field in computer vision that studies how to learn faces and determine the identity of a face from a picture sent to the system. By utilizing this face recognition technology, the process of learning the identities of fellow students at a university becomes simpler. With this technology, a student won't need to browse the student directory on the university's server and look for the person with certain facial traits. To achieve this goal, the face recognition application uses image processing methods consisting of two phases, a pre-processing phase and a recognition phase. In the pre-processing phase, the system processes the input image into the best possible image for the recognition phase. The purpose of this pre-processing phase is to reduce noise and increase signal in the image. Next, in the recognition phase, we use the Fisherface method. This method is chosen because it performs well even with limited data. Experiments show that the accuracy of face recognition using Fisherface is 90%.
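
    For readers unfamiliar with the Fisherface approach, the sketch below shows its usual formulation (PCA for dimensionality reduction followed by linear discriminant analysis, then nearest-neighbour matching) on a public face dataset using scikit-learn; the dataset, split and component counts are illustrative assumptions and not the authors' mobile implementation.

      # Minimal Fisherface-style sketch: PCA so the within-class scatter used by
      # LDA is well conditioned, then LDA for a class-discriminative projection,
      # then 1-nearest-neighbour matching. The Olivetti dataset (downloaded on
      # first use) and the split below are illustrative, not the authors' pipeline.
      from sklearn.datasets import fetch_olivetti_faces
      from sklearn.model_selection import train_test_split
      from sklearn.decomposition import PCA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline

      faces = fetch_olivetti_faces()                      # 400 images, 40 subjects
      X_train, X_test, y_train, y_test = train_test_split(
          faces.data, faces.target, test_size=0.25,
          stratify=faces.target, random_state=0)

      fisherface = make_pipeline(
          PCA(n_components=100, random_state=0),          # dimensionality reduction
          LinearDiscriminantAnalysis(),                   # at most 39 discriminant axes
          KNeighborsClassifier(n_neighbors=1),            # nearest-neighbour matching
      )
      fisherface.fit(X_train, y_train)
      print("recognition accuracy:", fisherface.score(X_test, y_test))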

  1. Robust vehicle detection in different weather conditions: Using MIPM

    PubMed Central

    Menéndez, José Manuel; Jiménez, David

    2018-01-01

    Intelligent Transportation Systems (ITS) allow us to have high quality traffic information to reduce the risk of potentially critical situations. Conventional image-based traffic detection methods have difficulties acquiring good images due to perspective and background noise, poor lighting and weather conditions. In this paper, we propose a new method to accurately segment and track vehicles. After removing perspective using Modified Inverse Perspective Mapping (MIPM), the Hough transform is applied to extract road lines and lanes. Then, Gaussian Mixture Models (GMM) are used to segment moving objects, and a chromaticity-based strategy is applied to tackle car shadow effects. Finally, performance is evaluated on three different video benchmarks: our own videos recorded in Madrid and Tehran (with different weather conditions in urban and interurban areas), and two well-known public datasets (KITTI and DETRAC). Our results indicate that the proposed algorithms are robust and more accurate compared to others, especially when facing occlusions, lighting variations and adverse weather conditions. PMID:29513664
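
    Two stages of this pipeline, GMM-based moving-object segmentation and Hough-based road-line extraction, can be sketched with standard OpenCV calls as below; the MIPM perspective removal and the chromaticity-based shadow handling are omitted, and the video path is a placeholder.

      # Minimal OpenCV sketch of two pipeline stages: GMM background subtraction
      # for moving vehicles and a probabilistic Hough transform for road lines.
      # MIPM and the chromaticity-based shadow handling are omitted; "traffic.mp4"
      # is a placeholder path.
      import cv2
      import numpy as np

      cap = cv2.VideoCapture("traffic.mp4")
      mog2 = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          fg = mog2.apply(frame)                                  # GMM foreground mask
          fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN,
                                cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))

          edges = cv2.Canny(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 50, 150)
          lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                                  minLineLength=40, maxLineGap=10)
          if lines is not None:
              for x1, y1, x2, y2 in lines[:, 0]:
                  cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
          cv2.imshow("vehicles", cv2.bitwise_and(frame, frame, mask=fg))
          if cv2.waitKey(1) & 0xFF == ord("q"):
              break
      cap.release()
      cv2.destroyAllWindows()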

  2. Blending Our Practice: Using Online and Face-to-Face Methods to Sustain Community among Faculty in an Extended Length Professional Development Program

    ERIC Educational Resources Information Center

    Paskevicius, Michael; Bortolin, Kathleen

    2016-01-01

    This paper outlines the design and implementation of a nine-month faculty development programme delivered using a combination of face-to-face and online methods. Participants from a range of disciplines met at regular intervals throughout the year. Between the face-to-face meetings, participants engaged in online activities such as discussions,…

  3. 77 FR 30591 - Open Meeting of the Taxpayer Advocacy Panel Face-to-Face Service Methods Project Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-23

    ... Face-to-Face Service Methods Project Committee AGENCY: Internal Revenue Service (IRS), Treasury. ACTION: Notice of meeting. SUMMARY: An open meeting of the Taxpayer Advocacy Panel Face-to-Face... meeting will be held Thursday, June 7 from 8:00 a.m.-5:00 p.m. Eastern Time and Friday, June 8 from 8:00 a...

  4. Brief Report: Reduced Prioritization of Facial Threat in Adults with Autism

    ERIC Educational Resources Information Center

    Sasson, Noah J.; Shasteen, Jonathon R.; Pinkham, Amy E.

    2016-01-01

    Typically-developing (TD) adults detect angry faces more efficiently within a crowd than non-threatening faces. Prior studies of this social threat superiority effect (TSE) in ASD using tasks consisting of schematic faces and homogeneous crowds have produced mixed results. Here, we employ a more ecologically-valid test of the social TSE and find…

  5. Maximal likelihood correspondence estimation for face recognition across pose.

    PubMed

    Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang

    2014-10-01

    Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods have been proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems in previous image matching-based correspondence learning methods: 1) failure to fully exploit face specific structure information in correspondence estimation and 2) failure to learn personalized correspondence for each probe image. To this end, we first build a model, termed morphable displacement field (MDF), to encode face specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on a maximal likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using the linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the usage of the novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in a complex wild environment, i.e., the Labeled Faces in the Wild database.

  6. Determining geometric error model parameters of a terrestrial laser scanner through Two-face, Length-consistency, and Network methods

    PubMed Central

    Wang, Ling; Muralikrishnan, Bala; Rachakonda, Prem; Sawyer, Daniel

    2017-01-01

    Terrestrial laser scanners (TLS) are increasingly used in large-scale manufacturing and assembly where required measurement uncertainties are on the order of few tenths of a millimeter or smaller. In order to meet these stringent requirements, systematic errors within a TLS are compensated in-situ through self-calibration. In the Network method of self-calibration, numerous targets distributed in the work-volume are measured from multiple locations with the TLS to determine parameters of the TLS error model. In this paper, we propose two new self-calibration methods, the Two-face method and the Length-consistency method. The Length-consistency method is proposed as a more efficient way of realizing the Network method, where the lengths between pairs of targets measured from multiple TLS positions are compared to determine TLS model parameters. The Two-face method is a two-step process. In the first step, many model parameters are determined directly from the difference between front-face and back-face measurements of targets distributed in the work volume. In the second step, all remaining model parameters are determined through the Length-consistency method. We compare the Two-face method, the Length-consistency method, and the Network method in terms of the uncertainties in the model parameters, and demonstrate the validity of our techniques using a calibrated scale bar and front-face back-face target measurements. The clear advantage of these self-calibration methods is that a reference instrument or calibrated artifacts are not required, thus significantly lowering the cost involved in the calibration process. PMID:28890607
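
    The front-face/back-face idea can be illustrated with a toy residual computation: for an ideal instrument, a back-face sighting of the same target reads a horizontal angle 180° away and a vertical (zenith) angle of 360° minus the front reading, so non-zero residuals expose misalignment terms. The sketch below is only this simplified illustration under an assumed angle convention, not the paper's full parameter estimation.

      # Toy sketch of the front-face/back-face idea behind the Two-face method:
      # for an ideal instrument both residuals below are zero, and persistent
      # non-zero residuals expose axis-misalignment terms of the error model.
      # The angle convention and numbers are illustrative assumptions.
      def two_face_residuals(hz_front_deg, v_front_deg, hz_back_deg, v_back_deg):
          """Return (horizontal, vertical) two-face errors in degrees."""
          hz_err = ((hz_back_deg - hz_front_deg - 180.0) + 180.0) % 360.0 - 180.0
          v_err = (v_back_deg + v_front_deg) - 360.0
          return hz_err, v_err

      # Hypothetical readings for one target (degrees), showing a small error.
      print(two_face_residuals(30.0000, 85.0000, 210.0024, 275.0008))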

  7. A novel CUSUM-based approach for event detection in smart metering

    NASA Astrophysics Data System (ADS)

    Zhu, Zhicheng; Zhang, Shuai; Wei, Zhiqiang; Yin, Bo; Huang, Xianqing

    2018-03-01

    Non-intrusive load monitoring (NILM) plays a significant role in raising consumer awareness of household electricity use in order to reduce overall energy consumption in society. With regard to monitoring low power loads, many researchers have introduced CUSUM into the NILM system, since the traditional event detection method is not as effective as expected. Because the original CUSUM faces limitations when the shift is small and below the threshold, we improve the test statistic by allowing the permissible deviation to rise gradually as the data size increases. This paper proposes a novel event detection method and a corresponding criterion that could be used in NILM systems to recognize transient states and to help the labelling task. Its performance has been tested in a real scenario where eight different appliances are connected to the main line of electric power.
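
    A minimal two-sided CUSUM sketch with an allowance that grows slowly with the number of accumulated samples, echoing the modification described above, is given below; the exact statistic, allowance schedule and detection criterion used in the paper may differ.

      # Minimal two-sided CUSUM sketch for step-change (event) detection in a
      # power signal. The allowance grows slowly with the sample count to echo
      # the abstract's idea of a rising permissible deviation; the paper's exact
      # statistic may differ.
      import numpy as np

      def cusum_events(x, k0=2.0, h=15.0, growth=0.05):
          """Return indices where the CUSUM statistic crosses the threshold h."""
          mean = x[0]
          s_pos = s_neg = 0.0
          events = []
          for i, v in enumerate(x[1:], start=1):
              k = k0 + growth * np.log1p(i)          # slowly growing allowance
              s_pos = max(0.0, s_pos + (v - mean) - k)
              s_neg = max(0.0, s_neg - (v - mean) - k)
              if s_pos > h or s_neg > h:
                  events.append(i)
                  mean = v                            # re-anchor after an event
                  s_pos = s_neg = 0.0
          return events

      rng = np.random.default_rng(1)
      power = np.concatenate([rng.normal(100, 1, 200),    # base load
                              rng.normal(160, 1, 200)])   # appliance switches on
      print(cusum_events(power))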

  8. Robust Face Detection from Still Images

    DTIC Science & Technology

    2014-01-01

    ...significant change in false acceptance rates. Keywords: face detection; illumination; skin color variation; Haar-like features; OpenCV. ...OpenCV and an algorithm which used histogram equalization. The test is performed against 17 subjects under 576 viewing conditions from the extended Yale... The original OpenCV algorithm proved the least accurate, having a hit rate of only 75.6%. It also had the lowest FAR, but only by a slight margin, at 25.2%.

  9. A new paradigm of oral cancer detection using digital infrared thermal imaging

    NASA Astrophysics Data System (ADS)

    Chakraborty, M.; Mukhopadhyay, S.; Dasgupta, A.; Banerjee, S.; Mukhopadhyay, S.; Patsa, S.; Ray, J. G.; Chaudhuri, K.

    2016-03-01

    Histopathology is considered the gold standard for oral cancer detection. But a major fraction of the patient population is incapable of accessing such healthcare facilities due to poverty. Moreover, such analysis may report false negatives when the test tissue is not collected from the exact cancerous location. The proposed work introduces a pioneering computer aided paradigm of fast, non-invasive and non-ionizing modality for oral cancer detection using Digital Infrared Thermal Imaging (DITI). Due to aberrant metabolic activities in carcinogenic facial regions, heat signatures of patients are different from those of normal subjects. The proposed work utilizes asymmetry of temperature distribution of facial regions as the principal cue for cancer detection. Three views of a subject, viz. front, left and right, are acquired using a long infrared (7.5-13 μm) camera for analysing the distribution of temperature. We study asymmetry of facial temperature distribution between: a) left and right profile faces and b) left and right halves of the frontal face. Comparison of temperature distribution suggests that patients manifest greater asymmetry compared to normal subjects. For classification, we initially use k-means and fuzzy k-means for unsupervised clustering followed by cluster class prototype assignment based on majority voting. Average classification accuracies of 91.5% and 92.8% are achieved by the k-means and fuzzy k-means frameworks for the frontal face. The corresponding metrics for the profile face are 93.4% and 95%. Combining features of frontal and profile faces, average accuracies are increased to 96.2% and 97.6% respectively for the k-means and fuzzy k-means frameworks.
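
    The classification stage described above (unsupervised clustering followed by majority-vote labelling of clusters) can be sketched as below on synthetic asymmetry features; the features, cluster count and labels are illustrative stand-ins for the thermal asymmetry measures used in the paper.

      # Minimal sketch of the classification stage: k-means clustering of facial
      # temperature-asymmetry features followed by assigning each cluster the
      # majority label of its members. The features here are synthetic stand-ins.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      normal = rng.normal(0.3, 0.1, size=(30, 3))     # low asymmetry (synthetic)
      patient = rng.normal(0.9, 0.15, size=(30, 3))   # high asymmetry (synthetic)
      X = np.vstack([normal, patient])
      y = np.array([0] * 30 + [1] * 30)

      km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

      # Majority voting: map each cluster to the most frequent true label inside it.
      pred = np.empty_like(y)
      for c in range(2):
          members = km.labels_ == c
          pred[members] = np.bincount(y[members]).argmax()
      print("accuracy:", (pred == y).mean())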

  10. Optimized detection of steering via linear criteria for arbitrary-dimensional states

    NASA Astrophysics Data System (ADS)

    Zheng, Yu-Lin; Zhen, Yi-Zheng; Cao, Wen-Fei; Li, Li; Chen, Zeng-Bing; Liu, Nai-Le; Chen, Kai

    2017-03-01

    Einstein-Podolsky-Rosen (EPR) steering, as a new form of nonlocality, stands between entanglement and Bell nonlocality, implying promising applications for quantum information tasks. The problem of detecting EPR steering plays an important role in characterization of quantum nonlocality. Despite some significant progress, one still faces a practical issue: how to detect EPR steering in an experimentally friendly fashion. Resorting to an EPR steering inequality, one is required to apply a strategy as efficiently as possible for any selected measurement settings on the two subsystems, one of which may not be trusted. Inspired by the recent powerful linear criteria proposed by Saunders et al. [D. J. Saunders, S. J. Jones, H. M. Wiseman, and G. J. Pryde, Nat. Phys. 6, 845 (2010)., 10.1038/nphys1766], we present an optimized method of certifying steering for an arbitrary-dimensional state in a cost-effective manner. We provide a practical way to signify steering via only a few settings to optimally violate the steering inequality. Our method leads to steering detections in a highly efficient way, and can be performed with any number of settings, for an arbitrary bipartite mixed state, which can reduce experimental overheads significantly.

  11. Detecting failure of climate predictions

    USGS Publications Warehouse

    Runge, Michael C.; Stroeve, Julienne C.; Barrett, Andrew P.; McDonald-Madden, Eve

    2016-01-01

    The practical consequences of climate change challenge society to formulate responses that are more suited to achieving long-term objectives, even if those responses have to be made in the face of uncertainty [1, 2]. Such a decision-analytic focus uses the products of climate science as probabilistic predictions about the effects of management policies [3]. Here we present methods to detect when climate predictions are failing to capture the system dynamics. For a single model, we measure goodness of fit based on the empirical distribution function, and define failure when the distribution of observed values significantly diverges from the modelled distribution. For a set of models, the same statistic can be used to provide relative weights for the individual models, and we define failure when there is no linear weighting of the ensemble models that produces a satisfactory match to the observations. Early detection of failure of a set of predictions is important for improving model predictions and the decisions based on them. We show that these methods would have detected a range shift in northern pintail 20 years before it was actually discovered, and are increasingly giving more weight to those climate models that forecast a September ice-free Arctic by 2055.
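
    As a stand-in illustration of the single-model failure test, the sketch below compares the empirical distribution of observations against draws from a model's predictive distribution using a two-sample Kolmogorov-Smirnov test; the paper's statistic is based on the empirical distribution function but is not necessarily this test, and the data are synthetic.

      # Minimal sketch of flagging a failing prediction: compare the empirical
      # distribution of observations with draws from the model's predictive
      # distribution. A two-sample KS test is used here as a stand-in for the
      # paper's empirical-distribution-function statistic; data are synthetic.
      import numpy as np
      from scipy.stats import ks_2samp

      rng = np.random.default_rng(0)
      predicted = rng.normal(loc=0.0, scale=1.0, size=5000)   # model's predictive draws
      observed = rng.normal(loc=0.6, scale=1.0, size=40)      # shifted reality

      stat, p_value = ks_2samp(observed, predicted)
      print("failure detected:", p_value < 0.05)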

  12. Framework for objective evaluation of privacy filters

    NASA Astrophysics Data System (ADS)

    Korshunov, Pavel; Melle, Andrea; Dugelay, Jean-Luc; Ebrahimi, Touradj

    2013-09-01

    Extensive adoption of video surveillance, affecting many aspects of our daily lives, alarms the public about the increasing invasion of personal privacy. To address these concerns, many tools have been proposed for the protection of personal privacy in image and video. However, little is understood regarding the effectiveness of such tools and especially their impact on the underlying surveillance tasks, leading to a tradeoff between the preservation of privacy offered by these tools and the intelligibility of activities under video surveillance. In this paper, we investigate this privacy-intelligibility tradeoff by proposing an objective framework for the evaluation of privacy filters. We apply the proposed framework to a use case where the privacy of people is protected by obscuring faces, assuming an automated video surveillance system. We used several popular privacy protection filters, such as blurring, pixelization, and masking, and applied them with varying strengths to people's faces from different public datasets of video surveillance footage. The accuracy of a face detection algorithm was used as a measure of intelligibility (a face should be detected to perform a surveillance task), and the accuracy of a face recognition algorithm as a measure of privacy (a specific person should not be identified). Under these conditions, after application of an ideal privacy protection tool, an obfuscated face would be visible as a face but would not be correctly identified by the recognition algorithm. The experiments demonstrate that, in general, an increase in strength of the privacy filters under consideration leads to an increase in privacy (i.e., reduction in recognition accuracy) and to a decrease in intelligibility (i.e., reduction in detection accuracy). Masking is also shown to be the most favorable filter across all tested datasets.
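
    Two of the evaluated filters, Gaussian blurring and pixelization, can be sketched with OpenCV as below, applied to faces found by a Haar cascade; the dataset-level scoring of intelligibility (detection accuracy) and privacy (recognition accuracy) is omitted, and the image path is a placeholder.

      # Minimal OpenCV sketch of two privacy filters (Gaussian blur and
      # pixelization) applied to faces found by a Haar cascade. The framework's
      # evaluation over detection/recognition accuracy is omitted; "frame.jpg"
      # is a placeholder path.
      import cv2

      img = cv2.imread("frame.jpg")
      cascade = cv2.CascadeClassifier(cv2.data.haarcascades +
                                      "haarcascade_frontalface_default.xml")
      faces = cascade.detectMultiScale(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 1.1, 5)

      for (x, y, w, h) in faces:
          roi = img[y:y + h, x:x + w]

          blurred = cv2.GaussianBlur(roi, (31, 31), 0)            # blur filter

          small = cv2.resize(roi, (8, 8), interpolation=cv2.INTER_LINEAR)
          pixelized = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)

          img[y:y + h, x:x + w] = pixelized    # apply one filter (strength = block count)

      cv2.imwrite("frame_private.jpg", img)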

  13. Elastic Face, An Anatomy-Based Biometrics Beyond Visible Cue

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsap, L V; Zhang, Y; Kundu, S J

    2004-03-29

    This paper describes a face recognition method that is designed based on the consideration of anatomical and biomechanical characteristics of facial tissues. Elastic strain pattern inferred from face expression can reveal an individual's biometric signature associated with the underlying anatomical structure, and thus has the potential for face recognition. A method based on the continuum mechanics in finite element formulation is employed to compute the strain pattern. Experiments show very promising results. The proposed method is quite different from other face recognition methods and both its advantages and limitations, as well as future research for improvement are discussed.

  14. People counting in classroom based on video surveillance

    NASA Astrophysics Data System (ADS)

    Zhang, Quanbin; Huang, Xiang; Su, Juan

    2014-11-01

    Currently, the switches of the lights and other electronic devices in the classroom are mainly controlled manually; as a result, many lights are on while no one, or only a few people, are in the classroom. It is important to change this situation and control the electronic devices intelligently according to the number and distribution of the students in the classroom, so as to reduce the considerable waste of electrical resources. This paper studies the problem of people counting in classrooms based on video surveillance. As the camera in the classroom cannot capture the full body contours or clear facial features, most classical algorithms, such as pedestrian detection based on HOG (histograms of oriented gradient) features and face detection based on machine learning, are unable to obtain satisfactory results. A new kind of dual background updating model based on sparse and low-rank matrix decomposition is proposed in this paper, based on the fact that most students in the classroom are nearly stationary, with only occasional body movement. Firstly, the frame difference is combined with the sparse and low-rank matrix decomposition to predict the moving areas, and the background model is updated with different parameters according to the positional relationship between the pixels of the current video frame and the predicted motion regions. Secondly, the regions of moving objects are determined from the updated background using the background subtraction method. Finally, some operations including binarization, median filtering, morphology processing, and connected component detection are performed on the regions acquired by background subtraction, in order to reduce the effects of noise and obtain the number of people in the classroom. The experimental results show the validity of the people counting algorithm.
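
    The post-processing chain (background subtraction, binarization, median filtering, morphological cleanup and connected-component counting) can be sketched with OpenCV as below; the paper's dual sparse/low-rank background model is not reproduced here, a standard MOG2 background model stands in for it, and the video path and area threshold are illustrative.

      # Minimal sketch of the counting chain after background modelling. The
      # paper's dual sparse/low-rank background model is replaced by a standard
      # MOG2 model; "classroom.mp4" and the blob-area threshold are placeholders.
      import cv2

      cap = cv2.VideoCapture("classroom.mp4")
      bg = cv2.createBackgroundSubtractorMOG2(history=300, detectShadows=False)

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          mask = bg.apply(frame)
          _, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)   # binarization
          mask = cv2.medianBlur(mask, 5)                               # median filtering
          kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
          mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)       # morphology

          n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
          # Ignore the background label 0 and tiny noise blobs.
          people = sum(1 for i in range(1, n_labels)
                       if stats[i, cv2.CC_STAT_AREA] > 400)
          print("regions counted as people:", people)
      cap.release()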

  15. Aging and Emotion Recognition: Not Just a Losing Matter

    PubMed Central

    Sze, Jocelyn A.; Goodkind, Madeleine S.; Gyurak, Anett; Levenson, Robert W.

    2013-01-01

    Past studies on emotion recognition and aging have found evidence of age-related decline when emotion recognition was assessed by having participants detect single emotions depicted in static images of full or partial (e.g., eye region) faces. These tests afford good experimental control but do not capture the dynamic nature of real-world emotion recognition, which is often characterized by continuous emotional judgments and dynamic multi-modal stimuli. Research suggests that older adults often perform better under conditions that better mimic real-world social contexts. We assessed emotion recognition in young, middle-aged, and older adults using two traditional methods (single emotion judgments of static images of faces and eyes) and an additional method in which participants made continuous emotion judgments of dynamic, multi-modal stimuli (videotaped interactions between young, middle-aged, and older couples). Results revealed an age by test interaction. Largely consistent with prior research, we found some evidence that older adults performed worse than young adults when judging single emotions from images of faces (for sad and disgust faces only) and eyes (for older eyes only), with middle-aged adults falling in between. In contrast, older adults did better than young adults on the test involving continuous emotion judgments of dyadic interactions, with middle-aged adults falling in between. In tests in which target stimuli differed in age, emotion recognition was not facilitated by an age match between participant and target. These findings are discussed in terms of theoretical and methodological implications for the study of aging and emotional processing. PMID:22823183

  16. Photogrammetric Analysis of Attractiveness in Indian Faces

    PubMed Central

    Duggal, Shveta; Kapoor, DN; Verma, Santosh; Sagar, Mahesh; Lee, Yung-Seop; Moon, Hyoungjin

    2016-01-01

    Background The objective of this study was to assess the attractive facial features of the Indian population. We tried to evaluate subjective ratings of facial attractiveness and identify which facial aesthetic subunits were important for facial attractiveness. Methods A cross-sectional study was conducted of 150 samples (referred to as candidates). Frontal photographs were analyzed. An orthodontist, a prosthodontist, an oral surgeon, a dentist, an artist, a photographer and two laymen (estimators) subjectively evaluated candidates' faces using visual analog scale (VAS) scores. As an objective method for facial analysis, we used balanced angular proportional analysis (BAPA). Using SAS 10.1 (SAS Institute Inc.), Tukey's studentized range test and Pearson correlation analysis were performed to detect between-group differences in VAS scores (Experiment 1), to identify correlations between VAS scores and BAPA scores (Experiment 2), and to analyze the characteristic features of facial attractiveness and gender differences (Experiment 3); the significance level was set at P=0.05. Results Experiment 1 revealed some differences in VAS scores according to professional characteristics. In Experiment 2, BAPA scores were found to behave similarly to subjective ratings of facial beauty, but showed a relatively weak correlation coefficient with the VAS scores. Experiment 3 found that the decisive factors for facial attractiveness were different for men and women. Composite images of attractive Indian male and female faces were constructed. Conclusions Our photogrammetric study, statistical analysis, and average composite faces of an Indian population provide valuable information about subjective perceptions of facial beauty and attractive facial structures in the Indian population. PMID:27019809

  17. Optical Security System Based on the Biometrics Using Holographic Storage Technique with a Simple Data Format

    NASA Astrophysics Data System (ADS)

    Jun, An Won

    2006-01-01

    We implement a first practical holographic security system using electrical biometrics that combines optical encryption and digital holographic memory technologies. Optical information for identification includes a picture of the face, a name, and a fingerprint, which have been spatially multiplexed by a random phase mask used as a decryption key. For decryption in our biometric security system, a bit-error-detection method is used that compares the digital bits of the live fingerprint with those of the fingerprint information extracted from the hologram.

  18. Using YOLO based deep learning network for real time detection and localization of lung nodules from low dose CT scans

    NASA Astrophysics Data System (ADS)

    Ramachandran S., Sindhu; George, Jose; Skaria, Shibon; V. V., Varun

    2018-02-01

    Lung cancer is the leading cause of cancer related deaths in the world. The survival rate can be improved if the presence of lung nodules is detected early. This has also led to more focus being given to computer aided detection (CAD) and diagnosis of lung nodules. The arbitrariness of shape, size and texture of lung nodules is a challenge to be faced when developing these detection systems. In the proposed work we use convolutional neural networks to learn the features for nodule detection, replacing the traditional method of handcrafting features like geometric shape or texture. Our network uses the DetectNet architecture based on YOLO (You Only Look Once) to detect the nodules in CT scans of the lung. In this architecture, object detection is treated as a regression problem, with a single convolutional network simultaneously predicting multiple bounding boxes and class probabilities for those boxes. By training on chest CT scans from the Lung Image Database Consortium (LIDC) using NVIDIA DIGITS and the Caffe deep learning framework, we show that nodule detection using this single neural network can result in reasonably low false positive rates with high sensitivity and precision.
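
    YOLO-style detectors such as the DetectNet model described above predict many overlapping candidate boxes with confidences, which are then reduced to final detections by IoU-based non-maximum suppression; a minimal NumPy sketch of that standard post-processing step (not tied to DIGITS or Caffe) is given below.

      # Minimal NumPy sketch of IoU-based non-maximum suppression, the standard
      # post-processing step that YOLO-style detectors use to reduce overlapping
      # box predictions to final detections. Boxes are [x1, y1, x2, y2]; scores
      # are per-box confidences; the example boxes are synthetic.
      import numpy as np

      def iou(box, boxes):
          x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
          x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
          inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
          area_a = (box[2] - box[0]) * (box[3] - box[1])
          area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
          return inter / (area_a + area_b - inter)

      def nms(boxes, scores, iou_thresh=0.5):
          order = np.argsort(scores)[::-1]
          keep = []
          while order.size:
              i = order[0]
              keep.append(i)
              rest = order[1:]
              order = rest[iou(boxes[i], boxes[rest]) < iou_thresh]
          return keep

      boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 140, 140]], float)
      scores = np.array([0.9, 0.8, 0.7])
      print(nms(boxes, scores))   # keeps boxes 0 and 2, suppresses the duplicate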

  19. Personalized Online Learning Labs and Face-To-Face Teaching in First-Year College English Courses

    ERIC Educational Resources Information Center

    Sizemore, Mary L.

    2017-01-01

    The purpose of this two-phase, explanatory mixed methods study was to understand the benefits of teaching grammar from three different learning methods: face-to-face, online personalized learning lab and a blended learning method. The study obtained quantitative results from a pre and post-tests, a general survey and writing assignment rubrics…

  20. Counting to 20: Online Implementation of a Face-to-Face, Elementary Mathematics Methods Problem-Solving Activity

    ERIC Educational Resources Information Center

    Schwartz, Catherine Stein

    2012-01-01

    This study describes implementation of the same problem-solving activity in both online and face-to-face environments. The activity, done in the first class period or first module of a K-2 mathematics methods course, was initially used in a face-to-face class and then adapted later for use in an online class. While the task was originally designed…

  1. Improvement in Social Competence Using a Randomized Trial of a Theatre Intervention for Children with Autism Spectrum Disorder.

    PubMed

    Corbett, Blythe A; Key, Alexandra P; Qualls, Lydia; Fecteau, Stephanie; Newsom, Cassandra; Coke, Catherine; Yoder, Paul

    2016-02-01

    The efficacy of a peer-mediated, theatre-based intervention on social competence in participants with autism spectrum disorder (ASD) was tested. Thirty 8-to-14 year-olds with ASD were randomly assigned to the treatment (n = 17) or a wait-list control (n = 13) group. Immediately after treatment, group effects were seen on social ability (d = .77), communication symptoms (d = -.86), group play with toys in the company of peers (d = .77), immediate memory of faces as measured by neuropsychological (d = .75) and ERP methods (d = .93), delayed memory for faces (d = .98), and theory of mind (d = .99). At the 2 month follow-up period, group effects were detected on communication symptoms (d = .82). The results of this pilot clinical trial provide initial support for the efficacy of the theatre-based intervention.

  2. More efficient rejection of happy than of angry face distractors in visual search.

    PubMed

    Horstmann, Gernot; Scharlau, Ingrid; Ansorge, Ulrich

    2006-12-01

    In the present study, we examined whether the detection advantage for negative-face targets in crowds of positive-face distractors over positive-face targets in crowds of negative faces can be explained by differentially efficient distractor rejection. Search Condition A demonstrated more efficient distractor rejection with negative-face targets in positive-face crowds than vice versa. Search Condition B showed that target identity alone is not sufficient to account for this effect, because there was no difference in processing efficiency for positive- and negative-face targets within neutral crowds. Search Condition C showed differentially efficient processing with neutral-face targets among positive- or negative-face distractors. These results were obtained with both a within-participants (Experiment 1) and a between-participants (Experiment 2) design. The pattern of results is consistent with the assumption that efficient rejection of positive (more homogenous) distractors is an important determinant of performance in search among (face) distractors.

  3. Face recognition using slow feature analysis and contourlet transform

    NASA Astrophysics Data System (ADS)

    Wang, Yuehao; Peng, Lingling; Zhe, Fuchuan

    2018-04-01

    In this paper we propose a novel face recognition approach based on slow feature analysis (SFA) in the contourlet transform domain. This method first uses the contourlet transform to decompose the face image into low-frequency and high-frequency parts, and then takes advantage of slow feature analysis for facial feature extraction. We name the new method, which combines slow feature analysis and the contourlet transform, CT-SFA. The experimental results on international standard face databases demonstrate that the new face recognition method is effective and competitive.
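
    A minimal sketch of the linear slow feature analysis step on a generic multichannel sequence is given below: whiten the signals, then keep the directions in which the temporal differences have the smallest variance; the contourlet decomposition applied before SFA in the paper is omitted, and the test signal is synthetic.

      # Minimal sketch of linear slow feature analysis (SFA): whiten the inputs,
      # then take the directions with the smallest variance of the temporal
      # differences. The contourlet front-end used in the paper is omitted.
      import numpy as np

      def sfa(X, n_components=2):
          """X: (T, D) time series. Returns the (T, n_components) slowest features."""
          X = X - X.mean(axis=0)
          cov = np.cov(X, rowvar=False)
          vals, vecs = np.linalg.eigh(cov)
          keep = vals > 1e-10
          W = vecs[:, keep] / np.sqrt(vals[keep])       # whitening matrix
          Z = X @ W
          dZ = np.diff(Z, axis=0)                       # temporal differences
          dvals, dvecs = np.linalg.eigh(np.cov(dZ, rowvar=False))
          return Z @ dvecs[:, :n_components]            # smallest-variance directions

      t = np.linspace(0, 4 * np.pi, 500)
      slow, fast = np.sin(t), np.sin(25 * t)
      X = np.column_stack([slow + 0.1 * fast, fast + 0.1 * slow, slow * fast])
      print(sfa(X, n_components=1)[:5].ravel())   # dominated by the slow component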

  4. Change detection of medical images using dictionary learning techniques and principal component analysis.

    PubMed

    Nika, Varvara; Babyn, Paul; Zhu, Hongmei

    2014-07-01

    Automatic change detection methods for identifying the changes of serial MR images taken at different times are of great interest to radiologists. The majority of existing change detection methods in medical imaging, and those of brain images in particular, include many preprocessing steps and rely mostly on statistical analysis of magnetic resonance imaging (MRI) scans. Although most methods utilize registration software, tissue classification remains a difficult and overwhelming task. Recently, dictionary learning techniques are being used in many areas of image processing, such as image surveillance, face recognition, remote sensing, and medical imaging. We present an improved version of the EigenBlockCD algorithm, named the EigenBlockCD-2. The EigenBlockCD-2 algorithm performs an initial global registration and identifies the changes between serial MR images of the brain. Blocks of pixels from a baseline scan are used to train local dictionaries to detect changes in the follow-up scan. We use PCA to reduce the dimensionality of the local dictionaries and the redundancy of data. Choosing the appropriate distance measure significantly affects the performance of our algorithm. We examine the differences between [Formula: see text] and [Formula: see text] norms as two possible similarity measures in the improved EigenBlockCD-2 algorithm. We show the advantages of the [Formula: see text] norm over the [Formula: see text] norm both theoretically and numerically. We also demonstrate the performance of the new EigenBlockCD-2 algorithm for detecting changes of MR images and compare our results with those provided in the recent literature. Experimental results with both simulated and real MRI scans show that our improved EigenBlockCD-2 algorithm outperforms the previous methods. It detects clinical changes while ignoring the changes due to the patient's position and other acquisition artifacts.
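
    The local-dictionary idea can be sketched as below: for each block of the (already registered) follow-up image, nearby baseline blocks are reduced with PCA and a change is flagged where the follow-up block is poorly reconstructed by that local basis, using the Euclidean norm of the residual; the block size, search radius, threshold and synthetic images are illustrative choices rather than the EigenBlockCD-2 settings.

      # Minimal sketch of PCA-based local-dictionary change detection in the
      # spirit of EigenBlockCD: flag blocks of the follow-up image that are
      # poorly reconstructed by a PCA basis of neighbouring baseline blocks.
      # Block size, search radius, threshold and images are illustrative.
      import numpy as np
      from sklearn.decomposition import PCA

      def block_change_map(baseline, followup, block=8, search=4, n_comp=5, thresh=8.0):
          H, W = baseline.shape
          change = np.zeros((H // block, W // block), dtype=bool)
          for bi, y in enumerate(range(0, H - block + 1, block)):
              for bj, x in enumerate(range(0, W - block + 1, block)):
                  # Local dictionary: baseline blocks from a small neighbourhood.
                  atoms = [baseline[y + dy:y + dy + block, x + dx:x + dx + block].ravel()
                           for dy in range(-search, search + 1)
                           for dx in range(-search, search + 1)
                           if 0 <= y + dy <= H - block and 0 <= x + dx <= W - block]
                  pca = PCA(n_components=n_comp).fit(np.array(atoms))
                  target = followup[y:y + block, x:x + block].ravel()[None, :]
                  recon = pca.inverse_transform(pca.transform(target))
                  change[bi, bj] = np.linalg.norm(target - recon) > thresh
          return change

      yy, xx = np.mgrid[0:64, 0:64].astype(float)
      baseline = 0.8 * yy + 0.5 * xx                      # smooth synthetic "scan"
      rng = np.random.default_rng(0)
      followup = baseline + rng.normal(0, 0.5, baseline.shape)
      followup += 30.0 * np.exp(-((yy - 28) ** 2 + (xx - 28) ** 2) / 50.0)  # new "lesion"
      print(block_change_map(baseline, followup).astype(int))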

  5. Bounds on the minimum number of recombination events in a sample history.

    PubMed Central

    Myers, Simon R; Griffiths, Robert C

    2003-01-01

    Recombination is an important evolutionary factor in many organisms, including humans, and understanding its effects is an important task facing geneticists. Detecting past recombination events is thus important; this article introduces statistics that give a lower bound on the number of recombination events in the history of a sample, on the basis of the patterns of variation in the sample DNA. Such lower bounds are appropriate, since many recombination events in the history are typically undetectable, so the true number of historical recombinations is unobtainable. The statistics can be calculated quickly by computer and improve upon the earlier bound of Hudson and Kaplan 1985. A method is developed to combine bounds on local regions in the data to produce more powerful improved bounds. The method is flexible to different models of recombination occurrence. The approach gives recombination event bounds between all pairs of sites, to help identify regions with more detectable recombinations, and these bounds can be viewed graphically. Under coalescent simulations, there is a substantial improvement over the earlier method (of up to a factor of 2) in the expected number of recombination events detected by one of the new minima, across a wide range of parameter values. The method is applied to data from a region within the lipoprotein lipase gene and the amount of detected recombination is substantially increased. Further, there is strong clustering of detected recombination events in an area near the center of the region. A program implementing these statistics, which was used for this article, is available from http://www.stats.ox.ac.uk/mathgen/programs.html. PMID:12586723
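
    For orientation, the sketch below implements the classical Hudson and Kaplan (1985) lower bound that this article improves upon: a pair of biallelic sites exhibiting all four gametes implies at least one recombination between them, and counting non-overlapping incompatible intervals gives a lower bound on the number of events; the article's new statistics combine local bounds to obtain tighter minima.

      # Minimal sketch of the classical Hudson-Kaplan lower bound Rm: scan all
      # pairs of sites with the four-gamete test and greedily count
      # non-overlapping incompatible intervals. The article's improved statistics
      # combine local bounds to obtain tighter minima than this.
      import numpy as np

      def hudson_kaplan_rm(haplotypes):
          """haplotypes: (n_samples, n_sites) 0/1 array. Returns the Rm lower bound."""
          n_sites = haplotypes.shape[1]
          intervals = []
          for i in range(n_sites - 1):
              for j in range(i + 1, n_sites):
                  gametes = {tuple(row) for row in haplotypes[:, [i, j]]}
                  if len(gametes) == 4:                 # four-gamete test fails
                      intervals.append((i, j))
          # Greedily pick non-overlapping incompatible intervals.
          intervals.sort(key=lambda ij: ij[1])
          rm, last_end = 0, -1
          for start, end in intervals:
              if start >= last_end:
                  rm += 1
                  last_end = end
          return rm

      haps = np.array([[0, 0, 0, 0],
                       [0, 1, 1, 0],
                       [1, 0, 1, 1],
                       [1, 1, 0, 1]])
      print(hudson_kaplan_rm(haps))   # lower bound for this toy sample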

  6. Discrimination between smiling faces: Human observers vs. automated face analysis.

    PubMed

    Del Líbano, Mario; Calvo, Manuel G; Fernández-Martín, Andrés; Recio, Guillermo

    2018-05-11

    This study investigated (a) how prototypical happy faces (with happy eyes and a smile) can be discriminated from blended expressions with a smile but non-happy eyes, depending on type and intensity of the eye expression; and (b) how smile discrimination differs for human perceivers versus automated face analysis, depending on affective valence and morphological facial features. Human observers categorized faces as happy or non-happy, or rated their valence. Automated analysis (FACET software) computed seven expressions (including joy/happiness) and 20 facial action units (AUs). Physical properties (low-level image statistics and visual saliency) of the face stimuli were controlled. Results revealed, first, that some blended expressions (especially, with angry eyes) had lower discrimination thresholds (i.e., they were identified as "non-happy" at lower non-happy eye intensities) than others (especially, with neutral eyes). Second, discrimination sensitivity was better for human perceivers than for automated FACET analysis. As an additional finding, affective valence predicted human discrimination performance, whereas morphological AUs predicted FACET discrimination. FACET can be a valid tool for categorizing prototypical expressions, but is currently more limited than human observers for discrimination of blended expressions. Configural processing facilitates detection of in/congruence(s) across regions, and thus detection of non-genuine smiling faces (due to non-happy eyes). Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Precise optical observation of 0.5-GPa shock waves in condensed materials

    NASA Astrophysics Data System (ADS)

    Nagayama, Kunihito; Mori, Yasuhito

    1999-06-01

    A precision optical observation method was developed to study impact-generated high-pressure shock waves in condensed materials. The present method makes it possible to sensitively detect shock waves at relatively low shock stresses of around 0.5 GPa. The principle of the method is based on the use of total internal reflection by triangular prisms placed on the free surface of a target assembly. When a plane shock wave arrives at the free surface, the light reflected from the prisms is extinguished instantaneously. The reason is that the total internal reflection changes to a reflection that depends on the micron-scale roughness of the free surface after the shock arrival. The shock arrival at the bottom face of the prisms can be detected by two kinds of methods, i.e., a photographic method and a gauge method. The photographic method is an inclined prism method using a high-speed streak camera. The shock velocity and the shock tilt angle can be estimated accurately from the obtained streak photograph. In the gauge method, an in-material PVDF stress gauge is combined with an optical prism-pin. The PVDF gauge electrically records the stress profile behind the shock-wave front, and the Hugoniot data can be precisely measured by combining the prism pin with the PVDF gauge.

  8. Laser Doppler imaging of cutaneous blood flow through transparent face masks: a necessary preamble to computer-controlled rapid prototyping fabrication with submillimeter precision.

    PubMed

    Allely, Rebekah R; Van-Buendia, Lan B; Jeng, James C; White, Patricia; Wu, Jingshu; Niszczak, Jonathan; Jordan, Marion H

    2008-01-01

    A paradigm shift in management of postburn facial scarring is lurking "just beneath the waves" with the widespread availability of two recent technologies: precise three-dimensional scanning/digitizing of complex surfaces and computer-controlled rapid prototyping three-dimensional "printers". Laser Doppler imaging may be the sensible method to track the scar hyperemia that should form the basis of assessing progress and directing incremental changes in the digitized topographical face mask "prescription". The purpose of this study was to establish feasibility of detecting perfusion through transparent face masks using the Laser Doppler Imaging scanner. Laser Doppler images of perfusion were obtained at multiple facial regions on five uninjured staff members. Images were obtained without a mask, followed by images with a loose fitting mask with and without a silicone liner, and then with a tight fitting mask with and without a silicone liner. Right and left oblique images, in addition to the frontal images, were used to overcome unobtainable measurements at the extremes of face mask curvature. General linear model, mixed model, and t tests were used for data analysis. Three hundred seventy-five measurements were used for analysis, with a mean perfusion unit of 299 and pixel validity of 97%. The effect of face mask pressure with and without the silicone liner was readily quantified with significant changes in mean cutaneous blood flow (P < .5). High valid pixel rate laser Doppler imager flow data can be obtained through transparent face masks. Perfusion decreases with the application of pressure and with silicone. Every participant measured differently in perfusion units; however, consistent perfusion patterns in the face were observed.

  9. The Deceptively Simple N170 Reflects Network Information Processing Mechanisms Involving Visual Feature Coding and Transfer Across Hemispheres.

    PubMed

    Ince, Robin A A; Jaworska, Katarzyna; Gross, Joachim; Panzeri, Stefano; van Rijsbergen, Nicola J; Rousselet, Guillaume A; Schyns, Philippe G

    2016-08-22

    A key to understanding visual cognition is to determine "where", "when", and "how" brain responses reflect the processing of the specific visual features that modulate categorization behavior-the "what". The N170 is the earliest Event-Related Potential (ERP) that preferentially responds to faces. Here, we demonstrate that a paradigmatic shift is necessary to interpret the N170 as the product of an information processing network that dynamically codes and transfers face features across hemispheres, rather than as a local stimulus-driven event. Reverse-correlation methods coupled with information-theoretic analyses revealed that visibility of the eyes influences face detection behavior. The N170 initially reflects coding of the behaviorally relevant eye contralateral to the sensor, followed by a causal communication of the other eye from the other hemisphere. These findings demonstrate that the deceptively simple N170 ERP hides a complex network information processing mechanism involving initial coding and subsequent cross-hemispheric transfer of visual features. © The Author 2016. Published by Oxford University Press.

  10. Emission-Line Galaxies from the PEARS Hubble Ultra Deep Field: A 2-D Detection Method and First Results

    NASA Technical Reports Server (NTRS)

    Gardner, J. P.; Straughn, Amber N.; Meurer, Gerhardt R.; Pirzkal, Norbert; Cohen, Seth H.; Malhotra, Sangeeta; Rhoads, James; Windhorst, Rogier A.; Gardner, Jonathan P.; Hathi, Nimish P.

    2007-01-01

    The Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS) grism PEARS (Probing Evolution And Reionization Spectroscopically) survey provides a large dataset of low-resolution spectra from thousands of galaxies in the GOODS North and South fields. One important subset of objects in these data are emission-line galaxies (ELGs), and we have investigated several different methods aimed at systematically selecting these galaxies. Here we present a new methodology and results of a search for these ELGs in the PEARS observations of the Hubble Ultra Deep Field (HUDF) using a 2D detection method that utilizes the observation that many emission lines originate from clumpy knots within galaxies. This 2D line-finding method proves to be useful in detecting emission lines from compact knots within galaxies that might not otherwise be detected using more traditional 1D line-finding techniques. We find in total 96 emission lines in the HUDF, originating from 81 distinct "knots" within 63 individual galaxies. We find in general that [O III] emitters are the most common, comprising 44% of the sample, and on average have high equivalent widths (70% of [O III] emitters having rest-frame EW > 100 Å). There are 12 galaxies with multiple emitting knots; several show evidence of variations in H-alpha flux in the knots, suggesting that the differing star formation properties across a single galaxy can in general be probed at redshifts greater than approximately 0.2-0.4. The most prevalent morphologies are large face-on spirals and clumpy interacting systems, many being unique detections owing to the 2D method described here, thus highlighting the strength of this technique.

  11. Using microarray analysis to evaluate genetic polymorphisms involved in the metabolism of environmental chemicals.

    PubMed

    Ban, Susumu; Kondo, Tomoko; Ishizuka, Mayumi; Sasaki, Seiko; Konishi, Kanae; Washino, Noriaki; Fujita, Syoichi; Kishi, Reiko

    2007-05-01

    The field of molecular biology currently faces the need for a comprehensive method of evaluating individual differences derived from genetic variation in the form of single nucleotide polymorphisms (SNPs). SNPs in human genes are generally considered to be very useful in determining inherited genetic disorders, susceptibility to certain diseases, and cancer predisposition. Quick and accurate discrimination of SNPs is the key characteristic of technology used in DNA diagnostics. For this study, we first developed a DNA microarray and then evaluated its efficacy by determining the detection ability and validity of this method. Using DNA obtained from 380 pregnant Japanese women, we examined 13 polymorphisms of 9 genes, which are associated with the metabolism of environmental chemical compounds found in high frequency among Japanese populations. The ability to detect CYP1A1 I462V, CYP1B1 L432V, GSTP1 I105V and AhR R554K gene polymorphisms was above 98%, and agreement rates when compared with real time PCR analysis methods (kappa values) showed high validity: 0.98 (0.96), 0.97 (0.93), 0.90 (0.81), 0.90 (0.91), respectively. While this DNA microarray analysis should prove important as a method for initial screening, it is still necessary that we find better methods for improving the detection of other gene polymorphisms not part of this study.

  12. Biometric feature embedding using robust steganography technique

    NASA Astrophysics Data System (ADS)

    Rashid, Rasber D.; Sellahewa, Harin; Jassim, Sabah A.

    2013-05-01

    This paper is concerned with robust steganographic techniques to hide and communicate biometric data in mobile media objects like images over open networks. More specifically, the aim is to embed binarised features, extracted using discrete wavelet transforms and local binary patterns of face images, as a secret message in an image. The need for such techniques can arise in law enforcement, forensics, counter-terrorism, internet/mobile banking and border control. What differentiates this problem from normal information hiding techniques is the added requirement that there should be minimal effect on face recognition accuracy. We propose an LSB-Witness embedding technique in which the secret message is already present in the LSB plane, but instead of changing the cover image LSB values, the second LSB plane is changed to stand as a witness/informer to the receiver during message recovery. Although this approach may affect the stego quality, it eliminates the weakness of traditional LSB schemes that is exploited by steganalysis techniques for LSB, such as PoV and RS steganalysis, to detect the existence of a secret message. Experimental results show that the proposed method is robust against PoV and RS attacks compared to other variants of LSB. We also discuss variants of this approach and determine capacity requirements for embedding face biometric feature vectors while maintaining the accuracy of face recognition.
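
    A toy sketch of the witness idea as described above is given below, under an assumed convention in which the second LSB is set to 1 wherever the untouched cover LSB already equals the message bit and cleared otherwise; the paper's actual embedding and recovery protocol may differ.

      # Toy sketch of the LSB-Witness idea: the cover LSB plane is left untouched,
      # and the second LSB plane acts as a witness telling the receiver where the
      # existing LSBs already equal the message bits. The exact convention here
      # (witness bit = 1 where the cover LSB matches) is an illustrative assumption.
      import numpy as np

      def embed_witness(cover, message_bits):
          stego = cover.copy()
          flat = stego.ravel()
          for i, bit in enumerate(message_bits):
              if (flat[i] & 1) == bit:
                  flat[i] |= 0b10            # set second LSB: "LSB matches"
              else:
                  flat[i] &= 0b11111101      # clear second LSB: "LSB is inverted"
          return stego

      def recover_witness(stego, n_bits):
          flat = stego.ravel()
          bits = []
          for i in range(n_bits):
              lsb = int(flat[i] & 1)
              witness = int((flat[i] >> 1) & 1)
              bits.append(lsb if witness else 1 - lsb)
          return bits

      rng = np.random.default_rng(0)
      cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
      msg = [1, 0, 0, 1, 1, 1, 0, 1]
      stego = embed_witness(cover, msg)
      assert (stego & 1 == cover & 1).all()      # LSB plane untouched
      print(recover_witness(stego, len(msg)) == msg)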

  13. Face repetition detection and social interest: An ERP study in adults with and without Williams syndrome.

    PubMed

    Key, Alexandra P; Dykens, Elisabeth M

    2016-12-01

    The present study examined possible neural mechanisms underlying increased social interest in persons with Williams syndrome (WS). Visual event-related potentials (ERPs) during passive viewing were used to compare incidental memory traces for repeated vs. single presentations of previously unfamiliar social (faces) and nonsocial (houses) images in 26 adults with WS and 26 typical adults. Results indicated that participants with WS developed familiarity with the repeated faces and houses (frontal N400 response), but only typical adults evidenced the parietal old/new effect (previously associated with stimulus recollection) for the repeated faces. There was also no evidence of exceptional salience of social information in WS, as ERP markers of memory for repeated faces vs. houses were not significantly different. Thus, while persons with WS exhibit behavioral evidence of increased social interest, their processing of social information in the absence of specific instructions may be relatively superficial. The ERP evidence of face repetition detection in WS was independent of IQ and the earlier perceptual differentiation of social vs. nonsocial stimuli. Large individual differences in ERPs of participants with WS may provide valuable information for understanding the WS phenotype and have relevance for educational and treatment purposes.

  14. Comparison of Knowledge and Attitudes Using Computer-Based and Face-to-Face Personal Hygiene Training Methods in Food Processing Facilities

    ERIC Educational Resources Information Center

    Fenton, Ginger D.; LaBorde, Luke F.; Radhakrishna, Rama B.; Brown, J. Lynne; Cutter, Catherine N.

    2006-01-01

    Computer-based training is increasingly favored by food companies for training workers due to convenience, self-pacing ability, and ease of use. The objectives of this study were to determine if personal hygiene training, offered through a computer-based method, is as effective as a face-to-face method in knowledge acquisition and improved…

  15. Who is who: areas of the brain associated with recognizing and naming famous faces.

    PubMed

    Giussani, Carlo; Roux, Franck-Emmanuel; Bello, Lorenzo; Lauwers-Cances, Valérie; Papagno, Costanza; Gaini, Sergio M; Puel, Michelle; Démonet, Jean-François

    2009-02-01

    It has been hypothesized that specific brain regions may be involved in face naming. To spare these areas and to gain a better understanding of their organization, the authors studied patients who underwent surgery using direct electrical stimulation mapping for brain tumors, and they compared an object-naming task to a famous face-naming task. Fifty-six patients with brain tumors (39 and 17 in the left and right hemispheres, respectively) and with no significant preoperative overall language deficit were prospectively studied over a 2-year period. Four patients who had a partially selective famous face anomia and 2 with prosopagnosia were not included in the final analysis. Face-naming interferences were exclusively localized in small cortical areas (<1 cm²). Among 35 patients whose dominant left hemisphere was studied, 26 face-naming-specific areas (that is, sites of interference in face naming only and not in object naming) were found. These face-naming-specific sites were significantly detected in 2 regions: in the left frontal areas of the superior, middle, and inferior frontal gyri (p < 0.001) and in the anterior part of the superior and middle temporal gyri (p < 0.01). Variable patterns of interference were observed (speech arrest, anomia, phonemic, or semantic paraphasia), probably related to the different stages in famous face processing. Only 4 famous face-naming interferences were found in the right hemisphere. Relative anatomical segregation of naming categories within language areas was detected. This study showed that famous face naming was preferentially processed in the left frontal and anterior temporal gyri. The authors think it is necessary to adapt naming tasks in neurosurgical patients to the brain region studied.

  16. Dissociating maternal responses to sad and happy facial expressions of their own child: An fMRI study

    PubMed Central

    Hindi Attar, Catherine; Stein, Jenny; Poppinga, Sina; Fydrich, Thomas; Jaite, Charlotte; Kappel, Viola; Brunner, Romuald; Herpertz, Sabine C.; Boedeker, Katja; Bermpohl, Felix

    2017-01-01

    Background Maternal sensitive behavior depends on recognizing one's own child's affective states. The present study investigated distinct and overlapping neural responses of mothers to sad and happy facial expressions of their own child (in comparison to facial expressions of an unfamiliar child). Methods We used functional MRI to measure dissociable and overlapping activation patterns in 27 healthy mothers in response to happy, neutral and sad facial expressions of their own school-aged child and a gender- and age-matched unfamiliar child. To investigate differential activation to sad compared to happy faces of one's own child, we used interaction contrasts. During the scan, mothers had to indicate the affect of the presented face. After scanning, they were asked to rate the perceived emotional arousal and valence levels for each face using a 7-point Likert scale (adapted SAM version). Results While viewing their own child's sad faces, mothers showed activation in the amygdala and anterior cingulate cortex, whereas happy facial expressions of their own child elicited activation in the hippocampus. Conjoint activation in response to their own child's happy and sad expressions was found in the insula and the superior temporal gyrus. Conclusions Maternal brain activations differed depending on the child's affective state. Sad faces of their own child activated areas commonly associated with a threat detection network, whereas happy faces activated reward-related brain areas. Overlapping activation was found in empathy-related networks. These distinct neural activation patterns might facilitate sensitive maternal behavior. PMID:28806742

  17. Multiple Representations-Based Face Sketch-Photo Synthesis.

    PubMed

    Peng, Chunlei; Gao, Xinbo; Wang, Nannan; Tao, Dacheng; Li, Xuelong; Li, Jie

    2016-11-01

    Face sketch-photo synthesis plays an important role in law enforcement and digital entertainment. Most of the existing methods use only pixel intensities as the feature. Since face images can be described using features from multiple aspects, this paper presents a novel multiple representations-based face sketch-photo synthesis method that adaptively combines multiple representations to represent an image patch. In particular, it combines multiple features from face images processed using multiple filters and deploys Markov networks to exploit the interacting relationships between neighboring image patches. The proposed framework can be solved using an alternating optimization strategy, and it normally converges in only five outer iterations in the experiments. Our experimental results on the Chinese University of Hong Kong (CUHK) face sketch database, celebrity photos, the CUHK Face Sketch FERET Database, the IIIT-D Viewed Sketch Database, and forensic sketches demonstrate the effectiveness of our method for face sketch-photo synthesis. In addition, cross-database and database-dependent style-synthesis evaluations demonstrate the generalizability of this novel method and suggest promising solutions for face identification in forensic science.

  18. A Comparison of Face to Face and Video-Based Self Care Education on Quality of Life of Hemodialysis Patients

    PubMed Central

    Hemmati Maslakpak, Masumeh; Shams, Shadi

    2015-01-01

    Background End stage renal disease negatively affects the patients' quality of life. There are different educational methods to help these patients. This study was performed to compare the effectiveness of self-care education in two methods, face to face and video education, on the quality of life in patients under treatment by hemodialysis in education-medical centers in Urmia. Methods In this quasi-experimental study, 120 hemodialysis patients were selected randomly; they were then randomly allocated to three groups: the control, face to face education and video education. For the face to face group, education was given individually in two sessions of 35 to 45 minutes. For the video education group, a CD was shown. The Kidney Disease Quality of Life-Short Form (KDQOL-SF) questionnaire was filled out before and two months after the intervention. Data analysis was performed in SPSS software by using one-way ANOVA. Results ANOVA test showed a statistically significant difference in the quality of life scores among the three groups after the intervention (P=0.024). After the intervention, Tukey's post-hoc test showed no statistically significant difference between the video and face to face education groups regarding quality of life (P>0.05). Conclusion Implementation of the face to face and video education methods improves the quality of life in hemodialysis patients. So, it is suggested that video education should be used along with face to face education. PMID:26171412

  19. SparCLeS: dynamic l₁ sparse classifiers with level sets for robust beard/moustache detection and segmentation.

    PubMed

    Le, T Hoang Ngan; Luu, Khoa; Savvides, Marios

    2013-08-01

    Robust facial hair detection and segmentation is a highly valued soft biometric attribute for carrying out forensic facial analysis. In this paper, we propose a novel and fully automatic system, called SparCLeS, for beard/moustache detection and segmentation in challenging facial images. SparCLeS uses the multiscale self-quotient (MSQ) algorithm to preprocess facial images and deal with illumination variation. Histogram of oriented gradients (HOG) features are extracted from the preprocessed images and a dynamic sparse classifier is built using these features to classify a facial region as either containing skin or facial hair. A level set based approach, which makes use of the advantages of both global and local information, is then used to segment the regions of a face containing facial hair. Experimental results demonstrate the effectiveness of our proposed system in detecting and segmenting facial hair regions in images drawn from three databases, i.e., the NIST Multiple Biometric Grand Challenge (MBGC) still face database, the NIST Color Facial Recognition Technology FERET database, and the Labeled Faces in the Wild (LFW) database.

  20. Familiarity Detection is an Intrinsic Property of Cortical Microcircuits with Bidirectional Synaptic Plasticity.

    PubMed

    Zhang, Xiaoyu; Ju, Han; Penney, Trevor B; VanDongen, Antonius M J

    2017-01-01

    Humans instantly recognize a previously seen face as "familiar." To deepen our understanding of familiarity-novelty detection, we simulated biologically plausible neural network models of generic cortical microcircuits consisting of spiking neurons with random recurrent synaptic connections. NMDA receptor (NMDAR)-dependent synaptic plasticity was implemented to allow for unsupervised learning and bidirectional modifications. Network spiking activity evoked by sensory inputs consisting of face images altered synaptic efficacy, which resulted in the network responding more strongly to a previously seen face than a novel face. Network size determined how many faces could be accurately recognized as familiar. When the simulated model became sufficiently complex in structure, multiple familiarity traces could be retained in the same network by forming partially-overlapping subnetworks that differ slightly from each other, thereby resulting in a high storage capacity. Fisher's discriminant analysis was applied to identify critical neurons whose spiking activity predicted familiar input patterns. Intriguingly, as sensory exposure was prolonged, the selected critical neurons tended to appear at deeper layers of the network model, suggesting recruitment of additional circuits in the network for incremental information storage. We conclude that generic cortical microcircuits with bidirectional synaptic plasticity have an intrinsic ability to detect familiar inputs. This ability does not require a specialized wiring diagram or supervision and can therefore be expected to emerge naturally in developing cortical circuits.

  1. Familiarity Detection is an Intrinsic Property of Cortical Microcircuits with Bidirectional Synaptic Plasticity

    PubMed Central

    2017-01-01

    Abstract Humans instantly recognize a previously seen face as “familiar.” To deepen our understanding of familiarity-novelty detection, we simulated biologically plausible neural network models of generic cortical microcircuits consisting of spiking neurons with random recurrent synaptic connections. NMDA receptor (NMDAR)-dependent synaptic plasticity was implemented to allow for unsupervised learning and bidirectional modifications. Network spiking activity evoked by sensory inputs consisting of face images altered synaptic efficacy, which resulted in the network responding more strongly to a previously seen face than a novel face. Network size determined how many faces could be accurately recognized as familiar. When the simulated model became sufficiently complex in structure, multiple familiarity traces could be retained in the same network by forming partially-overlapping subnetworks that differ slightly from each other, thereby resulting in a high storage capacity. Fisher’s discriminant analysis was applied to identify critical neurons whose spiking activity predicted familiar input patterns. Intriguingly, as sensory exposure was prolonged, the selected critical neurons tended to appear at deeper layers of the network model, suggesting recruitment of additional circuits in the network for incremental information storage. We conclude that generic cortical microcircuits with bidirectional synaptic plasticity have an intrinsic ability to detect familiar inputs. This ability does not require a specialized wiring diagram or supervision and can therefore be expected to emerge naturally in developing cortical circuits. PMID:28534043

  2. Mobile acoustic transects miss rare bat species: implications of survey method and spatio-temporal sampling for monitoring bats

    PubMed Central

    Wallrichs, Megan A.; Ober, Holly K.; McCleery, Robert A.

    2017-01-01

    Due to increasing threats facing bats, long-term monitoring protocols are needed to inform conservation strategies. Effective monitoring should be easily repeatable while capturing spatio-temporal variation. Mobile acoustic driving transect surveys (‘mobile transects’) have been touted as a robust, cost-effective method to monitor bats; however, it is not clear how well mobile transects represent dynamic bat communities, especially when used as the sole survey approach. To assist biologists who must select a single survey method due to resource limitations, we assessed the effectiveness of three acoustic survey methods at detecting species richness in a vast protected area (Everglades National Park): (1) mobile transects, (2) stationary surveys that were strategically located by sources of open water and (3) stationary surveys that were replicated spatially across the landscape. We found that mobile transects underrepresented bat species richness compared to stationary surveys across all major vegetation communities and in two distinct seasons (dry/cool and wet/warm). Most critically, mobile transects failed to detect three rare bat species, one of which is federally endangered. Spatially replicated stationary surveys did not estimate higher species richness than strategically located stationary surveys, but increased the rate at which species were detected in one vegetation community. The survey strategy that detected maximum species richness and the highest mean nightly species richness with minimal effort was a strategically located stationary detector in each of two major vegetation communities during the wet/warm season. PMID:29134138

  3. Mobile acoustic transects miss rare bat species: implications of survey method and spatio-temporal sampling for monitoring bats.

    PubMed

    Braun de Torrez, Elizabeth C; Wallrichs, Megan A; Ober, Holly K; McCleery, Robert A

    2017-01-01

    Due to increasing threats facing bats, long-term monitoring protocols are needed to inform conservation strategies. Effective monitoring should be easily repeatable while capturing spatio-temporal variation. Mobile acoustic driving transect surveys ('mobile transects') have been touted as a robust, cost-effective method to monitor bats; however, it is not clear how well mobile transects represent dynamic bat communities, especially when used as the sole survey approach. To assist biologists who must select a single survey method due to resource limitations, we assessed the effectiveness of three acoustic survey methods at detecting species richness in a vast protected area (Everglades National Park): (1) mobile transects, (2) stationary surveys that were strategically located by sources of open water and (3) stationary surveys that were replicated spatially across the landscape. We found that mobile transects underrepresented bat species richness compared to stationary surveys across all major vegetation communities and in two distinct seasons (dry/cool and wet/warm). Most critically, mobile transects failed to detect three rare bat species, one of which is federally endangered. Spatially replicated stationary surveys did not estimate higher species richness than strategically located stationary surveys, but increased the rate at which species were detected in one vegetation community. The survey strategy that detected maximum species richness and the highest mean nightly species richness with minimal effort was a strategically located stationary detector in each of two major vegetation communities during the wet/warm season.

  4. High-resolution face verification using pore-scale facial features.

    PubMed

    Li, Dong; Zhou, Huiling; Lam, Kin-Man

    2015-08-01

    Face recognition methods, which usually represent face images using holistic or local facial features, rely heavily on alignment. Their performances also suffer a severe degradation under variations in expressions or poses, especially when there is only one gallery image per subject. With the easy access to high-resolution (HR) face images nowadays, some HR face databases have recently been developed. However, few studies have tackled the use of HR information for face recognition or verification. In this paper, we propose a pose-invariant face-verification method, which is robust to alignment errors, using the HR information based on pore-scale facial features. A new keypoint descriptor, namely, pore-Principal Component Analysis (PCA)-Scale Invariant Feature Transform (PPCASIFT), adapted from PCA-SIFT, is devised for the extraction of a compact set of distinctive pore-scale facial features. Having matched the pore-scale features of two face regions, an effective robust-fitting scheme is proposed for the face-verification task. Experiments show that, with only one frontal-view gallery image per subject, our proposed method outperforms a number of standard verification methods, and can achieve excellent accuracy even when the faces are under large variations in expression and pose.

  5. Performance Evaluation of Localization Accuracy for a Log-Normal Shadow Fading Wireless Sensor Network under Physical Barrier Attacks

    PubMed Central

    Abdulqader Hussein, Ahmed; Rahman, Tharek A.; Leow, Chee Yen

    2015-01-01

    Localization is an apparent aspect of a wireless sensor network, which is the focus of much interesting research. One of the severe conditions that needs to be taken into consideration is localizing a mobile target through a dispersed sensor network in the presence of physical barrier attacks. These attacks confuse the localization process and cause location estimation errors. Range-based methods, like the received signal strength indication (RSSI), face the major influence of this kind of attack. This paper proposes a solution based on a combination of multi-frequency multi-power localization (C-MFMPL) and step function multi-frequency multi-power localization (SF-MFMPL), including the fingerprint matching technique and lateration, to provide a robust and accurate localization technique. In addition, this paper proposes a grid coloring algorithm to detect the signal hole map in the network, which refers to the attack-prone regions, in order to carry out corrective actions. The simulation results show the enhancement and robustness of RSS localization performance in the face of log normal shadow fading effects, besides the presence of physical barrier attacks, through detecting, filtering and eliminating the effect of these attacks. PMID:26690159

  6. Performance Evaluation of Localization Accuracy for a Log-Normal Shadow Fading Wireless Sensor Network under Physical Barrier Attacks.

    PubMed

    Hussein, Ahmed Abdulqader; Rahman, Tharek A; Leow, Chee Yen

    2015-12-04

    Localization is an apparent aspect of a wireless sensor network, which is the focus of much interesting research. One of the severe conditions that needs to be taken into consideration is localizing a mobile target through a dispersed sensor network in the presence of physical barrier attacks. These attacks confuse the localization process and cause location estimation errors. Range-based methods, like the received signal strength indication (RSSI), face the major influence of this kind of attack. This paper proposes a solution based on a combination of multi-frequency multi-power localization (C-MFMPL) and step function multi-frequency multi-power localization (SF-MFMPL), including the fingerprint matching technique and lateration, to provide a robust and accurate localization technique. In addition, this paper proposes a grid coloring algorithm to detect the signal hole map in the network, which refers to the attack-prone regions, in order to carry out corrective actions. The simulation results show the enhancement and robustness of RSS localization performance in the face of log normal shadow fading effects, besides the presence of physical barrier attacks, through detecting, filtering and eliminating the effect of these attacks.
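    For background (this is not the paper's C-MFMPL/SF-MFMPL algorithm), RSSI-based ranging under log-normal shadow fading is usually built on the log-distance path-loss model; a minimal sketch with illustrative, assumed parameter values:

```python
# Background sketch, not the paper's algorithm: the log-distance path-loss
# model with log-normal shadowing that underlies RSSI range estimates in
# work like the above. Parameter values are illustrative assumptions.
import math
import random

def rssi_at(distance_m, tx_power_dbm=-20.0, path_loss_exp=2.7,
            d0=1.0, shadow_sigma_db=4.0):
    """Received power (dBm) at distance_m with log-normal shadowing."""
    mean = tx_power_dbm - 10.0 * path_loss_exp * math.log10(distance_m / d0)
    return mean + random.gauss(0.0, shadow_sigma_db)

def distance_from_rssi(rssi_dbm, tx_power_dbm=-20.0, path_loss_exp=2.7, d0=1.0):
    """Invert the mean path-loss model to obtain a range estimate (metres)."""
    return d0 * 10.0 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

if __name__ == "__main__":
    random.seed(1)
    true_d = 12.0
    measured = rssi_at(true_d)
    print(f"measured RSSI {measured:.1f} dBm -> "
          f"estimated range {distance_from_rssi(measured):.1f} m (true {true_d} m)")
```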

  7. Reducing impaired-driving recidivism using advanced vehicle-based alcohol detection systems : a report to Congress

    DOT National Transportation Integrated Search

    2007-12-01

    Vehicle-based alcohol detection systems use technologies designed to detect the presence of alcohol in a driver. Technology suitable for use in all vehicles that will detect an impaired driver faces many challenges including public acceptability, pas...

  8. Sub-pattern based multi-manifold discriminant analysis for face recognition

    NASA Astrophysics Data System (ADS)

    Dai, Jiangyan; Guo, Changlu; Zhou, Wei; Shi, Yanjiao; Cong, Lin; Yi, Yugen

    2018-04-01

    In this paper, we present a Sub-pattern based Multi-manifold Discriminant Analysis (SpMMDA) algorithm for face recognition. Unlike the existing Multi-manifold Discriminant Analysis (MMDA) approach, which is based on holistic information of the face image for recognition, SpMMDA operates on sub-images partitioned from the original face image and then extracts the discriminative local features from the sub-images separately. Moreover, the structure information of different sub-images from the same face image is considered in the proposed method with the aim of further improving the recognition performance. Extensive experiments on three standard face databases (Extended YaleB, CMU PIE and AR) demonstrate that the proposed method is effective and outperforms some other sub-pattern based face recognition methods.

  9. Mechanisms of face perception

    PubMed Central

    Tsao, Doris Y.

    2009-01-01

    Faces are among the most informative stimuli we ever perceive: Even a split-second glimpse of a person's face tells us their identity, sex, mood, age, race, and direction of attention. The specialness of face processing is acknowledged in the artificial vision community, where contests for face recognition algorithms abound. Neurological evidence strongly implicates a dedicated machinery for face processing in the human brain, to explain the double dissociability of face and object recognition deficits. Furthermore, it has recently become clear that macaques too have specialized neural machinery for processing faces. Here we propose a unifying hypothesis, deduced from computational, neurological, fMRI, and single-unit experiments: that what makes face processing special is that it is gated by an obligatory detection process. We will clarify this idea in concrete algorithmic terms, and show how it can explain a variety of phenomena associated with face processing. PMID:18558862

  10. A general framework for face reconstruction using single still image based on 2D-to-3D transformation kernel.

    PubMed

    Fooprateepsiri, Rerkchai; Kurutach, Werasak

    2014-03-01

    Face authentication is a biometric classification method that verifies the identity of a user based on an image of their face. Accuracy of the authentication is reduced when the pose, illumination and expression of the training face images differ from those of the testing image. The methods in this paper are designed to improve the accuracy of a features-based face recognition system when the poses of the input images and training images are different. First, an efficient 2D-to-3D integrated face reconstruction approach is introduced to reconstruct a personalized 3D face model from a single frontal face image with neutral expression and normal illumination. Second, realistic virtual faces with different poses are synthesized based on the personalized 3D face to characterize the face subspace. Finally, face recognition is conducted based on these representative virtual faces. Compared with other related works, this framework has the following advantages: (1) only one single frontal face is required for face recognition, which avoids the burdensome enrollment work; and (2) the synthesized face samples provide the capability to conduct recognition under difficult conditions like complex pose, illumination and expression. From the experimental results, we conclude that the proposed method improves the accuracy of face recognition under varying pose, illumination and expression. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  11. Real-time determination of the efficacy of residual disinfection to limit wastewater contamination in a water distribution system using filtration-based luminescence.

    PubMed

    Lee, Jiyoung; Deininger, Rolf A

    2010-05-01

    Water distribution systems can be vulnerable to microbial contamination through cross-connections, wastewater backflow, the intrusion of soiled water after a loss of pressure resulting from an electricity blackout, natural disaster, or intentional contamination of the system in a bioterrorism event. The most urgent matter a water treatment utility would face in this situation is detecting the presence and extent of a contamination event in real time, so that immediate action can be taken to mitigate the problem. The currently approved microbiological detection methods are culture-based plate count methods, which require incubation time (1 to 7 days). Such a long period of time would not be useful for the protection of public health. This study was designed to simulate wastewater intrusion in a water distribution system. The objectives were 2-fold: (1) real-time detection of water contamination, and (2) investigation of the sustainability of drinking water systems to suppress the contamination with secondary disinfectant residuals (chlorine and chloramine). The events of drinking water contamination resulting from a wastewater addition were determined by a filtration-based luminescence assay. The water contamination was detected by the luminescence method within 5 minutes. The signal amplification attributed to wastewater contamination was clear: a 102-fold signal increase. After 1 hour, chlorinated water could inactivate 98.8% of the bacterial contaminant, while chloraminated water inactivated 77.2%.

  12. A system for tracking and recognizing pedestrian faces using a network of loosely coupled cameras

    NASA Astrophysics Data System (ADS)

    Gagnon, L.; Laliberté, F.; Foucher, S.; Branzan Albu, A.; Laurendeau, D.

    2006-05-01

    A face recognition module has been developed for an intelligent multi-camera video surveillance system. The module can recognize a pedestrian face in terms of six basic emotions and the neutral state. Face and facial feature detection (eyes, nasal root, nose and mouth) is first performed using cascades of boosted classifiers. These features are used to normalize the pose and dimension of the face image. Gabor filters are then sampled on a regular grid covering the face image to build a facial feature vector that feeds a nearest neighbor classifier with a cosine distance similarity measure for facial expression interpretation and face model construction. A graphical user interface allows the user to adjust the module parameters.
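    A simplified sketch of the pipeline described above (grid-sampled Gabor responses fed to a cosine-similarity nearest-neighbour classifier). The module's actual filter bank, grid spacing and normalisation are not specified in the abstract, so the parameters below are assumptions, and SciPy is assumed for the 2D correlation:

```python
# Simplified sketch of the described pipeline: Gabor filter responses
# sampled on a regular grid, classified by cosine-similarity nearest
# neighbour. Filter-bank parameters are illustrative assumptions.
import numpy as np
from scipy.signal import correlate2d

def gabor_kernel(size=15, sigma=3.0, theta=0.0, wavelength=6.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / wavelength))

def gabor_grid_features(face, thetas=(0, np.pi / 4, np.pi / 2), step=8):
    feats = []
    for theta in thetas:
        resp = correlate2d(face, gabor_kernel(theta=theta),
                           mode="same", boundary="symm")
        feats.append(resp[::step, ::step].ravel())  # sample on a regular grid
    return np.concatenate(feats)

def nearest_neighbour(query, gallery_feats, labels):
    def cos_sim(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    sims = [cos_sim(query, g) for g in gallery_feats]
    return labels[int(np.argmax(sims))]
```

In practice one would store `gabor_grid_features` of each enrolled face as the gallery and pass a probe image's features, the gallery and its labels to `nearest_neighbour`.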

  13. Efficient quantitative assessment of facial paralysis using iris segmentation and active contour-based key points detection with hybrid classifier.

    PubMed

    Barbosa, Jocelyn; Lee, Kyubum; Lee, Sunwon; Lodhi, Bilal; Cho, Jae-Gu; Seo, Woo-Keun; Kang, Jaewoo

    2016-03-12

    Facial palsy or paralysis (FP) is a symptom in which voluntary muscle movement is lost on one side of the face, which can be very devastating for patients. Traditional assessment methods depend solely on the clinician's judgment and are therefore time-consuming and subjective in nature. Hence, a quantitative assessment system becomes invaluable for physicians beginning the rehabilitation process, and producing a reliable and robust method is challenging and still underway. We introduce a novel approach for the quantitative assessment of facial paralysis that tackles the classification problem for FP type and degree of severity. Specifically, a novel method of quantitative assessment is presented: an algorithm that extracts the human iris and detects facial landmarks, and a hybrid approach combining rule-based and machine learning algorithms to analyze and prognosticate facial paralysis using the captured images. A method combining the optimized Daugman's algorithm and the Localized Active Contour (LAC) model is proposed to efficiently extract the iris and facial landmarks or key points. To improve the performance of LAC, appropriate parameters of the initial evolving curve for facial feature segmentation are automatically selected. The symmetry score is measured by the ratio between features extracted from the two sides of the face. Hybrid classifiers (i.e. rule-based with regularized logistic regression) were employed for discriminating healthy and unhealthy subjects, for FP type classification, and for facial paralysis grading based on the House-Brackmann (H-B) scale. Quantitative analysis was performed to evaluate the performance of the proposed approach. Experiments demonstrate the efficiency of the proposed method. Facial movement feature extraction based on iris segmentation and LAC-based key point detection, along with a hybrid classifier, provides a more efficient way of addressing the classification problem of facial palsy type and degree of severity. Combining iris segmentation and key-point-based methods has several merits that are essential for our real application. Aside from the facial key points, iris segmentation provides a significant contribution, as it describes the changes in iris exposure while performing certain facial expressions. It reveals the significant difference between the healthy side and the severe palsy side when raising eyebrows with both eyes directed upward, and can model the typical changes in the iris region.
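    A minimal sketch of a symmetry score of the kind described above, i.e. the ratio between corresponding measurements on the two sides of the face; the feature values and scaling below are hypothetical, not the paper's:

```python
# Minimal sketch of a two-sided symmetry score: the ratio between
# corresponding measurements on the two sides of the face. Values are
# hypothetical; the paper's exact features and scaling are not reproduced.
def symmetry_score(left: float, right: float) -> float:
    """Ratio in [0, 1]; 1.0 means perfectly symmetric movement."""
    if max(left, right) == 0:
        return 1.0
    return min(left, right) / max(left, right)

if __name__ == "__main__":
    # e.g. change in iris exposure (pixels) while raising eyebrows, per side
    healthy_side, palsy_side = 42.0, 11.0
    print(f"symmetry = {symmetry_score(healthy_side, palsy_side):.2f}")
```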

  14. Learning Compact Binary Face Descriptor for Face Recognition.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Xiuzhuang; Zhou, Jie

    2015-10-01

    Binary feature descriptors such as local binary patterns (LBP) and its variations have been widely used in many face recognition systems due to their excellent robustness and strong discriminative power. However, most existing binary face descriptors are hand-crafted, which requires strong prior knowledge to engineer them by hand. In this paper, we propose a compact binary face descriptor (CBFD) feature learning method for face representation and recognition. Given each face image, we first extract pixel difference vectors (PDVs) in local patches by computing the difference between each pixel and its neighboring pixels. Then, we learn a feature mapping to project these pixel difference vectors into low-dimensional binary vectors in an unsupervised manner, where 1) the variance of all binary codes in the training set is maximized, 2) the loss between the original real-valued codes and the learned binary codes is minimized, and 3) the binary codes are evenly distributed across the learned bins, so that redundant information in the PDVs is removed and compact binary codes are obtained. Lastly, we cluster and pool these binary codes into a histogram feature as the final representation for each face image. Moreover, we propose a coupled CBFD (C-CBFD) method by reducing the modality gap of heterogeneous faces at the feature level to make our method applicable to heterogeneous face recognition. Extensive experimental results on five widely used face datasets show that our methods outperform state-of-the-art face descriptors.
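    A sketch of the CBFD ingredients described above: pixel difference vectors from 3x3 neighbourhoods, projected to short binary codes and pooled into a histogram. Note that the paper learns the projection; the random projection below is only a stand-in for illustration:

```python
# Sketch of the CBFD ingredients: pixel-difference vectors (PDVs) from 3x3
# neighbourhoods, projected to short binary codes, then pooled into a
# histogram. The paper *learns* the projection W; a random projection
# stands in for it here purely for illustration.
import numpy as np

def pixel_difference_vectors(img: np.ndarray) -> np.ndarray:
    """One 8-D PDV per interior pixel: each neighbour minus the centre."""
    h, w = img.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    pdvs = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            centre = float(img[i, j])
            pdvs.append([float(img[i + di, j + dj]) - centre
                         for di, dj in offsets])
    return np.asarray(pdvs)

def binary_codes(pdvs: np.ndarray, n_bits: int = 4, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(pdvs.shape[1], n_bits))   # stand-in for the learned mapping
    return (pdvs @ W > 0).astype(np.uint8)         # sign binarisation

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    face_patch = rng.integers(0, 256, size=(12, 12))
    codes = binary_codes(pixel_difference_vectors(face_patch))
    # pool the codes into a histogram over the 2**n_bits possible codewords
    words = codes @ (1 << np.arange(codes.shape[1]))
    print("histogram feature:", np.bincount(words, minlength=16))
```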

  15. Appearance-based multimodal human tracking and identification for healthcare in the digital home.

    PubMed

    Yang, Mau-Tsuen; Huang, Shen-Yen

    2014-08-05

    There is an urgent need for intelligent home surveillance systems to provide home security, monitor health conditions, and detect emergencies of family members. One of the fundamental problems to realize the power of these intelligent services is how to detect, track, and identify people at home. Compared to RFID tags that need to be worn all the time, vision-based sensors provide a natural and nonintrusive solution. Observing that body appearance and body build, as well as face, provide valuable cues for human identification, we model and record multi-view faces, full-body colors and shapes of family members in an appearance database by using two Kinects located at a home's entrance. Then the Kinects and another set of color cameras installed in other parts of the house are used to detect, track, and identify people by matching the captured color images with the registered templates in the appearance database. People are detected and tracked by multisensor fusion (Kinects and color cameras) using a Kalman filter that can handle duplicate or partial measurements. People are identified by multimodal fusion (face, body appearance, and silhouette) using a track-based majority voting. Moreover, the appearance-based human detection, tracking, and identification modules can cooperate seamlessly and benefit from each other. Experimental results show the effectiveness of the human tracking across multiple sensors and human identification considering the information of multi-view faces, full-body clothes, and silhouettes. The proposed home surveillance system can be applied to domestic applications in digital home security and intelligent healthcare.

  16. Appearance-Based Multimodal Human Tracking and Identification for Healthcare in the Digital Home

    PubMed Central

    Yang, Mau-Tsuen; Huang, Shen-Yen

    2014-01-01

    There is an urgent need for intelligent home surveillance systems to provide home security, monitor health conditions, and detect emergencies of family members. One of the fundamental problems to realize the power of these intelligent services is how to detect, track, and identify people at home. Compared to RFID tags that need to be worn all the time, vision-based sensors provide a natural and nonintrusive solution. Observing that body appearance and body build, as well as face, provide valuable cues for human identification, we model and record multi-view faces, full-body colors and shapes of family members in an appearance database by using two Kinects located at a home's entrance. Then the Kinects and another set of color cameras installed in other parts of the house are used to detect, track, and identify people by matching the captured color images with the registered templates in the appearance database. People are detected and tracked by multisensor fusion (Kinects and color cameras) using a Kalman filter that can handle duplicate or partial measurements. People are identified by multimodal fusion (face, body appearance, and silhouette) using a track-based majority voting. Moreover, the appearance-based human detection, tracking, and identification modules can cooperate seamlessly and benefit from each other. Experimental results show the effectiveness of the human tracking across multiple sensors and human identification considering the information of multi-view faces, full-body clothes, and silhouettes. The proposed home surveillance system can be applied to domestic applications in digital home security and intelligent healthcare. PMID:25098207
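    A hedged sketch of track-based majority voting as described above: each frame along a track contributes one identity vote per modality, and the track takes the most frequent label. The names and votes below are hypothetical:

```python
# Hedged sketch of track-based majority voting: pool the per-frame,
# per-modality identity votes along a track and take the most frequent
# label. Names and votes are hypothetical.
from collections import Counter

def track_identity(frame_votes: list[list[str]]) -> str:
    """frame_votes: for each frame, the identities proposed by each
    modality (e.g. face, body appearance, silhouette)."""
    all_votes = [v for frame in frame_votes for v in frame]
    return Counter(all_votes).most_common(1)[0][0]

if __name__ == "__main__":
    votes = [
        ["alice", "alice", "bob"],     # frame 1: face, body, silhouette
        ["alice", "carol", "alice"],   # frame 2
        ["bob",   "alice", "alice"],   # frame 3
    ]
    print(track_identity(votes))       # -> "alice"
```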

  17. You may look unhappy unless you smile: the distinctiveness of a smiling face against faces without an explicit smile.

    PubMed

    Park, Hyung-Bum; Han, Ji-Eun; Hyun, Joo-Seok

    2015-05-01

    An expressionless face is often perceived as rude whereas a smiling face is considered hospitable. Repetitive exposure to such perceptions may have developed a stereotype of categorizing an expressionless face as expressing negative emotion. To test this idea, we displayed a search array where the target was an expressionless face and the distractors were either smiling or frowning faces. We manipulated set size. Search reaction times were delayed with frowning distractors. Delays became more evident as the set size increased. We also devised a short-term comparison task where participants compared two sequential sets of expressionless, smiling, and frowning faces. Detection of an expression change across the sets was highly inaccurate when the change was made between a frowning and an expressionless face. These results indicate that subjects confused the emotions expressed by frowning and expressionless faces, suggesting that it is difficult to distinguish expressionless faces from frowning faces. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Real-time detection and discrimination of visual perception using electrocorticographic signals

    NASA Astrophysics Data System (ADS)

    Kapeller, C.; Ogawa, H.; Schalk, G.; Kunii, N.; Coon, W. G.; Scharinger, J.; Guger, C.; Kamada, K.

    2018-06-01

    Objective. Several neuroimaging studies have demonstrated that the ventral temporal cortex contains specialized regions that process visual stimuli. This study investigated the spatial and temporal dynamics of electrocorticographic (ECoG) responses to different types and colors of visual stimulation that were presented to four human participants, and demonstrated a real-time decoder that detects and discriminates responses to untrained natural images. Approach. ECoG signals from the participants were recorded while they were shown colored and greyscale versions of seven types of visual stimuli (images of faces, objects, bodies, line drawings, digits, and kanji and hiragana characters), resulting in 14 classes for discrimination (experiment I). Additionally, a real-time system asynchronously classified ECoG responses to faces, kanji and black screens presented via a monitor (experiment II), or to natural scenes (i.e. the face of an experimenter, natural images of faces and kanji, and a mirror) (experiment III). Outcome measures in all experiments included the discrimination performance across types based on broadband γ activity. Main results. Experiment I demonstrated an offline classification accuracy of 72.9% when discriminating among the seven types (without color separation). Further discrimination of grey versus colored images reached an accuracy of 67.1%. Discriminating all colors and types (14 classes) yielded an accuracy of 52.1%. In experiments II and III, the real-time decoder correctly detected 73.7% of responses to face, kanji and black computer stimuli and 74.8% of responses to presented natural scenes. Significance. Seven different types and their color information (either grey or color) could be detected and discriminated using broadband γ activity. Discrimination performance was maximized when spatial and temporal information were combined. The discrimination of stimulus color information provided the first ECoG-based evidence for color-related population-level cortical broadband γ responses in humans. Stimulus categories can be detected from their ECoG responses in real time within 500 ms of stimulus onset.

  19. Greater perceptual sensitivity to happy facial expression.

    PubMed

    Maher, Stephen; Ekstrom, Tor; Chen, Yue

    2014-01-01

    Perception of subtle facial expressions is essential for social functioning; yet it is unclear if human perceptual sensitivities differ in detecting varying types of facial emotions. Evidence diverges as to whether salient negative versus positive emotions (such as sadness versus happiness) are preferentially processed. Here, we measured perceptual thresholds for the detection of four types of emotion in faces--happiness, fear, anger, and sadness--using psychophysical methods. We also evaluated the association of the perceptual performances with facial morphological changes between neutral and respective emotion types. Human observers were highly sensitive to happiness compared with the other emotional expressions. Further, this heightened perceptual sensitivity to happy expressions can be attributed largely to the emotion-induced morphological change of a particular facial feature (end-lip raise).

  20. Cross-modal enhancement of speech detection in young and older adults: does signal content matter?

    PubMed

    Tye-Murray, Nancy; Spehar, Brent; Myerson, Joel; Sommers, Mitchell S; Hale, Sandra

    2011-01-01

    The purpose of the present study was to examine the effects of age and visual content on cross-modal enhancement of auditory speech detection. Visual content consisted of three clearly distinct types of visual information: an unaltered video clip of a talker's face, a low-contrast version of the same clip, and a mouth-like Lissajous figure. It was hypothesized that both young and older adults would exhibit reduced enhancement as visual content diverged from the original clip of the talker's face, but that the decrease would be greater for older participants. Nineteen young adults and 19 older adults were asked to detect a single spoken syllable (/ba/) in speech-shaped noise, and the level of the signal was adaptively varied to establish the signal-to-noise ratio (SNR) at threshold. There was an auditory-only baseline condition and three audiovisual conditions in which the syllable was accompanied by one of the three visual signals (the unaltered clip of the talker's face, the low-contrast version of that clip, or the Lissajous figure). For each audiovisual condition, the SNR at threshold was compared with the SNR at threshold for the auditory-only condition to measure the amount of cross-modal enhancement. Young adults exhibited significant cross-modal enhancement with all three types of visual stimuli, with the greatest amount of enhancement observed for the unaltered clip of the talker's face. Older adults, in contrast, exhibited significant cross-modal enhancement only with the unaltered face. Results of this study suggest that visual signal content affects cross-modal enhancement of speech detection in both young and older adults. They also support a hypothesized age-related deficit in processing low-contrast visual speech stimuli, even in older adults with normal contrast sensitivity.

  1. Computer-aided diagnosis workstation and network system for chest diagnosis based on multislice CT images

    NASA Astrophysics Data System (ADS)

    Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Moriyama, Noriyuki; Ohmatsu, Hironobu; Masuda, Hideo; Machida, Suguru

    2008-03-01

    Mass screening based on multi-helical CT images requires a considerable number of images to be read. It is this time-consuming step that makes the use of helical CT for mass screening impractical at present. To overcome this problem, we have provided diagnostic assistance methods to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images, a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification, and a vertebral body analysis algorithm for quantitative evaluation of osteoporosis likelihood, all using the helical CT scanner employed for lung cancer mass screening. Functions to observe suspicious shadows in detail are provided in the computer-aided diagnosis workstation along with these screening algorithms. We have also developed a telemedicine network using a Web medical image conference system with improved security of image transmission, a biometric fingerprint authentication system and a biometric face authentication system. Biometric face authentication used at the telemedicine site makes file encryption and login verification effective, so that patients' private information is protected. Based on these diagnostic assistance methods, we have developed a new computer-aided workstation and a new telemedicine network that can display suspected lesions three-dimensionally in a short time. The results of this study indicate that our filmless radiological information system using the computer-aided diagnosis workstation and our telemedicine network system can increase diagnostic speed and accuracy and improve the security of medical information.

  2. Test-retest reliability of fMRI-based graph theoretical properties during working memory, emotion processing, and resting state.

    PubMed

    Cao, Hengyi; Plichta, Michael M; Schäfer, Axel; Haddad, Leila; Grimm, Oliver; Schneider, Michael; Esslinger, Christine; Kirsch, Peter; Meyer-Lindenberg, Andreas; Tost, Heike

    2014-01-01

    The investigation of the brain connectome with functional magnetic resonance imaging (fMRI) and graph theory analyses has recently gained much popularity, but little is known about the robustness of these properties, in particular those derived from active fMRI tasks. Here, we studied the test-retest reliability of brain graphs calculated from 26 healthy participants with three established fMRI experiments (n-back working memory, emotional face-matching, resting state) and two parcellation schemes for node definition (AAL atlas, functional atlas proposed by Power et al.). We compared the intra-class correlation coefficients (ICCs) of five different data processing strategies and demonstrated a superior reliability of task-regression methods with condition-specific regressors. The between-task comparison revealed significantly higher ICCs for resting state relative to the active tasks, and a superiority of the n-back task relative to the face-matching task for global and local network properties. While the mean ICCs were typically lower for the active tasks, overall fair to good reliabilities were detected for global and local connectivity properties and, for the n-back task with both atlases, for small-worldness. For all three tasks and atlases, low mean ICCs were seen for the local network properties. However, node-specific good reliabilities were detected for node degree in regions known to be critical for the challenged functions (resting-state: default-mode network nodes, n-back: fronto-parietal nodes, face-matching: limbic nodes). Between-atlas comparison demonstrated significantly higher reliabilities for the functional parcellations for global and local network properties. Our findings can inform the choice of processing strategies, brain atlases and outcome properties for fMRI studies using active tasks, graph theory methods, and within-subject designs, in particular future pharmaco-fMRI studies. © 2013 Elsevier Inc. All rights reserved.
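    As a reference for the reliability measure discussed above, the sketch below computes the common two-way random-effects, absolute-agreement ICC(2,1) of Shrout and Fleiss for a subjects-by-sessions matrix of some graph metric; the paper's exact ICC formulation is not restated in the abstract, and the data below are made up:

```python
# Sketch of a test-retest ICC computation: the two-way random-effects,
# absolute-agreement ICC(2,1) of Shrout & Fleiss, applied to a
# (subjects x sessions) matrix of a graph metric. Data are simulated.
import numpy as np

def icc_2_1(x: np.ndarray) -> float:
    """x: (n subjects) x (k sessions) matrix."""
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (((x - grand) ** 2).sum() - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    subject_effect = rng.normal(0, 1, size=(20, 1))          # stable trait
    sessions = subject_effect + rng.normal(0, 0.5, (20, 2))  # two scans + noise
    print(f"ICC(2,1) = {icc_2_1(sessions):.2f}")
```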

  3. Knife blade as a facial foreign body.

    PubMed

    Gardner, P A; Righi, P; Shahbahrami, P B

    1997-08-01

    This case demonstrates the unpredictability of foreign bodies in the face. The retained knife blade eluded detection on two separate examinations. The essential components to making a correct diagnosis of a foreign body following a stabbing to the face include a thorough review of the mechanism of injury, a complete head and neck examination, a high index of suspicion, and plain radiographs of the face.

  4. Methods for artifact detection and removal from scalp EEG: A review.

    PubMed

    Islam, Md Kafiul; Rastegarnia, Amir; Yang, Zhi

    2016-11-01

    Electroencephalography (EEG) is the most popular brain activity recording technique used in wide range of applications. One of the commonly faced problems in EEG recordings is the presence of artifacts that come from sources other than brain and contaminate the acquired signals significantly. Therefore, much research over the past 15 years has focused on identifying ways for handling such artifacts in the preprocessing stage. However, this is still an active area of research as no single existing artifact detection/removal method is complete or universal. This article presents an extensive review of the existing state-of-the-art artifact detection and removal methods from scalp EEG for all potential EEG-based applications and analyses the pros and cons of each method. First, a general overview of the different artifact types that are found in scalp EEG and their effect on particular applications are presented. In addition, the methods are compared based on their ability to remove certain types of artifacts and their suitability in relevant applications (only functional comparison is provided not performance evaluation of methods). Finally, the future direction and expected challenges of current research is discussed. Therefore, this review is expected to be helpful for interested researchers who will develop and/or apply artifact handling algorithm/technique in future for their applications as well as for those willing to improve the existing algorithms or propose a new solution in this particular area of research. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  5. Method for Face-Emotion Retrieval Using A Cartoon Emotional Expression Approach

    NASA Astrophysics Data System (ADS)

    Kostov, Vlaho; Yanagisawa, Hideyoshi; Johansson, Martin; Fukuda, Shuichi

    A simple method for extracting emotion from a human face, as a form of non-verbal communication, was developed to cope with and optimize mobile communication in a globalized and diversified society. A cartoon face based model was developed and used to evaluate emotional content of real faces. After a pilot survey, basic rules were defined and student subjects were asked to express emotion using the cartoon face. Their face samples were then analyzed using principal component analysis and the Mahalanobis distance method. Feature parameters considered as having relations with emotions were extracted and new cartoon faces (based on these parameters) were generated. The subjects evaluated emotion of these cartoon faces again and we confirmed these parameters were suitable. To confirm how these parameters could be applied to real faces, we asked subjects to express the same emotions which were then captured electronically. Simple image processing techniques were also developed to extract these features from real faces and we then compared them with the cartoon face parameters. It is demonstrated via the cartoon face that we are able to express the emotions from very small amounts of information. As a result, real and cartoon faces correspond to each other. It is also shown that emotion could be extracted from still and dynamic real face images using these cartoon-based features.
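    To illustrate the Mahalanobis-distance step mentioned above, the sketch below assigns a vector of cartoon-face feature parameters to the nearest emotion class in Mahalanobis distance; the class statistics and query values are hypothetical, not taken from the paper:

```python
# Illustration of Mahalanobis-distance classification of cartoon-face
# feature parameters. Class statistics and the query are hypothetical.
import numpy as np

def mahalanobis(x, mean, cov):
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

def classify(x, class_stats):
    """class_stats: {emotion: (mean vector, covariance matrix)}."""
    return min(class_stats, key=lambda e: mahalanobis(x, *class_stats[e]))

if __name__ == "__main__":
    # two made-up parameters, e.g. mouth-corner lift and eyebrow slope
    stats = {
        "happy": (np.array([0.8, 0.1]), np.diag([0.04, 0.02])),
        "sad":   (np.array([-0.6, -0.4]), np.diag([0.05, 0.03])),
    }
    print(classify(np.array([0.7, 0.0]), stats))   # -> "happy"
```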

  6. Fast support vector data descriptions for novelty detection.

    PubMed

    Liu, Yi-Hung; Liu, Yan-Chen; Chen, Yen-Jen

    2010-08-01

    Support vector data description (SVDD) has become a very attractive kernel method due to its good results in many novelty detection problems. However, the decision function of SVDD is expressed in terms of the kernel expansion, which results in a run-time complexity linear in the number of support vectors. For applications where fast real-time response is needed, how to speed up the decision function is crucial. This paper aims at dealing with the issue of reducing the testing time complexity of SVDD. A method called fast SVDD (F-SVDD) is proposed. Unlike the traditional methods, which all try to compress a kernel expansion into one with fewer terms, the proposed F-SVDD directly finds the preimage of a feature vector, and then uses a simple relationship between this feature vector and the SVDD sphere center to re-express the center with a single vector. The decision function of F-SVDD contains only one kernel term, and thus the decision boundary of F-SVDD is only spherical in the original space. Hence, the run-time complexity of the F-SVDD decision function is no longer linear in the number of support vectors, but constant, no matter how large the training set is. In this paper, we also propose a novel direct preimage-finding method, which is noniterative and involves no free parameters. The unique preimage can be obtained in real time by the proposed direct method without trial and error. For demonstration, several real-world data sets and a large-scale data set, the extended MIT face data set, are used in experiments. In addition, a practical industry example regarding liquid crystal display micro-defect inspection is also used to compare the applicability of SVDD and our proposed F-SVDD when faced with mass data input. The results are very encouraging.
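    A sketch of the contrast drawn above (not the paper's preimage-finding method): a standard SVDD decision needs one RBF kernel evaluation per support vector, whereas an F-SVDD-style decision that replaces the kernel-expansion centre with a single preimage vector needs exactly one. The preimage below is a crude stand-in and the threshold is arbitrary:

```python
# Sketch of the run-time contrast between an SVDD-style decision (one
# kernel term per support vector) and an F-SVDD-style decision with a
# single preimage vector z. Data, threshold and the "preimage" are
# illustrative stand-ins, not the paper's algorithm.
import numpy as np

def rbf(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def svdd_decision(x, support_vectors, alphas, threshold):
    # ||phi(x) - c||^2 <= R^2 reduces (for an RBF kernel) to comparing
    # sum_i alpha_i k(x, x_i) against a threshold: one term per SV.
    score = sum(a * rbf(x, sv) for a, sv in zip(alphas, support_vectors))
    return score >= threshold

def f_svdd_decision(x, preimage_z, threshold):
    return rbf(x, preimage_z) >= threshold   # a single kernel term

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    svs = rng.normal(size=(50, 2))
    alphas = np.full(50, 1 / 50)
    z = svs.mean(axis=0)                 # crude stand-in for the true preimage
    x = np.array([0.1, -0.2])
    print(svdd_decision(x, svs, alphas, 0.3), f_svdd_decision(x, z, 0.3))
```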

  7. Preferred communication methods of abused women.

    PubMed

    Gilroy, Heidi; McFarlane, Judith; Nava, Angeles; Maddoux, John

    2013-01-01

    To determine preferred communication methods of abused women. A naturalistic study utilizing principles of Community Based Participatory Research. A total of 300 first time users of criminal justice or safe shelter for abused women were interviewed in person. The Preferred Communication Questionnaire was used to determine preference. Given the choice of phone voice, face to face, phone text, e-mail, or Facebook, traditional methods of communication (face-to-face communication and phone voice) were the primary (80% combined) and secondary (58.6% combined) preferred sources among abused women. A total of 292 women (97.3%) gave at least two preferred methods of communication, 255 (85%) gave three preferred methods, 190 (63%) gave four, and 132 (44%) used all five methods. Public health nurses and other professionals who serve abused women should be aware of their preferred method of communication for contact. The women in the sample preferred face-to-face and phone-voice communication; however, many were open to newer forms of communication such as texting and Facebook. Caution should be used to protect the safety of abused women when using any kind of communication. © 2013 Wiley Periodicals, Inc.

  8. Appearance-Based Vision and the Automatic Generation of Object Recognition Programs

    DTIC Science & Technology

    1992-07-01

    Objects are grouped into equivalence classes with respect to visible features; the equivalence classes are called aspects. A recognition strategy is generated from... [Table 1 (Summary of Sensors): sensor types (e.g., edge detector, shape-from-shading; active or passive) versus the vertex, edge, and face features they detect.] ...an example of the detectability computation for a light-stripe range finder is shown in Figure 2 (Detectability of a face for a light-stripe range finder).

  9. Effects of emotional and non-emotional cues on visual search in neglect patients: evidence for distinct sources of attentional guidance.

    PubMed

    Lucas, Nadia; Vuilleumier, Patrik

    2008-04-01

    In normal observers, visual search is facilitated for targets with salient attributes. We compared how two different types of cue (expression and colour) may influence search for face targets, in healthy subjects (n=27) and right brain-damaged patients with left spatial neglect (n=13). The target faces were defined by their identity (singleton among a crowd of neutral faces) but could either be neutral (like other faces), or have a different emotional expression (fearful or happy), or a different colour (red-tinted). Healthy subjects were the fastest for detecting the colour-cued targets, but also showed a significant facilitation for emotionally cued targets, relative to neutral faces differing from other distracter faces by identity only. Healthy subjects were also faster overall for target faces located on the left, as compared to the right side of the display. In contrast, neglect patients were slower to detect targets on the left (contralesional) relative to the right (ipsilesional) side. However, they showed the same pattern of cueing effects as healthy subjects on both sides of space; while their best performance was also found for faces cued by colour, they showed a significant advantage for faces cued by expression, relative to the neutral condition. These results indicate that despite impaired attention towards the left hemispace, neglect patients may still show an intact influence of both low-level colour cues and emotional expression cues on attention, suggesting that neural mechanisms responsible for these effects are partly separate from fronto-parietal brain systems controlling spatial attention during search.

  10. Study on the influence factors of camouflage target polarization detection

    NASA Astrophysics Data System (ADS)

    Huang, Yanhua; Chen, Lei; Li, Xia; Wu, Wenyuan

    2016-10-01

    Expressions for the degree of linear polarization (DOLP) at an arbitrary polarizer direction (PD) were derived from the Stokes vector and Mueller matrix, and outdoor experiments were carried out to validate them. This paper mainly explores the DOLP-image contrast (DOLPC) between the target and background images, and studies the PD and the RGB waveband as two important factors influencing camouflaged-target polarization detection. The contrast between target and background was markedly higher in the DOLP image than in the intensity image. With the reference direction set so that the polarizer was perpendicular to the plane of incidence, the DOLP image acquired at an interval angle of 60 degrees between the PD and the reference direction had the highest DOLPC, followed by 45 degrees and then 35 degrees. The outdoor polarization detection experiments with controlled wavebands showed that the DOLPC differed significantly among the 650 nm, 550 nm, and 450 nm bands, with the 650 nm band giving the best polarization detection performance.
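
    For reference, the standard DOLP computation that the abstract builds on can be written directly from the first three Stokes components. The sketch below (Python; the Michelson-style contrast is only one plausible reading of the paper's DOLPC, which the abstract does not define precisely) estimates DOLP from four intensity images taken at polarizer directions of 0, 45, 90, and 135 degrees.

        import numpy as np

        def stokes_dolp(i0, i45, i90, i135):
            """Degree of linear polarization from four polarizer-angle intensity images.
            Standard Stokes estimates: S0 = (I0+I45+I90+I135)/2, S1 = I0-I90, S2 = I45-I135."""
            s0 = 0.5 * (i0 + i45 + i90 + i135)
            s1 = i0 - i90
            s2 = i45 - i135
            return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)

        def michelson_contrast(target_region, background_region):
            """One plausible target/background contrast measure; the paper's exact
            DOLPC definition is not given in the abstract."""
            t, b = target_region.mean(), background_region.mean()
            return abs(t - b) / (t + b + 1e-9)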

  11. Occupancy estimation and modeling with multiple states and state uncertainty

    USGS Publications Warehouse

    Nichols, J.D.; Hines, J.E.; MacKenzie, D.I.; Seamans, M.E.; Gutierrez, R.J.

    2007-01-01

    The distribution of a species over space is of central interest in ecology, but species occurrence does not provide all of the information needed to characterize either the well-being of a population or the suitability of occupied habitat. Recent methodological development has focused on drawing inferences about species occurrence in the face of imperfect detection. Here we extend those methods by characterizing occupied locations by some additional state variable (e.g., as producing young or not). Our modeling approach deals with both detection probabilities less than 1 and uncertainty in state classification. We then use the approach with occupancy and reproductive rate data from California Spotted Owls (Strix occidentalis occidentalis) collected in the central Sierra Nevada during the breeding season of 2004 to illustrate the utility of the modeling approach. Estimates of owl reproductive rate were larger than naive estimates, indicating the importance of appropriately accounting for uncertainty in detection and state classification.
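
    As a minimal illustration of the underlying idea of estimating occupancy under imperfect detection, the sketch below fits the basic single-state, single-season model by maximum likelihood. It is a simplified stand-in only: the paper's extension adds a second occupied state (e.g., producing young or not) and a probability of correctly classifying that state, which are not reproduced here.

        import numpy as np
        from scipy.optimize import minimize

        def neg_log_lik(params, histories):
            """Single-season occupancy likelihood with imperfect detection
            (MacKenzie-style, single-state). params = (logit_psi, logit_p);
            histories is a 0/1 array of shape (sites, surveys)."""
            psi = 1.0 / (1.0 + np.exp(-params[0]))
            p = 1.0 / (1.0 + np.exp(-params[1]))
            ll = 0.0
            for h in histories:
                d = h.sum()
                pr = psi * p**d * (1 - p) ** (len(h) - d)
                if d == 0:              # never detected: occupied-but-missed or truly absent
                    pr += (1 - psi)
                ll += np.log(pr)
            return -ll

        # toy data: 50 sites, 4 surveys each
        rng = np.random.default_rng(1)
        z = rng.random(50) < 0.6                            # true occupancy states
        hist = (rng.random((50, 4)) < 0.4) * z[:, None]     # detections only at occupied sites
        fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(hist,))
        print(1.0 / (1.0 + np.exp(-fit.x)))                 # estimated (psi, p)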

  12. Remote detection of mental workload changes using cardiac parameters assessed with a low-cost webcam.

    PubMed

    Bousefsaf, Frédéric; Maaoui, Choubeila; Pruski, Alain

    2014-10-01

    We introduce a new framework for detecting mental workload changes using video frames obtained from a low-cost webcam. Image processing techniques, together with a continuous wavelet transform filtering method, were developed and applied to remove major artifacts and trends from raw webcam photoplethysmographic signals. The measurements are performed on human faces. To induce stress, we employed a computerized, interactive Stroop color-word test with twelve participants. The electrodermal activity of the participants was recorded and compared to the mental workload curve obtained by merging two parameters derived from pulse rate variability and photoplethysmographic amplitude fluctuations, which reflect changes in peripheral vasoconstriction. The results show a strong correlation between the two measurement techniques. This study offers further support for the feasibility of mental workload detection by remote and low-cost means, providing an alternative to conventional contact techniques. Copyright © 2014 Elsevier Ltd. All rights reserved.
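
    A minimal sketch of the remote pulse-extraction idea is shown below (Python; this is not the authors' pipeline, which also removes artifacts and trends with a continuous wavelet transform filter, and the face_box region is assumed to come from a separate face detector). The green channel is averaged over a face region in each frame, band-pass filtered to the cardiac band, and its dominant frequency read off as the pulse rate.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def pulse_signal_from_frames(frames, fps, face_box):
            """Rough webcam photoplethysmography: mean green channel over a face ROI,
            band-pass filtered to roughly 0.7-3.0 Hz (42-180 beats per minute)."""
            x0, y0, x1, y1 = face_box                        # hypothetical ROI from a face detector
            raw = np.array([f[y0:y1, x0:x1, 1].mean() for f in frames])
            raw = raw - raw.mean()
            b, a = butter(3, [0.7 / (fps / 2), 3.0 / (fps / 2)], btype="band")
            return filtfilt(b, a, raw)

        def pulse_rate_bpm(signal, fps):
            """Dominant frequency of the filtered signal, in beats per minute."""
            spectrum = np.abs(np.fft.rfft(signal))
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
            return 60.0 * freqs[np.argmax(spectrum)]

    Pulse rate variability and amplitude fluctuations, as used in the study, would then be derived from successive peaks of such a filtered signal.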

  13. Line grouping using perceptual saliency and structure prediction for car detection in traffic scenes

    NASA Astrophysics Data System (ADS)

    Denasi, Sandra; Quaglia, Giorgio

    1993-08-01

    Autonomous and guidance-assisted vehicles make heavy use of computer vision techniques to perceive the environment in which they move. In this context, the European PROMETHEUS program is carrying out activities to develop autonomous vehicle monitoring that helps drivers achieve safer driving. Car detection is one of the topics addressed by the program. Our contribution proposes the development of this task in two stages: the localization of areas of interest and the formulation of object hypotheses. In particular, the present paper proposes a new approach that builds structural descriptions of objects from edge segmentations by using geometrical organization. This approach has been applied to the detection of cars in traffic scenes. We have analyzed images taken from a moving vehicle in order to formulate obstacle hypotheses: preliminary results confirm the efficiency of the method.

  14. Feasibility evaluation of a motion detection system with face images for stereotactic radiosurgery.

    PubMed

    Yamakawa, Takuya; Ogawa, Koichi; Iyatomi, Hitoshi; Kunieda, Etsuo

    2011-01-01

    In stereotactic radiosurgery we can irradiate a targeted volume precisely with a narrow high-energy x-ray beam, so motion of the targeted area may cause side effects in normal organs. This paper describes our motion detection system with three USB cameras. To reduce the effect of changes in illuminance in the tracking area, we used infrared illumination and USB cameras sensitive to infrared light. Motion detection of a patient was performed by tracking his or her ears and nose with the three USB cameras, where pattern matching between a predefined template image for each view and the acquired images was done by an exhaustive search method implemented with general-purpose computing on a graphics processing unit (GPGPU). The results of the experiments showed that the measurement error of our system was less than 0.7 mm, less than half that of our previous system.
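
    The core matching step can be sketched with an off-the-shelf normalized cross-correlation search. The snippet below (Python/OpenCV) is a CPU stand-in for the paper's GPGPU exhaustive search, with the template assumed to be a predefined ear or nose patch; it returns the best-match position of the template in the current frame.

        import cv2

        def track_landmark(frame_gray, template_gray):
            """Exhaustive normalized cross-correlation search for a predefined template
            (e.g. an ear or nose patch) in the current frame; returns the best-match
            centre position and its correlation score."""
            result = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
            _, score, _, top_left = cv2.minMaxLoc(result)
            h, w = template_gray.shape
            centre = (top_left[0] + w // 2, top_left[1] + h // 2)
            return centre, score

        # Usage sketch: compare the tracked position between successive frames to estimate
        # patient motion in pixels; camera calibration would convert this to millimetres.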

  15. Is attentional prioritisation of infant faces unique in humans?: Comparative demonstrations by modified dot-probe task in monkeys.

    PubMed

    Koda, Hiroki; Sato, Anna; Kato, Akemi

    2013-09-01

    Humans innately perceive infantile features as cute. The ethologist Konrad Lorenz proposed that the infantile features of mammals and birds, known as the baby schema (kindchenschema), motivate caretaking behaviour. As biologically relevant stimuli, newborns are likely to be processed specially in terms of visual attention, perception, and cognition. Recent demonstrations in human participants have shown visual attentional prioritisation of newborn faces (i.e., newborn faces capture visual attention). Although characteristics equivalent to those found in the faces of human infants are present in nonhuman primates, attentional capture by newborn faces has not been tested in nonhuman primates. We examined whether conspecific newborn faces captured the visual attention of two Japanese monkeys using a target-detection task based on the dot-probe tasks commonly used in human visual attention studies. Although visual cues enhanced target detection in the subject monkeys, our results, unlike those for humans, showed no evidence of attentional prioritisation of newborn faces by monkeys. Our demonstrations establish the validity of the dot-probe task for visual attention studies in monkeys and offer a novel approach to bridging the gap between human and nonhuman primate social cognition research. This suggests that attentional capture by newborn faces is not shared by macaques, although it remains unclear whether nursing experience influences their perception and recognition of infantile appraisal stimuli. Additional comparative studies are needed to reveal the evolutionary origins of baby-schema perception and recognition. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Simultaneous Local Binary Feature Learning and Encoding for Homogeneous and Heterogeneous Face Recognition.

    PubMed

    Lu, Jiwen; Erin Liong, Venice; Zhou, Jie

    2017-08-09

    In this paper, we propose a simultaneous local binary feature learning and encoding (SLBFLE) approach for both homogeneous and heterogeneous face recognition. Unlike existing hand-crafted face descriptors such as local binary pattern (LBP) and Gabor features which usually require strong prior knowledge, our SLBFLE is an unsupervised feature learning approach which automatically learns face representation from raw pixels. Unlike existing binary face descriptors such as the LBP, discriminant face descriptor (DFD), and compact binary face descriptor (CBFD) which use a two-stage feature extraction procedure, our SLBFLE jointly learns binary codes and the codebook for local face patches so that discriminative information from raw pixels from face images of different identities can be obtained by using a one-stage feature learning and encoding procedure. Moreover, we propose a coupled simultaneous local binary feature learning and encoding (C-SLBFLE) method to make the proposed approach suitable for heterogeneous face matching. Unlike most existing coupled feature learning methods which learn a pair of transformation matrices for each modality, we exploit both the common and specific information from heterogeneous face samples to characterize their underlying correlations. Experimental results on six widely used face datasets are presented to demonstrate the effectiveness of the proposed method.
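
    To make the contrast with hand-crafted descriptors concrete, the sketch below computes the basic 3x3 local binary pattern that the paper cites as a baseline (an illustration of LBP only, not of the proposed SLBFLE learning procedure).

        import numpy as np

        def lbp_image(gray):
            """Basic 3x3 local binary pattern: each interior pixel is encoded by
            thresholding its 8 neighbours against the centre value."""
            g = gray.astype(np.int32)
            c = g[1:-1, 1:-1]
            shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
            code = np.zeros_like(c)
            for bit, (dy, dx) in enumerate(shifts):
                neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
                code |= ((neighbour >= c).astype(np.int32) << bit)
            return code

        def lbp_histogram(gray, bins=256):
            """Normalised histogram of LBP codes; concatenating per-patch histograms
            gives a simple hand-crafted face descriptor."""
            hist, _ = np.histogram(lbp_image(gray), bins=bins, range=(0, bins))
            return hist / max(hist.sum(), 1)

    Learned binary descriptors such as SLBFLE replace this fixed thresholding pattern with codes and codebooks learned from raw pixels.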

  17. Behavioral dissociation between emotional and non-emotional facial expressions in congenital prosopagnosia

    PubMed Central

    Daini, Roberta; Comparetti, Chiara M.; Ricciardelli, Paola

    2014-01-01

    Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown whether a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand whether individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as one already seen, and thus process this facial dimension independently of features (which are impaired in CP) and of basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typically developing individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed spared configural processing of non-emotional facial expressions (task 1). Interestingly, and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition. PMID:25520643

  18. Behavioral dissociation between emotional and non-emotional facial expressions in congenital prosopagnosia.

    PubMed

    Daini, Roberta; Comparetti, Chiara M; Ricciardelli, Paola

    2014-01-01

    Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown whether a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand whether individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as one already seen, and thus process this facial dimension independently of features (which are impaired in CP) and of basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typically developing individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed spared configural processing of non-emotional facial expressions (task 1). Interestingly, and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition.

  19. Standardization of the face-hand test in a Brazilian multicultural population: prevalence of sensory extinction and implications for neurological diagnosis.

    PubMed

    Luvizutto, Gustavo José; Fogaroli, Marcelo Ortolani; Theotonio, Rodolfo Mazeto; Nunes, Hélio Rubens de Carvalho; Resende, Luiz Antônio de Lima; Bazan, Rodrigo

    2016-12-01

    The face-hand test is a simple, practical, and rapid test for detecting neurological syndromes. However, it had not previously been assessed in a Brazilian sample; therefore, the objective of the present study was to standardize the face-hand test for use in the multicultural population of Brazil and to identify the sociodemographic factors affecting the results. This was a cross-sectional study of 150 individuals. The sociodemographic variables collected included age, gender, race, body mass index, and years of education. The face-hand test was administered as 2 rounds of 10 sensory stimuli applied simultaneously to the face and hand, with the participant seated with trunk support and vision obstructed, in a sound-controlled environment. Associations between the face-hand test and sociodemographic variables were analyzed using Mann-Whitney tests and Spearman correlations. Binomial models were fitted for the number of face-hand test variations, and ROC curves were used to evaluate the sensitivity and specificity of sensory extinction. There was no significant relationship between the sociodemographic variables and the number of stimuli perceived on the face-hand test. There was a high relative frequency of detection, 8 out of 10 stimuli, in this population. Sensory extinction occurred in 25.3% of participants; it increased with increasing age (OR=1.4 [1.01-1.07]; p=0.006) and decreased significantly with increasing education (OR=0.82 [0.71-0.94]; p=0.005). In the Brazilian population, a normal face-hand test score ranges between 8 and 10 stimuli, and the results indicate that sensory extinction is associated with increased age and lower levels of education.

  20. New non-invasive safe, quick, economical method of detecting various cancers was found using QRS complex or rising part of T-wave of recorded ECGs. Cancers can be screened along with their biochemical parameters & therapeutic effects of any cancer treatments can be evaluated using recorded ECGs of the same individual.

    PubMed

    Omura, Yoshiaki; Lu, Dominic; O'Young, Brian; Jones, Marilyn; Nihrane, Abdallah; Duvvi, Harsha; Shimotsuura, Yasuhiro; Ohki, Motomu

    2015-01-01

    There are many methods of detecting cancers, including detection of cancer markers by blood test (invasive, time-consuming, and relatively expensive) and non-invasive imaging methods such as X-ray, CT, MRI, and PET scans (non-invasive and quick but very expensive). Our research was performed to develop a new non-invasive, safe, quick, and economical method of detecting cancers. The first author had already developed clinically important non-invasive methods, including an early stage of the present method, using his technique for localizing accurate organ representation areas on the face, eyebrows, upper lip, lower lip, the surface and dorsal part of the tongue, and the backs and palm sides of the hands. This localization of the organ representation areas of different parts of the body was performed using the electromagnetic field resonance phenomenon between two identical molecules or tissues, based on our US-patented non-invasive method of 1993. Since the year 2000, we have developed the following non-invasive diagnostic methods, which can be applied quickly with the patented simple non-invasive method, without expensive or bulky instruments, in any office or field setting where no electricity or instrumentation is available. Examples of quick, non-invasive approaches to the diagnosis and treatment of cancers include: 1) soft red laser-beam scanning of different parts of the body; 2) the speaking voice; 3) visible and invisible characteristic abnormalities in the organ representation areas of different parts of the body; and 4) mouth, hand, and foot writings of both the right and left sides of the body. As a consequence of our latest research, we were able to develop a simple method of detecting cancer from existing recorded electrocardiograms. In this article, we describe the method and the results of clinical application to many different cancers of different organs, including lung, esophagus, breast, stomach, colon, uterus, ovary, and prostate gland, as well as common bone-marrow-related malignancies such as Hodgkin's lymphoma, non-Hodgkin's lymphoma, multiple myeloma, and leukemia.
