Sample records for single face image

  1. A general framework for face reconstruction using single still image based on 2D-to-3D transformation kernel.

    PubMed

    Fooprateepsiri, Rerkchai; Kurutach, Werasak

    2014-03-01

    Face authentication is a biometric classification method that verifies the identity of a user based on an image of their face. Authentication accuracy is reduced when the pose, illumination, and expression of the training face images differ from those of the testing image. The methods in this paper are designed to improve the accuracy of a feature-based face recognition system when the pose of the input images differs from that of the training images. First, an efficient 2D-to-3D integrated face reconstruction approach is introduced to reconstruct a personalized 3D face model from a single frontal face image with neutral expression and normal illumination. Second, realistic virtual faces with different poses are synthesized based on the personalized 3D face to characterize the face subspace. Finally, face recognition is conducted based on these representative virtual faces. Compared with other related works, this framework has the following advantages: (1) only one single frontal face is required for face recognition, which avoids the burdensome enrollment work; and (2) the synthesized face samples provide the capability to conduct recognition under difficult conditions such as complex pose, illumination and expression. From the experimental results, we conclude that the proposed method improves the accuracy of face recognition under varying pose, illumination and expression. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  2. Reconstructing 3D Face Model with Associated Expression Deformation from a Single Face Image via Constructing a Low-Dimensional Expression Deformation Manifold.

    PubMed

    Wang, Shu-Fan; Lai, Shang-Hong

    2011-10-01

    Facial expression modeling is central to facial expression recognition and expression synthesis for facial animation. In this work, we propose a manifold-based 3D face reconstruction approach to estimating the 3D face model and the associated expression deformation from a single face image. With the proposed robust weighted feature map (RWF), we can obtain the dense correspondences between 3D face models and build a nonlinear 3D expression manifold from a large set of 3D facial expression models. Then a Gaussian mixture model in this manifold is learned to represent the distribution of expression deformation. By combining the merits of morphable neutral face model and the low-dimensional expression manifold, a novel algorithm is developed to reconstruct the 3D face geometry as well as the facial deformation from a single face image in an energy minimization framework. Experimental results on simulated and real images are shown to validate the effectiveness and accuracy of the proposed algorithm.

  3. Image dependency in the recognition of newly learnt faces.

    PubMed

    Longmore, Christopher A; Santos, Isabel M; Silva, Carlos F; Hall, Abi; Faloyin, Dipo; Little, Emily

    2017-05-01

    Research investigating the effect of lighting and viewpoint changes on unfamiliar and newly learnt faces has revealed that such recognition is highly image dependent and that changes in either of these lead to poor recognition accuracy. Three experiments are reported to extend these findings by examining the effect of apparent age on the recognition of newly learnt faces. Experiment 1 investigated the ability to generalize to novel ages of a face after learning a single image. It was found that recognition was best for the learnt image, with performance falling the greater the dissimilarity between the study and test images. Experiments 2 and 3 examined whether learning two images aids subsequent recognition of a novel image. The results indicated that interpolation between two studied images (Experiment 2) provided some additional benefit over learning a single view, but that this did not extend to extrapolation (Experiment 3). The results from all studies suggest that recognition was driven primarily by pictorial codes and that the recognition of faces learnt from a limited number of sources operates on stored images of faces as opposed to more abstract, structural, representations.

  4. Effects of exposure to facial expression variation in face learning and recognition.

    PubMed

    Liu, Chang Hong; Chen, Wenfeng; Ward, James

    2015-11-01

    Facial expression is a major source of variation in face images. Linking numerous expressions to the same face can be a huge challenge for face learning and recognition. It remains largely unknown what level of exposure to this image variation is critical for expression-invariant face recognition. We examined this issue in a recognition memory task, where the number of facial expressions of each face being exposed during a training session was manipulated. Faces were either trained with multiple expressions or a single expression, and they were later tested in either the same or different expressions. We found that recognition performance after learning three emotional expressions showed no improvement over learning a single emotional expression (Experiments 1 and 2). However, learning three emotional expressions improved recognition compared to learning a single neutral expression (Experiment 3). These findings reveal both the limitation and the benefit of multiple exposures to variations of emotional expression in achieving expression-invariant face recognition. The transfer of expression training to a new type of expression is likely to depend on a relatively extensive level of training and a certain degree of variation across the types of expressions.

  5. Super-resolution method for face recognition using nonlinear mappings on coherent features.

    PubMed

    Huang, Hua; He, Huiting

    2011-01-01

    The low resolution (LR) of face images significantly decreases the performance of face recognition. To address this problem, we present a super-resolution method that uses nonlinear mappings to infer coherent features that favor higher recognition by nearest neighbor (NN) classifiers for a single LR face image. Canonical correlation analysis is applied to establish the coherent subspaces between the principal component analysis (PCA) based features of high-resolution (HR) and LR face images. Then, a nonlinear mapping between HR/LR features can be built by radial basis functions (RBFs) with lower regression errors in the coherent feature space than in the PCA feature space. Thus, we can compute super-resolved coherent features corresponding to an input LR image efficiently and accurately according to the trained RBF model, and the face identity can be obtained by feeding these super-resolved features to a simple NN classifier. Extensive experiments on the Facial Recognition Technology, University of Manchester Institute of Science and Technology, and Olivetti Research Laboratory databases show that the proposed method outperforms state-of-the-art face recognition algorithms for a single LR image in terms of both recognition rate and robustness to facial variations of pose and expression.
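The core idea above, a learned nonlinear mapping from LR features to HR-like features followed by nearest-neighbour identification, can be sketched as follows. This is a minimal illustration, not the paper's exact method: the RBF regression is instantiated as kernel ridge regression with a Gaussian kernel, the CCA coherent-subspace step is omitted, and all data, dimensions, and function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: PCA-style features of high-resolution (HR)
# and low-resolution (LR) versions of the same training faces.
n_train, d = 40, 8
hr_feats = rng.normal(size=(n_train, d))
lr_feats = hr_feats @ rng.normal(size=(d, d)) * 0.5 + 0.1 * rng.normal(size=(n_train, d))

def rbf_kernel(A, B, gamma=0.1):
    # Gaussian RBF kernel between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge regression: learn a nonlinear LR -> HR feature mapping.
lam = 1e-3
K = rbf_kernel(lr_feats, lr_feats)
alpha = np.linalg.solve(K + lam * np.eye(n_train), hr_feats)

def super_resolve(lr_query):
    # Infer HR-like ("super-resolved") features for an LR probe.
    return rbf_kernel(np.atleast_2d(lr_query), lr_feats) @ alpha

def nn_identify(sr_feat, gallery):
    # Nearest-neighbour identification in the super-resolved feature space.
    return int(np.argmin(((gallery - sr_feat) ** 2).sum(-1)))

probe = super_resolve(lr_feats[7])
print(nn_identify(probe, hr_feats))  # should recover identity 7 here
```

In the paper the regression is performed in the CCA coherent subspace rather than directly on PCA features, which is reported to lower the regression error; the sketch keeps only the mapping-plus-NN structure.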

  6. Still-to-video face recognition in unconstrained environments

    NASA Astrophysics Data System (ADS)

    Wang, Haoyu; Liu, Changsong; Ding, Xiaoqing

    2015-02-01

    Face images from video sequences captured in unconstrained environments usually contain several kinds of variations, e.g. pose, facial expression, illumination, image resolution and occlusion. Motion blur and compression artifacts also deteriorate recognition performance. Besides, in various practical systems such as law enforcement, video surveillance and e-passport identification, only a single still image per person is enrolled as the gallery set. Many existing methods may fail to work due to variations in face appearance and the limited number of available gallery samples. In this paper, we propose a novel approach for still-to-video face recognition in unconstrained environments. By assuming that faces from still images and video frames share the same identity space, a regularized least squares regression method is utilized to tackle the multi-modality problem. Regularization terms based on heuristic assumptions are introduced to avoid overfitting. In order to deal with the single-image-per-person problem, we exploit face variations learned from training sets to synthesize virtual samples for gallery samples. We adopt a learning algorithm combining both affine/convex hull-based approaches and regularizations to match image sets. Experimental results on a real-world dataset consisting of unconstrained video sequences demonstrate that our method clearly outperforms state-of-the-art methods.
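The regularized least squares regression at the heart of the approach reduces, in its simplest form, to ridge regression: a squared-error fit plus a penalty term that guards against overfitting. A minimal sketch on synthetic data (all dimensions, the penalty weight, and the interpretation of the two feature sets are illustrative assumptions, not the paper's formulation):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical paired features: e.g. video-frame features X regressed
# onto still-image targets y through an unknown linear map.
n, d = 50, 10
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = X @ true_w + 0.05 * rng.normal(size=n)

# Ridge regression: minimise ||X w - y||^2 + lam ||w||^2 in closed form.
lam = 1.0  # regularization term to avoid overfitting
w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rel_err = np.linalg.norm(w - true_w) / np.linalg.norm(true_w)
print(f"relative error: {rel_err:.3f}")
```

The paper adds further regularization terms based on heuristic assumptions and combines the regression with hull-based set matching; the sketch shows only the closed-form regularized solve they all build on.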

  7. Kernel-aligned multi-view canonical correlation analysis for image recognition

    NASA Astrophysics Data System (ADS)

    Su, Shuzhi; Ge, Hongwei; Yuan, Yun-Hao

    2016-09-01

    Existing kernel-based correlation analysis methods mainly adopt a single kernel in each view. However, a single kernel is usually insufficient to characterize the nonlinear distribution information of a view. To solve the problem, we transform each original feature vector into a 2-dimensional feature matrix by means of kernel alignment, and then propose a novel kernel-aligned multi-view canonical correlation analysis (KAMCCA) method on the basis of the feature matrices. Our proposed method can simultaneously employ multiple kernels to better capture the nonlinear distribution information of each view, so that correlation features learned by KAMCCA can have good discriminating power in real-world image recognition. Extensive experiments are designed on five real-world image datasets, including NIR face images, thermal face images, visible face images, handwritten digit images, and object images. Promising experimental results on the datasets demonstrate the effectiveness of our proposed method.
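KAMCCA extends classical canonical correlation analysis (CCA), which finds maximally correlated projections of two views. A minimal two-view CCA sketch via SVD of the whitened cross-covariance, without the kernel-alignment or multi-view extensions the paper proposes (data and shapes are invented for illustration):

```python
import numpy as np

def cca(X, Y):
    # Classical two-view CCA: whiten each view's covariance, then take
    # the SVD of the whitened cross-covariance. Singular values are the
    # canonical correlations.
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Cxx = Xc.T @ Xc / len(X) + 1e-8 * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / len(Y) + 1e-8 * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / len(X)
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx)).T
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy)).T
    U, s, Vt = np.linalg.svd(Wx.T @ Cxy @ Wy)
    return Wx @ U, Wy @ Vt.T, s  # projection bases and correlations

rng = np.random.default_rng(5)
z = rng.normal(size=(200, 1))               # latent signal shared by both views
X = np.hstack([z, rng.normal(size=(200, 2))])
Y = np.hstack([z + 0.1 * rng.normal(size=(200, 1)), rng.normal(size=(200, 2))])
_, _, corrs = cca(X, Y)
print(corrs[0])  # top canonical correlation, close to 1 for the shared component
```

KAMCCA replaces each raw feature vector with a kernel-aligned feature matrix so that several kernels contribute per view; the correlation machinery above is the common core.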

  8. Kruskal-Wallis-based computationally efficient feature selection for face recognition.

    PubMed

    Ali Khan, Sajid; Hussain, Ayyaz; Basit, Abdul; Akram, Sheeraz

    2014-01-01

    Face recognition has attained great importance in today's technological world, and face recognition applications are increasingly widespread. Most existing work uses frontal face images to classify a face image; however, these techniques fail when applied to real-world face images. The proposed technique effectively extracts the prominent facial features. Many of the extracted features are redundant and do not contribute to representing the face. In order to eliminate those redundant features, a computationally efficient algorithm based on the Kruskal-Wallis test is used to select the more discriminative face features. The selected features are then passed to the classification step, in which different classifiers are ensembled to enhance the recognition accuracy rate, since a single classifier is unable to achieve high accuracy. Experiments are performed on standard face database images and the results are compared with existing techniques.
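Kruskal-Wallis feature selection scores each feature by how strongly its values separate the identity classes: the H statistic is computed per feature across groups, and the highest-scoring features are kept. A small sketch (the H statistic is implemented without tie correction, and the feature matrix and labels are invented for illustration):

```python
import numpy as np

def kruskal_h(groups):
    # Kruskal-Wallis H statistic for one feature across groups
    # (minimal version, no tie correction).
    all_vals = np.concatenate(groups)
    ranks = np.argsort(np.argsort(all_vals)) + 1.0  # ranks 1..N
    n_total = len(all_vals)
    h, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + len(g)]
        start += len(g)
        h += len(g) * (r.mean() - (n_total + 1) / 2.0) ** 2
    return 12.0 / (n_total * (n_total + 1)) * h

# Hypothetical feature matrix: rows = face images, columns = features;
# two identity classes.
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 5))
y = np.array([0] * 10 + [1] * 10)
X[y == 1, 0] += 3.0  # only feature 0 discriminates the identities

scores = [kruskal_h([X[y == 0, j], X[y == 1, j]]) for j in range(X.shape[1])]
top = int(np.argmax(scores))
print(top)  # feature 0 should score highest
```

In a full pipeline the low-scoring (redundant) columns would simply be dropped before the ensemble classification step.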

  9. Redesigning photo-ID to improve unfamiliar face matching performance.

    PubMed

    White, David; Burton, A Mike; Jenkins, Rob; Kemp, Richard I

    2014-06-01

    Viewers find it difficult to match photos of unfamiliar faces for identity. Despite this, the use of photographic ID is widespread. In this study we ask whether it is possible to improve face matching performance by replacing the single photograph on ID documents with multiple photos or an average image of the bearer. In 3 experiments we compare photo-to-photo matching with photo-to-average matching (where the average is formed from multiple photos of the same person) and photo-to-array matching (where the array comprises separate photos of the same person). We consistently find an accuracy advantage for average images and photo arrays over single photos, and show that this improvement is driven by performance in match trials. In the final experiment, we find a benefit of 4-image arrays relative to average images for unfamiliar faces, but not for familiar faces. We propose that the conventional photo-ID format can be improved, and discuss this finding in the context of face recognition more generally. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  10. A Method of Face Detection with Bayesian Probability

    NASA Astrophysics Data System (ADS)

    Sarker, Goutam

    2010-10-01

    The objective of face detection is to identify all images which contain a face, irrespective of its orientation, illumination conditions, etc. This is a hard problem, because faces are highly variable in size, shape, lighting conditions, etc. Many methods have been designed and developed to detect faces in a single image. The present paper is based on an `Appearance Based Method' which relies on learning facial and non-facial features from image examples. This, in turn, is based on statistical analysis of examples and counter-examples of facial images and employs the Bayesian conditional classification rule to estimate the probability that a face (or non-face) is present within an image frame. The detection rate of the present system is very high, and the numbers of false positive and false negative detections are substantially low.
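The Bayesian conditional classification rule described above can be sketched with one-dimensional Gaussian class-conditional densities learned from examples and counter-examples. This is a deliberate simplification: the abstract does not specify the feature model, and the feature values, means, and priors below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical scalar features summarising face and non-face image patches.
face_feats = rng.normal(loc=0.7, scale=0.1, size=500)
nonface_feats = rng.normal(loc=0.3, scale=0.2, size=500)

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Learn class-conditional densities from examples and counter-examples.
mu_f, sd_f = face_feats.mean(), face_feats.std()
mu_n, sd_n = nonface_feats.mean(), nonface_feats.std()
prior_face = 0.5  # equal priors, an assumption for the sketch

def posterior_face(x):
    # Bayes rule: P(face | x) = p(x | face) P(face) / p(x).
    pf = gaussian_pdf(x, mu_f, sd_f) * prior_face
    pn = gaussian_pdf(x, mu_n, sd_n) * (1 - prior_face)
    return pf / (pf + pn)

print(posterior_face(0.72) > 0.5)  # a face-like feature value is classified as face
```

A patch is declared a face when the posterior exceeds a threshold (0.5 with equal priors); the false-positive/false-negative trade-off reported in the paper corresponds to moving that threshold.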

  11. Precedence of the eye region in neural processing of faces

    PubMed Central

    Issa, Elias; DiCarlo, James

    2012-01-01

    Functional magnetic resonance imaging (fMRI) has revealed multiple subregions in monkey inferior temporal cortex (IT) that are selective for images of faces over other objects. The earliest of these subregions, the posterior lateral face patch (PL), has not been studied previously at the neurophysiological level. Perhaps not surprisingly, we found that PL contains a high concentration of 'face selective' cells when tested with standard image sets comparable to those used previously to define the region at the level of fMRI. However, we here report that several different image sets and analytical approaches converge to show that nearly all face selective PL cells are driven by the presence of a single eye in the context of a face outline. Most strikingly, images containing only an eye, even when incorrectly positioned in an outline, drove neurons nearly as well as full face images, and face images lacking only this feature led to longer latency responses. Thus, bottom-up face processing is relatively local and linearly integrates features, consistent with parts-based models, grounding investigation of how the presence of a face is first inferred in the IT face processing hierarchy. PMID:23175821

  12. Social Cognition in Williams Syndrome: Face Tuning

    PubMed Central

    Pavlova, Marina A.; Heiz, Julie; Sokolov, Alexander N.; Barisnikov, Koviljka

    2016-01-01

    Many neurological, neurodevelopmental, neuropsychiatric, and psychosomatic disorders are characterized by impairments in visual social cognition, body language reading, and facial assessment of a social counterpart. Yet a wealth of research indicates that individuals with Williams syndrome exhibit remarkable concern for social stimuli and face fascination. Here individuals with Williams syndrome were presented with a set of Face-n-Food images composed of food ingredients and to different degrees resembling a face (in a style slightly reminiscent of Giuseppe Arcimboldo). The primary advantage of these images is that single components do not explicitly trigger face-specific processing, whereas in face images commonly used for investigating face perception (such as photographs or depictions), the mere occurrence of typical cues already implicates face presence. In a spontaneous recognition task, participants were shown a set of images in a predetermined order from the least to most resembling a face. Strikingly, individuals with Williams syndrome exhibited profound deficits in recognition of the Face-n-Food images as a face: they did not report seeing a face in the images, which typically developing controls effortlessly recognized as a face, and gave overall fewer face responses. This suggests atypical face tuning in Williams syndrome. The outcome is discussed in the light of a general pattern of social cognition in Williams syndrome and brain mechanisms underpinning face processing. PMID:27531986

  14. Dynamic Emotional Faces Generalise Better to a New Expression but not to a New View.

    PubMed

    Liu, Chang Hong; Chen, Wenfeng; Ward, James; Takahashi, Nozomi

    2016-08-08

    Prior research based on static images has found limited improvement for recognising previously learnt faces in a new expression after several different facial expressions of these faces had been shown during the learning session. We investigated whether non-rigid motion of facial expression facilitates the learning process. In Experiment 1, participants remembered faces that were either presented in short video clips or still images. To assess the effect of exposure to expression variation, each face was either learnt through a single expression or three different expressions. Experiment 2 examined whether learning faces in video clips could generalise more effectively to a new view. The results show that faces learnt from video clips generalised effectively to a new expression with exposure to a single expression, whereas faces learnt from stills showed poorer generalisation with exposure to either single or three expressions. However, although superior recognition performance was demonstrated for faces learnt through video clips, dynamic facial expression did not create better transfer of learning to faces tested in a new view. The data thus fail to support the hypothesis that non-rigid motion enhances viewpoint invariance. These findings reveal both benefits and limitations of exposures to moving expressions for expression-invariant face recognition.

  16. Effects of threshold on single-target detection by using modified amplitude-modulated joint transform correlator

    NASA Astrophysics Data System (ADS)

    Kaewkasi, Pitchaya; Widjaja, Joewono; Uozumi, Jun

    2007-03-01

    Effects of the threshold value on the detection performance of the modified amplitude-modulated joint transform correlator are quantitatively studied using computer simulation. Fingerprint and human face images are used as test scenes in the presence of noise and a contrast difference. Simulation results demonstrate that this correlator improves detection performance for both types of image used, but more so for human face images. Optimal detection of low-contrast human face images obscured by strong noise can be obtained by selecting an appropriate threshold value.

  17. The utility of multiple synthesized views in the recognition of unfamiliar faces.

    PubMed

    Jones, Scott P; Dwyer, Dominic M; Lewis, Michael B

    2017-05-01

    The ability to recognize an unfamiliar individual on the basis of prior exposure to a photograph is notoriously poor and prone to errors, but recognition accuracy is improved when multiple photographs are available. In applied situations, when only limited real images are available (e.g., from a mugshot or CCTV image), the generation of new images might provide a technological prosthesis for otherwise fallible human recognition. We report two experiments examining the effects of providing computer-generated additional views of a target face. In Experiment 1, provision of computer-generated views supported better target face recognition than exposure to the target image alone and equivalent performance to that for exposure of multiple photograph views. Experiment 2 replicated the advantage of providing generated views, but also indicated an advantage for multiple viewings of the single target photograph. These results strengthen the claim that identifying a target face can be improved by providing multiple synthesized views based on a single target image. In addition, our results suggest that the degree of advantage provided by synthesized views may be affected by the quality of synthesized material.

  18. Global Binary Continuity for Color Face Detection With Complex Background

    NASA Astrophysics Data System (ADS)

    Belavadi, Bhaskar; Mahendra Prashanth, K. V.; Joshi, Sujay S.; Suprathik, N.

    2017-08-01

    In this paper, we propose a method to detect human faces in color images with complex backgrounds. The proposed algorithm makes use of two color space models, HSV and YCgCr. The color-segmented image is filled uniformly with a single (binary) color, and then all unwanted discontinuous lines are removed to obtain the final image. Experimental results on the Caltech database show that the proposed model is able to accomplish far better segmentation for faces of varying orientations, skin colors and background environments.
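The color-space segmentation stage might look roughly like the sketch below, which thresholds pixels in HSV to produce a binary face-candidate mask. The threshold values are assumptions chosen for illustration, not the paper's ranges, and the YCgCr stage and the discontinuous-line cleanup are omitted.

```python
import colorsys
import numpy as np

# Hypothetical skin-tone thresholds in HSV (hue in [0, 1)).
H_MAX, S_MIN, S_MAX, V_MIN = 50 / 360.0, 0.15, 0.70, 0.35

def skin_mask(rgb_image):
    # rgb_image: (rows, cols, 3) floats in [0, 1] -> boolean candidate mask.
    mask = np.zeros(rgb_image.shape[:2], dtype=bool)
    for i in range(rgb_image.shape[0]):
        for j in range(rgb_image.shape[1]):
            h, s, v = colorsys.rgb_to_hsv(*rgb_image[i, j])
            mask[i, j] = (h <= H_MAX and S_MIN <= s <= S_MAX and v >= V_MIN)
    return mask

img = np.zeros((2, 2, 3))
img[0, 0] = (0.8, 0.6, 0.5)  # skin-like pixel
img[1, 1] = (0.1, 0.2, 0.9)  # saturated blue background pixel
m = skin_mask(img)
print(m[0, 0], m[1, 1])  # True False
```

In the full algorithm the mask would then be filled uniformly and cleaned of discontinuous lines before declaring face regions.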

  19. Fingerprint imaging from the inside of a finger with full-field optical coherence tomography

    PubMed Central

    Auksorius, Egidijus; Boccara, A. Claude

    2015-01-01

    Imaging below the fingertip surface might be a useful alternative to traditional fingerprint sensing, since the internal finger features are more reliable than the external ones. One of the most promising subsurface imaging techniques is optical coherence tomography (OCT), which, however, has to acquire 3-D data even when only a single en face image is required. This makes OCT inherently slow for en face imaging and produces unnecessarily large data sets. Here we demonstrate that full-field optical coherence tomography (FF-OCT) can be used to produce en face images of sweat pores and internal fingerprints, which can be used for identification purposes. PMID:26601009

  20. 3D FaceCam: a fast and accurate 3D facial imaging device for biometrics applications

    NASA Astrophysics Data System (ADS)

    Geng, Jason; Zhuang, Ping; May, Patrick; Yi, Steven; Tunnell, David

    2004-08-01

    Human faces are fundamentally three-dimensional (3D) objects, and each face has its unique 3D geometric profile. The 3D geometric features of a human face can be used, together with its 2D texture, for rapid and accurate face recognition purposes. Due to the lack of low-cost and robust 3D sensors and effective 3D facial recognition (FR) algorithms, almost all existing FR systems use 2D face images. Genex has developed 3D solutions that overcome the inherent problems in 2D while also addressing limitations in other 3D alternatives. One important aspect of our solution is a unique 3D camera (the 3D FaceCam) that combines multiple imaging sensors within a single compact device to provide instantaneous, ear-to-ear coverage of a human face. This 3D camera uses three high-resolution CCD sensors and a color encoded pattern projection system. The RGB color information from each pixel is used to compute the range data and generate an accurate 3D surface map. The imaging system uses no moving parts and combines multiple 3D views to provide detailed and complete 3D coverage of the entire face. Images are captured within a fraction of a second and full-frame 3D data is produced within a few seconds. The described method provides much better data coverage and accuracy in areas with sharp features or details (such as the nose and eyes). Using this 3D data, we have been able to demonstrate that a 3D approach can significantly improve the performance of facial recognition. We have conducted tests in which we have varied the lighting conditions and angle of image acquisition in the "field." These tests have shown that the matching results are significantly improved when enrolling a 3D image rather than a single 2D image. With its 3D solutions, Genex is working toward unlocking the promise of powerful 3D FR and transferring FR from a lab technology into a real-world biometric solution.

  1. Does a single session of electroconvulsive therapy alter the neural response to emotional faces in depression? A randomised sham-controlled functional magnetic resonance imaging study.

    PubMed

    Miskowiak, Kamilla W; Kessing, Lars V; Ott, Caroline V; Macoveanu, Julian; Harmer, Catherine J; Jørgensen, Anders; Revsbech, Rasmus; Jensen, Hans M; Paulson, Olaf B; Siebner, Hartwig R; Jørgensen, Martin B

    2017-09-01

    Negative neurocognitive bias is a core feature of major depressive disorder that is reversed by pharmacological and psychological treatments. This double-blind functional magnetic resonance imaging study investigated for the first time whether electroconvulsive therapy modulates negative neurocognitive bias in major depressive disorder. Patients with major depressive disorder were randomised to a single active (n=15) or sham (n=12) electroconvulsive therapy session. The following day they underwent whole-brain functional magnetic resonance imaging at 3T while viewing emotional faces and performed facial expression recognition and dot-probe tasks. A single electroconvulsive therapy session had no effect on amygdala response to emotional faces. Whole-brain analysis revealed no effects of electroconvulsive therapy versus sham therapy after family-wise error correction at the cluster level, using a cluster-forming threshold of Z>3.1 (p<0.001) to keep the family-wise error rate below 5%. Groups showed no differences in behavioural measures, mood and medication. Exploratory cluster-corrected whole-brain analysis (Z>2.3; p<0.01) revealed electroconvulsive therapy-induced changes in parahippocampal and superior frontal responses to fearful versus happy faces, as well as in fear-specific functional connectivity between the amygdala and occipito-temporal regions. Across all patients, greater fear-specific amygdala-occipital coupling correlated with lower fear vigilance. Although no statistically significant shift in neural response to faces was observed after a single electroconvulsive therapy session, the trend-level changes point to an early shift in emotional processing that may contribute to the antidepressant effects of electroconvulsive therapy.

  2. Fast periodic presentation of natural images reveals a robust face-selective electrophysiological response in the human brain.

    PubMed

    Rossion, Bruno; Torfs, Katrien; Jacques, Corentin; Liu-Shuang, Joan

    2015-01-16

    We designed a fast periodic visual stimulation approach to identify an objective signature of face categorization incorporating both visual discrimination (from nonface objects) and generalization (across widely variable face exemplars). Scalp electroencephalographic (EEG) data were recorded in 12 human observers viewing natural images of objects at a rapid frequency of 5.88 images/s for 60 s. Natural images of faces were interleaved every five stimuli, i.e., at 1.18 Hz (5.88/5). Face categorization was indexed by a high signal-to-noise ratio response, specifically at an oddball face stimulation frequency of 1.18 Hz and its harmonics. This face-selective periodic EEG response was highly significant for every participant, even for a single 60-s sequence, and was generally localized over the right occipitotemporal cortex. The periodicity constraint and the large selection of stimuli ensured that this selective response to natural face images was free of low-level visual confounds, as confirmed by the absence of any oddball response for phase-scrambled stimuli. Without any subtraction procedure, time-domain analysis revealed a sequence of differential face-selective EEG components between 120 and 400 ms after oddball face image onset, progressing from medial occipital (P1-faces) to occipitotemporal (N1-faces) and anterior temporal (P2-faces) regions. Overall, this fast periodic visual stimulation approach provides a direct signature of natural face categorization and opens an avenue for efficiently measuring categorization responses of complex visual stimuli in the human brain. © 2015 ARVO.
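The frequency-tagging analysis above quantifies the face-selective response as a high signal-to-noise ratio (SNR) specifically at the oddball frequency of 5.88/5 = 1.176 Hz. A simulated sketch of that measurement follows; the sampling rate, amplitudes, noise level, and a 125-s duration (chosen so both tagged frequencies fall on exact FFT bins) are all assumptions, not the study's recording parameters.

```python
import numpy as np

# Simulated EEG-like signal: base stimulation at 5.88 Hz plus an oddball
# (face) response at every 5th stimulus, i.e. 5.88/5 = 1.176 Hz.
fs, dur = 250.0, 125.0  # sampling rate (Hz) and duration (s)
t = np.arange(int(fs * dur)) / fs
base_f, oddball_f = 5.88, 5.88 / 5.0
rng = np.random.default_rng(3)
sig = (np.sin(2 * np.pi * base_f * t)
       + 0.8 * np.sin(2 * np.pi * oddball_f * t)
       + 0.5 * rng.normal(size=t.size))

# Amplitude spectrum.
spec = np.abs(np.fft.rfft(sig)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def snr_at(f_target, n_neighbors=20):
    # SNR as amplitude at the target bin divided by the mean amplitude of
    # neighbouring bins, a common frequency-tagging measure.
    idx = int(np.argmin(np.abs(freqs - f_target)))
    neighbors = np.r_[spec[idx - n_neighbors:idx],
                      spec[idx + 1:idx + 1 + n_neighbors]]
    return spec[idx] / neighbors.mean()

print(snr_at(oddball_f) > 10)  # the oddball response stands out clearly
```

In the real analysis the oddball response is summed over the 1.18 Hz harmonics and compared against phase-scrambled control sequences; the sketch shows only the single-bin SNR computation.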

  3. Synthesis and identification of three-dimensional faces from image(s) and three-dimensional generic models

    NASA Astrophysics Data System (ADS)

    Liu, Zexi; Cohen, Fernand

    2017-11-01

    We describe an approach for synthesizing a three-dimensional (3-D) face structure from an image or images of a human face taken at a priori unknown poses, using gender- and ethnicity-specific 3-D generic models. The synthesis process starts with a generic model, which is personalized as images of the person become available, using preselected landmark points that are tessellated to form a high-resolution triangular mesh. From a single image, two of the three coordinates of the model are reconstructed in accordance with the given image of the person, while the third coordinate is sampled from the generic model, and the appearance is made in accordance with the image. With multiple images, all coordinates and appearance are reconstructed in accordance with the observed images. This method allows for accurate pose estimation as well as face identification in 3-D, rendering a difficult two-dimensional (2-D) face recognition problem into a much simpler 3-D surface matching problem. The estimation of the unknown pose is achieved using the Levenberg-Marquardt optimization process. Encouraging experimental results are obtained in a controlled environment with high-resolution images under good illumination conditions, as well as for images taken in an uncontrolled environment under arbitrary illumination with low-resolution cameras.

  4. Face detection assisted auto exposure: supporting evidence from a psychophysical study

    NASA Astrophysics Data System (ADS)

    Jin, Elaine W.; Lin, Sheng; Dharumalingam, Dhandapani

    2010-01-01

    Face detection has been implemented in many digital still cameras and camera phones with the promise of enhancing existing camera functions (e.g., auto exposure) and adding new features (e.g., blink detection). In this study we examined the use of face detection algorithms in assisting auto exposure (AE). The set of 706 images used in this study was captured using Canon digital single-lens reflex cameras and subsequently processed with an image processing pipeline. A psychophysical study was performed to obtain the optimal exposure, along with the upper and lower bounds of exposure, for all 706 images. Three methods of marking faces were utilized: manual marking, face detection algorithm A (FD-A), and face detection algorithm B (FD-B). The manual marking method found 751 faces in 426 images, which served as the ground truth for face regions of interest; the remaining images either contained no faces or contained faces too small to be considered detectable. The two face detection algorithms differ in resource requirements and in performance: FD-A uses less memory and fewer gates than FD-B, but FD-B detects more faces and produces fewer false positives. A face detection assisted auto exposure algorithm was developed and tested against the evaluation results from the psychophysical study. The AE test results showed noticeable improvement when faces were detected and used in auto exposure. However, the presence of false positives would negatively impact the added benefit.
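As a rough sketch of how a detected face region could steer exposure metering (our own simplification, not the paper's algorithm), the snippet below weights the face box heavily when computing an exposure gain; `ae_target_gain` and its parameters are hypothetical.

```python
import numpy as np

def ae_target_gain(img, face_box=None, target=0.18, face_weight=0.8):
    """Exposure gain driving metered luminance toward `target`; when a
    face box (y0, y1, x0, x1) is given, its mean luminance dominates."""
    global_mean = img.mean()
    if face_box is None:
        metered = global_mean
    else:
        y0, y1, x0, x1 = face_box
        metered = (face_weight * img[y0:y1, x0:x1].mean()
                   + (1 - face_weight) * global_mean)
    return target / max(metered, 1e-6)

# Backlit scene: bright background, under-exposed face.
img = np.full((100, 100), 0.6)
img[40:70, 35:65] = 0.05
print(round(ae_target_gain(img), 2),
      round(ae_target_gain(img, (40, 70, 35, 65)), 2))  # 0.33 1.2
```

Without the face box the bright background dominates the metering and the dark face stays dark; with it, the AE gain rises to expose the face correctly.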

  5. Incidental memory for faces in children with different genetic subtypes of Prader-Willi syndrome.

    PubMed

    Key, Alexandra P; Dykens, Elisabeth M

    2017-06-01

    The present study examined the effects of genetic subtype on social memory in children (7-16 years) with Prader-Willi syndrome (PWS). Visual event-related potentials (ERPs) during a passive viewing task were used to compare incidental memory traces for repeated vs single presentations of previously unfamiliar social (faces) and nonsocial (houses) images in 15 children with the deletion subtype and 13 children with maternal uniparental disomy (mUPD). While all participants perceived faces as different from houses (N170 responses), repeated faces elicited more positive ERP amplitudes ('old/new' effect, 250-500 ms) only in children with the deletion subtype. Conversely, the mUPD group demonstrated reduced amplitudes suggestive of habituation to the repeated faces. ERP responses to repeated vs single house images did not differ in either group. The results suggest that faces hold different motivational value for individuals with the deletion vs mUPD subtype of PWS and could contribute to the explanation of subtype differences in psychiatric symptoms, including autism symptomatology. © The Author (2017). Published by Oxford University Press.

  6. 20 MHz Forward-imaging Single-element Beam Steering with an Internal Rotating Variable-Angle Reflecting Surface: Wire phantom and Ex vivo pilot study

    PubMed Central

    Raphael, David T.; Li, Xiang; Park, Jinhyoung; Chen, Ruimin; Chabok, Hamid; Barukh, Arthur; Zhou, Qifa; Elgazery, Mahmoud; Shung, K. Kirk

    2012-01-01

    Feasibility is demonstrated for a forward-imaging beam steering system involving a single-element 20 MHz angled-face acoustic transducer combined with an internal rotating variable-angle reflecting surface (VARS). Rotation of the VARS structure, for a fixed position of the transducer, generates a 2-D angular sector scan. If these VARS revolutions were to be accompanied by successive rotations of the single-element transducer, 3-D imaging would be achieved. In the design of this device, a single-element 20 MHz PMN-PT press-focused angled-face transducer is focused on the circle of midpoints of a micro-machined VARS within the distal end of an endoscope. The 2-D imaging system was tested in water bath experiments with phantom wire structures at a depth of 10 mm, and exhibited an axial resolution of 66 μm and a lateral resolution of 520 μm. Chirp coded excitation was used to enhance the signal-to-noise ratio, and to increase the depth of penetration. Images of an ex vivo cow eye were obtained. This VARS-based approach offers a novel forward-looking beam-steering method, which could be useful in intra-cavity imaging. PMID:23122968

  7. 20 MHz forward-imaging single-element beam steering with an internal rotating variable-angle reflecting surface: Wire phantom and ex vivo pilot study.

    PubMed

    Raphael, David T; Li, Xiang; Park, Jinhyoung; Chen, Ruimin; Chabok, Hamid; Barukh, Arthur; Zhou, Qifa; Elgazery, Mahmoud; Shung, K Kirk

    2013-02-01

    Feasibility is demonstrated for a forward-imaging beam steering system involving a single-element 20 MHz angled-face acoustic transducer combined with an internal rotating variable-angle reflecting surface (VARS). Rotation of the VARS structure, for a fixed position of the transducer, generates a 2-D angular sector scan. If these VARS revolutions were to be accompanied by successive rotations of the single-element transducer, 3-D imaging would be achieved. In the design of this device, a single-element 20 MHz PMN-PT press-focused angled-face transducer is focused on the circle of midpoints of a micro-machined VARS within the distal end of an endoscope. The 2-D imaging system was tested in water bath experiments with phantom wire structures at a depth of 10 mm, and exhibited an axial resolution of 66 μm and a lateral resolution of 520 μm. Chirp coded excitation was used to enhance the signal-to-noise ratio, and to increase the depth of penetration. Images of an ex vivo cow eye were obtained. This VARS-based approach offers a novel forward-looking beam-steering method, which could be useful in intra-cavity imaging. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Deep neural network using color and synthesized three-dimensional shape for face recognition

    NASA Astrophysics Data System (ADS)

    Rhee, Seon-Min; Yoo, ByungIn; Han, Jae-Joon; Hwang, Wonjun

    2017-03-01

    We present an approach for face recognition using synthesized three-dimensional (3-D) shape information together with two-dimensional (2-D) color in a deep convolutional neural network (DCNN). As 3-D facial shape is hardly affected by the extrinsic 2-D texture changes caused by illumination, make-up, and occlusions, it can provide more reliable complementary features, in harmony with the 2-D color feature, for face recognition. Unlike other approaches that use 3-D shape information with the help of an additional depth sensor, our approach generates a personalized 3-D face model using only face landmarks in the 2-D input image. Using the personalized 3-D face model, we generate a frontalized 2-D color facial image as well as 3-D facial images (e.g., a depth image and a normal image). In our DCNN, we first feed the 2-D and 3-D facial images into independent convolutional layers, where the low-level kernels are learned according to the characteristics of each modality. Then, we merge them and feed them into higher-level layers of a single deep neural network. Our approach is evaluated on the Labeled Faces in the Wild dataset, and the results show that the verification error rate at a false acceptance rate of 1% is reduced by up to 32.1% compared with the baseline that uses only a 2-D color image.
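A minimal forward-pass sketch of the two-branch idea follows (independent low-level convolutions per modality, then a merged higher-level layer). It is a toy illustration with random weights and hypothetical shapes, not the paper's DCNN.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_relu(x, k):
    """Valid 2-D convolution followed by ReLU (single channel, stride 1)."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return np.maximum(out, 0.0)

# Hypothetical inputs: frontalized color (grayscale here for brevity) and
# a depth map synthesized from the personalized 3-D model.
color = rng.uniform(size=(16, 16))
depth = rng.uniform(size=(16, 16))

# Independent low-level branches, one per modality...
k_color, k_depth = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
f_color = conv_relu(color, k_color).ravel()
f_depth = conv_relu(depth, k_depth).ravel()

# ...then merged and fed into a shared higher-level (fully connected) layer.
merged = np.concatenate([f_color, f_depth])
W = rng.normal(size=(merged.size, 64)) / np.sqrt(merged.size)
embedding = np.maximum(merged @ W, 0.0)
print(embedding.shape)  # (64,)
```

The point of the split is that each branch's low-level kernels adapt to its own modality statistics before the fusion layer combines them.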

  9. Transfer learning for bimodal biometrics recognition

    NASA Astrophysics Data System (ADS)

    Dan, Zhiping; Sun, Shuifa; Chen, Yanfei; Gan, Haitao

    2013-10-01

    Biometric recognition aims to identify new personal identities based on existing knowledge. Because using multiple biometric traits of an individual brings more information to bear on recognition, multi-biometrics has been shown to produce higher accuracy than single biometrics. However, traditional machine learning commonly assumes that the training and test data lie in the same feature space and share the same underlying distribution; when the distributions or features differ between training and future data, model performance often drops. In this paper, we propose a transfer learning method for face recognition on bimodal biometrics. The training and test samples of bimodal biometric images are composed of visible-light face images and infrared face images. Our algorithm transfers knowledge across feature spaces, relaxing the assumptions of a shared feature space and a shared underlying distribution by automatically learning a mapping between two different but related types of face images. Experiments on face images show that the proposed method greatly improves face recognition accuracy compared with previous methods, demonstrating its effectiveness and robustness.
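One simple way to realize such a cross-spectral mapping (our assumption of a linear map learned by ridge-regularized least squares, not necessarily the authors' formulation) is sketched below on synthetic paired visible-light and infrared feature vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical paired training features: visible-light (X) and infrared (Y).
W_true = rng.normal(size=(8, 8))
X = rng.normal(size=(100, 8))
Y = X @ W_true + 0.05 * rng.normal(size=(100, 8))

# Learn the cross-spectral mapping with ridge-regularized least squares.
lam = 1e-2
W = np.linalg.solve(X.T @ X + lam * np.eye(8), X.T @ Y)

# Map a visible-light gallery into the infrared feature space and
# identify an infrared probe by nearest neighbor.
gallery_vis = rng.normal(size=(10, 8))
probe_ir = gallery_vis[3] @ W_true          # IR capture of gallery subject 3
d = np.linalg.norm(gallery_vis @ W - probe_ir, axis=1)
print(int(np.argmin(d)))  # 3
```

After the mapping, visible-light enrollment images and infrared probes live in a common space, so ordinary nearest-neighbor matching applies.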

  10. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors

    PubMed Central

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-01-01

    Although face recognition systems are widely deployed, they are vulnerable to presentation attack samples (fake samples), so a presentation attack detection (PAD) method is required to enhance their security. Most previously proposed PAD methods for face recognition systems have relied on handcrafted image features designed with expert knowledge, such as Gabor filters, local binary patterns (LBP), local ternary patterns (LTP), and histograms of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding detection accuracy that is low and varies with the characteristics of presentation attack face images. Deep learning methods developed in the computer vision community have proven suitable for automatically training feature extractors that can complement handcrafted features. To overcome the limitations of previous PAD methods, we propose a new PAD method that combines deep and handcrafted features extracted from images captured by a visible-light camera sensor. Our method uses a convolutional neural network (CNN) to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images, in order to discriminate real from presentation attack face images. Combining the two types of image features yields a new type of feature, called hybrid features, with stronger discrimination ability than either type alone. Finally, we use a support vector machine (SVM) to classify the image features into the real or presentation attack class. Our experimental results indicate that our method outperforms previous PAD methods, yielding the smallest error rates on the same image databases. PMID:29495417

  11. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors.

    PubMed

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-02-26

    Although face recognition systems are widely deployed, they are vulnerable to presentation attack samples (fake samples), so a presentation attack detection (PAD) method is required to enhance their security. Most previously proposed PAD methods for face recognition systems have relied on handcrafted image features designed with expert knowledge, such as Gabor filters, local binary patterns (LBP), local ternary patterns (LTP), and histograms of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding detection accuracy that is low and varies with the characteristics of presentation attack face images. Deep learning methods developed in the computer vision community have proven suitable for automatically training feature extractors that can complement handcrafted features. To overcome the limitations of previous PAD methods, we propose a new PAD method that combines deep and handcrafted features extracted from images captured by a visible-light camera sensor. Our method uses a convolutional neural network (CNN) to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images, in order to discriminate real from presentation attack face images. Combining the two types of image features yields a new type of feature, called hybrid features, with stronger discrimination ability than either type alone. Finally, we use a support vector machine (SVM) to classify the image features into the real or presentation attack class. Our experimental results indicate that our method outperforms previous PAD methods, yielding the smallest error rates on the same image databases.
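A minimal sketch of the hybrid-feature idea follows, assuming a simplified shift-based LBP (no circular interpolation) and a random vector standing in for the CNN embedding; the SVM stage is omitted and all names are hypothetical.

```python
import numpy as np

def lbp_histogram(img, radius=1):
    """Basic 8-neighbor LBP code histogram (256 bins) at a given radius,
    using axis shifts in place of circular interpolation (a simplification)."""
    c = img[radius:-radius, radius:-radius]
    offsets = [(-radius, -radius), (-radius, 0), (-radius, radius), (0, radius),
               (radius, radius), (radius, 0), (radius, -radius), (0, -radius)]
    code = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[radius + dy: img.shape[0] - radius + dy,
                 radius + dx: img.shape[1] - radius + dx]
        code |= ((nb >= c).astype(np.int32) << bit)
    h = np.bincount(code.ravel(), minlength=256).astype(float)
    return h / h.sum()

def hybrid_features(img, deep_feat, radii=(1, 2, 3)):
    """Concatenate multi-level LBP histograms with a deep feature vector."""
    return np.concatenate([lbp_histogram(img, r) for r in radii] + [deep_feat])

rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(64, 64))
deep = rng.normal(size=(128,))              # stand-in for a CNN embedding
feat = hybrid_features(face, deep)
print(feat.shape)  # (896,)
```

In a real pipeline the concatenated vector would be fed to an SVM trained on labeled real and attack samples.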

  12. Live face detection based on the analysis of Fourier spectra

    NASA Astrophysics Data System (ADS)

    Li, Jiangwei; Wang, Yunhong; Tan, Tieniu; Jain, Anil K.

    2004-08-01

    Biometrics is a rapidly developing technology used to identify a person based on his or her physiological or behavioral characteristics. To ensure the correctness of authentication, a biometric system must be able to detect and reject the use of a copy of a biometric instead of the live biometric. This function is usually termed "liveness detection". This paper describes a new method for live face detection: using structure and movement information of the live face, an effective live face detection algorithm is presented. Compared to existing approaches, which concentrate on the measurement of 3D depth information, this method is based on the analysis of the Fourier spectra of a single face image or of face image sequences. Experimental results show that the proposed method has encouraging performance.
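The Fourier-spectrum cue can be illustrated with a high-frequency energy ratio, a simplification of our own devising rather than the paper's exact measure: recaptured photographs tend to lose fine detail, depressing this ratio.

```python
import numpy as np

def high_freq_ratio(img, cutoff=0.1):
    """Fraction of spectral energy above a normalized frequency cutoff.
    A low ratio is a (rough) cue that the face may be a printed copy."""
    F = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1]))[None, :]
    r = np.sqrt(fy ** 2 + fx ** 2)
    E = np.abs(F) ** 2
    return E[r > cutoff].sum() / E.sum()

rng = np.random.default_rng(0)
live = rng.normal(size=(64, 64))            # stand-in for a sharp live capture
blurred = (live + np.roll(live, 1, 0) + np.roll(live, -1, 0)
           + np.roll(live, 1, 1) + np.roll(live, -1, 1)) / 5.0  # "photo of a photo"
print(high_freq_ratio(live) > high_freq_ratio(blurred))  # True
```

Thresholding such a ratio (or tracking its change over an image sequence) yields a simple single-image liveness test in the spirit of the paper.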

  13. Identity-level representations affect unfamiliar face matching performance in sequential but not simultaneous tasks.

    PubMed

    Menon, Nadia; White, David; Kemp, Richard I

    2015-01-01

    According to cognitive and neurological models of the face-processing system, faces are represented at two levels of abstraction. First, image-based pictorial representations code a particular instance of a face and include information that is unrelated to identity-such as lighting, pose, and expression. Second, at a more abstract level, identity-specific representations combine information from various encounters with a single face. Here we tested whether identity-level representations mediate unfamiliar face matching performance. Across three experiments we manipulated identity attributions to pairs of target images and measured the effect on subsequent identification decisions. Participants were instructed that target images were either two photos of the same person (1ID condition) or photos of two different people (2ID condition). This manipulation consistently affected performance in sequential matching: 1ID instructions improved accuracy on "match" trials and caused participants to adopt a more liberal response bias than the 2ID condition. However, this manipulation did not affect performance in simultaneous matching. We conclude that identity-level representations, generated in working memory, influence the amount of variation tolerated between images, when making identity judgements in sequential face matching.

  14. Discriminating Projections for Estimating Face Age in Wild Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tokola, Ryan A; Bolme, David S; Ricanek, Karl

    2014-01-01

    We introduce a novel approach to estimating the age of a human from a single uncontrolled image. Current face age estimation algorithms work well in highly controlled images, and some are robust to changes in illumination, but it is usually assumed that images are close to frontal. This bias is clearly seen in the datasets that are commonly used to evaluate age estimation, which either entirely or mostly consist of frontal images. Using pose-specific projections, our algorithm maps image features into a pose-insensitive latent space that is discriminative with respect to age. Age estimation is then performed using a multi-class SVM. We show that our approach outperforms other published results on the Images of Groups dataset, which is the only age-related dataset with a non-trivial number of off-axis face images, and that we are competitive with recent age estimation algorithms on the mostly-frontal FG-NET dataset. We also experimentally demonstrate that our feature projections introduce insensitivity to pose.
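A toy version of pose-specific projections into a shared latent space follows, under our own simplifying assumption that each pose observes a common discriminative code through a different linear mixing; all names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 20, 5          # samples per pose, feature dim, latent dim

# Shared (age-)discriminative latent code per subject, observed through a
# different unknown linear mixing in each pose bin.
Z = rng.normal(size=(n, k))
poses = ("frontal", "left", "right")
mix = {p: rng.normal(size=(k, d)) for p in poses}
X = {p: Z @ mix[p] + 0.05 * rng.normal(size=(n, d)) for p in poses}

# Learn one ridge projection per pose that maps features back into the
# common, pose-insensitive latent space.
lam = 1e-3
W = {p: np.linalg.solve(X[p].T @ X[p] + lam * np.eye(d), X[p].T @ Z)
     for p in poses}

# The same subject seen in two poses projects to nearby latent points,
# while a different subject lands farther away.
z_f = X["frontal"][0] @ W["frontal"]
z_l = X["left"][0] @ W["left"]
z_other = X["left"][1] @ W["left"]
print(np.linalg.norm(z_f - z_l) < np.linalg.norm(z_f - z_other))  # True
```

Once features from all poses land in one latent space, a single multi-class SVM can be trained there regardless of head pose.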

  15. Combined in-depth, 3D, en face imaging of the optic disc, optic disc pits and optic disc pit maculopathy using swept-source megahertz OCT at 1050 nm.

    PubMed

    Maertz, Josef; Kolb, Jan Philip; Klein, Thomas; Mohler, Kathrin J; Eibl, Matthias; Wieser, Wolfgang; Huber, Robert; Priglinger, Siegfried; Wolf, Armin

    2018-02-01

    To demonstrate papillary imaging of eyes with optic disc pits (ODP) or optic disc pit associated maculopathy (ODP-M) with ultrahigh-speed swept-source optical coherence tomography (SS-OCT) at 1.68 million A-scans/s, and to generate 3D-renderings of the papillary area, with 3D volume-reconstructions of the ODP and highly resolved en face images, from a single densely-sampled megahertz-OCT (MHz-OCT) dataset for investigation of ODP characteristics. A 1.68 MHz prototype SS-MHz-OCT system at 1050 nm based on a Fourier-domain mode-locked laser was employed to acquire high-definition 3D datasets with a dense sampling of 1600 × 1600 A-scans over a 45° field of view. Six eyes with ODPs, and two further eyes with glaucomatous alteration or without ocular pathology, are presented. 3D-rendering of the deep papillary structures, virtual 3D-reconstructions of the ODPs, and depth-resolved isotropic en face images were generated using semiautomatic segmentation. 3D-rendering and en face imaging of the optic disc, ODPs and ODP-associated pathologies showed a broad spectrum of ODP characteristics; between individuals, the shape of the ODP and the accompanying pathologies varied considerably. MHz-OCT en face imaging generates distinct top-view images of ODPs and ODP-M, produces high-resolution images of retinal pathologies associated with ODP-M, and allows visualizing ODPs with depths of up to 2.7 mm. Different patterns of ODPs can be visualized in patients for the first time using 3D-reconstructions and co-registered high-definition en face images extracted from a single densely sampled 1050 nm MHz-OCT dataset. As the immediate vicinity to the subarachnoid space (SAS) and the site of intrapapillary proliferation are located at the bottom of the ODP, it is crucial to image the complete structure and the whole depth of ODPs. Especially in very deep pits, where non-swept-source OCT fails to reach the bottom, conventional swept-source devices and the MHz-OCT alike are feasible and beneficial methods for examining deep details of optic disc pathologies, with the MHz-OCT offering an essentially swifter imaging process.

  16. The role of movement in the recognition of famous faces.

    PubMed

    Lander, K; Christie, F; Bruce, V

    1999-11-01

    The effects of movement on the recognition of famous faces shown in difficult conditions were investigated. Images were presented as negatives, upside down (inverted), and thresholded. Results indicate that, under all these conditions, moving faces were recognized significantly better than static ones. One possible explanation of this effect could be that a moving sequence contains more static information about the different views and expressions of the face than does a single static image. However, even when the amount of static information was equated (Experiments 3 and 4), there was still an advantage for moving sequences that contained their original dynamic properties. The results suggest that the dynamics of the motion provide additional information, helping to access an established familiar face representation. Both the theoretical and the practical implications for these findings are discussed.

  17. The roles of perceptual and conceptual information in face recognition.

    PubMed

    Schwartz, Linoy; Yovel, Galit

    2016-11-01

    The representation of familiar objects is comprised of perceptual information about their visual properties as well as the conceptual knowledge that we have about them. What is the relative contribution of perceptual and conceptual information to object recognition? Here, we examined this question by designing a face familiarization protocol during which participants were either exposed to rich perceptual information (viewing each face in different angles and illuminations) or with conceptual information (associating each face with a different name). Both conditions were compared with single-view faces presented with no labels. Recognition was tested on new images of the same identities to assess whether learning generated a view-invariant representation. Results showed better recognition of novel images of the learned identities following association of a face with a name label, but no enhancement following exposure to multiple face views. Whereas these findings may be consistent with the role of category learning in object recognition, face recognition was better for labeled faces only when faces were associated with person-related labels (name, occupation), but not with person-unrelated labels (object names or symbols). These findings suggest that association of meaningful conceptual information with an image shifts its representation from an image-based percept to a view-invariant concept. They further indicate that the role of conceptual information should be considered to account for the superior recognition that we have for familiar faces and objects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  18. Not looking yourself: The cost of self-selecting photographs for identity verification.

    PubMed

    White, David; Burton, Amy L; Kemp, Richard I

    2016-05-01

    Photo-identification is based on the premise that photographs are representative of facial appearance. However, previous studies show that ratings of likeness vary across different photographs of the same face, suggesting that some images capture identity better than others. Two experiments were designed to examine the relationship between likeness judgments and face matching accuracy. In Experiment 1, we compared unfamiliar face matching accuracy for self-selected and other-selected high-likeness images. Surprisingly, images selected by previously unfamiliar viewers - after very limited exposure to a target face - were more accurately matched than self-selected images chosen by the target identity themselves. Results also revealed extremely low inter-rater agreement in ratings of likeness across participants, suggesting that perceptions of image resemblance are inherently unstable. In Experiment 2, we tested whether the cost of self-selection can be explained by this general disagreement in likeness judgments between individual raters. We found that averaging across rankings by multiple raters produces image selections that yield superior identification accuracy. However, the benefit of other-selection persisted for single raters, suggesting that inaccurate representations of self interfere with our ability to judge which images faithfully represent our current appearance. © 2015 The British Psychological Society.

  19. Influence of using a single facial vein as outflow in full-face transplantation: A three-dimensional computed tomographic study.

    PubMed

    Rodriguez-Lorenzo, Andres; Audolfsson, Thorir; Wong, Corrine; Cheng, Angela; Arbique, Gary; Nowinski, Daniel; Rozen, Shai

    2015-10-01

    The aim of this study was to evaluate the contribution of a single unilateral facial vein to the venous outflow of a total-face allograft using three-dimensional computed tomographic imaging techniques, to further elucidate the mechanisms of venous complications following total-face transplant. Full-face soft-tissue flaps were harvested from fresh adult human cadavers. A single facial vein was identified and injected distally to the submandibular gland with a radiopaque contrast (barium sulfate/gelatin mixture) in every specimen. Following vascular injections, three-dimensional computed tomographic venographies of the faces were performed. Images were viewed using TeraRecon software (TeraRecon, Inc., San Mateo, CA, USA), allowing analysis of the venous anatomy and perfusion in different facial subunits by observing radiopaque filling venous patterns. Three-dimensional computed tomographic venographies demonstrated a venous network with different degrees of perfusion in subunits of the face in relation to the facial vein injection side: 100% of ipsilateral and contralateral forehead units, 100% of ipsilateral and 75% of contralateral periorbital units, 100% of ipsilateral and 25% of contralateral cheek units, 100% of ipsilateral and 75% of contralateral nose units, 100% of ipsilateral and 75% of contralateral upper lip units, 100% of ipsilateral and 25% of contralateral lower lip units, and 50% of ipsilateral and 25% of contralateral chin units. Venographies of the full-face grafts revealed better perfusion in the ipsilateral hemifaces from the facial vein in comparison with the contralateral hemifaces. Reduced perfusion was observed mostly in the contralateral cheek unit and contralateral lower face, including the lower lip and chin units. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  20. Face averages enhance user recognition for smartphone security.

    PubMed

    Robertson, David J; Kramer, Robin S S; Burton, A Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average'--a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when it stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as the development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.
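The gain from averaging can be illustrated with a pixel-wise mean over roughly aligned captures, a deliberately shape-free simplification of the face-average representation; `face_average` and the synthetic captures are hypothetical.

```python
import numpy as np

def face_average(images):
    """Pixel-wise mean of roughly aligned face images (texture-only
    stand-in for a face average)."""
    stack = np.stack([img.astype(float) for img in images])
    return stack.mean(axis=0)

rng = np.random.default_rng(0)
identity = rng.uniform(0, 255, size=(32, 32))       # stable facial structure
captures = [np.clip(identity + rng.normal(0, 25, identity.shape), 0, 255)
            for _ in range(20)]                     # varying conditions

avg = face_average(captures)
probe = np.clip(identity + rng.normal(0, 25, identity.shape), 0, 255)
err_single = np.abs(captures[0] - probe).mean()
err_avg = np.abs(avg - probe).mean()
print(err_avg < err_single)  # True
```

Averaging cancels condition-specific variation while preserving the stable identity signal, so the stored template sits closer to any new capture than a single enrollment photo does.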

  1. Stimulus features coded by single neurons of a macaque body category selective patch.

    PubMed

    Popivanov, Ivo D; Schyns, Philippe G; Vogels, Rufin

    2016-04-26

    Body category-selective regions of the primate temporal cortex respond to images of bodies, but it is unclear which fragments of such images drive single neurons' responses in these regions. Here we applied the Bubbles technique to the responses of single macaque middle superior temporal sulcus (midSTS) body patch neurons to reveal the image fragments the neurons respond to. We found that local image fragments such as extremities (limbs), curved boundaries, and parts of the torso drove the large majority of neurons. Bubbles revealed the whole body in only a few neurons. Neurons coded the features in a manner that was tolerant to translation and scale changes. Most image fragments were excitatory but for a few neurons both inhibitory and excitatory fragments (opponent coding) were present in the same image. The fragments we reveal here in the body patch with Bubbles differ from those suggested in previous studies of face-selective neurons in face patches. Together, our data indicate that the majority of body patch neurons respond to local image fragments that occur frequently, but not exclusively, in bodies, with a coding that is tolerant to translation and scale. Overall, the data suggest that the body category selectivity of the midSTS body patch depends more on the feature statistics of bodies (e.g., extensions occur more frequently in bodies) than on semantics (bodies as an abstract category).
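A simulated Bubbles run is sketched below, with a toy response model standing in for a recorded neuron; `bubbles_mask`, the diagnostic location, and all parameters are hypothetical.

```python
import numpy as np

def bubbles_mask(shape, centers, sigma=8.0):
    """Sum of Gaussian apertures ('bubbles'), clipped to [0, 1]."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    m = np.zeros(shape)
    for cy, cx in centers:
        m += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(m, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.uniform(size=(64, 64))                    # stand-in body image
n_trials, n_bubbles = 500, 5

# Simulated experiment: the 'neuron' responds when the (hypothetical)
# diagnostic fragment around pixel (20, 20) is revealed.
evidence = np.zeros(img.shape)
total = np.zeros(img.shape)
for _ in range(n_trials):
    centers = rng.integers(0, 64, size=(n_bubbles, 2))
    mask = bubbles_mask(img.shape, centers)
    response = mask[20, 20]                         # toy response model
    evidence += response * mask
    total += mask

# Classification image: where revealed regions covaried with the response.
ci = evidence / np.maximum(total, 1e-9)
print(ci[20, 20] > ci[50, 50])  # diagnostic pixel stands out: True
```

Accumulating response-weighted masks over many trials highlights exactly the image fragments that drive the neuron, which is the logic used to reveal limb and torso fragments in the recordings above.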

  2. Stimulus features coded by single neurons of a macaque body category selective patch

    PubMed Central

    Popivanov, Ivo D.; Schyns, Philippe G.; Vogels, Rufin

    2016-01-01

    Body category-selective regions of the primate temporal cortex respond to images of bodies, but it is unclear which fragments of such images drive single neurons’ responses in these regions. Here we applied the Bubbles technique to the responses of single macaque middle superior temporal sulcus (midSTS) body patch neurons to reveal the image fragments the neurons respond to. We found that local image fragments such as extremities (limbs), curved boundaries, and parts of the torso drove the large majority of neurons. Bubbles revealed the whole body in only a few neurons. Neurons coded the features in a manner that was tolerant to translation and scale changes. Most image fragments were excitatory but for a few neurons both inhibitory and excitatory fragments (opponent coding) were present in the same image. The fragments we reveal here in the body patch with Bubbles differ from those suggested in previous studies of face-selective neurons in face patches. Together, our data indicate that the majority of body patch neurons respond to local image fragments that occur frequently, but not exclusively, in bodies, with a coding that is tolerant to translation and scale. Overall, the data suggest that the body category selectivity of the midSTS body patch depends more on the feature statistics of bodies (e.g., extensions occur more frequently in bodies) than on semantics (bodies as an abstract category). PMID:27071095

  3. Transfer between pose and expression training in face recognition.

    PubMed

    Chen, Wenfeng; Liu, Chang Hong

    2009-02-01

    Prior research has shown that recognition of unfamiliar faces is susceptible to image variations due to pose and expression changes. However, little is known about how these variations on a new face are learnt and handled. We aimed to investigate whether exposure to one type of variation facilitates recognition under the untrained variation. In Experiment 1, faces were trained with multiple poses or a single pose but were tested with a new expression. In Experiment 2, faces were trained with multiple expressions or a single expression but were tested in a new pose. We found that a higher level of exposure to pose information facilitated recognition of the trained face in a new expression. However, multiple-expression training failed to transfer to a new pose. The findings suggest that generalisation of pose training may extend to different types of variation, whereas generalisation of expression training is largely confined to the trained type of variation.

  4. The importance of internal facial features in learning new faces.

    PubMed

    Longmore, Christopher A; Liu, Chang Hong; Young, Andrew W

    2015-01-01

    For familiar faces, the internal features (eyes, nose, and mouth) are known to be differentially salient for recognition compared to external features such as hairstyle. Two experiments are reported that investigate how this internal feature advantage accrues as a face becomes familiar. In Experiment 1, we tested the contribution of internal and external features to the ability to generalize from a single studied photograph to different views of the same face. A recognition advantage for the internal features over the external features was found after a change of viewpoint, whereas there was no internal feature advantage when the same image was used at study and test. In Experiment 2, we removed the most salient external feature (hairstyle) from studied photographs and looked at how this affected generalization to a novel viewpoint. Removing the hair from images of the face assisted generalization to novel viewpoints, and this was especially the case when photographs showing more than one viewpoint were studied. The results suggest that the internal features play an important role in the generalization between different images of an individual's face by enabling the viewer to detect the common identity-diagnostic elements across non-identical instances of the face.

  5. En-face optical coherence tomography in the diagnosis and management of age-related macular degeneration and polypoidal choroidal vasculopathy.

    PubMed

    Lau, Tiffany; Wong, Ian Y; Iu, Lawrence; Chhablani, Jay; Yong, Tao; Hideki, Koizumi; Lee, Jacky; Wong, Raymond

    2015-05-01

Optical coherence tomography (OCT) is a noninvasive imaging modality providing high-resolution images of the central retina that has completely transformed the field of ophthalmology. While traditional OCT has produced longitudinal cross-sectional images, advancements in data processing have led to the development of en-face OCT, which produces transverse images of retinal and choroidal layers at any specified depth. This offers additional benefit on top of longitudinal cross-sections because it provides an extensive overview of pathological structures in a single image. The aim of this review was to discuss the utility of en-face OCT in the diagnosis and management of age-related macular degeneration (AMD) and polypoidal choroidal vasculopathy (PCV). En-face imaging of the inner segment/outer segment junction of retinal photoreceptors has been shown to be a useful indicator of visual acuity and a predictor of the extent of progression of geographic atrophy. En-face OCT has also enabled high-resolution analysis and quantification of pathological structures such as reticular pseudodrusen (RPD) and choroidal neovascularization, which have the potential to become useful markers for disease monitoring. En-face Doppler OCT enables subtle changes in the choroidal vasculature to be detected in eyes with RPD and AMD, which has significantly advanced our understanding of their pathogenesis. En-face Doppler OCT has also been shown to be useful for detecting the polypoid lesions and branching vascular networks diagnostic of PCV. It may therefore serve as a noninvasive alternative to fluorescein and indocyanine green angiography for the diagnosis of PCV and other forms of exudative macular disease.

  6. A comparative study of DIGNET, average, complete, single hierarchical and k-means clustering algorithms in 2D face image recognition

    NASA Astrophysics Data System (ADS)

    Thanos, Konstantinos-Georgios; Thomopoulos, Stelios C. A.

    2014-06-01

The study in this paper belongs to a more general research effort to discover facial sub-clusters in face databases of different ethnicities. These new sub-clusters, along with other metadata (such as race, sex, etc.), lead to a vector for each face in the database where each vector component represents the likelihood of participation of a given face in each cluster. This vector is then used as a feature vector in a human identification and tracking system based on face and other biometrics. The first stage in this system involves a clustering method which evaluates and compares the clustering results of five different clustering algorithms (average, complete, single hierarchical algorithm, k-means and DIGNET), and selects the best strategy for each data collection. In this paper we present the comparative performance of clustering results of DIGNET and four clustering algorithms (average, complete, single hierarchical and k-means) on fabricated 2D and 3D samples, and on actual face images from various databases, using four different standard metrics. These metrics are the silhouette plot, the mean silhouette coefficient, the Hubert Γ statistic, and the classification accuracy for each clustering result. The results showed that, in general, DIGNET gives more trustworthy results than the other algorithms when the metric values are above a specific acceptance threshold. However, when the evaluation metrics fall below the acceptance threshold but are not too low (values that are too low correspond to ambiguous or false results), the clustering results need to be verified by the other algorithms.
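As a rough illustration of the kind of comparison described in this record, the sketch below runs a minimal k-means (with deterministic farthest-point seeding, an assumption of this example rather than the paper's method) on two invented 2-D blobs and scores the result with the mean silhouette coefficient, one of the metrics mentioned above:

```python
import math

def dist(a, b):
    return math.dist(a, b)

def kmeans(points, k, iters=20):
    """Minimal k-means with deterministic farthest-point seeding."""
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(dist(p, c) for c in centers)))
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: dist(p, centers[j])) for p in points]
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return labels

def mean_silhouette(points, labels):
    """Mean silhouette coefficient: values near +1 mean compact, well-separated clusters."""
    clusters = {l: [p for p, m in zip(points, labels) if m == l] for l in set(labels)}
    def avg_dist(p, members):
        ds = [dist(p, q) for q in members if q is not p]
        return sum(ds) / len(ds) if ds else 0.0
    scores = []
    for p, l in zip(points, labels):
        a = avg_dist(p, clusters[l])                                  # cohesion
        b = min(avg_dist(p, clusters[m]) for m in clusters if m != l)  # separation
        scores.append((b - a) / max(a, b) if max(a, b) > 0 else 0.0)
    return sum(scores) / len(scores)

# Two invented, well-separated 2-D blobs: the silhouette score should be high.
pts = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (0.1, 0.0),
       (5.0, 5.0), (5.1, 5.2), (5.2, 5.1), (5.1, 5.0)]
labels = kmeans(pts, 2)
print(labels, round(mean_silhouette(pts, labels), 2))
```

In the paper's pipeline the analogous scores would be computed for each of the five algorithms and used to pick the best strategy per data collection; an acceptance threshold on the metric then gates whether the result is trusted or cross-checked.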

  7. Aging and Emotion Recognition: Not Just a Losing Matter

    PubMed Central

    Sze, Jocelyn A.; Goodkind, Madeleine S.; Gyurak, Anett; Levenson, Robert W.

    2013-01-01

    Past studies on emotion recognition and aging have found evidence of age-related decline when emotion recognition was assessed by having participants detect single emotions depicted in static images of full or partial (e.g., eye region) faces. These tests afford good experimental control but do not capture the dynamic nature of real-world emotion recognition, which is often characterized by continuous emotional judgments and dynamic multi-modal stimuli. Research suggests that older adults often perform better under conditions that better mimic real-world social contexts. We assessed emotion recognition in young, middle-aged, and older adults using two traditional methods (single emotion judgments of static images of faces and eyes) and an additional method in which participants made continuous emotion judgments of dynamic, multi-modal stimuli (videotaped interactions between young, middle-aged, and older couples). Results revealed an age by test interaction. Largely consistent with prior research, we found some evidence that older adults performed worse than young adults when judging single emotions from images of faces (for sad and disgust faces only) and eyes (for older eyes only), with middle-aged adults falling in between. In contrast, older adults did better than young adults on the test involving continuous emotion judgments of dyadic interactions, with middle-aged adults falling in between. In tests in which target stimuli differed in age, emotion recognition was not facilitated by an age match between participant and target. These findings are discussed in terms of theoretical and methodological implications for the study of aging and emotional processing. PMID:22823183

  8. A 3D camera for improved facial recognition

    NASA Astrophysics Data System (ADS)

    Lewin, Andrew; Orchard, David A.; Scott, Andrew M.; Walton, Nicholas A.; Austin, Jim

    2004-12-01

We describe a camera capable of recording 3D images of objects. It does this by projecting thousands of spots onto an object and then measuring the range to each spot by determining the parallax from a single frame. A second frame can be captured to record a conventional image, which can then be projected onto the surface mesh to form a rendered skin. The camera is able to locate the images of the spots to a precision of better than one tenth of a pixel, and from this it can determine range to an accuracy of less than 1 mm at 1 meter. The data can be recorded as a set of two images, and the scene is reconstructed by forming a 'wire mesh' of range points and morphing the 2D image over this structure. The camera can be used to record images of faces and reconstruct the shape of the face, which allows viewing of the face from various angles. This allows images to be inspected more critically for the purpose of identifying individuals. Multiple images can be stitched together to create full panoramic images of head-sized objects that can be viewed from any direction. The system is being tested with a graph-matching system capable of fast and accurate shape comparisons for facial recognition. It can also be used with "models" of heads and faces to provide a means of obtaining biometric data.
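The quoted figures are consistent with standard stereo triangulation. The sketch below uses the textbook relation Z = f·B/d and its derivative; the focal length and baseline are invented for illustration (the record does not state the camera's actual optics), but they show how a tenth-of-a-pixel spot localisation can yield sub-millimetre range error at 1 m:

```python
def range_from_disparity(f_px, baseline_m, disparity_px):
    """Standard triangulation: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def range_error(f_px, baseline_m, z_m, localisation_err_px):
    """Differentiating Z = f*B/d gives |dZ| = Z^2 / (f*B) * |dd|."""
    return z_m ** 2 / (f_px * baseline_m) * localisation_err_px

# Assumed optics, purely for illustration: 2000 px focal length, 10 cm baseline.
f_px, baseline = 2000.0, 0.1
z = range_from_disparity(f_px, baseline, 200.0)   # a spot with 200 px disparity
err = range_error(f_px, baseline, z, 0.1)         # 0.1 px localisation, as in the text
print(z, err * 1000)  # range in metres, range error in millimetres
```

With these assumed numbers the spot sits at 1 m range with a 0.5 mm range error, matching the order of accuracy claimed in the abstract.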

  9. The occipital face area is causally involved in the formation of identity-specific face representations.

    PubMed

    Ambrus, Géza Gergely; Dotzer, Maria; Schweinberger, Stefan R; Kovács, Gyula

    2017-12-01

    Transcranial magnetic stimulation (TMS) and neuroimaging studies suggest a role of the right occipital face area (rOFA) in early facial feature processing. However, the degree to which rOFA is necessary for the encoding of facial identity has been less clear. Here we used a state-dependent TMS paradigm, where stimulation preferentially facilitates attributes encoded by less active neural populations, to investigate the role of the rOFA in face perception and specifically in image-independent identity processing. Participants performed a familiarity decision task for famous and unknown target faces, preceded by brief (200 ms) or longer (3500 ms) exposures to primes which were either an image of a different identity (DiffID), another image of the same identity (SameID), the same image (SameIMG), or a Fourier-randomized noise pattern (NOISE) while either the rOFA or the vertex as control was stimulated by single-pulse TMS. Strikingly, TMS to the rOFA eliminated the advantage of SameID over DiffID condition, thereby disrupting identity-specific priming, while leaving image-specific priming (better performance for SameIMG vs. SameID) unaffected. Our results suggest that the role of rOFA is not limited to low-level feature processing, and emphasize its role in image-independent facial identity processing and the formation of identity-specific memory traces.

  10. Simultaneous multimodal ophthalmic imaging using swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography

    PubMed Central

    Malone, Joseph D.; El-Haddad, Mohamed T.; Bozic, Ivan; Tye, Logan A.; Majeau, Lucas; Godbout, Nicolas; Rollins, Andrew M.; Boudoux, Caroline; Joos, Karen M.; Patel, Shriji N.; Tao, Yuankai K.

    2016-01-01

Scanning laser ophthalmoscopy (SLO) benefits diagnostic imaging and therapeutic guidance by allowing for high-speed en face imaging of retinal structures. When combined with optical coherence tomography (OCT), SLO enables real-time aiming and retinal tracking and provides complementary information for post-acquisition volumetric co-registration, bulk motion compensation, and averaging. However, multimodality SLO-OCT systems generally require dedicated light sources, scanners, relay optics, detectors, and additional digitization and synchronization electronics, which increase system complexity. Here, we present a multimodal ophthalmic imaging system using swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography (SS-SESLO-OCT) for in vivo human retinal imaging. SESLO reduces the complexity of en face imaging systems by multiplexing spatial positions as a function of wavelength. SESLO image quality benefited from single-mode illumination and multimode collection through a prototype double-clad fiber coupler, which optimized scattered light throughput and reduced speckle contrast while maintaining lateral resolution. Using a shared 1060 nm swept-source, shared scanner and imaging optics, and a shared dual-channel high-speed digitizer, we acquired inherently co-registered en face retinal images and OCT cross-sections simultaneously at 200 frames-per-second. PMID:28101411

  11. Adaptive Markov Random Fields for Example-Based Super-resolution of Faces

    NASA Astrophysics Data System (ADS)

    Stephenson, Todd A.; Chen, Tsuhan

    2006-12-01

    Image enhancement of low-resolution images can be done through methods such as interpolation, super-resolution using multiple video frames, and example-based super-resolution. Example-based super-resolution, in particular, is suited to images that have a strong prior (for those frameworks that work on only a single image, it is more like image restoration than traditional, multiframe super-resolution). For example, hallucination and Markov random field (MRF) methods use examples drawn from the same domain as the image being enhanced to determine what the missing high-frequency information is likely to be. We propose to use even stronger prior information by extending MRF-based super-resolution to use adaptive observation and transition functions, that is, to make these functions region-dependent. We show with face images how we can adapt the modeling for each image patch so as to improve the resolution.
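At its simplest, example-based super-resolution replaces each low-resolution patch with the high-resolution counterpart of its most similar training example; the MRF machinery in the paper additionally enforces compatibility between neighbouring high-resolution patches, and the proposed extension makes the observation and transition functions region-dependent. A toy nearest-example lookup (1-D "patches" and an invented training set, not the paper's data) conveys the core idea:

```python
def nearest_example(lo_patch, examples):
    """Return the high-res patch paired with the most similar low-res training patch."""
    def ssd(a, b):  # sum of squared differences
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda ex: ssd(ex[0], lo_patch))[1]

# Invented training set: (low-res patch, high-res patch) pairs, 1-D for brevity.
examples = [
    ((0, 0), (0, 0, 0, 0)),   # flat dark region
    ((0, 9), (0, 0, 9, 9)),   # sharp edge
    ((4, 5), (4, 4, 5, 5)),   # gentle ramp
]
print(nearest_example((1, 8), examples))  # the edge example wins
```

An MRF formulation would pick, for each patch location, not just the closest example but the combination of examples that also agree along patch borders, which is what suppresses block artifacts in the reconstructed image.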

  12. Faciotopy—A face-feature map with face-like topology in the human occipital face area

    PubMed Central

    Henriksson, Linda; Mur, Marieke; Kriegeskorte, Nikolaus

    2015-01-01

    The occipital face area (OFA) and fusiform face area (FFA) are brain regions thought to be specialized for face perception. However, their intrinsic functional organization and status as cortical areas with well-defined boundaries remains unclear. Here we test these regions for “faciotopy”, a particular hypothesis about their intrinsic functional organisation. A faciotopic area would contain a face-feature map on the cortical surface, where cortical patches represent face features and neighbouring patches represent features that are physically neighbouring in a face. The faciotopy hypothesis is motivated by the idea that face regions might develop from a retinotopic protomap and acquire their selectivity for face features through natural visual experience. Faces have a prototypical configuration of features, are usually perceived in a canonical upright orientation, and are frequently fixated in particular locations. To test the faciotopy hypothesis, we presented images of isolated face features at fixation to subjects during functional magnetic resonance imaging. The responses in V1 were best explained by low-level image properties of the stimuli. OFA, and to a lesser degree FFA, showed evidence for faciotopic organization. When a single patch of cortex was estimated for each face feature, the cortical distances between the feature patches reflected the physical distance between the features in a face. Faciotopy would be the first example, to our knowledge, of a cortical map reflecting the topology, not of a part of the organism itself (its retina in retinotopy, its body in somatotopy), but of an external object of particular perceptual significance. PMID:26235800

  13. Faciotopy-A face-feature map with face-like topology in the human occipital face area.

    PubMed

    Henriksson, Linda; Mur, Marieke; Kriegeskorte, Nikolaus

    2015-11-01

    The occipital face area (OFA) and fusiform face area (FFA) are brain regions thought to be specialized for face perception. However, their intrinsic functional organization and status as cortical areas with well-defined boundaries remains unclear. Here we test these regions for "faciotopy", a particular hypothesis about their intrinsic functional organisation. A faciotopic area would contain a face-feature map on the cortical surface, where cortical patches represent face features and neighbouring patches represent features that are physically neighbouring in a face. The faciotopy hypothesis is motivated by the idea that face regions might develop from a retinotopic protomap and acquire their selectivity for face features through natural visual experience. Faces have a prototypical configuration of features, are usually perceived in a canonical upright orientation, and are frequently fixated in particular locations. To test the faciotopy hypothesis, we presented images of isolated face features at fixation to subjects during functional magnetic resonance imaging. The responses in V1 were best explained by low-level image properties of the stimuli. OFA, and to a lesser degree FFA, showed evidence for faciotopic organization. When a single patch of cortex was estimated for each face feature, the cortical distances between the feature patches reflected the physical distance between the features in a face. Faciotopy would be the first example, to our knowledge, of a cortical map reflecting the topology, not of a part of the organism itself (its retina in retinotopy, its body in somatotopy), but of an external object of particular perceptual significance. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.

  14. Non-lambertian reflectance modeling and shape recovery of faces using tensor splines.

    PubMed

    Kumar, Ritwik; Barmpoutis, Angelos; Banerjee, Arunava; Vemuri, Baba C

    2011-03-01

    Modeling illumination effects and pose variations of a face is of fundamental importance in the field of facial image analysis. Most of the conventional techniques that simultaneously address both of these problems work with the Lambertian assumption and thus fall short of accurately capturing the complex intensity variation that the facial images exhibit or recovering their 3D shape in the presence of specularities and cast shadows. In this paper, we present a novel Tensor-Spline-based framework for facial image analysis. We show that, using this framework, the facial apparent BRDF field can be accurately estimated while seamlessly accounting for cast shadows and specularities. Further, using local neighborhood information, the same framework can be exploited to recover the 3D shape of the face (to handle pose variation). We quantitatively validate the accuracy of the Tensor Spline model using a more general model based on the mixture of single-lobed spherical functions. We demonstrate the effectiveness of our technique by presenting extensive experimental results for face relighting, 3D shape recovery, and face recognition using the Extended Yale B and CMU PIE benchmark data sets.

  15. Face Averages Enhance User Recognition for Smartphone Security

    PubMed Central

    Robertson, David J.; Kramer, Robin S. S.; Burton, A. Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual’s ‘face-average’ – a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user’s face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings. PMID:25807251
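The 'face-average' representation boils down to a pixel-wise mean over aligned images of the same person, which suppresses image-specific variation (lighting, expression) while keeping identity-diagnostic structure. A minimal sketch, on tiny invented grey-level "images" standing in for aligned face photographs:

```python
def average_image(images):
    """Pixel-wise mean of aligned, same-sized images (2-D lists of grey levels)."""
    n = len(images)
    h, w = len(images[0]), len(images[0][0])
    return [[sum(img[r][c] for img in images) / n for c in range(w)]
            for r in range(h)]

# Three tiny 'aligned face' images of one person; values are invented and
# jitter around a common underlying appearance.
imgs = [
    [[100, 102], [98, 100]],
    [[104, 100], [102, 98]],
    [[102, 104], [100, 102]],
]
print(average_image(imgs))  # → [[102.0, 102.0], [100.0, 100.0]]
```

In practice the enrolment images must first be brought into spatial correspondence (e.g., by landmark alignment) before averaging; the smartphone study stored such an average in place of a single enrolment photograph.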

  16. Automatic image assessment from facial attributes

    NASA Astrophysics Data System (ADS)

    Ptucha, Raymond; Kloosterman, David; Mittelstaedt, Brian; Loui, Alexander

    2013-03-01

Personal consumer photography collections often contain photos captured by numerous devices stored both locally and via online services. The task of gathering, organizing, and assembling still and video assets in preparation for sharing with others can be quite challenging. Current commercial photobook applications are mostly manual-based, requiring significant user interaction. To assist the consumer in organizing these assets, we propose an automatic method to assign a fitness score to each asset, whereby the top-scoring assets are used for product creation. Our method uses cues extracted from analyzing pixel data, metadata embedded in the file, as well as ancillary tags or online comments. When a face occurs in an image, its features have a dominating influence on both aesthetic and compositional properties of the displayed image. As such, this paper will emphasize the contributions faces have on affecting the overall fitness score of an image. To understand consumer preference, we conducted a psychophysical study that spanned 27 judges, 5,598 faces, and 2,550 images. Preferences on a per-face and per-image basis were independently gathered to train our classifiers. We describe how to use machine learning techniques to merge differing facial attributes into a single classifier. Our novel methods of facial weighting, fusion of facial attributes, and dimensionality reduction produce state-of-the-art results suitable for commercial applications.

  17. How does a newly encountered face become familiar? The effect of within-person variability on adults' and children's perception of identity.

    PubMed

    Baker, Kristen A; Laurence, Sarah; Mondloch, Catherine J

    2017-04-01

Adults and children aged 6 years and older easily recognize multiple images of a familiar face, but often perceive two images of an unfamiliar face as belonging to different identities. Here we examined the process by which a newly encountered face becomes familiar, defined as accurate recognition of multiple images that capture natural within-person variability in appearance. In Experiment 1 we examined whether exposure to within-person variability in appearance helps children learn a new face. Children aged 6-13 years watched a 10-min video of a woman reading a story; she was filmed on a single day (low variability) or over three days, across which her appearance and filming conditions (e.g., camera, lighting) varied (high variability). After familiarization, participants sorted a set of images comprising novel images of the target identity intermixed with distractors. Compared to participants who received no familiarization, children showed evidence of learning only in the high-variability condition, in contrast to adults who showed evidence of learning in both the low- and high-variability conditions. Experiment 2 highlighted the efficiency with which adults learn a new face; their accuracy was comparable across training conditions despite variability in duration (1 vs. 10 min) and type (video vs. static images) of training. Collectively, our findings show that exposure to variability leads to the formation of a robust representation of facial identity, consistent with perceptual learning in other domains (e.g., language), and that the development of face learning is protracted throughout childhood. We discuss possible underlying mechanisms. Copyright © 2016. Published by Elsevier B.V.

  18. Heterogeneous Face Attribute Estimation: A Deep Multi-Task Learning Approach.

    PubMed

    Han, Hu; K Jain, Anil; Shan, Shiguang; Chen, Xilin

    2017-08-10

    Face attribute estimation has many potential applications in video surveillance, face retrieval, and social media. While a number of methods have been proposed for face attribute estimation, most of them did not explicitly consider the attribute correlation and heterogeneity (e.g., ordinal vs. nominal and holistic vs. local) during feature representation learning. In this paper, we present a Deep Multi-Task Learning (DMTL) approach to jointly estimate multiple heterogeneous attributes from a single face image. In DMTL, we tackle attribute correlation and heterogeneity with convolutional neural networks (CNNs) consisting of shared feature learning for all the attributes, and category-specific feature learning for heterogeneous attributes. We also introduce an unconstrained face database (LFW+), an extension of public-domain LFW, with heterogeneous demographic attributes (age, gender, and race) obtained via crowdsourcing. Experimental results on benchmarks with multiple face attributes (MORPH II, LFW+, CelebA, LFWA, and FotW) show that the proposed approach has superior performance compared to state of the art. Finally, evaluations on a public-domain face database (LAP) with a single attribute show that the proposed approach has excellent generalization ability.
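The architectural idea in DMTL, one trunk of features shared by all attributes plus a separate head per attribute category, can be sketched in a few lines. The sketch below is a forward pass only, with untrained random weights and invented dimensions; it is not the paper's network, merely an illustration of how an ordinal attribute (one regression output) and a nominal attribute (class scores) can share a representation:

```python
import random

rng = random.Random(0)

def rand_matrix(rows, cols):
    return [[rng.uniform(-1.0, 1.0) for _ in range(cols)] for _ in range(rows)]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def relu(v):
    return [max(0.0, x) for x in v]

def predict(x, shared_W, heads):
    """One shared trunk feeds every attribute; each attribute keeps its own head."""
    shared_feats = relu(matvec(shared_W, x))  # learned jointly for ALL attributes
    return {name: matvec(W, shared_feats) for name, W in heads.items()}

shared_W = rand_matrix(8, 16)        # 16-D 'face features' -> 8-D shared representation
heads = {
    "age":    rand_matrix(1, 8),     # ordinal attribute: one regression output
    "gender": rand_matrix(2, 8),     # nominal attribute: two class scores
}
out = predict([0.5] * 16, shared_W, heads)
print(sorted((k, len(v)) for k, v in out.items()))  # [('age', 1), ('gender', 2)]
```

During training, gradients from every head flow into the shared trunk, which is what lets correlated attributes reinforce each other while the heads stay specialized for their heterogeneous output types.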

  19. Inside the Digital Wild West: How School Leaders Both Access and Avoid Social Media

    ERIC Educational Resources Information Center

    Corrigan, Laurie; Robertson, Lorayne

    2015-01-01

    This study examines the roles of Canadian school leaders in response to the rising phenomenon of student use of social media which impacts school climate and safety. The use of social media has resulted in more online text and image-based communication to multiple users and less face-to-face communication with single users. Adolescent…

  20. Choriocapillaris Imaging Using Multiple En Face Optical Coherence Tomography Angiography Image Averaging.

    PubMed

    Uji, Akihito; Balasubramanian, Siva; Lei, Jianqin; Baghdasaryan, Elmira; Al-Sheikh, Mayss; Sadda, SriniVas R

    2017-11-01

Imaging of the choriocapillaris in vivo is challenging with existing technology. Optical coherence tomography angiography (OCTA), if optimized, could make the imaging less challenging. The aim was to investigate the effect of multiple en face image averaging on OCTA images of the choriocapillaris. Observational, cross-sectional case series at a referral institutional practice in Los Angeles, California. From the original cohort of 21 healthy individuals, 17 normal eyes of 17 participants were included in the study. The study dates were August to September 2016. All participants underwent OCTA imaging of the macula covering a 3 × 3-mm area using OCTA software (Cirrus 5000 with AngioPlex; Carl Zeiss Meditec). One eye per participant was repeatedly imaged to obtain 9 OCTA cube scan sets. Registration was first performed using superficial capillary plexus images, and this transformation was then applied to the choriocapillaris images. The 9 registered choriocapillaris images were then averaged. Quantitative parameters were measured on binarized OCTA images and compared with those from the unaveraged OCTA images. The main outcome measure was vessel caliber. Seventeen eyes of 17 participants (mean [SD] age, 35.1 [6.0] years; 9 [53%] female; and 9 [53%] of white race/ethnicity) with sufficient image quality were included in this analysis. The single unaveraged images demonstrated a granular appearance, and the vascular pattern was difficult to discern. After averaging, en face choriocapillaris images showed a meshwork appearance. The mean (SD) diameter of the vessels was 22.8 (5.8) µm (range, 9.6-40.2 µm). Compared with the single unaveraged images, the averaged images showed more flow voids (1423 flow voids [95% CI, 967-1909] vs 1254 flow voids [95% CI, 825-1683], P < .001), smaller average size of the flow voids (911 [95% CI, 301-1521] µm² vs 1364 [95% CI, 645-2083] µm², P < .001), and greater vessel density (70.7% [95% CI, 61.9%-79.5%] vs 61.9% [95% CI, 56.0%-67.8%], P < .001). The distribution of the number versus size of the flow voids was skewed in both unaveraged and averaged images. A linear log-log plot of the distribution showed a more homogeneous distribution in the averaged images compared with the unaveraged images. Multiple en face averaging can improve visualization of the choriocapillaris on OCTA images, transforming the images from a granular appearance to a level where the intervascular spaces can be resolved in healthy volunteers.
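The benefit of averaging registered frames follows from basic statistics: the noise standard deviation of an n-frame mean falls roughly as 1/√n, so 9 frames cut it to about a third. A minimal 1-D simulation (the signal, noise model, and numbers are invented; real OCTA registration and speckle are far more involved):

```python
import random
import statistics

rng = random.Random(42)

def noisy_frame(truth, sigma):
    """One 'granular' acquisition: the true signal plus speckle-like noise."""
    return [v + rng.gauss(0.0, sigma) for v in truth]

def average_frames(frames):
    """Pixel-wise mean of registered frames."""
    return [sum(px) / len(frames) for px in zip(*frames)]

truth = [10.0] * 500                      # a flat 1-D 'signal', for illustration only
single = noisy_frame(truth, sigma=2.0)    # one unaveraged acquisition
averaged = average_frames([noisy_frame(truth, sigma=2.0) for _ in range(9)])

print(round(statistics.stdev(single), 1), round(statistics.stdev(averaged), 1))
```

The residual standard deviation of the 9-frame average is about one third of the single frame's, which mirrors why the averaged choriocapillaris images resolve intervascular spaces that the granular single frames obscure.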

  1. Arsia Mons by Day and Night

    NASA Image and Video Library

    2004-06-22

    Released 22 June 2004 This pair of images shows part of Arsia Mons. Day/Night Infrared Pairs The image pairs presented focus on a single surface feature as seen in both the daytime and nighttime by the infrared THEMIS camera. The nighttime image (right) has been rotated 180 degrees to place north at the top. Infrared image interpretation Daytime: Infrared images taken during the daytime exhibit both the morphological and thermophysical properties of the surface of Mars. Morphologic details are visible due to the effect of sun-facing slopes receiving more energy than antisun-facing slopes. This creates a warm (bright) slope and cool (dark) slope appearance that mimics the light and shadows of a visible wavelength image. Thermophysical properties are seen in that dust heats up more quickly than rocks. Thus dusty areas are bright and rocky areas are dark. Nighttime: Infrared images taken during the nighttime exhibit only the thermophysical properties of the surface of Mars. The effect of sun-facing versus non-sun-facing energy dissipates quickly at night. Thermophysical effects dominate as different surfaces cool at different rates through the nighttime hours. Rocks cool slowly, and are therefore relatively bright at night (remember that rocks are dark during the day). Dust and other fine grained materials cool very quickly and are dark in nighttime infrared images. Image information: IR instrument. Latitude -19.6, Longitude 241.9 East (118.1 West). 100 meter/pixel resolution. http://photojournal.jpl.nasa.gov/catalog/PIA06399

  2. Crater Ejecta by Day and Night

    NASA Image and Video Library

    2004-06-24

Released 24 June 2004 This pair of images shows a crater and its ejecta. As in the preceding record, the pair presents a single surface feature as seen in both the daytime and nighttime by the infrared THEMIS camera; the nighttime image (right) has been rotated 180 degrees to place north at the top, and the same daytime/nighttime infrared interpretation applies. Image information: IR instrument. Latitude -9, Longitude 164.2 East (195.8 West). 100 meter/pixel resolution. http://photojournal.jpl.nasa.gov/catalog/PIA06445

  3. Coupled Dictionary Learning for the Detail-Enhanced Synthesis of 3-D Facial Expressions.

    PubMed

    Liang, Haoran; Liang, Ronghua; Song, Mingli; He, Xiaofei

    2016-04-01

The desire to reconstruct 3-D face models with expressions from 2-D face images fosters increasing interest in addressing the problem of face modeling. This task is important and challenging in the field of computer animation. Facial contours and wrinkles are essential to generate a face with a certain expression; however, these details are generally ignored or are not seriously considered in previous studies on face model reconstruction. Thus, we employ coupled radial basis function networks to derive an intermediate 3-D face model from a single 2-D face image. To optimize the 3-D face model further through landmarks, a coupled dictionary that is related to 3-D face models and their corresponding 3-D landmarks is learned from the given training set through local coordinate coding. Another coupled dictionary is then constructed to bridge the 2-D and 3-D landmarks for the transfer of vertices on the face model. As a result, the final 3-D face can be generated with the appropriate expression. In the testing phase, the 2-D input faces are converted into 3-D models that display different expressions. Experimental results indicate that the proposed approach to facial expression synthesis can obtain model details more effectively than previous methods can.

  4. Three-dimensional printing for restoration of the donor face: A new digital technique tested and used in the first facial allotransplantation patient in Finland.

    PubMed

    Mäkitie, A A; Salmi, M; Lindford, A; Tuomi, J; Lassus, P

    2016-12-01

    Prosthetic mask restoration of the donor face is essential in current facial transplant protocols. The aim was to develop a new three-dimensional (3D) printing (additive manufacturing; AM) process for the production of a donor face mask that fulfilled the requirements for facial restoration after facial harvest. A digital image of a single test person's face was obtained in a standardized setting and subjected to three different image-processing techniques. These data were used for the 3D modeling and printing of a donor face mask. The process was also tested in a cadaver setting and ultimately used clinically in a donor patient after facial allograft harvest. Conclusions: All three of the developed and tested techniques enabled the timely 3D printing of a custom-made face mask that is almost an exact replica of the donor patient's face. This technique was successfully used in a facial allotransplantation donor patient. Copyright © 2016 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  5. High-accuracy and robust face recognition system based on optical parallel correlator using a temporal image sequence

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Mami; Ohta, Maiko; Kodate, Kashiko

    2005-09-01

    Face recognition is used in a wide range of security systems, such as monitoring credit card use, searching for individuals with street cameras via the Internet, and maintaining immigration control. Many technical subjects are still under study; for instance, the number of images that can be stored is limited under current systems, and the recognition rate must be improved to account for photographs taken at different angles under various conditions. We implemented a fully automatic Fast Face Recognition Optical Correlator (FARCO) system using a 1000 frame/s optical parallel correlator designed and assembled by us. Operational speed for the 1:N identification experiment (i.e., matching one image against N, where N is the number of images in the database; here 4000 face images) is less than 1.5 seconds, including pre- and post-processing. In trial 1:N identification experiments using FARCO, we achieved low error rates: a 2.6% False Reject Rate and a 1.3% False Accept Rate. By making the most of the high-speed data-processing capability of this system, much greater robustness can be achieved under various recognition conditions when many images per person are registered. We propose a face recognition algorithm for FARCO that employs a temporal sequence of moving images. Applied to subjects in natural posture, this algorithm achieved a recognition rate twice that of our conventional system. The system has high potential for a variety of future uses, such as searching for criminal suspects using street and airport video cameras, registering newborns at hospitals, or handling very large image databases.
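As a rough illustration of how error rates like the FRR and FAR quoted above are obtained, here is a minimal sketch (not the FARCO implementation; the scores and threshold are invented) that computes both rates from genuine and impostor match scores:

```python
import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold):
    """Compute False Reject Rate and False Accept Rate at a given
    score threshold (higher score = better match)."""
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    frr = np.mean(genuine < threshold)    # genuines wrongly rejected
    far = np.mean(impostor >= threshold)  # impostors wrongly accepted
    return frr, far

# Toy correlation-peak scores (illustrative values, not FARCO data)
genuine = [0.91, 0.88, 0.95, 0.70, 0.93]
impostor = [0.40, 0.55, 0.81, 0.30, 0.25]
frr, far = far_frr(genuine, impostor, threshold=0.8)
print(frr, far)  # 0.2 0.2
```

Sweeping the threshold trades FRR against FAR; systems typically report both at an operating point chosen for the application.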

  6. The depth estimation of 3D face from single 2D picture based on manifold learning constraints

    NASA Astrophysics Data System (ADS)

    Li, Xia; Yang, Yang; Xiong, Hailiang; Liu, Yunxia

    2018-04-01

    Depth estimation is vitally important in 3D face reconstruction. In this paper, we propose a t-SNE method based on manifold learning constraints and introduce the K-means method to divide the original database into several subsets; reconstructing the 3D face depth information from the selected optimal subset greatly reduces the computational complexity. First, we apply t-SNE to reduce the key feature points in each 3D face model from 1×249 dimensions to 1×2. Second, the K-means method is applied to divide the training 3D database into several subsets. Third, the Euclidean distance between the 83 feature points of the image to be estimated and the pre-reduction feature point information of each cluster center is calculated, and the category of the image to be estimated is judged according to the minimum Euclidean distance. Finally, the method of Kong D is applied only within the optimal subset to estimate the depth values of the 83 feature points of the 2D face image, yielding the final depth estimate with greatly reduced computational complexity. Compared with the traditional traversal search estimation method, the proposed method reduces the error rate by 0.49, and the number of searches decreases with the chosen category. To validate our approach, we used a public database to mimic the task of estimating face depth from 2D images; the average number of searches decreased by 83.19%.
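The cluster-then-search strategy described above can be sketched generically (this is not the authors' implementation: the t-SNE step is omitted, nearest-neighbor lookup stands in for the depth estimator, and the data are invented):

```python
import numpy as np

def nearest_cluster_search(query, train_feats, train_depths, labels, centers):
    """Pick the cluster whose center is nearest to the query, then
    search only inside that cluster (the 'optimal subset') instead of
    traversing the whole training database."""
    c = np.argmin(np.linalg.norm(centers - query, axis=1))
    idx = np.where(labels == c)[0]                          # optimal subset
    best = idx[np.argmin(np.linalg.norm(train_feats[idx] - query, axis=1))]
    return train_depths[best], idx.size                     # depth + searches used

rng = np.random.default_rng(0)
# Two well-separated synthetic clusters of 6-D feature vectors
train = np.vstack([rng.normal(0, 1, (50, 6)), rng.normal(8, 1, (50, 6))])
depths = np.arange(100)                 # stand-in per-model depth values
labels = np.repeat([0, 1], 50)
centers = np.vstack([train[:50].mean(0), train[50:].mean(0)])

d, n = nearest_cluster_search(np.full(6, 8.0), train, depths, labels, centers)
print(n)  # 50 candidates searched instead of 100
```

The reduction in the number of searches comes entirely from restricting the lookup to one cluster, which mirrors the paper's reported drop in search count.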

  7. Hyperspectral face recognition using improved inter-channel alignment based on qualitative prediction models.

    PubMed

    Cho, Woon; Jang, Jinbeum; Koschan, Andreas; Abidi, Mongi A; Paik, Joonki

    2016-11-28

    A fundamental limitation of hyperspectral imaging is the inter-band misalignment correlated with subject motion during data acquisition. One way of resolving this problem is to assess the alignment quality of hyperspectral image cubes derived from the state-of-the-art alignment methods. In this paper, we present an automatic selection framework for the optimal alignment method to improve the performance of face recognition. Specifically, we develop two qualitative prediction models based on: 1) a principal curvature map for evaluating the similarity index between sequential target bands and a reference band in the hyperspectral image cube as a full-reference metric; and 2) the cumulative probability of target colors in the HSV color space for evaluating the alignment index of a single sRGB image rendered using all of the bands of the hyperspectral image cube as a no-reference metric. We verify the efficacy of the proposed metrics on a new large-scale database, demonstrating a higher prediction accuracy in determining improved alignment compared to two full-reference and five no-reference image quality metrics. We also validate the ability of the proposed framework to improve hyperspectral face recognition.
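A minimal stand-in for a full-reference inter-band alignment score (the paper's principal-curvature-based metric is more elaborate; plain normalized cross-correlation between a target band and a reference band is used here purely for illustration):

```python
import numpy as np

def band_ncc(a, b):
    """Normalized cross-correlation between two spectral bands;
    it drops below 1 as inter-band misalignment grows."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

# Smooth synthetic band and a copy shifted to mimic subject motion
i = np.arange(64)
band = np.add.outer(np.sin(i / 6.0), np.cos(i / 9.0))
shifted = np.roll(band, 5, axis=1)

print(round(band_ncc(band, band), 6))   # 1.0: perfectly aligned
print(band_ncc(shifted, band) < 1.0)    # True: misalignment lowers the score
```

A selection framework like the one described could score each candidate alignment method's output band-by-band and keep the method with the highest aggregate score.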

  8. Driver face tracking using semantics-based feature of eyes on single FPGA

    NASA Astrophysics Data System (ADS)

    Yu, Ying-Hao; Chen, Ji-An; Ting, Yi-Siang; Kwok, Ngaiming

    2017-06-01

    Tracking the driver's face is essential for driving safety control. Systems of this kind are usually designed with complicated algorithms that recognize the driver's face by means of powerful computers. The design problem concerns not only the detection rate but also damage to parts under harsh environments of vibration, heat, and humidity. A feasible strategy to counteract this damage is to integrate the entire system onto a single chip, minimizing installation dimensions, weight, power consumption, and exposure to air. Meanwhile, an extraordinary methodology is also indispensable to overcome the dilemma between low computing capability and real-time performance on a low-end chip. In this paper, a novel driver face tracking system is proposed that employs semantics-based vague image representation (SVIR) for minimum hardware resource usage on an FPGA while guaranteeing real-time performance. Our experimental results indicate that the proposed face tracking system is viable and promising for future smart car designs.

  9. Single-faced GRAYQB™: a radiation mapping device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayer, J.; Farfan, E.; Immel, D.

    2013-12-12

    GrayQb™ is a novel technology that has the potential to characterize radioactively contaminated areas such as hot cells, gloveboxes, small and large rooms, hallways, and waste tanks. The goal of GrayQb™ is to speed the process of decontaminating these areas, which reduces worker exposure and promotes ALARA considerations. The device employs Phosphor Storage Plate (PSP) technology as its primary detector material. PSPs, commonly used for medical applications and non-destructive testing, can be read using a commercially available scanner. The goal of GrayQb™ technology is to locate, quantify, and identify the sources of contamination. The purpose of the work documented in this report was to better characterize the performance of GrayQb™ in its ability to present overlays of the PSP image and the associated visual image of the location being surveyed. The results presented in this report are overlay images identifying the location of hot spots in both controlled and field environments. The GrayQb™ technology has been mainly tested in a controlled environment with known distances and source characteristics such as specific known radionuclides, dose rates, and strengths. The original concept for the GrayQb™ device involved utilizing the six faces of a cube configuration and was designed to be positioned in the center of a contaminated area for 3D mapping. A smaller single-faced GrayQb™, dubbed GrayQb SF, was designed for the purpose of conducting the characterization testing documented in this report. This lighter 2D version is ideal for applications where entry ports are too small for deployment of the original GrayQb™ version or where only a single surface is of interest. The shape, size, and weight of these two designs have been carefully modeled to account for most limitations encountered in hot cells, gloveboxes, and contaminated areas. GrayQb™ and GrayQb™ SF share the same fundamental detection system design (e.g., pinhole and PSPs); therefore, the performance tests completed on the single-faced GrayQb™ in this report are also applicable to the six-faced GrayQb™ (e.g., ambient light sensitivity and PSP response). This report details the characterization of the GrayQb™ SF in both an uncontrolled environment, specifically the Savannah River Site (SRS) Plutonium Fuel Form Facility in Building 235-F (Metallurgical Building), and controlled testing at SRS's Health Physics Instrument Calibration Facility and SRS's R&D Engineering Imaging and Radiation Systems Building. In this report, the resulting images from the Calibration Facility were obtained by overlaying the PSP and visual images manually using ImageJ. The resulting images from the Building 235-F tests presented in this report were produced using ImageJ and applying response trends developed from the controlled testing results. The GrayQb™ technology has been developed in two main stages at Savannah River National Laboratory (SRNL): 1) GrayQb™ development was supported by SRNL's Laboratory Directed Research and Development Program, and 2) GrayQb™ SF development and its testing in Building 235-F were supported by the Office of Deactivation and Decommissioning and Facility Engineering (EM-13), U.S. Department of Energy, Office of Environmental Management.

  10. An objective method for measuring face detection thresholds using the sweep steady-state visual evoked response

    PubMed Central

    Ales, Justin M.; Farzin, Faraz; Rossion, Bruno; Norcia, Anthony M.

    2012-01-01

    We introduce a sensitive method for measuring face detection thresholds rapidly, objectively, and independently of low-level visual cues. The method is based on the swept parameter steady-state visual evoked potential (ssVEP), in which a stimulus is presented at a specific temporal frequency while parametrically varying (“sweeping”) the detectability of the stimulus. Here, the visibility of a face image was increased by progressive derandomization of the phase spectra of the image in a series of equally spaced steps. Alternations between face and fully randomized images at a constant rate (3/s) elicit a robust first harmonic response at 3 Hz specific to the structure of the face. High-density EEG was recorded from 10 human adult participants, who were asked to respond with a button-press as soon as they detected a face. The majority of participants produced an evoked response at the first harmonic (3 Hz) that emerged abruptly between 30% and 35% phase-coherence of the face, which was most prominent on right occipito-temporal sites. Thresholds for face detection were estimated reliably in single participants from 15 trials, or on each of the 15 individual face trials. The ssVEP-derived thresholds correlated with the concurrently measured perceptual face detection thresholds. This first application of the sweep VEP approach to high-level vision provides a sensitive and objective method that could be used to measure and compare visual perception thresholds for various object shapes and levels of categorization in different human populations, including infants and individuals with developmental delay. PMID:23024355
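One simple way to parametrically derandomize an image's phase spectrum, as in the sweep described above, is to blend the original Fourier phase with random phase (a sketch under the assumption of a linear phase mix; the study's exact scrambling procedure may differ, and taking the real part after the inverse FFT is a common simplification):

```python
import numpy as np

def phase_scramble(img, coherence, rng):
    """Blend an image's Fourier phase with random phase.
    coherence=1.0 -> original image; 0.0 -> fully phase-randomized."""
    F = np.fft.fft2(img)
    mag, phase = np.abs(F), np.angle(F)
    noise = rng.uniform(-np.pi, np.pi, img.shape)
    mixed = phase + (1.0 - coherence) * noise   # simple linear mix
    return np.fft.ifft2(mag * np.exp(1j * mixed)).real

rng = np.random.default_rng(1)
img = rng.random((64, 64))
# A sweep of equally spaced coherence steps, as in the ssVEP paradigm
steps = [phase_scramble(img, c, rng) for c in np.linspace(0.0, 1.0, 8)]

print(np.allclose(steps[-1], img))  # True: full coherence recovers the image
```

Because the power spectrum is preserved at every step, low-level contrast and spatial-frequency content stay constant while face structure emerges, which is the point of the method.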

  11. Measures of skin conductance and heart rate in alcoholic men and women during memory performance

    PubMed Central

    Poey, Alan; Ruiz, Susan Mosher; Marinkovic, Ksenija; Oscar-Berman, Marlene

    2015-01-01

    We examined abnormalities in physiological responses to emotional stimuli associated with long-term chronic alcoholism. Skin conductance responses (SCR) and heart rate (HR) responses were measured in 32 abstinent alcoholic (ALC) and 30 healthy nonalcoholic (NC) men and women undergoing an emotional memory task in an MRI scanner. The task required participants to remember the identity of two emotionally-valenced faces presented at the onset of each trial during functional magnetic resonance imaging (fMRI) scanning. After viewing the faces, participants saw a distractor image (an alcoholic beverage, nonalcoholic beverage, or scrambled image) followed by a single probe face. The task was to decide whether the probe face matched one of the two encoded faces. Skin conductance measurements (before and after the encoded faces, distractor, and probe) were obtained from electrodes on the index and middle fingers on the left hand. HR measurements (beats per minute before and after the encoded faces, distractor, and probe) were obtained by a pulse oximeter placed on the little finger on the left hand. We expected that, relative to NC participants, the ALC participants would show reduced SCR and HR responses to the face stimuli, and that we would identify greater reactivity to the alcoholic beverage stimuli than to the distractor stimuli unrelated to alcohol. While the beverage type did not differentiate the groups, the ALC group did have reduced skin conductance and HR responses to elements of the task, as compared to the NC group. PMID:26020002

  12. DOPI and PALM imaging of single carbohydrate binding modules bound to cellulose nanocrystals

    NASA Astrophysics Data System (ADS)

    Dagel, D. J.; Liu, Y.-S.; Zhong, L.; Luo, Y.; Zeng, Y.; Himmel, M.; Ding, S.-Y.; Smith, S.

    2011-03-01

    We use single-molecule imaging methods to study the binding characteristics of carbohydrate-binding modules (CBMs) to cellulose crystals. CBMs are carbohydrate-specific binding proteins and a functional component of most cellulase enzymes, which in turn hydrolyze cellulose, releasing simple sugars suitable for fermentation to biofuels. The CBM plays the important role of locating the crystalline face of cellulose, a critical step in cellulase action. A biophysical understanding of CBM action aids in developing a mechanistic picture of the cellulase enzyme, important for selection and potential modification. Towards this end, we have genetically fused a cellulose-binding CBM derived from a bacterial source with green fluorescent protein (GFP) and with the photoactivatable fluorescent protein PAmCherry, respectively. Using the single-molecule method known as Defocused Orientation and Position Imaging (DOPI), we observe a preferred orientation of the CBM-GFP complex relative to Valonia cellulose nanocrystals. Subsequent analysis showed that the CBMs bind to the opposite hydrophobic <110> faces of the cellulose nanocrystals with a well-defined cross-orientation of approximately 70°. Photo-Activated Localization Microscopy (PALM) is used to localize CBM-PAmCherry with a localization accuracy of approximately 10 nm. Analysis of the nearest-neighbor distributions along and perpendicular to the cellulose nanocrystal axes is consistent with single-file CBM binding along the fiber axis, and with microfibril bundles consisting of close-packed cellulose microfibrils of approximately 20 nm or smaller.

  13. Dual-color 3D superresolution microscopy by combined spectral-demixing and biplane imaging.

    PubMed

    Winterflood, Christian M; Platonova, Evgenia; Albrecht, David; Ewers, Helge

    2015-07-07

    Multicolor three-dimensional (3D) superresolution techniques allow important insight into the relative organization of cellular structures. While a number of innovative solutions have emerged, multicolor 3D techniques still face significant technical challenges. In this Letter we provide a straightforward approach to single-molecule localization microscopy imaging in three dimensions and two colors. We combine biplane imaging and spectral-demixing, which eliminates a number of problems, including color cross-talk, chromatic aberration effects, and problems with color registration. We present 3D dual-color images of nanoscopic structures in hippocampal neurons with a 3D compound resolution routinely achieved only in a single color. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  14. Single Etch-Pit Shape on Off-Angled 4H-SiC(0001) Si-Face Formed by Chlorine Trifluoride

    NASA Astrophysics Data System (ADS)

    Hatayama, Tomoaki; Tamura, Tetsuya; Yano, Hiroshi; Fuyuki, Takashi

    2012-07-01

    The etch pit shape of an off-angled 4H-SiC Si-face formed by chlorine trifluoride (ClF3) in nitrogen (N2) ambient has been studied. One type of etch pit with a crooked hexagonal shape was formed at an etching temperature below 500 °C. The angle of the etch pit measured from a cross-sectional atomic force microscopy image was about 10° viewed along the [11-20] direction. The dislocation type of the etch pit was discussed in relation to the etch pit shape and an electron-beam-induced current image.

  15. Face landmark point tracking using LK pyramid optical flow

    NASA Astrophysics Data System (ADS)

    Zhang, Gang; Tang, Sikan; Li, Jiaquan

    2018-04-01

    LK pyramid optical flow is an effective method for object tracking in video; in this paper, it is used to track face landmark points in video. Seven landmark points are considered: the outer and inner corners of the left eye, the inner and outer corners of the right eye, the tip of the nose, and the left and right corners of the mouth. The landmark points are marked by hand in the first frame, and tracking performance is analyzed over the subsequent frames. Two kinds of conditions are considered: single factors (the normalized case, pose variation with slow movement, expression variation, illumination variation, occlusion, a frontal face moving rapidly, and a posed face moving rapidly) and combinations of factors (pose and illumination variation, pose and expression variation, pose variation and occlusion, illumination and expression variation, and expression variation and occlusion). Global and local measures are introduced to evaluate tracking performance under the different factors and their combinations: the global measures are the number of images aligned successfully, the average alignment error, and the number of images aligned before failure; the local measures are the number of images aligned successfully for each facial component and the average alignment error for each component. To verify the performance of face landmark point tracking under the different cases, tests were carried out on image sequences gathered by us. Results show that the LK pyramid optical flow method can track face landmark points under the normalized case, expression variation, illumination variation that does not affect facial details, and pose variation, and that different factors and combinations of factors affect the alignment performance of different landmark points differently.
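The core of a single Lucas-Kanade step (one pyramid level, no iteration) can be sketched on synthetic frames; a production landmark tracker would use a full pyramidal implementation such as OpenCV's `calcOpticalFlowPyrLK` rather than this illustration:

```python
import numpy as np

def lucas_kanade_step(I, J, y, x, w=7):
    """Estimate (dy, dx) motion of the patch around (y, x) between
    frames I and J with one Lucas-Kanade least-squares step."""
    Iy, Ix = np.gradient(I)                     # spatial gradients
    ys, xs = slice(y - w, y + w + 1), slice(x - w, x + w + 1)
    A = np.stack([Iy[ys, xs].ravel(), Ix[ys, xs].ravel()], axis=1)
    b = -(J[ys, xs] - I[ys, xs]).ravel()        # negated temporal derivative
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d                                    # (dy, dx)

# Synthetic smooth frame shifted by a small sub-pixel amount
yy, xx = np.mgrid[0:64, 0:64].astype(float)
frame1 = np.sin(0.3 * xx) + np.cos(0.25 * yy)
dy, dx = 0.3, 0.5
frame2 = np.sin(0.3 * (xx - dx)) + np.cos(0.25 * (yy - dy))

est = lucas_kanade_step(frame1, frame2, 32, 32)
print(np.round(est, 2))  # recovered (dy, dx), close to the true shift
```

The pyramid in "LK pyramid" repeats this step from coarse to fine image scales so that motions larger than a pixel or two can still be recovered.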

  16. Application of real-time single camera SLAM technology for image-guided targeting in neurosurgery

    NASA Astrophysics Data System (ADS)

    Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng

    2012-10-01

    In this paper, we propose an application of augmented reality technology for targeting tumors or anatomical structures inside the skull. The application is a combination of the technologies of MonoSLAM (Single Camera Simultaneous Localization and Mapping) and computer graphics. A stereo vision system is developed to construct geometric data of human face for registration with CT images. Reliability and accuracy of the application is enhanced by the use of fiduciary markers fixed to the skull. The MonoSLAM keeps track of the current location of the camera with respect to an augmented reality (AR) marker using the extended Kalman filter. The fiduciary markers provide reference when the AR marker is invisible to the camera. Relationship between the markers on the face and the augmented reality marker is obtained by a registration procedure by the stereo vision system and is updated on-line. A commercially available Android based tablet PC equipped with a 320×240 front-facing camera was used for implementation. The system is able to provide a live view of the patient overlaid by the solid models of tumors or anatomical structures, as well as the missing part of the tool inside the skull.

  17. Local intensity area descriptor for facial recognition in ideal and noise conditions

    NASA Astrophysics Data System (ADS)

    Tran, Chi-Kien; Tseng, Chin-Dar; Chao, Pei-Ju; Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Lee, Tsair-Fwu

    2017-03-01

    We propose a local texture descriptor, local intensity area descriptor (LIAD), which is applied for human facial recognition in ideal and noisy conditions. Each facial image is divided into small regions from which LIAD histograms are extracted and concatenated into a single feature vector to represent the facial image. The recognition is performed using a nearest neighbor classifier with histogram intersection and chi-square statistics as dissimilarity measures. Experiments were conducted with LIAD using the ORL database of faces (Olivetti Research Laboratory, Cambridge), the Face94 face database, the Georgia Tech face database, and the FERET database. The results demonstrated the improvement in accuracy of our proposed descriptor compared to conventional descriptors [local binary pattern (LBP), uniform LBP, local ternary pattern, histogram of oriented gradients, and local directional pattern]. Moreover, the proposed descriptor was less sensitive to noise and had low histogram dimensionality. Thus, it is expected to be a powerful texture descriptor that can be used for various computer vision problems.
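The region-histogram-plus-nearest-neighbor pipeline described above can be sketched as follows; plain intensity histograms stand in for the LIAD descriptor (whose exact construction is not given here), and the images are invented:

```python
import numpy as np

def region_histograms(img, grid=4, bins=8):
    """Split an image into grid x grid regions, build one intensity
    histogram per region, and concatenate them into a feature vector."""
    h, w = img.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            patch = img[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
            feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)

def chi_square(a, b, eps=1e-10):
    """Chi-square dissimilarity between two normalized histograms."""
    return np.sum((a - b) ** 2 / (a + b + eps))

def classify(query, gallery_feats, gallery_labels):
    """Nearest-neighbor classification under chi-square distance."""
    d = [chi_square(query, g) for g in gallery_feats]
    return gallery_labels[int(np.argmin(d))]

rng = np.random.default_rng(2)
dark = rng.integers(0, 80, (32, 32))      # toy "face" with dark texture
bright = rng.integers(150, 256, (32, 32)) # toy "face" with bright texture
gallery = [region_histograms(dark), region_histograms(bright)]
probe = region_histograms(rng.integers(0, 80, (32, 32)))
print(classify(probe, gallery, ["person_A", "person_B"]))  # person_A
```

Histogram intersection, the paper's other dissimilarity measure, would slot in the same way as an alternative to `chi_square`.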

  18. Robust representations of individual faces in chimpanzees (Pan troglodytes) but not monkeys (Macaca mulatta).

    PubMed

    Taubert, Jessica; Weldon, Kimberly B; Parr, Lisa A

    2017-03-01

    Being able to recognize the faces of our friends and family members no matter where we see them represents a substantial challenge for the visual system because the retinal image of a face can be degraded by both changes in the person (age, expression, pose, hairstyle, etc.) and changes in the viewing conditions (direction and degree of illumination). Yet most of us are able to recognize familiar people effortlessly. A popular theory for how face recognition is achieved has argued that the brain stabilizes facial appearance by building average representations that enhance diagnostic features that reliably vary between people while diluting features that vary between instances of the same person. This explains why people find it easier to recognize average images of people, created by averaging multiple images of the same person together, than single instances (i.e. photographs). Although this theory is gathering momentum in the psychological and computer sciences, there is no evidence of whether this mechanism represents a unique specialization for individual recognition in humans. Here we tested two species, chimpanzees (Pan troglodytes) and rhesus monkeys (Macaca mulatta), to determine whether average images of different familiar individuals were easier to discriminate than photographs of familiar individuals. Using a two-alternative forced-choice, match-to-sample procedure, we report a behaviour response profile that suggests chimpanzees encode the faces of conspecifics differently than rhesus monkeys and in a manner similar to humans.

  19. Multimodal ophthalmic imaging using swept source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Malone, Joseph D.; El-Haddad, Mohamed T.; Tye, Logan A.; Majeau, Lucas; Godbout, Nicolas; Rollins, Andrew M.; Boudoux, Caroline; Tao, Yuankai K.

    2016-03-01

    Scanning laser ophthalmoscopy (SLO) and optical coherence tomography (OCT) benefit clinical diagnostic imaging in ophthalmology by enabling in vivo noninvasive en face and volumetric visualization of retinal structures, respectively. Spectral encoding methods enable confocal imaging through fiber optics and reduce system complexity. Previous applications in ophthalmic imaging include spectrally encoded confocal scanning laser ophthalmoscopy (SECSLO) and a combined SECSLO-OCT system for image guidance, tracking, and registration. However, spectrally encoded imaging suffers from speckle noise because each spectrally encoded channel is effectively monochromatic. Here, we demonstrate in vivo human retinal imaging using a swept source spectrally encoded scanning laser ophthalmoscope and OCT (SS-SESLO-OCT) at 1060 nm. SS-SESLO-OCT uses a shared 100 kHz Axsun swept source and shared scanner and imaging optics, and both channels are detected simultaneously on a shared, dual-channel high-speed digitizer. SESLO illumination and detection were performed using the single-mode core and multimode inner cladding of a double-clad fiber coupler, respectively, to preserve lateral resolution while improving collection efficiency and reducing speckle contrast at the expense of confocality. Concurrent en face SESLO and cross-sectional OCT images were acquired with 1376 × 500 pixels at 200 frames per second. Our system design is compact and uses a shared light source, imaging optics, and digitizer, which reduces overall system complexity and ensures inherent co-registration between the SESLO and OCT fields of view. En face SESLO images acquired concurrently with OCT cross-sections enable lateral motion tracking and three-dimensional volume registration, with broad applications in multi-volume OCT averaging, image mosaicking, and intraoperative instrument tracking.

  1. A mixture of sparse coding models explaining properties of face neurons related to holistic and parts-based processing.

    PubMed

    Hosoya, Haruo; Hyvärinen, Aapo

    2017-07-01

    Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models.

  2. Combining features from ERP components in single-trial EEG for discriminating four-category visual objects.

    PubMed

    Wang, Changming; Xiong, Shi; Hu, Xiaoping; Yao, Li; Zhang, Jiacai

    2012-10-01

    The category of an image containing a visual object can be successfully recognized from single-trial electroencephalography (EEG) measured while a subject views the image. Previous studies have shown that task-related information contained in event-related potential (ERP) components can discriminate two or three categories of object images. In this study, we investigated whether four categories of objects (human faces, buildings, cats, and cars) could be mutually discriminated using single-trial EEG data. The EEG waveforms acquired while subjects viewed the four categories of object images were segmented into several ERP components (P1, N1, P2a, and P2b), and Fisher linear discriminant analysis (Fisher-LDA) was used to classify EEG features extracted from the ERP components. First, we compared classification results using features from single ERP components and found that the N1 component achieved the highest classification accuracies. Second, we discriminated the four categories of objects using combined features from multiple ERP components and showed that combining ERP components improved four-category classification accuracy by exploiting the complementary discriminative information in the components. These findings confirm that four categories of object images can be discriminated with single-trial EEG, and they can guide the selection of effective EEG features for classifying visual objects.
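A toy version of this pipeline, windowed ERP features fed to a Fisher discriminant, can be sketched on synthetic data (two classes instead of the paper's four, with illustrative component windows and fabricated signals; training-set accuracy only):

```python
import numpy as np

# ERP component windows in ms (typical latencies; illustrative choices)
WINDOWS = {"P1": (80, 130), "N1": (130, 200), "P2a": (200, 260), "P2b": (260, 330)}

def erp_features(epoch, sfreq=500):
    """Mean amplitude of a single-trial epoch inside each ERP window."""
    return np.array([epoch[int(a*sfreq/1000):int(b*sfreq/1000)].mean()
                     for a, b in WINDOWS.values()])

def fisher_lda_fit(X0, X1):
    """Two-class Fisher discriminant: w = Sw^-1 (m1 - m0)."""
    m0, m1 = X0.mean(0), X1.mean(0)
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(len(m0)), m1 - m0)
    thresh = w @ (m0 + m1) / 2
    return w, thresh

rng = np.random.default_rng(3)
t = np.arange(250)  # 500 ms at 500 Hz
# "Face" trials carry an extra bump near N1 latency; "house" trials do not
faces = [np.sin(t/20) + 1.5*np.exp(-((t-80)**2)/200) + rng.normal(0, .3, 250)
         for _ in range(40)]
houses = [np.sin(t/20) + rng.normal(0, .3, 250) for _ in range(40)]
X0 = np.array([erp_features(e) for e in houses])
X1 = np.array([erp_features(e) for e in faces])
w, th = fisher_lda_fit(X0, X1)
acc = (np.mean(X1 @ w > th) + np.mean(X0 @ w <= th)) / 2
print(acc > 0.9)  # True: separable synthetic classes classify well
```

Extending to four categories would use multi-class LDA (or pairwise discriminants), and honest evaluation would require held-out trials.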

  3. Touch HDR: photograph enhancement by user controlled wide dynamic range adaptation

    NASA Astrophysics Data System (ADS)

    Verrall, Steve; Siddiqui, Hasib; Atanassov, Kalin; Goma, Sergio; Ramachandra, Vikas

    2013-03-01

    High Dynamic Range (HDR) technology enables photographers to capture a greater range of tonal detail. HDR is typically used to bring out detail in a dark foreground object set against a bright background. HDR technologies include multi-frame HDR and single-frame HDR. Multi-frame HDR requires the combination of a sequence of images taken at different exposures. Single-frame HDR requires histogram equalization post-processing of a single image, a technique referred to as local tone mapping (LTM). Images generated using HDR technology can look less natural than their non-HDR counterparts. Sometimes it is only desired to enhance small regions of an original image. For example, it may be desired to enhance the tonal detail of one subject's face while preserving the original background. The Touch HDR technique described in this paper achieves these goals by enabling selective blending of HDR and non-HDR versions of the same image to create a hybrid image. The HDR version of the image can be generated by either multi-frame or single-frame HDR. Selective blending can be performed as a post-processing step, for example, as a feature of a photo editor application, at any time after the image has been captured. HDR and non-HDR blending is controlled by a weighting surface, which is configured by the user through a sequence of touches on a touchscreen.

  4. Visual cryptography for face privacy

    NASA Astrophysics Data System (ADS)

    Ross, Arun; Othman, Asem A.

    2010-04-01

    We discuss the problem of preserving the privacy of a digital face image stored in a central database. In the proposed scheme, a private face image is dithered into two host face images such that it can be revealed only when both host images are simultaneously available; at the same time, the individual host images do not reveal the identity of the original image. In order to accomplish this, we appeal to the field of Visual Cryptography. Experimental results confirm the following: (a) the possibility of hiding a private face image in two unrelated host face images; (b) the successful matching of face images that are reconstructed by superimposing the host images; and (c) the inability of the host images, known as sheets, to reveal the identity of the secret face image.
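
    The sheet mechanism the abstract relies on can be illustrated with the classic Naor-Shamir-style (2,2) visual-cryptography scheme (a generic sketch, not the authors' host-image variant): each secret pixel expands into a pair of subpixels per sheet; white pixels get identical subpixel layouts in both sheets, black pixels get complementary layouts, so stacking the sheets (logical OR of the transparencies) makes black pixels fully dark while white pixels stay half dark.

```python
import numpy as np

rng = np.random.default_rng(1)

secret = rng.integers(0, 2, size=(8, 8))        # 1 = black, 0 = white

patterns = np.array([[1, 0], [0, 1]])           # the two subpixel layouts
choice = rng.integers(0, 2, size=secret.shape)  # random layout per pixel

sheet1 = patterns[choice]                       # shape (8, 8, 2)
# White pixel -> same layout in both sheets; black -> complementary layout.
sheet2 = np.where(secret[..., None] == 1, patterns[1 - choice], sheet1)

overlay = sheet1 | sheet2                       # stacking = OR of transparencies
darkness = overlay.sum(axis=-1)                 # 2 subpixels dark for black, 1 for white
```

    Each sheet on its own is uniformly half dark regardless of the secret, which is why an individual sheet reveals nothing about the hidden image.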

  5. Functional organization of the face-sensitive areas in human occipital-temporal cortex.

    PubMed

    Shao, Hanyu; Weng, Xuchu; He, Sheng

    2017-08-15

    Human occipital-temporal cortex features several areas sensitive to faces, presumably forming the biological substrate for face perception. To date, there are only piecemeal insights regarding the functional organization of these regions. They have come, moreover, from studies that are far from homogeneous with regard to the regions involved, the experimental design, and the data analysis approach. In order to provide an overall view of the functional organization of the face-sensitive areas, it is necessary to conduct a comprehensive study that taps into the pivotal functional properties of all the face-sensitive areas, within the context of the same experimental design, and uses multiple data analysis approaches. In this study, we identified the most robustly activated face-sensitive areas in bilateral occipital-temporal cortices (i.e., AFP, aFFA, pFFA, OFA, pcSTS, pSTS) and systematically compared their regionally averaged activation and multivoxel activation patterns to 96 images from 16 object categories, including faces and non-faces. This condition-rich and single-image analysis approach critically samples the functional properties of a brain region, allowing us to test how two basic functional properties, namely face-category selectivity and face-exemplar sensitivity, are distributed among these regions. Moreover, by examining the correlational structure of neural responses to the 96 images, we characterized their interactions in the greater face-processing network. We found that (1) r-pFFA showed the highest face-category selectivity, followed by l-pFFA, bilateral aFFA and OFA, and then bilateral pcSTS, whereas bilateral AFP and pSTS showed low face-category selectivity; (2) l-aFFA, l-pcSTS and bilateral AFP showed evidence of face-exemplar sensitivity; (3) r-OFA showed high overall response similarities with bilateral LOC and r-pFFA, suggesting it might be a transitional stage between general and face-selective information processing; (4) r-aFFA showed high face-selective response similarity with r-pFFA and r-OFA, indicating it was specifically involved in processing face information. Results also reveal two properties of these face-sensitive regions across the two hemispheres: (1) the averaged left intra-hemispheric response similarity for the images was lower than the averaged right intra-hemispheric and the inter-hemispheric response similarity, implying convergence of face processing towards the right hemisphere, and (2) the response similarities between homologous regions in the two hemispheres decreased as information processing proceeded beyond the early, more posterior, processing stage (OFA), indicating an increasing degree of hemispheric specialization and right-hemisphere bias for face information processing. This study contributes to an emerging picture of how faces are processed within the occipital and temporal cortex. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Effects of Implied Motion and Facing Direction on Positional Preferences in Single-Object Pictures.

    PubMed

    Palmer, Stephen E; Langlois, Thomas A

    2017-07-01

    Palmer, Gardner, and Wickens studied aesthetic preferences for pictures of single objects and found a strong inward bias: Right-facing objects were preferred left-of-center and left-facing objects right-of-center. They found no effect of object motion (people and cars showed the same inward bias as chairs and teapots), but the objects were not depicted as moving. Here we measured analogous inward biases with objects depicted as moving with an implied direction and speed by having participants drag-and-drop target objects into the most aesthetically pleasing position. In Experiment 1, human figures were shown diving or falling while moving forward or backward. Aesthetic biases were evident for both inward-facing and inward-moving figures, but the motion-based bias dominated so strongly that backward divers or fallers were preferred moving inward but facing outward. Experiment 2 investigated implied speed effects using images of humans, horses, and cars moving at different speeds (e.g., standing, walking, trotting, and galloping horses). Inward motion or facing biases were again present, and differences in their magnitude due to speed were evident. Unexpectedly, faster moving objects were generally preferred closer to frame center than slower moving objects. These results are discussed in terms of the combined effects of prospective, future-oriented biases, and retrospective, past-oriented biases.

  7. Age-related increase of image-invariance in the fusiform face area.

    PubMed

    Nordt, Marisa; Semmelmann, Kilian; Genç, Erhan; Weigelt, Sarah

    2018-06-01

    Face recognition undergoes prolonged development from childhood to adulthood, raising the question of which neural underpinnings drive this development. Here, we address the development of the neural foundation of the ability to recognize a face across naturally varying images. Fourteen children (ages 7-10) and 14 adults (ages 20-23) watched images of either the same or different faces in a functional magnetic resonance imaging adaptation paradigm. The same face was presented either in exact image repetitions or in varying images. Additionally, a subset of participants completed a behavioral task in which they decided whether the faces in consecutively presented images belonged to the same person. Results revealed age-related increases in neural sensitivity to face identity in the fusiform face area. Importantly, ventral temporal face-selective regions exhibited more image-invariance, as indicated by stronger adaptation for different images of the same person, in adults compared to children. Crucially, the amount of adaptation to face identity across varying images was correlated with the ability to recognize individual faces in different images. These results suggest that the increase of image-invariance in face-selective regions might be related to the development of face recognition skills. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Not All Faces Are Processed Equally: Evidence for Featural Rather than Holistic Processing of One's Own Face in a Face-Imaging Task

    ERIC Educational Resources Information Center

    Greenberg, Seth N.; Goshen-Gottstein, Yonatan

    2009-01-01

    The present work considers the mental imaging of faces, with a focus in own-face imaging. Experiments 1 and 3 demonstrated an own-face disadvantage, with slower generation of mental images of one's own face than of other familiar faces. In contrast, Experiment 2 demonstrated that mental images of facial parts are generated more quickly for one's…

  9. Visual search for faces by race: a cross-race study.

    PubMed

    Sun, Gang; Song, Luping; Bentin, Shlomo; Yang, Yanjie; Zhao, Lun

    2013-08-30

    Using a single averaged face of each race, a previous study indicated that detection of one other-race face against a background of own-race faces was faster than vice versa (Levin, 1996, 2000). However, employing a variable mapping of face pictures, one recent report found preferential detection of own-race over other-race faces (Lipp et al., 2009). Using a well-controlled design and a heterogeneous set of real face images, in the present study we explored visual search for own- and other-race faces in Chinese and Caucasian participants. Across both groups, the search for a face of one race among other-race faces was serial and self-terminating. In Chinese participants, the search was consistently faster for other-race than own-race faces, irrespective of upright or upside-down presentation; this search asymmetry was not evident in Caucasian participants. These characteristics suggest that the race of a face is not a basic visual feature, and that in Chinese participants the faster search for other-race than own-race faces also reflects perceptual factors. The possible mechanism underlying other-race search effects is discussed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Monocular correspondence detection for symmetrical objects by template matching

    NASA Astrophysics Data System (ADS)

    Vilmar, G.; Besslich, Philipp W., Jr.

    1990-09-01

    We describe a method to reconstruct 3-D information from a single view of a 3-D bilaterally symmetric object. The symmetry assumption allows us to obtain a "second view" from a different viewpoint by a simple reflection of the monocular image. We therefore have to solve the correspondence problem in a special case where known feature-based or area-based binocular approaches fail. In principle, our approach is based on frequency-domain template matching of the features on the epipolar lines. During a training period, our system "learns" the assignment of correspondence models to image features. The object shape is interpolated when no template matches the image features. This is an important advantage of the methodology, because no "real world" image satisfies the symmetry assumption perfectly. To simplify the training process we used single views of human faces (e.g., passport photos), but our system is trainable on any other kind of object.

  11. A unified classifier for robust face recognition based on combining multiple subspace algorithms

    NASA Astrophysics Data System (ADS)

    Ijaz Bajwa, Usama; Ahmad Taj, Imtiaz; Waqas Anwar, Muhammad

    2012-10-01

    Face recognition, the fastest growing biometric technology, has expanded manifold in the last few years. Various new algorithms and commercial systems have been proposed and developed. However, none of the proposed or developed algorithms is a complete solution, because an algorithm may work very well on one set of images with, say, illumination changes, but fail on another set with variations such as expression changes. This study is motivated by the fact that no single classifier can claim generally better performance across all facial image variations. To overcome this shortcoming and achieve generality, combining several classifiers using various strategies has been studied extensively, including the question of which classifiers are suitable for the task. The study is based on the outcome of a comprehensive comparative analysis of a combination of six subspace extraction algorithms and four distance metrics on three facial databases. The analysis leads to the selection of the most suitable classifiers, each of which performs better on one task or another. These classifiers are then combined into an ensemble classifier by two different strategies, weighted sum and re-ranking. The results of the ensemble classifier show that these strategies can be effectively used to construct a single classifier that successfully handles varying facial image conditions of illumination, aging and facial expression.
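
    The weighted-sum fusion strategy mentioned above can be sketched generically (an assumption-laden illustration, not the authors' implementation): each classifier returns a match score per gallery identity, scores are min-max normalized per classifier, and a weighted average picks the fused identity.

```python
import numpy as np

def weighted_sum_fusion(score_matrix, weights):
    """score_matrix: (n_classifiers, n_identities); weights: (n_classifiers,).
    Returns the index of the best fused identity and the fused score vector."""
    s = np.asarray(score_matrix, dtype=float)
    lo = s.min(axis=1, keepdims=True)
    hi = s.max(axis=1, keepdims=True)
    normed = (s - lo) / (hi - lo)                  # min-max per classifier
    fused = np.average(normed, axis=0, weights=weights)
    return int(fused.argmax()), fused

# Toy example: three classifiers score five gallery identities for one probe.
# Weights would plausibly come from each classifier's validation accuracy.
scores = [[0.2, 0.1, 0.9, 0.3, 0.4],
          [10., 40., 80., 20., 30.],
          [0.5, 0.9, 0.6, 0.4, 0.3]]
best, fused = weighted_sum_fusion(scores, weights=[0.4, 0.4, 0.2])
```

    Here the two heavily weighted classifiers agree on identity 2, so the fusion resolves the disagreement with the third classifier in their favor.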

  12. Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.

    PubMed

    Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F

    2013-09-01

    The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed by Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed by 3dMD patient software. All 3D CT models were exported in Stereo Lithography file format, and the 3dMD model was exported in Virtual Reality Modeling Language file format. Image registration and fusion were performed in Mimics software. A genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where the errors exceed 1.5 mm. However, the image fusion of the whole point cloud sets of CT and 3dMD is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allows the 3D virtual head to be an accurate, realistic, and widespread tool, and is of great benefit to virtual face modeling.

  13. Near-infrared imaging of face transplants: are both pedicles necessary?

    PubMed

    Nguyen, John T; Ashitate, Yoshitomo; Venugopal, Vivek; Neacsu, Florin; Kettenring, Frank; Frangioni, John V; Gioux, Sylvain; Lee, Bernard T

    2013-09-01

    Facial transplantation is a complex procedure that corrects severe facial defects due to traumas, burns, and congenital disorders. Although face transplantation has been successfully performed clinically, potential risks include tissue ischemia and necrosis. The vascular supply is typically based on the bilateral neck vessels. As it remains unclear whether perfusion can be based off a single pedicle, this study was designed to assess perfusion patterns of facial transplant allografts using near-infrared (NIR) fluorescence imaging. Upper facial composite tissue allotransplants were created using both carotid artery and external jugular vein pedicles in Yorkshire pigs. A flap validation model was created in n = 2 pigs and a clamp occlusion model was performed in n = 3 pigs. In the clamp occlusion models, sequential clamping of the vessels was performed to assess perfusion. Animals were injected with indocyanine green and imaged with NIR fluorescence. Quantitative metrics were assessed based on fluorescence intensity. With NIR imaging, arterial perforators emitted fluorescence indicating perfusion along the surface of the skin. Isolated clamping of one vascular pedicle showed successful perfusion across the midline based on NIR fluorescence imaging. This perfusion extended into the facial allograft within 60 s and perfused the entire contralateral side within 5 min. Determination of vascular perfusion is important in microsurgical constructs as complications can lead to flap loss. It is still unclear if facial transplants require both pedicles. This initial pilot study using intraoperative NIR fluorescence imaging suggests that facial flap models can be adequately perfused from a single pedicle. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Scatter Correction with Combined Single-Scatter Simulation and Monte Carlo Simulation Scaling Improved the Visual Artifacts and Quantification in 3-Dimensional Brain PET/CT Imaging with 15O-Gas Inhalation.

    PubMed

    Magota, Keiichi; Shiga, Tohru; Asano, Yukari; Shinyama, Daiki; Ye, Jinghan; Perkins, Amy E; Maniawski, Piotr J; Toyonaga, Takuya; Kobayashi, Kentaro; Hirata, Kenji; Katoh, Chietsugu; Hattori, Naoya; Tamaki, Nagara

    2017-12-01

    In 3-dimensional PET/CT imaging of the brain with 15O-gas inhalation, high radioactivity in the face mask creates cold artifacts and affects the quantitative accuracy when scatter is corrected by conventional methods (e.g., single-scatter simulation [SSS] with tail-fitting scaling [TFS-SSS]). Here we examined the validity of a newly developed scatter-correction method that combines SSS with a scaling factor calculated by Monte Carlo simulation (MCS-SSS). Methods: We performed phantom experiments and patient studies. In the phantom experiments, a plastic bottle simulating a face mask was attached to a cylindric phantom simulating the brain. The cylindric phantom was filled with 18F-FDG solution (3.8-7.0 kBq/mL). The bottle was filled with nonradioactive air or various levels of 18F-FDG (0-170 kBq/mL). Images were corrected either by TFS-SSS or by MCS-SSS using the CT data of the bottle filled with nonradioactive air. We compared the image activity concentration in the cylindric phantom with the true activity concentration. We also performed 15O-gas brain PET based on the steady-state method on patients with cerebrovascular disease to obtain quantitative images of cerebral blood flow and oxygen metabolism. Results: In the phantom experiments, a cold artifact was observed immediately next to the bottle on TFS-SSS images, where the image activity concentrations in the cylindric phantom were underestimated by 18%, 36%, and 70% at bottle radioactivity levels of 2.4, 5.1, and 9.7 kBq/mL, respectively. At higher bottle radioactivity, the image activity concentrations in the cylindric phantom were underestimated by more than 98%. For MCS-SSS, in contrast, the error was within 5% at each bottle radioactivity level, although the images showed slight high-activity artifacts around the bottle when it contained very high radioactivity. In the patient imaging with 15O2 and C15O2 inhalation, cold artifacts were observed on TFS-SSS images, whereas no artifacts were observed on any of the MCS-SSS images. Conclusion: MCS-SSS accurately corrected scatter in 15O-gas brain PET acquired in 3-dimensional mode, preventing the cold artifacts observed immediately next to the face mask on TFS-SSS images. The MCS-SSS method will contribute to accurate quantitative assessments. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  15. Processing the Facial Image: A Brief History

    ERIC Educational Resources Information Center

    Gross, Charles G.

    2005-01-01

    The study of the neural basis of face perception is a major research interest today. This review traces its roots in monkey neuropsychology and neurophysiology beginning with the Kluver-Bucy syndrome and its fractionation and then continuing with lesion and single neuron recording studies of inferior temporal cortex. The context and consequence of…

  16. Spatial imaging of UV emission from Jupiter and Saturn

    NASA Technical Reports Server (NTRS)

    Clarke, J. T.; Moos, H. W.

    1981-01-01

    Spatial imaging with the IUE is accomplished both by moving one of the apertures in a series of exposures and within the large aperture in a single exposure. The image of the field of view subtended by the large aperture is focussed directly onto the detector camera face at each wavelength; since the spatial resolution of the instrument is 5 to 6 arc sec and the aperture extends 23.0 by 10.3 arc sec, imaging both parallel and perpendicular to the dispersion is possible in a single exposure. The correction for the sensitivity variation along the slit at 1216 A is obtained from exposures of diffuse geocoronal H Ly alpha emission. The relative size of the aperture superimposed on the apparent discs of Jupiter and Saturn in a typical observation is illustrated. By moving the planet image 10 to 20 arc sec along the major axis of the aperture (which is constrained to point roughly north-south), maps of the discs of these planets are obtained with 6 arc sec spatial resolution.

  17. Fast and accurate face recognition based on image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2017-05-01

    Image compression is desired for many image-related applications, especially network-based applications with bandwidth and storage constraints. Typical reports from the face recognition community concentrate on the maximal compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) achieve high performance but run slowly due to their high computation demands. The PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed with the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated from the probe, gallery and mixed images. Finally, the CCR values are compared, and the largest CCR corresponds to the matched face. The time cost of each face matching is about the time of compressing the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression. On the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. The JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
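
    The compression-based matching idea can be demonstrated in miniature. This is a hedged sketch: the paper compresses images with JPEG, whereas here `zlib` on byte strings is a stand-in, and the CCR formula below is one plausible form of a composite ratio, not necessarily the paper's exact definition. The intuition is that the concatenation of a probe and a matching gallery entry compresses disproportionately well, because the shared content compresses away.

```python
import os
import zlib

def csize(data: bytes) -> int:
    """Compressed size of a byte string (zlib stands in for JPEG here)."""
    return len(zlib.compress(data, level=9))

def ccr(probe: bytes, gallery_item: bytes) -> float:
    # Assumed composite ratio: separate compressed sizes over the compressed
    # size of the mixed input; largest when probe and gallery share content.
    return (csize(probe) + csize(gallery_item)) / csize(probe + gallery_item)

# Toy gallery: random byte strings stand in for gallery face images.
gallery = [os.urandom(2000) for _ in range(4)]
probe = gallery[1]                     # probe identical to gallery entry 1

scores = [ccr(probe, g) for g in gallery]
match = max(range(len(gallery)), key=scores.__getitem__)
```

    For non-matching pairs the ratio stays near 1; for the matching pair the duplicated content falls inside the compressor's window and the ratio approaches 2.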

  18. InterFace: A software package for face image warping, averaging, and principal components analysis.

    PubMed

    Kramer, Robin S S; Jenkins, Rob; Burton, A Mike

    2017-12-01

    We describe InterFace, a software package for research in face recognition. The package supports image warping, reshaping, averaging of multiple face images, and morphing between faces. It also supports principal components analysis (PCA) of face images, along with tools for exploring the "face space" produced by PCA. The package uses a simple graphical user interface, allowing users to perform these sophisticated image manipulations without any need for programming knowledge. The program is available for download in the form of an app, which requires that users also have access to the (freely available) MATLAB Runtime environment.
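
    The "face space" produced by PCA can be illustrated with a generic eigenface-style sketch (this shows the standard technique, not InterFace's actual code): flattened, aligned face images are mean-centered and decomposed by SVD, so each face becomes a short coordinate vector on the top principal components.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in data: 30 "images" of 16x16 pixels, flattened to 256-dim vectors.
n_faces, h, w = 30, 16, 16
faces = rng.normal(size=(n_faces, h * w))

mean_face = faces.mean(axis=0)
centered = faces - mean_face
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

k = 10
components = Vt[:k]                  # the first k "eigenfaces"
coords = centered @ components.T     # each face as a k-dim point in face space

# Projecting back gives the best rank-k approximation of each face.
recon = coords @ components + mean_face
err_k = np.linalg.norm(recon - faces)
err_mean_only = np.linalg.norm(mean_face - faces)
```

    Morphing and averaging in such a space amount to interpolating or averaging the `coords` vectors and projecting back, which is the kind of manipulation the package exposes through its interface.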

  19. Matching voice and face identity from static images.

    PubMed

    Mavica, Lauren W; Barenholtz, Elan

    2013-04-01

    Previous research has suggested that people are unable to correctly choose which unfamiliar voice and static image of a face belong to the same person. Here, we present evidence that people can perform this task with greater than chance accuracy. In Experiment 1, participants saw photographs of two same-gender models while simultaneously listening to a voice recording of one of the models pictured in the photographs, and chose which of the two faces they thought belonged to the same model as the recorded voice. We included three conditions: (a) the visual stimuli were frontal headshots (including the neck and shoulders) and the auditory stimuli were recordings of spoken sentences; (b) the visual stimuli only contained cropped faces and the auditory stimuli were full sentences; (c) we used the same pictures as Condition 1 but the auditory stimuli were recordings of a single word. In Experiment 2, participants performed the same task as in Condition 1 of Experiment 1 but with the stimuli presented in sequence. Participants also rated the models' faces and voices along multiple "physical" dimensions (e.g., weight) or "personality" dimensions (e.g., extroversion); the degree of agreement between the ratings for each model's face and voice was compared to performance for that model in the matching task. In all three conditions, we found that participants chose, at better than chance levels, which faces and voices belonged to the same person. Performance in the matching task was not correlated with the degree of agreement on any of the rated dimensions.

  20. Validation of the facial assessment by computer evaluation (FACE) program for software-aided eyelid measurements.

    PubMed

    Choi, Catherine J; Lefebvre, Daniel R; Yoon, Michael K

    2016-06-01

    The aim of this article is to validate the accuracy of the Facial Assessment by Computer Evaluation (FACE) program for eyelid measurements. Sixteen subjects between the ages of 27 and 65 were included with IRB approval. Clinical measurements of upper eyelid margin reflex distance (MRD1) and inter-palpebral fissure (IPF) were obtained. Photographs were then taken with a digital single lens reflex camera with built-in pop-up flash (dSLR-pop) and a dSLR with lens-mounted ring flash (dSLR-ring), with the cameras upright and rotated 90, 180, and 270 degrees. The images were analyzed using both the FACE and ImageJ software to measure MRD1 and IPF. Thirty-two eyes of sixteen subjects were included. Comparison of clinical measurements of MRD1 and IPF with FACE measurements of photos in the upright position showed no statistically significant differences for dSLR-pop (MRD1: p = 0.0912, IPF: p = 0.334) or for dSLR-ring (MRD1: p = 0.105, IPF: p = 0.538). One-to-one comparison of MRD1 and IPF measurements in the four positions obtained with FACE versus ImageJ for dSLR-pop showed moderate to substantial agreement for MRD1 (intraclass correlation coefficient [ICC] = 0.534 upright, 0.731 in 90-degree rotation, 0.627 in 180-degree rotation, 0.477 in 270-degree rotation) and substantial to excellent agreement for IPF (ICC = 0.740, 0.859, 0.849, 0.805). In photos taken with dSLR-ring, there was excellent agreement for all MRD1 (ICC = 0.916, 0.932, 0.845, 0.812) and IPF (ICC = 0.937, 0.938, 0.917, 0.888) values. The FACE program is a valid method for measuring margin reflex distance and inter-palpebral fissure.

  1. Eye coding mechanisms in early human face event-related potentials.

    PubMed

    Rousselet, Guillaume A; Ince, Robin A A; van Rijsbergen, Nicola J; Schyns, Philippe G

    2014-11-10

    In humans, the N170 event-related potential (ERP) is an integrated measure of cortical activity that varies in amplitude and latency across trials. Researchers often conjecture that N170 variations reflect cortical mechanisms of stimulus coding for recognition. Here, to settle the conjecture and understand cortical information processing mechanisms, we unraveled the coding function of N170 latency and amplitude variations in possibly the simplest socially important natural visual task: face detection. On each experimental trial, 16 observers saw face and noise pictures sparsely sampled with small Gaussian apertures. Reverse-correlation methods coupled with information theory revealed that the presence of the eye specifically covaries with behavioral and neural measurements: the left eye strongly modulates reaction times and lateral electrodes represent mainly the presence of the contralateral eye during the rising part of the N170, with maximum sensitivity before the N170 peak. Furthermore, single-trial N170 latencies code more about the presence of the contralateral eye than N170 amplitudes and early latencies are associated with faster reaction times. The absence of these effects in control images that did not contain a face refutes alternative accounts based on retinal biases or allocation of attention to the eye location on the face. We conclude that the rising part of the N170, roughly 120-170 ms post-stimulus, is a critical time-window in human face processing mechanisms, reflecting predominantly, in a face detection task, the encoding of a single feature: the contralateral eye. © 2014 ARVO.

  2. Cross-modal face recognition using multi-matcher face scores

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2015-05-01

    The performance of face recognition can be improved using information fusion of multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, a probe face may be a thermal image (especially at nighttime), while only visible face images are available in the gallery database. Matching a thermal probe face onto the visible gallery faces requires cross-modal matching approaches. A few such studies were implemented in facial feature space with medium recognition performance. In this paper, we propose a cross-modal recognition approach in which multimodal faces are cross-matched in feature space and the recognition performance is enhanced with stereo fusion at the image, feature and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed with three cross-matched face scores from the aforementioned three algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logistic regression [BLR]) is trained and then tested with the score vectors using 10-fold cross validation. The proposed approach was validated with a multispectral stereo face dataset from 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84%, FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces onto the fused visible faces by using three face scores and the BLR classifier.
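
    The score-vector classification step can be sketched as follows. This is a hedged illustration on synthetic scores, not the authors' pipeline: each probe/gallery pair yields a 3-dimensional vector of cross-matched scores (one per recognition algorithm), and a trained classifier decides genuine versus impostor. A plain k-nearest-neighbor vote stands in for the k-NN / SVM / BLR classifiers compared in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def knn_predict(train_X, train_y, test_X, k=5):
    """Majority vote among the k nearest training score vectors."""
    d = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=-1)
    nearest = np.argsort(d, axis=1)[:, :k]
    votes = train_y[nearest]
    return (votes.mean(axis=1) > 0.5).astype(int)

# Synthetic score vectors: genuine pairs score high on all three algorithms,
# impostor pairs score low (a deliberately easy toy separation).
genuine = rng.normal(0.8, 0.1, size=(50, 3))
impostor = rng.normal(0.3, 0.1, size=(50, 3))
X = np.vstack([genuine, impostor])
y = np.array([1] * 50 + [0] * 50)

test_vectors = np.array([[0.85, 0.75, 0.80],
                         [0.25, 0.35, 0.30]])
pred = knn_predict(X, y, test_vectors)
```

    Swapping the vote for an SVM or logistic regression over the same 3-dimensional vectors, with 10-fold cross validation, mirrors the evaluation described in the abstract.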

  3. Dual instrument for in vivo and ex vivo OCT imaging in an ENT department

    PubMed Central

    Cernat, Ramona; Tatla, Taran S.; Pang, Jingyin; Tadrous, Paul J.; Bradu, Adrian; Dobre, George; Gelikonov, Grigory; Gelikonov, Valentin; Podoleanu, Adrian Gh.

    2012-01-01

    A dual instrument is assembled to investigate the usefulness of optical coherence tomography (OCT) imaging in an ear, nose and throat (ENT) department. Instrument 1 is dedicated to in vivo laryngeal investigation, based on an endoscope probe head assembled by compounding a miniature transversal flying-spot scanning probe with a commercial fiber bundle endoscope. This dual probe head is used to implement a dual-channel nasolaryngeal endoscopy-OCT system. The two probe heads simultaneously provide OCT cross-section images and en face fiber bundle endoscopic images. Instrument 2 is dedicated to either in vivo imaging of accessible surface skin and mucosal lesions of the scalp, face, neck and oral cavity or ex vivo imaging of the same excised tissues, based on a single OCT channel. This instrument uses improved interface optics in a handheld probe. The two instruments sequentially share the swept source at 1300 nm, the photo-detector unit and the imaging PC. An aiming red laser is permanently connected to the two instruments. It projects visible light collinearly with the 1300 nm beam and allows pixel correspondence between the en face endoscopy image and the cross-section OCT image in Instrument 1, as well as surface guidance for the operator in Instrument 2. The dual-channel instrument was initially tested on phantom models and then on patients with suspect laryngeal lesions in a busy ENT practice. This feasibility study demonstrates the potential of the dual imaging instrument as a useful tool in the testing and translation of OCT technology from the lab to the clinic. Instrument 1 is under investigation as a possible endoscopic screening tool for early laryngeal cancer. The larger and better-quality cross-section OCT images produced by Instrument 2 provide a reference base for comparison and continuing research on imaging freshly excised tissue, as well as in vivo interrogation of more superficial skin and mucosal lesions in the head and neck patient. PMID:23243583

  4. Unified Digital Image Display And Processing System

    NASA Astrophysics Data System (ADS)

    Horii, Steven C.; Maguire, Gerald Q.; Noz, Marilyn E.; Schimpf, James H.

    1981-11-01

    Our institution, like many others, is faced with a proliferation of medical imaging techniques. Many of these methods give rise to digital images (e.g., digital radiography, computerized tomography (CT), nuclear medicine and ultrasound). We feel that a unified, digital system approach to image management (storage, transmission and retrieval), image processing and image display will help in integrating these new modalities into the present diagnostic radiology operations. Future techniques are likely to employ digital images, so such a system could readily be expanded to include other image sources. We presently have the core of such a system. We can both view and process digital nuclear medicine (conventional gamma camera) images, positron emission tomography (PET) and CT images on a single system. Images from our recently installed digital radiographic unit can be added. Our paper describes our present system, explains the rationale for its configuration, and describes the directions in which it will expand.

  5. Image Analysis in Plant Sciences: Publish Then Perish.

    PubMed

    Lobet, Guillaume

    2017-07-01

    Image analysis has become a powerful technique for most plant scientists. In recent years dozens of image analysis tools have been published in plant science journals. These tools cover the full spectrum of plant scales, from single cells to organs and canopies. However, the field of plant image analysis remains in its infancy. It still has to overcome important challenges, such as the lack of robust validation practices or the absence of long-term support. In this Opinion article, I: (i) present the current state of the field, based on data from the plant-image-analysis.org database; (ii) identify the challenges faced by its community; and (iii) propose workable ways of improvement. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. The role of external features in face recognition with central vision loss: A pilot study

    PubMed Central

    Bernard, Jean-Baptiste; Chung, Susana T.L.

    2016-01-01

    Purpose We evaluated how the performance for recognizing familiar face images depends on the internal (eyebrows, eyes, nose, mouth) and external face features (chin, outline of face, hairline) in individuals with central vision loss. Methods In Experiment 1, we measured eye movements for four observers with central vision loss to determine whether they fixated more often on the internal or the external features of face images while attempting to recognize the images. We then measured the accuracy for recognizing face images that contained only the internal, only the external, or both internal and external features (Experiment 2), and for hybrid images where the internal and external features came from two different source images (Experiment 3), for five observers with central vision loss and four age-matched control observers. Results When recognizing familiar face images, approximately 40% of the fixations of observers with central vision loss were centered on the external features of faces. The recognition accuracy was higher for images containing only external features (66.8±3.3% correct) than for images containing only internal features (35.8±15.0%), a finding contradicting that of control observers. For hybrid face images, observers with central vision loss responded more accurately to the external features (50.4±17.8%) than to the internal features (9.3±4.9%), while control observers did not show the same bias toward responding to the external features. Conclusions Contrary to people with normal vision who rely more on the internal features of face images for recognizing familiar faces, individuals with central vision loss show a higher dependence on using external features of face images. PMID:26829260

  7. The Role of External Features in Face Recognition with Central Vision Loss.

    PubMed

    Bernard, Jean-Baptiste; Chung, Susana T L

    2016-05-01

    We evaluated how the performance of recognizing familiar face images depends on the internal (eyebrows, eyes, nose, mouth) and external face features (chin, outline of face, hairline) in individuals with central vision loss. In experiment 1, we measured eye movements for four observers with central vision loss to determine whether they fixated more often on the internal or the external features of face images while attempting to recognize the images. We then measured the accuracy for recognizing face images that contained only the internal, only the external, or both internal and external features (experiment 2) and for hybrid images where the internal and external features came from two different source images (experiment 3) for five observers with central vision loss and four age-matched control observers. When recognizing familiar face images, approximately 40% of the fixations of observers with central vision loss were centered on the external features of faces. The recognition accuracy was higher for images containing only external features (66.8 ± 3.3% correct) than for images containing only internal features (35.8 ± 15.0%), a finding contradicting that of control observers. For hybrid face images, observers with central vision loss responded more accurately to the external features (50.4 ± 17.8%) than to the internal features (9.3 ± 4.9%), whereas control observers did not show the same bias toward responding to the external features. Contrary to people with normal vision, who rely more on the internal features of face images for recognizing familiar faces, individuals with central vision loss show a higher dependence on using external features of face images.

  8. Face recognition from unconstrained three-dimensional face images using multitask sparse representation

    NASA Astrophysics Data System (ADS)

    Bentaieb, Samia; Ouamri, Abdelaziz; Nait-Ali, Amine; Keche, Mokhtar

    2018-01-01

    We propose and evaluate a three-dimensional (3D) face recognition approach that applies the speeded-up robust feature (SURF) algorithm to the shape index map of the depth image, under real-world conditions, using only a single gallery sample for each subject. First, the 3D scans are preprocessed, then SURF is applied on the shape index map to find interest points and their descriptors. Each 3D face scan is represented by keypoint descriptors, and a large dictionary is built from all the gallery descriptors. At the recognition step, descriptors of a probe face scan are sparsely represented by the dictionary. A multitask sparse representation classification is used to determine the identity of each probe face. The feasibility of the approach that uses the SURF algorithm on the shape index map for face identification/authentication is checked through an experimental investigation conducted on the Bosphorus, University of Milano Bicocca, and CASIA 3D datasets. It achieves overall rank-one recognition rates of 97.75%, 80.85%, and 95.12%, respectively, on these datasets.
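A shape index map of the kind used above assigns each surface point a value derived from its principal curvatures. One common convention (Koenderink's, sketched here as an assumption, since the abstract does not give the formula) maps curvatures k1 >= k2 to a value in [-1, 1], with -1 a spherical cup, 0 a saddle, and +1 a spherical cap:

```python
import math

def shape_index(k1, k2):
    """Koenderink-style shape index from principal curvatures k1 >= k2."""
    if k1 == k2:                      # umbilic point: the formula is singular
        return 1.0 if k1 > 0 else -1.0
    return (2.0 / math.pi) * math.atan((k1 + k2) / (k1 - k2))

print(shape_index(1.0, -1.0))   # symmetric saddle -> 0.0
```

SURF keypoints and descriptors are then extracted from this map rather than from raw depth values.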

  9. Three-dimensional face model reproduction method using multiview images

    NASA Astrophysics Data System (ADS)

    Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio

    1991-11-01

    This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence for teleconferencing. The goal of this research, as an integral component of a virtual space teleconferencing system, is to generate a three-dimensional face model from facial images and to synthesize images of the model as viewed virtually from different angles, with natural shadows that suit the lighting conditions of the virtual space. The proposed method is as follows: first, front and side view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from the front and side views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image. The personal face model, representing the individual character, is then reproduced. Next, an oblique view image is taken by a TV camera. The feature points of the oblique view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image. The modified boundary of the personal face model is determined using the face direction, namely the rotation angle, which is detected based on the extracted feature points. After the 3D model is established, new images are synthesized by mapping facial texture onto the model.
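The step of modifying a base face model to approximate extracted feature points can be illustrated with a least-squares 2D similarity transform (scale, rotation, translation) between corresponding points. This is a generic alignment sketch, not the paper's actual fitting procedure; the complex-number parameterization is only an implementation convenience.

```python
def fit_similarity(src, dst):
    """Least-squares similarity transform mapping 2D src points onto dst.
    Points are treated as complex numbers z; we solve z' = a*z + b, where
    a encodes scale * rotation and b the translation."""
    zs = [complex(x, y) for x, y in src]
    zd = [complex(x, y) for x, y in dst]
    n = len(zs)
    ms, md = sum(zs) / n, sum(zd) / n           # centroids
    num = sum((d - md) * (s - ms).conjugate() for s, d in zip(zs, zd))
    den = sum(abs(s - ms) ** 2 for s in zs)
    a = num / den
    b = md - a * ms
    return a, b

def apply_transform(a, b, pt):
    z = a * complex(*pt) + b
    return (z.real, z.imag)

# Target: scale by 2, rotate 90 degrees, translate by (1, 1)
a, b = fit_similarity([(0, 0), (1, 0), (0, 1)], [(1, 1), (1, 3), (-1, 1)])
print(apply_transform(a, b, (1, 0)))   # (1.0, 3.0)
```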

  10. Attention selectively modifies the representation of individual faces in the human brain

    PubMed Central

    Gratton, Caterina; Sreenivasan, Kartik K.; Silver, Michael A.; D’Esposito, Mark

    2013-01-01

    Attention modifies neural tuning for low-level features, but it is unclear how attention influences tuning for complex stimuli. We investigated this question in humans using fMRI and face stimuli. Participants were shown six faces (F1-F6) along a morph continuum, and selectivity was quantified by constructing tuning curves for individual voxels. Face-selective voxels exhibited greater responses to their preferred face than to non-preferred faces, particularly in posterior face areas. Anterior face areas instead displayed tuning for face categories: voxels in these areas preferred either the first (F1-F3) or second (F4-F6) half of the morph continuum. Next, we examined the effects of attention on voxel tuning by having subjects direct attention to one of the superimposed images of F1 and F6. We found that attention selectively enhanced responses in voxels preferring the attended face. Taken together, our results demonstrate that single voxels carry information about individual faces and that the nature of this information varies across cortical face areas. Additionally, we found that attention selectively enhances these representations. Our findings suggest that attention may act via a unitary principle of selective enhancement of responses to both simple and complex stimuli across multiple stages of the visual hierarchy. PMID:23595755
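The voxel tuning-curve construction described above can be illustrated schematically: average each voxel's responses per face stimulus, then read off the preferred face and a simple selectivity measure. This is a toy sketch with made-up responses, not the study's analysis code.

```python
def tuning_curve(responses):
    """responses: {face_id: list of responses across presentations}.
    Returns (mean curve, preferred face, selectivity), where selectivity
    is the preferred mean response minus the mean of the others."""
    curve = {face: sum(r) / len(r) for face, r in responses.items()}
    pref = max(curve, key=curve.get)
    others = [v for face, v in curve.items() if face != pref]
    return curve, pref, curve[pref] - sum(others) / len(others)

# Hypothetical voxel responses to faces F1-F3
curve, pref, sel = tuning_curve({"F1": [2.0, 2.2], "F2": [1.0, 1.2], "F3": [0.9, 1.1]})
print(pref)   # F1
```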

  11. Learning the moves: the effect of familiarity and facial motion on person recognition across large changes in viewing format.

    PubMed

    Roark, Dana A; O'Toole, Alice J; Abdi, Hervé; Barrett, Susan E

    2006-01-01

    Familiarity with a face or person can support recognition in tasks that require generalization to novel viewing contexts. Using naturalistic viewing conditions requiring recognition of people from face or whole body gait stimuli, we investigated the effects of familiarity, facial motion, and direction of learning/test transfer on person recognition. Participants were familiarized with previously unknown people from gait videos and were tested on faces (experiment 1a) or were familiarized with faces and were tested with gait videos (experiment 1b). Recognition was more accurate when learning from the face and testing with the gait videos, than when learning from the gait videos and testing with the face. The repetition of a single stimulus, either the face or gait, produced strong recognition gains across transfer conditions. Also, the presentation of moving faces resulted in better performance than that of static faces. In experiment 2, we investigated the role of facial motion further by testing recognition with static profile images. Motion provided no benefit for recognition, indicating that structure-from-motion is an unlikely source of the motion advantage found in the first set of experiments.

  12. A novel BCI based on ERP components sensitive to configural processing of human faces

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Zhao, Qibin; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej

    2012-04-01

    This study introduces a novel brain-computer interface (BCI) based on an oddball paradigm using stimuli of facial images with loss of configural face information (e.g., inversion of the face). To the best of our knowledge, the configural processing of human faces has been widely studied in cognitive neuroscience research but has not previously been applied to BCI. Our experiments confirm that the face-sensitive event-related potential (ERP) components N170 and vertex positive potential (VPP) reflect early structural encoding of faces and can be modulated by the configural processing of faces. With the proposed novel paradigm, we investigate the effects of the ERP components N170, VPP and P300 on target detection for BCI. An eight-class BCI platform is developed to analyze ERPs and evaluate the target detection performance using linear discriminant analysis without complicated feature extraction processing. The online classification accuracy of 88.7% and information transfer rate of 38.7 bits/min using stimuli of inverted faces with only a single trial suggest that the proposed paradigm based on the configural processing of faces is very promising for visual stimuli-driven BCI applications.

  13. A novel BCI based on ERP components sensitive to configural processing of human faces.

    PubMed

    Zhang, Yu; Zhao, Qibin; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej

    2012-04-01

    This study introduces a novel brain-computer interface (BCI) based on an oddball paradigm using stimuli of facial images with loss of configural face information (e.g., inversion of the face). To the best of our knowledge, the configural processing of human faces has been widely studied in cognitive neuroscience research but has not previously been applied to BCI. Our experiments confirm that the face-sensitive event-related potential (ERP) components N170 and vertex positive potential (VPP) reflect early structural encoding of faces and can be modulated by the configural processing of faces. With the proposed novel paradigm, we investigate the effects of the ERP components N170, VPP and P300 on target detection for BCI. An eight-class BCI platform is developed to analyze ERPs and evaluate the target detection performance using linear discriminant analysis without complicated feature extraction processing. The online classification accuracy of 88.7% and information transfer rate of 38.7 bits/min using stimuli of inverted faces with only a single trial suggest that the proposed paradigm based on the configural processing of faces is very promising for visual stimuli-driven BCI applications.
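Information transfer rates of this kind are conventionally computed with Wolpaw's formula, which converts the number of classes and the classification accuracy into bits per selection. The sketch below applies it to the reported eight-class, 88.7% figures; the per-minute rate additionally depends on the selection time, which the abstract does not state.

```python
import math

def bits_per_selection(n_classes, accuracy):
    """Wolpaw information transfer rate in bits per selection."""
    p, n = accuracy, n_classes
    bits = math.log2(n) + (p * math.log2(p) if p > 0 else 0.0)
    if p < 1.0:
        bits += (1 - p) * math.log2((1 - p) / (n - 1))
    return bits

print(round(bits_per_selection(8, 0.887), 3))   # about 2.17 bits per selection
```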

  14. Face detection on distorted images using perceptual quality-aware features

    NASA Astrophysics Data System (ADS)

    Gunasekar, Suriya; Ghosh, Joydeep; Bovik, Alan C.

    2014-02-01

    We quantify the degradation in performance of a popular and effective face detector when human-perceived image quality is degraded by distortions due to additive white Gaussian noise, Gaussian blur or JPEG compression. It is observed that, within a certain range of perceived image quality, a modest increase in image quality can drastically improve face detection performance. These results can be used to guide resource or bandwidth allocation in a communication/delivery system that is associated with face detection tasks. A new face detector based on QualHOG features is also proposed that augments face-indicative HOG features with perceptual quality-aware spatial Natural Scene Statistics (NSS) features, yielding improved tolerance against image distortions. The new detector provides statistically significant improvements over a strong baseline on a large database of face images representing a wide range of distortions. To facilitate this study, we created a new Distorted Face Database, containing face and non-face patches from images impaired by a variety of common distortion types and levels. This new dataset is available for download and further experimentation at www.ideal.ece.utexas.edu/~suriya/DFD/.
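HOG features of the kind the QualHOG detector builds on reduce, at their core, to per-cell histograms of gradient orientations weighted by gradient magnitude. Below is a minimal, dependency-free sketch of one unsigned-orientation cell histogram; the authors' implementation additionally involves block normalization and the NSS features, which are omitted here.

```python
import math

def hog_cell_histogram(cell, n_bins=9):
    """Unsigned gradient-orientation histogram for a small grayscale patch
    given as a list of row lists (central differences, interior pixels)."""
    h, w = len(cell), len(cell[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]
            gy = cell[y + 1][x] - cell[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned orientation
            hist[int(ang / (180.0 / n_bins)) % n_bins] += mag
    return hist

# A vertical edge: all gradient energy falls in the horizontal (0 degree) bin.
edge = [[0, 0, 10, 10]] * 4
print(hog_cell_histogram(edge)[0])   # 40.0
```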

  15. Locality constrained joint dynamic sparse representation for local matching based face recognition.

    PubMed

    Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun

    2014-01-01

    Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will degrade the performance of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms, which process each sub-image of a face image independently, the proposed algorithm regards local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC.
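The first step of LCJDSRC, partitioning a face image into smaller sub-images, is simple to make concrete. The helper below splits a 2D intensity array into non-overlapping blocks; it is a generic sketch, and the block size and any overlap used in the paper may differ.

```python
def partition(image, block_h, block_w):
    """Split a 2D image (list of row lists) into non-overlapping
    block_h x block_w sub-images, in row-major order."""
    h, w = len(image), len(image[0])
    blocks = []
    for top in range(0, h - block_h + 1, block_h):
        for left in range(0, w - block_w + 1, block_w):
            blocks.append([row[left:left + block_w]
                           for row in image[top:top + block_h]])
    return blocks

img = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
print(len(partition(img, 2, 2)))   # 4 sub-images
```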

  16. Sub-pattern based multi-manifold discriminant analysis for face recognition

    NASA Astrophysics Data System (ADS)

    Dai, Jiangyan; Guo, Changlu; Zhou, Wei; Shi, Yanjiao; Cong, Lin; Yi, Yugen

    2018-04-01

    In this paper, we present a Sub-pattern based Multi-manifold Discriminant Analysis (SpMMDA) algorithm for face recognition. Unlike the existing Multi-manifold Discriminant Analysis (MMDA) approach, which is based on holistic information of the face image for recognition, SpMMDA operates on sub-images partitioned from the original face image and then extracts the discriminative local features from the sub-images separately. Moreover, the structure information of different sub-images from the same face image is considered in the proposed method with the aim of further improving the recognition performance. Extensive experiments on three standard face databases (Extended YaleB, CMU PIE and AR) demonstrate that the proposed method is effective and outperforms some other sub-pattern based face recognition methods.

  17. Bioinformatics approaches to single-cell analysis in developmental biology.

    PubMed

    Yalcin, Dicle; Hakguder, Zeynep M; Otu, Hasan H

    2016-03-01

    Individual cells within the same population show various degrees of heterogeneity, which may be better handled with single-cell analysis to address biological and clinical questions. Single-cell analysis is especially important in developmental biology as subtle spatial and temporal differences in cells have significant associations with cell fate decisions during differentiation and with the description of a particular state of a cell exhibiting an aberrant phenotype. Biotechnological advances, especially in the area of microfluidics, have led to a robust, massively parallel and multi-dimensional capturing, sorting, and lysis of single-cells and amplification of related macromolecules, which have enabled the use of imaging and omics techniques on single cells. There have been improvements in computational single-cell image analysis in developmental biology regarding feature extraction, segmentation, image enhancement and machine learning, handling limitations of optical resolution to gain new perspectives from the raw microscopy images. Omics approaches, such as transcriptomics, genomics and epigenomics, targeting gene and small RNA expression, single nucleotide and structural variations and methylation and histone modifications, rely heavily on high-throughput sequencing technologies. Although there are well-established bioinformatics methods for analysis of sequence data, there are limited bioinformatics approaches which address experimental design, sample size considerations, amplification bias, normalization, differential expression, coverage, clustering and classification issues, specifically applied at the single-cell level. In this review, we summarize biological and technological advancements, discuss challenges faced in the aforementioned data acquisition and analysis issues and present future prospects for application of single-cell analyses to developmental biology. © The Author 2015. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  18. Illumination robust face recognition using spatial adaptive shadow compensation based on face intensity prior

    NASA Astrophysics Data System (ADS)

    Hsieh, Cheng-Ta; Huang, Kae-Horng; Lee, Chang-Hsing; Han, Chin-Chuan; Fan, Kuo-Chin

    2017-12-01

    Robust face recognition under illumination variations is an important and challenging task in a face recognition system, particularly for face recognition in the wild. In this paper, a face image preprocessing approach, called spatial adaptive shadow compensation (SASC), is proposed to eliminate shadows in the face image due to different lighting directions. First, spatial adaptive histogram equalization (SAHE), which uses a face intensity prior model, is proposed to enhance the contrast of each local face region without generating visible noise in smooth face areas. Adaptive shadow compensation (ASC), which performs shadow compensation in each local image block, is then used to produce a well-compensated face image appropriate for face feature extraction and recognition. Finally, null-space linear discriminant analysis (NLDA) is employed to extract discriminant features from SASC-compensated images. Experiments performed on the Yale B, extended Yale B, and CMU PIE face databases have shown that the proposed SASC always yields the best face recognition accuracy. That is, SASC is more robust to face recognition under illumination variations than other shadow compensation approaches.
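Plain global histogram equalization is the textbook building block that SAHE adapts spatially with the face intensity prior. The sketch below is the standard global version, not the paper's SAHE:

```python
def equalize(image, levels=256):
    """Global histogram equalization for a grayscale image given as a
    list of row lists of ints in [0, levels)."""
    hist = [0] * levels
    for row in image:
        for v in row:
            hist[v] += 1
    total = sum(hist)
    cdf, acc = [], 0
    for count in hist:               # cumulative distribution of intensities
        acc += count
        cdf.append(acc)
    cdf_min = next(c for c in cdf if c > 0)
    def remap(v):
        if total == cdf_min:         # flat image: nothing to equalize
            return v
        return round((cdf[v] - cdf_min) / (total - cdf_min) * (levels - 1))
    return [[remap(v) for v in row] for row in image]

print(equalize([[50, 50], [100, 200]]))   # [[0, 0], [128, 255]]
```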

  19. Multimodality Imaging of RNA Interference

    PubMed Central

    Nayak, Tapas R.; Krasteva, Lazura K.; Cai, Weibo

    2013-01-01

    The discovery of small interfering RNAs (siRNAs) and their potential to knock down virtually any gene of interest has ushered in a new era of RNA interference (RNAi). Clinical use of RNAi faces severe limitations due to inefficient delivery of siRNA or short hairpin RNA (shRNA). Many molecular imaging techniques have been adopted in RNAi-related research for evaluation of siRNA/shRNA delivery, biodistribution, pharmacokinetics, and the therapeutic effect. In this review article, we summarize the current status of in vivo imaging of RNAi. The molecular imaging techniques that have been employed include bioluminescence/fluorescence imaging, magnetic resonance imaging/spectroscopy, positron emission tomography, single-photon emission computed tomography, and various combinations of these techniques. Further development of non-invasive imaging strategies for RNAi, not only focusing on the delivery of siRNA/shRNA but also on the therapeutic efficacy, is critical for future clinical translation. Rigorous validation will be needed to confirm that the biodistribution of the carrier is correlated with that of the siRNA/shRNA, since imaging detects only the label (e.g., radioisotopes) but not the gene or carrier themselves. It is also essential to develop multimodality imaging approaches to realize the full potential of therapeutic RNAi, as no single imaging modality may be sufficient to simultaneously monitor both the gene delivery and the silencing effect of RNAi. PMID:23745567

  20. Why the long face? The importance of vertical image structure for biological "barcodes" underlying face recognition.

    PubMed

    Spence, Morgan L; Storrs, Katherine R; Arnold, Derek H

    2014-07-29

    Humans are experts at face recognition. The mechanisms underlying this complex capacity are not fully understood. Recently, it has been proposed that face recognition is supported by a coarse-scale analysis of visual information contained in horizontal bands of contrast distributed along the vertical image axis: a biological facial "barcode" (Dakin & Watt, 2009). A critical prediction of the facial barcode hypothesis is that the distribution of image contrast along the vertical axis will be more important for face recognition than image distributions along the horizontal axis. Using a novel paradigm involving dynamic image distortions, a series of experiments is presented examining famous face recognition impairments from selectively disrupting image distributions along the vertical or horizontal image axes. Results show that disrupting the image distribution along the vertical image axis is more disruptive for recognition than matched distortions along the horizontal axis. Consistent with the facial barcode hypothesis, these results suggest that human face recognition relies disproportionately on appropriately scaled distributions of image contrast along the vertical image axis. © 2014 ARVO.
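The barcode hypothesis concerns how contrast is distributed along the vertical versus the horizontal image axis. A crude way to build intuition is to compare variability within rows against variability within columns; this toy measure is for illustration only and is not the stimulus manipulation used in the study.

```python
def axis_contrast(image):
    """Mean standard deviation of pixel values within rows vs within
    columns of a grayscale image (list of row lists)."""
    def std(vals):
        m = sum(vals) / len(vals)
        return (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5
    rows = [std(r) for r in image]
    cols = [std(c) for c in zip(*image)]
    return sum(rows) / len(rows), sum(cols) / len(cols)

# An image made of horizontal bands: rows are flat, columns vary,
# so the contrast lives along the vertical axis.
row_c, col_c = axis_contrast([[0, 0, 0], [10, 10, 10], [0, 0, 0]])
print(row_c, col_c)
```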

  1. Can we match ultraviolet face images against their visible counterparts?

    NASA Astrophysics Data System (ADS)

    Narang, Neeru; Bourlai, Thirimachos; Hornak, Lawrence A.

    2015-05-01

    In law enforcement and security applications, the acquisition of face images is critical in producing key trace evidence for the successful identification of potential threats. However, face recognition (FR) for face images captured using different camera sensors, under variable illumination conditions and with different expressions is very challenging. In this paper, we investigate the advantages and limitations of the heterogeneous problem of matching ultraviolet (UV, from 100 nm to 400 nm in wavelength) face images against their visible (VIS) counterparts, when all face images are captured under controlled conditions. The contributions of our work are three-fold: (i) we used a camera sensor designed with the capability to acquire UV images at short ranges, and generated a dual-band (VIS and UV) database composed of multiple, full frontal, face images of 50 subjects, collected in two sessions spanning a period of 2 months; (ii) for each dataset, we determined which set of face image pre-processing algorithms is more suitable for face matching; and, finally, (iii) we determined which FR algorithm better matches cross-band face images, resulting in high rank-1 identification rates. Experimental results show that our cross-spectral matching (the heterogeneous problem, where gallery and probe sets consist of face images acquired in different spectral bands) algorithms achieve sufficient identification performance. However, we also conclude that the problem under study is very challenging, and it requires further investigation to address real-world law enforcement or military applications. To the best of our knowledge, this is the first time in the open literature that the problem of cross-spectral matching of UV against VIS band face images is being investigated.
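The rank-1 identification rate reported above is the fraction of probes whose best-scoring gallery match has the correct identity. A minimal sketch over a hypothetical similarity matrix (the scores and identities are made up for illustration):

```python
def rank1_rate(scores, gallery_ids, probe_ids):
    """scores[i][j]: similarity of probe i to gallery entry j."""
    hits = 0
    for i, row in enumerate(scores):
        best = max(range(len(row)), key=row.__getitem__)
        hits += gallery_ids[best] == probe_ids[i]
    return hits / len(scores)

# Toy 3-probe, 3-gallery example (identities A, B, C)
scores = [[0.9, 0.2, 0.1],   # probe A: gallery A scores highest (hit)
          [0.3, 0.8, 0.2],   # probe B: gallery B scores highest (hit)
          [0.4, 0.5, 0.3]]   # probe C: gallery B scores highest (miss)
print(rank1_rate(scores, ["A", "B", "C"], ["A", "B", "C"]))   # 2/3
```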

  2. Challenges and advantages in wide-field optical coherence tomography angiography imaging of the human retinal and choroidal vasculature at 1.7-MHz A-scan rate

    NASA Astrophysics Data System (ADS)

    Poddar, Raju; Migacz, Justin V.; Schwartz, Daniel M.; Werner, John S.; Gorczynska, Iwona

    2017-10-01

    We present noninvasive, three-dimensional, depth-resolved imaging of human retinal and choroidal blood circulation with a swept-source optical coherence tomography (OCT) system at 1065-nm center wavelength. Motion contrast OCT imaging was performed with the phase-variance OCT angiography method. A Fourier-domain mode-locked light source was used to enable an imaging rate of 1.7 MHz. We experimentally demonstrate the challenges and advantages of wide-field OCT angiography (OCTA). In the discussion, we consider acquisition time, scanning area, scanning density, and their influence on visualization of selected features of the retinal and choroidal vascular networks. The OCTA imaging was performed with a field of view of 16 deg (5 mm×5 mm) and 30 deg (9 mm×9 mm). Data were presented in en face projections generated from single volumes and in en face projection mosaics generated from up to 4 datasets. OCTA imaging at 1.7 MHz A-scan rate was compared with results obtained from a commercial OCTA instrument and with conventional ophthalmic diagnostic methods: fundus photography, fluorescein, and indocyanine green angiography. Comparison of images obtained from all methods is demonstrated using the same eye of a healthy volunteer. For example, imaging of retinal pathology is presented in three cases of advanced age-related macular degeneration.
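Phase-variance OCT angiography derives motion contrast at each pixel from the variability of the phase between repeated B-scans. A schematic single-pixel sketch (wrapping and variance only; the bulk-motion correction and thresholding used in practice are omitted):

```python
import math

def phase_variance(phases):
    """Motion contrast at one pixel: variance of wrapped phase differences
    between consecutive repeated B-scans (phases in radians)."""
    def wrap(d):                      # map a difference into (-pi, pi]
        return (d + math.pi) % (2 * math.pi) - math.pi
    diffs = [wrap(b - a) for a, b in zip(phases, phases[1:])]
    m = sum(diffs) / len(diffs)
    return sum((d - m) ** 2 for d in diffs) / len(diffs)

print(phase_variance([0.1, 0.1, 0.1, 0.1]))   # static tissue -> 0.0
```

Pixels inside vessels show large phase variance because moving blood decorrelates the phase between repeats.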

  3. Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis

    PubMed Central

    Peng, Zhenyun; Zhang, Yaohui

    2014-01-01

    Hair is a salient feature of the human face region and one of the important cues for face analysis. Accurate detection and presentation of the hair region is one of the key components for automatic synthesis of human facial caricature. In this paper, an automatic hair detection algorithm for the application of automatic synthesis of facial caricature based on a single image is proposed. First, hair regions in training images are labeled manually, and the hair position prior distributions and hair color likelihood distribution function are estimated from these labels efficiently. Second, the energy function of the test image is constructed according to the estimated prior distributions of hair location and hair color likelihood. This energy function is then optimized using the graph cuts technique and an initial hair region is obtained. Finally, the K-means algorithm and image postprocessing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm has been applied to a facial caricature synthesis system. Experiments proved that with our proposed hair segmentation algorithm the facial caricatures are vivid and satisfying. PMID:24592182
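The final K-means refinement step can be illustrated in one dimension, e.g., clustering pixel intensities of the initial hair region into two groups. This toy version (simple initialization, fixed iteration count) stands in for the full color-space clustering in the paper.

```python
def kmeans(values, k=2, iters=20):
    """Plain 1-D k-means over scalar values; returns sorted cluster centers."""
    centers = sorted(values)[::max(1, len(values) // k)][:k]   # spread initial centers
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Dark hair pixels vs bright background pixels (illustrative intensities)
print(kmeans([10, 12, 11, 200, 210, 205]))   # [11.0, 205.0]
```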

  4. Structure and morphology of magnetite anaerobically-produced by a marine magnetotactic bacterium and a dissimilatory iron-reducing bacterium

    USGS Publications Warehouse

    Sparks, N.H.C.; Mann, S.; Bazylinski, D.A.; Lovley, D.R.; Jannasch, H.W.; Frankel, R.B.

    1990-01-01

    Intracellular crystals of magnetite synthesized by cells of the magnetotactic vibrioid organism, MV-1, and extracellular crystals of magnetite produced by the non-magnetotactic dissimilatory iron-reducing bacterium strain GS-15, were examined using high-resolution transmission electron microscopy, electron diffraction and 57Fe Mössbauer spectroscopy. The magnetotactic bacterium contained a single chain of approximately 10 crystals aligned along the long axis of the cell. The crystals were essentially pure stoichiometric magnetite. When viewed along the crystal long axis the particles had a hexagonal cross-section, whereas side-on they appeared as rectangles or truncated rectangles of average dimensions 53 × 35 nm. These findings are explained in terms of a three-dimensional morphology comprising a hexagonal prism of {110} faces which are capped and truncated by {111} end faces. Electron diffraction and lattice imaging studies indicated that the particles were structurally well-defined single crystals. In contrast, magnetite particles produced by strain GS-15 were irregular in shape and had smaller mean dimensions (14 nm). Single crystals were imaged but these were not of high structural perfection. These results highlight the influence of intracellular control on the crystallochemical specificity of bacterial magnetites. The characterization of these crystals is important in aiding the identification of biogenic magnetic materials in paleomagnetism and in studies of sediment magnetization.

  5. Near infrared and visible face recognition based on decision fusion of LBP and DCT features

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan

    2018-03-01

    Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and signal-to-noise ratio (SNR). Therefore, near infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In order to extract the discriminative complementary features between near infrared and visible images, in this paper we propose a novel near infrared and visible face fusion recognition algorithm based on DCT and LBP features. Firstly, the effective features in the near-infrared face image are extracted from the low frequency part of the DCT coefficients and the partition histograms of the LBP operator. Secondly, the LBP features of the visible-light face image are extracted to compensate for the missing detail features of the near-infrared face image. Then, the LBP features of the visible-light face image and the DCT and LBP features of the near-infrared face image are sent to separate classifiers for labeling. Finally, a decision-level fusion strategy is used to obtain the final recognition result. The visible and near infrared face recognition is tested on the HITSZ Lab2 visible and near infrared face database. The experimental results show that the proposed method extracts the complementary features of near-infrared and visible face images and improves the robustness of unconstrained face recognition. Especially in the circumstance of small training samples, the recognition rate of the proposed method reaches 96.13%, a significant improvement over the 92.75% achieved by the method based on statistical feature fusion.
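
    The two feature types named above can be computed with a few lines of NumPy. This is a generic, hedged sketch of 8-neighbour LBP codes and low-frequency DCT coefficients, not the authors' exact feature pipeline (their block partitioning, classifier choice, and fusion rule are omitted).

```python
import numpy as np

def lbp_codes(gray):
    """8-neighbour LBP code for each interior pixel; histograms of these
    codes over image partitions form the LBP feature."""
    c = gray[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    h, w = gray.shape
    for bit, (dy, dx) in enumerate(offsets):
        nb = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << np.uint8(bit)
    return code

def dct_lowfreq(gray, k=8):
    """Top-left k x k block of an (unnormalized) 2-D DCT-II:
    the low-frequency coefficients used as the near-infrared feature."""
    def dct_mat(n):
        i = np.arange(n)
        return np.cos(np.pi * (2 * i[None, :] + 1) * i[:, None] / (2 * n))
    C = dct_mat(gray.shape[0]) @ gray @ dct_mat(gray.shape[1]).T
    return C[:k, :k].ravel()
```

    Decision-level fusion then combines the labels produced by the per-feature classifiers, e.g. by majority vote or weighted scores.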

  6. Repetition suppression of faces is modulated by emotion

    NASA Astrophysics Data System (ADS)

    Ishai, Alumit; Pessoa, Luiz; Bikle, Philip C.; Ungerleider, Leslie G.

    2004-06-01

    Single-unit recordings and functional brain imaging studies have shown reduced neural responses to repeated stimuli in the visual cortex. By using event-related functional MRI, we compared the activation evoked by repetitions of neutral and fearful faces, which were either task relevant (targets) or irrelevant (distracters). We found that within the inferior occipital gyri, lateral fusiform gyri, superior temporal sulci, amygdala, and the inferior frontal gyri/insula, targets evoked stronger responses than distracters and their repetition was associated with significantly reduced responses. Repetition suppression, as manifested by the difference in response amplitude between the first and third repetitions of a target, was stronger for fearful than neutral faces. Distracter faces, regardless of their repetition or valence, evoked negligible activation, indicating top-down attenuation of behaviorally irrelevant stimuli. Our findings demonstrate a three-way interaction between emotional valence, repetition, and task relevance and suggest that repetition suppression is influenced by high-level cognitive processes in the human brain.

  7. Healthy Aging Delays Scalp EEG Sensitivity to Noise in a Face Discrimination Task

    PubMed Central

    Rousselet, Guillaume A.; Gaspar, Carl M.; Pernet, Cyril R.; Husk, Jesse S.; Bennett, Patrick J.; Sekuler, Allison B.

    2010-01-01

    We used a single-trial ERP approach to quantify age-related changes in the time-course of noise sensitivity. A total of 62 healthy adults, aged between 19 and 98, performed a non-speeded discrimination task between two faces. Stimulus information was controlled by parametrically manipulating the phase spectrum of these faces. Behavioral 75% correct thresholds increased with age. This result may be explained by lower signal-to-noise ratios in older brains. ERPs from each subject were entered into a single-trial general linear regression model to identify variations in neural activity statistically associated with changes in image structure. The fit of the model, indexed by R2, was computed at multiple post-stimulus time points. The time-course of the R2 function showed significantly delayed noise sensitivity in older observers. This age effect is reliable, as demonstrated by test–retest in 24 subjects, and started about 120 ms after stimulus onset. Our analyses also suggest a qualitative change from a young to an older pattern of brain activity at around 47 ± 4 years old. PMID:21833194
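
    The single-trial regression behind the R2 time-course can be sketched as follows: an ordinary-least-squares fit of trials x time-points EEG amplitudes against one stimulus predictor, with R2 evaluated per time point. This is a generic illustration under simplified assumptions, not the authors' exact model.

```python
import numpy as np

def r2_timecourse(eeg, predictor):
    """R^2 at each post-stimulus time point from a single-trial linear
    regression. `eeg` is (trials, time points); `predictor` is one value
    per trial (e.g. the image's phase coherence)."""
    X = np.column_stack([np.ones_like(predictor), predictor])
    beta, *_ = np.linalg.lstsq(X, eeg, rcond=None)  # all time points at once
    resid = eeg - X @ beta
    ss_res = (resid ** 2).sum(axis=0)
    ss_tot = ((eeg - eeg.mean(axis=0)) ** 2).sum(axis=0)
    return 1.0 - ss_res / ss_tot
```

    Time points where the EEG amplitude tracks the stimulus manipulation show high R2; the age effect reported above is a shift of that R2 rise along the time axis.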

  8. Energy conservation using face detection

    NASA Astrophysics Data System (ADS)

    Deotale, Nilesh T.; Kalbande, Dhananjay R.; Mishra, Akassh A.

    2011-10-01

    Computerized face detection is concerned with the difficult task of locating human faces in a video signal. It has several applications, such as face recognition, simultaneous multiple-face processing, biometrics, security, video surveillance, human-computer interfaces, image database management, autofocus in digital cameras, and selecting regions of interest in pan-and-scale photo slideshows. The present paper deals with energy conservation using face detection. Automating the process requires the use of various image processing techniques. Various methods can be used for face detection, such as contour tracking, template matching, controlled background, model-based, motion-based, and color-based approaches. Basically, the video of the subject is converted into images, which are further selected manually for processing. However, several factors, such as poor illumination, movement of the face, viewpoint-dependent physical appearance, acquisition geometry, imaging conditions, and compression artifacts, make face detection difficult. This paper reports an algorithm for conservation of energy using face detection for various devices. The present paper suggests that energy conservation can be achieved by detecting the face, reducing the brightness of the complete image, and then adjusting the brightness of the particular area of the image where the face is located using histogram equalization.
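
    The suggested scheme (dim the whole frame, then brighten only the face area with histogram equalization) can be sketched as below. This is a minimal NumPy illustration assuming a face box supplied by any upstream detector; the dimming factor and interface are made up, not the paper's implementation.

```python
import numpy as np

def conserve_energy(gray, face_box, dim=0.4):
    """Reduce display energy: scale down the whole 8-bit frame, then
    histogram-equalize only the detected face region so it stays legible.
    `face_box` = (y0, y1, x0, x1) from an upstream face detector."""
    out = (gray.astype(float) * dim).astype(np.uint8)
    y0, y1, x0, x1 = face_box
    roi = gray[y0:y1, x0:x1]
    hist = np.bincount(roi.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) * 255.0 / max(cdf.max() - cdf.min(), 1.0)
    out[y0:y1, x0:x1] = cdf[roi].astype(np.uint8)  # equalized face region
    return out
```

    On an OLED-style display where power scales with pixel intensity, dimming everything outside the face box is where the energy saving comes from.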

  9. A compressed sensing approach for resolution improvement in fiber-bundle based endomicroscopy

    NASA Astrophysics Data System (ADS)

    Dumas, John P.; Lodhi, Muhammad A.; Bajwa, Waheed U.; Pierce, Mark C.

    2018-02-01

    Endomicroscopy techniques such as confocal, multi-photon, and wide-field imaging have all been demonstrated using coherent fiber-optic imaging bundles. While the narrow diameter and flexibility of fiber bundles are clinically advantageous, the number of resolvable points in an image is conventionally limited to the number of individual fibers within the bundle. We introduce concepts from the compressed sensing (CS) field to fiber-bundle-based endomicroscopy, to allow images to be recovered with more resolvable points than fibers in the bundle. The distal face of the fiber bundle is treated as a low-resolution sensor with circular pixels (fibers) arranged in a hexagonal lattice. A spatial light modulator is located conjugate to the object and distal face, applying multiple high-resolution masks to the intermediate image prior to propagation through the bundle. We acquire images of the proximal end of the bundle for each (known) mask pattern and then apply CS inversion algorithms to recover a single high-resolution image. We first developed a theoretical forward model describing image formation through the mask and fiber bundle. We then imaged objects through a rigid fiber bundle and demonstrated that our CS endomicroscopy architecture can recover intra-fiber details while filling inter-fiber regions with interpolation. Finally, we examine the relationship between reconstruction quality and the ratio of the number of mask elements to the number of fiber cores, finding that images could be generated with approximately 28,900 resolvable points for a 1,000-fiber region in our platform.
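
    The acquisition model above (a high-resolution mask applied before the bundle, one low-resolution image per mask, then a joint inversion) can be illustrated with a 1-D toy. The sketch below uses plain least squares with enough masks to make the system determined; the paper's actual reconstruction uses CS solvers with sparsity priors, and all dimensions and names here are made up.

```python
import numpy as np

def measure(x, mask, bundle):
    """One acquisition: apply a high-res mask to the scene, then let each
    'fiber' (a row of `bundle`) average its group of high-res pixels."""
    return bundle @ (mask * x)

def recover(measurements, masks, bundle):
    """Stack the per-mask systems and solve jointly: with enough distinct
    masks, more points are resolvable than there are fibers."""
    A = np.vstack([bundle * m[None, :] for m in masks])
    y = np.concatenate(measurements)
    xhat, *_ = np.linalg.lstsq(A, y, rcond=None)
    return xhat
```

    With a single all-ones mask only one value per fiber is recoverable; each additional distinct mask adds independent equations, which is how intra-fiber detail becomes recoverable.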

  10. Effects of intranasal oxytocin on the neural basis of face processing in autism spectrum disorder.

    PubMed

    Domes, Gregor; Heinrichs, Markus; Kumbier, Ekkehardt; Grossmann, Annette; Hauenstein, Karlheinz; Herpertz, Sabine C

    2013-08-01

    Autism spectrum disorder (ASD) is associated with altered face processing and decreased activity in brain regions involved in face processing. The neuropeptide oxytocin has been shown to promote face processing and modulate brain activity in healthy adults. The present study examined the effects of oxytocin on the neural basis of face processing in adults with Asperger syndrome (AS). A group of 14 individuals with AS and a group of 14 neurotypical control participants performed a face-matching and a house-matching task during functional magnetic resonance imaging. The effects of a single dose of 24 IU intranasally administered oxytocin were tested in a randomized, placebo-controlled, within-subject, cross-over design. Under placebo, the AS group showed decreased activity in the right amygdala, fusiform gyrus, and inferior occipital gyrus compared with the control group during face processing. After oxytocin treatment, right amygdala activity to facial stimuli increased in the AS group. These findings indicate that oxytocin increases the saliency of social stimuli in ASD and suggest that oxytocin might promote face processing and eye contact in individuals with ASD as prerequisites for neurotypical social interaction. Copyright © 2013 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  11. Improving face image extraction by using deep learning technique

    NASA Astrophysics Data System (ADS)

    Xue, Zhiyun; Antani, Sameer; Long, L. R.; Demner-Fushman, Dina; Thoma, George R.

    2016-03-01

    The National Library of Medicine (NLM) has made a collection of over 1.2 million research articles containing 3.2 million figure images searchable using the Open-i multimodal (text+image) search engine. Many images are visible light photographs, some of which are images containing faces ("face images"). Some of these face images are acquired in unconstrained settings, while others are studio photos. To extract the face regions in the images, we first applied one of the most widely used face detectors, a pre-trained Viola-Jones detector implemented in Matlab and OpenCV. The Viola-Jones detector was trained for unconstrained face image detection, but the results for the NLM database included many false positives, which resulted in a very low precision. To improve this performance, we applied a deep learning technique, which reduced the number of false positives and, as a result, significantly improved the detection precision. (For example, the classification accuracy for identifying whether the face regions output by the Viola-Jones detector are true positives is about 96% on a test set.) By combining these two techniques (Viola-Jones and deep learning) we were able to increase the system precision considerably, while avoiding the need to construct a large training set by manual delineation of the face regions.

  12. Self- or familiar-face recognition advantage? New insight using ambient images.

    PubMed

    Bortolon, Catherine; Lorieux, Siméon; Raffard, Stéphane

    2018-06-01

    Self-face recognition has been widely explored in the past few years. Nevertheless, the current literature relies on the use of standardized photographs, which do not represent daily-life face recognition. Therefore, we aim for the first time to evaluate self-face processing in healthy individuals using natural/ambient images, which contain variations in the environment and in the face itself. In total, 40 undergraduate and graduate students performed a forced delayed-matching task including images of one's own face, a friend's face, and famous and unknown individuals. For both reaction time and accuracy, results showed that participants were faster and more accurate when matching different images of their own face compared to both famous and unfamiliar faces. Nevertheless, no significant differences were found between self-face and friend-face or between friend-face and famous-face. Participants were also faster and more accurate when matching friend and famous faces compared to unfamiliar faces. Our results suggest that faster and more accurate responses to the self-face might be better explained by a familiarity effect - that is, (1) the result of frequent exposure to one's own image through mirrors and photos, (2) a more robust mental representation of one's own face and (3) strong face recognition units as for other familiar faces.

  13. Hydrodynamics of laser-driven double-foil collisions studied by orthogonal x-ray imaging

    NASA Astrophysics Data System (ADS)

    Aglitskiy, Y.; Metzler, N.; Karasik, M.; Serlin, V.; Obenschain, S. P.; Schmitt, A. J.; Velikovich, A. L.; Gardner, J. H.; Weaver, J.; Oh, J.

    2006-10-01

    With this experiment we start the study of the physics of hydrodynamic instability seeding and growth during the deceleration and stagnation phases. Our first targets consisted of two separated parallel plastic foils -- one flat and one rippled. The flat foil was irradiated by 4 ns Nike KrF laser pulses at 50 TW/cm^2 and accelerated towards the rippled one. Orthogonal imaging, i.e., simultaneous side-on and face-on radiography of the targets, has been used in these experiments. Side-on x-ray radiography and VISAR data yield shock and target velocities before and after the collision. Face-on streaks revealed well-pronounced oscillatory behavior of the single-mode mass perturbations. Both sets of synchronized data were compared with 1D and 2D simulations. Observed velocities, timing, and the peak value of areal mass variation are in good agreement with the simulated ones.

  14. A comparative view of face perception.

    PubMed

    Leopold, David A; Rhodes, Gillian

    2010-08-01

    Face perception serves as the basis for much of human social exchange. Diverse information can be extracted about an individual from a single glance at their face, including their identity, emotional state, and direction of attention. Neuropsychological and functional magnetic resonance imaging (fMRI) experiments reveal a complex network of specialized areas in the human brain supporting these face-reading skills. Here we consider the evolutionary roots of human face perception by exploring the manner in which different animal species view and respond to faces. We focus on behavioral experiments collected from both primates and nonprimates, assessing the types of information that animals are able to extract from the faces of their conspecifics, human experimenters, and natural predators. These experiments reveal that faces are an important category of visual stimuli for animals in all major vertebrate taxa, possibly reflecting the early emergence of neural specialization for faces in vertebrate evolution. At the same time, some aspects of facial perception are only evident in primates and a few other social mammals, and may therefore have evolved to suit the needs of complex social communication. Because the human brain likely utilizes both primitive and recently evolved neural specializations for the processing of faces, comparative studies may hold the key to understanding how these parallel circuits emerged during human evolution. 2010 APA, all rights reserved

  15. Dense 3D Face Alignment from 2D Video for Real-Time Use

    PubMed Central

    Jeni, László A.; Cohn, Jeffrey F.; Kanade, Takeo

    2018-01-01

    To enable real-time, person-independent 3D registration from 2D video, we developed a 3D cascade regression approach in which facial landmarks remain invariant across pose over a range of approximately 60 degrees. From a single 2D image of a person’s face, a dense 3D shape is registered in real time for each frame. The algorithm utilizes a fast cascade regression framework trained on high-resolution 3D face-scans of posed and spontaneous emotion expression. The algorithm first estimates the location of a dense set of landmarks and their visibility, then reconstructs face shapes by fitting a part-based 3D model. Because no assumptions are required about illumination or surface properties, the method can be applied to a wide range of imaging conditions that include 2D video and uncalibrated multi-view video. The method has been validated in a battery of experiments that evaluate its precision of 3D reconstruction, extension to multi-view reconstruction, temporal integration for videos and 3D head-pose estimation. Experimental findings strongly support the validity of real-time, 3D registration and reconstruction from 2D video. The software is available online at http://zface.org. PMID:29731533

  16. Fast Face-Recognition Optical Parallel Correlator Using High Accuracy Correlation Filter

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Kodate, Kashiko

    2005-11-01

    We designed and fabricated a fully automatic fast face recognition optical parallel correlator [E. Watanabe and K. Kodate: Appl. Opt. 44 (2005) 5666] based on the VanderLugt principle. The implementation of an as-yet unattained ultra-high-speed system was aided by reconfiguring the system to make it suitable for easier parallel processing, as well as by composing a higher-accuracy correlation filter and a high-speed ferroelectric liquid crystal spatial light modulator (FLC-SLM). In running trial experiments using this system (dubbed FARCO), we succeeded in acquiring remarkably low error rates of 1.3% for false match rate (FMR) and 2.6% for false non-match rate (FNMR). Given the results of our experiments, the aim of this paper is to examine methods of designing correlation filters and arranging database image arrays for even faster parallel correlation, underlining the issues of calculation technique, quantization bit rate, pixel size and shift from the optical axis. The correlation filter has proved its excellent performance and higher precision than classical correlation and the joint transform correlator (JTC). Moreover, arrangement of multi-object reference images leads to 10-channel correlation signals, as sharply marked as those of a single channel. These experimental results demonstrate great potential for achieving a processing speed of 10,000 faces/s.

  17. A dual-modal retinal imaging system with adaptive optics.

    PubMed

    Meadway, Alexander; Girkin, Christopher A; Zhang, Yuhua

    2013-12-02

    An adaptive optics scanning laser ophthalmoscope (AO-SLO) is adapted to provide optical coherence tomography (OCT) imaging. The AO-SLO function is unchanged. The system uses the same light source, scanning optics, and adaptive optics in both imaging modes. The result is a dual-modal system that can acquire retinal images in both en face and cross-section planes at the single cell level. A new spectral shaping method is developed to reduce the large sidelobes in the coherence profile of the OCT imaging when a non-ideal source is used with a minimal introduction of noise. The technique uses a combination of two existing digital techniques. The thickness and position of the traditionally named inner segment/outer segment junction are measured from individual photoreceptors. In-vivo images of healthy and diseased human retinas are demonstrated.

  18. Recognition of rotated images using the multi-valued neuron and rotation-invariant 2D Fourier descriptors

    NASA Astrophysics Data System (ADS)

    Aizenberg, Evgeni; Bigio, Irving J.; Rodriguez-Diaz, Eladio

    2012-03-01

    The Fourier descriptors paradigm is a well-established approach for affine-invariant characterization of shape contours. In the work presented here, we extend this method to images and obtain a 2D Fourier representation that is invariant to image rotation. The proposed technique retains phase uniqueness, and therefore structural image information is not lost. Rotation-invariant phase coefficients were used to train a single multi-valued neuron (MVN) to recognize satellite and human face images rotated by a wide range of angles. Experiments yielded classification rates of 100% and 96.43% for the two data sets, respectively. Recognition performance was additionally evaluated under the effects of lossy JPEG compression and additive Gaussian noise. Preliminary results show that the derived rotation-invariant features combined with the MVN provide a promising scheme for efficient recognition of rotated images.

  19. Interactive display system having a digital micromirror imaging device

    DOEpatents

    Veligdan, James T.; DeSanto, Leonard; Kaull, Lisa; Brewster, Calvin

    2006-04-11

    A display system includes a waveguide optical panel having an inlet face and an opposite outlet face. A projector cooperates with a digital imaging device, e.g. a digital micromirror imaging device, for projecting an image through the panel for display on the outlet face. The imaging device includes an array of mirrors tiltable between opposite display and divert positions. The display positions reflect an image light beam from the projector through the panel for display on the outlet face. The divert positions divert the image light beam away from the panel, and are additionally used for reflecting a probe light beam through the panel toward the outlet face. Covering a spot on the panel, e.g. with a finger, reflects the probe light beam back through the panel toward the inlet face for detection thereat and providing interactive capability.

  20. Single-lens stereovision system using a prism: position estimation of a multi-ocular prism.

    PubMed

    Cui, Xiaoyu; Lim, Kah Bin; Zhao, Yue; Kee, Wei Loon

    2014-05-01

    In this paper, a position estimation method using a prism-based single-lens stereovision system is proposed. A multi-faced prism was considered as a single optical system composed of a few refractive planes. A transformation matrix which relates the coordinates of an object point to its coordinates on the image plane through the refraction of the prism was derived based on geometrical optics. A mathematical model which is able to denote the position of a prism with an arbitrary number of faces using only seven parameters is introduced. This model further extends the application of the single-lens stereovision system using a prism to other areas. Experimental results are presented to prove the effectiveness and robustness of our proposed model.

  1. Illumination normalization of face image based on illuminant direction estimation and improved Retinex.

    PubMed

    Yi, Jizheng; Mao, Xia; Chen, Lijiang; Xue, Yuli; Rovetta, Alberto; Caleanu, Catalin-Daniel

    2015-01-01

    Illumination normalization of face images for face recognition and facial expression recognition is one of the most frequent and difficult problems in image processing. In order to obtain a face image with normal illumination, our method firstly divides the input face image into sixteen local regions and calculates the edge level percentage in each of them. Secondly, three local regions, which meet the requirements of lower complexity and larger average gray value, are selected to calculate the final illuminant direction according to the error function between the measured intensity and the calculated intensity, and the constraint function for an infinite light source model. After determining the final illuminant direction of the input face image, the Retinex algorithm is improved in two respects: (1) we optimize the surround function; (2) we truncate the values at both ends of the histogram of the face image, determine the range of gray levels, and stretch this range into the dynamic range of the display device. Finally, we achieve illumination normalization and obtain the final face image. Unlike previous illumination normalization approaches, the method proposed in this paper does not require any training step or any knowledge of 3D face and reflective surface models. Experimental results using the extended Yale face database B and CMU-PIE show that our method achieves a better normalization effect compared with existing techniques.
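
    A minimal single-scale Retinex with the described histogram truncation and stretching might look as follows. The Gaussian surround, sigma, and clipping fraction here are generic placeholders rather than the paper's optimized choices, and the illuminant-direction estimation step is omitted entirely.

```python
import numpy as np

def retinex_normalize(gray, sigma=15.0, clip=0.01):
    """Single-scale Retinex: estimate illumination with a Gaussian
    surround (FFT convolution), take the log-ratio, then clip both
    histogram tails and stretch the result to the display range [0, 255]."""
    h, w = gray.shape
    y, x = np.mgrid[:h, :w]
    g = np.exp(-(((y - h // 2) ** 2 + (x - w // 2) ** 2) / (2.0 * sigma ** 2)))
    g /= g.sum()  # normalized surround function
    illum = np.real(np.fft.ifft2(np.fft.fft2(gray) * np.fft.fft2(np.fft.ifftshift(g))))
    r = np.log1p(gray) - np.log1p(np.maximum(illum, 1e-6))
    lo, hi = np.quantile(r, [clip, 1.0 - clip])  # truncate both histogram tails
    return np.clip((r - lo) / max(hi - lo, 1e-9), 0.0, 1.0) * 255.0
```

    Clipping the histogram tails before stretching is what prevents a few extreme reflectance values from compressing the usable gray-level range.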

  2. Video-based face recognition via convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Bao, Tianlong; Ding, Chunhui; Karmoshi, Saleem; Zhu, Ming

    2017-06-01

    Face recognition has been widely studied recently, while video-based face recognition still remains a challenging task because of the low quality and large intra-class variation of face images captured from video. In this paper, we focus on two scenarios of video-based face recognition: (1) Still-to-Video (S2V) face recognition, i.e., querying a still face image against a gallery of video sequences; (2) Video-to-Still (V2S) face recognition, the reverse of the S2V scenario. A novel method is proposed to map still and video face images into a Euclidean space by a carefully designed convolutional neural network, after which Euclidean metrics are used to measure the distance between still and video images. Pairs of still and video images belonging to the same identity are used as supervision. In the training stage, a joint loss function that measures the Euclidean distance between the predicted features of training pairs and expanding vectors of still images is optimized to minimize the intra-class variation, while the inter-class variation is guaranteed by the large margin of still images. Transferred features are finally learned via the designed convolutional neural network. Experiments are performed on the COX face dataset. Experimental results show that our method achieves reliable performance compared with other state-of-the-art methods.
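
    Once such embeddings exist, the S2V query itself reduces to a nearest-neighbour search in the Euclidean space. A small sketch (the embedding network is omitted; the gallery structure and function name are hypothetical):

```python
import numpy as np

def s2v_identify(still_emb, gallery):
    """Still-to-Video matching: score each gallery video by the minimum
    Euclidean distance between the still's embedding and the embeddings
    of that video's frames; return the best-scoring identity."""
    scores = {
        identity: float(np.linalg.norm(frames - still_emb[None, :], axis=1).min())
        for identity, frames in gallery.items()
    }
    return min(scores, key=scores.get), scores
```

    Using the minimum over frames lets a few good-quality frames dominate the score, which matters when most frames of a video are low quality.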

  3. Performance evaluation of no-reference image quality metrics for face biometric images

    NASA Astrophysics Data System (ADS)

    Liu, Xinwei; Pedersen, Marius; Charrier, Christophe; Bours, Patrick

    2018-03-01

    The accuracy of face recognition systems is significantly affected by the quality of face sample images. The recently established standardization proposed several important aspects for the assessment of face sample quality. Many existing no-reference image quality metrics (IQMs) are able to assess natural image quality by taking into account similar image-based quality attributes as introduced in the standardization. However, whether such metrics can assess face sample quality is rarely considered. We evaluate the performance of 13 selected no-reference IQMs on face biometrics. The experimental results show that several of them can assess face sample quality according to the system performance. We also analyze the strengths and weaknesses of different IQMs, as well as why some of them fail to assess face sample quality. Retraining an original IQM using a face database can improve the performance of such a metric. In addition, the contribution of this paper can be used for the evaluation of IQMs on other biometric modalities; furthermore, it can be used for the development of multimodal biometric IQMs.

  4. A Method for En Face OCT Imaging of Subretinal Fluid in Age-Related Macular Degeneration

    PubMed Central

    Mohammad, Fatimah; Wanek, Justin; Zelkha, Ruth; Lim, Jennifer I.; Chen, Judy; Shahidi, Mahnaz

    2014-01-01

    Purpose. The purpose of the study is to report a method for en face imaging of subretinal fluid (SRF) due to age-related macular degeneration (AMD) based on spectral domain optical coherence tomography (SDOCT). Methods. High density SDOCT imaging was performed at two visits in 4 subjects with neovascular AMD and one healthy subject. En face OCT images of a retinal layer anterior to the retinal pigment epithelium were generated. Validity, repeatability, and utility of the method were established. Results. En face OCT images generated by manual and automatic segmentation were nearly indistinguishable and displayed similar regions of SRF. En face OCT images displayed uniform intensities and similar retinal vascular patterns in a healthy subject, while the size and appearance of a hypopigmented fibrotic scar in an AMD subject were similar at 2 visits. In AMD subjects, dark regions on en face OCT images corresponded to reduced or absent light reflectance due to SRF. On en face OCT images, a decrease in SRF areas with treatment was demonstrated and this corresponded with a reduction in the central subfield retinal thickness. Conclusion. En face OCT imaging is a promising tool for visualization and monitoring of SRF area due to disease progression and treatment. PMID:25478209

  5. Efficient dense blur map estimation for automatic 2D-to-3D conversion

    NASA Astrophysics Data System (ADS)

    Vosters, L. P. J.; de Haan, G.

    2012-03-01

    Focus is an important depth cue for 2D-to-3D conversion of low depth-of-field images and video. However, focus can only be reliably estimated on edges. Therefore, Bea et al. [1] first proposed an optimization-based approach to propagate focus to non-edge image portions, for single-image focus editing. While their approach produces accurate dense blur maps, the computational complexity and memory requirements for solving the resulting sparse linear system with standard multigrid or (multilevel) preconditioning techniques are infeasible within the stringent requirements of the consumer electronics and broadcast industry. In this paper we propose fast, efficient, low-latency, line-scanning-based focus propagation, which mitigates the need for complex multigrid or (multilevel) preconditioning techniques. In addition we propose facial blur compensation to compensate for false shading edges that cause incorrect blur estimates in people's faces. In general, shading leads to incorrect focus estimates, which may lead to unnatural 3D and visual discomfort. Since visual attention mostly tends toward faces, our solution addresses the most distracting errors. A subjective assessment by paired comparison on a set of challenging low depth-of-field images shows that the proposed approach achieves equal 3D image quality to optimization-based approaches, and that facial blur compensation results in a significant improvement.

  6. Smartphones as image processing systems for prosthetic vision.

    PubMed

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J

    2013-01-01

    The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing and relaying image information as well as extracting useful features from the scene surrounding the patient. The capabilities and multitude of image processing algorithms that the device can perform in real time play a major part in the final quality of the prosthetic vision. It is therefore desirable to use powerful hardware while avoiding bulky, cumbersome solutions. Recent publications have reported on portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC Machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise-reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old to recent devices. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered a valid external electronics platform for visual prosthetic research.

  7. Seeing Jesus in toast: Neural and behavioral correlates of face pareidolia

    PubMed Central

    Liu, Jiangang; Li, Jun; Feng, Lu; Li, Ling; Tian, Jie; Lee, Kang

    2014-01-01

    Face pareidolia is the illusory perception of non-existent faces. The present study, for the first time, contrasted behavioral and neural responses of face pareidolia with those of letter pareidolia to explore face-specific behavioral and neural responses during illusory face processing. Participants were shown pure-noise images but were led to believe that 50% of them contained either faces or letters; they reported seeing faces or letters illusorily 34% and 38% of the time, respectively. The right fusiform face area (rFFA) showed a specific response when participants “saw” faces as opposed to letters in the pure-noise images. Behavioral responses during face pareidolia produced a classification image that resembled a face, whereas those during letter pareidolia produced a classification image that was letter-like. Further, the extent to which such behavioral classification images resembled faces was directly related to the level of face-specific activations in the right FFA. This finding suggests that the right FFA plays a specific role not only in processing of real faces but also in illusory face perception, perhaps serving to facilitate the interaction between bottom-up information from the primary visual cortex and top-down signals from the prefrontal cortex (PFC). Whole brain analyses revealed a network specialized in face pareidolia, including both the frontal and occipito-temporal regions. Our findings suggest that human face processing has a strong top-down component whereby sensory input with even the slightest suggestion of a face can result in the interpretation of a face. PMID:24583223
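
The behavioral classification images mentioned above come from reverse correlation: averaging the noise stimuli according to the observer's responses, so that any structure the observer's internal template imposes on their decisions emerges in the average. A minimal sketch (array shapes and the function name are assumptions):

```python
import numpy as np

def classification_image(noise_images, responses):
    """Reverse-correlation classification image.

    noise_images: array (n_trials, h, w) of pure-noise stimuli
    responses:    boolean array, True where the observer reported a face
    """
    # mean noise on 'face seen' trials minus mean on 'not seen' trials
    seen = noise_images[responses].mean(axis=0)
    not_seen = noise_images[~responses].mean(axis=0)
    return seen - not_seen
```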

  8. Comparison of the structural properties of Zn-face and O-face single crystal homoepitaxial ZnO epilayers grown by RF-magnetron sputtering

    NASA Astrophysics Data System (ADS)

    Schifano, R.; Riise, H. N.; Domagala, J. Z.; Azarov, A. Yu.; Ratajczak, R.; Monakhov, E. V.; Venkatachalapathy, V.; Vines, L.; Chan, K. S.; Wong-Leung, J.; Svensson, B. G.

    2017-01-01

    Homoepitaxial ZnO growth is demonstrated from conventional RF-sputtering at 400 °C on both Zn and O polar faces of hydrothermally grown ZnO substrates. A minimum yield for the Rutherford backscattering and channeling spectrum, χmin, equal to ˜3% and ˜12% and a full width at half maximum of the 00.2 diffraction peak rocking curve of (70 ± 10) arc sec and (1400 ± 100) arc sec have been found for samples grown on the Zn and O face, respectively. The structural characteristics of the film deposited on the Zn face are comparable with those of epilayers grown by more complex techniques like molecular beam epitaxy. In contrast, the film simultaneously deposited on the O-face exhibits an inferior crystalline structure ˜0.7% strained in the c-direction and a higher atomic number contrast compared with the substrate, as revealed by high angle annular dark field imaging measurements. These differences between the Zn- and O-face films are discussed in detail and associated with the different growth mechanisms prevailing on the two surfaces.

  9. Fourier power spectrum characteristics of face photographs: attractiveness perception depends on low-level image properties.

    PubMed

    Menzel, Claudia; Hayn-Leichsenring, Gregor U; Langner, Oliver; Wiese, Holger; Redies, Christoph

    2015-01-01

    We investigated whether low-level processed image properties that are shared by natural scenes and artworks - but not veridical face photographs - affect the perception of facial attractiveness and age. Specifically, we considered the slope of the radially averaged Fourier power spectrum in a log-log plot. This slope is a measure of the distribution of spatial frequency power in an image. Images of natural scenes and artworks possess - compared to face images - a relatively shallow slope (i.e., increased high spatial frequency power). Since aesthetic perception might be based on the efficient processing of images with natural scene statistics, we assumed that the perception of facial attractiveness might also be affected by these properties. We calculated Fourier slope and other beauty-associated measurements in face images and correlated them with ratings of attractiveness and age of the depicted persons (Study 1). We found that Fourier slope - in contrast to the other tested image properties - did not predict attractiveness ratings when we controlled for age. In Study 2A, we overlaid face images with random-phase patterns with different statistics. Patterns with a slope similar to those in natural scenes and artworks resulted in lower attractiveness and higher age ratings. In Studies 2B and 2C, we directly manipulated the Fourier slope of face images and found that images with shallower slopes were rated as more attractive. Additionally, attractiveness of unaltered faces was affected by the Fourier slope of a random-phase background (Study 3). Faces in front of backgrounds with statistics similar to natural scenes and faces were rated as more attractive. We conclude that facial attractiveness ratings are affected by specific image properties. An explanation might be the efficient coding hypothesis.
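
The slope measure used in the study is obtained by radially averaging the 2D power spectrum and fitting a line in log-log coordinates. The sketch below shows one common way to compute it; details such as the binning and the fitted frequency range are assumptions, not the authors' exact procedure:

```python
import numpy as np

def fourier_slope(image):
    """Slope of the radially averaged Fourier power spectrum (log-log fit)."""
    f = np.fft.fftshift(np.fft.fft2(image))
    power = np.abs(f) ** 2
    h, w = image.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    # radial mean: total power per integer radius / samples per radius
    radial = np.bincount(r.ravel(), weights=power.ravel()) / np.bincount(r.ravel())
    freqs = np.arange(1, min(h, w) // 2)   # skip DC, stay below Nyquist
    slope, _ = np.polyfit(np.log(freqs), np.log(radial[freqs]), 1)
    return slope
```

In this convention, natural scenes give slopes of roughly -2 (a ~1/f^2 power falloff); shallower (less negative) slopes mean relatively more high-spatial-frequency power.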

  10. Fourier Power Spectrum Characteristics of Face Photographs: Attractiveness Perception Depends on Low-Level Image Properties

    PubMed Central

    Langner, Oliver; Wiese, Holger; Redies, Christoph

    2015-01-01

    We investigated whether low-level processed image properties that are shared by natural scenes and artworks – but not veridical face photographs – affect the perception of facial attractiveness and age. Specifically, we considered the slope of the radially averaged Fourier power spectrum in a log-log plot. This slope is a measure of the distribution of spatial frequency power in an image. Images of natural scenes and artworks possess – compared to face images – a relatively shallow slope (i.e., increased high spatial frequency power). Since aesthetic perception might be based on the efficient processing of images with natural scene statistics, we assumed that the perception of facial attractiveness might also be affected by these properties. We calculated Fourier slope and other beauty-associated measurements in face images and correlated them with ratings of attractiveness and age of the depicted persons (Study 1). We found that Fourier slope – in contrast to the other tested image properties – did not predict attractiveness ratings when we controlled for age. In Study 2A, we overlaid face images with random-phase patterns with different statistics. Patterns with a slope similar to those in natural scenes and artworks resulted in lower attractiveness and higher age ratings. In Studies 2B and 2C, we directly manipulated the Fourier slope of face images and found that images with shallower slopes were rated as more attractive. Additionally, attractiveness of unaltered faces was affected by the Fourier slope of a random-phase background (Study 3). Faces in front of backgrounds with statistics similar to natural scenes and faces were rated as more attractive. We conclude that facial attractiveness ratings are affected by specific image properties. An explanation might be the efficient coding hypothesis. PMID:25835539

  11. Can the usage of human growth hormones affect facial appearance and the accuracy of face recognition systems?

    NASA Astrophysics Data System (ADS)

    Rose, Jake; Martin, Michael; Bourlai, Thirimachos

    2014-06-01

    In law enforcement and security applications, the acquisition of face images is critical in producing key trace evidence for the successful identification of potential threats. The goal of the study is to demonstrate that steroid usage significantly affects human facial appearance and hence, the performance of commercial and academic face recognition (FR) algorithms. In this work, we evaluate the performance of state-of-the-art FR algorithms on two unique face image datasets of subjects before (gallery set) and after (probe set) steroid (or human growth hormone) usage. For the purpose of this study, datasets of 73 subjects were created from multiple sources found on the Internet, containing images of men and women before and after steroid usage. Next, we geometrically pre-processed all images of both face datasets. Then, we applied image restoration techniques on the same face datasets, and finally, we applied FR algorithms in order to match the pre-processed face images of our probe datasets against the face images of the gallery set. Experimental results demonstrate that only a specific set of FR algorithms obtain the most accurate results (in terms of the rank-1 identification rate). This is because several factors influence the efficiency of face matchers, including (i) the time lapse between the before and after face photos, (ii) the usage of different drugs (e.g. Dianabol, Winstrol, and Decabolan), (iii) the usage of different cameras to capture face images, and (iv) the variability of standoff distance, illumination and other noise factors (e.g. motion noise). All of these complicating factors make clear that cross-scenario matching is a very challenging problem and, thus, further investigation is required.
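
The rank-1 identification rate reported here counts a probe as correctly identified when its best-scoring gallery match has the probe's true identity. A minimal sketch (the similarity-matrix layout is an assumption):

```python
import numpy as np

def rank1_rate(scores, gallery_ids, probe_ids):
    """Rank-1 identification rate.

    scores: (n_probes, n_gallery) similarity matrix (higher = better match)
    """
    # identity of each probe's top-scoring gallery entry
    best = np.asarray(gallery_ids)[scores.argmax(axis=1)]
    return float(np.mean(best == np.asarray(probe_ids)))
```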

  12. Computer vision and soft computing for automatic skull-face overlay in craniofacial superimposition.

    PubMed

    Campomanes-Álvarez, B Rosario; Ibáñez, O; Navarro, F; Alemán, I; Botella, M; Damas, S; Cordón, O

    2014-12-01

    Craniofacial superimposition can provide evidence supporting or rejecting the hypothesis that human skeletal remains belong to a missing person. It involves overlaying a skull with a number of ante mortem images of an individual and analyzing their morphological correspondence. Within the craniofacial superimposition process, the skull-face overlay stage focuses on achieving the best possible overlay of the skull and a single ante mortem image of the suspect. Although craniofacial superimposition has been in use for over a century, skull-face overlay is still applied by means of a trial-and-error approach without an automatic method; practitioners finish the process once they consider that a good enough overlay has been attained. Hence, skull-face overlay is a very challenging, subjective, error-prone, and time-consuming part of the whole process. Although an objective numerical assessment of overlay quality has not yet been achieved, computer vision and soft computing are powerful tools for automating the task, dramatically reducing the time taken by the expert and producing an unbiased overlay result. In this manuscript, we justify and analyze the use of these techniques to properly model the skull-face overlay problem. We also present the automatic technical procedure we have developed using these computational methods and show the four overlays obtained in two craniofacial superimposition cases. This automatic procedure can thus be considered a tool to aid forensic anthropologists in performing the skull-face overlay, automating the most tedious task within craniofacial superimposition and removing its subjectivity. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  13. Two-step superresolution approach for surveillance face image through radial basis function-partial least squares regression and locality-induced sparse representation

    NASA Astrophysics Data System (ADS)

    Jiang, Junjun; Hu, Ruimin; Han, Zhen; Wang, Zhongyuan; Chen, Jun

    2013-10-01

    Face superresolution (SR), or face hallucination, refers to the technique of generating a high-resolution (HR) face image from a low-resolution (LR) one with the help of a set of training examples. It aims at transcending the limitations of electronic imaging systems. Applications of face SR include video surveillance, in which the individual of interest is often far from cameras. A two-step method is proposed to infer a high-quality and HR face image from a low-quality and LR observation. First, we establish the nonlinear relationship between LR face images and HR ones, according to radial basis function and partial least squares (RBF-PLS) regression, to transform the LR face into the global face space. Then, a locality-induced sparse representation (LiSR) approach is presented to enhance the local facial details once all the global faces for each LR training face are constructed. A comparison of some state-of-the-art SR methods shows the superiority of the proposed two-step approach, RBF-PLS global face regression followed by LiSR-based local patch reconstruction. Experiments also demonstrate the effectiveness under both simulation conditions and some real conditions.

  14. Processing of Fear and Anger Facial Expressions: The Role of Spatial Frequency

    PubMed Central

    Comfort, William E.; Wang, Meng; Benton, Christopher P.; Zana, Yossi

    2013-01-01

    Spatial frequency (SF) components encode a portion of the affective value expressed in face images. The aim of this study was to estimate the relative weight of specific frequency spectrum bandwidth on the discrimination of anger and fear facial expressions. The general paradigm was a classification of the expression of faces morphed at varying proportions between anger and fear images in which SF adaptation and SF subtraction are expected to shift classification of facial emotion. A series of three experiments was conducted. In Experiment 1 subjects classified morphed face images that were unfiltered or filtered to remove either low (<8 cycles/face), middle (12–28 cycles/face), or high (>32 cycles/face) SF components. In Experiment 2 subjects were adapted to unfiltered or filtered prototypical (non-morphed) fear face images and subsequently classified morphed face images. In Experiment 3 subjects were adapted to unfiltered or filtered prototypical fear face images with the phase component randomized before classifying morphed face images. Removing mid frequency components from the target images shifted classification toward fear. The same shift was observed under adaptation condition to unfiltered and low- and middle-range filtered fear images. However, when the phase spectrum of the same adaptation stimuli was randomized, no adaptation effect was observed. These results suggest that medium SF components support the perception of fear more than anger at both low and high level of processing. They also suggest that the effect at high-level processing stage is related more to high-level featural and/or configural information than to the low-level frequency spectrum. PMID:23637687
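
Band-limiting a face image to a range of cycles/face, as in Experiment 1, can be done with a radial mask in the Fourier domain. The sketch below assumes the face fills the frame, so that cycles/image approximates cycles/face; the function name and hard-edged mask are illustrative simplifications:

```python
import numpy as np

def bandpass_cycles(image, low=None, high=None):
    """Keep spatial frequencies between `low` and `high` cycles/image.

    With the face framed to fill the image, low=12, high=28 approximates
    the mid-band (12-28 cycles/face) used in the study.
    """
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h          # frequencies in cycles/image
    fx = np.fft.fftfreq(w) * w
    r = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    mask = np.ones((h, w), dtype=bool)
    if low is not None:
        mask &= r >= low
    if high is not None:
        mask &= r <= high
    mask[0, 0] = True                   # always keep the DC (mean) term
    return np.real(np.fft.ifft2(np.fft.fft2(image) * mask))
```

Passing `high=8` gives the low-pass condition (<8 cycles/face) and `low=32` the high-pass condition (>32 cycles/face) described above.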

  15. Temporal Tuning of Word- and Face-selective Cortex.

    PubMed

    Yeatman, Jason D; Norcia, Anthony M

    2016-11-01

    Sensitivity to temporal change places fundamental limits on object processing in the visual system. An emerging consensus from the behavioral and neuroimaging literature suggests that temporal resolution differs substantially for stimuli of different complexity and for brain areas at different levels of the cortical hierarchy. Here, we used steady-state visually evoked potentials to directly measure three fundamental parameters that characterize the underlying neural response to text and face images: temporal resolution, peak temporal frequency, and response latency. We presented full-screen images of text or a human face, alternated with a scrambled image, at temporal frequencies between 1 and 12 Hz. These images elicited a robust response at the first harmonic that showed differential tuning, scalp topography, and delay for the text and face images. Face-selective responses were maximal at 4 Hz, but text-selective responses, by contrast, were maximal at 1 Hz. The topography of the text image response was strongly left-lateralized at higher stimulation rates, whereas the response to the face image was slightly right-lateralized but nearly bilateral at all frequencies. Both text and face images elicited steady-state activity at more than one apparent latency; we observed early (141-160 msec) and late (>250 msec) text- and face-selective responses. These differences in temporal tuning profiles are likely to reflect differences in the nature of the computations performed by word- and face-selective cortex. Despite the close proximity of word- and face-selective regions on the cortical surface, our measurements demonstrate substantial differences in the temporal dynamics of word- versus face-selective responses.
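
The first-harmonic responses analyzed here are the spectral components of the recorded signal at the stimulation frequency. A sketch of extracting that amplitude from a single trace, as a simplification of full steady-state VEP analysis (function name and interface are assumptions):

```python
import numpy as np

def first_harmonic_amplitude(signal, fs, stim_hz):
    """Amplitude of the steady-state response at the stimulation frequency.

    signal:  1-D trace, ideally spanning an integer number of stimulus cycles
    fs:      sampling rate in Hz
    stim_hz: stimulation frequency (e.g. the 1-12 Hz image alternation rates)
    """
    n = len(signal)
    spectrum = np.fft.rfft(signal) / n
    k = int(round(stim_hz * n / fs))     # bin index of the first harmonic
    return 2.0 * abs(spectrum[k])        # one-sided amplitude
```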

  16. Rotational distortion correction in endoscopic optical coherence tomography based on speckle decorrelation

    PubMed Central

    Uribe-Patarroyo, Néstor; Bouma, Brett E.

    2015-01-01

    We present a new technique for the correction of nonuniform rotation distortion in catheter-based optical coherence tomography (OCT), based on the statistics of speckle between A-lines using intensity-based dynamic light scattering. This technique does not rely on tissue features and can be performed on single frames of data, thereby enabling real-time image correction. We demonstrate its suitability in a gastrointestinal balloon-catheter OCT system, determining the actual rotational speed with high temporal resolution, and present corrected cross-sectional and en face views showing significant enhancement of image quality. PMID:26625040
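
The core quantity in such intensity-based dynamic light scattering approaches is the correlation between successive A-lines: the faster the beam moves, the more the speckle decorrelates between acquisitions. A simplified sketch using the Pearson correlation of adjacent A-lines (the published method uses calibrated speckle statistics to recover actual rotational speed):

```python
import numpy as np

def aline_decorrelation(frame):
    """Per-position decorrelation between adjacent OCT A-lines.

    frame: intensity image, shape (n_alines, n_depth)
    Returns 1 - Pearson correlation of each A-line with the next;
    higher values indicate faster beam motion.
    """
    a = frame[:-1] - frame[:-1].mean(axis=1, keepdims=True)
    b = frame[1:] - frame[1:].mean(axis=1, keepdims=True)
    num = (a * b).sum(axis=1)
    den = np.sqrt((a * a).sum(axis=1) * (b * b).sum(axis=1))
    return 1.0 - num / den
```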

  17. Multimodal Imaging Using a 11B(d,nγ)12C Source

    NASA Astrophysics Data System (ADS)

    Nattress, Jason; Rose, Paul; Mayer, Michal; Wonders, Marc; Wilhelm, Kyle; Erickson, Anna; Jovanovic, Igor; Multimodal Imaging; Nuclear Detection (MIND) in Active Interrogation Collaboration

    2016-03-01

    Detection of shielded special nuclear material (SNM) still remains one of the greatest challenges facing nuclear security, where small signal-to-background ratios result from complex, challenging configurations of practical objects. Passive detection relies on the spontaneous radioactive decay, whereas active interrogation (AI) uses external probing radiation to identify and characterize the material. AI provides higher signal intensity, providing a more viable method for SNM detection. New and innovative approaches are needed to overcome specific application constraints, such as limited scanning time. We report on a new AI approach that integrates both neutron and gamma transmission signatures to deduce specific material properties that can be utilized to aid SNM identification. The approach uses a single AI source, single detector type imaging system based on the 11B(d,nγ)12C reaction and an array of eight EJ-309 liquid scintillators, respectively. An integral transmission imaging approach has been employed initially for both neutrons and photons, exploiting the detectors' particle discrimination properties. Representative object images using neutrons and photons will be presented.

  18. Design of efficient, broadband single-element (20-80 MHz) ultrasonic transducers for medical imaging applications.

    PubMed

    Cannata, Jonathan M; Ritter, Timothy A; Chen, Wo-Hsing; Silverman, Ronald H; Shung, K Kirk

    2003-11-01

    This paper discusses the design, fabrication, and testing of sensitive broadband lithium niobate (LiNbO3) single-element ultrasonic transducers in the 20-80 MHz frequency range. Transducers of varying dimensions were built for an f# range of 2.0-3.1. The desired focal depths were achieved by either casting an acoustic lens on the transducer face or press-focusing the piezoelectric into a spherical curvature. For designs that required electrical impedance matching, a low impedance transmission line coaxial cable was used. All transducers were tested in a pulse-echo arrangement, whereby the center frequency, bandwidth, insertion loss, and focal depth were measured. Several transducers were fabricated with center frequencies in the 20-80 MHz range with the measured -6 dB bandwidths and two-way insertion loss values ranging from 57 to 74% and 9.6 to 21.3 dB, respectively. Both transducer focusing techniques proved successful in producing highly sensitive, high-frequency, single-element, ultrasonic-imaging transducers. In vivo and in vitro ultrasonic backscatter microscope (UBM) images of human eyes were obtained with the 50 MHz transducers. The high sensitivity of these devices could possibly allow for an increase in depth of penetration, higher image signal-to-noise ratio (SNR), and improved image contrast at high frequencies when compared to previously reported results.
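
The -6 dB bandwidth figures quoted above are read off a pulse-echo spectrum as the band within 6 dB of the peak, expressed as a percentage of the center frequency. A sketch of that calculation (a hypothetical helper, not the authors' measurement code):

```python
import numpy as np

def fractional_bandwidth(freqs, spectrum_db):
    """-6 dB fractional bandwidth of a pulse-echo spectrum.

    freqs:       frequency axis (MHz)
    spectrum_db: echo magnitude spectrum in dB
    Returns (center_MHz, bandwidth_percent): e.g. a 57% bandwidth at
    50 MHz center frequency spans about 28.5 MHz.
    """
    peak = spectrum_db.max()
    above = np.where(spectrum_db >= peak - 6.0)[0]   # band within 6 dB of peak
    f_lo, f_hi = freqs[above[0]], freqs[above[-1]]
    fc = 0.5 * (f_lo + f_hi)
    return fc, 100.0 * (f_hi - f_lo) / fc
```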

  19. Misleading first impressions: different for different facial images of the same person.

    PubMed

    Todorov, Alexander; Porter, Jenny M

    2014-07-01

    Studies on first impressions from facial appearance have rapidly proliferated in the past decade. Almost all of these studies have relied on a single face image per target individual, and differences in impressions have been interpreted as originating in stable physiognomic differences between individuals. Here we show that images of the same individual can lead to different impressions, with within-individual image variance comparable to or exceeding between-individuals variance for a variety of social judgments (Experiment 1). We further show that preferences for images shift as a function of the context (e.g., selecting an image for online dating vs. a political campaign; Experiment 2), that preferences are predictably biased by the selection of the images (e.g., an image fitting a political campaign vs. a randomly selected image; Experiment 3), and that these biases are evident after extremely brief (40-ms) presentation of the images (Experiment 4). We discuss the implications of these findings for studies on the accuracy of first impressions. © The Author(s) 2014.

  20. Semi-Supervised Sparse Representation Based Classification for Face Recognition With Insufficient Labeled Samples

    NASA Astrophysics Data System (ADS)

    Gao, Yuan; Ma, Jiayi; Yuille, Alan L.

    2017-05-01

    This paper addresses the problem of face recognition when there are only a few labeled examples of the face that we wish to recognize, or even just one. Moreover, these examples are typically corrupted by nuisance variables, both linear (i.e., additive nuisance variables such as bad lighting, wearing of glasses) and non-linear (i.e., non-additive pixel-wise nuisance variables such as expression changes). The small number of labeled examples means that it is hard to remove these nuisance variables between the training and testing faces to obtain good recognition performance. To address the problem we propose a method called Semi-Supervised Sparse Representation based Classification (S³RC). This is based on recent work on sparsity where faces are represented in terms of two dictionaries: a gallery dictionary consisting of one or more examples of each person, and a variation dictionary representing linear nuisance variables (e.g., different lighting conditions, different glasses). The main idea is that (i) we use the variation dictionary to characterize the linear nuisance variables via the sparsity framework, then (ii) prototype face images are estimated as a gallery dictionary via a Gaussian Mixture Model (GMM), with mixed labeled and unlabeled samples in a semi-supervised manner, to deal with the non-linear nuisance variations between labeled and unlabeled samples. We have conducted experiments with insufficient labeled samples, even when there is only a single labeled sample per person. Our results on the AR, Multi-PIE, CAS-PEAL, and LFW databases demonstrate that the proposed method delivers significantly improved performance over existing methods.
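
The gallery-plus-variation decomposition can be sketched with ridge regression standing in for the l1 sparse solver, and without the GMM gallery estimation; only the residual-per-class decision rule of sparse-representation classification is kept. All names and interfaces below are illustrative:

```python
import numpy as np

def src_classify(gallery, labels, variation, probe, lam=0.0):
    """Minimal sparse-representation-style classifier (ridge variant).

    gallery:   (d, n) matrix, one column per labeled face
    labels:    length-n class labels for the gallery columns
    variation: (d, m) dictionary of nuisance variation (e.g. lighting
               differences); pass a (d, 0) array to disable
    probe:     length-d test face
    """
    D = np.hstack([gallery, variation])
    # ridge solution x = (D^T D + lam I)^{-1} D^T y
    x = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ probe)
    n = len(labels)
    best, best_res = None, np.inf
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        xc = np.zeros_like(x)
        xc[:n][mask] = x[:n][mask]      # keep only class-c gallery coefficients
        xc[n:] = x[n:]                  # keep the nuisance-variation part
        res = np.linalg.norm(probe - D @ xc)
        if res < best_res:
            best, best_res = c, res
    return best
```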

  1. Lomonosov Crater, Day and Night

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 16 June 2004 This pair of images shows part of Lomonosov Crater.

    Day/Night Infrared Pairs

    The image pairs presented focus on a single surface feature as seen in both the daytime and nighttime by the infrared THEMIS camera. The nighttime image (right) has been rotated 180 degrees to place north at the top.

    Infrared image interpretation

    Daytime: Infrared images taken during the daytime exhibit both the morphological and thermophysical properties of the surface of Mars. Morphologic details are visible due to the effect of sun-facing slopes receiving more energy than antisun-facing slopes. This creates a warm (bright) slope and cool (dark) slope appearance that mimics the light and shadows of a visible wavelength image. Thermophysical properties are seen in that dust heats up more quickly than rocks. Thus dusty areas are bright and rocky areas are dark.

    Nighttime: Infrared images taken during the nighttime exhibit only the thermophysical properties of the surface of Mars. The effect of sun-facing versus non-sun-facing energy dissipates quickly at night. Thermophysical effects dominate as different surfaces cool at different rates through the nighttime hours. Rocks cool slowly, and are therefore relatively bright at night (remember that rocks are dark during the day). Dust and other fine grained materials cool very quickly and are dark in nighttime infrared images.

    Image information: IR instrument. Latitude 64.9, Longitude 350.7 East (9.3 West). 100 meter/pixel resolution.

    Note: this THEMIS visual image has been neither radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  2. Arsia Mons by Day and Night

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 22 June 2004 This pair of images shows part of Arsia Mons.

    Image information: IR instrument. Latitude -19.6, Longitude 241.9 East (118.1 West). 100 meter/pixel resolution.

  3. Albor Tholus by Day and Night

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 21 June 2004 This pair of images shows part of Albor Tholus.

    Day/Night Infrared Pairs

    The image pairs presented focus on a single surface feature as seen in both the daytime and nighttime by the infrared THEMIS camera. The nighttime image (right) has been rotated 180 degrees to place north at the top.

    Infrared image interpretation

    Daytime: Infrared images taken during the daytime exhibit both the morphological and thermophysical properties of the surface of Mars. Morphologic details are visible because sun-facing slopes receive more energy than anti-sun-facing slopes, creating a warm (bright) slope and cool (dark) slope appearance that mimics the light and shadow of a visible-wavelength image. Thermophysical properties appear because dust heats up more quickly than rock: dusty areas are bright and rocky areas are dark.

    Nighttime: Infrared images taken at night exhibit only the thermophysical properties of the surface of Mars. The contrast between sun-facing and non-sun-facing slopes dissipates quickly after sunset, and thermophysical effects dominate as different surfaces cool at different rates through the night. Rocks cool slowly and are therefore relatively bright at night (recall that rocks are dark during the day); dust and other fine-grained materials cool very quickly and appear dark in nighttime infrared images.
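
    The day/night brightness reversal described above can be sketched with a toy relaxation model (illustrative only; the time constants and forcing values below are arbitrary choices, not calibrated THEMIS quantities):

```python
import math

def simulate(tau_hours, hours=48, steps_per_hour=10,
             t_mean=210.0, t_amp=60.0):
    """Toy relaxation model dT/dt = (T_forcing - T) / tau, where a larger
    tau stands in for higher thermal inertia (rock) and a smaller tau for
    dust. Returns the temperature at every time step."""
    dt = 1.0 / steps_per_hour
    temps, t = [], t_mean
    for i in range(hours * steps_per_hour):
        h = (i * dt) % 24.0
        # insolation-like forcing peaking near local noon (h = 12)
        forcing = t_mean + t_amp * math.sin(2 * math.pi * (h - 6.0) / 24.0)
        t += dt * (forcing - t) / tau_hours
        temps.append(t)
    return temps

dust = simulate(tau_hours=0.5)   # low thermal inertia: tracks the sun
rock = simulate(tau_hours=8.0)   # high thermal inertia: lags and smooths

day, night = 38 * 10, 26 * 10    # 14:00 and 02:00 on the second sol
```

    With these arbitrary constants the low-inertia "dust" is warmer (brighter) than the high-inertia "rock" in the afternoon, and the ordering reverses at night — the same contrast reversal seen in the THEMIS day/night pairs.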

    Image information: IR instrument. Latitude 17.6, Longitude 150.3 East (209.7 West). 100 meter/pixel resolution.


  4. Ares Vallis: Night and Day

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 15 June 2004. This pair of images shows part of the Ares Vallis region.

    Image information: IR instrument. Latitude 3.6, Longitude 339.9 East (20.1 West). 100 meter/pixel resolution.

  5. Channel by Day and Night

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 17 June 2004. This pair of images shows part of a small channel.

    Image information: IR instrument. Latitude 19.8, Longitude 141.5 East (218.5 West). 100 meter/pixel resolution.

  6. Noctis Labyrinthus by Day and Night

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 25 June 2004. This pair of images shows part of Noctis Labyrinthus.

    Image information: IR instrument. Latitude -9.6, Longitude 264.5 East (95.5 West). 100 meter/pixel resolution.

  7. Ius Chasma by Day and Night

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 18 June 2004. This pair of images shows part of Ius Chasma.

    Image information: IR instrument. Latitude -1, Longitude 276 East (84 West). 100 meter/pixel resolution.

  8. Crater Ejecta by Day and Night

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 24 June 2004. This pair of images shows a crater and its ejecta.

    Image information: IR instrument. Latitude -9, Longitude 164.2 East (195.8 West). 100 meter/pixel resolution.

  9. Gusev Crater by Day and Night

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 23 June 2004. This pair of images shows part of Gusev Crater.

    Image information: IR instrument. Latitude -14.5, Longitude 175.5 East (184.5 West). 100 meter/pixel resolution.

  10. Meridiani Crater in Day and Night

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 14 June 2004. This pair of images shows crater ejecta in the Terra Meridiani region.

    Image information: IR instrument. Latitude -1.6, Longitude 4.1 East (355.9 West). 100 meter/pixel resolution.

  11. Day And Night In Terra Meridiani

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 11 June 2004. This pair of images shows part of the Terra Meridiani region.

    Image information: IR instrument. Latitude 1.3, Longitude 0.5 East (359.5 West). 100 meter/pixel resolution.

  12. Three-dimensional evaluation of the relationship between jaw divergence and facial soft tissue dimensions.

    PubMed

    Rongo, Roberto; Antoun, Joseph Saswat; Lim, Yi Xin; Dias, George; Valletta, Rosa; Farella, Mauro

    2014-09-01

    To evaluate the relationship between mandibular divergence and the vertical and transverse dimensions of the face, a sample was recruited from the orthodontic clinic of the University of Otago, New Zealand. The participants (N = 60) were assigned to three groups based on the mandibular plane angle (hyperdivergent, n = 20; normodivergent, n = 20; hypodivergent, n = 20). The sample consisted of 31 females and 29 males, with a mean age of 21.1 years (SD = 5.0). Facial scans were recorded for each participant using a three-dimensional (3D) white-light scanner and then merged to form a single 3D image of the face, from which vertical and transverse measurements were assessed. The hyperdivergent group had a significantly larger total and lower anterior facial height than the other two groups (P < .05), although no difference was found for middle facial height (P > .05). Similarly, there were no significant differences in the transverse measurements of the three study groups (P > .05); gender and body mass index (BMI) had a greater influence on the transverse dimension. Hyperdivergent facial types are associated with a long face but not necessarily a narrow face. Variations in facial soft tissue vertical and transverse dimensions are more likely to be due to gender, and BMI plays a role in mandibular width (GoGo) assessment.
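
    Measurements like those in this study reduce to distances between landmark coordinates on the merged 3D scan. A minimal sketch — the landmark names follow standard cephalometric convention, but the coordinates below are invented purely for illustration:

```python
import numpy as np

# Hypothetical 3-D landmark coordinates in mm (invented for illustration)
landmarks = {
    "nasion":    np.array([0.0, 105.0, 10.0]),
    "subnasale": np.array([0.0,  55.0, 18.0]),
    "menton":    np.array([0.0,   0.0,  5.0]),
    "gonion_l":  np.array([-58.0, 18.0, -30.0]),
    "gonion_r":  np.array([ 58.0, 18.0, -30.0]),
}

def dist(a, b):
    """Euclidean distance between two named landmarks."""
    return float(np.linalg.norm(landmarks[a] - landmarks[b]))

middle_height = dist("nasion", "subnasale")   # middle facial height
lower_height  = dist("subnasale", "menton")   # lower anterior facial height
gogo_width    = dist("gonion_l", "gonion_r")  # mandibular width (GoGo)
```

    The same three-point pattern extends to any vertical or transverse dimension once the landmarks are digitised on the 3D surface.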

  13. Enhanced Visualization of Subtle Outer Retinal Pathology by En Face Optical Coherence Tomography and Correlation with Multi-Modal Imaging

    PubMed Central

    Chew, Avenell L.; Lamey, Tina; McLaren, Terri; De Roach, John

    2016-01-01

    Purpose: To present en face optical coherence tomography (OCT) images generated by custom software based on a graph-search-theory algorithm, and to examine their correlation with other imaging modalities. Methods: En face OCT images derived from high-density OCT volumetric scans of 3 healthy subjects and 4 patients using the custom algorithm and commercial software (Heidelberg Eye Explorer, Heidelberg Engineering) were compared and correlated with near-infrared reflectance, fundus autofluorescence, adaptive optics flood-illumination ophthalmoscopy (AO-FIO) and microperimetry. Results: The commercial software was unable to generate accurate en face OCT images in eyes with retinal pigment epithelium (RPE) pathology because of segmentation error at the level of Bruch's membrane (BM). Accurate segmentation of the basal RPE and BM was achieved using the custom software. En face OCT images from eyes with isolated interdigitation- or ellipsoid-zone pathology were of similar quality between the custom and Heidelberg software in the absence of any other significant outer retinal pathology. En face OCT images demonstrated angioid streaks, lesions of acute macular neuroretinopathy, hydroxychloroquine toxicity and Bietti crystalline deposits that correlated with other imaging modalities. Conclusions: The graph-search-theory algorithm helps to overcome the outer retinal segmentation inaccuracies of commercial software. En face OCT images can provide detailed topography of the reflectivity within a specific retinal layer that correlates with other forms of fundus imaging. Our results highlight the need for standardization of image reflectivity to facilitate quantification of en face OCT images and longitudinal analysis. PMID:27959968
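
    The paper's segmentation code is not reproduced here, but the core idea of graph-search layer segmentation — tracing a minimum-cost boundary across a B-scan — can be sketched as a simple dynamic-programming shortest path (a deliberate simplification; the published method enforces richer constraints):

```python
import numpy as np

def trace_layer(cost):
    """Dynamic-programming shortest path, left to right, with row moves
    of -1/0/+1 per column. cost is low where the boundary is likely."""
    h, w = cost.shape
    acc = cost.astype(float).copy()
    back = np.zeros((h, w), dtype=int)
    for x in range(1, w):
        for y in range(h):
            lo, hi = max(0, y - 1), min(h, y + 2)
            j = int(np.argmin(acc[lo:hi, x - 1])) + lo
            back[y, x] = j
            acc[y, x] += acc[j, x - 1]
    path = np.zeros(w, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for x in range(w - 1, 0, -1):
        path[x - 1] = back[path[x], x]
    return path

# Synthetic "B-scan": a dark, low-cost boundary along row 5
scan = np.ones((12, 20))
scan[5, :] = 0.0
boundary = trace_layer(scan)
```

    Once a boundary such as BM is traced in every B-scan, an en face image is simply the reflectivity sampled along (or at a fixed offset from) that surface across the volume.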

  14. Locally Linear Embedding of Local Orthogonal Least Squares Images for Face Recognition

    NASA Astrophysics Data System (ADS)

    Hafizhelmi Kamaru Zaman, Fadhlan

    2018-03-01

    Dimensionality reduction is very important in face recognition since it ensures that high-dimensional data can be mapped to a lower-dimensional space without losing salient and integral facial information. Locally Linear Embedding (LLE) has previously been used to serve this purpose; however, the process of acquiring LLE features requires high computation and resources. To overcome this limitation, we propose a locally applied Local Orthogonal Least Squares (LOLS) model that can be used for initial feature extraction before the application of LLE. By constructing least-squares regression under orthogonal constraints, we can preserve more discriminant information in the local subspace of facial features while reducing the overall features into a more compact form that we call LOLS images. LLE can then be applied to the LOLS images to map their representation into a global coordinate system of much lower dimensionality. Several experiments carried out using publicly available face datasets such as AR, ORL, YaleB, and FERET under the Single Sample Per Person (SSPP) constraint demonstrate that our proposed method reduces the time required to compute LLE features while delivering better accuracy than when either LLE or OLS alone is used. Comparisons against several other feature extraction methods and a more recent feature-learning method, state-of-the-art Convolutional Neural Networks (CNNs), also reveal the superiority of the proposed method under the SSPP constraint.
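
    As background, the standard LLE algorithm itself (not the paper's LOLS preprocessing, which is specific to this work) can be sketched in NumPy: reconstruct each point from its nearest neighbours, then find low-dimensional coordinates that preserve those reconstruction weights:

```python
import numpy as np

def lle(X, k=8, d=2, reg=1e-3):
    """Minimal Locally Linear Embedding (NumPy only)."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    nbrs = np.argsort(d2, axis=1)[:, 1:k + 1]   # k nearest, excluding self
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                   # neighbours centred on x_i
        G = Z @ Z.T                             # local Gram matrix
        G += reg * np.trace(G) * np.eye(k)      # regularise (k > ambient dim)
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs[i]] = w / w.sum()             # reconstruction weights
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:d + 1]                     # drop the constant mode

# Embed a 3-D helix into one intrinsic coordinate
t = np.linspace(0, 3 * np.pi, 200)
X = np.c_[np.sin(t), np.cos(t), t]
embedding = lle(X, k=8, d=1)
```

    The O(n²) neighbour search and the n×n eigenproblem are exactly the costs the paper reduces by first compacting the features into LOLS images.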

  15. Adapting Local Features for Face Detection in Thermal Image.

    PubMed

    Ma, Chao; Trung, Ngo Thanh; Uchiyama, Hideaki; Nagahara, Hajime; Shimada, Atsushi; Taniguchi, Rin-Ichiro

    2017-11-27

    A thermal camera captures the temperature distribution of a scene as a thermal image. In thermal images, the facial appearances of different people under different lighting conditions are similar, because facial temperature distribution is generally constant and not affected by lighting. This similarity in face appearance is advantageous for face detection. To detect faces in thermal images, cascade classifiers with Haar-like features are generally used; however, few studies have explored local features for face detection in thermal images. In this paper, we introduce two approaches relying on local features for face detection in thermal images. First, we create new feature types by extending Multi-Block LBP, taking into account a margin around the reference value and the generally constant distribution of facial temperature. In this way, we make the features more robust to image noise and more effective for face detection in thermal images. Second, we propose an AdaBoost-based training method to obtain cascade classifiers with multiple types of local features; because these feature types have different advantages, this enhances the descriptive power of the local features. We conducted a hold-out validation experiment and a field experiment. In the hold-out validation experiment, we captured a dataset from 20 participants, comprising 14 males and 6 females. For each participant, we captured 420 images with 10 variations in camera distance, 21 poses, and 2 appearances (with/without glasses). We compared the performance of cascade classifiers trained with different feature sets; the results showed that the proposed approaches effectively improve face detection performance in thermal images. In the field experiment, we compared face detection performance in realistic scenes using thermal and RGB images and discuss the results.
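
    A plain Multi-Block LBP feature — the starting point this paper extends — can be sketched as follows. The margin parameter loosely mimics the robustness idea described above, though the authors' exact formulation may differ:

```python
import numpy as np

def mb_lbp(img, y, x, bh, bw, margin=0.0):
    """Multi-Block LBP at (y, x): compare the mean intensity of the 8
    blocks surrounding a centre block (each bh x bw pixels) against the
    centre mean plus a margin; returns an 8-bit code."""
    means = [img[y + dy * bh:y + (dy + 1) * bh,
                 x + dx * bw:x + (dx + 1) * bw].mean()
             for dy in range(3) for dx in range(3)]
    centre = means[4]
    order = [0, 1, 2, 5, 8, 7, 6, 3]   # clockwise around the centre
    bits = [1 if means[i] >= centre + margin else 0 for i in order]
    return sum(b << k for k, b in enumerate(bits))

flat = np.ones((6, 6))
code_flat = mb_lbp(flat, 0, 0, 2, 2)   # uniform region: all comparisons true

peak = np.zeros((6, 6))
peak[2:4, 2:4] = 1.0                   # only the centre block is bright
code_peak = mb_lbp(peak, 0, 0, 2, 2)
```

    In a cascade, many such codes at different positions and block sizes are pooled by boosting into weak classifiers, which is the setting the proposed AdaBoost variant addresses.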

  16. The effect of face inversion for neurons inside and outside fMRI-defined face-selective cortical regions

    PubMed Central

    Van Belle, Goedele; Vanduffel, Wim; Rossion, Bruno; Vogels, Rufin

    2014-01-01

    It is widely believed that face processing in the primate brain occurs in a network of category-selective cortical regions. Combined functional MRI (fMRI)-single-cell recording studies in macaques have identified high concentrations of neurons that respond more to faces than objects within face-selective patches. However, cells with a preference for faces over objects are also found scattered throughout inferior temporal (IT) cortex, raising the question whether face-selective cells inside and outside of the face patches differ functionally. Here, we compare the properties of face-selective cells inside and outside of face-selective patches in the IT cortex by means of an image manipulation that reliably disrupts behavior toward face processing: inversion. We recorded IT neurons from two fMRI-defined face-patches (ML and AL) and a region outside of the face patches (herein labeled OUT) during upright and inverted face stimulation. Overall, turning faces upside down reduced the firing rate of face-selective cells. However, there were differences among the recording regions. First, the reduced neuronal response for inverted faces was independent of stimulus position, relative to fixation, in the face-selective patches (ML and AL) only. Additionally, the effect of inversion for face-selective cells in ML, but not those in AL or OUT, was impervious to whether the neurons were initially searched for using upright or inverted stimuli. Collectively, these results show that face-selective cells differ in their functional characteristics depending on their anatomicofunctional location, suggesting that upright faces are preferably coded by face-selective cells inside but not outside of the fMRI-defined face-selective regions of the posterior IT cortex. PMID:25520434

  17. Neural network face recognition using wavelets

    NASA Astrophysics Data System (ADS)

    Karunaratne, Passant V.; Jouny, Ismail I.

    1997-04-01

    The recognition of human faces is a phenomenon that has been mastered by the human visual system and that has been researched extensively in the domain of computer neural networks and image processing. This research is involved in the study of neural networks and wavelet image processing techniques in the application of human face recognition. The objective of the system is to acquire a digitized still image of a human face, carry out pre-processing on the image as required, and then, given a prior database of images of possible individuals, be able to recognize the individual in the image. The pre-processing segment of the system includes several procedures, namely image compression, denoising, and feature extraction. The image processing is carried out using Daubechies wavelets. Once the images have been passed through the wavelet-based image processor they can be efficiently analyzed by means of a neural network. A back-propagation neural network is used for the recognition segment of the system. The main constraints of the system are with regard to the characteristics of the images being processed. The system should be able to carry out effective recognition of the human faces irrespective of the individual's facial expression, presence of extraneous objects such as head-gear or spectacles, and face/head orientation. A potential application of this face recognition system would be as a secondary verification method in an automated teller machine.

  18. Validating automatic semantic annotation of anatomy in DICOM CT images

    NASA Astrophysics Data System (ADS)

    Pathak, Sayan D.; Criminisi, Antonio; Shotton, Jamie; White, Steve; Robertson, Duncan; Sparks, Bobbi; Munasinghe, Indeera; Siddiqui, Khan

    2011-03-01

    In the current health-care environment, the time available for physicians to browse patients' scans is shrinking due to the rapid increase in the sheer number of images. This is further aggravated by mounting pressure to become more productive in the face of decreasing reimbursement. Hence, there is an urgent need to deliver technology which enables faster and effortless navigation through sub-volume image visualizations. Annotating image regions with semantic labels such as those derived from the RadLex ontology can vastly enhance image navigation and sub-volume visualization. This paper uses random regression forests for efficient, automatic detection and localization of anatomical structures within DICOM 3D CT scans. A regression forest is a collection of decision trees which are trained to achieve direct mapping from voxels to organ location and size in a single pass. This paper focuses on comparing automated labeling with expert-annotated ground-truth results on a database of 50 highly variable CT scans. Initial investigations show that regression forest derived localization errors are smaller and more robust than those achieved by state-of-the-art global registration approaches. The simplicity of the algorithm's context-rich visual features yields typical runtimes of less than 10 seconds for a 512³-voxel DICOM CT series on a single-threaded, single-core machine running multiple trees; each tree taking less than a second. Furthermore, qualitative evaluation demonstrates that using the detected organs' locations as an index into the image volume improves the efficiency of the navigational workflow in all the CT studies.

  19. Face-n-Food: Gender Differences in Tuning to Faces.

    PubMed

    Pavlova, Marina A; Scheffler, Klaus; Sokolov, Alexander N

    2015-01-01

    Faces represent valuable signals for social cognition and non-verbal communication. A wealth of research indicates that women tend to excel in recognition of facial expressions. However, it remains unclear whether females are better tuned to faces. We presented healthy adult females and males with a set of newly created food-plate images resembling faces (slightly bordering on the Giuseppe Arcimboldo style). In a spontaneous recognition task, participants were shown a set of images in a predetermined order from the least to most resembling a face. Females not only recognized the images as a face more readily (they reported a face in images in which males still did not), but also gave more face responses overall. The findings are discussed in the light of gender differences in deficient face perception. As most neuropsychiatric, neurodevelopmental and psychosomatic disorders characterized by social brain abnormalities are sex specific, the task may serve as a valuable tool for uncovering impairments in visual face processing. PMID:26154177


  1. Face recognition system using multiple face model of hybrid Fourier feature under uncontrolled illumination variation.

    PubMed

    Hwang, Wonjun; Wang, Haitao; Kim, Hyunwoo; Kee, Seok-Cheol; Kim, Junmo

    2011-04-01

    The authors present a robust face recognition system for large-scale data sets taken under uncontrolled illumination variations. The proposed face recognition system consists of a novel illumination-insensitive preprocessing method, a hybrid Fourier-based facial feature extraction, and a score fusion scheme. First, in the preprocessing stage, a face image is transformed into an illumination-insensitive image, called an "integral normalized gradient image," by normalizing and integrating the smoothed gradients of a facial image. Then, for feature extraction of complementary classifiers, multiple face models based upon hybrid Fourier features are applied. The hybrid Fourier features are extracted from different Fourier domains in different frequency bandwidths, and then each feature is individually classified by linear discriminant analysis. In addition, multiple face models are generated by plural normalized face images that have different eye distances. Finally, to combine scores from multiple complementary classifiers, a log likelihood ratio-based score fusion scheme is applied. The proposed system is evaluated using the Face Recognition Grand Challenge (FRGC) experimental protocols; FRGC is a large, publicly available data set. Experimental results on the FRGC version 2.0 data sets show that the proposed method achieves an average 81.49% verification rate on 2-D face images under various environmental variations such as illumination changes, expression changes, and elapsed time.

  2. Perceived face size in healthy adults.

    PubMed

    D'Amour, Sarah; Harris, Laurence R

    2017-01-01

    Perceptual body size distortions have traditionally been studied using subjective, qualitative measures that assess only one type of body representation-the conscious body image. Previous research on perceived body size has typically focused on measuring distortions of the entire body and has tended to overlook the face. Here, we present a novel psychophysical method for determining perceived body size that taps into implicit body representation. Using a two-alternative forced choice (2AFC), participants were sequentially shown two life-size images of their own face, viewed upright, upside down, or tilted 90°. In one interval, the width or length dimension was varied, while the other interval contained an undistorted image. Participants reported which image most closely matched their own face. An adaptive staircase adjusted the distorted image to home in on the distortion level that was as likely as the accurate image to be judged as matching their perceived face. When viewed upright or upside down, face width was overestimated and length underestimated, whereas perception was accurate for the on-side views. These results provide the first psychophysically robust measurements of how accurately healthy participants perceive the size of their face, revealing distortions of the implicit body representation independent of the conscious body image.
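    The adaptive staircase described above can be sketched as a simple 1-up/1-down rule. The start level, step size, and stopping rule here are illustrative assumptions, not the authors' parameters:

```python
def staircase(judge, start=1.30, step=0.05, reversals_needed=8):
    """1-up/1-down adaptive staircase converging on the distortion level
    that is equally likely to be judged as matching the true face.

    judge(level) -> True if the distorted image at `level`
    (1.0 = undistorted) is picked as the better match in a 2AFC trial.
    """
    level, last_dir, reversals = start, None, []
    while len(reversals) < reversals_needed:
        picked = judge(level)
        # if the distorted image still passes as the observer's own face,
        # reduce the distortion; otherwise increase it again
        direction = -1 if picked else +1
        if last_dir is not None and direction != last_dir:
            reversals.append(level)  # record the level at each reversal
        last_dir = direction
        level = max(1.0, level + direction * step)
    # point of subjective equality: mean of the reversal levels
    return sum(reversals) / len(reversals)
```

With a deterministic observer whose threshold lies between two staircase levels, the estimate converges to the midpoint of those levels.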

  3. A novel thermal face recognition approach using face pattern words

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng

    2010-04-01

    A reliable thermal face recognition system can enhance national security applications such as prevention against terrorism, surveillance, monitoring and tracking, especially at nighttime. The system can be applied at airports, customs or high-alert facilities (e.g., nuclear power plants) 24 hours a day. In this paper, we propose a novel face recognition approach utilizing thermal (long wave infrared) face images that can automatically identify a subject at both daytime and nighttime. With a properly acquired thermal image (as a query image) in the monitoring zone, the following processes are employed: normalization and denoising, face detection, face alignment, face masking, Gabor wavelet transform, face pattern words (FPWs) creation, and face identification by similarity measure (Hamming distance). If eyeglasses are present on a subject's face, an eyeglasses mask is automatically extracted from the querying face image and applied to all comparison FPWs (no further transforms). A high identification rate (97.44% with Top-1 match) has been achieved with the proposed approach on our preliminary face dataset (39 subjects), regardless of operating time and glasses-wearing condition.
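    The final matching step, similarity by Hamming distance with an optional eyeglasses mask, can be sketched on binary bit vectors. The flat bit-vector representation here is a simplification of the paper's face pattern words:

```python
import numpy as np

def hamming_similarity(fpw_query, fpw_gallery, mask=None):
    """Fraction of agreeing bits between two binary face pattern words.

    `mask` (same shape, True = compare) can exclude regions such as
    eyeglasses from the comparison, as described in the abstract.
    """
    q = np.asarray(fpw_query, bool)
    g = np.asarray(fpw_gallery, bool)
    mask = np.ones_like(q) if mask is None else np.asarray(mask, bool)
    disagree = np.logical_xor(q, g) & mask   # Hamming distance on masked bits
    return 1.0 - disagree.sum() / mask.sum()
```

The gallery identity with the highest similarity (smallest Hamming distance) would be reported as the Top-1 match.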

  4. Perspective distortion in craniofacial superimposition: Logarithmic decay curves mapped mathematically and by practical experiment.

    PubMed

    Stephan, Carl N

    2015-12-01

    The superimposition of a face photograph with that of a skull for identification purposes necessitates the use of comparable photographic parameters between the two image acquisition sessions, so that differences in optics and consequent recording of images do not thwart the morphological analysis. Widely divergent, but published, speculations about the thresholds at which perspective distortion becomes negligible (0.5 to >13.5 m) must be resolved, and perspective distortion (PD) relationships quantified across their full range, to judge tolerance levels and the suitability of commonly employed contemporary equipment (e.g., 1 m photographic copy-stands). Herein, basic trigonometry is employed to map PD for two same-sized 179 mm linear lengths - separated anteroposteriorly by 127 mm - as a function of subject-to-camera distance (SCD; 0.2-20 m). These lengths approximate basic craniofacial heights (e.g., tr-n) and widths (e.g., zy-zy), while the separation approximates facial depth (e.g., n-t). As anticipated, PD decayed in a logarithmic and continuous manner with increasing SCD. At an SCD of 12 m, the within-image PD was negligible (<1%). At <2.5 m SCD, it exceeded 5% and increased sharply as SCD decreased. Since life-size images of skulls and faces are commonly employed for superimposition, a relative 1% perspective distortion difference is recommended as the ceiling standard for craniofacial comparison (translating into a ≤2 mm difference in physiognomical face height). Since superimposition depends on relative comparisons of a photographic pair (not one photograph), there is practically no scenario in superimposition casework where SCDs should be ignored and no single distance at which PD should be considered negligible (even if one image holds >12 m SCD). Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
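    The trigonometric relationship can be reproduced with a pinhole model: a feature lying a depth d behind the camera-facing plane is magnified by s/(s + d) relative to an equal-length front-plane feature at subject-to-camera distance s, so the within-image distortion is d/(s + d). A minimal sketch, using the 127 mm facial depth from the abstract:

```python
def perspective_distortion(scd_m, depth_m=0.127):
    """Percent size difference between a facial feature in the camera-facing
    plane and an equal-length feature `depth_m` further away (pinhole model).
    Decays continuously toward zero as subject-to-camera distance grows."""
    return 100.0 * depth_m / (scd_m + depth_m)
```

This simple model reproduces the abstract's ~1% figure at a 12 m SCD and a sharp rise in distortion at copy-stand distances of a metre or less.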

  5. Visual search for emotional expressions: Effect of stimulus set on anger and happiness superiority.

    PubMed

    Savage, Ruth A; Becker, Stefanie I; Lipp, Ottmar V

    2016-01-01

    Prior reports of preferential detection of emotional expressions in visual search have yielded inconsistent results, even for face stimuli that avoid obvious expression-related perceptual confounds. The current study investigated inconsistent reports of anger and happiness superiority effects using face stimuli drawn from the same database. Experiment 1 excluded procedural differences as a potential factor, replicating a happiness superiority effect in a procedure that previously yielded an anger superiority effect. Experiments 2a and 2b confirmed that image colour or poser gender did not account for prior inconsistent findings. Experiments 3a and 3b identified stimulus set as the critical variable, revealing happiness or anger superiority effects for two partially overlapping sets of face stimuli. The current results highlight the critical role of stimulus selection for the observation of happiness or anger superiority effects in visual search, even for face stimuli that avoid obvious expression-related perceptual confounds and are drawn from a single database.

  6. Rating Nasolabial Aesthetics in Unilateral Cleft Lip and Palate Patients: Cropped Versus Full-Face Images.

    PubMed

    Schwirtz, Roderic M F; Mulder, Frans J; Mosmuller, David G M; Tan, Robin A; Maal, Thomas J; Prahl, Charlotte; de Vet, Henrica C W; Don Griot, J Peter W

    2018-05-01

    To determine if cropping facial images affects nasolabial aesthetics assessments in unilateral cleft lip patients and to evaluate the effect of facial attractiveness on nasolabial evaluation. Two cleft surgeons and one cleft orthodontist assessed standardized frontal photographs 4 times; nasolabial aesthetics were rated on cropped and full-face images using the Cleft Aesthetic Rating Scale (CARS), and total facial attractiveness was rated on full-face images with and without the nasolabial area blurred using a 5-point Likert scale. Cleft Palate Craniofacial Unit of a University Medical Center. Inclusion criteria: nonsyndromic unilateral cleft lip and an available frontal view photograph around 10 years of age. Exclusion criteria: a history of facial trauma and an incomplete cleft. Eighty-one photographs were available for assessment. Differences in mean CARS scores between cropped versus full-face photographs and attractive versus unattractive rated patients were evaluated by paired t test. Nasolabial aesthetics are scored more negatively on full-face photographs compared to cropped photographs, regardless of facial attractiveness. (Mean CARS score, nose: cropped = 2.8, full-face = 3.0, P < .001; lip: cropped = 2.4, full-face = 2.7, P < .001; nose and lip: cropped = 2.6, full-face = 2.8, P < .001). Aesthetic outcomes of the nasolabial area are assessed significantly more positively when using cropped images compared to full-face images. For this reason, cropping images, revealing the nasolabial area only, is recommended for aesthetic assessments.

  7. Fusion of LBP and SWLD using spatio-spectral information for hyperspectral face recognition

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Jiang, Peng; Zhang, Shuai; Xiong, Jinquan

    2018-01-01

    Hyperspectral imaging, recording intrinsic spectral information of the skin across different spectral bands, has become an important approach for robust face recognition. However, the main challenges for hyperspectral face recognition are high data dimensionality, low signal-to-noise ratio and inter-band misalignment. In this paper, hyperspectral face recognition based on the Local Binary Pattern (LBP) and the Simplified Weber Local Descriptor (SWLD) is proposed to extract discriminative local features from spatio-spectral fusion information. Firstly, a spatio-spectral fusion strategy based on statistical information is used to obtain discriminative features of hyperspectral face images. Secondly, LBP is applied to extract the orientation of the fused face edges. Thirdly, SWLD is used to encode the intensity information in hyperspectral images. Finally, we adopt a symmetric Kullback-Leibler distance to compare the encoded face images. The hyperspectral face recognition is tested on the Hong Kong Polytechnic University Hyperspectral Face database (PolyUHSFD). Experimental results show that the proposed method has a higher recognition rate (92.8%) than state-of-the-art hyperspectral face recognition algorithms.
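    As an illustration of the LBP stage, here is the basic 3×3 operator (a generic sketch, not necessarily the exact LBP variant used in the paper):

```python
import numpy as np

def lbp_3x3(img):
    """Basic local binary pattern: threshold each 3x3 neighbourhood at its
    centre pixel and read the 8 neighbour comparisons as one byte
    (bits assigned clockwise from the top-left neighbour)."""
    img = np.asarray(img, float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), np.uint8)
    # neighbour offsets, clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            c = img[i, j]
            code = 0
            for bit, (di, dj) in enumerate(offs):
                code |= int(img[i + di, j + dj] >= c) << bit
            out[i - 1, j - 1] = code
    return out
```

A histogram of these codes over an image region gives the texture descriptor that is then compared between faces.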

  8. Face repetition detection and social interest: An ERP study in adults with and without Williams syndrome.

    PubMed

    Key, Alexandra P; Dykens, Elisabeth M

    2016-12-01

    The present study examined possible neural mechanisms underlying increased social interest in persons with Williams syndrome (WS). Visual event-related potentials (ERPs) during passive viewing were used to compare incidental memory traces for repeated vs. single presentations of previously unfamiliar social (faces) and nonsocial (houses) images in 26 adults with WS and 26 typical adults. Results indicated that participants with WS developed familiarity with the repeated faces and houses (frontal N400 response), but only typical adults evidenced the parietal old/new effect (previously associated with stimulus recollection) for the repeated faces. There was also no evidence of exceptional salience of social information in WS, as ERP markers of memory for repeated faces vs. houses were not significantly different. Thus, while persons with WS exhibit behavioral evidence of increased social interest, their processing of social information in the absence of specific instructions may be relatively superficial. The ERP evidence of face repetition detection in WS was independent of IQ and the earlier perceptual differentiation of social vs. nonsocial stimuli. Large individual differences in ERPs of participants with WS may provide valuable information for understanding the WS phenotype and have relevance for educational and treatment purposes.

  9. On the integral use of foundational concepts in verifying validity during skull-photo superimposition.

    PubMed

    Jayaprakash, Paul T

    2017-09-01

    An often-cited reliability test of the video superimposition method integrated several foundational concepts: scaling face images in relation to skull images; using the tragus-auditory meatus relationship in addition to the exocanthion-Whitnall's tubercle relationship when orientating the skull image; and using wipe-mode imaging in addition to mix-mode imaging when obtaining the skull-face image overlay and evaluating the goodness of match. However, a report that found higher false-positive matches with a computer-assisted superimposition method departed from these foundational concepts, relying on images of unspecified sizes smaller than life size, on frontal-plane landmarks in the skull and face images alone for orientating the skull image, and on mix images alone for evaluating the goodness of match. Recently, arguing that the use of life-size images is 'archaic', the authors who tested the reliability of the computer-assisted superimposition method have denied any method transition. This article describes how the use of images of unspecified sizes smaller than life size eliminates the only means of quantifying parameters during superimposition, which alone enables dynamic skull orientation when overlaying a skull image with a face image in an anatomically acceptable orientation. The dynamic skull orientation process necessarily requires aligning the tragus in the 2D face image with the auditory meatus in the 3D skull image so that the skull image is anatomically orientated to the posture in the face image, a step not mentioned by the authors describing the computer-assisted superimposition method. Furthermore, mere reliance on mix-type images during image overlay eliminates the possibility of assessing the relationship between the leading edges of the skull and face outlines, as well as specific area matches among the corresponding craniofacial organs, during superimposition. 
    Indicating the possibility of increased false-positive matches as a consequence of these method transitions, the article stresses the need to test the reliability of the superimposition method using concepts that are considered safe. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Full-field optical coherence tomography image restoration based on Hilbert transformation

    NASA Astrophysics Data System (ADS)

    Na, Jihoon; Choi, Woo June; Choi, Eun Seo; Ryu, Seon Young; Lee, Byeong Ha

    2007-02-01

    We propose an envelope detection method based on the Hilbert transform for image restoration in full-field optical coherence tomography (FF-OCT). The FF-OCT system, presenting a high axial resolution of 0.9 μm, was implemented with a Köhler illuminator based on a Linnik interferometer configuration. A 250 W customized quartz tungsten halogen lamp was used as a broadband light source and a CCD camera was used as a 2-dimensional detector array. The proposed image restoration method for FF-OCT requires only a single phase shift. By using both the original and the phase-shifted images, we could remove the offset and the background signals from the interference fringe images. The desired coherent envelope image was obtained by applying the Hilbert transform. With the proposed image restoration method, we demonstrate the en-face imaging performance of the implemented FF-OCT system by presenting a tilted mirror surface, an integrated circuit chip, and a piece of onion epithelium.
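    A minimal numpy sketch of the two-frame restoration described above. The analytic signal is computed with an FFT-based Hilbert transform, and the assumption of a π phase shift between the two frames is this sketch's simplification, not necessarily the paper's exact shift:

```python
import numpy as np

def restore_envelope(frame0, frame_shifted):
    """Two-frame FF-OCT restoration sketch: subtracting the phase-shifted
    frame cancels the offset and background terms, leaving a signed fringe
    term; the coherence envelope is then recovered as the magnitude of the
    analytic (Hilbert) signal along the last axis."""
    fringes = np.asarray(frame0, float) - np.asarray(frame_shifted, float)
    n = fringes.shape[-1]
    spectrum = np.fft.fft(fringes, axis=-1)
    # frequency-domain filter that zeroes negative frequencies
    # (this is the standard FFT construction of the analytic signal)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(spectrum * h, axis=-1))
```

On a synthetic fringe pattern with a Gaussian coherence envelope, the recovered magnitude matches the known envelope of the difference signal.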

  11. Smiles in face matching: Idiosyncratic information revealed through a smile improves unfamiliar face matching performance.

    PubMed

    Mileva, Mila; Burton, A Mike

    2018-06-19

    Unfamiliar face matching is a surprisingly difficult task, yet we often rely on people's matching decisions in applied settings (e.g., border control). Most attempts to improve accuracy (including training and image manipulation) have had very limited success. In a series of studies, we demonstrate that using smiling rather than neutral pairs of images brings about significant improvements in face matching accuracy. This is true for both match and mismatch trials, implying that the information provided through a smile helps us both to detect images of the same identity and to distinguish between images of different identities. Study 1 compares matching performance when images in the face pair display either an open-mouth smile or a neutral expression. In Study 2, we add an intermediate level, a closed-mouth smile, to identify the effect of teeth being exposed, and Study 3 explores face matching accuracy when only information about the lower part of the face is available. Results demonstrate that an open-mouth smile changes the face in an idiosyncratic way which aids face matching decisions. Such findings have practical implications for matching in the applied context, where we typically use neutral images to represent ourselves in official documents. © 2018 The British Psychological Society.

  12. Microfluidic Imaging Flow Cytometry by Asymmetric-detection Time-stretch Optical Microscopy (ATOM).

    PubMed

    Tang, Anson H L; Lai, Queenie T K; Chung, Bob M F; Lee, Kelvin C M; Mok, Aaron T Y; Yip, G K; Shum, Anderson H C; Wong, Kenneth K Y; Tsia, Kevin K

    2017-06-28

    Scaling the number of measurable parameters, which allows for multidimensional data analysis and thus higher-confidence statistical results, has been the main trend in the advanced development of flow cytometry. Notably, adding high-resolution imaging capabilities allows for the complex morphological analysis of cellular/sub-cellular structures. This is not possible with standard flow cytometers. However, it is valuable for advancing our knowledge of cellular functions and can benefit life science research, clinical diagnostics, and environmental monitoring. Incorporating imaging capabilities into flow cytometry compromises the assay throughput, primarily due to the limitations on speed and sensitivity in the camera technologies. To overcome this speed or throughput challenge facing imaging flow cytometry while preserving the image quality, asymmetric-detection time-stretch optical microscopy (ATOM) has been demonstrated to enable high-contrast, single-cell imaging with sub-cellular resolution, at an imaging throughput as high as 100,000 cells/s. Based on the imaging concept of conventional time-stretch imaging, which relies on all-optical image encoding and retrieval through the use of ultrafast broadband laser pulses, ATOM further advances imaging performance by enhancing the image contrast of unlabeled/unstained cells. This is achieved by accessing the phase-gradient information of the cells, which is spectrally encoded into single-shot broadband pulses. Hence, ATOM is particularly advantageous in high-throughput measurements of single-cell morphology and texture - information indicative of cell types, states, and even functions. Ultimately, this could become a powerful imaging flow cytometry platform for the biophysical phenotyping of cells, complementing the current state-of-the-art biochemical-marker-based cellular assay. 
This work describes a protocol to establish the key modules of an ATOM system (from optical frontend to data processing and visualization backend), as well as the workflow of imaging flow cytometry based on ATOM, using human cells and micro-algae as the examples.

  13. Face sketch recognition based on edge enhancement via deep learning

    NASA Astrophysics Data System (ADS)

    Xie, Zhenzhu; Yang, Fumeng; Zhang, Yuming; Wu, Congzhong

    2017-11-01

    In this paper, we address the face sketch recognition problem. First, we utilize the eigenface algorithm to convert a sketch image into a synthesized face image. Then, considering the low-level vision problems in the synthesized face sketch image, a super-resolution reconstruction algorithm based on a convolutional neural network (CNN) is employed to improve the visual quality. Specifically, we use a lightweight super-resolution structure that learns a residual mapping instead of directly mapping the feature maps from the low-level space to high-level patch representations, which makes the network easier to optimize and lowers its computational complexity. Finally, we adopt the Linear Discriminant Analysis (LDA) algorithm for face sketch recognition on the synthesized face images both before and after super resolution. Extensive experiments on the CUHK face sketch database (CUFS) demonstrate that the recognition rate of the Support Vector Machine (SVM) algorithm improves from 65% to 69% and the recognition rate of the LDA algorithm improves from 69% to 75%. What's more, the synthesized face image after super resolution not only better describes image details such as the hair, nose and mouth, but also improves recognition accuracy effectively.

  14. Uyghur face recognition method combining 2DDCT with POEM

    NASA Astrophysics Data System (ADS)

    Yi, Lihamu; Ya, Ermaimaiti

    2017-11-01

    In this paper, in light of the reduced recognition rate and poor robustness of Uyghur face recognition under illumination variation and partial occlusion, a Uyghur face recognition method combining the Two-Dimensional Discrete Cosine Transform (2DDCT) with Patterns of Oriented Edge Magnitudes (POEM) was proposed. Firstly, the Uyghur face images were divided into 8×8 blocks, and the blocked images were converted into the frequency domain using 2DDCT; secondly, the images were compressed to exclude the insensitive medium-frequency and high-frequency parts, reducing the feature dimensionality of the Uyghur face images and, in turn, the amount of computation; thirdly, the corresponding POEM histograms of the Uyghur face images were obtained by calculating the POEM feature quantities; fourthly, the POEM histograms were concatenated into the texture histogram of the central feature point to obtain the texture features of the Uyghur face feature points; finally, the training samples were classified using a deep learning algorithm. Simulation results showed that the proposed algorithm further improved the recognition rate on the self-built Uyghur face database, greatly improved its computing speed, and had strong robustness.
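    The blockwise 2DDCT stage can be sketched with an orthonormal DCT-II matrix applied to each 8×8 block (a generic illustration of the transform, not the authors' implementation):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are cosine basis vectors)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)            # DC row scaling for orthonormality
    return C * np.sqrt(2.0 / n)

def blockwise_2ddct(img, n=8):
    """Apply the 2D DCT to each n x n block: coeffs = C @ block @ C.T."""
    C = dct_matrix(n)
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(0, h - h % n, n):
        for j in range(0, w - w % n, n):
            out[i:i + n, j:j + n] = C @ img[i:i + n, j:j + n] @ C.T
    return out
```

Discarding a subset of each block's coefficients after this transform is what reduces the feature dimensionality before the POEM stage.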

  15. En-face Flying Spot OCT/Ophthalmoscope

    NASA Astrophysics Data System (ADS)

    Rosen, Richard B.; Garcia, Patricia; Podoleanu, Adrian Gh.; Cucu, Radu; Dobre, George; Trifanov, Irina; van Velthoven, Mirjam E. J.; de Smet, Marc D.; Rogers, John A.; Hathaway, Mark; Pedro, Justin; Weitz, Rishard

    This is a review of a technique for high-resolution imaging of the eye that allows multiple sample sectioning perspectives with different axial resolutions. The technique involves the flying spot approach employed in confocal scanning laser ophthalmoscopy which is extended to OCT imaging via time domain en face fast lateral scanning. The ability of imaging with multiple axial resolutions stimulated the development of the dual en face OCT-confocal imaging technology. Dual imaging also allows various other imaging combinations, such as OCT with confocal microscopy for imaging the eye anterior segment and OCT with fluorescence angiography imaging.

  16. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition.

    PubMed

    Wang, Rong

    2015-01-01

    In real-world applications, face images vary with illumination, facial expression, and pose. More training samples can reveal more of the possible appearances of a face. Although minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from the problem of a limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples. We generate mirror faces from the original training samples and combine the two kinds of samples into a new training set. Face recognition experiments show that our method obtains high classification accuracy.
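    A minimal sketch of the two ingredients, mirror-face augmentation and a least-squares formulation of minimum squared error classification (the exact MSEC variant used in the paper may differ from this plain least-squares version):

```python
import numpy as np

def augment_with_mirrors(faces):
    """Add the horizontally flipped (mirror) version of every training face.
    faces: array of shape (n_samples, height, width)."""
    mirrors = faces[:, :, ::-1]
    return np.concatenate([faces, mirrors], axis=0)

def msec_train(X, labels, n_classes):
    """Minimum squared error classification sketch: solve for W minimising
    ||X W - Y||^2, where Y is the one-hot label matrix."""
    Y = np.eye(n_classes)[labels]
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def msec_predict(W, X):
    """Assign each row of X to the class with the largest projected score."""
    return np.argmax(X @ W, axis=1)
```

In the paper's setting, `X` would hold vectorised face images from the augmented (original plus mirror) training set.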

  17. Infrared and visible fusion face recognition based on NSCT domain

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan

    2018-01-01

    Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near-infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and signal-to-noise ratio (SNR). Therefore, near-infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In this paper, a novel fusion algorithm in the non-subsampled contourlet transform (NSCT) domain is proposed for infrared and visible fusion face recognition. Firstly, NSCT is applied to the infrared and visible face images respectively, which exploits the image information at multiple scales, orientations, and frequency bands. Then, to exploit the effective discriminant features and balance the power of the high and low frequency bands of the NSCT coefficients, the local Gabor binary pattern (LGBP) and Local Binary Pattern (LBP) are applied to different frequency parts respectively to obtain a robust representation of the infrared and visible face images. Finally, score-level fusion is used to fuse all the features for the final classification. The visible and near-infrared face recognition is tested on the HITSZ Lab2 visible and near-infrared face database. Experimental results show that the proposed method extracts the complementary features of near-infrared and visible-light images and improves the robustness of unconstrained face recognition.
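    Score-level fusion itself can be sketched as a weighted sum of per-modality normalised match scores (the min-max normalisation and equal weighting here are assumptions of this sketch, not the paper's scheme):

```python
import numpy as np

def score_fusion(scores_ir, scores_vis, w=0.5):
    """Score-level fusion sketch: min-max normalise each modality's match
    scores so they are comparable, then combine with a weighted sum."""
    def norm(s):
        s = np.asarray(s, float)
        rng = s.max() - s.min()
        return (s - s.min()) / rng if rng else np.zeros_like(s)
    return w * norm(scores_ir) + (1 - w) * norm(scores_vis)
```

The gallery identity with the highest fused score would then be taken as the classification result.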

  18. Applications of digital image acquisition in anthropometry

    NASA Technical Reports Server (NTRS)

    Woolford, B.; Lewis, J. L.

    1981-01-01

    A description is given of a video kinesimeter, a device for the automatic real-time collection of kinematic and dynamic data. Based on the detection of a single bright spot by three TV cameras, the system provides automatic real-time recording of three-dimensional position and force data. It comprises three cameras, two incandescent lights, a voltage comparator circuit, a central control unit, and a mass storage device. The control unit determines the signal threshold for each camera before testing, sequences the lights, synchronizes and analyzes the scan voltages from the three cameras, digitizes force from a dynamometer, and codes the data for transmission to a floppy disk for recording. Two of the three cameras face each other along the 'X' axis; the third camera, which faces the center of the line between the first two, defines the 'Y' axis. An image from the 'Y' camera and either 'X' camera is necessary for determining the three-dimensional coordinates of the point.
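    With idealized orthographic cameras on orthogonal axes, combining the two views reduces to reading off complementary coordinates; the sketch below ignores the calibration and perspective correction a real kinesimeter needs.

```python
def spot_3d(x_cam_uv, y_cam_uv):
    """Combine idealized orthographic views: a camera looking along the X axis
    sees the (y, z) position of the bright spot, while the camera looking along
    the Y axis sees (x, z). Redundant z estimates are averaged. (A simplified
    sketch; real systems require calibration and perspective correction.)"""
    y, z1 = x_cam_uv
    x, z2 = y_cam_uv
    return (x, y, (z1 + z2) / 2.0)

print(spot_3d((2.0, 5.0), (3.0, 5.2)))  # x = 3.0, y = 2.0, z ~ 5.1
```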

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polese, Luigi Gentile; Brackney, Larry

    An image-based occupancy sensor includes a motion detection module that receives and processes an image signal to generate a motion detection signal, a people detection module that receives the image signal and processes the image signal to generate a people detection signal, a face detection module that receives the image signal and processes the image signal to generate a face detection signal, and a sensor integration module that receives the motion detection signal from the motion detection module, receives the people detection signal from the people detection module, receives the face detection signal from the face detection module, and generates an occupancy signal using the motion detection signal, the people detection signal, and the face detection signal, with the occupancy signal indicating vacancy or occupancy, with an occupancy indication specifying that one or more people are detected within the monitored volume.
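    The sensor-integration logic could be as simple as an OR rule over the three detection signals; the patent text does not pin down the exact combination, so the function below is just one plausible reading.

```python
def occupancy(motion, people, face):
    """Sensor-integration sketch: report occupancy if any detector fires.
    (An assumed OR rule; the patent does not specify the combination logic.)"""
    return bool(motion or people or face)

print(occupancy(motion=False, people=True, face=False))  # True
```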

  20. A real time mobile-based face recognition with fisherface methods

    NASA Astrophysics Data System (ADS)

    Arisandi, D.; Syahputra, M. F.; Putri, I. L.; Purnamawati, S.; Rahmat, R. F.; Sari, P. P.

    2018-03-01

    Face recognition is a research field in computer vision that studies how to learn faces and determine the identity of a face from a picture sent to the system. By utilizing face recognition technology, learning the identities of fellow students at a university becomes simpler: a student no longer needs to browse the student directory on the university's server site and search for the person with certain facial traits. To achieve this goal, the face recognition application uses image processing methods consisting of two phases, a pre-processing phase and a recognition phase. In the pre-processing phase, the system transforms the input image into the best possible image for the recognition phase; the purpose of this phase is to reduce noise and strengthen the signal in the image. For the recognition phase, we use the Fisherface method, chosen because it copes well with limited training data. In our experiments, the accuracy of face recognition using Fisherface is 90%.
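    A numpy-only sketch of the Fisherface method (PCA to make the within-class scatter invertible, then Fisher LDA in the reduced space) on toy data; the data, dimensions, and the nearest-class-mean rule below are illustrative stand-ins, not the paper's setup.

```python
import numpy as np

def fisherfaces(X, y, n_pca):
    """Fisherface sketch: PCA first (to avoid a singular within-class
    scatter), then Fisher LDA in the reduced space."""
    mean = X.mean(axis=0)
    Xc = X - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_pca].T                       # d x n_pca PCA projection
    Z = Xc @ P
    classes = np.unique(y)
    mu = Z.mean(axis=0)
    Sw = np.zeros((n_pca, n_pca))          # within-class scatter
    Sb = np.zeros((n_pca, n_pca))          # between-class scatter
    for c in classes:
        Zc = Z[y == c]
        mc = Zc.mean(axis=0)
        Sw += (Zc - mc).T @ (Zc - mc)
        d = (mc - mu)[:, None]
        Sb += len(Zc) * (d @ d.T)
    # LDA directions: leading eigenvectors of Sw^-1 Sb (regularized)
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(n_pca), Sb))
    order = np.argsort(-evals.real)
    Wlda = evecs[:, order[:len(classes) - 1]].real
    return mean, P @ Wlda

def project(x, mean, W):
    return (x - mean) @ W

# toy data: two "identities" with small intra-class variation
rng = np.random.default_rng(1)
A = rng.normal(0.0, 0.1, (5, 8)); A[:, 0] += 1.0
B = rng.normal(0.0, 0.1, (5, 8)); B[:, 0] -= 1.0
X = np.vstack([A, B])
y = np.array([0] * 5 + [1] * 5)
mean, W = fisherfaces(X, y, n_pca=4)
# nearest-class-mean classification in the Fisher space
zA = project(A, mean, W).mean(axis=0)
zB = project(B, mean, W).mean(axis=0)
probe = project(A[0], mean, W)
pred = 0 if np.linalg.norm(probe - zA) < np.linalg.norm(probe - zB) else 1
print(pred)  # 0
```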

  1. Reflectance from images: a model-based approach for human faces.

    PubMed

    Fuchs, Martin; Blanz, Volker; Lensch, Hendrik; Seidel, Hans-Peter

    2005-01-01

    In this paper, we present an image-based framework that acquires the reflectance properties of a human face. A range scan of the face is not required. Based on a morphable face model, the system estimates the 3D shape and establishes point-to-point correspondence across images taken from different viewpoints and across different individuals' faces. This provides a common parameterization of all reconstructed surfaces that can be used to compare and transfer BRDF data between different faces. Shape estimation from images compensates deformations of the face during the measurement process, such as facial expressions. In the common parameterization, regions of homogeneous materials on the face surface can be defined a priori. We apply analytical BRDF models to express the reflectance properties of each region and we estimate their parameters in a least-squares fit from the image data. For each of the surface points, the diffuse component of the BRDF is locally refined, which provides high detail. We present results for multiple analytical BRDF models, rendered at novel orientations and lighting conditions.

  2. Imaging the eye fundus with real-time en-face spectral domain optical coherence tomography

    PubMed Central

    Bradu, Adrian; Podoleanu, Adrian Gh.

    2014-01-01

    Real-time display of processed en-face spectral domain optical coherence tomography (SD-OCT) images is important for diagnosis. However, due to the many data processing steps required, such as fast Fourier transformation (FFT), data re-sampling, spectral shaping, apodization, and zero padding, followed by a software cut of the acquired 3D volume to produce an en-face slice, conventional high-speed SD-OCT cannot render an en-face OCT image in real time. Recently we demonstrated a Master/Slave (MS)-OCT method that is highly parallelizable, as it provides reflectivity values of points at depth within an A-scan in parallel. This allows direct production of en-face images. In addition, the MS-OCT method does not require data linearization, which further simplifies the processing. The computation in our previous paper was, however, time consuming. In this paper we present an optimized algorithm that produces en-face MS-OCT images much more quickly. Using this algorithm we demonstrate production of sets of en-face OCT images around 10 times faster than previously obtained, as well as simultaneous real-time display of up to 4 en-face OCT images of 200 × 200 pixels from the fovea and the optic nerve of a volunteer. We also demonstrate 3D and B-scan OCT images obtained from sets of MS-OCT C-scans, i.e., with no FFT and no intermediate step of generating A-scans. PMID:24761303

  3. Combined 60° Wide-Field Choroidal Thickness Maps and High-Definition En Face Vasculature Visualization Using Swept-Source Megahertz OCT at 1050 nm

    PubMed Central

    Mohler, Kathrin J.; Draxinger, Wolfgang; Klein, Thomas; Kolb, Jan Philip; Wieser, Wolfgang; Haritoglou, Christos; Kampik, Anselm; Fujimoto, James G.; Neubauer, Aljoscha S.; Huber, Robert; Wolf, Armin

    2015-01-01

    Purpose To demonstrate ultrahigh-speed swept-source optical coherence tomography (SS-OCT) at 1.68 million A-scans/s for choroidal imaging in normal and diseased eyes over a ∼60° field of view. To investigate and correlate wide-field three-dimensional (3D) choroidal thickness (ChT) and vascular patterns using ChT maps and coregistered high-definition en face images extracted from a single densely sampled Megahertz-OCT (MHz-OCT) dataset. Methods High-definition, ∼60° wide-field 3D datasets consisting of 2088 × 1024 A-scans were acquired using a 1.68 MHz prototype SS-OCT system at 1050 nm based on a Fourier-domain mode-locked laser. Nine subjects (nine eyes) with various chorioretinal diseases or without ocular pathology are presented. Coregistered ChT maps, choroidal summation maps, and depth-resolved en face images referenced to either the retinal pigment epithelium or the choroidal–scleral interface were generated using manual segmentation. Results Wide-field ChT maps showed a large inter- and intraindividual variance in peripheral and central ChT. In only four of the nine eyes, the location with the largest ChT was coincident with the fovea. The anatomy of the large lumen vessels of the outer choroid seems to play a major role in determining the global ChT pattern. Focal ChT changes with large thickness gradients were observed in some eyes. Conclusions Different ChT and vascular patterns could be visualized over ∼60° in patients for the first time using OCT. Due to focal ChT changes, a high density of thickness measurements may be favorable. High-definition depth-resolved en face images are complementary to cross sections and thickness maps and enhance the interpretation of different ChT patterns. PMID:26431482

  4. Combined 60° Wide-Field Choroidal Thickness Maps and High-Definition En Face Vasculature Visualization Using Swept-Source Megahertz OCT at 1050 nm.

    PubMed

    Mohler, Kathrin J; Draxinger, Wolfgang; Klein, Thomas; Kolb, Jan Philip; Wieser, Wolfgang; Haritoglou, Christos; Kampik, Anselm; Fujimoto, James G; Neubauer, Aljoscha S; Huber, Robert; Wolf, Armin

    2015-10-01

    To demonstrate ultrahigh-speed swept-source optical coherence tomography (SS-OCT) at 1.68 million A-scans/s for choroidal imaging in normal and diseased eyes over a ∼60° field of view. To investigate and correlate wide-field three-dimensional (3D) choroidal thickness (ChT) and vascular patterns using ChT maps and coregistered high-definition en face images extracted from a single densely sampled Megahertz-OCT (MHz-OCT) dataset. High-definition, ∼60° wide-field 3D datasets consisting of 2088 × 1024 A-scans were acquired using a 1.68 MHz prototype SS-OCT system at 1050 nm based on a Fourier-domain mode-locked laser. Nine subjects (nine eyes) with various chorioretinal diseases or without ocular pathology are presented. Coregistered ChT maps, choroidal summation maps, and depth-resolved en face images referenced to either the retinal pigment epithelium or the choroidal-scleral interface were generated using manual segmentation. Wide-field ChT maps showed a large inter- and intraindividual variance in peripheral and central ChT. In only four of the nine eyes, the location with the largest ChT was coincident with the fovea. The anatomy of the large lumen vessels of the outer choroid seems to play a major role in determining the global ChT pattern. Focal ChT changes with large thickness gradients were observed in some eyes. Different ChT and vascular patterns could be visualized over ∼60° in patients for the first time using OCT. Due to focal ChT changes, a high density of thickness measurements may be favorable. High-definition depth-resolved en face images are complementary to cross sections and thickness maps and enhance the interpretation of different ChT patterns.

  5. Retinotopy and attention to the face and house images in the human visual cortex.

    PubMed

    Wang, Bin; Yan, Tianyi; Ohno, Seiichiro; Kanazawa, Susumu; Wu, Jinglong

    2016-06-01

    Attentional modulation of neural activity in human visual areas has been well demonstrated. However, the retinotopic activities driven by face and house images, and by attention to face and house images, remain unknown. In the present study, we used images of faces and houses to estimate retinotopic activity under three conditions: driven by both the images and attention to the images (attention + stimulus), by attention to the images alone (attention), and by the images alone (stimulus). Generally, our results show that both face and house images produced similar retinotopic activities in visual areas, which were observed only in the attention + stimulus and attention conditions, not in the stimulus condition. The fusiform face area (FFA) responded to faces presented on the horizontal meridian, whereas the parahippocampal place area (PPA) rarely responded to houses at any visual field location. We further analyzed the amplitudes of the neural responses to the target wedge. In V1, V2, V3, V3A, lateral occipital area 1 (LO-1), and hV4, the neural responses to the attended target wedge were significantly greater than those to the unattended target wedge. However, in LO-2, ventral occipital areas 1 and 2 (VO-1 and VO-2), FFA, and PPA, the differences were not significant. We propose that these areas likely have large fields of attentional modulation for face and house images and respond to both the target wedge and the background stimuli. In addition, we propose that the absence of retinotopic activity in the stimulus condition might imply no perceived difference between the target wedge and the background stimuli.

  6. Widely accessible method for superresolution fluorescence imaging of living systems

    PubMed Central

    Dedecker, Peter; Mo, Gary C. H.; Dertinger, Thomas; Zhang, Jin

    2012-01-01

    Superresolution fluorescence microscopy overcomes the diffraction resolution barrier and allows the molecular intricacies of life to be revealed with greatly enhanced detail. However, many current superresolution techniques still face limitations and their implementation is typically associated with a steep learning curve. Patterned illumination-based superresolution techniques [e.g., stimulated emission depletion (STED), reversible saturable optical fluorescence transitions (RESOLFT), and saturated structured illumination microscopy (SSIM)] require specialized equipment, whereas single-molecule-based approaches [e.g., stochastic optical reconstruction microscopy (STORM), photo-activation localization microscopy (PALM), and fluorescence-PALM (F-PALM)] involve repetitive single-molecule localization, which requires its own set of expertise and is also temporally demanding. Here we present a superresolution fluorescence imaging method, photochromic stochastic optical fluctuation imaging (pcSOFI). In this method, irradiating a reversibly photoswitching fluorescent protein at an appropriate wavelength produces robust single-molecule intensity fluctuations, from which a superresolution picture can be extracted by a statistical analysis of the fluctuations in each pixel as a function of time, as previously demonstrated in SOFI. This method, which uses off-the-shelf equipment, genetically encodable labels, and simple and rapid data acquisition, is capable of providing two- to threefold-enhanced spatial resolution, significant background rejection, markedly improved contrast, and favorable temporal resolution in living cells. Furthermore, both 3D and multicolor imaging are readily achievable. Because of its ease of use and high performance, we anticipate that pcSOFI will prove an attractive approach for superresolution imaging. PMID:22711840
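    The core of the SOFI computation underlying pcSOFI, a per-pixel statistical analysis of intensity fluctuations over time, can be sketched with the second-order cumulant, which is simply the temporal variance; the blinking-emitter movie below is a synthetic toy.

```python
import numpy as np

def sofi2(stack):
    """Second-order SOFI sketch: the per-pixel second cumulant (variance) of
    the intensity time series. Only independently fluctuating emitters
    contribute, so constant background is rejected."""
    mean = stack.mean(axis=0)
    return ((stack - mean) ** 2).mean(axis=0)

# toy movie: one blinking emitter at (2, 2) over a constant background
rng = np.random.default_rng(0)
T, H, W = 200, 5, 5
stack = np.ones((T, H, W))                  # steady background
stack[:, 2, 2] += rng.integers(0, 2, T)     # emitter switching on/off
img = sofi2(stack)
print(np.unravel_index(np.argmax(img), img.shape))  # (2, 2)
```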

  7. 77 FR 16910 - Special Conditions: Boeing Model 787 Series Airplanes; Single-place Side-facing Seats With...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-23

    ...-0311; Special Conditions No. 25-458-SC] Special Conditions: Boeing Model 787 Series Airplanes; Single... associated with single-place side-facing seats with inflatable lapbelts. The applicable airworthiness... have a novel or unusual design feature associated with single-place side-facing seats with inflatable...

  8. Differential amygdala response during facial recognition in patients with schizophrenia: an fMRI study.

    PubMed

    Kosaka, H; Omori, M; Murata, T; Iidaka, T; Yamada, H; Okada, T; Takahashi, T; Sadato, N; Itoh, H; Yonekura, Y; Wada, Y

    2002-09-01

    Human lesion and neuroimaging studies suggest that the amygdala is involved in facial emotion recognition. Although impairments in the recognition of facial and/or emotional expression have been reported in schizophrenia, few neuroimaging studies have examined differential brain activation during facial recognition between patients with schizophrenia and normal controls. To investigate amygdala responses during facial recognition in schizophrenia, we conducted a functional magnetic resonance imaging (fMRI) study with 12 right-handed medicated patients with schizophrenia and 12 age- and sex-matched healthy controls. The experimental task was an emotional intensity judgment task: during the task period, subjects viewed happy (or angry/disgusted/sad) and neutral faces presented simultaneously every 3 s and judged which face was more emotional (positive or negative face discrimination). Imaging data were analyzed on a voxel-by-voxel basis for single-group analysis and for between-group analysis according to the random effect model using Statistical Parametric Mapping (SPM). No significant difference in task accuracy was found between the schizophrenic and control groups. Positive face discrimination activated the bilateral amygdalae of both controls and schizophrenics, with more prominent activation of the right amygdala in the schizophrenic group. Negative face discrimination activated the bilateral amygdalae in the schizophrenic group but only the right amygdala in the control group, although no significant group difference was found. The exaggerated amygdala activation during emotional intensity judgment found in the schizophrenic patients may reflect impaired gating of sensory input containing emotion. Copyright 2002 Elsevier Science B.V.

  9. mPano: cloud-based mobile panorama view from single picture

    NASA Astrophysics Data System (ADS)

    Li, Hongzhi; Zhu, Wenwu

    2013-09-01

    Panorama views provide people an informative and natural user experience for representing a whole scene. Advances in mobile augmented reality, mobile-cloud computing, and the mobile internet enable panorama views on mobile phones with new functionalities, such as anytime-anywhere queries of where a landmark picture was taken and what the whole scene looks like. Generating and exploring panorama views on mobile devices faces significant challenges due to the limited computing capacity, battery life, and memory size of mobile phones, as well as the bandwidth of the mobile Internet connection. To address these challenges, this paper presents a novel cloud-based mobile panorama view system, named mPano, that can generate and view a panorama on mobile devices from a single picture. In our system, first, we propose a novel iterative multi-modal image retrieval (IMIR) approach to obtain spatially adjacent images using both tag and content information from the single picture. Second, we propose a cloud-based parallel server synthesizing approach to generate the panorama view in the cloud, in contrast to today's local-client synthesizing approach, which is all but impossible on mobile phones. Third, we propose a predictive-cache solution to reduce the latency of image delivery from the cloud server to the mobile client. We have built a real mobile panorama view system and performed experiments. The experimental results demonstrate the effectiveness of our system and the proposed key component technologies, especially for landmark images.
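    The predictive-cache idea can be sketched as an LRU cache that prefetches the neighbors of each requested panorama tile, so panning the view tends to hit the cache; the tile ids, neighbor rule, and capacity below are all hypothetical.

```python
from collections import OrderedDict

class PredictiveCache:
    """Sketch of a predictive cache: alongside each requested panorama tile,
    prefetch its neighbors. Tile ids and the neighbor rule are hypothetical."""
    def __init__(self, capacity=64):
        self.cache = OrderedDict()
        self.capacity = capacity

    def _put(self, tile_id):
        self.cache[tile_id] = f"tile-{tile_id}"   # stand-in for image bytes
        self.cache.move_to_end(tile_id)
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)        # evict least recently used

    def fetch(self, tile_id):
        if tile_id not in self.cache:             # cache miss: go to the cloud
            self._put(tile_id)
        for n in (tile_id - 1, tile_id + 1):      # prefetch likely next tiles
            if n not in self.cache:
                self._put(n)
        return self.cache[tile_id]

c = PredictiveCache()
c.fetch(10)
print(11 in c.cache)  # True: neighbor was prefetched
```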

  10. Emotional responses associated with self-face processing in individuals with autism spectrum disorders: an fMRI study.

    PubMed

    Morita, Tomoyo; Kosaka, Hirotaka; Saito, Daisuke N; Ishitobi, Makoto; Munesue, Toshio; Itakura, Shoji; Omori, Masao; Okazawa, Hidehiko; Wada, Yuji; Sadato, Norihiro

    2012-01-01

    Individuals with autism spectrum disorders (ASD) show impaired emotional responses to self-face processing, but the underlying neural bases are unclear. Using functional magnetic resonance imaging, we investigated brain activity when 15 individuals with high-functioning ASD and 15 controls rated the photogenicity of self-face images and photographs of others' faces. Controls showed a strong correlation between photogenicity ratings and extent of embarrassment evoked by self-face images; this correlation was weaker among ASD individuals, indicating a decoupling between the cognitive evaluation of self-face images and emotional responses. Individuals with ASD demonstrated relatively low self-related activity in the posterior cingulate cortex (PCC), which was related to specific autistic traits. There were significant group differences in the modulation of activity by embarrassment ratings in the right insular (IC) and lateral orbitofrontal cortices. Task-related activity in the right IC was lower in the ASD group. The reduced activity in the right IC for self-face images was associated with weak coupling between cognitive evaluation and emotional responses to self-face images. The PCC is responsible for self-referential processing, and the IC plays a role in emotional experience. Dysfunction in these areas could contribute to the lack of self-conscious behaviors in response to self-reflection in ASD individuals.

  11. Neural Correlates of Face and Object Perception in an Awake Chimpanzee (Pan Troglodytes) Examined by Scalp-Surface Event-Related Potentials

    PubMed Central

    Fukushima, Hirokata; Hirata, Satoshi; Ueno, Ari; Matsuda, Goh; Fuwa, Kohki; Sugama, Keiko; Kusunoki, Kiyo; Hirai, Masahiro; Hiraki, Kazuo; Tomonaga, Masaki; Hasegawa, Toshikazu

    2010-01-01

    Background The neural system of our closest living relative, the chimpanzee, is a topic of increasing research interest. However, electrophysiological examinations of neural activity during visual processing in awake chimpanzees are currently lacking. Methodology/Principal Findings In the present report, skin-surface event-related brain potentials (ERPs) were measured while a fully awake chimpanzee observed photographs of faces and objects in two experiments. In Experiment 1, human faces and stimuli composed of scrambled face images were displayed. In Experiment 2, three types of pictures (faces, flowers, and cars) were presented. The waveforms evoked by face stimuli were distinguished from other stimulus types, as reflected by an enhanced early positivity appearing before 200 ms post stimulus, and an enhanced late negativity after 200 ms, around posterior and occipito-temporal sites. Face-sensitive activity was clearly observed in both experiments. However, in contrast to the robustly observed face-evoked N170 component in humans, we found that faces did not elicit a peak in the latency range of 150–200 ms in either experiment. Conclusions/Significance Although this pilot study examined a single subject and requires further examination, the observed scalp voltage patterns suggest that selective processing of faces in the chimpanzee brain can be detected by recording surface ERPs. In addition, this non-invasive method for examining an awake chimpanzee can be used to extend our knowledge of the characteristics of visual cognition in other primate species. PMID:20967284

  12. Face Recognition for Access Control Systems Combining Image-Difference Features Based on a Probabilistic Model

    NASA Astrophysics Data System (ADS)

    Miwa, Shotaro; Kage, Hiroshi; Hirai, Takashi; Sumi, Kazuhiko

    We propose a probabilistic face recognition algorithm for access control systems (ACSs). Compared with existing ACSs that use low-cost IC cards, face recognition has advantages in usability and security: it does not require people to hold cards over scanners, and it does not accept impostors carrying authorized cards. Face recognition therefore attracts more interest in security markets than IC cards. But in markets where low-cost ACSs exist, price competition is important, and there is a limit to the quality of available cameras and image control. ACSs using face recognition are therefore required to handle much lower-quality images, such as defocused and poorly gain-controlled images, than high-security systems such as immigration control. To tackle these image quality problems, we developed a face recognition algorithm based on a probabilistic model that combines a variety of image-difference features trained by Real AdaBoost with their prior probability distributions. This enables the system to evaluate and use only the reliable features among the trained ones during each authentication, achieving high recognition rates. A field evaluation using a pseudo access control system installed in our office shows that the proposed system achieves a constantly high recognition rate independent of face image quality: about four times lower EER (equal error rate) across a variety of image conditions than the same system without prior probability distributions. In contrast, using image-difference features without prior probabilities is sensitive to image quality. We also evaluated PCA, which performs worse but constantly, because of its general optimization over the whole data. Compared with PCA, Real AdaBoost without prior distributions performs twice as well under good image conditions but degrades to PCA-level performance under poor image conditions.
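    One way to read the combination of Real-AdaBoost feature scores with prior probability distributions is as a sum of per-feature log-likelihood-ratio scores gated by an off-line reliability estimate; every name and number below is hypothetical, not the paper's actual model.

```python
import numpy as np

def fused_score(diff_feats, llr_tables, reliability, thresh=0.5):
    """Hypothetical reading of the paper's idea: each image-difference feature
    contributes a Real-AdaBoost-style log-likelihood-ratio score, but only
    features whose prior reliability (estimated off-line) exceeds a threshold
    are used during authentication."""
    total = 0.0
    for f, llr, r in zip(diff_feats, llr_tables, reliability):
        if r >= thresh:                   # skip features unreliable a priori
            total += llr[f]
    return total

# two binned features; feature 1 is deemed unreliable and is ignored
llrs = [np.array([-1.0, 0.2, 1.5]), np.array([-2.0, 0.0, 2.0])]
score = fused_score(diff_feats=[2, 0], llr_tables=llrs,
                    reliability=[0.9, 0.1])
print(score > 0)  # True: accepted as the claimed identity
```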

  13. Illumination-tolerant face verification of low-bit-rate JPEG2000 wavelet images with advanced correlation filters for handheld devices

    NASA Astrophysics Data System (ADS)

    Wijaya, Surya Li; Savvides, Marios; Vijaya Kumar, B. V. K.

    2005-02-01

    Face recognition on mobile devices, such as personal digital assistants and cell phones, is a big challenge owing to the limited computational resources available to run verifications on the devices themselves. One approach is to transmit the captured face images by use of the cell-phone connection and to run the verification on a remote station. However, owing to limitations in communication bandwidth, it may be necessary to transmit a compressed version of the image. We propose using the image compression standard JPEG2000, which is a wavelet-based compression engine used to compress the face images to low bit rates suitable for transmission over low-bandwidth communication channels. At the receiver end, the face images are reconstructed with a JPEG2000 decoder and are fed into the verification engine. We explore how advanced correlation filters, such as the minimum average correlation energy filter [Appl. Opt. 26, 3633 (1987)] and its variants, perform by using face images captured under different illumination conditions and encoded with different bit rates under the JPEG2000 wavelet-encoding standard. We evaluate the performance of these filters by using illumination variations from the Carnegie Mellon University's Pose, Illumination, and Expression (PIE) face database. We also demonstrate the tolerance of these filters to noisy versions of images with illumination variations.
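    Correlation-filter verification applies the filter in the frequency domain and scores the resulting correlation plane with the peak-to-sidelobe ratio (PSR); the sketch below uses a plain matched filter on synthetic data rather than a full MACE design.

```python
import numpy as np

def correlate(filt_freq, img):
    """Apply a frequency-domain correlation filter to an image and return
    the (circular) correlation plane."""
    F = np.fft.fft2(img)
    return np.real(np.fft.ifft2(F * np.conj(filt_freq)))

def psr(plane, exclude=2):
    """Peak-to-sidelobe ratio: peak height versus the mean/std of the plane
    outside a small exclusion window, a standard correlation-filter score."""
    r, c = np.unravel_index(np.argmax(plane), plane.shape)
    peak = plane[r, c]
    mask = np.ones_like(plane, dtype=bool)
    mask[max(0, r - exclude):r + exclude + 1,
         max(0, c - exclude):c + exclude + 1] = False
    side = plane[mask]
    return (peak - side.mean()) / (side.std() + 1e-12)

# toy "authentic" test: filter built from the training image itself
rng = np.random.default_rng(0)
train = rng.random((16, 16))
filt = np.fft.fft2(train)          # simplest matched filter, not full MACE
auth_psr = psr(correlate(filt, train))
impostor_psr = psr(correlate(filt, rng.random((16, 16))))
print(auth_psr > impostor_psr)  # True
```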

  14. Quality labeled faces in the wild (QLFW): a database for studying face recognition in real-world environments

    NASA Astrophysics Data System (ADS)

    Karam, Lina J.; Zhu, Tong

    2015-03-01

    The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.
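    Two of the QLFW distortion types, additive white noise and contrast change, are easy to sketch directly; the levels below are illustrative, not the database's actual settings.

```python
import numpy as np

def add_white_noise(img, sigma, seed=0):
    """Additive white Gaussian noise, one of the QLFW distortion types."""
    rng = np.random.default_rng(seed)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def change_contrast(img, factor):
    """Contrast change about mid-gray, another QLFW distortion type."""
    return np.clip(0.5 + factor * (img - 0.5), 0.0, 1.0)

face = np.full((8, 8), 0.6)                 # stand-in for a face image
levels = [0.02, 0.05, 0.1]                  # increasing impairment levels
distorted = [add_white_noise(face, s) for s in levels]
low_contrast = change_contrast(face, 0.5)
print(round(float(low_contrast[0, 0]), 2))  # 0.55
```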

  15. Activity in Face-Responsive Brain Regions is Modulated by Invisible, Attended Faces: Evidence from Masked Priming

    PubMed Central

    Eger, Evelyn; Dolan, Raymond; Henson, Richard N.

    2009-01-01

    It is often assumed that neural activity in face-responsive regions of primate cortex correlates with conscious perception of faces. However, whether such activity occurs without awareness is still debated. Using functional magnetic resonance imaging (fMRI) in conjunction with a novel masked face priming paradigm, we observed neural modulations that could not be attributed to perceptual awareness. More specifically, we found reduced activity in several classic face-processing regions, including the “fusiform face area,” “occipital face area,” and superior temporal sulcus, when a face was preceded by a briefly flashed image of the same face, relative to a different face, even when the two images of the same face differed. Importantly, unlike most previous studies, which have minimized awareness by using conditions of inattention, the present results occurred when the stimuli (the primes) were attended. By contrast, when primes were perceived consciously, in a long-lag priming paradigm, we found repetition-related activity increases in additional frontal and parietal regions. These data not only demonstrate that fMRI activity in face-responsive regions can be modulated independently of perceptual awareness, but also document where such subliminal face-processing occurs (i.e., restricted to face-responsive regions of occipital and temporal cortex) and to what extent (i.e., independent of the specific image). PMID:18400791

  16. Left-right facial orientation of familiar faces: developmental aspects of « the mere exposure hypothesis ».

    PubMed

    Amestoy, Anouck; Bouvard, Manuel P; Cazalets, Jean-René

    2010-01-01

    We investigated the developmental aspects of sensitivity to the orientation of familiar faces by asking 38 adults and 72 children aged 3 to 12 years to make a preference choice between standard and mirror images of themselves and of familiar faces, presented side-by-side or successively. When familiar (parental) faces were presented simultaneously, 3- to 5-year-olds showed no preference, but by age 5-7 years an adult-like preference for the standard image emerged. Similarly, the adult-like preference for the mirror image of their own face emerged by 5-7 years of age. When familiar or self faces were presented successively, 3- to 7-year-olds showed no preference, and the adult-like preference for the standard image emerged by age 7-12 years. These results suggest a developmental process in the perception of familiar-face asymmetries, which are retained in memory as part of knowledge about faces.

  17. Recognition of simulated cyanosis by color-vision-normal and color-vision-deficient subjects.

    PubMed

    Dain, Stephen J

    2014-04-01

    There are anecdotal reports that the recognition of cyanosis is difficult for some color-deficient observers. The chromaticity changes of blood with oxygenation in vitro lie close to the dichromatic confusion lines. The chromaticity changes of lips and nail beds measured in vivo are also generally aligned in the same way. Experiments involving visual assessment of cyanosis in vivo are fraught with technical and ethical difficulties. A single lower-face image of a healthy individual was digitally altered to produce levels of simulated cyanosis; the color change is essentially one of saturation. Some images with other color changes were also included to ensure that there was no propensity to identify those as cyanosed. The images were assessed for realism by a panel of four instructors from the NSW Ambulance Service training section. The images were displayed singly, and the observer was required to identify whether or not the person was cyanosed. The color-normal subjects comprised 32 experienced ambulance officers and 27 new recruits; 27 color-deficient subjects (not from the NSW Ambulance Service) were also examined. The recruits were less accurate and slower at identifying the cyanosed images, and the color-deficient subjects were less accurate and slower still. The identification of cyanosis is a skill that improves with training and is adversely affected in color-deficient observers.

  18. Perceptual expertise in forensic facial image comparison

    PubMed Central

    White, David; Phillips, P. Jonathon; Hahn, Carina A.; Hill, Matthew; O'Toole, Alice J.

    2015-01-01

    Forensic facial identification examiners are required to match the identity of faces in images that vary substantially, owing to changes in viewing conditions and in a person's appearance. These identifications affect the course and outcome of criminal investigations and convictions. Despite calls for research on sources of human error in forensic examination, existing scientific knowledge of face matching accuracy is based, almost exclusively, on people without formal training. Here, we administered three challenging face matching tests to a group of forensic examiners with many years' experience of comparing face images for law enforcement and government agencies. Examiners outperformed untrained participants and computer algorithms, thereby providing the first evidence that these examiners are experts at this task. Notably, computationally fusing responses of multiple experts produced near-perfect performance. Results also revealed qualitative differences between expert and non-expert performance. First, examiners' superiority was greatest at longer exposure durations, suggestive of more entailed comparison in forensic examiners. Second, experts were less impaired by image inversion than non-expert students, contrasting with face memory studies that show larger face inversion effects in high performers. We conclude that expertise in matching identity across unfamiliar face images is supported by processes that differ qualitatively from those supporting memory for individual faces. PMID:26336174

  19. Internal representations for face detection: an application of noise-based image classification to BOLD responses.

    PubMed

    Nestor, Adrian; Vettel, Jean M; Tarr, Michael J

    2013-11-01

    What basic visual structures underlie human face detection and how can we extract such structures directly from the amplitude of neural responses elicited by face processing? Here, we address these issues by investigating an extension of noise-based image classification to BOLD responses recorded in high-level visual areas. First, we assess the applicability of this classification method to such data and, second, we explore its results in connection with the neural processing of faces. To this end, we construct luminance templates from white noise fields based on the response of face-selective areas in the human ventral cortex. Using behaviorally and neurally-derived classification images, our results reveal a family of simple but robust image structures subserving face representation and detection. Thus, we confirm the role played by classical face selective regions in face detection and we help clarify the representational basis of this perceptual function. From a theory standpoint, our findings support the idea of simple but highly diagnostic neurally-coded features for face detection. At the same time, from a methodological perspective, our work demonstrates the ability of noise-based image classification in conjunction with fMRI to help uncover the structure of high-level perceptual representations. Copyright © 2012 Wiley Periodicals, Inc.

  20. Multiple Representations-Based Face Sketch-Photo Synthesis.

    PubMed

    Peng, Chunlei; Gao, Xinbo; Wang, Nannan; Tao, Dacheng; Li, Xuelong; Li, Jie

    2016-11-01

    Face sketch-photo synthesis plays an important role in law enforcement and digital entertainment. Most of the existing methods use only pixel intensities as the feature. Since face images can be described using features from multiple aspects, this paper presents a novel multiple representations-based face sketch-photo synthesis method that adaptively combines multiple representations to represent an image patch. In particular, it combines multiple features from face images processed with multiple filters and deploys Markov networks to exploit the interacting relationships between neighboring image patches. The proposed framework can be solved using an alternating optimization strategy, and it normally converges in only five outer iterations in the experiments. Our experimental results on the Chinese University of Hong Kong (CUHK) face sketch database, celebrity photos, the CUHK Face Sketch FERET Database, the IIIT-D Viewed Sketch Database, and forensic sketches demonstrate the effectiveness of our method for face sketch-photo synthesis. In addition, cross-database and database-dependent style-synthesis evaluations demonstrate the generalizability of this novel method and suggest promising solutions for face identification in forensic science.

  1. Ethnicity identification from face images

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoguang; Jain, Anil K.

    2004-08-01

    Human facial images provide demographic information, such as ethnicity and gender. Conversely, ethnicity and gender also play an important role in face-related applications. The image-based ethnicity identification problem is addressed in a machine learning framework. A Linear Discriminant Analysis (LDA) based scheme is presented for the two-class (Asian vs. non-Asian) ethnicity classification task. Multiscale analysis is applied to the input facial images. An ensemble framework, which integrates the LDA analysis of the input face images at different scales, is proposed to further improve the classification performance; the product rule is used as the combination strategy in the ensemble. Experimental results based on a face database containing 263 subjects (2,630 face images, with equal balance between the two classes) are promising, indicating that LDA and the proposed ensemble framework have sufficient discriminative power for the ethnicity classification problem. The normalized ethnicity classification scores can be helpful in facial identity recognition: used as a "soft" biometric, face matching scores can be updated based on the output of the ethnicity classification module. In other words, the ethnicity classifier does not have to be perfect to be useful in practice.
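The multiscale LDA ensemble with a product-rule combiner described above can be sketched as follows. This is a minimal illustration assuming scikit-learn, with synthetic vectors standing in for the 2,630-image face database; the `downscale` helper and the scale factors are hypothetical.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n, d = 200, 64                         # 200 flattened 8x8 "face" patches
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n)         # two ethnicity classes (labels only)
X[y == 1] += 0.5                       # make the classes separable

def downscale(M, factor):
    """Crude stand-in for multiscale analysis: average adjacent pixels."""
    return M.reshape(len(M), -1, factor).mean(axis=2)

scales = (1, 2, 4)                     # illustrative scale factors
models = [(s, LinearDiscriminantAnalysis().fit(downscale(X, s), y))
          for s in scales]

def predict_ensemble(Q):
    # Product rule: multiply the per-scale posterior probabilities,
    # then take the class with the larger product.
    prod = np.ones((len(Q), 2))
    for s, lda in models:
        prod *= lda.predict_proba(downscale(Q, s))
    return prod.argmax(axis=1)

acc = (predict_ensemble(X) == y).mean()  # training accuracy of the ensemble
```

The product rule treats the per-scale classifiers as independent evidence, which is why a single imperfect scale does not dominate the decision.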

  2. A special purpose knowledge-based face localization method

    NASA Astrophysics Data System (ADS)

    Hassanat, Ahmad; Jassim, Sabah

    2008-04-01

    This paper is concerned with face localization for a visual speech recognition (VSR) system. Face detection and localization have received a great deal of attention in the last few years, because they are an essential pre-processing step in many techniques that handle or deal with faces (e.g. age, face, gender, race and visual speech recognition). We present an efficient method for localizing human faces in video images captured on constrained mobile devices, under wide variation in lighting conditions. We use a multiphase method that may include all or some of the following steps: image pre-processing, followed by special-purpose edge detection, then an image refinement step. The output image is passed through a discrete wavelet decomposition procedure, and the computed LL sub-band at a certain level is transformed into a binary image that is scanned using a special template to select a number of candidate locations. Finally, we fuse the scores from the wavelet step with scores determined by color information for the candidate locations and employ a form of fuzzy logic to distinguish face from non-face locations. We present results of a large number of experiments demonstrating that the proposed face localization method is efficient and achieves a high level of accuracy, outperforming existing general-purpose face detection methods.

  3. Maximal likelihood correspondence estimation for face recognition across pose.

    PubMed

    Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang

    2014-10-01

    Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods have been proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems in previous image matching-based correspondence learning methods: 1) failure to fully exploit face-specific structure information in correspondence estimation and 2) failure to learn a personalized correspondence for each probe image. To this end, we first build a model, termed morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on the maximal likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using a linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in complex wild environments, i.e., the Labeled Faces in the Wild database.

  4. Image preprocessing study on KPCA-based face recognition

    NASA Astrophysics Data System (ADS)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural, and convenient advantages, has attracted increasing attention. This paper studies a face recognition system comprising face detection, feature extraction and face recognition, focusing on the theory and key technology of the various preprocessing methods used in the face detection stage and on how different preprocessing choices affect the recognition results of the KPCA method. We choose the YCbCr color space for skin segmentation and integral projection for face location. Face images are preprocessed using erosion and dilation (the opening and closing operations) and an illumination compensation method, and are then analyzed with a recognition method based on kernel principal component analysis; experiments were carried out on a typical face database, with the algorithms implemented on the MATLAB platform. Experimental results show that, under certain conditions, the kernel-based extension of the PCA algorithm makes the extracted features represent the original image information better, since it is a nonlinear feature extraction method, and thus achieves a higher recognition rate. In the image preprocessing stage, we found that different operations on the images can produce different results, and hence different recognition rates in the recognition stage. At the same time, in kernel principal component analysis, the power (degree) of the polynomial kernel function affects the recognition result.
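The KPCA recognition stage and the effect of the polynomial kernel's power can be sketched as below. This is an illustrative example using scikit-learn's bundled digits set as a stand-in for a face database (the record above works on face images in MATLAB); the number of components and the degrees tried are assumptions.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X = X / 16.0                            # scale pixel values to [0, 1]
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

rates = {}
for degree in (2, 3, 4):                # the "power" of the polynomial kernel
    kpca = KernelPCA(n_components=30, kernel="poly", degree=degree)
    knn = KNeighborsClassifier(n_neighbors=1)
    # Project onto kernel principal components, then classify by
    # nearest neighbour in the reduced feature space.
    knn.fit(kpca.fit_transform(Xtr), ytr)
    rates[degree] = knn.score(kpca.transform(Xte), yte)
```

Comparing `rates` across degrees mirrors the abstract's observation that the kernel's power influences the recognition rate.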

  5. Heterogeneous sharpness for cross-spectral face recognition

    NASA Astrophysics Data System (ADS)

    Cao, Zhicheng; Schmid, Natalia A.

    2017-05-01

    Matching images acquired in different electromagnetic bands remains a challenging problem. An example of this type of comparison is matching active or passive infrared (IR) images against a gallery of visible face images, known as cross-spectral face recognition. Among many unsolved issues is the quality disparity of the heterogeneous images: images acquired in different spectral bands are of unequal quality owing to distinct imaging mechanisms, standoff distances, imaging environments, etc. To reduce the effect of quality disparity on recognition performance, one can manipulate the images either to improve the quality of the poor-quality images or to degrade the high-quality images to the level of their heterogeneous counterparts. Estimating the level of quality discrepancy between two heterogeneous images requires a quality metric, such as image sharpness, which provides guidance on how much improvement or degradation is appropriate. In this work we consider sharpness as a relative measure of heterogeneous image quality. We propose a generalized definition of sharpness by first achieving image quality parity and then building a relationship between the image quality of two heterogeneous images; the new metric is therefore named heterogeneous sharpness. Image quality parity is achieved by experimentally finding the optimal cross-spectral face recognition performance while varying the quality of the heterogeneous images with a Gaussian smoothing function of different standard deviations. The relationship is established using two models, one a regression model and the other a neural network. To train, test, and validate the models, we use composite operators developed in our lab to extract features from heterogeneous face images and use the sharpness metric to evaluate face image quality within each band. Images from three spectral bands (visible light, near infrared, and short-wave infrared) are considered in this work. Both the error of the regression model and the validation error of the neural network are analyzed.
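The quality-degradation step, smoothing an image with Gaussians of increasing standard deviation while tracking its sharpness, can be sketched as follows. A minimal example assuming SciPy, using variance of the Laplacian as a common sharpness proxy; the paper's own heterogeneous-sharpness metric and composite-operator features are not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
img = rng.random((64, 64))             # stand-in for a face image

def laplacian_variance(im):
    """Variance of a periodic 5-point Laplacian; higher means sharper."""
    lap = (np.roll(im, 1, 0) + np.roll(im, -1, 0) +
           np.roll(im, 1, 1) + np.roll(im, -1, 1) - 4.0 * im)
    return lap.var()

sigmas = (0.5, 1.0, 2.0, 4.0)          # increasing smoothing strength
sharpness = [laplacian_variance(gaussian_filter(img, s)) for s in sigmas]
# Sharpness falls monotonically as sigma grows, which is what allows
# smoothing to bring a high-quality image down to its counterpart's level.
```

Sweeping sigma while monitoring recognition performance, as in the abstract, then identifies the degradation level that achieves quality parity.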

  6. Tracking the ultrafast motion of a single molecule by femtosecond orbital imaging

    NASA Astrophysics Data System (ADS)

    Cocker, Tyler L.; Peller, Dominik; Yu, Ping; Repp, Jascha; Huber, Rupert

    2016-11-01

    Watching a single molecule move on its intrinsic timescale has been one of the central goals of modern nanoscience, and calls for measurements that combine ultrafast temporal resolution with atomic spatial resolution. Steady-state experiments access the requisite spatial scales, as illustrated by direct imaging of individual molecular orbitals using scanning tunnelling microscopy or the acquisition of tip-enhanced Raman and luminescence spectra with sub-molecular resolution. But tracking the intrinsic dynamics of a single molecule directly in the time domain faces the challenge that interactions with the molecule must be confined to a femtosecond time window. For individual nanoparticles, such ultrafast temporal confinement has been demonstrated by combining scanning tunnelling microscopy with so-called lightwave electronics, which uses the oscillating carrier wave of tailored light pulses to directly manipulate electronic motion on timescales faster even than a single cycle of light. Here we build on ultrafast terahertz scanning tunnelling microscopy to access a state-selective tunnelling regime, where the peak of a terahertz electric-field waveform transiently opens an otherwise forbidden tunnelling channel through a single molecular state. It thereby removes a single electron from an individual pentacene molecule’s highest occupied molecular orbital within a time window shorter than one oscillation cycle of the terahertz wave. We exploit this effect to record approximately 100-femtosecond snapshot images of the orbital structure with sub-ångström spatial resolution, and to reveal, through pump/probe measurements, coherent molecular vibrations at terahertz frequencies directly in the time domain. We anticipate that the combination of lightwave electronics and the atomic resolution of our approach will open the door to visualizing ultrafast photochemistry and the operation of molecular electronics on the single-orbital scale.

  7. Tracking the ultrafast motion of a single molecule by femtosecond orbital imaging

    PubMed Central

    Yu, Ping; Repp, Jascha; Huber, Rupert

    2017-01-01

    Watching a single molecule move on its intrinsic time scale—one of the central goals of modern nanoscience—calls for measurements that combine ultrafast temporal resolution1–8 with atomic spatial resolution9–30. Steady-state experiments achieve the requisite spatial resolution, as illustrated by direct imaging of individual molecular orbitals using scanning tunnelling microscopy9–11 or the acquisition of tip-enhanced Raman and luminescence spectra with sub-molecular resolution27–29. But tracking the dynamics of a single molecule directly in the time domain faces the challenge that single-molecule excitations need to be confined to an ultrashort time window. A first step towards overcoming this challenge has combined scanning tunnelling microscopy with so-called ‘lightwave electronics’1–8, which uses the oscillating carrier wave of tailored light pulses to directly manipulate electronic motion on time scales faster even than that of a single cycle of light. Here we use such ultrafast terahertz scanning tunnelling microscopy to access a state-selective tunnelling regime, where the peak of a terahertz electric-field waveform transiently opens an otherwise forbidden tunnelling channel through a single molecular state and thereby removes a single electron from an individual pentacene molecule’s highest occupied molecular orbital within a time window shorter than one oscillation cycle of the terahertz wave. We exploit this effect to record ~100 fs snapshot images of the structure of the orbital involved, and to reveal through pump-probe measurements coherent molecular vibrations at terahertz frequencies directly in the time domain and with sub-angstrom spatial resolution. We anticipate that the combination of lightwave electronics1–8 and atomic resolution of our approach will open the door to controlling electronic motion inside individual molecules at optical clock rates. PMID:27830788

  8. Teleradiology network system and computer-aided diagnosis workstation using the web medical image conference system with a new information security solution

    NASA Astrophysics Data System (ADS)

    Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Ohmatsu, Hironobu; Kaneko, Masahiro; Kakinuma, Ryutaru; Moriyama, Noriyuki

    2011-03-01

    We have developed a teleradiology network system, provided with a web medical image conference system, that uses a new information security solution. In a teleradiology network, the security of the information network is a very important subject. We are studying a secret sharing scheme as a method to safely store or transmit the confidential medical information used within the teleradiology network system, since that information is exposed to the risk of damage and interception. Secret sharing divides the confidential medical information into two or more tallies; individual medical information cannot be decoded from any single tally. Our method also has a RAID-like function: if a single tally fails, redundant data has already been copied to another tally. The tallies are stored at separate data centers connected through the internet, so even if one of the data centers is struck and its information damaged, the confidential medical information can be decoded from the tallies preserved at the data centers that escaped damage. We can safely share the screen of a workstation displaying medical images from the data center among two or more web conference terminals at the same time. Moreover, a real-time biometric face authentication system is connected to the data center; it analyzes features of a face image captured by camera within 20 seconds and defends the safety of the medical information. We propose new information transmission and storage methods built on this new information security solution.
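The "tallies" idea can be illustrated with a minimal XOR-based secret sharing sketch: each tally alone is statistically independent of the secret, and RAID-style mirror copies at separate data centers restore availability after a loss. This is an illustration of the principle only, not the paper's actual scheme.

```python
import secrets

def split(secret: bytes, n: int = 2) -> list:
    """Split `secret` into n tallies; all n are needed to reconstruct."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    acc = bytearray(secret)
    for sh in shares:
        for i, b in enumerate(sh):
            acc[i] ^= b
    shares.append(bytes(acc))          # last tally = secret XOR all others
    return shares

def combine(tallies) -> bytes:
    """XOR all tallies back together to recover the secret."""
    acc = bytearray(len(tallies[0]))
    for sh in tallies:
        for i, b in enumerate(sh):
            acc[i] ^= b
    return bytes(acc)

record = b"patient-1234: chest CT, 2011-03-01"   # hypothetical record
tallies = split(record, n=2)           # store each tally at a different center
mirrors = [bytes(t) for t in tallies]  # RAID-1 style mirror copies
recovered = combine([mirrors[0], tallies[1]])  # survives loss of tallies[0]
```

Because each tally is uniformly random on its own, compromising one data center reveals nothing about the medical record.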

  9. Diagnostic imaging for chronic orofacial pain, maxillofacial osseous and soft tissue pathology and temporomandibular disorders.

    PubMed

    Shintaku, Werner; Enciso, Reyes; Broussard, Jack; Clark, Glenn T

    2006-08-01

    Since dentists can be faced with unusual cases during their professional life, this article reviews the common orofacial disorders of concern to a dentist trying to diagnose the source of pain or dysfunction symptoms, providing an overview of the essential knowledge and use of the advanced diagnostic imaging modalities available today. In addition to symptom-driven diagnostic dilemmas where such imaging is utilized, there are occasionally asymptomatic anomalies, discovered during routine clinical care and/or on dental or panoramic images, that need further discussion. The selection criteria for an imaging examination should be based on the individual characteristics of the patient, and the type of imaging technique should be selected depending on the specific clinical problem, the kind of tissue to be visualized, the information obtained from the imaging modality, the radiation exposure, and the cost of the examination. The use of more specialized imaging modalities such as magnetic resonance imaging, computed tomography, and ultrasound, as well as single photon emission computed tomography (SPECT), positron emission tomography (PET), and their hybrid machines, SPECT/CT and PET/CT, is discussed.

  10. A Randomized Trial of Displaying Paid Price Information on Imaging Study and Procedure Ordering Rates.

    PubMed

    Chien, Alyna T; Lehmann, Lisa Soleymani; Hatfield, Laura A; Koplan, Kate E; Petty, Carter R; Sinaiko, Anna D; Rosenthal, Meredith B; Sequist, Thomas D

    2017-04-01

    Prior studies have demonstrated how price transparency lowers the test-ordering rates of trainees in hospitals, and physician-targeted price transparency efforts have been viewed as a promising cost-controlling strategy. To examine the effect of displaying paid-price information on test-ordering rates for common imaging studies and procedures within an accountable care organization (ACO). Block randomized controlled trial for 1 year. A total of 1205 fully licensed clinicians (728 primary care, 477 specialists). Starting January 2014, clinicians in the Control arm received no price display; those in the intervention arms received Single or Paired Internal/External Median Prices in the test-ordering screen of their electronic health record. Internal prices were the amounts paid by insurers for the ACO's services; external paid prices were the amounts paid by insurers for the same services when delivered by unaffiliated providers. Ordering rates (orders per 100 face-to-face encounters with adult patients): overall, designated to be completed internally within the ACO, considered "inappropriate" (e.g., MRI for simple headache), and thought to be "appropriate" (e.g., screening colonoscopy). We found no significant difference in overall ordering rates across the Control, Single Median Price, or Paired Internal/External Median Prices study arms. For every 100 encounters, clinicians in the Control arm ordered 15.0 (SD 31.1) tests, those in the Single Median Price arm ordered 15.0 (SD 16.2) tests, and those in the Paired Prices arms ordered 15.7 (SD 20.5) tests (one-way ANOVA p-value 0.88). There was no difference in ordering rates for tests designated to be completed internally or considered to be inappropriate or appropriate. Displaying paid-price information did not alter how frequently primary care and specialist clinicians ordered imaging studies and procedures within an ACO. 
Those with a particular interest in removing waste from the health care system may want to consider a variety of contextual factors that can affect physician-targeted price transparency.
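The three-arm comparison reported above rests on a one-way ANOVA across the ordering-rate samples. A sketch assuming SciPy, with synthetic draws matched only to the means and standard deviations quoted in the abstract, not the trial's actual data:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
# Simulated ordering rates per 100 encounters for each study arm
# (sample size of 400 per arm is an assumption for illustration).
control = rng.normal(15.0, 31.1, size=400)   # Control arm
single = rng.normal(15.0, 16.2, size=400)    # Single Median Price arm
paired = rng.normal(15.7, 20.5, size=400)    # Paired Prices arm
stat, p = f_oneway(control, single, paired)
# A large p-value indicates no detectable difference in mean ordering rates.
```

With arm means this close relative to their spread, the test has little to detect, consistent with the trial's p-value of 0.88.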

  11. Face-selective regions show invariance to linear, but not to non-linear, changes in facial images.

    PubMed

    Baseler, Heidi A; Young, Andrew W; Jenkins, Rob; Mike Burton, A; Andrews, Timothy J

    2016-12-01

    Familiar face recognition is remarkably invariant across huge image differences, yet little is understood concerning how image-invariant recognition is achieved. To investigate the neural correlates of invariance, we localized the core face-responsive regions and then compared the pattern of fMR-adaptation to different stimulus transformations in each region to behavioural data demonstrating the impact of the same transformations on familiar face recognition. In Experiment 1, we compared linear transformations of size and aspect ratio to a non-linear transformation affecting only part of the face. We found that adaptation to facial identity in face-selective regions showed invariance to linear changes, but there was no invariance to non-linear changes. In Experiment 2, we measured the sensitivity to non-linear changes that fell within the normal range of variation across face images. We found no adaptation to facial identity for any of the non-linear changes in the image, including to faces that varied in different levels of caricature. These results show a compelling difference in the sensitivity to linear compared to non-linear image changes in face-selective regions of the human brain that is only partially consistent with their effect on behavioural judgements of identity. We conclude that while regions such as the FFA may well be involved in the recognition of face identity, they are more likely to contribute to some form of normalisation that underpins subsequent recognition than to form the neural substrate of recognition per se. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Anti-Stokes effect CCD camera and SLD-based optical coherence tomography for full-field imaging in the 1550 nm region

    NASA Astrophysics Data System (ADS)

    Kredzinski, Lukasz; Connelly, Michael J.

    2012-06-01

    Full-field optical coherence tomography (OCT) is an en-face interferometric imaging technique capable of high-resolution cross-sectional imaging of the internal microstructure of a specimen in a non-invasive manner. The presented system is based on competitively priced optical components available for the main optical communications band in the 1550 nm region; it consists of a superluminescent diode (SLD) and an anti-Stokes imaging device. The single-mode-fibre-coupled SLD was connected to a multi-mode fibre inserted into a mode scrambler to obtain spatially incoherent illumination, suitable for the wide-field OCT modality in terms of crosstalk suppression and image enhancement. This relatively inexpensive system, with a moderate resolution of approximately 24 μm × 12 μm (axial × lateral), was used to perform 3D cross-sectional imaging of a human tooth. To our knowledge this is the first 1550 nm full-field OCT system reported.

  13. Facial Asymmetry-Based Age Group Estimation: Role in Recognizing Age-Separated Face Images.

    PubMed

    Sajid, Muhammad; Taj, Imtiaz Ahmad; Bajwa, Usama Ijaz; Ratyal, Naeem Iqbal

    2018-04-23

    Face recognition aims to establish the identity of a person based on facial characteristics. On the other hand, age group estimation is the automatic calculation of an individual's age range based on facial features. Recognizing age-separated face images is still a challenging research problem due to complex aging processes involving different types of facial tissues, skin, fat, muscles, and bones. Certain holistic and local facial features are used to recognize age-separated face images. However, most of the existing methods recognize face images without incorporating the knowledge learned from age group estimation. In this paper, we propose an age-assisted face recognition approach to handle aging variations. Inspired by the observation that facial asymmetry is an age-dependent intrinsic facial feature, we first use asymmetric facial dimensions to estimate the age group of a given face image. Deeply learned asymmetric facial features are then extracted for face recognition using a deep convolutional neural network (dCNN). Finally, we integrate the knowledge learned from the age group estimation into the face recognition algorithm using the same dCNN. This integration results in a significant improvement in the overall performance compared to using the face recognition algorithm alone. The experimental results on two large facial aging datasets, the MORPH and FERET sets, show that the proposed age-group-estimation-assisted face recognition approach yields superior performance compared to some existing state-of-the-art methods. © 2018 American Academy of Forensic Sciences.

  14. Fast hierarchical knowledge-based approach for human face detection in color images

    NASA Astrophysics Data System (ADS)

    Jiang, Jun; Gong, Jie; Zhang, Guilin; Hu, Ruolan

    2001-09-01

    This paper presents a fast hierarchical knowledge-based approach for automatically detecting multi-scale upright faces in still color images. The approach consists of three levels. At the highest level, skin-like regions are determined by a skin model based on the color attributes hue and saturation in HSV color space, as well as the attributes red and green in normalized color space. In level 2, a new eye model is devised to select face candidates within the segmented skin-like regions. An important feature of the eye model is that it is independent of the scale of the face, so faces at different scales can be found by scanning the image only once, which greatly reduces the computation time of face detection. In level 3, a face mosaic image model, which is well matched to the physical structure of the human face, is applied to judge whether faces are present in the candidate regions; this model includes edge and gray-level rules. Experimental results show that the approach is robust and fast, with wide application prospects in human-computer interaction, visual telephony, etc.
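The level-1 skin test, combining hue/saturation bounds in HSV with red/green bounds in normalized color space, can be sketched as below. The numeric thresholds are illustrative guesses, not the paper's calibrated values, and the hue check is approximated by channel ordering.

```python
import numpy as np

def skin_mask(rgb):
    """Return a boolean mask of skin-like pixels for an H x W x 3 RGB array."""
    rgb = np.asarray(rgb, dtype=float)
    total = rgb.sum(axis=-1) + 1e-6
    r, g = rgb[..., 0] / total, rgb[..., 1] / total   # normalized r, g
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    s = (mx - mn) / (mx + 1e-6)                       # HSV saturation
    # Skin hue sits near red: approximate with R >= G >= B channel ordering.
    reddish = (rgb[..., 0] >= rgb[..., 1]) & (rgb[..., 1] >= rgb[..., 2])
    return reddish & (s > 0.1) & (s < 0.7) & (r > 0.36) & (g > 0.28) & (g < r)

pixels = np.array([[[200, 140, 110],   # warm skin-like tone
                    [50, 60, 200]]])   # blue, clearly non-skin
mask = skin_mask(pixels)
```

In the full pipeline the resulting mask would feed the level-2 eye model, which examines only the segmented skin-like regions.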

  15. Personality judgments from everyday images of faces

    PubMed Central

    Sutherland, Clare A. M.; Rowley, Lauren E.; Amoaku, Unity T.; Daguzan, Ella; Kidd-Rossiter, Kate A.; Maceviciute, Ugne; Young, Andrew W.

    2015-01-01

    People readily make personality attributions to images of strangers' faces. Here we investigated the basis of these personality attributions as made to everyday, naturalistic face images. In a first study, we used 1000 highly varying “ambient image” face photographs to test the correspondence between personality judgments of the Big Five and dimensions known to underlie a range of facial first impressions: approachability, dominance, and youthful-attractiveness. Interestingly, the facial Big Five judgments were found to separate to some extent: judgments of openness, extraversion, emotional stability, and agreeableness were mainly linked to facial first impressions of approachability, whereas conscientiousness judgments involved a combination of approachability and dominance. In a second study we used average face images to investigate which main cues are used by perceivers to make impressions of the Big Five, by extracting consistent cues to impressions from the large variation in the original images. When forming impressions of strangers from highly varying, naturalistic face photographs, perceivers mainly seem to rely on broad facial cues to approachability, such as smiling. PMID:26579008

  16. A smart technique for attendance system to recognize faces through parallelism

    NASA Astrophysics Data System (ADS)

    Prabhavathi, B.; Tanuja, V.; Madhu Viswanatham, V.; Rajashekhara Babu, M.

    2017-11-01

    A major part of recognizing a person is the face; with the help of image processing techniques we can exploit a person's physical features. In the old approach used in schools and colleges, the professor calls each student's name and then marks attendance. In this paper we deviate from that approach by using image processing techniques to mark student attendance in the classroom automatically. First, an image of the classroom is taken and stored in a data record. To the images stored in the database we apply an algorithm comprising steps such as histogram classification, noise removal, face detection and face recognition. Using these steps we detect the faces and compare them with the database; attendance is marked automatically if the system recognizes the faces.

  17. High frame-rate en face optical coherence tomography system using KTN optical beam deflector

    NASA Astrophysics Data System (ADS)

    Ohmi, Masato; Shinya, Yusuke; Imai, Tadayuki; Toyoda, Seiji; Kobayashi, Junya; Sakamoto, Tadashi

    2017-02-01

    We developed a high frame-rate en face optical coherence tomography (OCT) system using a KTa1-xNbxO3 (KTN) optical beam deflector. In the imaging system, fast scanning was performed at 200 kHz by the KTN optical beam deflector, while slow scanning was performed at 800 Hz by a galvanometer mirror. In a preliminary experiment, we succeeded in obtaining en face OCT images of a human fingerprint at a frame rate of 800 fps. This is the highest frame rate obtained using time-domain (TD) en face OCT imaging. A 3D-OCT image of a sweat gland was also obtained with our imaging system.

  18. Development of novel high-speed en face optical coherence tomography system using KTN optical beam deflector

    NASA Astrophysics Data System (ADS)

    Ohmi, Masato; Fukuda, Akihiro; Miyazu, Jun; Ueno, Masahiro; Toyoda, Seiji; Kobayashi, Junya

    2015-02-01

    We developed a novel high-speed en face optical coherence tomography (OCT) system using a KTa1-xNbxO3 (KTN) optical beam deflector. Using the imaging system, fast scanning was performed at 200 kHz by the KTN beam deflector, while slow scanning was performed at 400 Hz by the galvanometer mirror. In a preliminary experiment, we obtained en face OCT images of a human fingerprint at 400 fps. This is the highest speed reported in time-domain en face OCT imaging and is comparable to the speed of swept-source OCT. A 3D-OCT image of a sweat gland was also obtained by our imaging system.

  19. Amygdala reactivity to fearful faces correlates positively with impulsive aggression.

    PubMed

    da Cunha-Bang, Sofi; Fisher, Patrick M; Hjordt, Liv V; Holst, Klaus; Knudsen, Gitte M

    2018-01-07

    Facial expressions robustly activate the amygdala, a brain structure playing a critical role in aggression. Whereas previous studies suggest that amygdala reactivity is related to various measures of impulsive aggression, we here estimate a composite measure of impulsive aggression and evaluate whether it is associated with amygdala reactivity to angry and fearful faces. We estimated amygdala reactivity with functional magnetic resonance imaging in 47 men with varying degrees of aggressive traits (19 incarcerated violent offenders and 28 healthy controls). We modeled a composite "impulsive aggression" trait construct (LVagg) using a linear structural equation model, with a single latent variable capturing the shared correlation between five self-report measures of trait aggression, anger and impulsivity. We tested for associations between amygdala reactivity and the LVagg, adjusting for age and group. The LVagg was significantly positively associated with amygdala reactivity to fearful (p = 0.001), but not angry faces (p = 0.9). We found no group difference in amygdala reactivity to fearful or angry faces. The findings suggest that amygdala reactivity to fearful faces is captured by a composite index of impulsive aggression and provide evidence that impulsive aggression is associated with amygdala reactivity in response to submissive cues, i.e., fearful faces.

  20. A robust human face detection algorithm

    NASA Astrophysics Data System (ADS)

    Raviteja, Thaluru; Karanam, Srikrishna; Yeduguru, Dinesh Reddy V.

    2012-01-01

    Human face detection plays a vital role in many applications, such as video surveillance, face image database management, and human-computer interfaces. This paper proposes a robust algorithm for face detection in still color images that works well even in a crowded environment. The algorithm uses a conjunction of skin color histogram analysis, morphological processing, and geometrical analysis to detect human faces. To reinforce the accuracy of face detection, we further identify mouth and eye regions to establish the presence or absence of a face in a particular region of interest.
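    A skin-color rule followed by morphological cleanup and a geometric check is a classic detection recipe. Below is a minimal, hypothetical NumPy sketch of that idea; the thresholds are textbook RGB skin-color heuristics, not the histogram the paper actually builds, and the geometric step is reduced to a bounding box.

```python
import numpy as np

def skin_mask(rgb):
    """Crude skin-color rule in RGB space (illustrative thresholds only)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & ((r - np.minimum(g, b)) > 15)

def dilate(mask, it=1):
    """Binary dilation with a cross-shaped 3x3 structuring element (morphological cleanup)."""
    m = mask.copy()
    for _ in range(it):
        p = np.pad(m, 1)
        m = p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2] | p[1:-1, 2:] | p[1:-1, 1:-1]
    return m

def bounding_box(mask):
    """Bounding box of the skin region: the candidate face region handed to geometric checks."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    return ys.min(), xs.min(), ys.max(), xs.max()
```

    A real system would then verify eye and mouth regions inside each candidate box, as the abstract describes, before declaring a face present.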

  1. High-performance serial block-face SEM of nonconductive biological samples enabled by focal gas injection-based charge compensation.

    PubMed

    Deerinck, T J; Shone, T M; Bushong, E A; Ramachandra, R; Peltier, S T; Ellisman, M H

    2018-05-01

    A longstanding limitation of imaging with serial block-face scanning electron microscopy is specimen surface charging. This charging is largely due to the difficulties in making biological specimens and the resins in which they are embedded sufficiently conductive. Local accumulation of charge on the specimen surface can result in poor image quality and distortions. Even minor charging can lead to misalignments between sequential images of the block-face due to image jitter. Typically, variable-pressure SEM is used to reduce specimen charging, but this results in a significant reduction to spatial resolution, signal-to-noise ratio and overall image quality. Here we show the development and application of a simple system that effectively mitigates specimen charging by using focal gas injection of nitrogen over the sample block-face during imaging. A standard gas injection valve is paired with a precisely positioned but retractable application nozzle, which is mechanically coupled to the reciprocating action of the serial block-face ultramicrotome. This system enables the application of nitrogen gas precisely over the block-face during imaging while allowing the specimen chamber to be maintained under high vacuum to maximise achievable SEM image resolution. The action of the ultramicrotome drives the nozzle retraction, automatically moving it away from the specimen area during the cutting cycle of the knife. The device described was added to a Gatan 3View system with minimal modifications, allowing high-resolution block-face imaging of even the most charge prone of epoxy-embedded biological samples. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.

  2. Study on image feature extraction and classification for human colorectal cancer using optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Huang, Shu-Wei; Yang, Shan-Yi; Huang, Wei-Cheng; Chiu, Han-Mo; Lu, Chih-Wei

    2011-06-01

    Most colorectal cancers grow from adenomatous polyps, and adenomatous lesions have a well-documented relationship to colorectal cancer in previous studies. Thus, detecting the morphological changes between polyp and tumor allows early diagnosis of colorectal cancer and simultaneous removal of lesions. OCT (optical coherence tomography) has several advantages, including high-resolution, non-invasive cross-sectional imaging in vivo. In this study, we investigated the relationship between B-scan OCT image features and the histology of malignant human colorectal tissues, as well as between the en-face OCT image and the endoscopic image pattern. The in-vitro experiments were performed with a swept-source optical coherence tomography (SS-OCT) system; the swept source has a center wavelength of 1310 nm and a 160 nm wavelength scanning range, which produced 6 μm axial resolution. In the study, the en-face images were reconstructed by integrating the axial values in the 3D OCT images. The reconstructed en-face images show the same roundish or gyrus-like pattern as the endoscopy images, and the pattern of the en-face images relates to the stage of colon cancer. An endoscopic OCT technique would provide three-dimensional imaging and rapidly reconstructed en-face images, which can increase the speed of colon cancer diagnosis. Our results indicate great potential for early detection of colorectal adenomas using OCT imaging.
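    The abstract notes that the en-face images were reconstructed by integrating axial values in the 3D OCT volume. A minimal sketch of that projection step (assuming a volume ordered as depth × x × y; the axis order and function name are assumptions) might look like:

```python
import numpy as np

def en_face_projection(volume, z_start=None, z_end=None):
    """Collapse a 3D OCT volume (z, x, y) into an en face image by
    integrating (here, averaging) intensities along the axial (z) axis.
    An optional depth slab restricts the projection to a tissue layer."""
    sub = volume[z_start:z_end]   # optionally restrict to a depth range
    return sub.mean(axis=0)       # (x, y) en face image
```

    Projecting only a slab of depths, rather than the whole A-scan, is what lets en-face views isolate structures at a chosen tissue layer.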

  3. Do Infants Recognize the Arcimboldo Images as Faces? Behavioral and Near-Infrared Spectroscopic Study

    ERIC Educational Resources Information Center

    Kobayashi, Megumi; Otsuka, Yumiko; Nakato, Emi; Kanazawa, So; Yamaguchi, Masami K.; Kakigi, Ryusuke

    2012-01-01

    Arcimboldo images induce the perception of faces when shown upright despite the fact that only nonfacial objects such as vegetables and fruits are painted. In the current study, we examined whether infants recognize a face in the Arcimboldo images by using the preferential looking technique and near-infrared spectroscopy (NIRS). In the first…

  4. Dynamic dual-isotope molecular imaging elucidates principles for optimizing intrathecal drug delivery

    PubMed Central

    Wolf, Daniel A.; Hesterman, Jacob Y.; Sullivan, Jenna M.; Orcutt, Kelly D.; Silva, Matthew D.; Lobo, Merryl; Wellman, Tyler; Hoppin, Jack

    2016-01-01

    The intrathecal (IT) dosing route offers a seemingly obvious solution for delivering drugs directly to the central nervous system. However, gaps in understanding drug molecule behavior within the anatomically and kinetically unique environment of the mammalian IT space have impeded the establishment of pharmacokinetic principles for optimizing regional drug exposure along the neuraxis. Here, we have utilized high-resolution single-photon emission tomography with X-ray computed tomography to study the behavior of multiple molecular imaging tracers following an IT bolus injection, with supporting histology, autoradiography, block-face tomography, and MRI. Using simultaneous dual-isotope imaging, we demonstrate that the regional CNS tissue exposure of molecules with varying chemical properties is affected by IT space anatomy, cerebrospinal fluid (CSF) dynamics, CSF clearance routes, and the location and volume of the injected bolus. These imaging approaches can be used across species to optimize the safety and efficacy of IT drug therapy for neurological disorders. PMID:27699254

  5. Alpha-band rhythm modulation under the condition of subliminal face presentation: MEG study.

    PubMed

    Sakuraba, Satoshi; Kobayashi, Hana; Sakai, Shinya; Yokosawa, Koichi

    2013-01-01

    The human brain has two streams for processing visual information: a dorsal stream and a ventral stream. The negative potential N170, or its magnetic counterpart M170, is known as the face-specific signal originating from the ventral stream. It is possible to present a visual image unconsciously by using continuous flash suppression (CFS), a visual masking technique based on binocular rivalry. In this work, magnetoencephalograms were recorded during presentation of three invisible images: face images, which are processed by the ventral stream; tool images, which could be processed by the dorsal stream; and a blank image. Alpha-band activities detected by sensors sensitive to M170 were compared. The alpha-band rhythm was suppressed more during presentation of face images than during presentation of the blank image (p = .028). The suppression remained for about 1 s after the presentations ended. However, no significant difference was observed between tool and other images. These results suggest that the alpha-band rhythm can also be modulated by unconscious visual images.

  6. En face swept-source optical coherence tomographic analysis of X-linked juvenile retinoschisis.

    PubMed

    Ono, Shinji; Takahashi, Atsushi; Mase, Tomoko; Nagaoka, Taiji; Yoshida, Akitoshi

    2016-07-01

    To clarify the area of retinoschisis in X-linked juvenile retinoschisis (XLRS) using swept-source optical coherence tomography (SS-OCT) en face images, we report two cases of XLRS in the same family. The patients presented with bilateral blurred vision. The posterior segment examination showed a spoked-wheel pattern in the macula. SS-OCT cross-sectional images revealed widespread retinal splitting at the level of the inner nuclear layer bilaterally, and we diagnosed XLRS. To evaluate the area of retinoschisis, we obtained en face SS-OCT images, which clearly visualized the area of retinoschisis, seen as a sunflower-like structure in the macula. Compared with the SS-OCT thickness map, the en face SS-OCT images showed the precise area of retinoschisis and are useful for managing patients with XLRS.

  7. The surprisingly high human efficiency at learning to recognize faces

    PubMed Central

    Peterson, Matthew F.; Abbey, Craig K.; Eckstein, Miguel P.

    2009-01-01

    We investigated the ability of humans to optimize face recognition performance through rapid learning of individual relevant features. We created artificial faces with discriminating visual information heavily concentrated in single features (nose, eyes, chin or mouth). In each of 2500 learning blocks a feature was randomly selected and retained over the course of four trials, during which observers identified randomly sampled, noisy face images. Observers learned the discriminating feature through indirect feedback, leading to large performance gains. Performance was compared to a learning Bayesian ideal observer, resulting in unexpectedly high learning compared to previous studies with simpler stimuli. We explore various explanations and conclude that the higher learning measured with faces cannot be driven by adaptive eye movement strategies but can be mostly accounted for by suboptimalities in human face discrimination when observers are uncertain about the discriminating feature. We show that an initial bias of humans to use specific features to perform the task even though they are informed that each of four features is equally likely to be the discriminatory feature would lead to seemingly supra-optimal learning. We also examine the possibility of inefficient human integration of visual information across the spatially distributed facial features. Together, the results suggest that humans can show large performance improvement effects in discriminating faces as they learn to identify the feature containing the discriminatory information. PMID:19000918

  8. Pose-Invariant Face Recognition via RGB-D Images.

    PubMed

    Sang, Gaoli; Li, Jing; Zhao, Qijun

    2016-01-01

    Three-dimensional (3D) face models can intrinsically handle the large-pose face recognition problem. In this paper, we propose a novel pose-invariant face recognition method via RGB-D images. By employing depth, our method is able to handle self-occlusion and deformation, both of which are challenging problems in two-dimensional (2D) face recognition. Texture images in the gallery can be rendered to the same view as the probe via depth. Meanwhile, depth is also used for similarity measurement via frontalization and symmetric filling. Finally, both texture and depth contribute to the final identity estimation. Experiments on the Bosphorus, CurtinFaces, Eurecom, and Kiwi databases demonstrate that the additional depth information improves the performance of face recognition with large pose variations and under even more challenging conditions.

  9. Automatic face recognition in HDR imaging

    NASA Astrophysics Data System (ADS)

    Pereira, Manuela; Moreno, Juan-Carlos; Proença, Hugo; Pinheiro, António M. G.

    2014-05-01

    The growing popularity of new High Dynamic Range (HDR) imaging systems is raising new privacy issues caused by the methods used for visualization. HDR images require tone-mapping methods for appropriate visualization on conventional, inexpensive LDR displays. These visualization methods can produce completely different renderings, raising several privacy-intrusion issues. In fact, some visualization methods permit perceptual recognition of the individuals, while others do not reveal any identity. Given that perceptual recognition may be possible, a natural question arises: how will computer-based recognition perform on tone-mapped images? In this paper, we present a study in which automatic face recognition using sparse representation is tested on images produced by common tone-mapping operators applied to HDR images, and its ability to recognize face identity is described. Furthermore, typical LDR images are used for face recognition training.
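    As context for what a tone-mapping operator does to an HDR image, here is a sketch of one common global operator (a Reinhard-style curve). The paper tests several operators; this particular choice and its parameters are only an assumed example.

```python
import numpy as np

def reinhard_global(lum, a=0.18, eps=1e-6):
    """Reinhard-style global tone mapping: scale luminance by the scene's
    log-average (its 'key'), then compress with L / (1 + L) into [0, 1)."""
    log_avg = np.exp(np.mean(np.log(lum + eps)))  # log-average luminance
    scaled = a * lum / log_avg
    return scaled / (1.0 + scaled)
```

    Because the curve is monotone, relative brightness ordering survives tone mapping, but fine facial contrast may be compressed, which is exactly why recognition performance can differ across operators.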

  10. Eye-motion-corrected optical coherence tomography angiography using Lissajous scanning.

    PubMed

    Chen, Yiwei; Hong, Young-Joo; Makita, Shuichi; Yasuno, Yoshiaki

    2018-03-01

    To correct eye motion artifacts in en face optical coherence tomography angiography (OCT-A) images, a Lissajous scanning method with subsequent software-based motion correction is proposed. The standard Lissajous scanning pattern is modified to be compatible with OCT-A and a corresponding motion correction algorithm is designed. The effectiveness of our method was demonstrated by comparing en face OCT-A images with and without motion correction. The method was further validated by comparing motion-corrected images with scanning laser ophthalmoscopy images, and the repeatability of the method was evaluated using a checkerboard image. A motion-corrected en face OCT-A image from a blinking case is presented to demonstrate the ability of the method to deal with eye blinking. Results show that the method can produce accurate motion-free en face OCT-A images of the posterior segment of the eye in vivo.
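    A Lissajous trajectory drives the two scan axes with sinusoids of different frequencies, so the beam repeatedly re-crosses earlier locations; that redundancy is what software motion correction exploits. A minimal sketch of the pattern (the frequencies and sampling here are chosen arbitrarily for illustration, not taken from the paper):

```python
import numpy as np

def lissajous_pattern(fx, fy, n_samples, duration):
    """Sample a Lissajous trajectory: sinusoids of different frequencies
    drive the x and y scan axes, producing a self-crossing scan path."""
    t = np.linspace(0.0, duration, n_samples, endpoint=False)
    x = np.sin(2 * np.pi * fx * t)
    y = np.sin(2 * np.pi * fy * t)
    return x, y
```

    When the frequency ratio fx/fy is rational, the pattern closes on itself and the same retinal locations are revisited at different times, giving the correction algorithm overlapping data to register.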

  11. Perceptual expertise in forensic facial image comparison.

    PubMed

    White, David; Phillips, P Jonathon; Hahn, Carina A; Hill, Matthew; O'Toole, Alice J

    2015-09-07

    Forensic facial identification examiners are required to match the identity of faces in images that vary substantially, owing to changes in viewing conditions and in a person's appearance. These identifications affect the course and outcome of criminal investigations and convictions. Despite calls for research on sources of human error in forensic examination, existing scientific knowledge of face matching accuracy is based, almost exclusively, on people without formal training. Here, we administered three challenging face matching tests to a group of forensic examiners with many years' experience of comparing face images for law enforcement and government agencies. Examiners outperformed untrained participants and computer algorithms, thereby providing the first evidence that these examiners are experts at this task. Notably, computationally fusing responses of multiple experts produced near-perfect performance. Results also revealed qualitative differences between expert and non-expert performance. First, examiners' superiority was greatest at longer exposure durations, suggestive of more entailed comparison in forensic examiners. Second, experts were less impaired by image inversion than non-expert students, contrasting with face memory studies that show larger face inversion effects in high performers. We conclude that expertise in matching identity across unfamiliar face images is supported by processes that differ qualitatively from those supporting memory for individual faces. © 2015 The Author(s).

  12. Self-organized Evaluation of Dynamic Hand Gestures for Sign Language Recognition

    NASA Astrophysics Data System (ADS)

    Buciu, Ioan; Pitas, Ioannis

    Two main theories exist with respect to face encoding and representation in the human visual system (HVS). The first refers to a dense (holistic) representation of the face, where faces have a "holon"-like appearance. The second claims that a more appropriate face representation is given by a sparse code, where only a small fraction of the neural cells corresponding to face encoding is activated. Theoretical and experimental evidence suggests that the HVS performs face analysis (encoding, storing, face recognition, facial expression recognition) in a structured and hierarchical way, where both representations have their own contribution and goal. According to neuropsychological experiments, it seems that encoding for face recognition relies on a holistic image representation, while a sparse image representation is used for facial expression analysis and classification. From the computer vision perspective, the techniques developed for automatic face and facial expression recognition fall into the same two representation types. As in neuroscience, the techniques that perform better for face recognition use a holistic image representation, while those suitable for facial expression recognition use a sparse or local image representation. The proposed mathematical models of image formation and encoding try to simulate the efficient storing, organization, and coding of data in the human cortex. This is equivalent to embedding constraints in the model design regarding dimensionality reduction, redundant-information minimization, mutual-information minimization, non-negativity constraints, class information, etc. The presented techniques are applied as a feature extraction step followed by a classification method, which also heavily influences the recognition results.

  13. Robust Point Set Matching for Partial Face Recognition.

    PubMed

    Weng, Renliang; Lu, Jiwen; Tan, Yap-Peng

    2016-03-01

    Over the past three decades, a number of face recognition methods have been proposed in computer vision, and most of them use holistic face images for person identification. In many real-world scenarios, especially in unconstrained environments, human faces may be occluded by other objects, and it is difficult to obtain fully holistic face images for recognition. To address this, we propose a new partial face recognition approach to recognize persons of interest from their partial faces. Given a gallery image and a probe face patch, we first detect keypoints and extract their local textural features. Then, we propose a robust point set matching method to discriminatively match the two extracted local feature sets, where both the textural information and the geometrical information of the local features are used explicitly and simultaneously for matching. Finally, the similarity of two faces is computed as the distance between the two aligned feature sets. Experimental results on four public face data sets show the effectiveness of the proposed approach.
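    The core idea of mixing textural and geometrical distances when matching two local feature sets can be illustrated with a greedy nearest-neighbour toy version. The paper's actual matcher is far more robust than this sketch, and the equal-weight combination below is an assumption for illustration.

```python
import numpy as np

def match_feature_sets(desc_a, pts_a, desc_b, pts_b, w_geom=0.5):
    """Greedy nearest-neighbour matching where each pairwise cost mixes
    descriptor (texture) distance with keypoint (geometry) distance.
    A simplified stand-in for a robust point-set matcher."""
    cost = (np.linalg.norm(desc_a[:, None] - desc_b[None], axis=2)
            + w_geom * np.linalg.norm(pts_a[:, None] - pts_b[None], axis=2))
    matches = [(i, int(np.argmin(cost[i]))) for i in range(len(desc_a))]
    similarity = -np.mean([cost[i, j] for i, j in matches])  # higher = more alike
    return matches, similarity
```

    The final face similarity is then a function of the matched-pair distances, mirroring the abstract's "distance between the two aligned feature sets".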

  14. Face recognition via edge-based Gabor feature representation for plastic surgery-altered images

    NASA Astrophysics Data System (ADS)

    Chude-Olisah, Chollette C.; Sulong, Ghazali; Chude-Okonkwo, Uche A. K.; Hashim, Siti Z. M.

    2014-12-01

    Plastic surgery procedures on the face introduce skin texture variations between images of the same person (intra-subject), thereby making the task of face recognition more difficult than in the normal scenario. Usually, in contemporary face recognition systems, the original gray-level face image is used as input to the Gabor descriptor, which translates to encoding some texture properties of the face image. The texture-encoding process significantly degrades the performance of such systems in the case of plastic surgery due to the presence of surgically induced intra-subject variations. Based on the proposition that the shape of significant facial components such as eyes, nose, eyebrow, and mouth remains unchanged after plastic surgery, this paper employs an edge-based Gabor feature representation approach for the recognition of surgically altered face images. We use the edge information, which is dependent on the shapes of the significant facial components, to address the plastic surgery-induced texture variation problems. To ensure that the significant facial components represent useful edge information with little or no false edges, a simple illumination normalization technique is proposed for preprocessing. A Gabor wavelet is applied to the edge image to accentuate the uniqueness of the significant facial components for discriminating among different subjects. The performance of the proposed method is evaluated on the Georgia Tech (GT) and the Labeled Faces in the Wild (LFW) databases with illumination and expression problems, and on the plastic surgery database with texture changes. Results show that the proposed edge-based Gabor feature representation approach is robust against plastic surgery-induced face variations amidst expression and illumination problems and outperforms the existing plastic surgery face recognition methods reported in the literature.
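    Applying a Gabor wavelet to an edge map amounts to convolving it with an oriented, band-limited kernel. A from-scratch sketch of the real part of such a kernel and a valid-mode convolution follows; the kernel parameters are illustrative, not the paper's settings.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a Gabor filter: a sinusoidal carrier under a Gaussian
    envelope, oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def gabor_response(edge_img, kernel):
    """Valid-mode correlation of an edge map with a Gabor kernel."""
    kh, kw = kernel.shape
    h = edge_img.shape[0] - kh + 1
    w = edge_img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(edge_img[i:i + kh, j:j + kw] * kernel)
    return out
```

    A bank of such kernels at several orientations and wavelengths responds strongly where the edge map contains structure aligned with the carrier, which is why edges of the eyes, nose, and mouth dominate the resulting features.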

  15. Hyperspectral face recognition with spatiospectral information fusion and PLS regression.

    PubMed

    Uzair, Muhammad; Mahmood, Arif; Mian, Ajmal

    2015-03-01

    Hyperspectral imaging offers new opportunities for face recognition via improved discrimination along the spectral dimension. However, it poses new challenges, including low signal-to-noise ratio, interband misalignment, and high data dimensionality. Due to these challenges, the literature on hyperspectral face recognition is not only sparse but is limited to ad hoc dimensionality reduction techniques and lacks comprehensive evaluation. We propose a hyperspectral face recognition algorithm using a spatiospectral covariance for band fusion and partial least-squares regression for classification. Moreover, we extend 13 existing face recognition techniques, for the first time, to perform hyperspectral face recognition. We formulate hyperspectral face recognition as an image-set classification problem and evaluate the performance of seven state-of-the-art image-set classification techniques. We also test six state-of-the-art grayscale and RGB (color) face recognition algorithms after applying fusion techniques on hyperspectral images. Comparison with the 13 extended and five existing hyperspectral face recognition techniques on three standard data sets shows that the proposed algorithm outperforms all by a significant margin. Finally, we perform band selection experiments to find the most discriminative bands in the visible and near-infrared response spectrum.
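    One way to read "spatiospectral covariance for band fusion" is as a covariance descriptor over per-pixel spectral vectors, which collapses a cube of any spatial size into a fixed bands × bands matrix. The following is a hedged sketch of that interpretation, not the authors' exact construction:

```python
import numpy as np

def spatiospectral_covariance(cube):
    """Fuse a hyperspectral face cube (h, w, bands) into one fixed-size
    descriptor: the covariance of per-pixel spectral vectors. Spatial
    size no longer matters; only inter-band structure is retained."""
    h, w, b = cube.shape
    samples = cube.reshape(h * w, b)         # each pixel is one spectral sample
    return np.cov(samples, rowvar=False)     # (bands, bands) descriptor
```

    A fixed-size, symmetric positive semi-definite descriptor of this kind is convenient downstream, since it can feed a regression-based classifier regardless of how the face was cropped.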

  16. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    NASA Astrophysics Data System (ADS)

    Morishima, Shigeo; Nakamura, Satoshi

    2004-12-01

    We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. The system introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the image of the speech organs with a synthesized one, made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. Tracking the motion of the face in the video image is performed by template matching. In this system, the translation and rotation of the face are detected using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model using our GUI tool. By combining these techniques with the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing into other languages.

  17. Applying face identification to detecting hijacking of airplane

    NASA Astrophysics Data System (ADS)

    Luo, Xuanwen; Cheng, Qiang

    2004-09-01

    That terrorists hijacked airplanes and crashed them into the World Trade Center was a disaster for civilization, and preventing hijackings is critical to homeland security. Reporting a hijacking promptly, limiting the terrorists' ability to operate the plane, and landing the plane at the nearest airport could be an efficient way to avert such a tragedy. Image-processing techniques for human face recognition or identification could be used for this task. Before the plane takes off, the face images of the pilots are entered into a face identification system installed in the airplane. A camera in front of the pilot's seat keeps taking images of the pilot's face during the flight and comparing them with the pre-entered pilot face images. If a different face is detected, a warning signal is sent to the ground automatically. At the same time, the automatic cruise system is engaged or the plane is controlled from the ground, so the terrorists have no control over the plane, which is landed at the nearest appropriate airport under the control of the ground or the cruise system. This technique could also be used in the automobile industry as an image key to prevent car theft.

  18. Laser Doppler imaging of cutaneous blood flow through transparent face masks: a necessary preamble to computer-controlled rapid prototyping fabrication with submillimeter precision.

    PubMed

    Allely, Rebekah R; Van-Buendia, Lan B; Jeng, James C; White, Patricia; Wu, Jingshu; Niszczak, Jonathan; Jordan, Marion H

    2008-01-01

    A paradigm shift in the management of postburn facial scarring is lurking "just beneath the waves" with the widespread availability of two recent technologies: precise three-dimensional scanning/digitizing of complex surfaces and computer-controlled rapid prototyping three-dimensional "printers". Laser Doppler imaging may be a sensible method to track the scar hyperemia that should form the basis of assessing progress and directing incremental changes in the digitized topographical face mask "prescription". The purpose of this study was to establish the feasibility of detecting perfusion through transparent face masks using a laser Doppler imaging scanner. Laser Doppler images of perfusion were obtained at multiple facial regions in five uninjured staff members. Images were obtained without a mask, followed by images with a loose-fitting mask with and without a silicone liner, and then with a tight-fitting mask with and without a silicone liner. Right and left oblique images, in addition to the frontal images, were used to overcome unobtainable measurements at the extremes of face mask curvature. General linear models, mixed models, and t tests were used for data analysis. Three hundred seventy-five measurements were used for analysis, with a mean perfusion unit of 299 and pixel validity of 97%. The effect of face mask pressure with and without the silicone liner was readily quantified, with significant changes in mean cutaneous blood flow (P < .5). High valid-pixel-rate laser Doppler imager flow data can be obtained through transparent face masks. Perfusion decreases with the application of pressure and with silicone. Every participant measured differently in perfusion units; however, consistent perfusion patterns in the face were observed.

  19. SU-E-T-65: A Prospective Trial of Open Face Masks for Head and Neck Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiant, D; Squire, S; Maurer, J

    Purpose: Open face head and neck masks allow for active patient monitoring during treatment and may reduce claustrophobia and anxiety compared to closed masks. The ability of open masks to limit intrafraction motion and to preserve the patient shape/position from simulation over protracted treatments should be considered. Methods: Thirty-two head and neck patients were prospectively randomized to treatment in a closed mask or a novel open face mask. All patients received daily volumetric imaging. The daily images were automatically rigidly registered to the planning CTs offline using a commercial image processing tool. The shifts needed to optimize the registration, the mutual information coefficient (MI), and the Pearson correlation (PC) coefficients were recorded to evaluate shape preservation. The open group was set up and monitored with surface imaging at treatment. The real-time surface imaging information was recorded to evaluate intrafraction motion. Results: Sixteen patients were included in each group. Evaluations were made over a total of 984 fractions. The mean MI and PC showed significantly higher shape preservation for the open group than for the closed group (p = 0). The mean rotations for the open group were smaller, or < 0.15° larger, versus the closed group. The mean intrafraction motion for the open group was 0.93 ± 0.99 mm (2 SD). The maximum single-fraction displacement was 3.2 mm. Fourteen of 16 patients showed no significant correlation of motion with fraction number (p > 0.05). Conclusion: The open masks preserved shape as well as the closed masks, and they limited motion to < 2 mm for 95% of the treated fractions. These results are consistent over treatment courses of up to 35 fractions. The open mask is suitable for treatment with or without active monitoring. This work was partially supported by Qfix.

  20. Gender classification system in uncontrolled environments

    NASA Astrophysics Data System (ADS)

    Zeng, Pingping; Zhang, Yu-Jin; Duan, Fei

    2011-01-01

    Most face analysis systems available today operate mainly on restricted image databases in terms of size, age, and illumination. In addition, it is frequently assumed that all images are frontal and unconcealed. In practice, in non-guided real-time surveillance, the face pictures captured may often be partially covered and show varying degrees of head rotation. In this paper, a system intended for real-time surveillance with an un-calibrated camera and non-guided photography is described. It mainly consists of five parts: face detection, non-face filtering, best-angle face selection, texture normalization, and gender classification. Emphasis is placed on the non-face filtering and best-angle face selection parts as well as texture normalization. Best-angle faces are identified by PCA reconstruction, which amounts to an implicit face alignment and results in a large increase in the accuracy of gender classification. A dynamic skin model and a masked PCA reconstruction algorithm are applied to filter out faces detected in error. In order to fully include facial-texture and shape-outline features, a hybrid feature combining Gabor wavelets and PHoG (pyramid histogram of gradients) is proposed to balance inner texture and outer contour. A comparative study of the effects of different non-face filtering and texture masking methods on gender classification by SVM is reported through experiments on a set of UT (a company name) face images, a large number of Internet images, and the CAS (Chinese Academy of Sciences) face database. Some encouraging results are obtained.

  1. Motion facilitates face perception across changes in viewpoint and expression in older adults.

    PubMed

    Maguinness, Corrina; Newell, Fiona N

    2014-12-01

    Faces are inherently dynamic stimuli. However, face perception in younger adults appears to be mediated by the ability to extract structural cues from static images and a benefit of motion is inconsistent. In contrast, static face processing is poorer and more image-dependent in older adults. We therefore compared the role of facial motion in younger and older adults to assess whether motion can enhance perception when static cues are insufficient. In our studies, older and younger adults learned faces presented in motion or in a sequence of static images, containing rigid (viewpoint) or nonrigid (expression) changes. Immediately following learning, participants matched a static test image to the learned face which varied by viewpoint (Experiment 1) or expression (Experiment 2) and was either learned or novel. First, we found an age effect with better face matching performance in younger than in older adults. However, we observed face matching performance improved in the older adult group, across changes in viewpoint and expression, when faces were learned in motion relative to static presentation. There was no benefit for facial (nonrigid) motion when the task involved matching inverted faces (Experiment 3), suggesting that the ability to use dynamic face information for the purpose of recognition reflects motion encoding which is specific to upright faces. Our results suggest that ageing may offer a unique insight into how dynamic cues support face processing, which may not be readily observed in younger adults' performance. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  2. Face Hallucination with Linear Regression Model in Semi-Orthogonal Multilinear PCA Method

    NASA Astrophysics Data System (ADS)

    Asavaskulkiet, Krissada

    2018-04-01

    In this paper, we propose a new face hallucination technique that reconstructs face images in HSV color space with a semi-orthogonal multilinear principal component analysis (SO-MPCA) method. This hallucination technique can operate directly on tensors via tensor-to-vector projection by imposing the orthogonality constraint in only one mode. In our experiments, we use facial images from the FERET database to test our hallucination approach, which is demonstrated by extensive experiments with high-quality hallucinated color faces. The experimental results clearly demonstrate that we can generate photorealistic color face images by using the SO-MPCA subspace with a linear regression model.
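
    The final step the abstract describes, a linear regression that maps low-resolution subspace coefficients to high-resolution ones, can be sketched in a plain-PCA setting. Everything below (the random stand-in data, the 4x down-sampling, the subspace sizes) is an illustrative assumption; the paper's SO-MPCA operates on tensors with a semi-orthogonality constraint, which this sketch does not reproduce:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-ins for aligned training faces: 64-pixel "high-res" vectors and
# their 16-pixel "low-res" counterparts (real data would come from e.g. FERET).
hi = rng.normal(size=(100, 64))
lo = hi.reshape(100, 16, 4).mean(axis=2)      # crude 4x down-sampling

def pca(X, k):
    """Mean and top-k principal axes via SVD on centred data."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

mu_l, P_l = pca(lo, 10)
mu_h, P_h = pca(hi, 10)
C_l = (lo - mu_l) @ P_l.T                     # low-res PCA coefficients
C_h = (hi - mu_h) @ P_h.T                     # high-res PCA coefficients

# Linear regression from low-res to high-res coefficient space.
W, *_ = np.linalg.lstsq(C_l, C_h, rcond=None)

def hallucinate(lo_face):
    """Project a low-res face, regress to high-res coefficients, reconstruct."""
    c = (lo_face - mu_l) @ P_l.T
    return mu_h + (c @ W) @ P_h
```

    With real face data the same three steps apply; only the subspace model (MPCA on tensors rather than PCA on vectors) changes.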

  3. Local structure-based image decomposition for feature extraction with applications to face recognition.

    PubMed

    Qian, Jianjun; Yang, Jian; Xu, Yong

    2013-09-01

    This paper presents a robust but simple image feature extraction method, called image decomposition based on local structure (IDLS). It is assumed that in the local window of an image, the macro-pixel (patch) of the central pixel, and those of its neighbors, are locally linear. IDLS captures the local structural information by describing the relationship between the central macro-pixel and its neighbors. This relationship is represented with the linear representation coefficients determined using ridge regression. One image is actually decomposed into a series of sub-images (also called structure images) according to a local structure feature vector. All the structure images, after being down-sampled for dimensionality reduction, are concatenated into one super-vector. Fisher linear discriminant analysis is then used to provide a low-dimensional, compact, and discriminative representation for each super-vector. The proposed method is applied to face recognition and examined using our real-world face image database, NUST-RWFR, and five popular, publicly available, benchmark face image databases (AR, Extended Yale B, PIE, FERET, and LFW). Experimental results show the performance advantages of IDLS over state-of-the-art algorithms.
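
    The decomposition step can be sketched directly from the description above: the central macro-pixel is regressed onto its eight neighbours' macro-pixels with ridge regression, and the eight coefficients per pixel form the structure images. The 3x3 patch size and the ridge parameter below are illustrative choices, and the down-sampling and Fisher discriminant stages are omitted:

```python
import numpy as np

def idls_features(img, patch=3, lam=0.1):
    """Sketch of IDLS: for each interior pixel, represent its macro-pixel as
    a ridge-regression combination of its 8 neighbours' macro-pixels."""
    H, W = img.shape
    r = patch // 2
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    feat = np.zeros((H, W, 8))
    for y in range(r + 1, H - r - 1):
        for x in range(r + 1, W - r - 1):
            # central macro-pixel as a vector
            p0 = img[y - r:y + r + 1, x - r:x + r + 1].ravel().astype(float)
            # neighbours' macro-pixels as columns of the design matrix
            A = np.stack([img[y + dy - r:y + dy + r + 1,
                              x + dx - r:x + dx + r + 1].ravel()
                          for dy, dx in offs], axis=1).astype(float)
            # ridge regression: (A^T A + lam*I) w = A^T p0
            w = np.linalg.solve(A.T @ A + lam * np.eye(8), A.T @ p0)
            feat[y, x] = w
    return feat  # feat[..., k] is the k-th "structure image"
```

    On a flat region all neighbour patches equal the central one, so the eight coefficients come out equal and sum to roughly one, which is the locally-linear assumption stated in the abstract.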

  4. Choosing face: The curse of self in profile image selection.

    PubMed

    White, David; Sutherland, Clare A M; Burton, Amy L

    2017-01-01

    People draw automatic social inferences from photos of unfamiliar faces and these first impressions are associated with important real-world outcomes. Here we examine the effect of selecting online profile images on first impressions. We model the process of profile image selection by asking participants to indicate the likelihood that images of their own face ("self-selection") and of an unfamiliar face ("other-selection") would be used as profile images on key social networking sites. Across two large Internet-based studies (n = 610), in line with predictions, image selections accentuated favorable social impressions and these impressions were aligned to the social context of the networking sites. However, contrary to predictions based on people's general expertise in self-presentation, other-selected images conferred more favorable impressions than self-selected images. We conclude that people make suboptimal choices when selecting their own profile pictures, such that self-perception places important limits on facial first impressions formed by others. These results underscore the dynamic nature of person perception in real-world contexts.

  5. Positive and negative ion beam merging system for neutral beam production

    DOEpatents

    Leung, Ka-Ngo; Reijonen, Jani

    2005-12-13

    The positive and negative ion beam merging system extracts positive and negative ions of the same species and of the same energy from two separate ion sources. The positive and negative ions from both sources pass through a bending magnetic field region between the pole faces of an electromagnet. Since the positive and negative ions come from mirror image positions on opposite sides of a beam axis, and the positive and negative ions are identical, the trajectories will be symmetrical and the positive and negative ion beams will merge into a single neutral beam as they leave the pole face of the electromagnet. The ion sources are preferably multicusp plasma ion sources. The ion sources may include a multi-aperture extraction system for increasing ion current from the sources.

  6. An objective electrophysiological marker of face individualisation impairment in acquired prosopagnosia with fast periodic visual stimulation.

    PubMed

    Liu-Shuang, Joan; Torfs, Katrien; Rossion, Bruno

    2016-03-01

    One of the most striking pieces of evidence for a specialised face processing system in humans is acquired prosopagnosia, i.e. the inability to individualise faces following brain damage. However, a sensitive and objective non-behavioural marker for this deficit is difficult to provide with standard event-related potentials (ERPs), such as the well-known face-related N170 component reported and investigated in depth by our late distinguished colleague Shlomo Bentin. Here we demonstrate that fast periodic visual stimulation (FPVS) in electrophysiology can quantify face individualisation impairment in acquired prosopagnosia. In Experiment 1 (Liu-Shuang et al., 2014), identical faces were presented at a rate of 5.88 Hz (i.e., ≈ 6 images/s, SOA = 170 ms, 1 fixation per image), with different faces appearing as every 5th face (5.88 Hz/5 = 1.18 Hz). Responses of interest were identified at these predetermined frequencies (i.e., objectively) in the EEG frequency-domain data. A well-studied case of acquired prosopagnosia (PS) and a group of age- and gender-matched controls completed only 4 × 1-min stimulation sequences, with an orthogonal fixation-cross task. Contrary to controls, PS did not show face individualisation responses at 1.18 Hz, in line with her prosopagnosia. However, her response at 5.88 Hz, reflecting general visual processing, was within the normal range. In Experiment 2 (Rossion et al., 2015), we presented natural (i.e., unsegmented) images of objects at 5.88 Hz, with face images shown as every 5th image (1.18 Hz). In accordance with her preserved ability to categorise a face as a face, and despite extensive brain lesions potentially affecting the overall EEG signal-to-noise ratio, PS showed 1.18 Hz face-selective responses within the normal range. Collectively, these findings show that fast periodic visual stimulation provides objective and sensitive electrophysiological markers of preserved and impaired face processing abilities in the neuropsychological population. Copyright © 2015 Elsevier Ltd. All rights reserved.
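
    The logic of the frequency-domain analysis, a response measured at a predetermined frequency against neighbouring noise bins, can be illustrated on synthetic data. The sampling rate, component amplitudes, and SNR definition below are illustrative assumptions, not the parameters used in the studies:

```python
import numpy as np

fs = 250.0                      # assumed sampling rate (Hz)
t = np.arange(0, 60.0, 1 / fs)  # one 1-min stimulation sequence
rng = np.random.default_rng(0)

# Toy EEG: a general visual response at the 5.88 Hz base rate and a weaker
# face-individualisation response at the 1.18 Hz oddball rate, plus noise.
eeg = (np.sin(2 * np.pi * 5.88 * t)
       + 0.5 * np.sin(2 * np.pi * 1.18 * t)
       + rng.normal(0.0, 1.0, t.size))

spec = np.abs(np.fft.rfft(eeg)) / t.size       # amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def snr_at(f0, spec, freqs, half_width=20, skip=1):
    """Amplitude at the bin nearest f0, relative to surrounding noise bins."""
    k = int(np.argmin(np.abs(freqs - f0)))
    neigh = np.r_[spec[k - half_width:k - skip],
                  spec[k + skip + 1:k + half_width + 1]]
    return spec[k] / neigh.mean()
```

    Responses are then read off objectively: `snr_at(5.88, spec, freqs)` and `snr_at(1.18, spec, freqs)` are both well above 1 for this synthetic recording, whereas bins away from the tagged frequencies hover around the noise level.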

  7. Interactive display system having a scaled virtual target zone

    DOEpatents

    Veligdan, James T.; DeSanto, Leonard

    2006-06-13

    A display system includes a waveguide optical panel having an inlet face and an opposite outlet face. A projector and imaging device cooperate with the panel for projecting a video image thereon. An optical detector bridges at least a portion of the waveguides for detecting a location on the outlet face within a target zone of an inbound light spot. A controller is operatively coupled to the imaging device and detector for displaying a cursor on the outlet face corresponding with the detected location of the spot within the target zone.

  8. The Influence of Social Comparison on Visual Representation of One's Face

    PubMed Central

    Zell, Ethan; Balcetis, Emily

    2012-01-01

    Can the effects of social comparison extend beyond explicit evaluation to visual self-representation—a perceptual stimulus that is objectively verifiable, unambiguous, and frequently updated? We morphed images of participants' faces with attractive and unattractive references. With access to a mirror, participants selected the morphed image they perceived as depicting their face. Participants who engaged in upward comparison with relevant attractive targets selected a less attractive morph compared to participants exposed to control images (Study 1). After downward comparison with relevant unattractive targets compared to control images, participants selected a more attractive morph (Study 2). Biased representations were not the products of cognitive accessibility of beauty constructs; comparisons did not influence representations of strangers' faces (Study 3). We discuss implications for vision, social comparison, and body image. PMID:22662124

  9. GrayQb™ Single-Faced Version 2 (SF2) Hanford Plutonium Reclamation Facility (PRF) deployment report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plummer, J. R.; Immel, D. M.; Serrato, M. G.

    2015-11-18

    The Savannah River National Laboratory (SRNL) in partnership with CH2M Plateau Remediation Company (CHPRC) deployed the GrayQb™ SF2 radiation imaging device at the Hanford Plutonium Reclamation Facility (PRF) to assist in the radiological characterization of the canyon. The deployment goal was to locate radiological contamination hot spots in the PRF canyon, where pencil tanks were removed and decontamination/debris removal operations are on-going, to support the CHPRC facility decontamination and decommissioning (D&D) effort. The PRF canyon D&D effort supports completion of the CHPRC Plutonium Finishing Plant Decommissioning Project. The GrayQb™ SF2 (Single Faced Version 2) is a non-destructive examination device developed by SRNL to generate radiation contour maps showing source locations and relative radiological levels present in the area under examination. The Hanford PRF GrayQb™ deployment was sponsored by CH2M Plateau Remediation Company (CHPRC) through the DOE Richland Operations Office, Inter-Entity Work Order (IEWO), DOE-RL IEWO-M0SR900210.

  10. Integrated one- and two-photon scanned oblique plane illumination (SOPi) microscopy for rapid volumetric imaging

    NASA Astrophysics Data System (ADS)

    Kumar, Manish; Kishore, Sandeep; Nasenbeny, Jordan; McLean, David L.; Kozorovitskiy, Yevgenia

    2018-05-01

    Versatile, sterically accessible imaging systems capable of in vivo rapid volumetric functional and structural imaging deep in the brain continue to be a limiting factor in neuroscience research. Towards overcoming this obstacle, we present integrated one- and two-photon scanned oblique plane illumination (SOPi) microscopy which uses a single front-facing microscope objective to provide light-sheet scanning based rapid volumetric imaging capability at subcellular resolution. Our planar scan-mirror based optimized light-sheet architecture allows for non-distorted scanning of volume samples, simplifying accurate reconstruction of the imaged volume. Integration of both one-photon (1P) and two-photon (2P) light-sheet microscopy in the same system allows for easy selection between rapid volumetric imaging and higher resolution imaging in scattering media. Using SOPi, we demonstrate deep, large volume imaging capability inside scattering mouse brain sections and rapid imaging speeds up to 10 volumes per second in zebrafish larvae expressing genetically encoded fluorescent proteins GFP or GCaMP6s. SOPi's flexibility and steric access make it adaptable for numerous imaging applications and broadly compatible with orthogonal techniques for actuating or interrogating neuronal structure and activity.

  11. Integrated one- and two-photon scanned oblique plane illumination (SOPi) microscopy for rapid volumetric imaging.

    PubMed

    Kumar, Manish; Kishore, Sandeep; Nasenbeny, Jordan; McLean, David L; Kozorovitskiy, Yevgenia

    2018-05-14

    Versatile, sterically accessible imaging systems capable of in vivo rapid volumetric functional and structural imaging deep in the brain continue to be a limiting factor in neuroscience research. Towards overcoming this obstacle, we present integrated one- and two-photon scanned oblique plane illumination (SOPi, /sōpī/) microscopy which uses a single front-facing microscope objective to provide light-sheet scanning based rapid volumetric imaging capability at subcellular resolution. Our planar scan-mirror based optimized light-sheet architecture allows for non-distorted scanning of volume samples, simplifying accurate reconstruction of the imaged volume. Integration of both one-photon (1P) and two-photon (2P) light-sheet microscopy in the same system allows for easy selection between rapid volumetric imaging and higher resolution imaging in scattering media. Using SOPi, we demonstrate deep, large volume imaging capability inside scattering mouse brain sections and rapid imaging speeds up to 10 volumes per second in zebrafish larvae expressing genetically encoded fluorescent proteins GFP or GCaMP6s. SOPi's flexibility and steric access make it adaptable for numerous imaging applications and broadly compatible with orthogonal techniques for actuating or interrogating neuronal structure and activity.

  12. Ensemble coding of face identity is present but weaker in congenital prosopagnosia.

    PubMed

    Robson, Matthew K; Palermo, Romina; Jeffery, Linda; Neumann, Markus F

    2018-03-01

    Individuals with congenital prosopagnosia (CP) are impaired at identifying individual faces but do not appear to show impairments in extracting the average identity from a group of faces (known as ensemble coding). However, possible deficits in ensemble coding in a previous study (CPs n = 4) may have been masked because CPs relied on pictorial (image) cues rather than identity cues. Here we asked whether a larger sample of CPs (n = 11) would show intact ensemble coding of identity when availability of image cues was minimised. Participants viewed a "set" of four faces and then judged whether a subsequent individual test face, either an exemplar or a "set average", was in the preceding set. Ensemble coding occurred when matching (vs. mismatching) averages were mistakenly endorsed as set members. We assessed both image- and identity-based ensemble coding, by varying whether test faces were either the same or different images of the identities in the set. CPs showed significant ensemble coding in both tasks, indicating that their performance was independent of image cues. As a group, CPs' ensemble coding was weaker than controls' in both tasks, consistent with evidence that perceptual processing of face identity is disrupted in CP. This effect was driven by CPs (n = 3) who, in addition to having impaired face memory, also performed particularly poorly on a measure of face perception (CFPT). Future research, using larger samples, should examine whether deficits in ensemble coding may be restricted to CPs who also have substantial face perception deficits. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Automatic Detection of Frontal Face Midline by Chain-coded Merlin-Farber Hough Transform

    NASA Astrophysics Data System (ADS)

    Okamoto, Daichi; Ohyama, Wataru; Wakabayashi, Tetsushi; Kimura, Fumitaka

    We propose a novel approach for detecting the facial midline (facial symmetry axis) from a frontal face image. The facial midline has several applications, for instance reducing the computational cost required for facial feature extraction (FFE) and supporting postoperative assessment for cosmetic or dental surgery. The proposed method detects the facial midline of a frontal face from an edge image as the symmetry axis using the Merlin-Farber Hough transform (MFHT). In addition, a new performance-improvement scheme for midline detection by MFHT is presented. The main concept of the proposed scheme is the suppression of redundant votes in the Hough parameter space by introducing a chain-code representation of the binary edge image. Experimental results on an image dataset containing 2409 images from the FERET database indicate that the proposed algorithm improves the accuracy of midline detection from 89.9% to 95.1% for face images with different scales and rotations.
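
    The voting scheme underlying the Merlin-Farber transform can be sketched as follows: every pair of edge points votes for its perpendicular bisector in (θ, ρ) space, so a symmetry axis accumulates votes from all of its mirrored pairs. The brute-force O(N²) pairing and the accumulator resolution are illustrative, and the paper's chain-code vote-suppression scheme is not reproduced here:

```python
import numpy as np

def symmetry_axis(points, n_theta=180, n_rho=200, rho_max=100.0):
    """Return (theta, rho) of the strongest symmetry axis, with the line
    parameterised as x*cos(theta) + y*sin(theta) = rho."""
    pts = np.asarray(points, dtype=float)
    acc = np.zeros((n_theta, n_rho))
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            d = pts[j] - pts[i]            # pair direction = axis normal
            m = (pts[i] + pts[j]) / 2.0    # bisector passes through midpoint
            theta = np.arctan2(d[1], d[0]) % np.pi
            rho = m[0] * np.cos(theta) + m[1] * np.sin(theta)
            ti = int(theta / np.pi * n_theta) % n_theta
            ri = int(rho / rho_max * n_rho)
            if 0 <= ri < n_rho:
                acc[ti, ri] += 1           # one vote per point pair
    ti, ri = np.unravel_index(np.argmax(acc), acc.shape)
    return ti * np.pi / n_theta, ri * rho_max / n_rho
```

    For edge points mirrored about the vertical line x = 50, the accumulator peak lands at θ ≈ 0, ρ ≈ 50, i.e. the midline.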

  14. Responses of chimpanzees to cues of conspecific observation

    PubMed Central

    Nettle, Daniel; Cronin, Katherine A.; Bateson, Melissa

    2013-01-01

    Recent evidence has shown that humans are remarkably sensitive to artificial cues of conspecific observation when making decisions with potential social consequences. Whether similar effects are found in other great apes has not yet been investigated. We carried out two experiments in which individual chimpanzees, Pan troglodytes, took items of food from an array in the presence of either an image of a large conspecific face or a scrambled control image. In experiment 1 we compared three versions of the face image varying in size and the amount of the face displayed. In experiment 2 we compared a fourth variant of the image with more prominent coloured eyes displayed closer to the focal chimpanzee. The chimpanzees did not look at the face images significantly more than at the control images in either experiment. Although there were trends for some individuals in each experiment to be slower to take high-value food items in the face conditions, these were not consistent or robust. We suggest that the extreme human sensitivity to cues of potential conspecific observation may not be shared with chimpanzees. PMID:24027343

  15. Identifiable Images of Bystanders Extracted from Corneal Reflections

    PubMed Central

    Jenkins, Rob; Kerr, Christie

    2013-01-01

    Criminal investigations often use photographic evidence to identify suspects. Here we combined robust face perception and high-resolution photography to mine face photographs for hidden information. By zooming in on high-resolution face photographs, we were able to recover images of unseen bystanders from reflections in the subjects' eyes. To establish whether these bystanders could be identified from the reflection images, we presented them as stimuli in a face matching task (Experiment 1). Accuracy in the face matching task was well above chance (50%), despite the unpromising source of the stimuli. Participants who were unfamiliar with the bystanders' faces (n = 16) performed at 71% accuracy [t(15) = 7.64, p<.0001, d = 1.91], and participants who were familiar with the faces (n = 16) performed at 84% accuracy [t(15) = 11.15, p<.0001, d = 2.79]. In a test of spontaneous recognition (Experiment 2), observers could reliably name a familiar face from an eye reflection image. For crimes in which the victims are photographed (e.g., hostage taking, child sex abuse), reflections in the eyes of the photographic subject could help to identify perpetrators. PMID:24386177

  16. DeitY-TU face database: its design, multiple camera capturing, characteristics, and evaluation

    NASA Astrophysics Data System (ADS)

    Bhowmik, Mrinal Kanti; Saha, Kankan; Saha, Priya; Bhattacharjee, Debotosh

    2014-10-01

    The development of the latest face databases is providing researchers with different and realistic problems that play an important role in the development of efficient algorithms for the automatic recognition of human faces. This paper presents the creation of a new visual face database, named the Department of Electronics and Information Technology-Tripura University (DeitY-TU) face database. It contains face images of 524 persons belonging to different nontribes and Mongolian tribes of north-east India, with their anthropometric measurements for identification. Database images are captured within a room with controlled variations in illumination, expression, and pose, along with variability in age, gender, accessories, make-up, and partial occlusion. Each image contains the combined primary challenges of face recognition, i.e., illumination, expression, and pose. This database also represents some new features: soft biometric traits such as moles, freckles, scars, etc., and facial anthropometric variations that may be helpful to researchers in biometric recognition. It also provides a comparative study of existing two-dimensional face image databases. The database has been tested using two baseline algorithms, linear discriminant analysis and principal component analysis, whose scores may serve other researchers as control algorithm performance baselines.

  17. Face recognition in the thermal infrared domain

    NASA Astrophysics Data System (ADS)

    Kowalski, M.; Grudzień, A.; Palka, N.; Szustakowski, M.

    2017-10-01

    Biometrics refers to unique human characteristics. Each unique characteristic may be used to label and describe individuals and for automatic recognition of a person based on physiological or behavioural properties. One of the most natural and most popular biometric traits is the face. Most research on face recognition is based on visible light. State-of-the-art face recognition systems operating in the visible light spectrum achieve a very high level of recognition accuracy under controlled environmental conditions. Thermal infrared imagery, i.e., mid-wavelength and far-wavelength infrared, seems to be a promising alternative or complement to visible-range imaging due to its relatively high resistance to illumination changes. A thermal infrared image of the human face presents its unique heat signature and can be used for recognition. The characteristics of thermal images maintain advantages over visible-light images and can be used to improve human face recognition algorithms in several aspects. We present a study on 1:1 recognition in the thermal infrared domain. The two approaches we consider are stand-off face verification of a non-moving person and stop-less face verification on the move. The paper presents the methodology of our studies and the challenges for face recognition systems in the thermal infrared domain.

  18. Face detection in color images using skin color, Laplacian of Gaussian, and Euler number

    NASA Astrophysics Data System (ADS)

    Saligrama Sundara Raman, Shylaja; Kannanedhi Narasimha Sastry, Balasubramanya Murthy; Subramanyam, Natarajan; Senkutuvan, Ramya; Srikanth, Radhika; John, Nikita; Rao, Prateek

    2010-02-01

    In this paper, a feature-based approach to face detection is proposed using an ensemble of algorithms. The method uses chrominance values and edge features to classify image regions as skin or nonskin. The edge detector used for this purpose is the Laplacian of Gaussian (LoG), which is found to be appropriate for noisy images containing multiple faces. Eight-connectivity analysis of these regions segregates them as probable face or nonface. The procedure is made more robust by identifying local features within these skin regions, including the number of holes, the percentage of skin, and the golden ratio. The proposed method has been tested on color face images of various races obtained from different sources, and its performance is found to be encouraging, as the color segmentation cleans up almost all the complex facial features. The result obtained has a calculated accuracy of 86.5% on a test set of 230 images.
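
    The first stage, classifying pixels by chrominance, can be sketched with an RGB-to-YCbCr conversion and rectangular Cb/Cr bounds. The threshold values below are common rule-of-thumb bounds from the skin-detection literature, not necessarily those used in this paper, and the LoG edge and connectivity stages are omitted:

```python
import numpy as np

def skin_mask(rgb):
    """Classify pixels as skin by chrominance (Cb, Cr) thresholds.
    rgb: (H, W, 3) array of 8-bit RGB values."""
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # BT.601 chrominance components
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    # rule-of-thumb skin box in the Cb-Cr plane
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
```

    Connected-component analysis (hole counts, skin percentage, golden ratio of the bounding box) would then run on this binary mask to separate probable faces from other skin-coloured regions.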

  19. The So-Called Face

    NASA Image and Video Library

    2002-05-21

    The so-called Face on Mars can be seen slightly above center and to the right in this NASA Mars Odyssey image. This 3-km-long knob was first imaged by NASA's Viking spacecraft in the 1970s and to some resembled a face carved into the rocks of Mars.

  20. Relevance of Whitnall's tubercle and auditory meatus in diagnosing exclusions during skull-photo superimposition.

    PubMed

    Jayaprakash, Paul T; Hashim, Natassha; Yusop, Ridzuan Abd Aziz Mohd

    2015-08-01

    Video vision mixer based skull-photo superimposition is a popular method for identifying skulls retrieved from unidentified human remains. A report on the reliability of the superimposition method suggested increased failure rates of 17.3 to 32% to exclude and 15 to 20% to include skulls while using related and unrelated face photographs. Such a rise in failures prompted an analysis of the methods employed in that research. The protocols adopted for assessing reliability are seen to vary from those suggested by practitioners in the field. The former involve overlaying the skull and face images on the basis of morphology, relying on anthropometric landmarks on the front plane of the face images, and evaluating the goodness of match using mix-mode images; the latter consist of orienting the skull using landmarks on both the eye and ear planes of the face and skull images and evaluating the match using images seen in wipe mode in addition to those in mix mode. Superimposition of a skull with face images of five living individuals in two sets of experiments, one following the procedure described in the research on reliability and the other applying the methods suggested by practitioners, has shown that overlaying the images on the basis of morphology, depending on the landmarks on the front plane alone, and assessing the match in mix mode fails to exclude the skull. However, orienting the skull by relying on the relationship between anatomical landmarks on the skull and face images, such as Whitnall's tubercle and the exocanthus in the front (eye) plane and the porion and tragus in the rear (ear) plane, and assessing the match using wipe-mode images enables excluding that skull when superimposing with the same set of face images. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  1. Steganography anomaly detection using simple one-class classification

    NASA Astrophysics Data System (ADS)

    Rodriguez, Benjamin M.; Peterson, Gilbert L.; Agaian, Sos S.

    2007-04-01

    There are several security issues tied to multimedia when implementing the various applications in the cellular phone and wireless industry. One primary concern is the potential ease of implementing a steganography system. Traditionally, the only mechanism to embed information into a media file has been with a desktop computer. However, as the cellular phone and wireless industry matures, it becomes much simpler for the same techniques to be performed using a cell phone. In this paper, two methods are compared that classify cell phone images as either an anomaly or clean, where a clean image is one in which no alterations have been made and an anomalous image is one in which information has been hidden within the image. An image in which information has been hidden is known as a stego image. The main concern in detecting steganographic content with machine learning using cell phone images is in training specific embedding procedures to determine if the method has been used to generate a stego image. This leads to a possible flaw in the system when the learned model of stego is faced with a new stego method which doesn't match the existing model. The proposed solution to this problem is to develop systems that detect steganography as anomalies, making the embedding method irrelevant in detection. Two applicable classification methods for solving the anomaly detection of steganographic content problem are single class support vector machines (SVM) and Parzen-window. Empirical comparison of the two approaches shows that Parzen-window outperforms the single class SVM most likely due to the fact that Parzen-window generalizes less.
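
    The anomaly-detection framing, training on clean images only and flagging anything whose estimated density is low, can be sketched with a Parzen-window (Gaussian kernel) estimator. The 2-D toy features, bandwidth, and 5% threshold below are illustrative assumptions, not the feature set or settings of the paper:

```python
import numpy as np

def parzen_scores(train, test, h=0.5):
    """Log-density of each test point under a Gaussian Parzen-window estimate
    fitted to the training ("clean") points; low scores indicate anomalies."""
    d = train.shape[1]
    diff = test[:, None, :] - train[None, :, :]
    log_k = (-(diff ** 2).sum(-1) / (2 * h * h)
             - 0.5 * d * np.log(2 * np.pi * h * h))
    m = log_k.max(axis=1, keepdims=True)   # stable log-mean-exp over kernels
    return m[:, 0] + np.log(np.exp(log_k - m).mean(axis=1))

rng = np.random.default_rng(1)
clean = rng.normal(0.0, 1.0, (200, 2))     # toy features of clean images
# threshold: 5th percentile of the clean data's own scores
threshold = np.quantile(parzen_scores(clean, clean), 0.05)

tests = np.array([[0.0, 0.0],              # in-distribution ("clean")
                  [6.0, 6.0]])             # far from training data ("stego")
scores = parzen_scores(clean, tests)
flags = scores < threshold                 # True marks an anomaly
```

    Because only clean data is modelled, a previously unseen embedding method still falls outside the estimated density, which is exactly the advantage over per-method classifiers argued above.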

  2. The N170 component is sensitive to face-like stimuli: a study of Chinese Peking opera makeup.

    PubMed

    Liu, Tiantian; Mu, Shoukuan; He, Huamin; Zhang, Lingcong; Fan, Cong; Ren, Jie; Zhang, Mingming; He, Weiqi; Luo, Wenbo

    2016-12-01

    The N170 component is considered a neural marker of face-sensitive processing. In the present study, the face-sensitive N170 component of event-related potentials (ERPs) was investigated with a modified oddball paradigm using a natural face (the standard stimulus), human- and animal-like makeup stimuli, scrambled control images that mixed human- and animal-like makeup pieces, and a grey control image. Nineteen participants were instructed to respond within 1000 ms by pressing the 'F' or 'J' key in response to the standard or deviant stimuli, respectively. We simultaneously recorded ERPs, response accuracy, and reaction times. The behavioral results showed that the main effect of stimulus type was significant for reaction time, whereas there were no significant differences in response accuracy among stimulus types. In relation to the ERPs, N170 amplitudes elicited by human-like makeup stimuli, animal-like makeup stimuli, scrambled control images, and the grey control image progressively decreased. A right-hemisphere advantage was observed in the N170 amplitudes for human-like makeup stimuli, animal-like makeup stimuli, and scrambled control images, but not for the grey control image. These results indicate that the N170 component is sensitive to face-like stimuli and reflects configural processing in face recognition.

  3. 76 FR 65101 - Special Conditions: Embraer S.A.; Model EMB 500; Single-Place Side Facing Seat Dynamic Test...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-20

    ... anthropomorphic test dummy (ATD) or its equivalent, undeformed floor, no yaw, and with all lateral structural... Side Facing Seat Dynamic Test Requirements AGENCY: Federal Aviation Administration (FAA), DOT. ACTION... installation of a single-place side facing seat on Embraer S.A. EMB 500 aircraft. Side- facing seats are...

  4. Is the N170 for faces cognitively penetrable? Evidence from repetition priming of Mooney faces of familiar and unfamiliar persons.

    PubMed

    Jemel, Boutheina; Pisani, Michèle; Calabria, Marco; Crommelinck, Marc; Bruyer, Raymond

    2003-07-01

    Impoverished images of faces, two-tone Mooney faces, severely impair the ability to recognize to whom the face pertains. However, previously seeing the corresponding face in a clear format facilitates fame judgments of Mooney faces. In the present experiment, we sought to demonstrate that enhancement in the perceptual encoding of Mooney faces results from top-down effects, due to previous activation of familiar face representations. Event-related potentials (ERPs) were obtained for target Mooney images of familiar and unfamiliar faces preceded by clear pictures portraying either the same photo (same-photo prime), a different photo of the same person (different-photo prime), or a new unfamiliar face (no prime). In agreement with previous findings, the use of primes was effective in enhancing the recognition of familiar faces in Mooney images; this priming effect was larger in the same-photo than in the different-photo priming condition. ERP data revealed that the amplitude of the N170 face-sensitive component was smaller when elicited by familiar than by unfamiliar face targets, and for familiar face targets primed by the same rather than by different photos (a graded priming effect). Because the priming effect was restricted to familiar faces and occurred at the peak of the N170, we suggest that the early perceptual stage of face processing is likely penetrable by top-down effects due to the activation of face representations within the face recognition system.

  5. Pose invariant face recognition: 3D model from single photo

    NASA Astrophysics Data System (ADS)

    Napoléon, Thibault; Alfalou, Ayman

    2017-02-01

    Face recognition is widely studied in the literature for its possibilities in surveillance and security. In this paper, we report a novel algorithm for the identification task. This technique is based on optimized 3D modeling that reconstructs faces in different poses from a limited number of references (i.e. one image per class/person). In particular, we use an active shape model to detect a set of keypoints on the face needed to deform our synthetic model with an optimized finite element method; to improve the deformation, we add a regularization based on distances on a graph. To perform the identification we use the VanderLugt correlator (VLC), well known to address this task effectively. We also add a difference-of-Gaussians filtering step to highlight the edges and a description step based on local binary patterns. The experiments are performed on the PHPID database enhanced with our 3D reconstructed faces of each person, with azimuth and elevation ranging from -30° to +30°. The obtained results demonstrate the robustness of the new method, with 88.76% correct identification, whereas the classic 2D approach (based on the VLC) obtains just 44.97%.
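The two description steps named above (difference-of-Gaussians edge enhancement followed by local binary patterns) can be sketched in a few lines of numpy/scipy. The sigma values and the 8-neighbour LBP variant are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog(image, sigma1=1.0, sigma2=2.0):
    """Difference of Gaussians: a band-pass filter that emphasises edges."""
    return gaussian_filter(image, sigma1) - gaussian_filter(image, sigma2)

def lbp8(image):
    """Basic 8-neighbour local binary pattern code for each interior pixel."""
    c = image[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = image[1 + dy:image.shape[0] - 1 + dy,
                      1 + dx:image.shape[1] - 1 + dx]
        code += (neigh >= c) * (1 << bit)  # set this bit where neighbour >= centre
    return code

face = np.random.default_rng(1).random((64, 64))   # stand-in for a face image
descriptor, _ = np.histogram(lbp8(dog(face)), bins=256, range=(0, 256))
```

The 256-bin histogram of LBP codes is the kind of compact, illumination-robust descriptor that would then feed the correlation stage.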

  6. Facial expression identification using 3D geometric features from Microsoft Kinect device

    NASA Astrophysics Data System (ADS)

    Han, Dongxu; Al Jawad, Naseer; Du, Hongbo

    2016-05-01

    Facial expression identification is an important part of face recognition and closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. The Microsoft Kinect device has been widely used for multimedia interactions and, more recently, increasingly deployed to support scientific investigations. This paper explores the effectiveness of using the device in identifying emotional facial expressions such as surprise, smiling, and sadness, and evaluates the usefulness of 3D data points on a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component that is derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames starting and ending with a neutral expression represent a whole expression. The feature vector eliminates the need for complex face orientation correction, simplifying the feature extraction process and making it more efficient. We applied a kNN classifier that exploits a feature-component-based similarity measure following the principle of dynamic time warping to determine the closest neighbors. Preliminary tests on a small-scale database of different facial expressions show promise for the newly developed features and the usefulness of the Kinect device in facial expression identification.
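The classification stage described above can be sketched as follows: expression sequences of unequal length are compared with dynamic time warping (DTW) over per-frame geometric feature vectors, and a nearest-neighbour rule picks the label. The toy two-dimensional trajectories below are synthetic stand-ins for the Kinect mesh-distance features.

```python
import numpy as np

def dtw(a, b):
    """DTW cost between two sequences of feature vectors (rows = frames)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # per-frame distance
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def knn_predict(query, train):
    """train: list of (sequence, label); return the label of the nearest sequence."""
    return min(train, key=lambda t: dtw(query, t[0]))[1]

smile = np.linspace(0, 1, 10)[:, None] * [1.0, 0.2]     # toy "smile" trajectory
surprise = np.linspace(0, 1, 12)[:, None] * [0.1, 1.0]  # toy "surprise" trajectory
train = [(smile, "smile"), (surprise, "surprise")]
query = np.linspace(0, 1, 11)[:, None] * [0.9, 0.25]    # smile-like, different length
print(knn_predict(query, train))  # -> smile
```

DTW's elastic alignment is what lets sequences of different frame counts (here 10, 11, and 12 frames) be compared directly, without resampling.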

  7. Discrimination of human and dog faces and inversion responses in domestic dogs (Canis familiaris).

    PubMed

    Racca, Anaïs; Amadei, Eleonora; Ligout, Séverine; Guo, Kun; Meints, Kerstin; Mills, Daniel

    2010-05-01

    Although domestic dogs can respond to many facial cues displayed by other dogs and humans, it remains unclear whether they can differentiate individual dogs or humans based on facial cues alone and, if so, whether they would demonstrate the face inversion effect, a behavioural hallmark commonly used in primates to differentiate face processing from object processing. In this study, we first established the applicability of the visual paired comparison (VPC, or preferential looking) procedure for dogs using a simple object discrimination task with 2D pictures. The animals demonstrated a clear looking preference for novel objects when simultaneously presented with prior-exposed familiar objects. We then adopted this VPC procedure to assess their face discrimination and inversion responses. Dogs showed a deviation from random behaviour, indicating discrimination capability when inspecting upright dog faces, human faces and object images, but the pattern of viewing preference was dependent upon image category. They directed longer viewing time at novel (vs. familiar) human faces and objects, but not at dog faces; instead, a longer viewing time at familiar (vs. novel) dog faces was observed. No significant looking preference was detected for inverted images regardless of image category. Our results indicate that domestic dogs can use facial cues alone to differentiate individual dogs and humans and that they exhibit a non-specific inversion response. In addition, dogs' discrimination responses to human and dog faces appear to differ with the type of face involved.

  8. Single slice US-MRI registration for neurosurgical MRI-guided US

    NASA Astrophysics Data System (ADS)

    Pardasani, Utsav; Baxter, John S. H.; Peters, Terry M.; Khan, Ali R.

    2016-03-01

    Image-based ultrasound to magnetic resonance image (US-MRI) registration can be an invaluable tool in image-guided neuronavigation systems. State-of-the-art commercial and research systems utilize image-based registration to assist in functions such as brain-shift correction, image fusion, and probe calibration. Since traditional US-MRI registration techniques use reconstructed US volumes or a series of tracked US slices, the functionality of this approach can be compromised by the limitations of optical or magnetic tracking systems in the neurosurgical operating room. These drawbacks include ergonomic issues, line-of-sight or magnetic interference, and maintenance of the sterile field. For those seeking a US vendor-agnostic system, these issues are compounded by the challenges of instrumenting the probe without permanent modification and calibrating the probe face to the tracking tool. To address these challenges, this paper explores the feasibility of real-time US-MRI volume registration in a small virtual craniotomy site using a single slice. We employ the Linear Correlation of Linear Combination (LC2) similarity metric in its patch-based form on data from MNI's Brain Images for Tumour Evaluation (BITE) dataset, implemented as a PyCUDA-enabled Python module in Slicer. By retaining the original orientation information, we are able to improve on the poses using this approach. To further address the challenge of US-MRI registration, we also present the BOXLC2 metric, which demonstrates a speed improvement over LC2 while retaining similar accuracy in this context.
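The core idea of the LC2 metric named above is that, within a patch, US intensities are least-squares fitted by a linear combination of MRI intensity, MRI gradient magnitude, and a constant; the similarity is the fraction of US variance explained. The single-patch numpy sketch below is a simplification for illustration (the paper uses a patch-based GPU implementation), and the synthetic signals stand in for real image patches.

```python
import numpy as np

def lc2_patch(us, mri, mri_grad):
    """Simplified LC2 for one patch; inputs are flattened intensity arrays."""
    M = np.column_stack([mri, mri_grad, np.ones_like(mri)])  # regressors + offset
    coeffs, *_ = np.linalg.lstsq(M, us, rcond=None)          # least-squares fit
    residual = us - M @ coeffs
    var = us.var()
    return 1.0 - residual.var() / var if var > 0 else 0.0    # variance explained

rng = np.random.default_rng(2)
mri = rng.random(100)
grad = rng.random(100)
us_matched = 2.0 * mri + 0.5 * grad + 0.1 * rng.random(100)  # well-aligned patch
us_random = rng.random(100)                                   # misaligned patch
print(lc2_patch(us_matched, mri, grad), lc2_patch(us_random, mri, grad))
```

A registration loop would maximise this score over candidate slice poses: aligned patches score near 1, unrelated patches near 0.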

  9. Deep learning and face recognition: the state of the art

    NASA Astrophysics Data System (ADS)

    Balaban, Stephen

    2015-05-01

    Deep Neural Networks (DNNs) have established themselves as a dominant technique in machine learning. DNNs have been top performers on a wide variety of tasks including image classification, speech recognition, and face recognition [1-3]. Convolutional neural networks (CNNs) have been used in nearly all of the top performing methods on the Labeled Faces in the Wild (LFW) dataset [3-6]. In this talk and accompanying paper, I attempt to provide a review and summary of the deep learning techniques used in the state-of-the-art. In addition, I highlight the need for both larger and more challenging public datasets to benchmark these systems. Despite the ability of DNNs and autoencoders to perform unsupervised feature learning, modern facial recognition pipelines still require domain specific engineering in the form of re-alignment. For example, in Facebook's recent DeepFace paper, a 3D "frontalization" step lies at the beginning of the pipeline. This step creates a 3D face model for the incoming image and then uses a series of affine transformations of the fiducial points to "frontalize" the image. This step enables the DeepFace system to use a neural network architecture with locally connected layers without weight sharing as opposed to standard convolutional layers [6]. Deep learning techniques combined with large datasets have allowed research groups to surpass human level performance on the LFW dataset [3, 5]. The high accuracy (99.63% for FaceNet at the time of publishing) and utilization of outside data (hundreds of millions of images in the case of Google's FaceNet) suggest that current face verification benchmarks such as LFW may not be challenging enough, nor provide enough data, for current techniques [3, 5]. There exist a variety of organizations with mobile photo sharing applications that would be capable of releasing a very large scale and highly diverse dataset of facial images captured on mobile devices. Such an "ImageNet for Face Recognition" would likely receive a warm welcome from researchers and practitioners alike.

  10. 45 CFR 164.514 - Other requirements relating to uses and disclosures of protected health information.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... numbers; (J) Account numbers; (K) Certificate/license numbers; (L) Vehicle identifiers and serial numbers... and voice prints; (Q) Full face photographic images and any comparable images; and (R) Any other..., including finger and voice prints; and (xvi) Full face photographic images and any comparable images. (3...

  11. 45 CFR 164.514 - Other requirements relating to uses and disclosures of protected health information.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... numbers; (J) Account numbers; (K) Certificate/license numbers; (L) Vehicle identifiers and serial numbers... and voice prints; (Q) Full face photographic images and any comparable images; and (R) Any other..., including finger and voice prints; and (xvi) Full face photographic images and any comparable images. (3...

  12. 45 CFR 164.514 - Other requirements relating to uses and disclosures of protected health information.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... numbers; (J) Account numbers; (K) Certificate/license numbers; (L) Vehicle identifiers and serial numbers... and voice prints; (Q) Full face photographic images and any comparable images; and (R) Any other..., including finger and voice prints; and (xvi) Full face photographic images and any comparable images. (3...

  13. 45 CFR 164.514 - Other requirements relating to uses and disclosures of protected health information.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... numbers; (J) Account numbers; (K) Certificate/license numbers; (L) Vehicle identifiers and serial numbers... and voice prints; (Q) Full face photographic images and any comparable images; and (R) Any other..., including finger and voice prints; and (xvi) Full face photographic images and any comparable images. (3...

  14. 45 CFR 164.514 - Other requirements relating to uses and disclosures of protected health information.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... numbers; (J) Account numbers; (K) Certificate/license numbers; (L) Vehicle identifiers and serial numbers... and voice prints; (Q) Full face photographic images and any comparable images; and (R) Any other..., including finger and voice prints; and (xvi) Full face photographic images and any comparable images. (3...

  15. Guanfacine modulates the influence of emotional cues on prefrontal cortex activation for cognitive control.

    PubMed

    Schulz, Kurt P; Clerkin, Suzanne M; Fan, Jin; Halperin, Jeffrey M; Newcorn, Jeffrey H

    2013-03-01

    Functional interactions between limbic regions that process emotions and frontal networks that guide response functions provide a substrate for emotional cues to influence behavior. Stimulation of postsynaptic α₂ adrenoceptors enhances the function of prefrontal regions in these networks. However, the impact of this stimulation on the emotional biasing of behavior has not been established. This study tested the effect of the postsynaptic α₂ adrenoceptor agonist guanfacine on the emotional biasing of response execution and inhibition in prefrontal cortex. Fifteen healthy young adults were scanned twice with functional magnetic resonance imaging while performing a face emotion go/no-go task following counterbalanced administration of single doses of oral guanfacine (1 mg) and placebo in a double-blind, cross-over design. Lower perceptual sensitivity and less response bias for sad faces resulted in fewer correct responses compared to happy and neutral faces but had no effect on correct inhibitions. Guanfacine increased the sensitivity and bias selectively for sad faces, resulting in response accuracy comparable to happy and neutral faces, and reversed the valence-dependent variation in response-related activation in left dorsolateral prefrontal cortex (DLPFC), resulting in enhanced activation for response execution cued by sad faces relative to happy and neutral faces, in line with other frontoparietal regions. These results provide evidence that guanfacine stimulation of postsynaptic α₂ adrenoceptors moderates DLPFC activation associated with the emotional biasing of response execution processes. The findings have implications for the α₂ adrenoceptor agonist treatment of attention-deficit hyperactivity disorder.

  16. Neurons in the human amygdala selective for perceived emotion

    PubMed Central

    Wang, Shuo; Tudusciuc, Oana; Mamelak, Adam N.; Ross, Ian B.; Adolphs, Ralph; Rutishauser, Ueli

    2014-01-01

    The human amygdala plays a key role in recognizing facial emotions and neurons in the monkey and human amygdala respond to the emotional expression of faces. However, it remains unknown whether these responses are driven primarily by properties of the stimulus or by the perceptual judgments of the perceiver. We investigated these questions by recording from over 200 single neurons in the amygdalae of 7 neurosurgical patients with implanted depth electrodes. We presented degraded fear and happy faces and asked subjects to discriminate their emotion by button press. During trials where subjects responded correctly, we found neurons that distinguished fear vs. happy emotions as expressed by the displayed faces. During incorrect trials, these neurons indicated the patients’ subjective judgment. Additional analysis revealed that, on average, all neuronal responses were modulated most by increases or decreases in response to happy faces, and driven predominantly by judgments about the eye region of the face stimuli. Following the same analyses, we showed that hippocampal neurons, unlike amygdala neurons, only encoded emotions but not subjective judgment. Our results suggest that the amygdala specifically encodes the subjective judgment of emotional faces, but that it plays less of a role in simply encoding aspects of the image array. The conscious percept of the emotion shown in a face may thus arise from interactions between the amygdala and its connections within a distributed cortical network, a scheme also consistent with the long response latencies observed in human amygdala recordings. PMID:24982200

  17. High-speed 3D imaging of cellular activity in the brain using axially-extended beams and light sheets.

    PubMed

    Hillman, Elizabeth Mc; Voleti, Venkatakaushik; Patel, Kripa; Li, Wenze; Yu, Hang; Perez-Campos, Citlali; Benezra, Sam E; Bruno, Randy M; Galwaduge, Pubudu T

    2018-06-01

    As optical reporters and modulators of cellular activity have become increasingly sophisticated, the amount that can be learned about the brain via high-speed cellular imaging has increased dramatically. However, despite fervent innovation, point-scanning microscopy is facing a fundamental limit in achievable 3D imaging speeds and fields of view. A range of alternative approaches are emerging, some of which are moving away from point-scanning to use axially-extended beams or sheets of light, for example swept confocally aligned planar excitation (SCAPE) microscopy. These methods are proving effective for high-speed volumetric imaging of the nervous system of small organisms such as Drosophila (fruit fly) and D. rerio (zebrafish), and are showing promise for imaging activity in the living mammalian brain using both single and two-photon excitation. This article describes these approaches and presents a simple model that demonstrates key advantages of axially-extended illumination over point-scanning strategies for high-speed volumetric imaging, including longer integration times per voxel, improved photon efficiency and reduced photodamage. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Influence of preoperative musculotendinous junction position on rotator cuff healing using single-row technique.

    PubMed

    Tashjian, Robert Z; Hung, Man; Burks, Robert T; Greis, Patrick E

    2013-11-01

    The purpose of this study was to evaluate the correlation of rotator cuff musculotendinous junction (MTJ) retraction with healing after rotator cuff repair and with preoperative sagittal tear size. We reviewed preoperative and postoperative magnetic resonance imaging (MRI) studies of 51 patients undergoing arthroscopic single-row rotator cuff repair between March 1, 2005, and February 20, 2010. Preoperative MRI studies were evaluated for anteroposterior tear size, tendon retraction, tendon length, muscle quality, and MTJ position with respect to the glenoid. The position of the MTJ was referenced off the glenoid face as either lateral or medial. Postoperative MRI studies obtained at a minimum of 1 year postoperatively (mean, 25 ± 13.9 months) were evaluated for healing, tendon length, and MTJ position. We found that 39 of 51 tears (76%) healed, with 26 of 30 small/medium tears (87%) and 13 of 21 large/massive tears (62%) healing. Greater tendon retraction, worse preoperative muscle quality, and a more medialized MTJ were all associated with worse tendon healing (P < .05). Of tears that had a preoperative MTJ lateral to the face of the glenoid, 93% healed, whereas only 55% of tears that had a preoperative MTJ medial to the face of the glenoid healed (P < .05). Healed repairs that had limited tendon lengthening (<1 cm) and limited MTJ position change (<1 cm) from preoperative were found to be smaller, had less preoperative tendon retraction, had less preoperative MTJ medialization, and had less preoperative rotator cuff fatty infiltration (P < .05). Preoperative MTJ medialization, tendon retraction, and muscle quality are all predictive of tendon healing postoperatively when using a single-row rotator cuff repair technique. The position of the MTJ with respect to the glenoid face can be predictive of healing, with over 90% healing if lateral and 50% if medial to the face. 
Lengthening of the tendon accounts for a significant percentage of the musculotendinous unit lengthening that occurs in healed tears as opposed to muscle elongation. Level IV, therapeutic case series. Copyright © 2013 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  19. Flat or curved thin optical display panel

    DOEpatents

    Veligdan, J.T.

    1995-01-10

    An optical panel includes a plurality of waveguides stacked together, with each waveguide having a first end and an opposite second end. The first ends collectively define a first face, and the second ends collectively define a second face of the panel. The second face is disposed at an acute face angle relative to the waveguides to provide a panel which is relatively thin compared to the height of the second face. In an exemplary embodiment for use in a projection TV, the first face is substantially smaller in height than the second face and receives a TV image, with the second face defining a screen for viewing the image enlarged. 7 figures.

  20. Seeing Jesus in toast: neural and behavioral correlates of face pareidolia.

    PubMed

    Liu, Jiangang; Li, Jun; Feng, Lu; Li, Ling; Tian, Jie; Lee, Kang

    2014-04-01

    Face pareidolia is the illusory perception of non-existent faces. The present study, for the first time, contrasted behavioral and neural responses of face pareidolia with those of letter pareidolia to explore face-specific behavioral and neural responses during illusory face processing. Participants were shown pure-noise images but were led to believe that 50% of them contained either faces or letters; they reported seeing faces or letters illusorily 34% and 38% of the time, respectively. The right fusiform face area (rFFA) showed a specific response when participants "saw" faces as opposed to letters in the pure-noise images. Behavioral responses during face pareidolia produced a classification image (CI) that resembled a face, whereas those during letter pareidolia produced a CI that was letter-like. Further, the extent to which such behavioral CIs resembled faces was directly related to the level of face-specific activations in the rFFA. This finding suggests that the rFFA plays a specific role not only in processing of real faces but also in illusory face perception, perhaps serving to facilitate the interaction between bottom-up information from the primary visual cortex and top-down signals from the prefrontal cortex (PFC). Whole brain analyses revealed a network specialized in face pareidolia, including both the frontal and occipitotemporal regions. Our findings suggest that human face processing has a strong top-down component whereby sensory input with even the slightest suggestion of a face can result in the interpretation of a face. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Non-rigid, but not rigid, motion interferes with the processing of structural face information in developmental prosopagnosia.

    PubMed

    Maguinness, Corrina; Newell, Fiona N

    2015-04-01

    There is growing evidence to suggest that facial motion is an important cue for face recognition. However, it is poorly understood whether motion is integrated with facial form information or whether it provides an independent cue to identity. To provide further insight into this issue, we compared the effect of motion on face perception in two developmental prosopagnosics and age-matched controls. Participants first learned faces presented dynamically (video), or in a sequence of static images, in which rigid (viewpoint) or non-rigid (expression) changes occurred. Immediately following learning, participants were required to match a static face image to the learned face. Test face images varied by viewpoint (Experiment 1) or expression (Experiment 2) and were learned or novel face images. We found similar performance across prosopagnosics and controls in matching facial identity across changes in viewpoint when the learned face was shown moving in a rigid manner. However, non-rigid motion interfered with face matching across changes in expression in both individuals with prosopagnosia relative to the performance of control participants. In contrast, non-rigid motion did not differentially affect the matching of facial expressions across changes in identity for either prosopagnosic (Experiment 3). Our results suggest that whilst the processing of rigid motion information of a face may be preserved in developmental prosopagnosia, non-rigid motion can specifically interfere with the representation of structural face information. Taken together, these results suggest that both form and motion cues are important in face perception and that these cues are likely integrated in the representation of facial identity. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Integrating histology and MRI in the first digital brain of common squirrel monkey, Saimiri sciureus

    NASA Astrophysics Data System (ADS)

    Sun, Peizhen; Parvathaneni, Prasanna; Schilling, Kurt G.; Gao, Yurui; Janve, Vaibhav; Anderson, Adam; Landman, Bennett A.

    2015-03-01

    This effort continues the development of a digital brain atlas of the common squirrel monkey, Saimiri sciureus, a New World monkey whose central nervous system has functional and microstructural organization similar to that of humans. Here, we present the integration of histology with a multi-modal magnetic resonance imaging (MRI) atlas constructed from the brain of an adult female squirrel monkey. The central concept of this work is to use block-face photography to establish an intermediate common coordinate space that preserves the high in-plane resolution of histology while enabling 3D correspondence with MRI. In vivo MRI acquisitions include high resolution T2 structural imaging (300 μm isotropic) and low resolution diffusion tensor imaging (600 μm isotropic). Ex vivo MRI acquisitions include high resolution T2 structural imaging and high resolution diffusion tensor imaging (both 300 μm isotropic). Cortical regions were manually annotated on the co-registered volumes based on published histological sections in-plane. We describe the mapping of histology- and MRI-based data of the common squirrel monkey and the construction of a tool that enables online viewing of these datasets. The previously described atlas is deformed to conform accurately to the MRI, thus adding information at the histological level to the MRI volume. This paper presents the mapping of a single 2D block-face image slice as a proof of concept; future work will extend this to map the atlas space in a 3D coordinate system, which can be loaded into an XNAT system for further use.

  3. Common path endoscopic probes for optical coherence tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Singh, Kanwarpal; Gardecki, Joseph A.; Tearney, Guillermo J.

    2017-02-01

    Background: Dispersion imbalance and polarization mismatch between the reference and sample arm signals can lead to image quality degradation in optical coherence tomography (OCT). One approach to reduce these image artifacts is to employ a common-path geometry in fiber-based probes. In this work, we report an 800 µm diameter all-fiber common-path monolithic probe for coronary artery imaging in which the reference signal is generated using an inline fiber partial reflector. Methods: Our common-path probe was designed for swept-source based Fourier domain OCT at 1310 nm wavelength. A face of a coreless fiber was coated with gold and spliced to a standard SMF-28 single mode fiber, creating an inline partial reflector that acted as a reference surface. The other face of the coreless fiber was shaped into a ball lens for focusing. The optical elements were assembled within a 560 µm diameter drive shaft, which was attached to a rotary junction. The drive shaft was placed inside a transparent sheath having an outer diameter of 800 µm. Results: With a source input power of 30 mW, the inline common-path probe achieved a sensitivity of 104 dB. Images of human finger skin showed the characteristic layers of skin as well as features such as sweat ducts. Images of coronary arteries ex vivo obtained with this probe enabled visualization of the characteristic architectural morphology of the normal artery wall and known features of atherosclerotic plaque. Conclusion: In this work, we have demonstrated a common-path OCT probe for cardiovascular imaging. The probe is easy to fabricate and will reduce system complexity and overall cost. We believe that this design will be helpful in endoscopic applications that require high resolution and a compact form factor.

  4. A novel deep learning algorithm for incomplete face recognition: Low-rank-recovery network.

    PubMed

    Zhao, Jianwei; Lv, Yongbiao; Zhou, Zhenghua; Cao, Feilong

    2017-10-01

    There have been many methods addressing the recognition of complete face images. In real applications, however, the images to be recognized are usually incomplete, and such recognition is more difficult to realize. In this paper, a novel convolutional neural network framework, named the low-rank-recovery network (LRRNet), is proposed to address this difficulty effectively, inspired by matrix completion and deep learning techniques. The proposed LRRNet first recovers the incomplete face images via a matrix completion approach with the truncated nuclear norm regularization solution, and then extracts some low-rank parts of the recovered images as the filters. With these filters, important features are obtained by means of binarization and histogram algorithms. Finally, these features are classified with classical support vector machines (SVMs). The proposed LRRNet achieves a high face recognition rate for heavily corrupted images, and performs well and efficiently especially in the case of large databases. Extensive experiments on several benchmark databases demonstrate that the proposed LRRNet performs better than some other excellent robust face recognition methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
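The recovery step above is a matrix completion problem: treat the face image as a matrix that is approximately low rank and fill in the missing pixels. The paper uses truncated nuclear norm regularization; the sketch below substitutes a much simpler stand-in, alternating projection onto a fixed rank (the "hard impute" pattern), purely to illustrate the completion idea on a synthetic rank-2 matrix.

```python
import numpy as np

def complete(observed, mask, rank=2, iters=200):
    """Fill entries where mask is False, assuming the full matrix is low rank."""
    X = np.where(mask, observed, observed[mask].mean())  # init missing with mean
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]         # project to rank-r
        X = np.where(mask, observed, X)                  # re-impose known pixels
    return X

rng = np.random.default_rng(3)
full = rng.random((30, 2)) @ rng.random((2, 30))  # rank-2 stand-in for a face patch
mask = rng.random((30, 30)) > 0.3                 # ~30% of pixels missing
recovered = complete(full, mask)
err = np.abs(recovered - full)[~mask].mean()      # error on the missing entries only
```

With enough observed entries relative to the rank, the missing pixels are recovered to high accuracy; nuclear-norm variants replace the hard rank projection with singular-value shrinkage so the rank need not be known in advance.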

  5. Learning toward practical head pose estimation

    NASA Astrophysics Data System (ADS)

    Sang, Gaoli; He, Feixiang; Zhu, Rong; Xuan, Shibin

    2017-08-01

    Head pose is useful information for many face-related tasks, such as face recognition, behavior analysis, human-computer interfaces, etc. Existing head pose estimation methods usually assume that the face images have been well aligned or that sufficient and precise training data are available. In practical applications, however, these assumptions are very likely to be invalid. This paper first investigates the impact of the failure of these assumptions, i.e., misalignment of face images, uncertainty and undersampling of training data, on head pose estimation accuracy of state-of-the-art methods. A learning-based approach is then designed to enhance the robustness of head pose estimation to these factors. To cope with misalignment, instead of using hand-crafted features, it seeks suitable features by learning from a set of training data with a deep convolutional neural network (DCNN), such that the training data can be best classified into the correct head pose categories. To handle uncertainty and undersampling, it employs multivariate labeling distributions (MLDs) with dense sampling intervals to represent the head pose attributes of face images. The correlation between the features and the dense MLD representations of face images is approximated by a maximum entropy model, whose parameters are optimized on the given training data. To estimate the head pose of a face image, its MLD representation is first computed according to the model based on the features extracted from the image by the trained DCNN, and its head pose is then assumed to be the one corresponding to the peak in its MLD. Evaluation experiments on the Pointing'04, FacePix, Multi-PIE, and CASIA-PEAL databases prove the effectiveness and efficiency of the proposed method.
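The label-distribution idea above can be sketched in a few lines: each training face's pose is encoded not as a single class but as a distribution over densely sampled angles (here a discretised Gaussian over one angle axis, an assumed simplification of the multivariate case), and a predicted distribution is decoded by taking its peak. Grid spacing and sigma are illustrative values, not the paper's.

```python
import numpy as np

angles = np.arange(-90, 91, 5)          # dense sampling grid (degrees)

def pose_to_mld(pose, sigma=10.0):
    """Discretised Gaussian label distribution centred on the true pose."""
    d = np.exp(-0.5 * ((angles - pose) / sigma) ** 2)
    return d / d.sum()                  # normalise to a proper distribution

def mld_to_pose(dist):
    """Decode a label distribution back to an angle: take the peak."""
    return angles[np.argmax(dist)]

mld = pose_to_mld(30.0)
print(mld_to_pose(mld))  # 30
```

Because neighbouring angles receive non-zero mass, imprecise or undersampled training labels still contribute gradient signal to nearby poses, which is the robustness property the paper exploits.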

  6. Virtual view image synthesis for eye-contact in TV conversation system

    NASA Astrophysics Data System (ADS)

    Murayama, Daisuke; Kimura, Keiichi; Hosaka, Tadaaki; Hamamoto, Takayuki; Shibuhisa, Nao; Tanaka, Seiichi; Sato, Shunichi; Saito, Sakae

    2010-02-01

    Eye-contact plays an important role in human communication in the sense that it can convey unspoken information. However, it is highly difficult to realize eye-contact in teleconferencing systems because of camera configurations. Conventional methods to overcome this difficulty mainly resorted to space-consuming optical devices such as half mirrors. In this paper, we propose an alternative approach that achieves eye-contact through arbitrary view image synthesis. In our method, multiple images captured by real cameras are converted to the virtual viewpoint (the center of the display) by homography, and evaluation of matching errors among these projected images provides the depth map and the virtual image. Furthermore, we also propose a simpler, single-camera version of this method to save computational costs, in which the single real image is transformed to the virtual viewpoint based on the hypothesis that the subject is located at a predetermined distance. In this simple implementation, eye regions are separately generated by comparison with pre-captured frontal face images. Experimental results for both methods show that the synthesized virtual images achieve favorable eye-contact.
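
    The homography step can be illustrated by mapping pixel coordinates into the virtual viewpoint. A minimal sketch with a toy translation homography (the matrix values are invented for the example; a real system would estimate H from camera calibration):

```python
import numpy as np

def warp_points(H, pts):
    """Map 2-D points through a 3x3 homography using homogeneous
    coordinates, as in planar view transfer to a virtual viewpoint."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]   # divide out the homogeneous scale

# toy homography: a pure (5, -3) pixel shift toward the virtual view
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])
print(warp_points(H, np.array([[10.0, 10.0]])))   # maps (10, 10) to (15, 7)
```

    In the full method, each real camera image is warped this way for a series of hypothesized depths, and the depth whose warps agree best (smallest matching error) wins.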

  7. Pareidolia in infants.

    PubMed

    Kato, Masaharu; Mugitani, Ryoko

    2015-01-01

    Faces convey primal information for our social life. This information is so primal that we sometimes find faces in non-face objects. Such illusory perception is called pareidolia. In this study, using infants' orientation behavior toward a sound source, we demonstrated that infants also perceive pareidolic faces. An image formed by four blobs and an outline was shown to infants with or without pure tones, and the time they spent looking at each blob was compared. Since the mouth is the unique sound source in a face and the literature has shown that infants older than 6 months already have sound-mouth association, increased looking time towards the bottom blob (pareidolic mouth area) during sound presentation indicated that they illusorily perceive a face in the image. Infants aged 10 and 12 months looked longer at the bottom blob under the upright-image condition, whereas no differences in looking time were observed for any blob under the inverted-image condition. However, 8-month-olds showed no difference in looking time under either the upright or the inverted condition, suggesting that the perception of pareidolic faces, through sound association, develops at around 8 to 10 months after birth.

  8. Pareidolia in Infants

    PubMed Central

    Kato, Masaharu; Mugitani, Ryoko

    2015-01-01

    Faces convey primal information for our social life. This information is so primal that we sometimes find faces in non-face objects. Such illusory perception is called pareidolia. In this study, using infants’ orientation behavior toward a sound source, we demonstrated that infants also perceive pareidolic faces. An image formed by four blobs and an outline was shown to infants with or without pure tones, and the time they spent looking at each blob was compared. Since the mouth is the unique sound source in a face and the literature has shown that infants older than 6 months already have sound-mouth association, increased looking time towards the bottom blob (pareidolic mouth area) during sound presentation indicated that they illusorily perceive a face in the image. Infants aged 10 and 12 months looked longer at the bottom blob under the upright-image condition, whereas no differences in looking time were observed for any blob under the inverted-image condition. However, 8-month-olds showed no difference in looking time under either the upright or the inverted condition, suggesting that the perception of pareidolic faces, through sound association, develops at around 8 to 10 months after birth. PMID:25689630

  9. Decoding representations of face identity that are tolerant to rotation.

    PubMed

    Anzellotti, Stefano; Fairhall, Scott L; Caramazza, Alfonso

    2014-08-01

    In order to recognize the identity of a face we need to distinguish very similar images (specificity) while also generalizing identity information across image transformations such as changes in orientation (tolerance). Recent studies investigated the representation of individual faces in the brain, but it remains unclear whether the human brain regions that were found encode representations of individual images (specificity) or face identity (specificity plus tolerance). In the present article, we use multivoxel pattern analysis in the human ventral stream to investigate the representation of face identity across rotations in depth, a kind of transformation in which no point in the face image remains unchanged. The results reveal representations of face identity that are tolerant to rotations in depth in occipitotemporal cortex and in anterior temporal cortex, even when the similarity between mirror symmetrical views cannot be used to achieve tolerance. Converging evidence from different analysis techniques shows that the right anterior temporal lobe encodes a comparable amount of identity information to occipitotemporal regions, but this information is encoded over a smaller extent of cortex. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  10. Sex-typicality and attractiveness: are supermale and superfemale faces super-attractive?

    PubMed

    Rhodes, G; Hickford, C; Jeffery, L

    2000-02-01

    Many animals find extreme versions of secondary sexual characteristics attractive, and such preferences can enhance reproductive success (Andersson, 1994). We hypothesized, therefore, that extreme versions of sex-typical traits may be attractive in human faces. We created supermale and superfemale faces by exaggerating all spatial differences between an average male and an average female face. In Expt 1 the male average was preferred to a supermale (50% exaggeration of differences from the female average). There was no clear preference for the female average or the superfemale (50% exaggeration). In Expt 2, participants chose the most attractive face from sets of images containing feminized as well as masculinized images for each sex, and spanning a wider range of exaggeration levels than in Expt 1. Chinese sets were also shown, to see whether similar preferences would occur for a less familiar race (participants were Caucasian). The most attractive female image was significantly feminized for faces of both races. However, the most attractive male image for both races was also significantly feminized. These results indicate that feminization, rather than sex exaggeration per se, is attractive in human faces, and they corroborate similar findings by Perrett et al. (1998).
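
    The exaggeration manipulation reduces to a simple linear extrapolation of one average shape away from the other. A sketch on hypothetical 2-D landmarks (the coordinates are invented for illustration; the study applied this to all spatial differences between the average faces):

```python
import numpy as np

def exaggerate(avg_a, avg_b, k):
    """Exaggerate the spatial differences of average face A from average
    face B by a factor k (k=0.5 gives the study's 50% exaggeration)."""
    return avg_a + k * (avg_a - avg_b)

# toy 2-D landmark sets (e.g. a brow point and a jaw point) per average
male = np.array([[0.0, 10.0], [0.0, -12.0]])
female = np.array([[0.0, 9.0], [0.0, -10.0]])
supermale = exaggerate(male, female, 0.5)   # male-female differences + 50%
print(supermale)
```

    Negative k values give the feminized (for male averages) or masculinized (for female averages) images used in Expt 2.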

  11. Behavioural and neurophysiological evidence for face identity and face emotion processing in animals

    PubMed Central

    Tate, Andrew J; Fischer, Hanno; Leigh, Andrea E; Kendrick, Keith M

    2006-01-01

    Visual cues from faces provide important social information relating to individual identity, sexual attraction and emotional state. Behavioural and neurophysiological studies on both monkeys and sheep have shown that specialized skills and neural systems for processing these complex cues to guide behaviour have evolved in a number of mammals and are not present exclusively in humans. Indeed, there are remarkable similarities in the ways that faces are processed by the brain in humans and other mammalian species. While human studies with brain imaging and gross neurophysiological recording approaches have revealed global aspects of the face-processing network, they cannot investigate how information is encoded by specific neural networks. Single neuron electrophysiological recording approaches in both monkeys and sheep have, however, provided some insights into the neural encoding principles involved and, particularly, the presence of a remarkable degree of high-level encoding even at the level of a specific face. Recent developments that allow simultaneous recordings to be made from many hundreds of individual neurons are also beginning to reveal evidence for global aspects of a population-based code. This review will summarize what we have learned so far from these animal-based studies about the way the mammalian brain processes the faces and the emotions they can communicate, as well as associated capacities such as how identity and emotion cues are dissociated and how face imagery might be generated. It will also try to highlight what questions and advances in knowledge still challenge us in order to provide a complete understanding of just how brain networks perform this complex and important social recognition task. PMID:17118930

  12. Behavioural and neurophysiological evidence for face identity and face emotion processing in animals.

    PubMed

    Tate, Andrew J; Fischer, Hanno; Leigh, Andrea E; Kendrick, Keith M

    2006-12-29

    Visual cues from faces provide important social information relating to individual identity, sexual attraction and emotional state. Behavioural and neurophysiological studies on both monkeys and sheep have shown that specialized skills and neural systems for processing these complex cues to guide behaviour have evolved in a number of mammals and are not present exclusively in humans. Indeed, there are remarkable similarities in the ways that faces are processed by the brain in humans and other mammalian species. While human studies with brain imaging and gross neurophysiological recording approaches have revealed global aspects of the face-processing network, they cannot investigate how information is encoded by specific neural networks. Single neuron electrophysiological recording approaches in both monkeys and sheep have, however, provided some insights into the neural encoding principles involved and, particularly, the presence of a remarkable degree of high-level encoding even at the level of a specific face. Recent developments that allow simultaneous recordings to be made from many hundreds of individual neurons are also beginning to reveal evidence for global aspects of a population-based code. This review will summarize what we have learned so far from these animal-based studies about the way the mammalian brain processes the faces and the emotions they can communicate, as well as associated capacities such as how identity and emotion cues are dissociated and how face imagery might be generated. It will also try to highlight what questions and advances in knowledge still challenge us in order to provide a complete understanding of just how brain networks perform this complex and important social recognition task.

  13. Distributed Neural Activity Patterns during Human-to-Human Competition

    PubMed Central

    Piva, Matthew; Zhang, Xian; Noah, J. Adam; Chang, Steve W. C.; Hirsch, Joy

    2017-01-01

    Interpersonal interaction is the essence of human social behavior. However, conventional neuroimaging techniques have tended to focus on social cognition in single individuals rather than on dyads or groups. As a result, relatively little is understood about the neural events that underlie face-to-face interaction. We resolved some of the technical obstacles inherent in studying interaction using a novel imaging modality and aimed to identify neural mechanisms engaged both within and across brains in an ecologically valid instance of interpersonal competition. Functional near-infrared spectroscopy was utilized to simultaneously measure hemodynamic signals representing neural activity in pairs of subjects playing poker against each other (human–human condition) or against computer opponents (human–computer condition). Previous fMRI findings concerning single subjects confirm that neural areas recruited during social cognition paradigms are individually sensitive to human–human and human–computer conditions. However, it is not known whether face-to-face interactions between opponents can extend these findings. We hypothesize distributed effects due to live processing and specific variations in across-brain coherence not observable in single-subject paradigms. Angular gyrus (AG), a component of the temporal-parietal junction (TPJ) previously found to be sensitive to socially relevant cues, was selected as a seed to measure within-brain functional connectivity. Increased connectivity was confirmed between AG and bilateral dorsolateral prefrontal cortex (dlPFC) as well as a complex including the left subcentral area (SCA) and somatosensory cortex (SS) during interaction with a human opponent. These distributed findings were supported by contrast measures that indicated increased activity at the left dlPFC and frontopolar area that partially overlapped with the region showing increased functional connectivity with AG. Across-brain analyses of neural coherence between the players revealed synchrony between dlPFC and supramarginal gyrus (SMG) and SS in addition to synchrony between AG and the fusiform gyrus (FG) and SMG. These findings present the first evidence of a frontal-parietal neural complex including the TPJ, dlPFC, SCA, SS, and FG that is more active during human-to-human social cognition both within brains (functional connectivity) and across brains (across-brain coherence), supporting a model of functional integration of socially and strategically relevant information during live face-to-face competitive behaviors. PMID:29218005

  14. Affective attitudes to face images associated with intracerebral EEG source location before face viewing.

    PubMed

    Pizzagalli, D; Koenig, T; Regard, M; Lehmann, D

    1999-01-01

    We investigated whether different, personality-related affective attitudes are associated with different brain electric field (EEG) sources before any emotional challenge (stimulus exposure). A 27-channel EEG was recorded in 15 subjects during eyes-closed resting. After recording, subjects rated 32 images of human faces for affective appeal. The subjects in the first (i.e., most negative) and fourth (i.e., most positive) quartile of general affective attitude were further analyzed. The EEG data (mean=25+/-4. 8 s/subject) were subjected to frequency-domain model dipole source analysis (FFT-Dipole-Approximation), resulting in 3-dimensional intracerebral source locations and strengths for the delta-theta, alpha, and beta EEG frequency band, and for the full range (1.5-30 Hz) band. Subjects with negative attitude (compared to those with positive attitude) showed the following source locations: more inferior for all frequency bands, more anterior for the delta-theta band, more posterior and more right for the alpha, beta and 1.5-30 Hz bands. One year later, the subjects were asked to rate the face images again. The rating scores for the same face images were highly correlated for all subjects, and original and retest affective mean attitude was highly correlated across subjects. The present results show that subjects with different affective attitudes to face images had different active, cerebral, neural populations in a task-free condition prior to viewing the images. We conclude that the brain functional state which implements affective attitude towards face images as a personality feature exists without elicitors, as a continuously present, dynamic feature of brain functioning. Copyright 1999 Elsevier Science B.V.

  15. Generating virtual training samples for sparse representation of face images and face recognition

    NASA Astrophysics Data System (ADS)

    Du, Yong; Wang, Yu

    2016-03-01

    There are many challenges in face recognition. In real-world scenes, images of the same face vary with changing illuminations, different expressions and poses, multiform ornaments, or even altered mental status. Limited available training samples cannot convey these possible changes in the training phase sufficiently, and this has become one of the restrictions to improve the face recognition accuracy. In this article, we view the multiplication of two images of the face as a virtual face image to expand the training set and devise a representation-based method to perform face recognition. The generated virtual samples really reflect some possible appearance and pose variations of the face. By multiplying a training sample with another sample from the same subject, we can strengthen the facial contour feature and greatly suppress the noise. Thus, more human essential information is retained. Also, uncertainty of the training data is simultaneously reduced with the increase of the training samples, which is beneficial for the training phase. The devised representation-based classifier uses both the original and new generated samples to perform the classification. In the classification phase, we first determine K nearest training samples for the current test sample by calculating the Euclidean distances between the test sample and training samples. Then, a linear combination of these selected training samples is used to represent the test sample, and the representation result is used to classify the test sample. The experimental results show that the proposed method outperforms some state-of-the-art face recognition methods.
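
    A minimal sketch of the two ideas in this abstract: generating a virtual sample as the pixel-wise product of two same-subject images, and the K-nearest-neighbour linear-representation classifier. The function names, the rescaling of the product to [0, 1], and the toy data are illustrative assumptions, not the authors' code:

```python
import numpy as np

def virtual_sample(img1, img2):
    """Pixel-wise product of two images of the same subject, rescaled to
    [0, 1] (the rescaling is our assumption), used as a virtual sample."""
    v = img1 * img2
    return v / max(v.max(), 1e-12)

def represent_and_classify(test, train, labels, k=3):
    """Select the K nearest training samples (Euclidean distance), express
    the test sample as their least-squares linear combination, and assign
    the label whose samples give the smallest reconstruction residual."""
    d = np.linalg.norm(train - test, axis=1)
    idx = np.argsort(d)[:k]
    A, y = train[idx].T, test                    # columns = selected samples
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    best, best_res = None, np.inf
    for c in set(labels[i] for i in idx):
        sel = [j for j, i in enumerate(idx) if labels[i] == c]
        res = np.linalg.norm(y - A[:, sel] @ coef[sel])
        if res < best_res:
            best, best_res = c, res
    return best

# toy data: two flattened "face images" per subject, plus one virtual sample
rng = np.random.default_rng(1)
c0 = rng.random((2, 16))                         # subject 0
c1 = rng.random((2, 16)) + 2.0                   # subject 1 (well separated)
train = np.vstack([c0, virtual_sample(c0[0], c0[1])[None], c1])
labels = [0, 0, 0, 1, 1]
pred = represent_and_classify(c0[0] + 0.01, train, labels)
print(pred)   # 0
```

    The virtual sample enlarges subject 0's training set at no acquisition cost, which is the point of the method when only a few real images per subject are available.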

  16. Effects of spatial frequency and location of fearful faces on human amygdala activity.

    PubMed

    Morawetz, Carmen; Baudewig, Juergen; Treue, Stefan; Dechent, Peter

    2011-01-31

    Facial emotion perception plays a fundamental role in interpersonal social interactions. Images of faces contain visual information at various spatial frequencies. The amygdala has previously been reported to be preferentially responsive to low-spatial frequency (LSF) rather than to high-spatial frequency (HSF) filtered images of faces presented at the center of the visual field. Furthermore, it has been proposed that the amygdala might be especially sensitive to affective stimuli in the periphery. In the present study we investigated the impact of spatial frequency and stimulus eccentricity on face processing in the human amygdala and fusiform gyrus using functional magnetic resonance imaging (fMRI). The spatial frequencies of pictures of fearful faces were filtered to produce images that retained only LSF or HSF information. Facial images were presented either in the left or right visual field at two different eccentricities. In contrast to previous findings, we found that the amygdala responds to LSF and HSF stimuli in a similar manner regardless of the location of the affective stimuli in the visual field. Furthermore, the fusiform gyrus did not show differential responses to spatial frequency filtered images of faces. Our findings argue against the view that LSF information plays a crucial role in the processing of facial expressions in the amygdala and of a higher sensitivity to affective stimuli in the periphery. Copyright © 2010 Elsevier B.V. All rights reserved.
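
    The LSF/HSF manipulation described above is a standard Fourier band split. A minimal sketch, assuming a hard circular cutoff (the 8 cycles/image cutoff is an arbitrary choice for the demo, not the study's value):

```python
import numpy as np

def sf_filter(img, cutoff, keep="low"):
    """Keep only spatial frequencies below ('low') or above ('high')
    `cutoff` cycles/image, via a hard circular mask in the Fourier domain."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h // 2, xx - w // 2)       # radial frequency
    mask = r <= cutoff if keep == "low" else r > cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

img = np.random.default_rng(0).random((32, 32))  # stand-in for a face image
lsf = sf_filter(img, 8, "low")                   # LSF version
hsf = sf_filter(img, 8, "high")                  # HSF version
print(np.allclose(lsf + hsf, img))  # True: the two bands sum to the image
```

    Because the two masks are complementary, the LSF and HSF images carry disjoint frequency content, which is what lets the study attribute amygdala responses to one band or the other.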

  17. Face perception is tuned to horizontal orientation in the N170 time window.

    PubMed

    Jacques, Corentin; Schiltz, Christine; Goffaux, Valerie

    2014-02-07

    The specificity of face perception is thought to reside both in its dramatic vulnerability to picture-plane inversion and its strong reliance on horizontally oriented image content. Here we asked when in the visual processing stream face-specific perception is tuned to horizontal information. We measured the behavioral performance and scalp event-related potentials (ERP) when participants viewed upright and inverted images of faces and cars (and natural scenes) that were phase-randomized in a narrow orientation band centered either on vertical or horizontal orientation. For faces, the magnitude of the inversion effect (IE) on behavioral discrimination performance was significantly reduced for horizontally randomized compared to vertically or nonrandomized images, confirming the importance of horizontal information for the recruitment of face-specific processing. Inversion affected the processing of nonrandomized and vertically randomized faces early, in the N170 time window. In contrast, the magnitude of the N170 IE was much smaller for horizontally randomized faces. The present research indicates that the early face-specific neural representations are preferentially tuned to horizontal information and offers new perspectives for a description of the visual information feeding face-specific perception.
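
    The orientation-band phase randomization can be sketched as follows: randomize the Fourier phase only at frequencies whose orientation falls within a narrow band. The 20-degree bandwidth and the angle convention here are illustrative assumptions, not the study's exact parameters:

```python
import numpy as np

def randomize_orientation_band(img, center_deg, bw_deg=20, seed=0):
    """Randomize the Fourier phase only at frequencies whose orientation
    lies within bw_deg of center_deg (amplitude spectrum is untouched)."""
    rng = np.random.default_rng(seed)
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    theta = np.degrees(np.arctan2(yy - h // 2, xx - w // 2)) % 180
    dist = np.abs((theta - center_deg + 90) % 180 - 90)   # circular distance
    band = dist < bw_deg / 2
    phase = rng.uniform(0, 2 * np.pi, F.shape)
    F = np.where(band, np.abs(F) * np.exp(1j * phase), F)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

img = np.random.default_rng(3).random((32, 32))  # stand-in for a face image
scrambled = randomize_orientation_band(img, 0)
print(scrambled.shape)   # (32, 32)
```

    Scrambling phase while preserving amplitude destroys the image structure carried by that orientation band without changing its overall energy, which is why the manipulation isolates the contribution of horizontal versus vertical content.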

  18. The nature of face representations in subcortical regions.

    PubMed

    Gabay, Shai; Burlingham, Charles; Behrmann, Marlene

    2014-07-01

    Studies examining the neural correlates of face perception in humans have focused almost exclusively on the distributed cortical network of face-selective regions. Recently, however, investigations have also identified subcortical correlates of face perception and the question addressed here concerns the nature of these subcortical face representations. To explore this issue, we presented to participants pairs of images sequentially to the same or to different eyes. Superior performance in the former over latter condition implicates monocular, prestriate portions of the visual system. Over a series of five experiments, we manipulated both lower-level (size, location) as well as higher-level (identity) similarity across the pair of faces. A monocular advantage was observed even when the faces in a pair differed in location and in size, implicating some subcortical invariance across lower-level image properties. A monocular advantage was also observed when the faces in a pair were two different images of the same individual, indicating the engagement of subcortical representations in more abstract, higher-level aspects of face processing. We conclude that subcortical structures of the visual system are involved, perhaps interactively, in multiple aspects of face perception, and not simply in deriving initial coarse representations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Psychophysical study of face identity discrimination in schizophrenia: association with facial morphology.

    PubMed

    Ekstrom, Tor; Maher, Stephen; Chen, Yue

    2016-11-01

    Identifying individual identities from faces is crucial for social functioning. In schizophrenia, previous studies showed mixed results as to whether face identity discrimination is compromised. How a social category factor (such as gender and race) affects schizophrenia patients' facial identity discrimination is unclear. Using psychophysics, we examined perceptual performance on within- and between- category face identity discrimination tasks in patients (n = 51) and controls (n = 31). Face images from each of six pairs of individuals (two White females, two White males, two Black males, two Asian females, one Black male and one White male, and one White female and one White male) were morphed to create additional images along a continuum of dissimilarity in facial morphology. Patients underperformed for five out of the six face pairs (the Black/White male pair was the exception). Perceptual performance was correlated with morphological changes in face images being discriminated, to a greater extent in patients than in controls. Face identity discrimination in schizophrenia was most impaired for those faces that presumably have extensive social exposures (such as White males). Patients' perceptual performance appears to depend more on physical feature changes of faces.

  20. Vision-based in-line fabric defect detection using yarn-specific shape features

    NASA Astrophysics Data System (ADS)

    Schneider, Dorian; Aach, Til

    2012-01-01

    We develop a methodology for automatic in-line flaw detection in industrial woven fabrics. Where state of the art detection algorithms apply texture analysis methods to operate on low-resolved (~200 ppi) image data, we describe here a process flow to segment single yarns in high-resolved (~1000 ppi) textile images. Four yarn shape features are extracted, allowing a precise detection and measurement of defects. The degree of precision reached allows a classification of detected defects according to their nature, providing an innovation in the field of automatic fabric flaw detection. The design has been carried out to meet real time requirements and face adverse conditions caused by loom vibrations and dirt. The entire process flow is discussed followed by an evaluation using a database with real-life industrial fabric images. This work pertains to the construction of an on-loom defect detection system to be used in manufacturing practice.

  1. A familiarity disadvantage for remembering specific images of faces.

    PubMed

    Armann, Regine G M; Jenkins, Rob; Burton, A Mike

    2016-04-01

    Familiar faces are remembered better than unfamiliar faces. Furthermore, it is much easier to match images of familiar than unfamiliar faces. These findings could be accounted for by quantitative differences in the ease with which faces are encoded. However, it has been argued that there are also some qualitative differences in familiar and unfamiliar face processing. Unfamiliar faces are held to rely on superficial, pictorial representations, whereas familiar faces invoke more abstract representations. Here we present 2 studies that show, for 1 task, an advantage for unfamiliar faces. In recognition memory, viewers are better able to reject a new picture, if it depicts an unfamiliar face. This rare advantage for unfamiliar faces supports the notion that familiarity brings about some representational changes, and further emphasizes the idea that theoretical accounts of face processing should incorporate familiarity. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  2. Modelling the perceptual similarity of facial expressions from image statistics and neural responses.

    PubMed

    Sormaz, Mladen; Watson, David M; Smith, William A P; Young, Andrew W; Andrews, Timothy J

    2016-04-01

    The ability to perceive facial expressions of emotion is essential for effective social communication. We investigated how the perception of facial expression emerges from the image properties that convey this important social signal, and how neural responses in face-selective brain regions might track these properties. To do this, we measured the perceptual similarity between expressions of basic emotions, and investigated how this is reflected in image measures and in the neural response of different face-selective regions. We show that the perceptual similarity of different facial expressions (fear, anger, disgust, sadness, happiness) can be predicted by both surface and feature shape information in the image. Using block design fMRI, we found that the perceptual similarity of expressions could also be predicted from the patterns of neural response in the face-selective posterior superior temporal sulcus (STS), but not in the fusiform face area (FFA). These results show that the perception of facial expression is dependent on the shape and surface properties of the image and on the activity of specific face-selective regions. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Face recognition via sparse representation of SIFT feature on hexagonal-sampling image

    NASA Astrophysics Data System (ADS)

    Zhang, Daming; Zhang, Xueyong; Li, Lu; Liu, Huayong

    2018-04-01

    This paper investigates a face recognition approach based on the Scale Invariant Feature Transform (SIFT) feature and sparse representation. The approach takes advantage of SIFT, which is a local feature rather than the holistic feature used in the classical Sparse Representation based Classification (SRC) algorithm, and possesses strong robustness to expression, pose and illumination variations. Since a hexagonal image has more inherent merits than a square image for making the recognition process efficient, we extract SIFT keypoints in the hexagonal-sampling image. Instead of matching SIFT features, the sparse representation of each SIFT keypoint is first computed according to the constructed dictionary; these sparse vectors are then quantized according to the dictionary; finally, each face image is represented by a histogram, and these so-called Bag-of-Words vectors are classified by an SVM. Owing to the use of local features, the proposed method achieves good results even when the number of training samples is small. In the experiments, the proposed method achieved a higher face recognition rate than other methods on the ORL and Yale B face databases; the effectiveness of hexagonal sampling in the proposed method is also verified.
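
    The quantization step (sparse codes to a Bag-of-Words histogram) can be sketched directly. Assigning each keypoint to its strongest dictionary atom is one plausible reading of the abstract's "quantized according to the dictionary"; the toy codes below are invented for illustration:

```python
import numpy as np

def bow_histogram(sparse_codes, dict_size):
    """Quantize each keypoint's sparse code to its strongest dictionary
    atom and accumulate a normalized Bag-of-Words histogram per image."""
    words = np.argmax(np.abs(sparse_codes), axis=1)   # dominant atom index
    hist = np.bincount(words, minlength=dict_size).astype(float)
    return hist / max(hist.sum(), 1.0)

# toy example: 4 SIFT keypoints coded over a 6-atom dictionary
codes = np.array([[0.0, 0.9, 0.0, 0.0, 0.0, 0.0],
                  [0.0, 0.8, 0.0, 0.0, 0.1, 0.0],
                  [0.2, 0.0, 0.0, 0.0, 0.7, 0.0],
                  [0.0, 0.0, 0.0, 0.0, 0.0, -0.6]])
print(bow_histogram(codes, 6))   # [0, 0.5, 0, 0, 0.25, 0.25]
```

    The fixed-length histogram is what makes a variable number of keypoints per face usable as SVM input.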

  4. Gender classification from face images by using local binary pattern and gray-level co-occurrence matrix

    NASA Astrophysics Data System (ADS)

    Uzbaş, Betül; Arslan, Ahmet

    2018-04-01

    Gender classification is an important step for human-computer interaction and identification processes. The human face image is one of the most important sources for determining gender. In the present study, gender classification is performed automatically from facial images. In order to classify gender, we propose a combination of features extracted from automatically obtained face, eye and lip regions by using a hybrid method of Local Binary Pattern and Gray-Level Co-Occurrence Matrix. All of the extracted features are combined and given as input parameters to classification methods (Support Vector Machine, Artificial Neural Networks, Naive Bayes and k-Nearest Neighbor) for gender classification. The Nottingham Scan face database, which consists of frontal face images of 100 people (50 male and 50 female), is used for this purpose. As the result of the experimental studies, the highest success rate, 98%, was achieved by using the Support Vector Machine. The experimental results illustrate the efficacy of our proposed method.
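
    The Local Binary Pattern half of the feature pipeline can be sketched in plain NumPy (this is the basic 8-neighbour LBP only; the GLCM half and the exact LBP variant the authors used are not shown):

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour Local Binary Pattern: each interior pixel becomes
    an 8-bit code recording which neighbours are >= the centre pixel."""
    c = img[1:-1, 1:-1]
    nbrs = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
            img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
            img[2:, :-2], img[1:-1, :-2]]          # clockwise from top-left
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(nbrs):
        code |= ((n >= c).astype(np.uint8) << bit)
    return code

img = np.array([[5, 5, 5],
                [5, 4, 5],
                [5, 5, 5]], dtype=np.uint8)
print(lbp_image(img))   # [[255]] : every neighbour >= centre
```

    In practice the LBP codes from each region (face, eye, lip) are histogrammed, and those histograms, together with the GLCM statistics, form the classifier input.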

  5. Holographic Optical Coherence Imaging of Rat Osteogenic Sarcoma Tumor Spheroids

    NASA Astrophysics Data System (ADS)

    Yu, Ping; Mustata, Mirela; Peng, Leilei; Turek, John J.; Melloch, Michael R.; French, Paul M. W.; Nolte, David D.

    2004-09-01

    Holographic optical coherence imaging is a full-frame variant of coherence-domain imaging. An optoelectronic semiconductor holographic film functions as a coherence filter placed before a conventional digital video camera that passes coherent (structure-bearing) light to the camera during holographic readout while preferentially rejecting scattered light. The data are acquired as a succession of en face images at increasing depth inside the sample in a fly-through acquisition. The samples of living tissue were rat osteogenic sarcoma multicellular tumor spheroids that were grown from a single osteoblast cell line in a bioreactor. Tumor spheroids are nearly spherical and have radial symmetry, presenting a simple geometry for analysis. The tumors investigated ranged in diameter from several hundred micrometers to over 1 mm. Holographic features from the tumors were observed in reflection to depths of 500-600 µm with a total tissue path length of approximately 14 mean free paths. The volumetric data from the tumor spheroids reveal heterogeneous structure, presumably caused by necrosis and microcalcifications characteristic of some human avascular tumors.

  6. High-reliability GaAs image intensifier with unfilmed microchannel plate

    NASA Astrophysics Data System (ADS)

    Bender, Edward J.; Estrera, Joseph P.; Ford, C. E.; Giordana, A.; Glesener, John W.; Lin, P. P.; Nico, A. J.; Sinor, Timothy W.; Smithson, R. H.

    1999-07-01

    Current GaAs image intensifier technology requires that the microchannel plate (MCP) have a thin dielectric film on the side facing the photocathode. This protective coating substantially reduces the amount of outgassing of ions and neutral species from the microchannels. The prevention of MCP outgassing is necessary in order to prevent the `poisoning' of the Cs:O surface on the GaAs photocathode. Many authors have experimented with omitting the MCP coating. The results of such experiments invariably lead to an intensifier with a reported useful life of less than 100 hours, due to contamination of the Cs:O layer on the photocathode. Unfortunately, the MCP film is also a barrier to electron transport within the intensifier. Substantial enhancement of the image intensifier operating parameters is the motivation for the removal of the MCP film. This paper presents results showing for the first time that it is possible to fabricate a long lifetime image intensifier with a single uncoated MCP.

  7. Socially Important Faces Are Processed Preferentially to Other Familiar and Unfamiliar Faces in a Priming Task across a Range of Viewpoints

    PubMed Central

    Keyes, Helen; Zalicks, Catherine

    2016-01-01

    Using a priming paradigm, we investigate whether socially important faces are processed preferentially compared to other familiar and unfamiliar faces, and whether any such effects are affected by changes in viewpoint. Participants were primed with frontal images of personally familiar, famous or unfamiliar faces, and responded to target images of congruent or incongruent identity, presented in frontal, three quarter or profile views. We report that participants responded significantly faster to socially important faces (a friend’s face) compared to other highly familiar (famous) faces or unfamiliar faces. Crucially, responses to famous and unfamiliar faces did not differ. This suggests that, when presented in the context of a socially important stimulus, socially unimportant familiar faces (famous faces) are treated in a similar manner to unfamiliar faces. This effect was not tied to viewpoint, and priming did not affect socially important face processing differently to other faces. PMID:27219101

  8. Acceptance of Cloud Services in Face-to-Face Computer-Supported Collaborative Learning: A Comparison between Single-User Mode and Multi-User Mode

    ERIC Educational Resources Information Center

    Wang, Chia-Sui; Huang, Yong-Ming

    2016-01-01

    Face-to-face computer-supported collaborative learning (CSCL) was used extensively to facilitate learning in classrooms. Cloud services not only allow a single user to edit a document, but they also enable multiple users to simultaneously edit a shared document. However, few researchers have compared student acceptance of such services in…

  9. Surface Structure Spread Single Crystals (S4C): Preparation and characterization

    NASA Astrophysics Data System (ADS)

    de Alwis, A.; Holsclaw, B.; Pushkarev, V. V.; Reinicker, A.; Lawton, T. J.; Blecher, M. E.; Sykes, E. C. H.; Gellman, A. J.

    2013-02-01

    A set of six spherically curved Cu single crystals referred to as Surface Structure Spread Single Crystals (S4Cs) has been prepared in such a way that their exposed surfaces collectively span all possible crystallographic surface orientations that can be cleaved from the face centered cubic Cu lattice. The method for preparing these S4Cs and for finding the high symmetry pole point is described. Optical profilometry has been used to determine the true shapes of the S4Cs and show that over the majority of the surface, the shape is extremely close to that of a perfect sphere. The local orientations of the surfaces lie within ± 1° of the orientation expected on the basis of the spherical shape; their orientation is as good as that of many commercially prepared single crystals. STM imaging has been used to characterize the atomic level structure of the Cu(111) ± 11°-S4C. This has shown that the average step densities and the average step orientations match those expected based on the spherical shape. In other words, although there is some distribution of step-step spacing and step orientations, there is no evidence of large scale reconstruction or faceting. The Cu S4Cs have local structures based on the ideal termination of the face centered cubic Cu lattice in the direction of termination. The set of Cu S4Cs will serve as the basis for high throughput investigations of structure sensitive surface chemistry on Cu.

  10. Neural representations of faces and body parts in macaque and human cortex: a comparative FMRI study.

    PubMed

    Pinsk, Mark A; Arcaro, Michael; Weiner, Kevin S; Kalkus, Jan F; Inati, Souheil J; Gross, Charles G; Kastner, Sabine

    2009-05-01

    Single-cell studies in the macaque have reported selective neural responses evoked by visual presentations of faces and bodies. Consistent with these findings, functional magnetic resonance imaging studies in humans and monkeys indicate that regions in temporal cortex respond preferentially to faces and bodies. However, it is not clear how these areas correspond across the two species. Here, we directly compared category-selective areas in macaques and humans using virtually identical techniques. In the macaque, several face- and body part-selective areas were found located along the superior temporal sulcus (STS) and middle temporal gyrus (MTG). In the human, similar to previous studies, face-selective areas were found in ventral occipital and temporal cortex and an additional face-selective area was found in the anterior temporal cortex. Face-selective areas were also found in lateral temporal cortex, including the previously reported posterior STS area. Body part-selective areas were identified in the human fusiform gyrus and lateral occipitotemporal cortex. In a first experiment, both monkey and human subjects were presented with pictures of faces, body parts, foods, scenes, and man-made objects, to examine the response profiles of each category-selective area to the five stimulus types. In a second experiment, face processing was examined by presenting upright and inverted faces. By comparing the responses and spatial relationships of the areas, we propose potential correspondences across species. Adjacent and overlapping areas in the macaque anterior STS/MTG responded strongly to both faces and body parts, similar to areas in the human fusiform gyrus and posterior STS. Furthermore, face-selective areas on the ventral bank of the STS/MTG discriminated both upright and inverted faces from objects, similar to areas in the human ventral temporal cortex. 
Overall, our findings demonstrate commonalities and differences in the wide-scale brain organization between the two species and provide an initial step toward establishing functionally homologous category-selective areas.

  11. Neural Representations of Faces and Body Parts in Macaque and Human Cortex: A Comparative fMRI Study

    PubMed Central

    Pinsk, Mark A.; Arcaro, Michael; Weiner, Kevin S.; Kalkus, Jan F.; Inati, Souheil J.; Gross, Charles G.; Kastner, Sabine

    2009-01-01

    Single-cell studies in the macaque have reported selective neural responses evoked by visual presentations of faces and bodies. Consistent with these findings, functional magnetic resonance imaging studies in humans and monkeys indicate that regions in temporal cortex respond preferentially to faces and bodies. However, it is not clear how these areas correspond across the two species. Here, we directly compared category-selective areas in macaques and humans using virtually identical techniques. In the macaque, several face- and body part–selective areas were found located along the superior temporal sulcus (STS) and middle temporal gyrus (MTG). In the human, similar to previous studies, face-selective areas were found in ventral occipital and temporal cortex and an additional face-selective area was found in the anterior temporal cortex. Face-selective areas were also found in lateral temporal cortex, including the previously reported posterior STS area. Body part–selective areas were identified in the human fusiform gyrus and lateral occipitotemporal cortex. In a first experiment, both monkey and human subjects were presented with pictures of faces, body parts, foods, scenes, and man-made objects, to examine the response profiles of each category-selective area to the five stimulus types. In a second experiment, face processing was examined by presenting upright and inverted faces. By comparing the responses and spatial relationships of the areas, we propose potential correspondences across species. Adjacent and overlapping areas in the macaque anterior STS/MTG responded strongly to both faces and body parts, similar to areas in the human fusiform gyrus and posterior STS. Furthermore, face-selective areas on the ventral bank of the STS/MTG discriminated both upright and inverted faces from objects, similar to areas in the human ventral temporal cortex. 
Overall, our findings demonstrate commonalities and differences in the wide-scale brain organization between the two species and provide an initial step toward establishing functionally homologous category-selective areas. PMID:19225169

  12. Differences in neural responses to ipsilateral stimuli in wide-view fields between face- and house-selective areas

    PubMed Central

    Li, Ting; Niu, Yan; Xiang, Jie; Cheng, Junjie; Liu, Bo; Zhang, Hui; Yan, Tianyi; Kanazawa, Susumu; Wu, Jinglong

    2018-01-01

    Category-selective brain areas exhibit varying levels of neural activity to ipsilaterally presented stimuli. However, in face- and house-selective areas, the neural responses evoked by ipsilateral stimuli in the peripheral visual field remain unclear. In this study, we displayed face and house images using a wide-view visual presentation system while performing functional magnetic resonance imaging (fMRI). The face-selective areas (fusiform face area (FFA) and occipital face area (OFA)) exhibited intense neural responses to ipsilaterally presented images, whereas the house-selective areas (parahippocampal place area (PPA) and transverse occipital sulcus (TOS)) exhibited substantially smaller and even negative neural responses to the ipsilaterally presented images. We also found that the category preferences of the contralateral and ipsilateral neural responses were similar. Interestingly, the face- and house-selective areas exhibited neural responses to ipsilateral images that were smaller than the responses to the contralateral images. Multi-voxel pattern analysis (MVPA) was implemented to evaluate the difference between the contralateral and ipsilateral responses. The classification accuracies were much greater than those expected by chance. The classification accuracies in the FFA were smaller than those in the PPA and TOS. The closer eccentricities elicited greater classification accuracies in the PPA and TOS. We propose that these ipsilateral neural responses might be interpreted by interhemispheric communication through intrahemispheric connectivity of white matter connection and interhemispheric connectivity via the corpus callosum and occipital white matter connection. Furthermore, the PPA and TOS likely have weaker interhemispheric communication than the FFA and OFA, particularly in the peripheral visual field. PMID:29451872

  13. Development of an Autonomous Face Recognition Machine.

    DTIC Science & Technology

    1986-12-08

    This approach, like Baron’s, would be a very time consuming task. The problem of locating a face in Bromley’s work was the least complex of the three...top level design and the development and design decisions that were made in developing the Autonomous Face Recognition Machine (AFRM). The chapter is...images within a digital image. The second section examines the algorithm used in performing face recognition. The decision to divide the development

  14. Local gradient Gabor pattern (LGGP) with applications in face recognition, cross-spectral matching, and soft biometrics

    NASA Astrophysics Data System (ADS)

    Chen, Cunjian; Ross, Arun

    2013-05-01

    Researchers in face recognition have been using Gabor filters for image representation due to their robustness to complex variations in expression and illumination. Numerous methods have been proposed to model the output of filter responses by employing either local or global descriptors. In this work, we propose a novel but simple approach for encoding Gradient information on Gabor-transformed images to represent the face, which can be used for identity, gender and ethnicity assessment. Extensive experiments on the standard face benchmark FERET (Visible versus Visible), as well as the heterogeneous face dataset HFB (Near-infrared versus Visible), suggest that the matching performance due to the proposed descriptor is comparable against state-of-the-art descriptor-based approaches in face recognition applications. Furthermore, the same feature set is used in the framework of a Collaborative Representation Classification (CRC) scheme for deducing soft biometric traits such as gender and ethnicity from face images in the AR, Morph and CAS-PEAL databases.

  15. Probabilistic Elastic Part Model: A Pose-Invariant Representation for Real-World Face Verification.

    PubMed

    Li, Haoxiang; Hua, Gang

    2018-04-01

    Pose variation remains a major challenge for real-world face recognition. We approach this problem through a probabilistic elastic part model. We extract local descriptors (e.g., LBP or SIFT) from densely sampled multi-scale image patches. By augmenting each descriptor with its location, a Gaussian mixture model (GMM) is trained to capture the spatial-appearance distribution of the face parts of all face images in the training corpus, namely the probabilistic elastic part (PEP) model. Each mixture component of the GMM is confined to be a spherical Gaussian to balance the influence of the appearance and the location terms, which naturally defines a part. Given one or multiple face images of the same subject, the PEP-model builds its PEP representation by sequentially concatenating descriptors identified by each Gaussian component in a maximum likelihood sense. We further propose a joint Bayesian adaptation algorithm to adapt the universally trained GMM to better model the pose variations between the target pair of faces/face tracks, which consistently improves face verification accuracy. Our experiments show that we achieve state-of-the-art face verification accuracy with the proposed representations on the Labeled Faces in the Wild (LFW) dataset, the YouTube video face database, and the CMU MultiPIE dataset.
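The PEP representation step can be illustrated with a small sketch (an assumption-laden simplification, not the paper's implementation): with spherical Gaussians of shared variance, the maximum-likelihood descriptor for each component is simply the one closest to that component's mean, and the selected descriptors are concatenated in component order.

```python
def pep_representation(descriptors, component_means):
    """PEP-style pooled representation (illustrative sketch only).

    descriptors: location-augmented descriptor vectors from one face.
    component_means: means of the spherical GMM components.
    For each component, the descriptor with maximum likelihood is
    selected; for spherical Gaussians with shared variance this is the
    descriptor with minimum squared distance to the component mean.
    The selections are concatenated in component order.
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    rep = []
    for mean in component_means:
        best = min(descriptors, key=lambda d: sq_dist(d, mean))
        rep.extend(best)
    return rep
```

Because each component selects its own best-matching patch, the resulting vector has a fixed length regardless of how many patches a face image contributes, which is what makes the representation comparable across images.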

  16. Face aging effect simulation model based on multilayer representation and shearlet transform

    NASA Astrophysics Data System (ADS)

    Li, Yuancheng; Li, Yan

    2017-09-01

    In order to extract detailed facial features, we build a face aging effect simulation model based on multilayer representation and shearlet transform. The face is divided into three layers: the global layer of the face, the local features layer, and the texture layer, and an aging model is established for each separately. First, the training samples are classified according to age group, and we use an active appearance model (AAM) at the global level to obtain facial features. The regression equations of shape and texture with age are obtained by fitting support vector machine regression based on the radial basis function. We use the AAM to simulate the aging of facial organs. Then, for the texture detail layer, we acquire the significant high-frequency characteristic components of the face by using the multiscale shearlet transform. Finally, we obtain the simulated aging images of the face by a fusion algorithm. Experiments are carried out on the FG-NET dataset, and the results show that the simulated face images differ little from the originals and achieve a good face aging simulation effect.

  17. Stable face representations

    PubMed Central

    Jenkins, Rob; Burton, A. Mike

    2011-01-01

    Photographs are often used to establish the identity of an individual or to verify that they are who they claim to be. Yet, recent research shows that it is surprisingly difficult to match a photo to a face. Neither humans nor machines can perform this task reliably. Although human perceivers are good at matching familiar faces, performance with unfamiliar faces is strikingly poor. The situation is no better for automatic face recognition systems. In practical settings, automatic systems have been consistently disappointing. In this review, we suggest that failure to distinguish between familiar and unfamiliar face processing has led to unrealistic expectations about face identification in applied settings. We also argue that a photograph is not necessarily a reliable indicator of facial appearance, and develop our proposal that summary statistics can provide more stable face representations. In particular, we show that image averaging stabilizes facial appearance by diluting aspects of the image that vary between snapshots of the same person. We review evidence that the resulting images can outperform photographs in both behavioural experiments and computer simulations, and outline promising directions for future research. PMID:21536553
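The image-averaging idea behind stable face representations is straightforward to sketch. This minimal example (assuming pre-aligned, equally sized greyscale images) averages pixel-wise, diluting aspects that vary between snapshots while keeping what is stable across them:

```python
def average_images(images):
    """Pixel-wise average of aligned, equally sized greyscale images.

    Averaging dilutes image-specific variation (lighting, expression,
    camera) while preserving what is stable across snapshots of the
    same person -- the core idea behind stable face representations.
    """
    n = len(images)
    rows, cols = len(images[0]), len(images[0][0])
    return [[sum(img[r][c] for img in images) / n for c in range(cols)]
            for r in range(rows)]
```

In practice the images must first be brought into correspondence (e.g., by warping to a common shape) before averaging; without alignment, averaging blurs rather than stabilizes.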

  18. Relative expertise affects N170 during selective attention to superimposed face-character images.

    PubMed

    Ip, Chengteng; Wang, Hailing; Fu, Shimin

    2017-07-01

    It remains unclear whether the N170 of ERPs reflects domain-specific or domain-general visual object processing. In this study, we used superimposed images of a face and a Chinese character such that participants' relative expertise for the two object types was either similar (Experiment 1 and 2) or different (Experiment 3). Experiment 1 showed that N170 amplitude was larger when participants attended to the character instead of the face of a face-character combination. This result was unchanged in Experiment 2, in which task difficulty was selectively increased for the face component of the combined stimuli. Experiment 3 showed that, although this N170 enhancement for attending to characters relative to faces persisted for false characters with recognizable parts, it disappeared for unrecognizable characters. Therefore, N170 amplitude was significantly greater for Chinese characters than for faces presented within a combined image, independent of the relative task difficulty. This result strongly calls N170 face selectivity into question, demonstrating that, contrary to the expectations established by a domain-specific account, N170 is modulated by expertise. © 2017 Society for Psychophysiological Research.

  19. Forming impressions: effects of facial expression and gender stereotypes.

    PubMed

    Hack, Tay

    2014-04-01

    The present study of 138 participants explored how facial expressions and gender stereotypes influence impressions. It was predicted that images of smiling women would be evaluated more favorably on traits reflecting warmth, and that images of non-smiling men would be evaluated more favorably on traits reflecting competence. As predicted, smiling female faces were rated as more warm; however, contrary to prediction, perceived competence of male faces was not affected by facial expression. Participants' female stereotype endorsement was a significant predictor for evaluations of female faces; those who ascribed more strongly to traditional female stereotypes reported the most positive impressions of female faces displaying a smiling expression. However, a similar effect was not found for images of men; endorsement of traditional male stereotypes did not predict participants' impressions of male faces.

  20. Photoreceptor counting and montaging of en-face retinal images from an adaptive optics fundus camera

    PubMed Central

    Xue, Bai; Choi, Stacey S.; Doble, Nathan; Werner, John S.

    2008-01-01

    A fast and efficient method for quantifying photoreceptor density in images obtained with an en-face flood-illuminated adaptive optics (AO) imaging system is described. To improve accuracy of cone counting, en-face images are analyzed over extended areas. This is achieved with two separate semiautomated algorithms: (1) a montaging algorithm that joins retinal images with overlapping common features without edge effects and (2) a cone density measurement algorithm that counts the individual cones in the montaged image. The accuracy of the cone density measurement algorithm is high, with >97% agreement for a simulated retinal image (of known density, with low contrast) and for AO images from normal eyes when compared with previously reported histological data. Our algorithms do not require spatial regularity in cone packing and are, therefore, useful for counting cones in diseased retinas, as demonstrated for eyes with Stargardt’s macular dystrophy and retinitis pigmentosa. PMID:17429482
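The cone-counting step can be approximated by local-maximum detection, since each photoreceptor appears as a bright spot in the en-face image. The sketch below is a crude stand-in for the semiautomated algorithm described in the record, not the authors' method:

```python
def count_local_maxima(image, threshold=0):
    """Count strict local maxima in a 2D intensity image.

    Interior pixels brighter than all 8 neighbours and above a noise
    threshold are counted as candidate cones. Dividing the count by
    the imaged retinal area would yield a cone-density estimate.
    """
    count = 0
    for i in range(1, len(image) - 1):
        for j in range(1, len(image[0]) - 1):
            v = image[i][j]
            if v <= threshold:
                continue
            neighbours = [image[i + di][j + dj]
                          for di in (-1, 0, 1) for dj in (-1, 0, 1)
                          if (di, dj) != (0, 0)]
            if all(v > n for n in neighbours):
                count += 1
    return count
```

Notably, such a detector imposes no assumption of spatial regularity in cone packing, which is why this style of counting remains usable in diseased retinas with disrupted mosaics.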

  1. Photoreceptor counting and montaging of en-face retinal images from an adaptive optics fundus camera

    NASA Astrophysics Data System (ADS)

    Xue, Bai; Choi, Stacey S.; Doble, Nathan; Werner, John S.

    2007-05-01

    A fast and efficient method for quantifying photoreceptor density in images obtained with an en-face flood-illuminated adaptive optics (AO) imaging system is described. To improve accuracy of cone counting, en-face images are analyzed over extended areas. This is achieved with two separate semiautomated algorithms: (1) a montaging algorithm that joins retinal images with overlapping common features without edge effects and (2) a cone density measurement algorithm that counts the individual cones in the montaged image. The accuracy of the cone density measurement algorithm is high, with >97% agreement for a simulated retinal image (of known density, with low contrast) and for AO images from normal eyes when compared with previously reported histological data. Our algorithms do not require spatial regularity in cone packing and are, therefore, useful for counting cones in diseased retinas, as demonstrated for eyes with Stargardt's macular dystrophy and retinitis pigmentosa.

  2. Long lifetime generation IV image intensifiers with unfilmed microchannel plate

    NASA Astrophysics Data System (ADS)

    Estrera, Joseph P.; Bender, Edward J.; Giordana, A.; Glesener, John W.; Iosue, Mike J.; Lin, P. P.; Sinor, Timothy W.

    2000-11-01

    Current Generation II Gallium Arsenide (GaAs) image intensifier tube technology requires that the tube microchannel plate (MCP) component have a thin dielectric coating on the side facing the tube's photocathode component. This protective coating substantially reduces the release from the MCP of ions and neutral species, particularly when the image intensifier is operated. The prevention of MCP outgassing is necessary in order to prevent the poisoning of the Cs:O surface on the GaAs photocathode. Many authors have experimented with omitting the MCP coating. Such experiments have consistently led to an intensifier with a significantly reduced lifetime, due to contamination of the Cs:O layer on the photocathode. Unfortunately, the MCP film acts as a scattering center to electron transport within the intensifier and effectively reduces the photoelectron detection efficiency. Substantial enhancement of the image intensifier operating parameters is the motivation for the removal of the MCP film. Removal of the MCP film promises to simplify MCP fabrication and enhance the intensifier parameters related to electro-optical performance and image quality. This paper presents results showing for the first time that it is possible to fabricate a long lifetime image intensifier with a single unfilmed MCP and achieve improved imaging and performance characteristics.

  3. A case of persistent visual hallucinations of faces following LSD abuse: a functional Magnetic Resonance Imaging study.

    PubMed

    Iaria, Giuseppe; Fox, Christopher J; Scheel, Michael; Stowe, Robert M; Barton, Jason J S

    2010-04-01

    In this study, we report the case of a patient experiencing hallucinations of faces that could be reliably precipitated by looking at trees. Using functional Magnetic Resonance Imaging (fMRI), we found that face hallucinations were associated with increased and decreased neural activity in a number of cortical regions. Within the same fusiform face area, however, we found significant decreased and increased neural activity according to whether the patient was experiencing hallucinations or veridical perception of faces, respectively. These findings may indicate key differences in how hallucinatory and veridical perceptions lead to the same phenomenological experience of seeing faces.

  4. Movement cues aid face recognition in developmental prosopagnosia.

    PubMed

    Bennetts, Rachel J; Butcher, Natalie; Lander, Karen; Udale, Robert; Bate, Sarah

    2015-11-01

    Seeing a face in motion can improve face recognition in the general population, and studies of face matching indicate that people with face recognition difficulties (developmental prosopagnosia; DP) may be able to use movement cues as a supplementary strategy to help them process faces. However, the use of facial movement cues in DP has not been examined in the context of familiar face recognition. This study examined whether people with DP were better at recognizing famous faces presented in motion, compared to static. Nine participants with DP and 14 age-matched controls completed a famous face recognition task. Each face was presented twice across 2 blocks: once in motion and once as a still image. Discriminability (A) was calculated for each block. Participants with DP showed a significant movement advantage overall. This was driven by a movement advantage in the first block, but not in the second block. Participants with DP were significantly worse than controls at identifying faces from static images, but there was no difference between those with DP and controls for moving images. Seeing a familiar face in motion can improve face recognition in people with DP, at least in some circumstances. The mechanisms behind this effect are unclear, but these results suggest that some people with DP are able to learn and recognize patterns of facial motion, and movement can act as a useful cue when face recognition is impaired. (c) 2015 APA, all rights reserved.

  5. High precision automated face localization in thermal images: oral cancer dataset as test case

    NASA Astrophysics Data System (ADS)

    Chakraborty, M.; Raman, S. K.; Mukhopadhyay, S.; Patsa, S.; Anjum, N.; Ray, J. G.

    2017-02-01

    Automated face detection is the pivotal step in computer vision aided facial medical diagnosis and biometrics. This paper presents an automatic, subject-adaptive framework for accurate face detection in the long infrared spectrum on our database for oral cancer detection, consisting of malignant, precancerous and normal subjects of varied age groups. Previous work on oral cancer detection using Digital Infrared Thermal Imaging (DITI) reveals that patients and normal subjects differ significantly in their facial thermal distribution. Therefore, it is a challenging task to formulate a completely adaptive framework to veraciously localize the face in such a subject-specific modality. Our model first extracts the most probable facial regions by minimum error thresholding, followed by adaptive methods that leverage the horizontal and vertical projections of the segmented thermal image. Additionally, the model incorporates domain knowledge by exploiting temperature differences between strategic locations of the face. To the best of our knowledge, this is the pioneering work on detecting faces in thermal facial images comprising both patients and normal subjects. Previous works on face detection have not specifically targeted automated medical diagnosis; the face bounding boxes returned by those algorithms are thus loose and not apt for further medical automation. Our algorithm significantly outperforms contemporary face detection algorithms in terms of commonly used metrics for evaluating face detection accuracy. Since our method has been tested on a challenging dataset consisting of both patients and normal subjects of diverse age groups, it can be seamlessly adapted to any DITI-guided facial healthcare or biometric application.

  6. Reflectance Estimation from Urban Terrestrial Images: Validation of a Symbolic Ray-Tracing Method on Synthetic Data

    NASA Astrophysics Data System (ADS)

    Coubard, F.; Brédif, M.; Paparoditis, N.; Briottet, X.

    2011-04-01

    Terrestrial geolocalized images are nowadays widely used on the Internet, mainly in urban areas, through immersion services such as Google Street View. On the long run, we seek to enhance the visualization of these images; for that purpose, radiometric corrections must be performed to free them from illumination conditions at the time of acquisition. Given the simultaneously acquired 3D geometric model of the scene with LIDAR or vision techniques, we face an inverse problem where the illumination and the geometry of the scene are known and the reflectance of the scene is to be estimated. Our main contribution is the introduction of a symbolic ray-tracing rendering to generate parametric images, for quick evaluation and comparison with the acquired images. The proposed approach is then based on an iterative estimation of the reflectance parameters of the materials, using a single rendering pre-processing. We validate the method on synthetic data with linear BRDF models and discuss the limitations of the proposed approach with more general non-linear BRDF models.
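The inverse problem in this record, recovering reflectance when illumination and geometry are known, can be illustrated in its simplest form with a Lambertian model. This is a didactic sketch only; the paper itself handles general linear (and discusses non-linear) BRDFs via symbolic ray tracing:

```python
def lambertian_albedo(intensity, irradiance, cos_incidence):
    """Invert the Lambertian shading model for albedo.

    The forward model is I = rho * E * cos(theta), where E is the
    incident irradiance and theta the angle between the surface
    normal and the light direction; given I, E and cos(theta), the
    albedo rho follows by division.
    """
    if irradiance <= 0 or cos_incidence <= 0:
        raise ValueError("surface must be lit for albedo recovery")
    return intensity / (irradiance * cos_incidence)
```

In the full problem each pixel carries such a constraint, and with richer BRDF models the per-material parameters are fitted iteratively against renderings rather than solved in closed form.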

  7. Visual adaptation provides objective electrophysiological evidence of facial identity discrimination.

    PubMed

    Retter, Talia L; Rossion, Bruno

    2016-07-01

    Discrimination of facial identities is a fundamental function of the human brain that is challenging to examine with macroscopic measurements of neural activity, such as those obtained with functional magnetic resonance imaging (fMRI) and electroencephalography (EEG). Although visual adaptation or repetition suppression (RS) stimulation paradigms have been successfully implemented to this end with such recording techniques, objective evidence of an identity-specific discrimination response due to adaptation at the level of the visual representation is lacking. Here, we addressed this issue with fast periodic visual stimulation (FPVS) and EEG recording combined with a symmetry/asymmetry adaptation paradigm. Adaptation to one facial identity is induced through repeated presentation of that identity at a rate of 6 images per second (6 Hz) over 10 sec. Subsequently, this identity is presented in alternation with another facial identity (i.e., its anti-face, both faces being equidistant from an average face), producing an identity repetition rate of 3 Hz over a 20 sec testing sequence. A clear EEG response at 3 Hz is observed over the right occipito-temporal (ROT) cortex, indexing discrimination between the two facial identities in the absence of an explicit behavioral discrimination measure. This face identity discrimination occurs immediately after adaptation and disappears rapidly within 20 sec. Importantly, this 3 Hz response is not observed in a control condition without the single-identity 10 sec adaptation period. These results indicate that visual adaptation to a given facial identity produces an objective (i.e., at a pre-defined stimulation frequency) electrophysiological index of visual discrimination between that identity and another, and provides a unique behavior-free quantification of the effect of visual adaptation. Copyright © 2016 Elsevier Ltd. All rights reserved.
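The frequency-tagging analysis behind FPVS reduces to measuring the amplitude of one DFT bin: if identities alternate at 3 Hz, an identity-discrimination response appears as energy at exactly 3 Hz. The sketch below (pure Python, not the authors' analysis pipeline) projects a signal onto sine and cosine at a target frequency:

```python
import math

def amplitude_at_frequency(signal, freq_hz, sample_rate):
    """Amplitude of one DFT bin, as in frequency-tagging (FPVS) EEG.

    Projects the signal onto cosine/sine at the target frequency and
    scales so that a pure sinusoid of amplitude A returns A. In an
    FPVS identity-alternation design, a response at the alternation
    frequency (here 3 Hz) indexes discrimination between the two
    facial identities.
    """
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq_hz * k / sample_rate)
             for k, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq_hz * k / sample_rate)
             for k, s in enumerate(signal))
    return 2 * math.sqrt(re * re + im * im) / n
```

The appeal of the approach, as the record notes, is objectivity: the response is sought at a pre-defined stimulation frequency, so no behavioural discrimination measure is needed.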

  8. Automatic face naming by learning discriminative affinity matrices from weakly labeled images.

    PubMed

    Xiao, Shijie; Xu, Dong; Wu, Jianxin

    2015-10-01

    Given a collection of images, where each image contains several faces and is associated with a few names in the corresponding caption, the goal of face naming is to infer the correct name for each face. In this paper, we propose two new methods to effectively solve this problem by learning two discriminative affinity matrices from these weakly labeled images. We first propose a new method called regularized low-rank representation by effectively utilizing weakly supervised information to learn a low-rank reconstruction coefficient matrix while exploring multiple subspace structures of the data. Specifically, by introducing a specially designed regularizer to the low-rank representation method, we penalize the corresponding reconstruction coefficients related to the situations where a face is reconstructed by using face images from other subjects or by using itself. With the inferred reconstruction coefficient matrix, a discriminative affinity matrix can be obtained. Moreover, we also develop a new distance metric learning method called ambiguously supervised structural metric learning by using weakly supervised information to seek a discriminative distance metric. Hence, another discriminative affinity matrix can be obtained using the similarity matrix (i.e., the kernel matrix) based on the Mahalanobis distances of the data. Observing that these two affinity matrices contain complementary information, we further combine them to obtain a fused affinity matrix, based on which we develop a new iterative scheme to infer the name of each face. Comprehensive experiments demonstrate the effectiveness of our approach.
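    The fusion step, combining two complementary affinity matrices before inferring names, can be illustrated on a toy example. The min-max normalization, equal weighting, and the 4-face affinity values below are illustrative assumptions; they are not the paper's regularized low-rank representation or metric-learning constructions.

```python
import numpy as np

def fuse_affinities(a1, a2, alpha=0.5):
    """Fuse two affinity matrices after scaling each to [0, 1]."""
    def norm(a):
        a = a - a.min()
        return a / a.max() if a.max() > 0 else a
    return alpha * norm(a1) + (1 - alpha) * norm(a2)

# Toy example: 4 faces; both affinity sources agree that faces {0, 1}
# and {2, 3} belong to the same subjects (values hypothetical).
a1 = np.array([[1.0, 0.9, 0.1, 0.0],
               [0.9, 1.0, 0.0, 0.2],
               [0.1, 0.0, 1.0, 0.8],
               [0.0, 0.2, 0.8, 1.0]])
a2 = np.array([[1.0, 0.8, 0.2, 0.1],
               [0.8, 1.0, 0.1, 0.1],
               [0.2, 0.1, 1.0, 0.9],
               [0.1, 0.1, 0.9, 1.0]])
fused = fuse_affinities(a1, a2)

# Each face's strongest off-diagonal affinity points to its true partner.
partners = [int(np.argmax(np.where(np.eye(4, dtype=bool)[i], -1, fused[i])))
            for i in range(4)]
```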

  9. Expression-dependent susceptibility to face distortions in processing of facial expressions of emotion.

    PubMed

    Guo, Kun; Soornack, Yoshi; Settle, Rebecca

    2018-03-05

    Our capability of recognizing facial expressions of emotion under different viewing conditions implies the existence of an invariant expression representation. As natural visual signals are often distorted and our perceptual strategy changes with external noise level, it is essential to understand how expression perception is susceptible to face distortion and whether the same facial cues are used to process high- and low-quality face images. We systematically manipulated face image resolution (experiment 1) and blur (experiment 2), and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. Our analysis revealed a reasonable tolerance to face distortion in expression perception. Reducing image resolution up to 48 × 64 pixels or increasing image blur up to 15 cycles/image had little impact on expression assessment and associated gaze behaviour. Further distortion led to decreased expression categorization accuracy and intensity rating, increased reaction time and fixation duration, and stronger central fixation bias which was not driven by distortion-induced changes in local image saliency. Interestingly, the observed distortion effects were expression-dependent with less deterioration impact on happy and surprise expressions, suggesting this distortion-invariant facial expression perception might be achieved through the categorical model involving a non-linear configural combination of local facial features. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. Automated facial acne assessment from smartphone images

    NASA Astrophysics Data System (ADS)

    Amini, Mohammad; Vasefi, Fartash; Valdebran, Manuel; Huang, Kevin; Zhang, Haomiao; Kemp, William; MacKinnon, Nicholas

    2018-02-01

    A smartphone mobile medical application is presented that analyzes the health of facial skin using a smartphone image and cloud-based image processing techniques. The mobile application uses the camera to capture a front face image of a subject, after which the captured image is spatially calibrated based on fiducial points such as the position of the iris. A facial recognition algorithm is used to identify features of the human face image, to normalize the image, and to define facial regions of interest (ROI) for acne assessment. We identify acne lesions and classify them into two categories: papules and pustules. Automated facial acne assessment was validated by performing tests on images of 60 digital human models and 10 real human face images. The application was able to identify 92% of acne lesions within five facial ROIs. The classification accuracy for separating papules from pustules was 98%. Combined with in-app documentation of treatment, lifestyle factors, and automated facial acne assessment, the app can be used in both cosmetic and clinical dermatology. It allows users to quantitatively self-measure acne severity and treatment efficacy on an ongoing basis to help them manage their chronic facial acne.

  11. Focussed Ion Beam Milling and Scanning Electron Microscopy of Brain Tissue

    PubMed Central

    Knott, Graham; Rosset, Stéphanie; Cantoni, Marco

    2011-01-01

    This protocol describes how biological samples, like brain tissue, can be imaged in three dimensions using the focussed ion beam/scanning electron microscope (FIB/SEM). The samples are fixed with aldehydes and heavy-metal stained using osmium tetroxide and uranyl acetate. They are then dehydrated with alcohol and infiltrated with resin, which is then hardened. Using a light microscope and an ultramicrotome with glass knives, a small block containing the region of interest close to the surface is made. The block is then placed inside the FIB/SEM, and the ion beam is used to roughly mill a vertical face along one side of the block, close to this region. Using backscattered electrons to image the underlying structures, a smaller face is then milled with a finer ion beam and the surface scrutinised more closely to determine the exact area of the face to be imaged and milled. The parameters of the microscope are then set so that the face is repeatedly milled and imaged, collecting serial images through a volume of the block. The image stack will typically contain isotropic voxels with dimensions as small as 4 nm in each direction. This image quality in any imaging plane enables the user to analyse cell ultrastructure at any viewing angle within the image stack. PMID:21775953

  12. Comparison of different methods for gender estimation from face image of various poses

    NASA Astrophysics Data System (ADS)

    Ishii, Yohei; Hongo, Hitoshi; Niwa, Yoshinori; Yamamoto, Kazuhiko

    2003-04-01

    Recently, gender estimation from face images has been studied mainly for frontal facial images. However, such facial images cannot be obtained consistently in application systems for security, surveillance and marketing research. Building such systems requires a method that estimates gender from facial images at various poses. In this paper, three classifiers for appearance-based gender estimation using four directional features (FDF) are compared: linear discriminant analysis (LDA), Support Vector Machines (SVMs) and Sparse Network of Winnows (SNoW). Face images used for the experiments were obtained from 35 viewpoints, with the viewing direction varied +/-45 degrees horizontally and +/-30 degrees vertically at 15-degree intervals. Although LDA showed the best performance for frontal facial images, the SVM with a Gaussian kernel showed the best performance (86.0%) over the facial images from all 35 viewpoints. From these results, the SVM with a Gaussian kernel appears robust to changes in viewpoint when estimating gender. Furthermore, the estimation rate at each viewpoint was quite close to the average estimation rate over the 35 viewpoints, suggesting that learning face images from multiple directions as one class yields reasonable gender estimates within the experimented range of viewpoints.
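    The comparison at the heart of this record, a linear discriminant versus a Gaussian-kernel classifier on appearance features, can be sketched in a few lines. The synthetic 8-D features below stand in for the paper's four directional features, and the kernel classifier is a simple Parzen-style stand-in for a trained RBF-SVM; all data and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic "gender" classes in an 8-D appearance-feature space.
n, d = 200, 8
x0 = rng.standard_normal((n, d))
x1 = rng.standard_normal((n, d)) + 1.0
xtr = np.vstack([x0[:150], x1[:150]]); ytr = np.r_[np.zeros(150), np.ones(150)]
xte = np.vstack([x0[150:], x1[150:]]); yte = np.r_[np.zeros(50), np.ones(50)]

def lda_predict(xtr, ytr, xte):
    """Two-class Fisher LDA with a pooled covariance estimate."""
    m0, m1 = xtr[ytr == 0].mean(0), xtr[ytr == 1].mean(0)
    cov = np.cov(np.vstack([xtr[ytr == 0] - m0, xtr[ytr == 1] - m1]).T)
    w = np.linalg.solve(cov, m1 - m0)          # discriminant direction
    thresh = w @ (m0 + m1) / 2                 # midpoint decision threshold
    return (xte @ w > thresh).astype(float)

def rbf_predict(xtr, ytr, xte, gamma=0.1):
    """Gaussian-kernel (Parzen-style) classifier, a lightweight stand-in
    for the paper's SVM with Gaussian kernel."""
    d2 = ((xte[:, None, :] - xtr[None, :, :]) ** 2).sum(-1)
    k = np.exp(-gamma * d2)
    score = k[:, ytr == 1].sum(1) - k[:, ytr == 0].sum(1)
    return (score > 0).astype(float)

acc_lda = (lda_predict(xtr, ytr, xte) == yte).mean()
acc_rbf = (rbf_predict(xtr, ytr, xte) == yte).mean()
```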

  13. Cerebral Cavernous Malformation

    MedlinePlus

    ... and individuals frequently have multiple CCMs found via magnetic resonance imaging. Individuals with CCM are faced with a diagnosis ...

  14. Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition

    NASA Astrophysics Data System (ADS)

    Rouabhia, C.; Tebbikh, H.

    2008-06-01

    Face recognition is a specialized image-processing task that has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system from video sequence images, dedicated to identifying persons whose faces are partly occluded. This system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We performed the feature extraction task on the eye and nose images separately, then used a Multi-Layer Perceptron classifier. Compared to the whole face, the simulation results favor the facial parts in terms of memory capacity and recognition rate (99.41% for the eyes, 98.16% for the nose and 97.25% for the whole face).
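    The 2DPCA half of the hybrid extraction can be sketched directly: form the image covariance matrix from the image matrices themselves (no vectorization) and project each image onto its top eigenvectors. This is a minimal sketch of standard 2DPCA only; the 2DLDA stage and the neural-network classifier of ACPDL2D are omitted, and the image sizes below are arbitrary.

```python
import numpy as np

def two_d_pca(images, k):
    """2DPCA: project each image A onto the top-k eigenvectors of the
    image covariance matrix G = mean((A - mean)^T (A - mean))."""
    mean = images.mean(axis=0)
    g = np.zeros((images.shape[2], images.shape[2]))
    for a in images:
        d = a - mean
        g += d.T @ d
    g /= len(images)
    vals, vecs = np.linalg.eigh(g)     # eigenvalues in ascending order
    proj = vecs[:, ::-1][:, :k]        # keep the top-k eigenvectors
    return np.stack([a @ proj for a in images]), proj

rng = np.random.default_rng(2)
imgs = rng.random((20, 32, 24))        # 20 toy "eye region" images, 32x24
feats, proj = two_d_pca(imgs, k=5)     # each image becomes a 32x5 feature map
```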

  15. Corrosion and Passivity Studies with Titanium

    DTIC Science & Technology

    1955-09-30

    the (00.1) Face of a Titanium Single Crystal. - Part 3: Secondary Electron Emission from the Titanium Crystal, and from the Copper-Covered Titanium...ner upon the (00.1) face of a titanium single crystal. Low-energy electron diffraction is used to investigate the structure of the deposit. Before...cathode emission is strongly dependent on the work function. Since it varies with crystal faces and the tip is generally so small that it is a single

  16. Distinct spatial frequency sensitivities for processing faces and emotional expressions.

    PubMed

    Vuilleumier, Patrik; Armony, Jorge L; Driver, Jon; Dolan, Raymond J

    2003-06-01

    High and low spatial frequency information in visual images is processed by distinct neural channels. Using event-related functional magnetic resonance imaging (fMRI) in humans, we show dissociable roles of such visual channels for processing faces and emotional fearful expressions. Neural responses in fusiform cortex, and effects of repeating the same face identity upon fusiform activity, were greater with intact or high-spatial-frequency face stimuli than with low-frequency faces, regardless of emotional expression. In contrast, amygdala responses to fearful expressions were greater for intact or low-frequency faces than for high-frequency faces. An activation of pulvinar and superior colliculus by fearful expressions occurred specifically with low-frequency faces, suggesting that these subcortical pathways may provide coarse fear-related inputs to the amygdala.
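    The stimulus manipulation described here, splitting a face image into low- and high-spatial-frequency components, can be sketched with an FFT-domain filter. This is an illustrative ideal circular filter on a random stand-in image, not the study's filtering procedure; the cutoff of 8 cycles/image is a hypothetical value.

```python
import numpy as np

def split_spatial_frequencies(img, cutoff_cpi):
    """Split an image into low- and high-spatial-frequency components
    using an ideal circular filter; cutoff is in cycles per image."""
    f = np.fft.fft2(img)
    h, w = img.shape
    fy = np.fft.fftfreq(h) * h                 # cycles per image, vertical
    fx = np.fft.fftfreq(w) * w                 # cycles per image, horizontal
    radius = np.hypot(fy[:, None], fx[None, :])
    low_mask = radius <= cutoff_cpi
    low = np.real(np.fft.ifft2(f * low_mask))
    high = np.real(np.fft.ifft2(f * ~low_mask))
    return low, high

rng = np.random.default_rng(3)
face = rng.random((128, 128))                  # stand-in for a face image
low, high = split_spatial_frequencies(face, cutoff_cpi=8)
```

    Because the two masks partition the spectrum, the low- and high-frequency components sum back to the original image exactly.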

  17. Flat or curved thin optical display panel

    DOEpatents

    Veligdan, James T.

    1995-01-10

    An optical panel 10 includes a plurality of waveguides 12 stacked together, with each waveguide 12 having a first end 12a and an opposite second end 12b. The first ends 12a collectively define a first face 16, and the second ends 12b collectively define a second face 18 of the panel 10. The second face 18 is disposed at an acute face angle relative to the waveguides 12 to provide a panel 10 which is relatively thin compared to the height of the second face. In an exemplary embodiment for use in a projection TV, the first face 16 is substantially smaller in height than the second face 18 and receives a TV image, with the second face 18 defining a screen for viewing the image enlarged.

  18. Directional templates for real-time detection of coronal axis rotated faces

    NASA Astrophysics Data System (ADS)

    Perez, Claudio A.; Estevez, Pablo A.; Garate, Patricio

    2004-10-01

    Real-time face and iris detection on video images has gained renewed attention because of multiple possible applications in studying eye function, drowsiness detection, virtual keyboard interfaces, face recognition, video processing and multimedia retrieval. In this paper, a study is presented on using directional templates to detect faces rotated about the coronal axis. The templates are built by extracting the directional image information from the regions of the eyes, nose and mouth. The face position is determined by computing a line integral using the templates over the face directional image; the line integral reaches a maximum when it coincides with the face position. An improvement in localization selectivity is shown through the increased value of the line integral computed with the directional template. Improvements in the line integral value with face size and face rotation angle were also found. Based on these results, the new templates should improve selectivity and hence provide the means to restrict computation to fewer templates and to narrow the region of search during the face and eye tracking procedure. The proposed method is real-time and completely non-invasive, and was applied with no background limitation under normal indoor illumination conditions.
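    The line-integral idea, score each candidate position by how well the image's local orientations agree with a directional template, can be sketched on a toy orientation map. The random directional image, the template size, and the cosine agreement score below are illustrative assumptions, not the paper's template construction.

```python
import numpy as np

def directional_score(dir_img, template, y, x):
    """Line-integral-style score: mean orientation agreement between the
    template and the directional image patch anchored at (y, x)."""
    th, tw = template.shape
    patch = dir_img[y:y + th, x:x + tw]
    return np.cos(patch - template).mean()    # 1.0 = perfect agreement

rng = np.random.default_rng(4)
dir_img = rng.uniform(0, np.pi, (60, 80))     # toy orientation map (radians)
template = dir_img[20:30, 35:50].copy()       # "eyes/nose/mouth" directions

# Exhaustive search: the score peaks where template and image coincide.
scores = np.array([[directional_score(dir_img, template, y, x)
                    for x in range(80 - 15 + 1)]
                   for y in range(60 - 10 + 1)])
best = tuple(int(v) for v in np.unravel_index(np.argmax(scores), scores.shape))
```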

  19. Face matching impairment in developmental prosopagnosia.

    PubMed

    White, David; Rivolta, Davide; Burton, A Mike; Al-Janabi, Shahd; Palermo, Romina

    2017-02-01

    Developmental prosopagnosia (DP) is commonly referred to as 'face blindness', a term that implies a perceptual basis to the condition. However, DP presents as a deficit in face recognition and is diagnosed using memory-based tasks. Here, we test face identification ability in six people with DP, who are severely impaired on face memory tasks, using tasks that do not rely on memory. First, we compared DP to control participants on a standardized test of unfamiliar face matching using facial images taken on the same day and under standardized studio conditions (Glasgow Face Matching Test; GFMT). Scores for DP participants did not differ from normative accuracy scores on the GFMT. Second, we tested face matching performance on a test created using images that were sourced from the Internet and so varied substantially due to changes in viewing conditions and in a person's appearance (Local Heroes Test; LHT). DP participants showed significantly poorer matching accuracy on the LHT than control participants, for both unfamiliar and familiar face matching. Interestingly, this deficit is specific to 'match' trials, suggesting that people with DP may have particular difficulty in matching images of the same person that contain natural day-to-day variations in appearance. We discuss these results in the broader context of individual differences in face matching ability.

  20. Modeling Face Identification Processing in Children and Adults.

    ERIC Educational Resources Information Center

    Schwarzer, Gudrun; Massaro, Dominic W.

    2001-01-01

    Two experiments studied whether and how 5-year-olds integrate single facial features to identify faces. Results indicated that children could evaluate and integrate information from eye and mouth features to identify a face when salience of features was varied. A weighted Fuzzy Logical Model of Perception fit better than a Single Channel Model,…

  1. Low-level image properties in facial expressions.

    PubMed

    Menzel, Claudia; Redies, Christoph; Hayn-Leichsenring, Gregor U

    2018-06-04

    We studied low-level image properties of face photographs and analyzed whether they change with different emotional expressions displayed by an individual. Differences in image properties were measured in three databases that depicted a total of 167 individuals. Face images were used either in their original form, cut to a standard format or superimposed with a mask. Image properties analyzed were: brightness, redness, yellowness, contrast, spectral slope, overall power and relative power in low, medium and high spatial frequencies. Results showed that image properties differed significantly between expressions within each individual image set. Further, specific facial expressions corresponded to patterns of image properties that were consistent across all three databases. In order to experimentally validate our findings, we equalized the luminance histograms and spectral slopes of three images from a given individual who showed two expressions. Participants were significantly slower in matching the expression in an equalized compared to an original image triad. Thus, existing differences in these image properties (i.e., spectral slope, brightness or contrast) facilitate emotion detection in particular sets of face images. Copyright © 2018. Published by Elsevier B.V.
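    Three of the measured properties, brightness, contrast, and spectral slope, can be computed with numpy alone. This is a minimal sketch on a white-noise stand-in image; the rotational-average binning and fitting range are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def image_properties(img):
    """Brightness (mean), RMS contrast (std), and spectral slope: the slope
    of a log-log fit of rotationally averaged Fourier power vs. frequency."""
    brightness = img.mean()
    contrast = img.std()
    power = np.abs(np.fft.fftshift(np.fft.fft2(img - brightness))) ** 2
    h, w = img.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)   # radius of each bin
    radial = np.bincount(r.ravel(), power.ravel()) / np.bincount(r.ravel())
    f = np.arange(1, min(h, w) // 2)          # skip DC, stay below Nyquist
    slope = np.polyfit(np.log(f), np.log(radial[f]), 1)[0]
    return brightness, contrast, slope

rng = np.random.default_rng(5)
img = rng.random((128, 128))                  # white noise: flat spectrum
b, c, s = image_properties(img)
```

    Natural images typically show slopes near -2 in power; the white-noise stand-in here should give a slope near 0.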

  2. Neuroplasticity to a single-episode traumatic stress revealed by resting-state fMRI in awake rats.

    PubMed

    Liang, Zhifeng; King, Jean; Zhang, Nanyin

    2014-12-01

    Substantial evidence has suggested that the brain structures of the medial prefrontal cortex (mPFC) and amygdala (AMYG) are implicated in the pathophysiology of stress-related disorders. However, little is known with respect to the system-level adaptation of their neural circuitries to the perturbations of traumatic stressors. By utilizing behavioral tests and an awake animal imaging approach, in the present study we non-invasively investigated the impact of single-episode predator odor exposure in an inescapable environment on behaviors and neural circuits in rodents. We found that predator odor exposure significantly increased the freezing behavior. In addition, animals exhibited heightened anxiety levels seven days after the exposure. Intriguingly, we also found that the intrinsic functional connectivity within the AMYG-mPFC circuit was considerably compromised seven days after the traumatic event. Our data provide neuroimaging evidence suggesting that prolonged neuroadaptation induced by a single episode of traumatic stress can be non-invasively detected in rodents. These results also support the face validity and construct validity of using the paradigm of single trauma exposure in an inescapable environment as an animal model for post-traumatic stress disorder. Taken together, the present study has opened a new avenue for investigating animal models of stress-related mental disorders by going beyond static neuroanatomy, and ultimately bridging the gap between basic biomedical and human imaging research. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Morphologic features of basal cell carcinoma using the en-face mode in frequency domain optical coherence tomography.

    PubMed

    von Braunmühl, T; Hartmann, D; Tietze, J K; Cekovic, D; Kunte, C; Ruzicka, T; Berking, C; Sattler, E C

    2016-11-01

    Optical coherence tomography (OCT) has become a valuable non-invasive tool in the in vivo diagnosis of non-melanoma skin cancer, especially of basal cell carcinoma (BCC). Thanks to an updated software-supported algorithm, surface-parallel imaging is now possible via a new en-face mode - similar to the horizontal en-face mode in high-definition OCT and reflectance confocal microscopy - which, in combination with the established slice mode of frequency domain (FD-)OCT, may offer additional information in the diagnosis of BCC. To define characteristic morphologic features of BCC using the new en-face mode in addition to the conventional cross-sectional imaging mode for three-dimensional imaging of BCC in FD-OCT. A total of 33 BCC were examined preoperatively by imaging in en-face mode as well as cross-sectional mode in FD-OCT. Characteristic features were evaluated and correlated with histopathology findings. Features established in the cross-sectional imaging mode as well as additional features were present in the en-face mode of FD-OCT: lobulated structures (100%), dark peritumoral rim (75%), bright peritumoral stroma (96%), branching vessels (90%), compressed fibrous bundles between lobulated nests ('star shaped') (78%), and intranodular small bright dots (51%). These features were also evaluated according to the histopathological subtype. In the en-face mode, the lobulated structures with compressed fibrous bundles of the BCC were more distinct than in the slice mode. FD-OCT with a new depiction for horizontal and vertical imaging modes offers additional information in the diagnosis of BCC, especially in nodular BCC, and enhances the possibility of the evaluation of morphologic tumour features. © 2016 European Academy of Dermatology and Venereology.

  4. A system for tracking and recognizing pedestrian faces using a network of loosely coupled cameras

    NASA Astrophysics Data System (ADS)

    Gagnon, L.; Laliberté, F.; Foucher, S.; Branzan Albu, A.; Laurendeau, D.

    2006-05-01

    A face recognition module has been developed for an intelligent multi-camera video surveillance system. The module can recognize a pedestrian face in terms of six basic emotions and the neutral state. Face and facial features detection (eyes, nasal root, nose and mouth) are first performed using cascades of boosted classifiers. These features are used to normalize the pose and dimension of the face image. Gabor filters are then sampled on a regular grid covering the face image to build a facial feature vector that feeds a nearest neighbor classifier with a cosine distance similarity measure for facial expression interpretation and face model construction. A graphical user interface allows the user to adjust the module parameters.

  5. A computer-generated animated face stimulus set for psychophysiological research

    PubMed Central

    Naples, Adam; Nguyen-Phuc, Alyssa; Coffman, Marika; Kresse, Anna; Faja, Susan; Bernier, Raphael; McPartland, James

    2014-01-01

    Human faces are fundamentally dynamic, but experimental investigations of face perception traditionally rely on static images of faces. While naturalistic videos of actors have been used with success in some contexts, much research in neuroscience and psychophysics demands carefully controlled stimuli. In this paper, we describe a novel set of computer-generated, dynamic face stimuli. These grayscale faces are tightly controlled for low- and high-level visual properties. All faces are standardized in terms of size, luminance, and the location and size of facial features. Each face begins with a neutral pose and transitions to an expression over the course of 30 frames. Altogether there are 222 stimuli spanning 3 different categories of movement: (1) an affective movement (fearful face); (2) a neutral movement (close-lipped, puffed cheeks with open eyes); and (3) a biologically impossible movement (upward dislocation of eyes and mouth). To determine whether early brain responses sensitive to low-level visual features differed between expressions, we measured the occipital P100 event-related potential (ERP), which is known to reflect differences in early stages of visual processing, and the N170, which reflects structural encoding of faces. We found no differences between faces at the P100, indicating that the different face categories were well matched on low-level image properties. This database provides researchers with a well-controlled set of dynamic faces, matched on low-level image characteristics, that is applicable to a range of research questions in social perception. PMID:25028164

  6. The sequence of cortical activity inferred by response latency variability in the human ventral pathway of face processing.

    PubMed

    Lin, Jo-Fu Lotus; Silva-Pereyra, Juan; Chou, Chih-Che; Lin, Fa-Hsuan

    2018-04-11

    Variability in neuronal response latency has typically been considered to be caused by random noise. Previous studies of single cells and large neuronal populations have shown that the temporal variability tends to increase along the visual pathway. Inspired by these previous studies, we hypothesized that functional areas at later stages in the visual pathway of face processing would have larger variability in the response latency. To test this hypothesis, we used magnetoencephalographic data collected when subjects were presented with images of human faces. Faces are known to elicit a sequence of activity from the primary visual cortex to the fusiform gyrus. Our results revealed that the fusiform gyrus showed larger variability in the response latency compared to the calcarine fissure. Dynamic and spectral analyses of the latency variability indicated that the response latency in the fusiform gyrus was more variable than in the calcarine fissure between 70 ms and 200 ms after the stimulus onset and between 4 Hz and 40 Hz, respectively. The sequential processing of face information from the calcarine sulcus to the fusiform gyrus was more reliably detected based on the size of the response variability than on the timing of the maximal response peaks. With two areas in the ventral visual pathway, we show that the variability in response latency across brain areas can be used to infer the sequence of cortical activity.

  7. Dependence of the appearance-based perception of criminality, suggestibility, and trustworthiness on the level of pixelation of facial images.

    PubMed

    Nurmoja, Merle; Eamets, Triin; Härma, Hanne-Loore; Bachmann, Talis

    2012-10-01

    While the dependence of face identification on the level of pixelation-transform of facial images has been well studied, similar research on face-based trait perception is underdeveloped. Because depiction formats used for hiding individual identity in visual media, and evidential material recorded by surveillance cameras, often consist of pixelized images, knowing the effects of pixelation on person perception has practical relevance. Here, the results of two experiments are presented showing the effect of facial image pixelation on the perception of criminality, trustworthiness, and suggestibility. It appears that individuals (N = 46, M age = 21.5 yr., SD = 3.1 for criminality ratings; N = 94, M age = 27.4 yr., SD = 10.1 for other ratings) can discriminate facial cues indicative of these perceived traits even at a coarse level of image pixelation (10-12 pixels per face horizontally), and that discriminability increases as the pixelation becomes finer. Perceived criminality and trustworthiness appear to be better carried by the pixelized images than perceived suggestibility.
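    The pixelation transform itself, reduce a face to roughly 10-12 blocks across and display each block at its mean value, can be sketched by block averaging. The image size and block arithmetic below are illustrative assumptions, not the stimuli used in the experiments.

```python
import numpy as np

def pixelate(img, px_across):
    """Pixelate by block-averaging so the image spans about `px_across`
    blocks horizontally (block size shared by both axes)."""
    h, w = img.shape
    block = max(1, w // px_across)
    hh, ww = h // block * block, w // block * block    # crop to whole blocks
    blocks = img[:hh, :ww].reshape(hh // block, block, ww // block, block)
    coarse = blocks.mean(axis=(1, 3))                  # one value per block
    return np.repeat(np.repeat(coarse, block, 0), block, 1)

rng = np.random.default_rng(6)
face = rng.random((96, 72))                 # stand-in face image
coarse = pixelate(face, px_across=12)       # ~12 "pixels" per face width
```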

  8. An efficient method for facial component detection in thermal images

    NASA Astrophysics Data System (ADS)

    Paul, Michael; Blanik, Nikolai; Blazek, Vladimir; Leonhardt, Steffen

    2015-04-01

    A method to detect certain regions in thermal images of human faces is presented. In this approach, the following steps are necessary to locate the periorbital and the nose regions: First, the face is segmented from the background by thresholding and morphological filtering. Subsequently, a search region within the face, around its center of mass, is evaluated. Automatically computed temperature thresholds are used per subject and image or image sequence to generate binary images, in which the periorbital regions are located by integral projections. Then, the located positions are used to approximate the nose position. It is possible to track features in the located regions. Therefore, these regions are interesting for different applications like human-machine interaction, biometrics and biomedical imaging. The method is easy to implement and does not rely on any training images or templates. Furthermore, the approach saves processing resources due to simple computations and restricted search regions.
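    The core of the localization step, threshold the thermal image and use integral projections (row/column sums of the binary mask) to find warm regions, can be sketched in a few lines. The temperatures, image size, and planted warm band below are hypothetical values for illustration only.

```python
import numpy as np

def locate_warm_rows(thermal, thresh):
    """Threshold a thermal image and use a vertical integral projection
    (row sums of the binary mask) to find the warmest horizontal band."""
    mask = thermal > thresh
    projection = mask.sum(axis=1)            # one count per image row
    return int(np.argmax(projection)), projection

rng = np.random.default_rng(7)
thermal = 30 + rng.random((40, 60))          # baseline skin temps, deg C
thermal[12:16, 10:50] += 5.0                 # planted warm "periorbital" band
row, proj = locate_warm_rows(thermal, thresh=33.0)
```

    In the actual method, the same projection trick applied within a restricted search region around the face's center of mass locates the periorbital regions, from which the nose position is approximated.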

  9. Correction of motion artifacts in endoscopic optical coherence tomography and autofluorescence images based on azimuthal en face image registration.

    PubMed

    Abouei, Elham; Lee, Anthony M D; Pahlevaninezhad, Hamid; Hohert, Geoffrey; Cua, Michelle; Lane, Pierre; Lam, Stephen; MacAulay, Calum

    2018-01-01

    We present a method for the correction of motion artifacts present in two- and three-dimensional in vivo endoscopic images produced by rotary-pullback catheters. This method can correct for cardiac/breathing-based motion artifacts and catheter-based motion artifacts such as nonuniform rotational distortion (NURD). This method assumes that en face tissue imaging contains slowly varying structures that are roughly parallel to the pullback axis. The method reduces motion artifacts using a dynamic time warping solution through a cost matrix that measures similarities between adjacent frames in en face images. We optimize and demonstrate the suitability of this method using a real and simulated NURD phantom and in vivo endoscopic pulmonary optical coherence tomography and autofluorescence images. Qualitative and quantitative evaluations of the method show an enhancement of the image quality. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
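    The dynamic-time-warping engine behind the correction can be sketched with the classic DP recurrence. This shows only the core DTW cost between two 1-D profiles (e.g., corresponding lines of adjacent en face frames); the paper's full cost matrix over frames and the warping applied to the catheter images are not reproduced, and the sinusoidal "frames" below are synthetic.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping cost between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible predecessor paths
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]

frame = np.sin(np.linspace(0, 3, 50))           # profile from one frame
warped = np.sin(np.linspace(0, 3, 50) ** 1.05)  # NURD-like distortion of it
d_same = dtw_distance(frame, frame)
d_warp = dtw_distance(frame, warped)
d_far = dtw_distance(frame, -frame)             # structurally unrelated
```

    A small DTW cost between adjacent frames signals that one is a mild nonuniform warp of the other, which is exactly the similarity the artifact-correction cost matrix measures.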

  10. Pharmacokinetics, metabolism, biodistribution, radiation dosimetry, and toxicology of (18)F-fluoroacetate ((18)F-FACE) in non-human primates.

    PubMed

    Nishii, Ryuichi; Tong, William; Wendt, Richard; Soghomonyan, Suren; Mukhopadhyay, Uday; Balatoni, Julius; Mawlawi, Osama; Bidaut, Luc; Tinkey, Peggy; Borne, Agatha; Alauddin, Mian; Gonzalez-Lepera, Carlos; Yang, Bijun; Gelovani, Juri G

    2012-04-01

    To facilitate the clinical translation of (18)F-fluoroacetate ((18)F-FACE), the pharmacokinetics, biodistribution, radiolabeled metabolites, radiation dosimetry, and pharmacological safety of diagnostic doses of (18)F-FACE were determined in non-human primates. (18)F-FACE was synthesized using a custom-built automated synthesis module. Six rhesus monkeys (three of each sex) were injected intravenously with (18)F-FACE (165.4 ± 28.5 MBq), followed by dynamic positron emission tomography (PET) imaging of the thoracoabdominal area during 0-30 min post-injection and static whole-body PET imaging at 40, 100, and 170 min. Serial blood samples and a urine sample were obtained from each animal to determine the time course of (18)F-FACE and its radiolabeled metabolites. Electrocardiograms and hematology analyses were obtained to evaluate the acute and delayed toxicity of diagnostic dosages of (18)F-FACE. The time-integrated activity coefficients for individual source organs and the whole body after administration of (18)F-FACE were obtained using quantitative analyses of dynamic and static PET images and were extrapolated to humans. The blood clearance of (18)F-FACE exhibited bi-exponential kinetics with half-times of 4 and 250 min for the fast and slow phases, respectively. A rapid accumulation of (18)F-FACE-derived radioactivity was observed in the liver and kidneys, followed by clearance of the radioactivity into the intestine and the urinary bladder. Radio-HPLC analyses of blood and urine samples demonstrated that (18)F-fluoride was the only detectable radiolabeled metabolite at the level of less than 9% of total radioactivity in blood at 180 min after the (18)F-FACE injection. The uptake of free (18)F-fluoride in the bones was insignificant during the course of the imaging studies. No significant changes in ECG, CBC, liver enzymes, or renal function were observed. The estimated effective dose for an adult human is 3.90-7.81 mSv from the administration of 185-370 MBq of (18)F-FACE. The effective dose and individual organ radiation absorbed doses from administration of a diagnostic dosage of (18)F-FACE are acceptable. From a pharmacologic perspective, diagnostic dosages of (18)F-FACE are non-toxic in primates and, therefore, could be safely administered to human patients for PET imaging.
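    The reported bi-exponential blood clearance can be written down directly from the two half-times. The half-times (4 and 250 min) come from the abstract; the phase amplitudes below are hypothetical, since the abstract does not report them.

```python
import numpy as np

def biexp_clearance(t, a_fast, a_slow, t_half_fast=4.0, t_half_slow=250.0):
    """Bi-exponential blood clearance with the reported half-times
    (4 min fast phase, 250 min slow phase); amplitudes are hypothetical."""
    k_fast = np.log(2) / t_half_fast    # rate constant from half-time
    k_slow = np.log(2) / t_half_slow
    return a_fast * np.exp(-k_fast * t) + a_slow * np.exp(-k_slow * t)

# Fraction of the injected-activity concentration remaining at t = 0 min,
# one fast half-time (4 min), and one slow half-time (250 min).
t = np.array([0.0, 4.0, 250.0])
c = biexp_clearance(t, a_fast=0.8, a_slow=0.2)
```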

  11. Identifying ideal brow vector position: empirical analysis of three brow archetypes.

    PubMed

    Hamamoto, Ashley A; Liu, Tiffany W; Wong, Brian J

    2013-02-01

    Surgical browlifts counteract the effects of aging, correct ptosis, and optimize forehead aesthetics. While surgeons have control over brow shape, the metrics defining ideal brow shape are subjective. This study aims to empirically determine whether three expert brow design strategies are aesthetically equivalent, using expert focus group analysis, and to relate these findings to brow surgery. A comprehensive literature search identified three dominant brow design methods (Westmore, Lamas, and Anastasia) that are heavily cited, referenced, or internationally recognized in either the medical literature or the lay media. Using their respective guidelines, brow shape was modified for 10 synthetic female faces, yielding 30 images. A focus group of 50 professional makeup artists ranked the three images for each of the 10 faces to generate ordinal attractiveness scores. The contemporary methods employed by Anastasia and Lamas produce a brow arch located more laterally than Westmore's classic method. Although the more laterally located brow arch is considered the current trend in facial aesthetics, this style was not empirically supported. No single method was consistently rated most or least attractive by the focus group, and no significant difference in attractiveness score between the methods was observed (p = 0.2454). Although each method of brow placement has been promoted as the "best" approach, no single brow design method achieved statistical significance in optimizing attractiveness. Each can be used effectively as a guide in designing eyebrow shape during browlift procedures, making it possible to use the three methods interchangeably. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

  12. Correlation based efficient face recognition and color change detection

    NASA Astrophysics Data System (ADS)

    Elbouz, M.; Alfalou, A.; Brosseau, C.; Alam, M. S.; Qasmi, S.

    2013-01-01

    Identifying the human face via correlation is a topic attracting widespread interest. At the heart of this technique lies the comparison of an unknown target image to a known reference database of images. However, the color information in the target image remains notoriously difficult to interpret. In this paper, we report a new technique which: (i) is robust against illumination change, (ii) offers discrimination ability to detect color change between faces having similar shape, and (iii) is specifically designed to detect red colored stains (i.e. facial bleeding). We adopt the Vanderlugt correlator (VLC) architecture with a segmented phase filter and we decompose the color target image using normalized red, green, and blue (RGB), and hue, saturation, and value (HSV) scales. We propose a new strategy to effectively utilize color information in signatures for further increasing the discrimination ability. The proposed algorithm has been found to be very efficient for discriminating face subjects with different skin colors, and those having color stains in different areas of the facial image.
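A rough digital sketch of the correlation-based matching this record describes (not the authors' VLC implementation): per-channel normalized cross-correlation peaks are averaged so that color, not just shape, must agree. All array sizes and data below are hypothetical.

```python
import numpy as np

def correlation_peak(target, reference):
    """Normalized cross-correlation peak computed via FFT."""
    t = (target - target.mean()) / (target.std() + 1e-12)
    r = (reference - reference.mean()) / (reference.std() + 1e-12)
    corr = np.fft.ifft2(np.fft.fft2(t) * np.conj(np.fft.fft2(r))).real
    return corr.max() / t.size     # 1.0 for a perfect match

def color_correlation_score(target_rgb, reference_rgb):
    """Average the per-channel peaks so that color change is penalized."""
    return np.mean([correlation_peak(target_rgb[..., c], reference_rgb[..., c])
                    for c in range(3)])

rng = np.random.default_rng(0)
face = rng.random((64, 64, 3))                 # stand-in for a face image
same_score = color_correlation_score(face, face)
other_score = color_correlation_score(face, rng.random((64, 64, 3)))
```

A real system would decompose into HSV as well as RGB and use a segmented phase filter, as the record notes; this sketch only illustrates why a correlation peak discriminates matching from non-matching images.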

  13. Research of Face Recognition with Fisher Linear Discriminant

    NASA Astrophysics Data System (ADS)

    Rahim, R.; Afriliansyah, T.; Winata, H.; Nofriansyah, D.; Ratnadewi; Aryza, S.

    2018-01-01

    Face identification systems are developing rapidly, and these developments drive the advancement of biometric-based identification systems with high accuracy. However, developing a face recognition system that achieves high accuracy remains difficult: human faces have diverse expressions and attribute changes such as eyeglasses, mustaches, and beards. Fisher Linear Discriminant (FLD) is a class-specific method that separates facial images into classes while maximizing the distance between classes and minimizing the distance within each class, so as to produce better classification.
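A minimal sketch of the two-class Fisher Linear Discriminant idea described above, with synthetic 2-D points standing in for facial feature vectors (all names and numbers are illustrative, not from the paper):

```python
import numpy as np

def fisher_direction(X1, X2):
    """Fisher Linear Discriminant direction w = Sw^{-1} (m1 - m2):
    maximizes between-class separation relative to within-class scatter."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    S_w = (np.cov(X1, rowvar=False) * (len(X1) - 1)
           + np.cov(X2, rowvar=False) * (len(X2) - 1))
    # Small ridge term keeps the solve stable if S_w is near-singular
    w = np.linalg.solve(S_w + 1e-6 * np.eye(len(m1)), m1 - m2)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(1)
class_a = rng.normal([0.0, 0.0], 0.5, size=(100, 2))   # e.g. "with glasses"
class_b = rng.normal([3.0, 1.0], 0.5, size=(100, 2))   # e.g. "without glasses"
w = fisher_direction(class_a, class_b)

# Classify by projecting onto w and thresholding at the midpoint of the means
threshold = ((class_a @ w).mean() + (class_b @ w).mean()) / 2
accuracy = ((class_a @ w > threshold).mean()
            + (class_b @ w < threshold).mean()) / 2
```

On well-separated classes like these, projection onto the single Fisher direction is enough for near-perfect classification.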

  14. Real-time calibration-free C-scan images of the eye fundus using Master Slave swept source optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Bradu, Adrian; Kapinchev, Konstantin; Barnes, Fred; Garway-Heath, David F.; Rajendram, Ranjan; Keane, Pearce; Podoleanu, Adrian G.

    2015-03-01

    Recently, we introduced a novel Optical Coherence Tomography (OCT) method, termed Master Slave OCT (MS-OCT), specialized for delivering en-face images. This method uses principles of spectral domain interferometry in two stages. MS-OCT operates like a time domain OCT, selecting only signals from a chosen depth while scanning the laser beam across the eye. Time domain OCT allows real-time production of an en-face image, although relatively slowly. As a major advance, the Master Slave method allows collection of signals from any number of depths, as required by the user. However, this tremendous advantage of parallel provision of data from numerous depths cannot be fully exploited with commodity multicore processors alone. We compare here the major improvement in processing and display brought about by using graphics cards. We demonstrate images obtained with a swept source at 100 kHz (which determines an acquisition time Ta = 1.6 s for a frame of 200 × 200 pixels). By the end of the acquired frame being scanned, using our computing capacity, 4 simultaneous en-face images could be created in T = 0.8 s. We demonstrate that by using graphics cards, 32 en-face images can be displayed in Td = 0.3 s. Faster swept source engines can be used with no difference in terms of Td. With 32 images (or more), volumes can be created for 3D display using en-face images, as opposed to the current technology where volumes are created using cross-sectional OCT images.

  15. Fabrication of gallium nitride nanowires by metal-assisted photochemical etching

    NASA Astrophysics Data System (ADS)

    Zhang, Miao-Rong; Jiang, Qing-Mei; Zhang, Shao-Hui; Wang, Zu-Gang; Hou, Fei; Pan, Ge-Bo

    2017-11-01

    Gallium nitride (GaN) nanowires (NWs) were fabricated by metal-assisted photochemical etching (MaPEtch). Gold nanoparticles (AuNPs) serving as the metal catalyst were electrodeposited on the GaN substrate. SEM and HRTEM images show that the surface of the GaN NWs is smooth and clean, without any impurity. SAED and FFT patterns demonstrate that the GaN NWs have a single-crystal structure with a (0002) crystallographic orientation. On the basis of the assumption of localized galvanic cells, combined with the energy levels and electrochemical potentials of the reactants in this etching system, the generation, transfer, and consumption of electron-hole pairs reveal the whole MaPEtch reaction process. Such easily fabricated GaN NWs have great potential for the assembly of GaN-based single-nanowire nanodevices.

  16. Reconstructing Perceived and Retrieved Faces from Activity Patterns in Lateral Parietal Cortex.

    PubMed

    Lee, Hongmi; Kuhl, Brice A

    2016-06-01

    Recent findings suggest that the contents of memory encoding and retrieval can be decoded from the angular gyrus (ANG), a subregion of posterior lateral parietal cortex. However, typical decoding approaches provide little insight into the nature of ANG content representations. Here, we tested whether complex, multidimensional stimuli (faces) could be reconstructed from ANG by predicting underlying face components from fMRI activity patterns in humans. Using an approach inspired by computer vision methods for face recognition, we applied principal component analysis to a large set of face images to generate eigenfaces. We then modeled relationships between eigenface values and patterns of fMRI activity. Activity patterns evoked by individual faces were then used to generate predicted eigenface values, which could be transformed into reconstructions of individual faces. We show that visually perceived faces were reliably reconstructed from activity patterns in occipitotemporal cortex and several lateral parietal subregions, including ANG. Subjective assessment of reconstructed faces revealed specific sources of information (e.g., affect and skin color) that were successfully reconstructed in ANG. Strikingly, we also found that a model trained on ANG activity patterns during face perception was able to successfully reconstruct an independent set of face images that were held in memory. Together, these findings provide compelling evidence that ANG forms complex, stimulus-specific representations that are reflected in activity patterns evoked during perception and remembering. Neuroimaging studies have consistently implicated lateral parietal cortex in episodic remembering, but the functional contributions of lateral parietal cortex to memory remain a topic of debate. Here, we used an innovative form of fMRI pattern analysis to test whether lateral parietal cortex actively represents the contents of memory. 
Using a large set of human face images, we first extracted latent face components (eigenfaces). We then used machine learning algorithms to predict face components from fMRI activity patterns and, ultimately, to reconstruct images of individual faces. We show that activity patterns in a subregion of lateral parietal cortex, the angular gyrus, supported successful reconstruction of perceived and remembered faces, confirming a role for this region in actively representing remembered content. Copyright © 2016 the authors 0270-6474/16/366069-14$15.00/0.
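The eigenface step described above can be sketched on synthetic data. Here the component scores are taken directly from the image rather than predicted from fMRI activity, and all dimensions are hypothetical; the point is only that a face is approximately recovered from a handful of leading principal components.

```python
import numpy as np

# Toy eigenface pipeline: PCA over flattened images, then reconstruction
# of one face from its leading component scores.
rng = np.random.default_rng(2)
faces = rng.random((50, 32 * 32))            # 50 hypothetical flattened images
mean_face = faces.mean(axis=0)
_, _, Vt = np.linalg.svd(faces - mean_face, full_matrices=False)
eigenfaces = Vt[:10]                          # top-10 principal components

target = faces[0]
scores = eigenfaces @ (target - mean_face)    # "eigenface values" for this face
reconstruction = mean_face + scores @ eigenfaces
```

Reconstruction from the truncated basis is always at least as close to the target as the mean face alone, which is what makes predicted eigenface values sufficient to render recognizable faces.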

  17. Real-time direct and diffraction X-ray imaging of irregular silicon wafer breakage.

    PubMed

    Rack, Alexander; Scheel, Mario; Danilewsky, Andreas N

    2016-03-01

    Fracture and breakage of single crystals, particularly of silicon wafers, are multi-scale problems: the crack tip starts propagating on an atomic scale with the breaking of chemical bonds, forms crack fronts through the crystal on the micrometre scale and ends macroscopically in catastrophic wafer shattering. Total wafer breakage is a severe problem for the semiconductor industry, not only during handling but also during temperature treatments, leading to million-dollar costs per annum in a device production line. Knowledge of the relevant dynamics governing perfect cleavage along the {111} or {110} faces, and of the deflection into higher indexed {hkl} faces of higher energy, is scarce due to the high velocity of the process. Imaging techniques are commonly limited to depicting only the state of a wafer before the crack and in the final state. This paper presents, for the first time, in situ high-speed crack propagation under thermal stress, imaged simultaneously in direct transmission and diffraction X-ray imaging. It shows how the propagating crack tip and the related strain field can be tracked in the phase-contrast and diffracted images, respectively. Movies with a time resolution of microseconds per frame reveal that the strain and crack tip do not propagate continuously or at a constant speed. Jumps in the crack tip position indicate pinning of the crack tip for about 1-2 ms followed by jumps faster than 2-6 m s(-1), leading to a macroscopically observed average velocity of 0.028-0.055 m s(-1). The presented results also give a proof of concept that the described X-ray technique is compatible with studying ultra-fast cracks up to the speed of sound.

  18. Discriminative Projection Selection Based Face Image Hashing

    NASA Astrophysics Data System (ADS)

    Karabat, Cagatay; Erdogan, Hakan

    Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.
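A toy sketch of the overall recipe, user-dependent selection of random-projection rows by the Fisher criterion followed by binarization, on synthetic feature vectors. The paper's bimodal Gaussian mixture quantizer is simplified here to a single threshold per row, and all dimensions and data are hypothetical.

```python
import numpy as np

def fisher_scores(P, user_imgs, other_imgs):
    """Fisher criterion per projection row: squared gap between the user's
    and the population's projected means over the pooled projected variance."""
    pu, po = user_imgs @ P.T, other_imgs @ P.T
    return (pu.mean(0) - po.mean(0)) ** 2 / (pu.var(0) + po.var(0) + 1e-12)

def face_hash(vec, P, rows, thresholds):
    """Binarize the most discriminative projections into a hash bit-string."""
    return (vec @ P[rows].T > thresholds).astype(np.uint8)

rng = np.random.default_rng(3)
d = 256                                       # flattened feature length (toy)
P = rng.normal(size=(64, d))                  # random projection matrix
user = rng.normal(0.3, 0.1, size=(20, d))     # enrollment images, one user
others = rng.normal(0.0, 0.1, size=(200, d))  # impostor population
rows = np.argsort(fisher_scores(P, user, others))[-16:]   # user-dependent rows
thresholds = ((user @ P[rows].T).mean(0) + (others @ P[rows].T).mean(0)) / 2
h_same = face_hash(user[0], P, rows, thresholds)
h_other = face_hash(others[0], P, rows, thresholds)
```

Selecting rows per user makes genuine hashes stable (small Hamming distance between the user's images) while impostor hashes land far away, which is the discriminative-projection-selection idea in miniature.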

  19. FaceWarehouse: a 3D facial expression database for visual computing.

    PubMed

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
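The bilinear model described above amounts to contracting the rank-3 core tensor with an identity weight vector and an expression weight vector to produce one face mesh. A minimal sketch with hypothetical dimensions (the real tensor is assembled from the fitted blendshape meshes):

```python
import numpy as np

# Bilinear face model: core tensor (vertices x identity x expression)
# contracted with two attribute vectors yields one mesh.
rng = np.random.default_rng(6)
n_verts, n_id, n_exp = 300, 50, 20
core = rng.normal(size=(n_verts, n_id, n_exp))   # stand-in for the database tensor
w_id = rng.normal(size=n_id)                     # identity attribute weights
w_exp = rng.normal(size=n_exp)                   # expression attribute weights
mesh = np.einsum('vie,i,e->v', core, w_id, w_exp)
```

Fixing `w_id` and varying `w_exp` animates one person through expressions; fixing `w_exp` and varying `w_id` transfers one expression across identities, which is what enables the retargeting applications listed above.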

  20. In situ imaging of single carbohydrate-binding modules on cellulose microfibrils.

    PubMed

    Dagel, Daryl J; Liu, Yu-San; Zhong, Lanlan; Luo, Yonghua; Himmel, Michael E; Xu, Qi; Zeng, Yining; Ding, Shi-You; Smith, Steve

    2011-02-03

    The low efficiency of enzymes used in the bioprocessing of biomass for biofuels is one of the primary bottlenecks that must be overcome to make lignocellulosic biofuels cost-competitive. One of the rate-limiting factors is the accessibility of the cellulase enzymes to insoluble cellulolytic substrates, facilitated by surface absorption of the carbohydrate-binding modules (CBMs), a component of most cellulase systems. Despite their importance, reports of direct observation of CBM function and activity using microscopic methods are still uncommon. Here, we examine the site-specific binding of individual CBMs to crystalline cellulose in an aqueous environment, using the single molecule fluorescence method known as Defocused Orientation and Position Imaging (DOPI). Systematic orientations were observed that are consistent with the CBMs binding to the two opposite hydrophobic faces of the cellulose microfibril, with a well-defined orientation relative to the fiber axis. The approach provides in situ physical evidence indicating the CBMs bind with a well-defined orientation on those planes, thus supporting a binding mechanism driven by chemical and structural recognition of the cellulose surface.

  1. Brain regions sensitive to the face inversion effect: a functional magnetic resonance imaging study in humans.

    PubMed

    Leube, Dirk T; Yoon, Hyo Woon; Rapp, Alexander; Erb, Michael; Grodd, Wolfgang; Bartels, Mathias; Kircher, Tilo T J

    2003-05-22

    Perception of upright faces relies on configural processing. Therefore, recognition of inverted faces is impaired compared to upright faces. In a functional magnetic resonance imaging experiment we investigated the neural correlate of a face inversion task. Thirteen healthy subjects were presented with an equal number of upright and inverted faces, alternating with a low-level baseline consisting of an upright and an inverted picture of an abstract symbol. Brain activation was calculated for upright minus inverted faces. For this differential contrast, we found a signal change in the right superior temporal sulcus and right insula. Configural properties are processed in a network comprising right superior temporal and insular cortex.

  2. En Face Spectral-Domain Optical Coherence Tomography for the Diagnosis and Evaluation of Polypoidal Choroidal Vasculopathy.

    PubMed

    Kokame, Gregg T; Shantha, Jessica G; Hirai, Kelsi; Ayabe, Julia

    2016-08-01

    To evaluate the diagnostic capability of en face spectral-domain optical coherence tomography (SD-OCT) in patients with polypoidal choroidal vasculopathy (PCV) diagnosed by indocyanine green angiography (ICGA). A retrospective, consecutive case series of 100 eyes diagnosed with PCV by ICGA was imaged with en face SD-OCT. Evaluation of the PCV complex on en face SD-OCT assessed the ability to diagnose PCV from the characteristic configuration of the PCV complex, as well as the extent and size of the PCV lesion. The PCV complex was better visualized on ICGA in 45 eyes, on en face SD-OCT in 44 eyes, and equally well in 11 eyes. The extent of the PCV complex was larger on en face SD-OCT in 65 eyes, larger on ICGA in 23 eyes, and equal in size in 12 eyes. En face SD-OCT images the characteristic findings of PCV and provides a noninvasive way to diagnose and treat PCV when ICGA is not available. [Ophthalmic Surg Lasers Imaging Retina. 2016;47:737-744.]. Copyright 2016, SLACK Incorporated.

  3. Do bodily expressions compete with facial expressions? Time course of integration of emotional signals from the face and the body.

    PubMed

    Gu, Yuanyuan; Mai, Xiaoqin; Luo, Yue-jia

    2013-01-01

    The decoding of social signals from nonverbal cues plays a vital role in the social interactions of socially gregarious animals such as humans. Because nonverbal emotional signals from the face and body are normally seen together, it is important to investigate the mechanism underlying the integration of emotional signals from these two sources. We conducted a study in which the time course of the integration of facial and bodily expressions was examined via analysis of event-related potentials (ERPs) while the focus of attention was manipulated. Distinctive integrating features were found during multiple stages of processing. In the first stage, threatening information from the body was extracted automatically and rapidly, as evidenced by enhanced P1 amplitudes when the subjects viewed compound face-body images with fearful bodies compared with happy bodies. In the second stage, incongruency between emotional information from the face and the body was detected and captured by N2. Incongruent compound images elicited larger N2s than did congruent compound images. The focus of attention modulated the third stage of integration. When the subjects' attention was focused on the face, images with congruent emotional signals elicited larger P3s than did images with incongruent signals, suggesting more sustained attention and elaboration of congruent emotional information extracted from the face and body. On the other hand, when the subjects' attention was focused on the body, images with fearful bodies elicited larger P3s than did images with happy bodies, indicating more sustained attention and elaboration of threatening information from the body during evaluative processes.

  4. Do Bodily Expressions Compete with Facial Expressions? Time Course of Integration of Emotional Signals from the Face and the Body

    PubMed Central

    Gu, Yuanyuan; Mai, Xiaoqin; Luo, Yue-jia

    2013-01-01

    The decoding of social signals from nonverbal cues plays a vital role in the social interactions of socially gregarious animals such as humans. Because nonverbal emotional signals from the face and body are normally seen together, it is important to investigate the mechanism underlying the integration of emotional signals from these two sources. We conducted a study in which the time course of the integration of facial and bodily expressions was examined via analysis of event-related potentials (ERPs) while the focus of attention was manipulated. Distinctive integrating features were found during multiple stages of processing. In the first stage, threatening information from the body was extracted automatically and rapidly, as evidenced by enhanced P1 amplitudes when the subjects viewed compound face-body images with fearful bodies compared with happy bodies. In the second stage, incongruency between emotional information from the face and the body was detected and captured by N2. Incongruent compound images elicited larger N2s than did congruent compound images. The focus of attention modulated the third stage of integration. When the subjects' attention was focused on the face, images with congruent emotional signals elicited larger P3s than did images with incongruent signals, suggesting more sustained attention and elaboration of congruent emotional information extracted from the face and body. On the other hand, when the subjects' attention was focused on the body, images with fearful bodies elicited larger P3s than did images with happy bodies, indicating more sustained attention and elaboration of threatening information from the body during evaluative processes. PMID:23935825

  5. The Mental Health Status of Single-Parent Community College Students in California.

    PubMed

    Shenoy, Divya P; Lee, Christine; Trieu, Sang Leng

    2016-01-01

    Single-parenting students face unique challenges that may adversely affect their mental health, which have not been explored in community college settings. The authors conducted secondary analysis of Spring 2013 data from the American College Health Association-National College Health Assessment to examine difficulties facing single-parent community college students and the association between single parenting and negative mental health (depression, self-injury, suicide attempt). Participants were 6,832 California community college students, of whom 309 were single parents. Demographic and mental health data were characterized using univariate descriptive analyses. Bivariate analyses determined whether single parents differed from other students regarding negative mental health or traumatic/difficult events. Finances, family, and relationship difficulties disproportionally affected single parents, who reported nearly twice as many suicide attempts as their counterparts (5.3% vs. 2.7%; p < .0001). Single-parenting students face a higher prevalence of mental health stressors than other community college students.

  6. Multiple Mice Based Collaborative One-to-One Learning

    ERIC Educational Resources Information Center

    Infante, Cristian; Hidalgo, Pedro; Nussbaum, Miguel; Alarcon, Rosa; Gottlieb, Andres

    2009-01-01

    Exchange is a collaborative learning application, originally developed for wirelessly interconnected Pocket PCs, that provides support for students and a teacher performing a face-to-face computer supported collaborative learning (CSCL) activity in a Single Input/Single Display (SISD) mode. We extend the application to support a single display…

  7. Fast periodic stimulation (FPS): a highly effective approach in fMRI brain mapping.

    PubMed

    Gao, Xiaoqing; Gentile, Francesco; Rossion, Bruno

    2018-06-01

    Defining the neural basis of perceptual categorization in a rapidly changing natural environment with low-temporal-resolution methods such as functional magnetic resonance imaging (fMRI) is challenging. Here, we present a novel fast periodic stimulation (FPS)-fMRI approach to define face-selective brain regions with natural images. Human observers are presented with a dynamic stream of widely variable natural object images alternating at a fast rate (6 images/s). Every 9 s, a short burst of variable face images contrasting with object images in pairs induces an objective face-selective neural response at 0.111 Hz. A model-free Fourier analysis achieves a twofold increase in signal-to-noise ratio compared to a conventional block-design approach with identical stimuli and scanning duration, allowing a comprehensive map of face-selective areas in the ventral occipito-temporal cortex, including the anterior temporal lobe (ATL), to be derived in all individual brains. Critically, the periodicity of the desired category contrast and the random variability among widely diverse images effectively eliminate the contribution of low-level visual cues and lead to the highest values (80-90%) of test-retest reliability in the spatial activation map yet reported in imaging higher-level visual functions. FPS-fMRI opens a new avenue for understanding brain function with low-temporal-resolution methods.
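The model-free Fourier analysis of a frequency-tagged response can be illustrated on a synthetic time series: a periodic component at the tagging frequency (one face burst every 9 s, ~0.111 Hz) stands out from noise in the amplitude spectrum. The sampling rate, amplitude, and noise level below are hypothetical.

```python
import numpy as np

fs = 2.0                                   # samples per second (hypothetical)
t = np.arange(0, 540, 1 / fs)              # 9 minutes of signal
f_tag = 1 / 9                              # tagging frequency, ~0.111 Hz
rng = np.random.default_rng(4)
signal = np.sin(2 * np.pi * f_tag * t) + rng.normal(0, 1.0, t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak_freq = freqs[np.argmax(spectrum[1:]) + 1]        # ignore the DC bin
tag_bin = np.argmin(np.abs(freqs - f_tag))
# Amplitude SNR: the tag bin against the mean of neighboring noise bins
noise = np.r_[spectrum[tag_bin - 11:tag_bin - 1],
              spectrum[tag_bin + 2:tag_bin + 12]].mean()
snr = spectrum[tag_bin] / noise
```

Because the recording length is an exact multiple of the 9 s cycle, the tagged response falls on a single frequency bin, which is what makes the periodic design so efficient.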

  8. Age synthesis and estimation via faces: a survey.

    PubMed

    Fu, Yun; Guo, Guodong; Huang, Thomas S

    2010-11-01

    Human age, as an important personal trait, can be directly inferred from distinct patterns emerging in the facial appearance. Driven by rapid advances in computer graphics and machine vision, computer-based age synthesis and estimation via faces have become particularly prevalent topics recently because of their explosively emerging real-world applications, such as forensic art, electronic customer relationship management, security control and surveillance monitoring, biometrics, entertainment, and cosmetology. Age synthesis is defined as rerendering a face image aesthetically with natural aging and rejuvenating effects on the individual face. Age estimation is defined as labeling a face image automatically with the exact age (year) or the age group (year range) of the individual face. Because of their particularity and complexity, both problems are attractive yet challenging to computer-based application system designers. Large efforts from both academia and industry have been devoted to them over the last few decades. In this paper, we survey the complete state of the art in face image-based age synthesis and estimation. Existing models, popular algorithms, system performances, technical difficulties, popular face aging databases, evaluation protocols, and promising future directions are also provided with systematic discussions.

  9. Body image and face image in Asian American and white women: Examining associations with surveillance, construal of self, perfectionism, and sociocultural pressures.

    PubMed

    Frederick, David A; Kelly, Mackenzie C; Latner, Janet D; Sandhu, Gaganjyot; Tsong, Yuying

    2016-03-01

    Asian American women experience sociocultural pressures that could place them at increased risk for experiencing body and face dissatisfaction. Asian American and White women completed measures of appearance evaluation, overweight preoccupation, face satisfaction, face dissatisfaction frequency, perfectionism, surveillance, interdependent and independent self-construal, and perceived sociocultural pressures. In Study 1 (N=182), Asian American women were more likely than White women to report low appearance evaluation (24% vs. 12%; d=-0.50) and to be sometimes-always dissatisfied with the appearance of their eyes (38% vs. 6%; d=0.90) and face overall (59% vs. 34%; d=0.41). In Study 2 (N=488), they were more likely to report low appearance evaluation (36% vs. 23%; d=-0.31) and were less likely to report high eye appearance satisfaction (59% vs. 88%; d=-0.84). The findings highlight the importance of considering ethnic differences when assessing body and face image. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Archetypal-Imaging and Mirror-Gazing

    PubMed Central

    Caputo, Giovanni B.

    2013-01-01

    Mirrors have been studied by cognitive psychology in order to understand self-recognition, self-identity, and self-consciousness. Moreover, the relevance of mirrors in spirituality, magic and arts may also suggest that mirrors can be symbols of unconscious contents. Carl G. Jung investigated mirrors in relation to the unconscious, particularly in Psychology and Alchemy. However, the relationship between conscious behavior in front of a mirror and the unconscious meaning of mirrors has not been clarified. Recently, empirical research found that gazing at one’s own face in the mirror for a few minutes, at a low illumination level, produces the perception of bodily dysmorphic illusions of strange faces. Healthy observers usually describe huge distortions of their own faces, monstrous beings, prototypical faces, faces of relatives and the deceased, and faces of animals. In the psychiatric population, some schizophrenics show a dramatic increase of strange-face illusions. They can also describe the perception of multiple others that fill the mirror surface surrounding their strange face. Schizophrenics are usually convinced that strange-face illusions are truly real and identify themselves with them, unlike healthy individuals, who never identify with the illusions. On the contrary, most patients with major depression do not perceive strange-face illusions, or they perceive very faint changes of their immobile faces in the mirror, like death statues. Strange-face illusions may be the psychodynamic projection of the subject’s unconscious archetypal contents into the mirror image. Therefore, strange-face illusions might provide both an ecological setting and an experimental technique for “imaging of the unconscious”. Directions for future research are proposed. PMID:25379264

  11. Archetypal-imaging and mirror-gazing.

    PubMed

    Caputo, Giovanni B

    2014-03-01

    Mirrors have been studied by cognitive psychology in order to understand self-recognition, self-identity, and self-consciousness. Moreover, the relevance of mirrors in spirituality, magic and arts may also suggest that mirrors can be symbols of unconscious contents. Carl G. Jung investigated mirrors in relation to the unconscious, particularly in Psychology and Alchemy. However, the relationship between conscious behavior in front of a mirror and the unconscious meaning of mirrors has not been clarified. Recently, empirical research found that gazing at one's own face in the mirror for a few minutes, at a low illumination level, produces the perception of bodily dysmorphic illusions of strange faces. Healthy observers usually describe huge distortions of their own faces, monstrous beings, prototypical faces, faces of relatives and the deceased, and faces of animals. In the psychiatric population, some schizophrenics show a dramatic increase of strange-face illusions. They can also describe the perception of multiple others that fill the mirror surface surrounding their strange face. Schizophrenics are usually convinced that strange-face illusions are truly real and identify themselves with them, unlike healthy individuals, who never identify with the illusions. On the contrary, most patients with major depression do not perceive strange-face illusions, or they perceive very faint changes of their immobile faces in the mirror, like death statues. Strange-face illusions may be the psychodynamic projection of the subject's unconscious archetypal contents into the mirror image. Therefore, strange-face illusions might provide both an ecological setting and an experimental technique for "imaging of the unconscious". Directions for future research are proposed.

  12. Foveation: an alternative method to simultaneously preserve privacy and information in face images

    NASA Astrophysics Data System (ADS)

    Alonso, Víctor E.; Enríquez-Caldera, Rogerio; Sucar, Luis Enrique

    2017-03-01

    This paper presents a real-time foveation technique proposed as an alternative method for image obfuscation that simultaneously preserves privacy in face de-identification. The relevance of the proposed technique is discussed through a comparative study of the most common distortion methods for face images and an assessment of the performance and effectiveness of privacy protection. All the techniques presented here are evaluated by passing them through face recognition software. Data utility preservation was evaluated under gender and facial expression classification. Results quantifying the tradeoff between privacy protection and image information preservation at different obfuscation levels are presented. Comparative results using the facial expression subset of the FERET database show that the technique achieves a good tradeoff between privacy and awareness, with a 30% recognition rate and a classification accuracy as high as 88% obtained from the common figures of merit using the privacy-awareness map.
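A minimal sketch of one common form of foveation, assumed here to be blending a sharp image with a blurred copy via a radial mask (full resolution at the fixation point, degraded resolution in the periphery); the authors' exact implementation may differ.

```python
import numpy as np

def box_blur(img, k=5):
    """Crude k x k box blur by averaging shifted copies (no SciPy needed)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    acc = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            acc += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return acc / (k * k)

def foveate(img, center, radius):
    """Blend sharp and blurred copies with a radial weight mask:
    weight 0 (sharp) at the fovea, weight 1 (blurred) in the periphery."""
    yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
    weight = np.clip(np.hypot(yy - center[0], xx - center[1]) / radius, 0, 1)
    return (1 - weight) * img + weight * box_blur(img)

rng = np.random.default_rng(5)
image = rng.random((64, 64))
foveated = foveate(image, center=(32, 32), radius=10)
```

Moving the fovea onto non-identifying regions (or off the eyes) degrades recognition while keeping enough structure for gender or expression classification, which is the privacy/utility tradeoff the record measures.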

  13. Implementation of a high-speed face recognition system that uses an optical parallel correlator.

    PubMed

    Watanabe, Eriko; Kodate, Kashiko

    2005-02-10

    We implement a fully automatic fast face recognition system by using a 1000 frame/s optical parallel correlator designed and assembled by us. The operational speed for the 1:N (i.e., matching one image against N, where N refers to the number of images in the database) identification experiment (4000 face images) amounts to less than 1.5 s, including the preprocessing and postprocessing times. The binary real-only matched filter is devised for the sake of face recognition, and the system is optimized by the false-rejection rate (FRR) and the false-acceptance rate (FAR), according to 300 samples selected by the biometrics guideline. From trial 1:N identification experiments with the optical parallel correlator, we acquired low error rates of 2.6% FRR and 1.3% FAR. Facial images of people wearing thin glasses or heavy makeup that rendered identification difficult were identified with this system.

  14. Multi-volumetric registration and mosaicking using swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Bozic, Ivan; El-Haddad, Mohamed T.; Malone, Joseph D.; Joos, Karen M.; Patel, Shriji N.; Tao, Yuankai K.

    2017-02-01

    Ophthalmic diagnostic imaging using optical coherence tomography (OCT) is limited by bulk eye motions and a fundamental trade-off between field-of-view (FOV) and sampling density. Here, we introduce a novel multi-volumetric registration and mosaicking method using our previously described multimodal swept-source spectrally encoded scanning laser ophthalmoscopy and OCT (SS-SESLO-OCT) system. Our SS-SESLO-OCT acquires an entire en face fundus SESLO image simultaneously with every OCT cross-section at 200 frames per second. In vivo human retinal imaging was performed in a healthy volunteer, and three volumetric datasets were acquired with the volunteer moving freely and refixating between each acquisition. In post-processing, SESLO frames were used to estimate en face rotational and translational motions by registering every frame in all three volumetric datasets to the first frame in the first volume. OCT cross-sections were contrast-normalized and registered axially and rotationally across all volumes. Rotational and translational motions calculated from SESLO frames were applied to corresponding OCT B-scans to compensate for inter- and intra-B-scan bulk motions, and the three registered volumes were combined into a single interpolated multi-volumetric mosaic. Using complementary information from SESLO and OCT over serially acquired volumes, we demonstrated multi-volumetric registration and mosaicking to recover regions of missing data resulting from blinks, saccades, and ocular drifts. We believe our registration method can be directly applied for multi-volumetric motion compensation, averaging, widefield mosaicking, and vascular mapping with potential applications in ophthalmic clinical diagnostics, handheld imaging, and intraoperative guidance.

  15. Visual imagery of famous faces: effects of memory and attention revealed by fMRI.

    PubMed

    Ishai, Alumit; Haxby, James V; Ungerleider, Leslie G

    2002-12-01

    Complex pictorial information can be represented and retrieved from memory as mental visual images. Functional brain imaging studies have shown that visual perception and visual imagery share common neural substrates. The type of memory (short- or long-term) that mediates the generation of mental images, however, has not been addressed previously. The purpose of this study was to investigate the neural correlates underlying imagery generated from short- and long-term memory (STM and LTM). We used famous faces to localize the visual response during perception and to compare the responses during visual imagery generated from STM (subjects memorized specific pictures of celebrities before the imagery task) and imagery from LTM (subjects imagined famous faces without seeing specific pictures during the experimental session). We found that visual perception of famous faces activated the inferior occipital gyri, lateral fusiform gyri, the superior temporal sulcus, and the amygdala. Small subsets of these face-selective regions were activated during imagery. Additionally, visual imagery of famous faces activated a network of regions composed of bilateral calcarine, hippocampus, precuneus, intraparietal sulcus (IPS), and the inferior frontal gyrus (IFG). In all these regions, imagery generated from STM evoked more activation than imagery from LTM. Regardless of memory type, focusing attention on features of the imagined faces (e.g., eyes, lips, or nose) resulted in increased activation in the right IPS and right IFG. Our results suggest differential effects of memory and attention during the generation and maintenance of mental images of faces.

  16. Face identity matching is selectively impaired in developmental prosopagnosia.

    PubMed

    Fisher, Katie; Towler, John; Eimer, Martin

    2017-04-01

    Individuals with developmental prosopagnosia (DP) have severe face recognition deficits, but the mechanisms that are responsible for these deficits have not yet been fully identified. We assessed whether the activation of visual working memory for individual faces is selectively impaired in DP. Twelve DPs and twelve age-matched control participants were tested in a task where they reported whether successively presented faces showed the same or two different individuals, and another task where they judged whether the faces showed the same or different facial expressions. Repetitions versus changes of the other currently irrelevant attribute were varied independently. DPs showed impaired performance in the identity task, but performed at the same level as controls in the expression task. An electrophysiological marker for the activation of visual face memory by identity matches (N250r component) was strongly attenuated in the DP group, and the size of this attenuation was correlated with poor performance in a standardized face recognition test. Results demonstrate an identity-specific deficit of visual face memory in DPs. Their reduced sensitivity to identity matches in the presence of other image changes could result from earlier deficits in the perceptual extraction of image-invariant visual identity cues from face images. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  17. Clustering Millions of Faces by Identity.

    PubMed

    Otto, Charles; Wang, Dayong; Jain, Anil K

    2018-02-01

    Given a large collection of unlabeled face images, we address the problem of clustering faces into an unknown number of identities. This problem is of interest in social media, law enforcement, and other applications, where the number of faces can be of the order of hundreds of millions, while the number of identities (clusters) can range from a few thousand to millions. To address the challenges of run-time complexity and cluster quality, we present an approximate Rank-Order clustering algorithm that performs better than popular clustering algorithms (k-Means and Spectral). Our experiments include clustering up to 123 million face images into over 10 million clusters. Clustering results are analyzed in terms of external (known face labels) and internal (unknown face labels) quality measures, and run-time. Our algorithm achieves an F-measure of 0.87 on the LFW benchmark (13K faces of 5,749 individuals), which drops to 0.27 on the largest dataset considered (13K faces in LFW + 123M distractor images). Additionally, we show that frames in the YouTube benchmark can be clustered with an F-measure of 0.71. An internal per-cluster quality measure is developed to rank individual clusters for manual exploration of high quality clusters that are compact and isolated.
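    The rank-order idea can be illustrated compactly: two faces are merged when each sits high in the other's nearest-neighbour list and their neighbourhoods overlap, with transitive merges collected by union-find. This is a simplified sketch of the distance, not the authors' optimized approximate algorithm, and the parameters `k` and `thresh` are illustrative.

    ```python
    import numpy as np
    from itertools import combinations

    def rank_order_cluster(feats, k=3, thresh=5.0):
        """Simplified rank-order clustering over feature vectors."""
        n = len(feats)
        d = np.linalg.norm(feats[:, None] - feats[None, :], axis=2)
        order = np.argsort(d, axis=1)     # order[a] = a's neighbours, nearest first
        rank = np.argsort(order, axis=1)  # rank[a][b] = position of b in a's list

        parent = list(range(n))           # union-find for transitive merges
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        def asym(a, b):                   # penalise a's neighbours ranked low by b
            return sum(min(rank[b][order[a][i]], k)
                       for i in range(min(rank[a][b], k)))

        for a, b in combinations(range(n), 2):
            if rank[a][b] >= k and rank[b][a] >= k:
                continue                  # prune pairs that are not near neighbours
            dist = (asym(a, b) + asym(b, a)) / max(1, min(rank[a][b], rank[b][a]))
            if dist <= thresh:
                parent[find(a)] = find(b)
        return [find(i) for i in range(n)]
    ```

    The pruning step is what makes the approach scale: only pairs that already appear in each other's short neighbour lists are ever scored.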

  18. Split-face comparison between single-band and dual-band pulsed light technology for treatment of photodamage.

    PubMed

    Varughese, Neal; Keller, Lauren; Goldberg, David J

    2016-08-01

    Intense pulsed light (IPL) has a well-recognized role in the treatment of photodamaged skin. To assess the safety and efficacy of a novel single-band IPL handpiece versus dual-band IPL handpiece in the treatment of photodamage. This was a prospective, single-center split-face study with 20 enrolled participants. Three treatments, 21 days apart, were administered to the subjects and follow-up was performed for 20 weeks. The left side of the face was treated with the single-band handpiece. The right side of the face was treated with the dual-band handpiece. Blinded investigators assessed the subjects' skin texture, pigmented components of photodamage, and presence of telangiectasia both before and after treatment, utilizing a five-point scale. Pigmented components of photodamage, skin texture, and presence of telangiectasias on the left and right side of the face were improved at the end of treatment. At 20-week follow-up, the side treated with single-band handpiece showed improvement in telangiectasia and pigmentation that was statistically superior to the contralateral side treated with the dual-band handpiece. Both devices equally improved textural changes. No adverse effects were noted with either device. Both single-band and dual-band IPL technology are safe and effective in the treatment of photodamaged facial skin. IPL treatment with a single-band handpiece yielded results comparable or superior to dual-band technology.

  19. Perceptual Asymmetries Are Preserved in Memory for Highly Familiar Faces of Self and Friend

    ERIC Educational Resources Information Center

    Brady, N.; Campbell, M.; Flaherty, M.

    2005-01-01

    We investigated the effect of familiarity on people's perception of facial likeness by asking participants to choose which of two mirror-symmetric chimeric images (made from the left or right half of a photograph of a face) looked more like an original image. In separate trials the participants made this judgment for their own face and for the…

  20. The role of the right prefrontal cortex in self-evaluation of the face: a functional magnetic resonance imaging study.

    PubMed

    Morita, Tomoyo; Itakura, Shoji; Saito, Daisuke N; Nakashita, Satoshi; Harada, Tokiko; Kochiyama, Takanori; Sadato, Norihiro

    2008-02-01

    Individuals can experience negative emotions (e.g., embarrassment) accompanying self-evaluation immediately after recognizing their own facial image, especially if it deviates strongly from their mental representation of ideals or standards. The aim of this study was to identify the cortical regions involved in self-recognition and self-evaluation along with self-conscious emotions. To increase the range of emotions accompanying self-evaluation, we used facial feedback images chosen from a video recording, some of which deviated significantly from normal images. In total, 19 participants were asked to rate images of their own face (SELF) and those of others (OTHERS) according to how photogenic they appeared to be. After scanning the images, the participants rated how embarrassed they felt upon viewing each face. As the photogenic scores decreased, the embarrassment ratings dramatically increased for the participant's own face compared with those of others. The SELF versus OTHERS contrast significantly increased the activation of the right prefrontal cortex, bilateral insular cortex, anterior cingulate cortex, and bilateral occipital cortex. Within the right prefrontal cortex, activity in the right precentral gyrus reflected the trait of awareness of observable aspects of the self; this provided strong evidence that the right precentral gyrus is specifically involved in self-face recognition. By contrast, activity in the anterior region, which is located in the right middle inferior frontal gyrus, was modulated by the extent of embarrassment. This finding suggests that the right middle inferior frontal gyrus is engaged in self-evaluation preceded by self-face recognition based on the relevance to a standard self.

  1. Quantitative Analysis of En Face Spectral-Domain Optical Coherence Tomography Imaging in Polypoidal Choroidal Vasculopathy.

    PubMed

    Simonett, Joseph M; Chan, Errol W; Chou, Jonathan; Skondra, Dimitra; Colon, Daniel; Chee, Caroline K; Lingam, Gopal; Fawzi, Amani A

    2017-02-01

    Spectral-domain optical coherence tomography (SD-OCT) imaging can be used to visualize polypoidal choroidal vasculopathy (PCV) lesions in the en face plane. Here, the authors describe a novel lesion quantification technique and compare PCV lesion area measurements and morphology before and after anti-vascular endothelial growth factor (VEGF) treatment. Volumetric SD-OCT scans in eyes with PCV before and after induction anti-VEGF therapy were retrospectively analyzed. En face SD-OCT images were generated and a pixel intensity thresholding process was used to quantify total lesion area. Thirteen eyes with PCV were analyzed. En face SD-OCT PCV lesion area quantification showed good intergrader reliability (intraclass correlation coefficient = 0.944). Total PCV lesion area was significantly reduced after anti-VEGF therapy (2.22 mm² vs. 2.73 mm²; P = .02). The overall geographic pattern of the branching vascular network was typically preserved. PCV lesion area analysis using en face SD-OCT is a reproducible tool that can quantify treatment related changes. [Ophthalmic Surg Lasers Imaging Retina. 2017;48:126-133.]. Copyright 2017, SLACK Incorporated.
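    Pixel-intensity thresholding for area quantification reduces to counting supra-threshold pixels and converting by the pixel scale. A minimal sketch, with a synthetic image and an assumed isotropic pixel size standing in for real en face OCT data:

    ```python
    import numpy as np

    def lesion_area_mm2(en_face, thresh, mm_per_pixel):
        """Count pixels above `thresh` as lesion and convert to mm^2."""
        mask = en_face > thresh
        return mask.sum() * mm_per_pixel ** 2

    img = np.zeros((100, 100))
    img[20:40, 30:60] = 1.0            # synthetic 20 x 30 px "lesion"
    area = lesion_area_mm2(img, thresh=0.5, mm_per_pixel=0.05)
    # 600 pixels * (0.05 mm)^2 = 1.5 mm^2
    ```

    In practice the threshold would be chosen per image (or by a grader), which is why the study reports intergrader reliability.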

  2. Stepped inlet optical panel

    DOEpatents

    Veligdan, James T.

    2001-01-01

    An optical panel includes stacked optical waveguides having stepped inlet facets collectively defining an inlet face for receiving image light, and having beveled outlet faces collectively defining a display screen for displaying the image light channeled through the waveguides by internal reflection.

  3. Face recognition by applying wavelet subband representation and kernel associative memory.

    PubMed

    Zhang, Bai-Ling; Zhang, Haihong; Ge, Shuzhi Sam

    2004-01-01

    In this paper, we propose an efficient face recognition scheme which has two features: 1) representation of face images by two-dimensional (2-D) wavelet subband coefficients and 2) recognition by a modular, personalised classification method based on kernel associative memory models. Compared to PCA projections and low resolution "thumb-nail" image representations, wavelet subband coefficients can efficiently capture substantial facial features while keeping computational complexity low. Because training samples are usually very limited, we constructed an associative memory (AM) model for each person and proposed to improve the performance of AM models by kernel methods. Specifically, we first applied kernel transforms to each possible pair of training face samples and then mapped the high-dimensional feature space back to the input space. Our scheme of using modular autoassociative memory for face recognition is inspired by the same motivation as using autoencoders for optical character recognition (OCR), for which the advantages have been demonstrated. With associative memory, all the prototypical faces of one particular person are used to reconstruct themselves, and the reconstruction error for a probe face image is used to decide whether the probe face is from the corresponding person. We carried out extensive experiments on three standard face recognition datasets, the FERET data, the XM2VTS data, and the ORL data. Detailed comparisons with earlier published results are provided, and our proposed scheme offers better recognition accuracy on all of the face datasets.
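    The recognition-by-reconstruction-error idea can be sketched with a plain linear auto-associative memory per person: each memory projects a probe onto the span of that person's training faces, and the person whose memory reconstructs the probe with the lowest error wins. The wavelet front end and the kernel extension of the paper are omitted here; this is only the skeleton of the decision rule.

    ```python
    import numpy as np

    def train_memory(faces):
        """Linear auto-associative memory for one person: W projects a
        probe onto the span of that person's training face vectors."""
        A = np.asarray(faces, dtype=float).T      # columns are face vectors
        return A @ np.linalg.pinv(A)

    def identify(probe, memories):
        # Lowest reconstruction error decides the identity.
        errors = [np.linalg.norm(probe - W @ probe) for W in memories]
        return int(np.argmin(errors))
    ```

    A probe lying in (or near) one person's face subspace reconstructs almost perfectly under that person's memory and poorly under the others'.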

  4. Event-related potentials of self-face recognition in children with pervasive developmental disorders.

    PubMed

    Gunji, Atsuko; Inagaki, Masumi; Inoue, Yuki; Takeshima, Yasuyuki; Kaga, Makiko

    2009-02-01

    Patients with pervasive developmental disorders (PDD) often have difficulty reading facial expressions and deciphering their implied meaning. We focused on semantic encoding related to face cognition to investigate event-related potentials (ERPs) to the subject's own face and familiar faces in children with and without PDD. Eight children with PDD (seven boys and one girl; aged 10.8+/-2.9 years; one left-handed) and nine age-matched typically developing children (four boys and five girls; aged 11.3+/-2.3 years; one left-handed) participated in this study. The stimuli consisted of three face images (self, familiar, and unfamiliar faces), one scrambled face image, and one object image (e.g., cup) with gray scale. We confirmed three major components: N170 and early posterior negativity (EPN) in the occipito-temporal regions (T5 and T6) and P300 in the parietal region (Pz). An enhanced N170 was observed as a face-specific response in all subjects. However, semantic encoding of each face might be unrelated to N170 because the amplitude and latency were not significantly different among the face conditions. On the other hand, an additional component after N170, EPN which was calculated in each subtracted waveform (self vs. familiar and familiar vs. unfamiliar), indicated self-awareness and familiarity with respect to face cognition in the control adults and children. Furthermore, the P300 amplitude in the control adults was significantly greater in the self-face condition than in the familiar-face condition. However, no significant differences in the EPN and P300 components were observed among the self-, familiar-, and unfamiliar-face conditions in the PDD children. The results suggest a deficit of semantic encoding of faces in children with PDD, which may be implicated in their delay in social communication.

  5. Interactive optical panel

    DOEpatents

    Veligdan, J.T.

    1995-10-03

    An interactive optical panel assembly includes an optical panel having a plurality of ribbon optical waveguides stacked together with opposite ends thereof defining panel first and second faces. A light source provides an image beam to the panel first face for being channeled through the waveguides and emitted from the panel second face in the form of a viewable light image. A remote device produces a response beam over a discrete selection area of the panel second face for being channeled through at least one of the waveguides toward the panel first face. A light sensor is disposed across a plurality of the waveguides for detecting the response beam therein for providing interactive capability. 10 figs.

  6. Learning representative features for facial images based on a modified principal component analysis

    NASA Astrophysics Data System (ADS)

    Averkin, Anton; Potapov, Alexey

    2013-05-01

    The paper is devoted to facial image analysis and particularly deals with the problem of automatic evaluation of the attractiveness of human faces. We propose a new approach for automatic construction of a feature space based on a modified principal component analysis. The algorithm's input is a learning data set of facial images rated by one person. The proposed approach allows one to extract features of an individual's subjective perception of face beauty and to predict attractiveness values for new facial images that were not included in the learning data set. The Pearson correlation coefficient between values predicted by our method for new facial images and personal attractiveness estimation values is 0.89. This means that the new approach is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.
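    The overall pipeline — project face vectors onto principal components, then learn a map from those features to one rater's scores — can be sketched on toy data. The synthetic faces and the ridge-regression step are illustrative assumptions; the paper's specific modification of PCA is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    faces = rng.normal(size=(20, 64))          # 20 flattened "face images"
    faces[:, 0] *= 5.0                         # one dominant mode of variation
    ratings = faces[:, 0] + rng.normal(scale=0.1, size=20)  # one rater's scores

    mean = faces.mean(axis=0)                  # PCA by SVD on centred data
    U, S, Vt = np.linalg.svd(faces - mean, full_matrices=False)
    k = 5
    Z = (faces - mean) @ Vt[:k].T              # top-k PCA features

    lam = 1e-3                                 # ridge map from features to scores
    w = np.linalg.solve(Z.T @ Z + lam * np.eye(k), Z.T @ (ratings - ratings.mean()))

    def predict(face):
        return (face - mean) @ Vt[:k].T @ w + ratings.mean()
    ```

    A new face is scored by centring it, projecting onto the learned components, and applying the linear map — exactly the per-rater personalisation the abstract describes.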

  7. Computer vision research with new imaging technology

    NASA Astrophysics Data System (ADS)

    Hou, Guangqi; Liu, Fei; Sun, Zhenan

    2015-12-01

    Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, recording both intensity values and directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods perform well only on strongly textured surfaces; in textureless or low-texture regions the depth map contains numerous holes and large ambiguities. In this paper, we exploit light field imaging technology for 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. Then the depth map is estimated through the epipolar plane images (EPIs) method. Finally, a high-quality 3D face model is recovered via a fusion strategy. We evaluate the effectiveness and robustness of the approach on face images captured at different poses by a light field camera.

  8. Acute Solar Retinopathy Imaged With Adaptive Optics, Optical Coherence Tomography Angiography, and En Face Optical Coherence Tomography.

    PubMed

    Wu, Chris Y; Jansen, Michael E; Andrade, Jorge; Chui, Toco Y P; Do, Anna T; Rosen, Richard B; Deobhakta, Avnish

    2018-01-01

    Solar retinopathy is a rare form of retinal injury that occurs after direct sungazing. To enhance understanding of the structural changes that occur in solar retinopathy by obtaining high-resolution in vivo en face images. Case report of a young adult woman who presented to the New York Eye and Ear Infirmary with symptoms of acute solar retinopathy after viewing the solar eclipse on August 21, 2017. Results of comprehensive ophthalmic examination and images obtained by fundus photography, microperimetry, spectral-domain optical coherence tomography (OCT), adaptive optics scanning light ophthalmoscopy, OCT angiography, and en face OCT. The patient was examined after viewing the solar eclipse. Visual acuity was 20/20 OD and 20/25 OS. The patient was left-eye dominant. Spectral-domain OCT images were consistent with mild and severe acute solar retinopathy in the right and left eye, respectively. Microperimetry was normal in the right eye but showed paracentral decreased retinal sensitivity in the left eye with a central absolute scotoma. Adaptive optics images of the right eye showed a small region of nonwaveguiding photoreceptors, while images of the left eye showed a large area of abnormal and nonwaveguiding photoreceptors. Optical coherence tomography angiography images were normal in both eyes. En face OCT images of the right eye showed a small circular hyperreflective area, with central hyporeflectivity in the outer retina of the right eye. The left eye showed a hyperreflective lesion that intensified in area from inner to middle retina and became mostly hyporeflective in the outer retina. The shape of the lesion on adaptive optics and en face OCT images of the left eye corresponded to the shape of the scotoma drawn by the patient on an Amsler grid. Acute solar retinopathy can present with foveal cone photoreceptor mosaic disturbances on adaptive optics scanning light ophthalmoscopy imaging. Corresponding reflectivity changes can be seen on en face OCT, especially in the middle and outer retina. Young adults may be especially vulnerable and need to be better informed of the risks of viewing the sun with inadequate protective eyewear.

  9. False match elimination for face recognition based on SIFT algorithm

    NASA Astrophysics Data System (ADS)

    Gu, Xuyuan; Shi, Ping; Shao, Meide

    2011-06-01

    The SIFT (Scale Invariant Feature Transform) is a well-known algorithm used to detect and describe local features in images. It is invariant to image scale and rotation and robust to noise and illumination changes. In this paper, a novel method for face recognition based on SIFT is proposed that combines SIFT optimization, mutual matching, and Progressive Sample Consensus (PROSAC), and can effectively eliminate false matches in face recognition. Experiments on the ORL face database show that many false matches are eliminated and a better recognition rate is achieved.
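    The mutual-matching stage is easy to isolate: a candidate pair survives only if each descriptor is the other's nearest neighbour. The sketch below implements just that cross-check on raw descriptor arrays; the SIFT extraction and the PROSAC geometric verification from the paper's full pipeline are omitted.

    ```python
    import numpy as np

    def mutual_matches(desc_a, desc_b):
        """Keep pair (i, j) only if j is i's nearest neighbour in B
        *and* i is j's nearest neighbour in A (cross-check)."""
        d = np.linalg.norm(desc_a[:, None] - desc_b[None, :], axis=2)
        a2b = d.argmin(axis=1)            # best match in B for each A descriptor
        b2a = d.argmin(axis=0)            # best match in A for each B descriptor
        return [(i, j) for i, j in enumerate(a2b) if b2a[j] == i]
    ```

    One-directional nearest-neighbour matching would pair every A descriptor with something; the cross-check discards the asymmetric (and typically false) matches before any geometric consistency test.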

  10. The different faces of one’s self: an fMRI study into the recognition of current and past self-facial appearances

    PubMed Central

    Apps, Matthew A. J.; Tajadura-Jiménez, Ana; Turley, Grainne; Tsakiris, Manos

    2013-01-01

    Mirror self-recognition is often considered as an index of self-awareness. Neuroimaging studies have identified a neural circuit specialised for the recognition of one’s own current facial appearance. However, faces change considerably over a lifespan, highlighting the necessity for representations of one’s face to continually be updated. We used fMRI to investigate the different neural circuits involved in the recognition of the childhood and current, adult, faces of one’s self. Participants viewed images of either their own face as it currently looks morphed with the face of a familiar other or their childhood face morphed with the childhood face of the familiar other. Activity in areas which have a generalised selectivity for faces, including the inferior occipital gyrus, the superior parietal lobule and the inferior temporal gyrus, varied with the amount of current self in an image. Activity in areas involved in memory encoding and retrieval, including the hippocampus and the posterior cingulate gyrus, and areas involved in creating a sense of body ownership, including the temporo-parietal junction and the inferior parietal lobule, varied with the amount of childhood self in an image. We suggest that the recognition of one’s own past or present face is underpinned by different cognitive processes in distinct neural circuits. Current self-recognition engages areas involved in perceptual face processing, whereas childhood self-recognition recruits networks involved in body ownership and memory processing. PMID:22940117

  11. The different faces of one's self: an fMRI study into the recognition of current and past self-facial appearances.

    PubMed

    Apps, Matthew A J; Tajadura-Jiménez, Ana; Turley, Grainne; Tsakiris, Manos

    2012-11-15

    Mirror self-recognition is often considered as an index of self-awareness. Neuroimaging studies have identified a neural circuit specialised for the recognition of one's own current facial appearance. However, faces change considerably over a lifespan, highlighting the necessity for representations of one's face to continually be updated. We used fMRI to investigate the different neural circuits involved in the recognition of the childhood and current, adult, faces of one's self. Participants viewed images of either their own face as it currently looks morphed with the face of a familiar other or their childhood face morphed with the childhood face of the familiar other. Activity in areas which have a generalised selectivity for faces, including the inferior occipital gyrus, the superior parietal lobule and the inferior temporal gyrus, varied with the amount of current self in an image. Activity in areas involved in memory encoding and retrieval, including the hippocampus and the posterior cingulate gyrus, and areas involved in creating a sense of body ownership, including the temporo-parietal junction and the inferior parietal lobule, varied with the amount of childhood self in an image. We suggest that the recognition of one's own past or present face is underpinned by different cognitive processes in distinct neural circuits. Current self-recognition engages areas involved in perceptual face processing, whereas childhood self-recognition recruits networks involved in body ownership and memory processing. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. Wavelet-based associative memory

    NASA Astrophysics Data System (ADS)

    Jones, Katharine J.

    2004-04-01

    Faces provide important characteristics for identifying a person. In security checks, face recognition remains in continuous use despite the availability of other approaches (e.g., fingerprints, voice recognition, pupil contraction, DNA scanners). With an associative memory, the output data are recalled directly from the input data. This can be achieved with a Nonlinear Holographic Associative Memory (NHAM). This approach can also distinguish between strongly correlated images and images that are partially or totally enclosed by others. Adaptive wavelet lifting has been used for content-based image retrieval. In this paper, adaptive wavelet lifting is applied to face recognition to achieve an associative memory.

  13. Cross sectional: perception of children from public and private schools regarding the esthetic impact of different types of face masks.

    PubMed

    Pithon, Matheus Melo; Ferraz, Caio Sousa; de Oliveira, Gabriel Couto; Dos Santos, Adrielle Mangabeira; Couto, Felipe Santos; da Silva Coqueiro, Raildo; Dos Santos, Rogério Lacerda

    2013-01-01

    The purpose was to evaluate the esthetic perception among children from public and private schools regarding the use of different types of face masks. Six different types of orthopedic face masks made from images of the same patient were evaluated. Initially, the images were standardized with the help of Adobe Photoshop software. The variable considered was type of mask: (A) Delaire with facebow; (B) Petit; (C) Delaire; (D) Turley; (E) Hickham; and (F) Sky Hook. The images were printed on photographic paper and incorporated into a specific personalized questionnaire that was distributed to 7- to 10-year-olds attending public and private schools (n=120). The data obtained were compared via chi-square, Fisher's exact, Mann-Whitney, and Spearman tests. The proportion of participants who chose image A as the best was significantly higher (P<.05) compared to the other masks. Images B and F were chosen as the worst, without a significant difference between them (P>.05). Mean scores were not significantly correlated between private and public schoolchildren (r=0.32) or between boys and girls (r=0.41). The Delaire face mask with facebow was chosen as the most attractive, and the Petit and Sky Hook face masks were voted the least attractive.

  14. Aligning Arrays of Lenses and Single-Mode Optical Fibers

    NASA Technical Reports Server (NTRS)

    Liu, Duncan

    2004-01-01

    A procedure now under development is intended to enable the precise alignment of sheet arrays of microscopic lenses with the end faces of a coherent bundle of as many as 1,000 single-mode optical fibers packed closely in a regular array (see Figure 1). In the original application that prompted this development, the precise assembly of lenses and optical fibers serves as a single-mode spatial filter for a visible-light nulling interferometer. The precision of alignment must be sufficient to limit any remaining wavefront error to a root-mean-square value of less than 1/10 of a wavelength of light. This wavefront-error limit translates to requirements to (1) ensure uniformity of both the lens and fiber arrays, (2) ensure that the lateral distance between the central axis of each lens and that of the corresponding optical fiber is no more than a fraction of a micron, (3) angularly align the lens-sheet planes and the fiber-bundle end faces to within a few arc seconds, and (4) axially align the lenses and the fiber-bundle end faces to within tens of microns of the focal distance. Figure 2 depicts the apparatus used in the alignment procedure. The beam of light from a Zygo (or equivalent) interferometer is first compressed by a ratio of 20:1 so that upon its return to the interferometer, the beam will be magnified enough to enable measurement of wavefront quality. The apparatus includes relay lenses that enable imaging of the arrays of microscopic lenses in a charge-coupled-device (CCD) camera that is part of the interferometer. One of the arrays of microscopic lenses is mounted on a 6-axis stage, in proximity to the front face of the bundle of optical fibers. The bundle is mounted on a separate stage. A mirror is attached to the back face of the bundle of optical fibers for retroreflection of light. When a microscopic lens and a fiber are aligned with each other, the affected portion of the light is reflected back by the mirror, recollimated by the microscopic lens, transmitted through the relay lenses and the beam compressor/expander, then split so that half goes to a detector and half to the interferometer. The output of the detector is used as a feedback control signal for the six-axis stage to effect alignment.

  15. Locating faces in color photographs using neural networks

    NASA Astrophysics Data System (ADS)

    Brown, Joe R.; Talley, Jim

    1994-03-01

    This paper summarizes a research effort in finding the locations and sizes of faces in color images (photographs, video stills, etc.) if, in fact, faces are present. Scenarios for using such a system include serving as a means of localizing skin for automatic color balancing during photo processing, or serving as a front end, in a customs port-of-entry context, for a system that identifies personae non gratae given a database of known faces. The approach presented here is a hybrid system including a neural preprocessor, some conventional image-processing steps, and a neural classifier as the final face/non-face discriminator. Neither the training (17,655 faces) nor the test (1,829 faces) imagery databases were constrained in content or quality. The results for the pilot system are reported along with a discussion of improvements to the current system.

  16. Distance Metric Learning Using Privileged Information for Face Verification and Person Re-Identification.

    PubMed

    Xu, Xinxing; Li, Wen; Xu, Dong

    2015-12-01

    In this paper, we propose a new approach to improve face verification and person re-identification in RGB images by leveraging a set of RGB-D data, in which we have additional depth images in the training data captured using depth cameras such as Kinect. In particular, we extract visual features and depth features from the RGB images and depth images, respectively. As the depth features are available only in the training data, we treat them as privileged information, and we formulate this task as a distance metric learning with privileged information problem. Unlike traditional face verification and person re-identification methods that use only visual features, we additionally employ the depth features in the training data to improve the learning of the distance metric. Based on the information-theoretic metric learning (ITML) method, we propose a new formulation called ITML with privileged information (ITML+) for this task. We also present an efficient algorithm based on the cyclic projection method for solving the proposed ITML+ formulation. Extensive experiments on the challenging face data sets EURECOM and CurtinFaces for face verification, as well as the BIWI RGBD-ID data set for person re-identification, demonstrate the effectiveness of our proposed approach.
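
    At test time, a metric-learning verifier of this kind reduces to thresholding a learned Mahalanobis distance between feature vectors. A minimal sketch, with a toy diagonal matrix standing in for the learned metric (the ITML+ solver and the privileged depth features are not modeled here):

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared learned distance d_M(x, y) = (x - y)^T M (x - y)."""
    d = x - y
    return float(d @ M @ d)

def same_identity(x, y, M, threshold):
    """Verification decision: accept the pair when the distance is small."""
    return mahalanobis_sq(x, y, M) < threshold

# Toy diagonal metric standing in for a learned ITML+ solution.
M = np.diag([2.0, 0.5, 1.0])
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
print(mahalanobis_sq(a, b, M))  # 2.0*1 + 0.5*1 + 0 = 2.5
```

    The privileged depth features would shape M during training only; at test time the distance is computed from visual features alone, exactly as above.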

  17. Sparse Feature Extraction for Pose-Tolerant Face Recognition.

    PubMed

    Abiantun, Ramzi; Prabhu, Utsav; Savvides, Marios

    2014-10-01

    Automatic face recognition performance has been steadily improving over years of research; however, it remains significantly affected by factors such as illumination, pose, expression, and resolution, all of which can impact matching scores. The focus of this paper is the pose problem, which remains largely overlooked in most real-world applications. Specifically, we focus on one-to-one matching scenarios where a query face image of arbitrary pose is matched against a set of gallery images. We propose a method that relies on two fundamental components: (a) a 3D modeling step to geometrically correct the viewpoint of the face; for this purpose, we extend a recent technique for efficient synthesis of 3D face models called the 3D Generic Elastic Model; and (b) a sparse feature extraction step using subspace modeling and ℓ1-minimization to induce pose tolerance in coefficient space. This in turn enables the synthesis of an equivalent frontal-looking face, which can be used for recognition. We show significant improvements in verification rates compared to commercial matchers, and also demonstrate the resilience of the proposed method to degrading input quality. We find that the proposed technique is able to match non-frontal images to other non-frontal images of varying angles.

  18. Faces forming traces: neurophysiological correlates of learning naturally distinctive and caricatured faces.

    PubMed

    Schulz, Claudia; Kaufmann, Jürgen M; Kurt, Alexander; Schweinberger, Stefan R

    2012-10-15

    Distinctive faces are easier to learn and recognise than typical faces. We investigated effects of natural vs. artificial distinctiveness on performance and neural correlates of face learning. Spatial caricatures of initially non-distinctive faces were created such that their rated distinctiveness matched a set of naturally distinctive faces. During learning, we presented naturally distinctive, caricatured, and non-distinctive faces for later recognition among novel faces, using different images of the same identities at learning and test. For learned faces, an advantage in performance was observed for naturally distinctive and caricatured over non-distinctive faces, with larger benefits for naturally distinctive faces. Distinctive and caricatured faces elicited more negative occipitotemporal ERPs (P200, N250) and larger centroparietal positivity (LPC) during learning. At test, earliest distinctiveness effects were again seen in the P200. In line with recent research, N250 and LPC were larger for learned than for novel faces overall. Importantly, whereas left hemispheric N250 was increased for learned naturally distinctive faces, right hemispheric N250 responded particularly to caricatured novel faces. We conclude that natural distinctiveness induces benefits to face recognition beyond those induced by exaggeration of a face's idiosyncratic shape, and that the left hemisphere in particular may mediate recognition across different images. Copyright © 2012 Elsevier Inc. All rights reserved.

  19. Fabrication technique for a custom face mask for the treatment of obstructive sleep apnea.

    PubMed

    Prehn, Ronald S; Colquitt, Tom

    2016-05-01

    The development of the positive airway pressure custom mask (TAP-PAP CM) has changed the treatment of obstructive sleep apnea. The TAP-PAP CM is used in continuous positive airway pressure (CPAP) therapy and is fabricated from an impression of the face. The mask is then connected to a post screwed into the mechanism of the TAP 3 (Thornton Adjustable Positioner) oral appliance. This strapless CPAP face mask provides an efficient and stable CPAP interface with mandibular stabilization (hybrid therapy). A technique with a 2-stage polyvinyl siloxane face impression is described that offers improvements over the established single-stage face impression. The 2-stage technique eliminates problems inherent in the single-stage impression, including voids, compressed tissue, inadequate borders, and a rushed experience due to the setting time of the single stage. The result is a custom mask with an improved seal to the CPAP device. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  20. Implementation of an RBF neural network on embedded systems: real-time face tracking and identity verification.

    PubMed

    Yang, Fan; Paindavoine, M

    2003-01-01

    This paper describes a real-time vision system that localizes faces in video sequences and verifies their identity. These processes are image-processing techniques based on the radial basis function (RBF) neural network approach. The robustness of the system has been evaluated quantitatively on eight video sequences. We have adapted our model for a face recognition application using the Olivetti Research Laboratory (ORL), Cambridge, UK, database so as to compare its performance against other systems. We also describe three hardware implementations of our model on embedded systems based on a field-programmable gate array (FPGA), zero-instruction-set computer (ZISC) chips, and the digital signal processor (DSP) TMS320C62, respectively. We analyze the algorithm complexity and present hardware implementation results in terms of resources used and processing speed. The success rates of face tracking and identity verification are 92% (FPGA), 85% (ZISC), and 98.2% (DSP), respectively. For the three embedded systems, the processing speeds for 288 × 352 images are 14 images/s, 25 images/s, and 4.8 images/s, respectively.
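
    An RBF network of the kind described can be sketched as a layer of Gaussian prototype units followed by a linear read-out trained by least squares. The toy two-class data below is illustrative, not the paper's face features:

```python
import numpy as np

def rbf_design(X, centers, width):
    """Hidden-layer activations: one Gaussian unit per prototype center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(1)
# Two well-separated toy classes standing in for face / non-face feature vectors.
X = np.vstack([rng.normal([0, 0], 0.3, size=(50, 2)),
               rng.normal([2, 2], 0.3, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)

centers = np.array([[0.0, 0.0], [2.0, 2.0]])           # one prototype per class
H = rbf_design(X, centers, width=0.5)
W, *_ = np.linalg.lstsq(H, np.eye(2)[y], rcond=None)   # linear read-out weights
pred = (H @ W).argmax(axis=1)
print((pred == y).mean())
```

    The appeal for embedded hardware is visible in the structure: inference is one distance computation per prototype plus a small matrix product, which maps naturally onto FPGA, ZISC, or DSP targets.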

  1. Expertise with unfamiliar objects is flexible to changes in task but not changes in class

    PubMed Central

    Tangen, Jason M.

    2017-01-01

    Perceptual expertise is notoriously specific and bound by familiarity; generalizing to novel or unfamiliar images, objects, identities, and categories often comes at some cost to performance. In forensic and security settings, however, examiners are faced with the task of discriminating unfamiliar images of unfamiliar objects within their general domain of expertise (e.g., fingerprints, faces, or firearms). The job of a fingerprint expert, for instance, is to decide whether two unfamiliar fingerprint images were left by the same unfamiliar finger (e.g., Smith’s left thumb), or two different unfamiliar fingers (e.g., Smith and Jones’s left thumb). Little is known about the limits of this kind of perceptual expertise. Here, we examine fingerprint experts’ and novices’ ability to distinguish fingerprints compared to inverted faces in two different tasks. Inverted face images serve as an ideal comparison because they vary naturally between and within identities, as do fingerprints, and people tend to be less accurate or more novice-like at distinguishing faces when they are presented in an inverted or unfamiliar orientation. In Experiment 1, fingerprint experts outperformed novices in locating categorical fingerprint outliers (i.e., a loop pattern in an array of whorls), but not inverted face outliers (i.e., an inverted male face in an array of inverted female faces). In Experiment 2, fingerprint experts were more accurate than novices at discriminating matching and mismatching fingerprints that were presented very briefly, but not so for inverted faces. Our data show that perceptual expertise with fingerprints can be flexible to changing task demands, but there can also be abrupt limits: fingerprint expertise did not generalize to an unfamiliar class of stimuli. We interpret these findings as evidence that perceptual expertise with unfamiliar objects is highly constrained by one’s experience. PMID:28574998

  2. Black optic display

    DOEpatents

    Veligdan, James T.

    1997-01-01

    An optical display includes a plurality of stacked optical waveguides having first and second opposite ends collectively defining an image input face and an image screen, respectively, with the screen being oblique to the input face. Each of the waveguides includes a transparent core bound by a cladding layer having a lower index of refraction for effecting internal reflection of image light transmitted into the input face to project an image on the screen, with each of the cladding layers including a cladding cap integrally joined thereto at the waveguide second ends. Each of the cores is beveled at the waveguide second end so that the cladding cap is viewable through the transparent core. Each of the cladding caps is black for absorbing external ambient light incident upon the screen for improving contrast of the image projected internally on the screen.

  3. Self-compassion in the face of shame and body image dissatisfaction: implications for eating disorders.

    PubMed

    Ferreira, Cláudia; Pinto-Gouveia, José; Duarte, Cristiana

    2013-04-01

    The current study examines the role of self-compassion in the face of shame and body image dissatisfaction in 102 female patients with eating disorders and 123 women from the general population. Self-compassion was negatively associated with external shame, general psychopathology, and eating disorder symptomatology. In women from the general population, increased external shame predicted drive for thinness partially through lower self-compassion; body image dissatisfaction also directly predicted drive for thinness. In the patient sample, however, increased shame and body image dissatisfaction predicted increased drive for thinness through decreased self-compassion. These results highlight the importance of the affiliative emotion dimensions of self-compassion in the face of external shame, body image dissatisfaction, and drive for thinness, emphasising the relevance of cultivating a self-compassionate relationship in patients with eating disorders. Copyright © 2013. Published by Elsevier Ltd.

  4. Image fusion pitfalls for cranial radiosurgery.

    PubMed

    Jonker, Benjamin P

    2013-01-01

    Stereotactic radiosurgery requires imaging to define both the stereotactic space in which the treatment is delivered and the target itself. Image fusion is the process of using rotation and translation to bring a second image set into alignment with the first image set. This allows the potential concurrent use of multiple image sets to define the target and stereotactic space. While a single magnetic resonance imaging (MRI) sequence alone can be used for delineation of the target and fiducials, there may be significant advantages to using additional imaging sets, including other MRI sequences, computed tomography (CT) scans, and advanced imaging sets such as catheter-based angiography, diffusion tensor imaging-based fiber tracking, and positron emission tomography, in order to more accurately define the target and surrounding critical structures. Stereotactic space is usually defined by detection of fiducials on the stereotactic head frame or mask system. Unfortunately, MRI sequences are susceptible to geometric distortion, whereas CT scans do not face this problem (although they have poorer resolution of the target in most cases). Thus image fusion allows the definition of stereotactic space to proceed from the geometrically accurate CT images while using MRI to define the target. The use of image fusion is associated with a risk of error introduced by inaccuracies of the fusion process, as well as workflow changes that, if not properly accounted for, can mislead the treating clinician. The purpose of this review is to describe the uses of image fusion in stereotactic radiosurgery as well as its potential pitfalls.

  5. A SPAD-based 3D imager with in-pixel TDC for 145ps-accuracy ToF measurement

    NASA Astrophysics Data System (ADS)

    Vornicu, I.; Carmona-Galán, R.; Rodríguez-Vázquez, Á.

    2015-03-01

    The design and measurements of a CMOS 64 × 64 single-photon avalanche diode (SPAD) array with in-pixel time-to-digital converter (TDC) are presented. This paper thoroughly describes the imager at the architectural and circuit levels, with particular emphasis on the characterization of the SPAD-detector ensemble. It is aimed at 2D imaging and 3D image reconstruction in low-light environments. It has been fabricated in a standard 0.18 μm CMOS process, i.e., without high-voltage or low-noise features. In these circumstances, we face a high number of dark counts and low photon detection efficiency. Several techniques have been applied to ensure proper functionality, namely: (i) a time-gated SPAD front end with a fast active-quenching/recharge circuit featuring tunable dead time; (ii) a reverse start-stop scheme; (iii) programmable time resolution of the TDC based on a novel pseudo-differential voltage-controlled ring oscillator with fast start-up; and (iv) a global calibration scheme against temperature and process variation. Measurement results on individual SPAD-TDC ensemble jitter, array uniformity, and time-resolution programmability are also provided.
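
    The 145 ps TDC accuracy in the title maps directly onto a depth quantization step, since in a direct time-of-flight measurement the photon covers the target range twice:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_depth_step(tdc_lsb_seconds):
    """Depth quantization step of a direct ToF pixel: the photon travels the
    measured range twice, so range resolution is c * dt / 2."""
    return C * tdc_lsb_seconds / 2.0

print(tof_depth_step(145e-12))  # ~0.0217 m: about 2.2 cm per 145 ps TDC bin
```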

  6. High-Fidelity Microstructural Characterization and Performance Modeling of Aluminized Composite Propellant

    DOE PAGES

    Kosiba, Graham D.; Wixom, Ryan R.; Oehlschlaeger, Matthew A.

    2017-10-27

    Image processing and stereological techniques were used to characterize the heterogeneity of composite propellant and inform a predictive burn rate model. Composite propellant samples made up of ammonium perchlorate (AP), hydroxyl-terminated polybutadiene (HTPB), and aluminum (Al) were faced with an ion mill and imaged with a scanning electron microscope (SEM) and x-ray tomography (micro-CT). Properties of both the bulk and individual components of the composite propellant were determined from a variety of image processing tools. An algebraic model, based on the improved Beckstead-Derr-Price model developed by Cohen and Strand, was used to predict the steady-state burning of the aluminized composite propellant. In the presented model the presence of aluminum particles within the propellant was introduced. The thermal effects of aluminum particles are accounted for at the solid-gas propellant surface interface and aluminum combustion is considered in the gas phase using a single global reaction. In conclusion, properties derived from image processing were used directly as model inputs, leading to a sample-specific predictive combustion model.

  7. High-Fidelity Microstructural Characterization and Performance Modeling of Aluminized Composite Propellant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosiba, Graham D.; Wixom, Ryan R.; Oehlschlaeger, Matthew A.

    Image processing and stereological techniques were used to characterize the heterogeneity of composite propellant and inform a predictive burn rate model. Composite propellant samples made up of ammonium perchlorate (AP), hydroxyl-terminated polybutadiene (HTPB), and aluminum (Al) were faced with an ion mill and imaged with a scanning electron microscope (SEM) and x-ray tomography (micro-CT). Properties of both the bulk and individual components of the composite propellant were determined from a variety of image processing tools. An algebraic model, based on the improved Beckstead-Derr-Price model developed by Cohen and Strand, was used to predict the steady-state burning of the aluminized composite propellant. In the presented model the presence of aluminum particles within the propellant was introduced. The thermal effects of aluminum particles are accounted for at the solid-gas propellant surface interface and aluminum combustion is considered in the gas phase using a single global reaction. In conclusion, properties derived from image processing were used directly as model inputs, leading to a sample-specific predictive combustion model.

  8. [Modeling continuous scaling of NDVI based on fractal theory].

    PubMed

    Luan, Hai-Jun; Tian, Qing-Jiu; Yu, Tao; Hu, Xin-Li; Huang, Yan; Du, Ling-Tong; Zhao, Li-Min; Wei, Xi; Han, Jie; Zhang, Zhou-Wei; Li, Shao-Peng

    2013-07-01

    Scale effect is one of the important scientific problems in remote sensing. The scale effect in quantitative remote sensing can be used to study the relationship between retrievals from images of different resolutions, and its study has become an effective way to confront challenges such as the validation of quantitative remote sensing products. Traditional up-scaling methods cannot describe how retrievals change across an entire series of scales; moreover, they face serious parameter-correction issues (e.g., geometric and spectral correction) because imaging parameters vary between sensors. Using a single-sensor image, fractal methodology was applied to address these problems. Taking NDVI (computed from land-surface radiance) as an example and based on an Enhanced Thematic Mapper Plus (ETM+) image, a scheme is proposed to model the continuous scaling of retrievals. The experimental results indicate that: (1) a scale effect exists for NDVI and can be described by a fractal model of continuous scaling; and (2) the fractal method is suitable for the validation of NDVI. These results show that fractal analysis is an effective methodology for studying scaling in quantitative remote sensing.
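
    The scale effect on NDVI can be demonstrated in a few lines: because NDVI is a nonlinear ratio of bands, averaging fine-scale NDVI differs from computing NDVI on averaged bands. A small numpy sketch with toy radiance values (not ETM+ data):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red radiance."""
    return (nir - red) / (nir + red)

def block_mean(img, f):
    """Aggregate to a coarser grid by f-by-f block averaging (up-scaling)."""
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

# Toy 2x2 radiance grids (illustrative values only).
nir = np.array([[0.5, 0.4], [0.3, 0.6]])
red = np.array([[0.1, 0.1], [0.1, 0.2]])

coarse_of_fine = block_mean(ndvi(nir, red), 2)                 # average the fine NDVI
fine_of_coarse = ndvi(block_mean(nir, 2), block_mean(red, 2))  # NDVI of averaged bands
print(coarse_of_fine[0, 0], fine_of_coarse[0, 0])  # the two disagree: scale effect
```

    The fractal model in the paper describes how this discrepancy evolves continuously with the aggregation factor rather than at a few fixed resolutions.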

  9. Expectations about person identity modulate the face-sensitive N170.

    PubMed

    Johnston, Patrick; Overell, Anne; Kaufman, Jordy; Robinson, Jonathan; Young, Andrew W

    2016-12-01

    Identifying familiar faces is a fundamentally important aspect of social perception that requires the ability to assign very different (ambient) images of a face to a common identity. The current consensus is that the brain processes face identity at approximately 250-300 msec following stimulus onset, as indexed by the N250 event related potential. However, using two experiments we show compelling evidence that where experimental paradigms induce expectations about person identity, changes in famous face identity are in fact detected at an earlier latency corresponding to the face-sensitive N170. In Experiment 1, using a rapid periodic stimulation paradigm presenting highly variable ambient images, we demonstrate robust effects of low frequency, periodic face-identity changes in N170 amplitude. In Experiment 2, we added infrequent aperiodic identity changes to show that the N170 was larger to both infrequent periodic and infrequent aperiodic identity changes than to high frequency identities. Our use of ambient stimulus images makes it unlikely that these effects are due to adaptation of low-level stimulus features. In line with current ideas about predictive coding, we therefore suggest that when expectations about the identity of a face exist, the visual system is capable of detecting identity mismatches at a latency consistent with the N170. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Multi-Directional Multi-Level Dual-Cross Patterns for Robust Face Recognition.

    PubMed

    Ding, Changxing; Choi, Jonghyun; Tao, Dacheng; Davis, Larry S

    2016-03-01

    To perform unconstrained face recognition robust to variations in illumination, pose and expression, this paper presents a new scheme to extract "Multi-Directional Multi-Level Dual-Cross Patterns" (MDML-DCPs) from face images. Specifically, the MDML-DCPs scheme exploits the first derivative of Gaussian operator to reduce the impact of differences in illumination and then computes the DCP feature at both the holistic and component levels. DCP is a novel face image descriptor inspired by the unique textural structure of human faces. It is computationally efficient and only doubles the cost of computing local binary patterns, yet is extremely robust to pose and expression variations. MDML-DCPs comprehensively yet efficiently encodes the invariant characteristics of a face image from multiple levels into patterns that are highly discriminative of inter-personal differences but robust to intra-personal variations. Experimental results on the FERET, CAS-PEAL-R1, FRGC 2.0, and LFW databases indicate that DCP outperforms the state-of-the-art local descriptors (e.g., LBP, LTP, LPQ, POEM, tLBP, and LGXP) for both face identification and face verification tasks. More impressively, the best performance is achieved on the challenging LFW and FRGC 2.0 databases by deploying MDML-DCPs in a simple recognition scheme.
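
    As a point of reference for the stated cost, here is a minimal 8-neighbour local binary pattern, the baseline DCP is compared against (this is not the DCP descriptor itself):

```python
import numpy as np

def lbp_8(img):
    """Basic 8-neighbour local binary pattern over the interior pixels."""
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]  # shifted neighbour view
        code |= (nb >= c).astype(np.uint8) << np.uint8(bit)
    return code

img = np.array([[5, 5, 5],
                [5, 4, 5],
                [5, 5, 5]], dtype=np.int32)
print(lbp_8(img))  # every neighbour >= the centre, so all 8 bits set: [[255]]
```

    DCP replaces the single ring of comparisons with two cross-shaped sampling patterns, which is where the roughly doubled cost comes from.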

  11. Reference frames for spatial frequency in face representation differ in the temporal visual cortex and amygdala.

    PubMed

    Inagaki, Mikio; Fujita, Ichiro

    2011-07-13

    Social communication in nonhuman primates and humans is strongly affected by facial information from other individuals. Many cortical and subcortical brain areas are known to be involved in processing facial information. However, how the neural representation of faces differs across different brain areas remains unclear. Here, we demonstrate that the reference frame for spatial frequency (SF) tuning of face-responsive neurons differs in the temporal visual cortex and amygdala in monkeys. Consistent with psychophysical properties for face recognition, temporal cortex neurons were tuned to image-based SFs (cycles/image) and showed viewing distance-invariant representation of face patterns. On the other hand, many amygdala neurons were influenced by retina-based SFs (cycles/degree), a characteristic that is useful for social distance computation. The two brain areas also differed in the luminance contrast sensitivity of face-responsive neurons; amygdala neurons sharply reduced their responses to low luminance contrast images, while temporal cortex neurons maintained the level of their responses. From these results, we conclude that different types of visual processing in the temporal visual cortex and the amygdala contribute to the construction of the neural representations of faces.
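
    The two reference frames are related purely by viewing geometry: retina-based frequency is image-based frequency divided by the visual angle the image subtends. A small sketch with a hypothetical face size and viewing distances:

```python
import math

def cycles_per_degree(cycles_per_image, image_width_m, viewing_distance_m):
    """Convert image-based SF (cycles/image) to retina-based SF (cycles/degree)."""
    subtended = 2.0 * math.degrees(math.atan(image_width_m / (2.0 * viewing_distance_m)))
    return cycles_per_image / subtended

# Hypothetical 0.2 m wide face image seen from 0.5 m and from 1.0 m.
near = cycles_per_degree(8, 0.2, 0.5)   # same image content...
far = cycles_per_degree(8, 0.2, 1.0)    # ...but higher cycles/degree when farther
```

    Doubling the viewing distance roughly doubles the retinal spatial frequency of the same face image, which is why image-based tuning supports distance-invariant recognition while retina-based tuning is informative about social distance.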

  12. The detection of geothermal areas from Skylab thermal data

    NASA Technical Reports Server (NTRS)

    Siegal, B. S.; Kahle, A. B.; Goetz, A. F. H.; Gillespie, A. R.; Abrams, M. J.; Pohn, H. A.

    1975-01-01

    Skylab-4 X-5 thermal data of The Geysers area were analyzed to determine the feasibility of using midday Skylab images to detect geothermal areas. The hottest ground areas indicated in the Skylab image corresponded to south-facing barren or sparsely vegetated slopes. A geothermal area approximately 15 by 30 m coincided with one of the hottest areas indicated by Skylab. This area could not be unambiguously distinguished from the other areas, which are believed to be hotter than their surroundings as a result of their topography and micrometeorological conditions. A simple modification of a previous thermal model was performed, and the temperatures predicted for the hottest slopes using representative values were in general agreement with the observed data. It is concluded that data from a single midday Skylab pass cannot be used to locate geothermal areas.

  13. Intraretinal Correlates of Reticular Pseudodrusen Revealed by Autofluorescence and En Face OCT.

    PubMed

    Paavo, Maarjaliis; Lee, Winston; Merriam, John; Bearelly, Srilaxmi; Tsang, Stephen; Chang, Stanley; Sparrow, Janet R

    2017-09-01

    We sought to determine whether information revealed from the reflectance, autofluorescence, and absorption properties of RPE cells situated posterior to reticular pseudodrusen (RPD) could provide insight into the origins and structure of RPD. RPD were studied qualitatively by near-infrared fundus autofluorescence (NIR-AF), short-wavelength fundus autofluorescence (SW-AF), and infrared reflectance (IR-R) images, and the presentation was compared to horizontal and en face spectral domain optical coherence tomographic (SD-OCT) images. Images were acquired from 23 patients (39 eyes) diagnosed with RPD (mean age 80.7 ± 7.1 [SD]; 16 female; 4 Hispanics, 19 non-Hispanic whites). In SW-AF, NIR-AF, and IR-R images, fundus RPD were recognized as interlacing networks of small scale variations in IR-R and fluorescence (SW-AF, NIR-AF) intensities. Darkened foci of RPD colocalized in SW-AF and NIR-AF images, and in SD-OCT images corresponded to disturbances of the interdigitation (IZ) and ellipsoid (EZ) zones and to more pronounced hyperreflective lesions traversing photoreceptor-attributable bands in SD-OCT images. Qualitative assessment of the outer nuclear layer (ONL) revealed thinning as RPD extended radially from the outer to inner retina. In en face OCT, hyperreflective areas in the EZ band correlated topographically with hyporeflective foci at the level of the RPE. The hyperreflective lesions corresponding to RPD in SD-OCT scans are likely indicative of degenerating photoreceptor cells. The darkened foci at positions of RPD in NIR-AF and en face OCT images indicate changes in the RPE monolayer with the reduced NIR-AF and en face OCT signal suggesting a reduction in melanin that could be accounted for by RPE thinning.

  14. Effects of Orientation on Recognition of Facial Affect

    NASA Technical Reports Server (NTRS)

    Cohen, M. M.; Mealey, J. B.; Hargens, Alan R. (Technical Monitor)

    1997-01-01

    The ability to discriminate facial features is often degraded when the orientation of the face and/or the observer is altered. Previous studies have shown that gross distortions of facial features can go unrecognized when the image of the face is inverted, as exemplified by the 'Margaret Thatcher' effect. This study examines how quickly erect and supine observers can distinguish between smiling and frowning faces that are presented at various orientations. The effects of orientation are of particular interest in space, where astronauts frequently view one another in orientations other than the upright. Sixteen observers viewed individual facial images of six people on a computer screen; on a given trial, the image was either smiling or frowning. Each image was viewed when it was erect and when it was rotated (rolled) by 45, 90, 135, 180, 225, and 270 degrees about the line of sight. The observers were required to respond as rapidly and accurately as possible to identify whether the face presented was smiling or frowning. Measures of reaction time were obtained when the observers were both upright and supine. Analyses of variance revealed that mean reaction time, which increased with stimulus rotation (F=18.54, df=7/15, p<0.001), was 22% longer when the faces were inverted than when they were erect, but that the orientation of the observer had no significant effect on reaction time (F=1.07, df=1/15, p>.30). These data strongly suggest that the orientation of the image of a face on the observer's retina, but not its orientation with respect to gravity, is important in identifying the expression on the face.

  15. Sub-word image clustering in Farsi printed books

    NASA Astrophysics Data System (ADS)

    Soheili, Mohammad Reza; Kabir, Ehsanollah; Stricker, Didier

    2015-02-01

    Most OCR systems are designed for the recognition of a single page. With unfamiliar typefaces, low-quality paper, and degraded prints, the performance of these systems drops sharply. However, an OCR system can exploit the redundancy of word occurrences in large documents to improve recognition results. In this paper, we propose a sub-word image clustering method for applications dealing with large printed documents. We assume that the whole document is printed in a single unknown font with low print quality. Our proposed method finds clusters of equivalent sub-word images with an incremental algorithm. Because of the low print quality, we propose an image matching algorithm for measuring the distance between two sub-word images, based on the Hamming distance and the ratio of the area to the perimeter of the connected components. We built a ground-truth dataset of more than 111,000 sub-word images to evaluate our method. All of these images were extracted from an old Farsi book. We cluster all of these sub-words, including isolated letters and even punctuation marks. Then all centers of the created clusters are labeled manually. We show that all sub-words of the book can be recognized with more than 99.7% accuracy by assigning the label of each cluster center to all of its members.
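
    The described distance measure, combining a Hamming distance with an area-to-perimeter shape cue, can be roughed out as follows; the exact weighting and the perimeter definition used here are illustrative assumptions, not the paper's:

```python
import numpy as np

def shape_ratio(img):
    """Area-to-perimeter proxy: foreground pixels over 4-connected boundary pixels."""
    area = int(img.sum())
    interior = (img[1:-1, 1:-1] & img[:-2, 1:-1] & img[2:, 1:-1]
                & img[1:-1, :-2] & img[1:-1, 2:])
    boundary = area - int(interior.sum())
    return area / max(boundary, 1)

def subword_distance(a, b, alpha=0.5):
    """Normalized Hamming distance plus a weighted shape-ratio difference."""
    hamming = np.count_nonzero(a != b) / a.size
    return hamming + alpha * abs(shape_ratio(a) - shape_ratio(b))

a = np.zeros((5, 5), dtype=np.uint8); a[1:4, 1:4] = 1   # a 3x3 glyph blob
b = a.copy(); b[0, 0] = 1                                # the same glyph, one noisy pixel
print(subword_distance(a, a), subword_distance(a, b))
```

    In an incremental clustering loop, a new sub-word image joins the first cluster whose center falls within a distance threshold, or founds a new cluster otherwise.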

  16. Efficient space-time sampling with pixel-wise coded exposure for high-speed imaging.

    PubMed

    Liu, Dengyu; Gu, Jinwei; Hitomi, Yasunobu; Gupta, Mohit; Mitsunaga, Tomoo; Nayar, Shree K

    2014-02-01

    Cameras face a fundamental trade-off between spatial and temporal resolution. Digital still cameras can capture images with high spatial resolution, but most high-speed video cameras have relatively low spatial resolution. It is hard to overcome this trade-off without incurring a significant increase in hardware costs. In this paper, we propose techniques for sampling, representing, and reconstructing the space-time volume to overcome this trade-off. Our approach has two important distinctions compared to previous works: 1) We achieve sparse representation of videos by learning an overcomplete dictionary on video patches, and 2) we adhere to practical hardware constraints on sampling schemes imposed by architectures of current image sensors, which means that our sampling function can be implemented on CMOS image sensors with modified control units in the future. We evaluate components of our approach, sampling function and sparse representation, by comparing them to several existing approaches. We also implement a prototype imaging system with pixel-wise coded exposure control using a liquid crystal on silicon device. System characteristics such as field of view and modulation transfer function are evaluated for our imaging system. Both simulations and experiments on a wide range of scenes show that our method can effectively reconstruct a video from a single coded image while maintaining high spatial resolution.
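
    The capture side of pixel-wise coded exposure can be simulated as a per-pixel binary shutter mask applied to the space-time volume and integrated along time. A toy-sized sketch (the LCoS hardware and the dictionary-based reconstruction are not modeled):

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 4, 4                       # frames, height, width of the volume
video = rng.random((T, H, W))           # stand-in for the true space-time scene

# Each pixel opens its shutter once, for a 2-frame bump at a random start time.
start = rng.integers(0, T - 1, size=(H, W))
mask = np.zeros((T, H, W))
for yy in range(H):
    for xx in range(W):
        mask[start[yy, xx]:start[yy, xx] + 2, yy, xx] = 1.0

coded = (mask * video).sum(axis=0)      # the single captured coded image
print(coded.shape)
```

    Recovering the full T-frame volume from this one coded image is then posed as sparse reconstruction over a learned overcomplete dictionary of video patches.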

  17. Single Black Working Mothers' Perceptions: The Journey to Achieve Leadership Positions

    ERIC Educational Resources Information Center

    Raglin, Sherrell

    2017-01-01

    Single Black working mothers faced significant challenges in achieving high-level or senior-level leadership positions. The purpose of this qualitative narrative study was to collect, analyze and code the stories told by 10 participants to understand the perceptions and insights of the challenges and barriers single Black working mothers faced in…

  18. Retinal optical coherence tomography at 1 μm with dynamic focus control and axial motion tracking

    NASA Astrophysics Data System (ADS)

    Cua, Michelle; Lee, Sujin; Miao, Dongkai; Ju, Myeong Jin; Mackenzie, Paul J.; Jian, Yifan; Sarunic, Marinko V.

    2016-02-01

    High-resolution optical coherence tomography (OCT) retinal imaging is important to noninvasively visualize the various retinal structures to aid in better understanding of the pathogenesis of vision-robbing diseases. However, conventional OCT systems have a trade-off between lateral resolution and depth-of-focus. In this report, we present the development of a focus-stacking OCT system with automatic focus optimization for high-resolution, extended-focal-range clinical retinal imaging by incorporating a variable-focus liquid lens into the sample arm optics. Retinal layer tracking and selection was performed using a graphics processing unit accelerated processing platform for focus optimization, providing real-time layer-specific en face visualization. After optimization, multiple volumes focused at different depths were acquired, registered, and stitched together to yield a single, high-resolution focus-stacked dataset. Using this system, we show high-resolution images of the retina and optic nerve head, from which we extracted clinically relevant parameters such as the nerve fiber layer thickness and lamina cribrosa microarchitecture.

  19. Human fatigue expression recognition through image-based dynamic multi-information and bimodal deep learning

    NASA Astrophysics Data System (ADS)

    Zhao, Lei; Wang, Zengcai; Wang, Xiaojin; Qi, Yazhou; Liu, Qing; Zhang, Guoxin

    2016-09-01

    Human fatigue is an important cause of traffic accidents. To improve the safety of transportation, we propose, in this paper, a framework for fatigue expression recognition using image-based facial dynamic multi-information and a bimodal deep neural network. First, the landmark of the face region and the texture of the eye region, which complement each other in fatigue expression recognition, are extracted from facial image sequences captured by a single camera. Then, two stacked autoencoder neural networks are trained for landmark and texture, respectively. Finally, the two trained neural networks are combined by learning a joint layer on top of them to construct a bimodal deep neural network. The model can be used to extract a unified representation that fuses the landmark and texture modalities together and classify fatigue expressions accurately. The proposed system is tested on a human fatigue dataset obtained from an actual driving environment. The experimental results demonstrate that the proposed method performs stably and robustly, and that the average accuracy reaches 96.2%.

  20. Retinal optical coherence tomography at 1 μm with dynamic focus control and axial motion tracking.

    PubMed

    Cua, Michelle; Lee, Sujin; Miao, Dongkai; Ju, Myeong Jin; Mackenzie, Paul J; Jian, Yifan; Sarunic, Marinko V

    2016-02-01

    High-resolution optical coherence tomography (OCT) retinal imaging is important to noninvasively visualize the various retinal structures to aid in better understanding of the pathogenesis of vision-robbing diseases. However, conventional OCT systems have a trade-off between lateral resolution and depth-of-focus. In this report, we present the development of a focus-stacking OCT system with automatic focus optimization for high-resolution, extended-focal-range clinical retinal imaging by incorporating a variable-focus liquid lens into the sample arm optics. Retinal layer tracking and selection was performed using a graphics processing unit accelerated processing platform for focus optimization, providing real-time layer-specific en face visualization. After optimization, multiple volumes focused at different depths were acquired, registered, and stitched together to yield a single, high-resolution focus-stacked dataset. Using this system, we show high-resolution images of the retina and optic nerve head, from which we extracted clinically relevant parameters such as the nerve fiber layer thickness and lamina cribrosa microarchitecture.

  1. Imaging assessment of penetrating injury of the neck and face.

    PubMed

    Offiah, Curtis; Hall, Edward

    2012-10-01

    Penetrating trauma of the neck and face is a frequent presentation to acute emergency, trauma and critical care units. There remains a steady incidence of gunshot penetrating injury to the neck and face as well as non-missile penetrating injury, which is largely, but not solely, knife-related. Optimal imaging assessment of such injuries therefore remains an ongoing requirement of the general and specialised radiologist. The anatomy of the neck and face, in particular the vascular, pharyngo-oesophageal, laryngo-tracheal and neural anatomy, demands a more specialised and selective management plan incorporating specific imaging techniques. The treatment protocol for injuries of the neck and face has seen a radical shift away from expectant surgical exploration, largely as a result of advances in the diagnostic capabilities of multi-detector computed tomography angiography (MDCTA), which is now the first-line imaging modality of choice in such cases. This review aims to highlight ballistic considerations and the differing imaging modalities, including MDCTA, that might be utilised to assist in the accurate assessment of these injuries, as well as the specific radiological features and patterns of organ-system injuries that should be considered and communicated to surgical and critical care teams. TEACHING POINTS: • MDCTA is the first-line imaging modality in penetrating trauma of the neck and, often, of the face • The inherent deformability of a bullet is a significant factor in its tissue-damaging capabilities • MDCTA can provide accurate assessment of visceral injury of the neck as well as vascular injury • Penetrating facial trauma warrants radiological assessment of key adjacent anatomical structures • In-driven fragments of native bone potentiate tissue damage in projectile penetrating facial trauma.

  2. Human face processing is tuned to sexual age preferences

    PubMed Central

    Ponseti, J.; Granert, O.; van Eimeren, T.; Jansen, O.; Wolff, S.; Beier, K.; Deuschl, G.; Bosinski, H.; Siebner, H.

    2014-01-01

    Human faces can motivate nurturing behaviour or sexual behaviour when adults see a child or an adult face, respectively. This suggests that face processing is tuned to detecting age cues of sexual maturity to stimulate the appropriate reproductive behaviour: either caretaking or mating. In paedophilia, sexual attraction is directed to sexually immature children. Therefore, we hypothesized that brain networks that normally are tuned to mature faces of the preferred gender show an abnormal tuning to sexually immature faces in paedophilia. Here, we use functional magnetic resonance imaging (fMRI) to test directly for the existence of a network which is tuned to face cues of sexual maturity. During fMRI, participants sexually attracted to either adults or children were exposed to various face images. In individuals attracted to adults, adult faces activated several brain regions significantly more than child faces. These brain regions comprised areas known to be implicated in face processing and sexual processing, including occipital areas, the ventrolateral prefrontal cortex and, subcortically, the putamen and nucleus caudatus. The same regions were activated in paedophiles, but with a reversed preferential response pattern. PMID:24850896

  3. Adaptation effects to attractiveness of face photographs and art portraits are domain-specific

    PubMed Central

    Hayn-Leichsenring, Gregor U.; Kloth, Nadine; Schweinberger, Stefan R.; Redies, Christoph

    2013-01-01

    We studied the neural coding of facial attractiveness by investigating effects of adaptation to attractive and unattractive human faces on the perceived attractiveness of veridical human face pictures (Experiment 1) and art portraits (Experiment 2). Experiment 1 revealed a clear pattern of contrastive aftereffects. Relative to a pre-adaptation baseline, the perceived attractiveness of faces was increased after adaptation to unattractive faces, and was decreased after adaptation to attractive faces. Experiment 2 revealed similar aftereffects when art portraits rather than face photographs were used as adaptors and test stimuli, suggesting that effects of adaptation to attractiveness are not restricted to facial photographs. Additionally, we found similar aftereffects in art portraits for beauty, another aesthetic feature that, unlike attractiveness, relates to the properties of the image (rather than to the face displayed). Importantly, Experiment 3 showed that aftereffects were abolished when adaptors were art portraits and face photographs were test stimuli. These results suggest that adaptation to facial attractiveness elicits aftereffects in the perception of subsequently presented faces, for both face photographs and art portraits, and that these effects do not cross image domains. PMID:24349690

  4. Tensor discriminant color space for face recognition.

    PubMed

    Wang, Su-Jing; Yang, Jian; Zhang, Na; Zhou, Chun-Guang

    2011-09-01

    Recent research efforts reveal that color may provide useful information for face recognition. For different visual tasks, the choice of a color space is generally different. How can a color space be sought for the specific face recognition problem? To address this problem, this paper represents a color image as a third-order tensor and presents the tensor discriminant color space (TDCS) model. The model can keep the underlying spatial structure of color images. With the definition of n-mode between-class scatter matrices and within-class scatter matrices, TDCS constructs an iterative procedure to obtain one color space transformation matrix and two discriminant projection matrices by maximizing the ratio of these two scatter matrices. The experiments are conducted on two color face databases, the AR and Georgia Tech face databases, and the results show that both the performance and the efficiency of the proposed method are better than those of the state-of-the-art color image discriminant model, which involves one color space transformation matrix and one discriminant projection matrix, especially on a complicated face database with various pose variations.

  5. Adaptive 3D Face Reconstruction from Unconstrained Photo Collections.

    PubMed

    Roth, Joseph; Tong, Yiying; Liu, Xiaoming

    2016-12-07

    Given a photo collection of "unconstrained" face images of one individual captured under a variety of unknown pose, expression, and illumination conditions, this paper presents a method for reconstructing a 3D face surface model of the individual along with albedo information. Unlike prior work on face reconstruction that requires large photo collections, we formulate an approach to adapt to photo collections with a high diversity in both the number of images and the image quality. To achieve this, we incorporate prior knowledge about face shape by fitting a 3D morphable model to form a personalized template, followed by a novel photometric stereo formulation to complete the fine details, under a coarse-to-fine scheme. Our scheme incorporates a structural similarity-based local selection step to help identify a common expression for reconstruction while discarding occluded portions of faces. Reconstruction performance is evaluated through a novel quality measure, in the absence of ground truth 3D scans. Superior large-scale experimental results are reported on synthetic, Internet, and personal photo collections.

  6. Prospective treatment plan-specific action limits for real-time intrafractional monitoring in surface image guided radiosurgery.

    PubMed

    Yock, Adam D; Pawlicki, Todd; Kim, Gwe-Ya

    2016-07-01

    In surface image guided radiosurgery, action limits are created to determine at what point intrafractional motion exhibited by the patient is large enough to warrant intervention. Action limit values remain constant across patients despite the fact that patient motion affects the target coverage of brain metastases differently depending on the planning technique and other treatment plan-specific factors. The purpose of this work was twofold: first, to characterize the sensitivity of single-met per iso and multimet per iso treatment plans to uncorrected patient motion; second, to describe a method to prospectively determine treatment plan-specific action limits considering this sensitivity. In our surface image guided radiosurgery technique, patient positioning is achieved with a thermoplastic mask that does not cover the patient's face. The patient's exposed face is imaged by a stereoscopic photogrammetry system, compared to a reference surface, and monitored throughout treatment. Seventy-two brain metastases (representing 29 patients) were used for this study. Twenty-five mets were treated individually ("single-met per iso plans"), and 47 were treated in a plan simultaneously with at least one other met ("multimet per iso plans"). For each met, the proportion of the gross tumor volume that remained within the 100% prescription isodose line was estimated under the influence of combinations of translations and rotations (0.0-3.0 mm and 0.0°-3.0°, respectively). The target volume and the prescription dose-volume were considered concentric spheres that each encompassed a volume determined from the treatment plan. Plan-specific contour plots and DVHs were created to illustrate the sensitivity of a specific lesion to uncorrected patient motion. Both single-met per iso and multimet per iso plans exhibited compromised target coverage under translations and rotations, though multimet per iso plans were considerably more sensitive to these transformations (2.3% and 39.8%, respectively). Plan-specific contour plots and DVHs were used to illustrate how size, distance from isocenter, and planning technique affect a particular met's sensitivity to motion. Stereotactic radiosurgery treatment plans that treat multiple brain metastases using a common isocenter are particularly susceptible to compromised target coverage as a result of uncorrected patient motion. The use of such a planning technique, along with other treatment plan-specific factors, should influence patient motion management. A graphical representation of the effect of translations and rotations on any particular plan can be generated to inform clinicians of the appropriate action limit when monitoring intrafractional motion.
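    The concentric-sphere coverage model in this abstract lends itself to a small worked example: the fraction of a spherical target (radius r) remaining inside the prescription isodose sphere (radius R) after a translation of its center by d. The geometry below (target radius, margin, shift) is invented for illustration, not taken from the paper's plans.

```python
# Hedged toy model of the abstract's concentric-sphere coverage estimate.
import math

def covered_fraction(r, R, d):
    """Volume fraction of the r-sphere inside the R-sphere at center distance d."""
    if d <= R - r:          # target entirely inside the dose sphere
        return 1.0
    if d >= R + r:          # spheres disjoint
        return 0.0
    # Standard sphere-sphere intersection (lens) volume.
    lens = (math.pi * (R + r - d) ** 2
            * (d * d + 2 * d * r - 3 * r * r + 2 * d * R + 6 * r * R - 3 * R * R)
            / (12 * d))
    return lens / ((4.0 / 3.0) * math.pi * r ** 3)

# A 5 mm radius target with a 1 mm prescription margin (R = 6 mm):
# a shift within the margin keeps full coverage; a 3 mm shift loses part of it.
full = covered_fraction(5.0, 6.0, 0.5)
shifted = covered_fraction(5.0, 6.0, 3.0)
```

    Evaluating this fraction over a grid of translations and rotations is what produces the plan-specific contour plots the abstract describes.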

  7. Facial recognition from volume-rendered magnetic resonance imaging data.

    PubMed

    Prior, Fred W; Brunsden, Barry; Hildebolt, Charles; Nolan, Tracy S; Pringle, Michael; Vaishnavi, S Neil; Larson-Prior, Linda J

    2009-01-01

    Three-dimensional (3-D) reconstructions of computed tomography (CT) and magnetic resonance (MR) brain imaging studies are a routine component of both clinical practice and clinical and translational research. A side effect of such reconstructions is the creation of a potentially recognizable face. The Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy Rule requires that individually identifiable health information may not be used for research unless identifiers that may be associated with the health information including "Full face photographic images and other comparable images ..." are removed (de-identification). Thus, a key question is: Are reconstructed facial images comparable to full-face photographs for the purpose of identification? To address this question, MR images were selected from existing research repositories and subjects were asked to pair an MR reconstruction with one of 40 photographs. The chance probability that an observer could match a photograph with its 3-D MR image was 1 in 40 (0.025), and we considered 4 successes out of 40 (4/40, 0.1) to indicate that a subject could identify persons' faces from their 3-D MR images. Forty percent of the subjects were able to successfully match photographs with MR images with success rates higher than the null hypothesis success rate. The Blyth-Still-Casella 95% confidence interval for the 40% success rate was 29%-52%, and the 40% success rate was significantly higher (P < 0.001) than our null hypothesis success rate of 1 in 10 (0.10).
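    The 1-in-40 chance model above can be checked with a short binomial calculation; this reproduces the abstract's null model for a purely guessing subject, not the paper's statistics code.

```python
# How likely is a guessing subject (chance 1/40 per trial) to reach the
# 4-of-40 "identification" criterion used in the study?
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_guess = binom_tail(40, 1 / 40, 4)  # probability of 4 or more lucky matches
```

    The tail probability is below 2%, so reaching 4 of 40 by guessing alone is rare, which supports using it as an identification criterion.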

  8. Electrophysiological evidence for separation between human face and non-face object processing only in the right hemisphere.

    PubMed

    Niina, Megumi; Okamura, Jun-ya; Wang, Gang

    2015-10-01

    Scalp event-related potential (ERP) studies have demonstrated larger N170 amplitudes when subjects view faces compared to items from object categories. Extensive attempts have been made to clarify face selectivity and hemispheric dominance for face processing. The purpose of this study was to investigate hemispheric differences in N170s activated by human faces and non-face objects, as well as the extent of overlap of their sources. ERP was recorded from 20 subjects while they viewed human face and non-face images. N170s obtained during the presentation of human faces appeared earlier and with larger amplitude than for other category images. Further source analysis with a two-dipole model revealed that the locations of face and object processing largely overlapped in the left hemisphere. Conversely, the source for face processing in the right hemisphere was located more anteriorly than the source for object processing. The results suggest that the neuronal circuits for face and object processing are largely shared in the left hemisphere, with more distinct circuits in the right hemisphere. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Strength and coherence of binocular rivalry depends on shared stimulus complexity.

    PubMed

    Alais, David; Melcher, David

    2007-01-01

    Presenting incompatible images to the eyes results in alternations of conscious perception, a phenomenon known as binocular rivalry. We examined rivalry using either simple stimuli (oriented gratings) or coherent visual objects (faces, houses, etc.). Two rivalry characteristics were measured: depth of rivalry suppression and coherence of alternations. Rivalry between coherent visual objects exhibits deep suppression and coherent rivalry, whereas rivalry between gratings exhibits shallow suppression and piecemeal rivalry. Interestingly, rivalry between a simple and a complex stimulus displays the same characteristics (shallow and piecemeal) as rivalry between two simple stimuli. Thus, complex stimuli fail to rival globally unless the fellow stimulus is also global. We also conducted a face adaptation experiment. Adaptation to rivaling faces improved subsequent face discrimination (as expected), but adaptation to a rivaling face/grating pair did not. To explain this, we suggest rivalry must be an early and local process (at least initially), instigated by the failure of binocular fusion, which can then become globally organized by feedback from higher-level areas when both rivalry stimuli are global, so that rivalry tends to oscillate coherently. These globally assembled images then flow through object processing areas, with the dominant image gaining in relative strength in a form of 'biased competition', therefore accounting for the deeper suppression of global images. In contrast, when only one eye receives a global image, local piecemeal suppression from the fellow eye overrides the organizing effects of global feedback to prevent coherent image formation. This indicates the primacy of local over global processes in rivalry.

  10. Comparison of face masks in the bag-mask ventilation of a manikin.

    PubMed

    Redfern, D; Rassam, S; Stacey, M R; Mecklenburgh, J S

    2006-02-01

    We conducted a study investigating the effectiveness of four face mask designs in the bag-mask ventilation of a special manikin adapted to simulate a difficult airway. Forty-eight anaesthetists volunteered to bag-mask ventilate the manikin for 3 min with four different face masks. The primary outcome of the study was the mean percentage leak from the face masks over 3 min. Anaesthetists were also asked to rate the face masks using a visual analogue score. The single-use scented Intersurgical face mask had the lowest mean leak (20%). This was significantly lower than the mean leak from the single-use, cushioned 7,000 series Air Safety Ltd. face mask (24%) and the reusable silicone Laerdal face mask (27%) but not significantly lower than the mean leak from the reusable anatomical Intersurgical face mask (23%). There was a large variation in both performance and satisfaction between anaesthetists with each design. This highlights the importance of having a variety of face masks available for emergency use.

  11. Effects of Photo-Depicted Pupil Diameter on Judgments of Others' Attentiveness and on Facial Recognition Memory.

    PubMed

    Watier, Nicholas; Healy, Christopher; Armstrong, Heather

    2017-04-01

    Occasionally, individuals perceive that someone is no longer paying attention to the discussion at hand even when there are no overt cues of inattentiveness. As a preliminary study of this phenomenon, we examined whether pupil diameter might be implicitly used to infer others' attentiveness. Forty participants (27 women, 13 men, M age = 19.7 years, SD = 2.8) were presented with images of male faces with either large or small pupils, and, in the context of a personnel selection scenario, participants then judged the attentiveness of the person in the image. Images of faces with large pupils were judged as more attentive, compared with images of faces with small pupils. Face recognition memory performance was not affected by depicted pupil size. Our results are consistent with the proposal that pupillary fluctuations can be an index of perceived attention, and they provide preliminary evidence that pupil dilation may be implicitly relied upon to infer attentional states.

  12. Face recognition using tridiagonal matrix enhanced multivariance products representation

    NASA Astrophysics Data System (ADS)

    Özay, Evrim Korkmaz

    2017-01-01

    This study aims to retrieve face images from a database according to a target face image. For this purpose, Tridiagonal Matrix Enhanced Multivariance Products Representation (TMEMPR) is taken into consideration. TMEMPR is a recursive algorithm based on Enhanced Multivariance Products Representation (EMPR). TMEMPR decomposes a matrix into three components: a matrix of left support terms, a tridiagonal matrix of weight parameters for each recursion, and a matrix of right support terms. In this sense, there is an analogy between Singular Value Decomposition (SVD) and TMEMPR. However, TMEMPR is a more flexible algorithm since its initial support terms (or vectors) can be chosen as desired. Low computational complexity is another advantage of TMEMPR, because the algorithm is constructed from recursions of certain arithmetic operations without requiring any iteration. The algorithm has been trained and tested on the ORL face image database, which contains 400 grayscale images of 40 different people. Finally, TMEMPR's performance has been compared with that of SVD.

  13. What is adapted in face adaptation? The neural representations of expression in the human visual system.

    PubMed

    Fox, Christopher J; Barton, Jason J S

    2007-01-05

    The neural representation of facial expression within the human visual system is not well defined. Using an adaptation paradigm, we examined aftereffects on expression perception produced by various stimuli. Adapting to a face, which was used to create morphs between two expressions, substantially biased expression perception within the morphed faces away from the adapting expression. This adaptation was not based on low-level image properties, as a different image of the same person displaying that expression produced equally robust aftereffects. Smaller but significant aftereffects were generated by images of different individuals, irrespective of gender. Non-face visual, auditory, or verbal representations of emotion did not generate significant aftereffects. These results suggest that adaptation affects at least two neural representations of expression: one specific to the individual (not the image), and one that represents expression across different facial identities. The identity-independent aftereffect suggests the existence of a 'visual semantic' for facial expression in the human visual system.

  14. Human Age Estimation Method Robust to Camera Sensor and/or Face Movement

    PubMed Central

    Nguyen, Dat Tien; Cho, So Ra; Pham, Tuyen Danh; Park, Kang Ryoung

    2015-01-01

    Human age can be employed in many useful real-life applications, such as customer service systems, automatic vending machines, entertainment, etc. In order to obtain age information, image-based age estimation systems have been developed using information from the human face. However, limitations exist for current age estimation systems because of various factors such as camera motion and optical blurring, facial expressions, gender, etc. Motion blurring is usually introduced into face images by the movement of the camera sensor and/or the movement of the face during image acquisition. Therefore, the facial features in captured images can be transformed according to the amount of motion, which causes performance degradation of age estimation systems. In this paper, the problem caused by motion blurring is addressed and its solution is proposed in order to make age estimation systems robust to the effects of motion blurring. Experimental results show that our method is more efficient for enhancing age estimation performance compared with systems that do not employ our method. PMID:26334282

  15. Reading in developmental prosopagnosia: Evidence for a dissociation between word and face recognition.

    PubMed

    Starrfelt, Randi; Klargaard, Solja K; Petersen, Anders; Gerlach, Christian

    2018-02-01

    Recent models suggest that face and word recognition may rely on overlapping cognitive processes and neural regions. In support of this notion, face recognition deficits have been demonstrated in developmental dyslexia. Here we test whether the opposite association can also be found, that is, impaired reading in developmental prosopagnosia. We tested 10 adults with developmental prosopagnosia and 20 matched controls. All participants completed the Cambridge Face Memory Test, the Cambridge Face Perception Test and a face recognition questionnaire used to quantify everyday face recognition experience. Reading was measured in four experimental tasks, testing different levels of letter, word, and text reading: (a) single word reading with words of varying length, (b) vocal response times in single letter and short word naming, (c) recognition of single letters and short words at brief exposure durations (targeting the word superiority effect), and (d) text reading. Participants with developmental prosopagnosia performed strikingly similarly to controls across the four reading tasks. Formal analysis revealed a significant dissociation between word and face recognition, as the difference in performance with faces and words was significantly greater for participants with developmental prosopagnosia than for controls. Adult developmental prosopagnosics read as quickly and fluently as controls, while they are seemingly unable to learn efficient strategies for recognizing faces. We suggest that this is due to the differing demands that face and word recognition put on the perceptual system. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  16. Modeling first impressions from highly variable facial images.

    PubMed

    Vernon, Richard J W; Sutherland, Clare A M; Young, Andrew W; Hartley, Tom

    2014-08-12

    First impressions of social traits, such as trustworthiness or dominance, are reliably perceived in faces, and despite their questionable validity they can have considerable real-world consequences. We sought to uncover the information driving such judgments, using an attribute-based approach. Attributes (physical facial features) were objectively measured from feature positions and colors in a database of highly variable "ambient" face photographs, and then used as input for a neural network to model factor dimensions (approachability, youthful-attractiveness, and dominance) thought to underlie social attributions. A linear model based on this approach was able to account for 58% of the variance in raters' impressions of previously unseen faces, and factor-attribute correlations could be used to rank attributes by their importance to each factor. Reversing this process, neural networks were then used to predict facial attributes and corresponding image properties from specific combinations of factor scores. In this way, the factors driving social trait impressions could be visualized as a series of computer-generated cartoon face-like images, depicting how attributes change along each dimension. This study shows that despite enormous variation in ambient images of faces, a substantial proportion of the variance in first impressions can be accounted for through linear changes in objectively defined features.
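    The variance-explained figure above (58%) comes from a linear model over many measured facial attributes. A single-attribute least-squares sketch with invented data illustrates the R² measure being reported; the attribute name and numbers are made up for the example.

```python
# Toy illustration of "fraction of variance accounted for" by a linear model:
# fit one predictor by ordinary least squares and report R^2.

def fit_r2(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot  # fraction of variance explained

# e.g. a hypothetical attribute (x) vs. rated approachability (y):
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.1, 1.1, 1.9, 3.2, 3.9]
r2 = fit_r2(x, y)
```

    The paper's model does the same in many dimensions (attributes as inputs, factor scores as outputs), which is why attributes can then be ranked by their contribution to each factor.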

  17. Ultrahigh-Speed Optical Coherence Tomography for Three-Dimensional and En Face Imaging of the Retina and Optic Nerve Head

    PubMed Central

    Srinivasan, Vivek J.; Adler, Desmond C.; Chen, Yueli; Gorczynska, Iwona; Huber, Robert; Duker, Jay S.; Schuman, Joel S.; Fujimoto, James G.

    2009-01-01

    Purpose: To demonstrate ultrahigh-speed optical coherence tomography (OCT) imaging of the retina and optic nerve head at 249,000 axial scans per second and a wavelength of 1060 nm, and to investigate methods for visualization of the retina, choroid, and optic nerve using the high-density sampling enabled by improved imaging speed. Methods: A swept-source OCT retinal imaging system operating at a speed of 249,000 axial scans per second was developed. Imaging of the retina, choroid, and optic nerve was performed. Display methods such as speckle reduction, slicing along arbitrary planes, en face visualization of reflectance from specific retinal layers, and image compounding were investigated. Results: High-definition and three-dimensional (3D) imaging of the normal retina and optic nerve head were performed. Increased light penetration at 1060 nm enabled improved visualization of the choroid, lamina cribrosa, and sclera. OCT fundus images and 3D visualizations were generated with higher pixel density and fewer motion artifacts than standard spectral/Fourier domain OCT. En face images enabled visualization of the porous structure of the lamina cribrosa, nerve fiber layer, choroid, photoreceptors, RPE, and capillaries of the inner retina. Conclusions: Ultrahigh-speed OCT imaging of the retina and optic nerve head at 249,000 axial scans per second is possible. The improvement of ∼5 to 10× in imaging speed over commercial spectral/Fourier domain OCT technology enables higher density raster scan protocols and improved performance of en face visualization methods. The combination of the longer wavelength and ultrahigh imaging speed enables excellent visualization of the choroid, sclera, and lamina cribrosa. PMID:18658089

  18. Robust expertise effects in right FFA

    PubMed Central

    McGugin, Rankin Williams; Newton, Allen T; Gore, John C; Gauthier, Isabel

    2015-01-01

    The fusiform face area (FFA) is one of several areas in occipito-temporal cortex whose activity is correlated with perceptual expertise for objects. Here, we investigate the robustness of expertise effects in FFA and other areas to a strong task manipulation that increases both perceptual and attentional demands. With high-resolution fMRI at 7 Tesla, we measured responses to images of cars, faces and a category globally visually similar to cars (sofas) in 26 subjects who varied in expertise with cars, in (a) a low load 1-back task with a single object category and (b) a high load task in which objects from two categories rapidly alternated and attention was required to both categories. The low load condition revealed several areas more active as a function of expertise, including both posterior and anterior portions of FFA bilaterally (FFA1/FFA2 respectively). Under high load, fewer areas were positively correlated with expertise and several areas were even negatively correlated, but the expertise effect in face-selective voxels in the anterior portion of FFA (FFA2) remained robust. Finally, we found that behavioral car expertise also predicted increased responses to sofa images but no behavioral advantages in sofa discrimination, suggesting that global shape similarity to a category of expertise is enough to elicit a response in FFA and other areas sensitive to experience, even when the category itself is not of special interest. The robustness of expertise effects in right FFA2 and the expertise effects driven by visual similarity both argue against attention being the sole determinant of expertise effects in extrastriate areas. PMID:25192631

  19. Correction of rotational distortion for catheter-based en face OCT and OCT angiography

    PubMed Central

    Ahsen, Osman O.; Lee, Hsiang-Chieh; Giacomelli, Michael G.; Wang, Zhao; Liang, Kaicheng; Tsai, Tsung-Han; Potsaid, Benjamin; Mashimo, Hiroshi; Fujimoto, James G.

    2015-01-01

    We demonstrate a computationally efficient method for correcting the nonuniform rotational distortion (NURD) in catheter-based imaging systems to improve endoscopic en face optical coherence tomography (OCT) and OCT angiography. The method performs nonrigid registration using fiducial markers on the catheter to correct rotational speed variations. Algorithm performance is investigated with an ultrahigh-speed endoscopic OCT system and micromotor catheter. Scan nonuniformity is quantitatively characterized, and artifacts from rotational speed variations are significantly reduced. Furthermore, we present endoscopic en face OCT and OCT angiography images of human gastrointestinal tract in vivo to demonstrate the image quality improvement using the correction algorithm. PMID:25361133
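    The fiducial-based correction can be sketched as two steps: estimate the actual angle of each acquired A-scan from the detected marker positions, then resample the frame onto a uniform angle grid. A minimal illustration assuming piecewise-linear interpolation between markers (the marker geometry and interpolation scheme here are illustrative, not the published algorithm):

```python
import numpy as np

def correct_nurd(frame, marker_cols, marker_angles, n_out=None):
    """Resample one rotary frame to uniform angular spacing.

    frame: 2D array (depth, angular A-scan index).
    marker_cols: A-scan indices where fiducial markers were detected.
    marker_angles: known physical angles (radians) of those markers on
        the catheter sheath.
    The angle of every A-scan is interpolated from the markers
    (piecewise-linear), then each depth row is resampled onto a
    uniform angle grid, removing rotational-speed variation.
    """
    n_depth, n_ascan = frame.shape
    if n_out is None:
        n_out = n_ascan
    cols = np.arange(n_ascan)
    est_angle = np.interp(cols, marker_cols, marker_angles)
    uniform = np.linspace(est_angle[0], est_angle[-1], n_out)
    out = np.empty((n_depth, n_out))
    for z in range(n_depth):
        out[z] = np.interp(uniform, est_angle, frame[z])
    return out
```

    Applying the same per-frame resampling across a pullback straightens the en face image, which is what makes the downstream OCT angiography computation usable.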

  20. Neural networks related to dysfunctional face processing in autism spectrum disorder

    PubMed Central

    Nickl-Jockschat, Thomas; Rottschy, Claudia; Thommes, Johanna; Schneider, Frank; Laird, Angela R.; Fox, Peter T.; Eickhoff, Simon B.

    2016-01-01

    One of the most consistent neuropsychological findings in autism spectrum disorders (ASD) is a reduced interest in and impaired processing of human faces. We conducted an activation likelihood estimation meta-analysis on 14 functional imaging studies on neural correlates of face processing enrolling a total of 164 ASD patients. Subsequently, normative whole-brain functional connectivity maps for the identified regions of significant convergence were computed for the task-independent (resting-state) and task-dependent (co-activations) state in healthy subjects. Quantitative functional decoding was performed by reference to the BrainMap database. Finally, we examined the overlap of the delineated network with the results of a previous meta-analysis on structural abnormalities in ASD as well as with brain regions involved in human action observation/imitation. We found a single cluster in the left fusiform gyrus showing significantly reduced activation during face processing in ASD across all studies. Both task-dependent and task-independent analyses indicated significant functional connectivity of this region with the temporo-occipital and lateral occipital cortex, the inferior frontal and parietal cortices, the thalamus and the amygdala. Quantitative reverse inference then indicated an association of these regions mainly with face processing, affective processing, and language-related tasks. Moreover, we found that the cortex in the region of right area V5 displaying structural changes in ASD patients showed consistent connectivity with the region showing aberrant responses in the context of face processing. Finally, this network was also implicated in the human action observation/imitation network. In summary, our findings thus suggest a functionally and structurally disturbed network of occipital regions related primarily to face (but potentially also language) processing, which interacts with inferior frontal as well as limbic regions and may be the core of aberrant face processing and reduced interest in faces in ASD. PMID:24869925

  1. Single-treatment skin tightening by radiofrequency and long-pulsed, 1064-nm Nd: YAG laser compared.

    PubMed

    Key, Douglas J

    2007-02-01

    To compare single-treatment facial skin tightening achieved with the current radiofrequency (RF) protocol with single-treatment tightening achieved with the long-pulsed, 1064-nm Nd:YAG laser. A total of 12 patients were treated with RF energy on one side of the face and laser energy on the other. Results were evaluated on a numerical scale (0-12 with 12 = greatest enhancement) from pre- and posttreatment photographs by a blinded panel. Upper face improvement (posttreatment score minus pretreatment score) was essentially the same on both sides (30.2 and 31.3% improvement for laser and RF, respectively, P=0.89). Lower face improvement was greater in the laser-treated side (35.7 and 23.8% improvement for laser and RF, respectively), but the difference was not significant (P=0.074). Overall face improvement was significantly greater on the laser-treated side (47.5 and 29.8% improvement for laser and RF, respectively, P=0.028). A single high-fluence treatment with the long-pulse 1064-nm Nd:YAG laser may improve skin laxity more than a single treatment with the RF device. Further controlled split-face or very large non-self controlled studies are needed to conclusively determine the relative efficacies of the two technologies. (c) 2007 Wiley-Liss, Inc.

  2. Social Anxiety Modulates Subliminal Affective Priming

    PubMed Central

    Paul, Elizabeth S.; Pope, Stuart A. J.; Fennell, John G.; Mendl, Michael T.

    2012-01-01

    Background: It is well established that there is anxiety-related variation between observers in the very earliest, pre-attentive stage of visual processing of images such as emotionally expressive faces, often leading to enhanced attention to threat in a variety of disorders and traits. Whether there is also variation in early-stage affective (i.e. valenced) responses resulting from such images, however, is not yet known. The present study used the subliminal affective priming paradigm to investigate whether people varying in trait social anxiety also differ in their affective responses to very briefly presented, emotionally expressive face images. Methodology/Principal Findings: Participants (n = 67) completed a subliminal affective priming task, in which briefly presented smiling, neutral and angry faces were shown for 10 ms durations (below objective and subjective thresholds for visual discrimination), and immediately followed by a randomly selected Chinese character mask (2000 ms). Ratings of participants' liking for each Chinese character indicated the degree of valenced affective response made to the unseen emotive images. Participants' ratings of their liking for the Chinese characters were significantly influenced by the type of face image preceding them, with smiling faces generating more positive ratings than neutral and angry ones (F(2,128) = 3.107, p<0.05). Self-reported social anxiety was positively correlated with ratings of smiling relative to neutral-face primed characters (Pearson's r = .323, p<0.01). Individual variation in self-reported mood awareness was not associated with ratings. Conclusions: Trait social anxiety is associated with individual variation in affective responding, even in response to the earliest, pre-attentive stage of visual image processing. However, the fact that these priming effects are limited to smiling and not angry (i.e. threatening) images leads us to propose that the pre-attentive processes involved in generating the subliminal affective priming effect may be different from those that generate attentional biases in anxious individuals. PMID:22615873

  4. Responses in the right posterior superior temporal sulcus show a feature-based response to facial expression.

    PubMed

    Flack, Tessa R; Andrews, Timothy J; Hymers, Mark; Al-Mosaiwi, Mohammed; Marsden, Samuel P; Strachan, James W A; Trakulpipat, Chayanit; Wang, Liang; Wu, Tian; Young, Andrew W

    2015-08-01

    The face-selective region of the right posterior superior temporal sulcus (pSTS) plays an important role in analysing facial expressions. However, it is less clear how facial expressions are represented in this region. In this study, we used the face composite effect to explore whether the pSTS contains a holistic or feature-based representation of facial expression. Aligned and misaligned composite images were created from the top and bottom halves of faces posing different expressions. In Experiment 1, participants performed a behavioural matching task in which they judged whether the top half of two images was the same or different. The ability to discriminate the top half of the face was affected by changes in the bottom half of the face when the images were aligned, but not when they were misaligned. This shows a holistic behavioural response to expression. In Experiment 2, we used fMR-adaptation to ask whether the pSTS has a corresponding holistic neural representation of expression. Aligned or misaligned images were presented in blocks that involved repeating the same image or in which the top or bottom half of the images changed. Increased neural responses were found in the right pSTS regardless of whether the change occurred in the top or bottom of the image, showing that changes in expression were detected across all parts of the face. However, in contrast to the behavioural data, the pattern did not differ between aligned and misaligned stimuli. This suggests that the pSTS does not encode facial expressions holistically. In contrast to the pSTS, a holistic pattern of response to facial expression was found in the right inferior frontal gyrus (IFG). Together, these results suggest that pSTS reflects an early stage in the processing of facial expression in which facial features are represented independently. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Learning Low-Rank Class-Specific Dictionary and Sparse Intra-Class Variant Dictionary for Face Recognition.

    PubMed

    Tang, Xin; Feng, Guo-Can; Li, Xiao-Xin; Cai, Jia-Xin

    2015-01-01

    Face recognition is challenging especially when the images from different persons are similar to each other due to variations in illumination, expression, and occlusion. If we have sufficient training images of each person which can span the facial variations of that person under testing conditions, sparse representation based classification (SRC) achieves very promising results. However, in many applications, face recognition often encounters the small sample size problem arising from the small number of available training images for each person. In this paper, we present a novel face recognition framework by utilizing low-rank and sparse error matrix decomposition, and sparse coding techniques (LRSE+SC). Firstly, the low-rank matrix recovery technique is applied to decompose the face images per class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary and it captures the discriminative feature of this individual. The sparse error matrix represents the intra-class variations, such as illumination and expression changes. Secondly, we combine the low-rank part (representative basis) of each person into a supervised dictionary and integrate all the sparse error matrices of each individual into a within-individual variant dictionary which can be applied to represent the possible variations between the testing and training images. Then these two dictionaries are used to code the query image. The within-individual variant dictionary can be shared by all the subjects and contributes only to explaining the lighting conditions, expressions, and occlusions of the query image rather than discrimination. Finally, a reconstruction-based scheme is adopted for face recognition. Since the within-individual dictionary is introduced, LRSE+SC can handle the problem of corrupted training data and the situation in which not all subjects have enough samples for training. Experimental results show that our method achieves state-of-the-art results on the AR, FERET, FRGC and LFW databases.
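    The per-class low-rank/sparse split in the first step is an instance of robust PCA. A minimal sketch using the standard principal component pursuit ADMM iteration (generic robust PCA with common default parameters, not the paper's full LRSE+SC pipeline):

```python
import numpy as np

def shrink(X, tau):
    """Soft-thresholding operator (proximal step for the L1 term)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding (proximal step for the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def rpca(D, n_iter=500, tol=1e-7):
    """Decompose D into low-rank L plus sparse S via ADMM.

    Minimizes ||L||_* + lam * ||S||_1 subject to L + S = D, with the
    usual defaults lam = 1/sqrt(max(m, n)) and mu = m*n / (4*||D||_1).
    """
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = m * n / (4.0 * np.abs(D).sum())
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)  # scaled dual variable
    for _ in range(n_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)
        S = shrink(D - L + Y / mu, lam / mu)
        resid = D - L - S
        Y = Y + mu * resid
        if np.linalg.norm(resid) / max(np.linalg.norm(D), 1e-12) < tol:
            break
    return L, S
```

    In the LRSE+SC framing, L for each class supplies the class-specific dictionary atoms and S contributes to the shared intra-class variant dictionary.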

  7. Skin Color Segmentation Using Coarse-to-Fine Region on Normalized RGB Chromaticity Diagram for Face Detection

    NASA Astrophysics Data System (ADS)

    Soetedjo, Aryuanto; Yamada, Koichi

    This paper describes a new color segmentation based on a normalized RGB chromaticity diagram for face detection. Face skin is extracted from color images using a coarse skin region with fixed boundaries followed by a fine skin region with variable boundaries. Two newly developed histograms that have prominent peaks of skin color and non-skin colors are employed to adjust the boundaries of the skin region. The proposed approach does not need a skin color model, which depends on specific camera parameters and is usually limited to particular environmental conditions, and no sample images are required. The experimental results using color face images of various races under varying lighting conditions and complex backgrounds, obtained from four different resources on the Internet, show a high detection rate of 87%. The results of the detection rate and computation time are comparable to the well-known real-time face detection method proposed by Viola-Jones [11], [12].
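    The normalized RGB mapping and a coarse fixed-boundary skin region can be sketched in a few lines (the threshold box values below are illustrative guesses, not the paper's; the method refines such a coarse region with histogram-derived variable boundaries):

```python
import numpy as np

def normalized_rg(image):
    """Map an RGB image to the normalized rg chromaticity plane.

    image: float array (h, w, 3). Since r + g + b = 1, the pair (r, g)
    discards overall intensity and keeps only chromaticity, which is
    what makes skin color fairly compact across lighting changes.
    """
    s = image.sum(axis=2)
    s[s == 0] = 1.0  # avoid division by zero on black pixels
    r = image[..., 0] / s
    g = image[..., 1] / s
    return r, g

def coarse_skin_mask(image, r_lo=0.36, r_hi=0.60, g_lo=0.25, g_hi=0.37):
    """Coarse skin region: a fixed box on the rg chromaticity diagram."""
    r, g = normalized_rg(image.astype(float))
    return (r >= r_lo) & (r <= r_hi) & (g >= g_lo) & (g <= g_hi)
```

    A fine pass would then tighten `r_lo`/`r_hi`/`g_lo`/`g_hi` per image from the peaks of the skin and non-skin histograms, rather than keeping them fixed.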

  8. Dust Mantle Near Pavonis Mons

    NASA Technical Reports Server (NTRS)

    2003-01-01

    MGS MOC Release No. MOC2-356, 10 May 2003

    This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a thick mantle of dust covering lava flows north of Pavonis Mons so well that the flows are no longer visible. Flows are known to occur here because of the proximity to the volcano, and such flows normally have a very rugged surface. Fine dust, however, has settled out of the atmosphere over time and obscured the flows from view. The cliff at the top of the image faces north (up), the cliff in the middle of the image faces south (down), and the rugged slope at the bottom of the image faces north (up). The dark streak at the center-left was probably caused by an avalanche of dust sometime in the past few decades. The image is located near 4.1°N, 111.3°W. Sunlight illuminates the scene from the right/lower right.

  9. Face format at encoding affects the other-race effect in face memory.

    PubMed

    Zhao, Mintao; Hayward, William G; Bülthoff, Isabelle

    2014-08-07

    Memory of own-race faces is generally better than memory of other-race faces. This other-race effect (ORE) in face memory has been attributed to differences in contact, holistic processing, and motivation to individuate faces. Since most studies demonstrate the ORE with participants learning and recognizing static, single-view faces, it remains unclear whether the ORE can be generalized to different face learning conditions. Using an old/new recognition task, we tested whether face format at encoding modulates the ORE. The results showed a significant ORE when participants learned static, single-view faces (Experiment 1). In contrast, the ORE disappeared when participants learned rigidly moving faces (Experiment 2). Moreover, learning faces displayed from four discrete views produced the same results as learning rigidly moving faces (Experiment 3). Contact with other-race faces was correlated with the magnitude of the ORE. Nonetheless, the absence of the ORE in Experiments 2 and 3 cannot be readily explained by either more frequent contact with other-race faces or stronger motivation to individuate them. These results demonstrate that the ORE is sensitive to face format at encoding, supporting the hypothesis that relative involvement of holistic and featural processing at encoding mediates the ORE observed in face memory. © 2014 ARVO.

  10. Patterns of Visual Attention to Faces and Objects in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    McPartland, James C.; Webb, Sara Jane; Keehn, Brandon; Dawson, Geraldine

    2011-01-01

    This study used eye-tracking to examine visual attention to faces and objects in adolescents with autism spectrum disorder (ASD) and typical peers. Point of gaze was recorded during passive viewing of images of human faces, inverted human faces, monkey faces, three-dimensional curvilinear objects, and two-dimensional geometric patterns.…

  11. Sensitivity to spatial frequency content is not specific to face perception

    PubMed Central

    Williams, N. Rankin; Willenbockel, Verena; Gauthier, Isabel

    2010-01-01

    Prior work using a matching task between images that were complementary in spatial frequency and orientation information suggested that the representation of faces, but not objects, retains low-level spatial frequency (SF) information (Biederman & Kalocsai, 1997). In two experiments, we reexamine the claim that faces are uniquely sensitive to changes in SF. In contrast to prior work, we used a design allowing the computation of sensitivity and response criterion for each category, and in one experiment, equalized low-level image properties across object categories. In both experiments, we find that observers are sensitive to SF changes for upright and inverted faces and nonface objects. Differential response biases across categories contributed to a larger sensitivity for faces, but sensitivity itself also showed a larger effect for faces, especially when faces were upright and in a front-facing view. However, when objects were inverted, or upright but shown in a three-quarter view, the matching of objects and faces was equally sensitive to SF changes. Accordingly, face perception does not appear to be uniquely affected by changes in SF content. PMID:19576237
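    Complementary spatial-frequency images of the kind used in such matching tasks can be built by partitioning the Fourier domain so the two halves sum back to the original. A minimal sketch with a hard radial cutoff (the cutoff fraction and mask shape are illustrative; the original stimuli also partitioned orientation content):

```python
import numpy as np

def sf_complements(image, cutoff_frac=0.25):
    """Split a grayscale image into complementary spatial-frequency bands.

    A radial mask in the Fourier domain keeps frequencies below
    cutoff_frac * Nyquist in one output and the remainder in the other,
    so low + high reconstructs the original image exactly.
    """
    h, w = image.shape
    F = np.fft.fftshift(np.fft.fft2(image))
    fy = np.fft.fftshift(np.fft.fftfreq(h))
    fx = np.fft.fftshift(np.fft.fftfreq(w))
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    low_mask = radius <= cutoff_frac * 0.5  # 0.5 cycles/pixel = Nyquist
    low = np.fft.ifft2(np.fft.ifftshift(F * low_mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(F * (~low_mask))).real
    return low, high
```

    Pairing one face rendered from `low` with the same face rendered from `high` yields two images with no shared SF content, which is the manipulation the matching task depends on.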

  12. Neural responses to facial expression and face identity in the monkey amygdala.

    PubMed

    Gothard, K M; Battaglia, F P; Erickson, C A; Spitler, K M; Amaral, D G

    2007-02-01

    The amygdala is purported to play an important role in face processing, yet the specificity of its activation to face stimuli and the relative contribution of identity and expression to its activation are unknown. In the current study, neural activity in the amygdala was recorded as monkeys passively viewed images of monkey faces, human faces, and objects on a computer monitor. Comparable proportions of neurons responded selectively to images from each category. Neural responses to monkey faces were further examined to determine whether face identity or facial expression drove the face-selective responses. The majority of these neurons (64%) responded both to identity and facial expression, suggesting that these parameters are processed jointly in the amygdala. Large fractions of neurons, however, showed pure identity-selective or expression-selective responses. Neurons were selective for a particular facial expression by either increasing or decreasing their firing rate compared with the firing rates elicited by the other expressions. Responses to appeasing faces were often marked by significant decreases of firing rates, whereas responses to threatening faces were strongly associated with increased firing rate. Thus global activation in the amygdala might be larger to threatening faces than to neutral or appeasing faces.

  13. Recognizing the same face in different contexts: Testing within-person face recognition in typical development and in autism

    PubMed Central

    Neil, Louise; Cappagli, Giulia; Karaminis, Themelis; Jenkins, Rob; Pellicano, Elizabeth

    2016-01-01

    Unfamiliar face recognition follows a particularly protracted developmental trajectory and is more likely to be atypical in children with autism than those without autism. There is a paucity of research, however, examining the ability to recognize the same face across multiple naturally varying images. Here, we investigated within-person face recognition in children with and without autism. In Experiment 1, typically developing 6- and 7-year-olds, 8- and 9-year-olds, 10- and 11-year-olds, 12- to 14-year-olds, and adults were given 40 grayscale photographs of two distinct male identities (20 of each face taken at different ages, from different angles, and in different lighting conditions) and were asked to sort them by identity. Children mistook images of the same person as images of different people, subdividing each individual into many perceived identities. Younger children divided images into more perceived identities than adults and also made more misidentification errors (placing two different identities together in the same group) than older children and adults. In Experiment 2, we used the same procedure with 32 cognitively able children with autism. Autistic children reported a similar number of identities and made similar numbers of misidentification errors to a group of typical children of similar age and ability. Fine-grained analysis using matrices revealed marginal group differences in overall performance. We suggest that the immature performance in typical and autistic children could arise from problems extracting the perceptual commonalities from different images of the same person and building stable representations of facial identity. PMID:26615971

  14. Principal component analysis for surface reflection components and structure in facial images and synthesis of facial images for various ages

    NASA Astrophysics Data System (ADS)

    Hirose, Misa; Toyota, Saori; Ojima, Nobutoshi; Ogawa-Ochiai, Keiko; Tsumura, Norimichi

    2017-08-01

    In this paper, principal component analysis is applied to the distribution of pigmentation, surface reflectance, and landmarks in whole facial images to obtain feature values. The relationship between the obtained feature vectors and the age of the face is then estimated by multiple regression analysis so that facial images can be modulated for women aged 10-70. In a previous study, we analyzed only the distribution of pigmentation, and the reproduced images appeared to be younger than the apparent age of the initial images. We believe that this happened because we did not modulate the facial structures and detailed surfaces, such as wrinkles. By considering landmarks and surface reflectance over the entire face, we were able to analyze the variation in the distributions of facial structure, fine surface asperity, and pigmentation. As a result, our method is able to appropriately modulate the appearance of a face so that it appears to be the correct age.
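    The analysis pipeline (PCA on concatenated feature vectors, then multiple regression against age) can be sketched generically; the feature layout and data below are hypothetical stand-ins, not the paper's features:

```python
import numpy as np

def pca_fit(X, n_components):
    """PCA via SVD on mean-centered feature vectors (rows = samples)."""
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    components = Vt[:n_components]          # principal axes
    scores = (X - mean) @ components.T      # per-sample feature values
    return mean, components, scores

def fit_age_model(scores, ages):
    """Multiple linear regression from PCA scores to age (with intercept)."""
    A = np.column_stack([scores, np.ones(len(ages))])
    coef, *_ = np.linalg.lstsq(A, ages, rcond=None)
    return coef

def predict_age(scores, coef):
    A = np.column_stack([scores, np.ones(len(scores))])
    return A @ coef
```

    Age modulation then runs the regression in reverse: move a face's scores along the fitted age direction and reconstruct `mean + scores @ components` to synthesize the older or younger appearance.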

  15. Remote tele-mentored ultrasound for non-physician learners using FaceTime: A feasibility study in a low-income country.

    PubMed

    Robertson, Thomas E; Levine, Andrea R; Verceles, Avelino C; Buchner, Jessica A; Lantry, James H; Papali, Alfred; Zubrow, Marc T; Colas, L Nathalie; Augustin, Marc E; McCurdy, Michael T

    2017-08-01

    Ultrasound (US) is a burgeoning diagnostic tool and is often the only available imaging modality in low- and middle-income countries (LMICs). However, bedside providers often lack training to acquire or interpret US images. We conducted a study to determine if a remote tele-intensivist could mentor geographically removed LMIC providers to obtain quality and clinically useful US images. Nine Haitian non-physician health care workers received a 20-minute training on basic US techniques. A volunteer was connected to an intensivist located in the USA via FaceTime. The intensivist remotely instructed the non-physicians to ultrasound five anatomic sites. The tele-intensivist evaluated the image quality and clinical utility of performing tele-ultrasound in a LMIC. The intensivist agreed (defined as "agree" or "strongly agree" on a five-point Likert scale) that 90% (57/63) of the FaceTime images were high quality. The intensivist felt comfortable making clinical decisions using FaceTime images 89% (56/63) of the time. Non-physicians can feasibly obtain high-quality and clinically relevant US images using video chat software in LMICs. Commercially available software can connect providers in institutions in LMICs to geographically removed intensivists at a relatively low cost and without the need for extensive training of local providers. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Ultrahigh speed en face OCT capsule for endoscopic imaging

    PubMed Central

    Liang, Kaicheng; Traverso, Giovanni; Lee, Hsiang-Chieh; Ahsen, Osman Oguz; Wang, Zhao; Potsaid, Benjamin; Giacomelli, Michael; Jayaraman, Vijaysekhar; Barman, Ross; Cable, Alex; Mashimo, Hiroshi; Langer, Robert; Fujimoto, James G.

    2015-01-01

    Depth resolved and en face OCT visualization in vivo may have important clinical applications in endoscopy. We demonstrate a high speed, two-dimensional (2D) distal scanning capsule with a micromotor for fast rotary scanning and a pneumatic actuator for precision longitudinal scanning. Longitudinal position measurement and image registration were performed by optical tracking of the pneumatic scanner. The 2D scanning device enables high resolution imaging over a small field of view and is suitable for OCT as well as other scanning microscopies. Large field of view imaging for screening or surveillance applications can also be achieved by proximally pulling back or advancing the capsule while scanning the distal high-speed micromotor. Circumferential en face OCT was demonstrated in living swine at 250 Hz frame rate and 1 MHz A-scan rate using a MEMS tunable VCSEL light source at 1300 nm. Cross-sectional and en face OCT views of the upper and lower gastrointestinal tract were generated with precision distal pneumatic longitudinal actuation as well as proximal manual longitudinal actuation. These devices could enable clinical studies either as an adjunct to endoscopy, attached to an endoscope, or as a swallowed tethered capsule for non-endoscopic imaging without sedation. The combination of ultrahigh speed imaging and distal scanning capsule technology could enable both screening and surveillance applications. PMID:25909001

  18. Single-Molecule Tracking and Its Application in Biomolecular Binding Detection.

    PubMed

    Liu, Cong; Liu, Yen-Liang; Perillo, Evan P; Dunn, Andrew K; Yeh, Hsin-Chih

    2016-01-01

    In the past two decades significant advances have been made in single-molecule detection, which enables the direct observation of single biomolecules at work in real time and under physiological conditions. In particular, the development of single-molecule tracking (SMT) microscopy allows us to monitor the motion paths of individual biomolecules in living systems, unveiling the localization dynamics and transport modalities of the biomolecules that support the development of life. Beyond the capabilities of traditional camera-based tracking techniques, state-of-the-art SMT microscopies developed in recent years can record fluorescence lifetime while tracking a single molecule in the 3D space. This multiparameter detection capability can open the door to a wide range of investigations at the cellular or tissue level, including identification of molecular interaction hotspots and characterization of association/dissociation kinetics between molecules. In this review, we discuss various SMT techniques developed to date, with an emphasis on our recent development of the next generation 3D tracking system that not only achieves ultrahigh spatiotemporal resolution but also provides sufficient working depth suitable for live animal imaging. We also discuss the challenges that current SMT techniques are facing and the potential strategies to tackle those challenges.

  19. Single-Molecule Tracking and Its Application in Biomolecular Binding Detection

    PubMed Central

    Liu, Cong; Liu, Yen-Liang; Perillo, Evan P.; Dunn, Andrew K.; Yeh, Hsin-Chih

    2016-01-01

    In the past two decades significant advances have been made in single-molecule detection, which enables the direct observation of single biomolecules at work in real time and under physiological conditions. In particular, the development of single-molecule tracking (SMT) microscopy allows us to monitor the motion paths of individual biomolecules in living systems, unveiling the localization dynamics and transport modalities of the biomolecules that support the development of life. Beyond the capabilities of traditional camera-based tracking techniques, state-of-the-art SMT microscopies developed in recent years can record fluorescence lifetime while tracking a single molecule in the 3D space. This multiparameter detection capability can open the door to a wide range of investigations at the cellular or tissue level, including identification of molecular interaction hotspots and characterization of association/dissociation kinetics between molecules. In this review, we discuss various SMT techniques developed to date, with an emphasis on our recent development of the next generation 3D tracking system that not only achieves ultrahigh spatiotemporal resolution but also provides sufficient working depth suitable for live animal imaging. We also discuss the challenges that current SMT techniques are facing and the potential strategies to tackle those challenges. PMID:27660404

  20. Vitiligo on the face (image)

    MedlinePlus

    This is a picture of vitiligo on the face. Complete loss of melanin, the primary skin pigment, ... the same areas on both sides of the face -- symmetrically -- or it may be patchy -- asymmetrical. The ...

  1. Method for Face-Emotion Retrieval Using A Cartoon Emotional Expression Approach

    NASA Astrophysics Data System (ADS)

    Kostov, Vlaho; Yanagisawa, Hideyoshi; Johansson, Martin; Fukuda, Shuichi

    A simple method for extracting emotion from a human face, as a form of non-verbal communication, was developed to cope with and optimize mobile communication in a globalized and diversified society. A cartoon-face-based model was developed and used to evaluate the emotional content of real faces. After a pilot survey, basic rules were defined and student subjects were asked to express emotion using the cartoon face. Their face samples were then analyzed using principal component analysis and the Mahalanobis distance method. Feature parameters considered to be related to emotions were extracted and new cartoon faces (based on these parameters) were generated. The subjects again evaluated the emotion of these cartoon faces, and we confirmed that these parameters were suitable. To confirm how these parameters could be applied to real faces, we asked subjects to express the same emotions, which were then captured electronically. Simple image processing techniques were also developed to extract these features from real faces, and we then compared them with the cartoon face parameters. The cartoon face demonstrates that emotions can be expressed from very small amounts of information; accordingly, real and cartoon faces correspond to each other. It is also shown that emotion could be extracted from still and dynamic real face images using these cartoon-based features.
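
    The classification step described above (feature vectors scored by Mahalanobis distance to class statistics) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the two-cluster "emotion" data and the pooled covariance are synthetic stand-ins.

```python
import numpy as np

def mahalanobis_classify(x, class_means, cov):
    """Assign x to the class whose mean is nearest in Mahalanobis distance."""
    inv_cov = np.linalg.inv(cov)
    dists = [float(np.sqrt((x - m) @ inv_cov @ (x - m))) for m in class_means]
    return int(np.argmin(dists)), dists

# Toy example: two "emotion" clusters in a 2D feature space
rng = np.random.default_rng(0)
happy = rng.normal([2.0, 0.0], 0.3, size=(50, 2))
sad = rng.normal([-2.0, 0.0], 0.3, size=(50, 2))
# Pooled within-class covariance from the centred samples
cov = np.cov(np.vstack([happy - happy.mean(0), sad - sad.mean(0)]).T)
means = [happy.mean(0), sad.mean(0)]

label, _ = mahalanobis_classify(np.array([1.8, 0.1]), means, cov)
print(label)  # 0 (nearest the "happy" cluster)
```

    Unlike Euclidean distance, the Mahalanobis form discounts directions in which the feature distribution naturally varies a lot, which suits PCA-derived features of unequal variance.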

  2. Divided by a Common Degree Program? Profiling Online and Face-to-Face Information Science Students

    ERIC Educational Resources Information Center

    Haigh, Maria

    2007-01-01

    This study examines profiles of online and face-to-face students in a single information science school: the University of Wisconsin-Milwaukee School of Information Studies. A questionnaire was administered to 76 students enrolled in online course sections and 72 students enrolled in face-to-face course sections. The questionnaire examined student…

  3. An embedded face-classification system for infrared images on an FPGA

    NASA Astrophysics Data System (ADS)

    Soto, Javier E.; Figueroa, Miguel

    2014-10-01

    We present a face-classification architecture for long-wave infrared (IR) images implemented on a Field Programmable Gate Array (FPGA). The circuit is fast, compact and low power, can recognize faces in real time and be embedded in a larger image-processing and computer vision system operating locally on an IR camera. The algorithm uses Local Binary Patterns (LBP) to perform feature extraction on each IR image. First, each pixel in the image is represented as an LBP pattern that encodes the similarity between the pixel and its neighbors. Uniform LBP codes are then used to reduce the number of patterns to 59 while preserving more than 90% of the information contained in the original LBP representation. Then, the image is divided into 64 non-overlapping regions, and each region is represented as a 59-bin histogram of patterns. Finally, the algorithm concatenates all 64 regions to create a 3,776-bin spatially enhanced histogram. We reduce the dimensionality of this histogram using Linear Discriminant Analysis (LDA), which improves clustering and enables us to store an entire database of 53 subjects on-chip. During classification, the circuit applies LBP and LDA to each incoming IR image in real time, and compares the resulting feature vector to each pattern stored in the local database using the Manhattan distance. We implemented the circuit on a Xilinx Artix-7 XC7A100T FPGA and tested it with the UCHThermalFace database, which consists of 28 81 x 150-pixel images of each of 53 subjects in indoor and outdoor conditions. The circuit achieves a 98.6% hit ratio when trained with 16 images and tested with 12 images of each subject in the database. Using a 100 MHz clock, the circuit classifies 8,230 images per second and consumes only 309 mW.
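
    The uniform-LBP reduction mentioned above (256 raw patterns collapsed into 58 uniform codes plus one shared bin, giving 59 histogram bins) can be illustrated in software. This sketch is not the FPGA implementation; the clockwise neighbour ordering is an assumption, and any consistent ordering works.

```python
import numpy as np

def lbp_code(patch):
    """8-neighbour LBP code of the centre pixel of a 3x3 patch."""
    c = patch[1, 1]
    # Clockwise neighbour order starting at top-left (ordering is a convention)
    neigh = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
             patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum((1 << i) for i, n in enumerate(neigh) if n >= c)

def is_uniform(code):
    """A pattern is 'uniform' if its circular bit string has at most 2 transitions."""
    bits = [(code >> i) & 1 for i in range(8)]
    transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    return transitions <= 2

# 58 uniform patterns + 1 shared bin for all non-uniform codes = 59 bins
uniform_codes = [c for c in range(256) if is_uniform(c)]
print(len(uniform_codes))  # 58
```

    Each of the 64 image regions would then accumulate a 59-bin histogram over these codes, and the concatenation gives the 3,776-dimensional vector that LDA compresses.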

  4. Fourier Domain Optical Coherence Tomography With 3D and En Face Imaging of the Punctum and Vertical Canaliculus: A Step Toward Establishing a Normative Database.

    PubMed

    Kamal, Saurabh; Ali, Mohammad Javed; Ali, Mohammad Hasnat; Naik, Milind N

    2016-01-01

    To report the features of Fourier domain optical coherence tomography imaging of the normal punctum and vertical canaliculus. This prospective, interventional study included consecutive healthy, asymptomatic adults who volunteered for optical coherence tomography imaging. Fourier domain optical coherence tomography images of the punctum and vertical canaliculus, along with 3D and en face images, were captured using the RTVue scanner with a corneal adaptor module and a wide-angled lens. Maximum punctal diameter, mid-canalicular diameter, and vertical canalicular height were calculated. Statistical analysis was performed using the Pearson correlation test, and scatter plot matrices were analyzed. A total of 103 puncta of 52 healthy subjects were studied. Although all the images could depict the punctum and vertical canaliculus and all the desired measurements could be obtained, occasional tear debris within the canaliculus interfered with the imaging. The mean maximum punctal diameter, mid-canalicular diameter, and vertical canalicular height were recorded as 214.71 ± 73 μm, 125.04 ± 60.69 μm, and 890.41 ± 154.76 μm, respectively, with a nonsignificant correlation between them. The maximum recorded vertical canalicular height in all the cases was far less than the widely reported depth of 2 mm. High-resolution 3D and en face images provided a detailed topography of the punctal surface and an overview of the vertical canaliculus. Fourier domain optical coherence tomography with 3D and en face imaging is a useful noninvasive modality to image the proximal lacrimal system with consistently reproducible high-resolution images. This is likely to help clinicians in the management of proximal lacrimal disorders.

  5. Face Search at Scale.

    PubMed

    Wang, Dayong; Otto, Charles; Jain, Anil K

    2017-06-01

    Given the prevalence of social media websites, one challenge facing computer vision researchers is to devise methods to search for persons of interest among the billions of shared photos on these websites. Despite significant progress in face recognition, searching a large collection of unconstrained face images remains a difficult problem. To address this challenge, we propose a face search system which combines a fast search procedure, coupled with a state-of-the-art commercial off-the-shelf (COTS) matcher, in a cascaded framework. Given a probe face, we first filter the large gallery of photos to find the top-k most similar faces using features learned by a convolutional neural network. The k retrieved candidates are re-ranked by combining similarities based on deep features and those output by the COTS matcher. We evaluate the proposed face search system on a gallery containing 80 million web-downloaded face images. Experimental results demonstrate that while the deep features perform worse than the COTS matcher on a mugshot dataset (93.7 percent versus 98.6 percent TAR@FAR of 0.01 percent), fusing the deep features with the COTS matcher improves the overall performance (99.5 percent TAR@FAR of 0.01 percent). This shows that the learned deep features provide complementary information over representations used in state-of-the-art face matchers. On the unconstrained face image benchmarks, the performance of the learned deep features is competitive with reported accuracies. LFW database: 98.20 percent accuracy under the standard protocol and 88.03 percent TAR@FAR of 0.1 percent under the BLUFR protocol; IJB-A benchmark: 51.0 percent TAR@FAR of 0.1 percent (verification), rank 1 retrieval of 82.2 percent (closed-set search), 61.5 percent FNIR@FAR of 1 percent (open-set search). The proposed face search system offers an excellent trade-off between accuracy and scalability on galleries with millions of images.
Additionally, in a face search experiment involving photos of the Tsarnaev brothers, convicted of the Boston Marathon bombing, the proposed cascade face search system could find the younger brother's (Dzhokhar Tsarnaev) photo at rank 1 in 1 second on a 5 M gallery and at rank 8 in 7 seconds on an 80 M gallery.
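
    The two-stage cascade described above (fast deep-feature shortlist, then re-ranking with a second matcher) can be sketched as below. The features are synthetic, the COTS matcher is a hypothetical callback, and the weighted-sum fusion rule is an assumption; the abstract only says the two similarity sources are combined.

```python
import numpy as np

def cascade_search(probe, gallery_deep, cots_score, k=100, alpha=0.5):
    """Shortlist by deep-feature cosine similarity, then re-rank the top-k
    by fusing deep and COTS-matcher scores with an alpha-weighted sum.
    `cots_score(i)` stands in for the commercial matcher (assumption)."""
    g = gallery_deep / np.linalg.norm(gallery_deep, axis=1, keepdims=True)
    p = probe / np.linalg.norm(probe)
    sims = g @ p                            # cosine similarity to every gallery face
    shortlist = np.argsort(-sims)[:k]       # cheap first stage
    fused = [(i, alpha * sims[i] + (1 - alpha) * cots_score(i)) for i in shortlist]
    fused.sort(key=lambda t: -t[1])         # expensive matcher only on top-k
    return [i for i, _ in fused]

rng = np.random.default_rng(1)
gallery = rng.normal(size=(1000, 64))
probe = gallery[42] + 0.01 * rng.normal(size=64)   # near-duplicate of entry 42
ranked = cascade_search(probe, gallery, cots_score=lambda i: 0.0, k=10)
print(ranked[0])  # 42
```

    The design point is that the accurate but slow matcher only ever sees k candidates, so overall cost scales with k rather than with the 80-million-image gallery.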

  6. Standoff imaging of a masked human face using a 670 GHz high resolution radar

    NASA Astrophysics Data System (ADS)

    Kjellgren, Jan; Svedin, Jan; Cooper, Ken B.

    2011-11-01

    This paper presents an exploratory attempt to use high-resolution radar measurements for face identification in forensic applications. An imaging radar system developed by JPL was used to measure a human face at 670 GHz. Frontal views of the face were measured both with and without a ski mask at a range of 25 m. The realized spatial resolution was roughly 1 cm in all three dimensions. The surfaces of the ski mask and the face were detected by using the two dominating reflections from amplitude data. Various methods for visualization of these surfaces are presented. The possibility of using radar data to determine face distance measures between well-defined landmarks, of the kind typically used in anthropometric statistics, was explored. The measures used here were face length, frontal breadth and interpupillary distance. In many cases the radar system seems to provide sufficient information to exclude an innocent subject from suspicion. For an accurate identification it is believed that a system must provide significantly more information.

  7. Cultural similarities and differences in perceiving and recognizing facial expressions of basic emotions.

    PubMed

    Yan, Xiaoqian; Andrews, Timothy J; Young, Andrew W

    2016-03-01

    The ability to recognize facial expressions of basic emotions is often considered a universal human ability. However, recent studies have suggested that this commonality has been overestimated and that people from different cultures use different facial signals to represent expressions (Jack, Blais, Scheepers, Schyns, & Caldara, 2009; Jack, Caldara, & Schyns, 2012). We investigated this possibility by examining similarities and differences in the perception and categorization of facial expressions between Chinese and white British participants using whole-face and partial-face images. Our results showed no cultural difference in the patterns of perceptual similarity of expressions from whole-face images. When categorizing the same expressions, however, both British and Chinese participants were slightly more accurate with whole-face images of their own ethnic group. To further investigate potential strategy differences, we repeated the perceptual similarity and categorization tasks with presentation of only the upper or lower half of each face. Again, the perceptual similarity of facial expressions was similar between Chinese and British participants for both the upper and lower face regions. However, participants were slightly better at categorizing facial expressions of their own ethnic group for the lower face regions, indicating that the way in which culture shapes the categorization of facial expressions is largely driven by differences in information decoding from this part of the face. (c) 2016 APA, all rights reserved.

  8. An embedded system for face classification in infrared video using sparse representation

    NASA Astrophysics Data System (ADS)

    Saavedra M., Antonio; Pezoa, Jorge E.; Zarkesh-Ha, Payman; Figueroa, Miguel

    2017-09-01

    We propose a platform for robust face recognition in Infrared (IR) images using Compressive Sensing (CS). In line with CS theory, the classification problem is solved using a sparse representation framework, where test images are modeled by means of a linear combination of the training set. Because the training set constitutes an over-complete dictionary, we identify new images by finding their sparsest representation based on the training set, using standard l1-minimization algorithms. Unlike conventional face-recognition algorithms, feature extraction is performed using random projections with a precomputed binary matrix, as proposed in the CS literature. This random sampling reduces the effects of noise and occlusions such as facial hair, eyeglasses, and disguises, which are notoriously challenging in IR images. Thus, the performance of our framework is robust to these noise and occlusion factors, achieving an average accuracy of approximately 90% when the UCHThermalFace database is used for training and testing purposes. We implemented our framework on a high-performance embedded digital system, where the computation of the sparse representation of IR images was performed by dedicated hardware using a deeply pipelined architecture on a Field-Programmable Gate Array (FPGA).
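
    The sparse-representation classifier can be sketched as follows. The paper solves an l1-minimization; this sketch substitutes greedy orthogonal matching pursuit (OMP) as a simpler stand-in for the sparse-coding step, and the random-projection dictionary and labels are synthetic.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Greedy sparse coding (OMP), a simple stand-in for l1-minimization:
    find a sparse x with D @ x ~ y by adding one dictionary atom at a time."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def classify(D, labels, y, n_nonzero=5):
    """Assign y to the class whose training columns best reconstruct it."""
    x = omp(D, y, n_nonzero)
    errs = {}
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        xc = np.where(mask, x, 0.0)                  # keep only class-c coefficients
        errs[c] = np.linalg.norm(y - D @ xc)
    return min(errs, key=errs.get)

rng = np.random.default_rng(2)
# Randomly projected training faces: 2 classes, 10 atoms each, unit columns
D = rng.normal(size=(30, 20))
D /= np.linalg.norm(D, axis=0)
labels = [0] * 10 + [1] * 10
y = D[:, 3] * 0.9 + D[:, 7] * 0.4    # probe built from class-0 atoms
print(classify(D, labels, y))  # 0
```

    The class-wise residual test is the key idea: a probe from class c should be reconstructable almost entirely from class-c training columns, so occluded or noisy pixels that no class explains well are simply left in the residual.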

  9. Semisupervised kernel marginal Fisher analysis for face recognition.

    PubMed

    Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun

    2013-01-01

    Dimensionality reduction is a key problem in face recognition due to the high dimensionality of face images. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labeled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it can successfully avoid the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold-adaptive nonparametric kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.

  10. Analog signal processing for optical coherence imaging systems

    NASA Astrophysics Data System (ADS)

    Xu, Wei

    Optical coherence tomography (OCT) and optical coherence microscopy (OCM) are non-invasive optical coherence imaging techniques, which enable micron-scale resolution and depth-resolved imaging. Both OCT and OCM are based on Michelson interferometer theory. They are widely used in ophthalmology, gastroenterology and dermatology, because of their high resolution, safety and low cost. OCT creates cross sectional images whereas OCM obtains en face images. In this dissertation, the design and development of three increasingly complicated analog signal processing (ASP) solutions for optical coherence imaging are presented. The first ASP solution was implemented for a time domain OCT system with a Rapid Scanning Optical Delay line (RSOD)-based optical signal modulation and logarithmic amplifier (Log amp) based demodulation. This OCT system can acquire up to 1600 A-scans per second. The measured dynamic range is 106 dB at 200 A-scans per second. This OCT signal processing electronics includes an off-the-shelf filter box with a Log amp circuit implemented on a PCB. The second ASP solution was developed for an OCM system with synchronized modulation and demodulation and compensation for interferometer phase drift. This OCM acquired micron-scale resolution, high dynamic range images at acquisition speeds up to 45,000 pixels/second. This OCM ASP solution is fully custom designed on a perforated circuit board. The third ASP solution was implemented on a single 2.2 mm x 2.2 mm complementary metal oxide semiconductor (CMOS) chip. This design is expandable to a multiple channel OCT system. A single on-chip CMOS photodetector and ASP channel was used for coherent demodulation in a time domain OCT system. Cross-sectional images were acquired with a dynamic range of 76 dB (limited by photodetector responsivity). When incorporated with a bump-bonded InGaAs photodiode with higher responsivity, the expected dynamic range is close to 100 dB.

  11. Sex and age differences in body-image, self-esteem, and body mass index in adolescents and adults after single-ventricle palliation.

    PubMed

    Pike, Nancy A; Evangelista, Lorraine S; Doering, Lynn V; Eastwood, Jo-Ann; Lewis, Alan B; Child, John S

    2012-06-01

    Single-ventricle congenital heart disease (SVCHD) requires multiple palliative surgical procedures that leave visible surgical scars and physical deficits, which can alter body-image and self-esteem. This study aimed to compare sex and age differences in body-image, self-esteem, and body mass index (BMI) in adolescents and adults with SVCHD after surgical palliation with those of a healthy control group. Using a comparative, cross-sectional design, 54 adolescent and adult (26 male and 28 female) patients, age 15–50 years, with SVCHD were compared with 66 age-matched healthy controls. Body-image and self-esteem were measured using the Multidimensional Body-Self Relations Questionnaire–Appearance Scale and Rosenberg Self-Esteem Scale. Height and weight were collected from retrospective chart review, and BMI was calculated. Female adolescents and adult patients with SVCHD reported lower body image compared with male patients with SVCHD and healthy controls (p = 0.003). Specific areas of concern were face (p = 0.002), upper torso or chest (p = 0.002), and muscle tone (p = 0.001). Patients with SVCHD who were <21 years of age had lower body image compared with healthy controls (p = 0.006). Self-esteem was comparable for both patients with SVCHD and healthy peers. There were no sex differences in BMI; BMI was higher in subjects >21 years of age (p = 0.01). Despite the similarities observed in self-esteem between the two groups, female patients with SVCHD <21 years of age reported lower perceived body-image. Our findings support the need to recognize poor psychological adjustment related to low self-esteem in patients with SVCHD; female patients warrant increased scrutiny. Strategies to help patients with SVCHD cope with nonmodifiable aspects of body-image during the difficult adolescent–to–young adult years may potentially enhance self-esteem and decrease psychological distress.

  12. Sex and Age Differences in Body-Image, Self-Esteem, and Body Mass Index in Adolescents and Adults After Single-Ventricle Palliation

    PubMed Central

    Evangelista, Lorraine S.; Doering, Lynn V.; Eastwood, Jo-Ann; Lewis, Alan B.; Child, John S.

    2012-01-01

    Single-ventricle congenital heart disease (SVCHD) requires multiple palliative surgical procedures that leave visible surgical scars and physical deficits, which can alter body-image and self-esteem. This study aimed to compare sex and age differences in body-image, self-esteem, and body mass index (BMI) in adolescents and adults with SVCHD after surgical palliation with those of a healthy control group. Using a comparative, cross-sectional design, 54 adolescent and adult (26 male and 28 female) patients, age 15–50 years, with SVCHD were compared with 66 age-matched healthy controls. Body-image and self-esteem were measured using the Multidimensional Body-Self Relations Questionnaire–Appearance Scale and Rosenberg Self-Esteem Scale. Height and weight were collected from retrospective chart review, and BMI was calculated. Female adolescents and adult patients with SVCHD reported lower body image compared with male patients with SVCHD and healthy controls (p = 0.003). Specific areas of concern were face (p = 0.002), upper torso or chest (p = 0.002), and muscle tone (p = 0.001). Patients with SVCHD who were <21 years of age had lower body image compared with healthy controls (p = 0.006). Self-esteem was comparable for both patients with SVCHD and healthy peers. There were no sex differences in BMI; BMI was higher in subjects >21 years of age (p = 0.01). Despite the similarities observed in self-esteem between the two groups, female patients with SVCHD <21 years of age reported lower perceived body-image. Our findings support the need to recognize poor psychological adjustment related to low self-esteem in patients with SVCHD; female patients warrant increased scrutiny. Strategies to help patients with SVCHD cope with nonmodifiable aspects of body-image during the difficult adolescent–to–young adult years may potentially enhance self-esteem and decrease psychological distress. PMID:22314368

  13. Facelock: familiarity-based graphical authentication.

    PubMed

    Jenkins, Rob; McLachlan, Jane L; Renaud, Karen

    2014-01-01

    Authentication codes such as passwords and PINs are widely used to control access to resources. One major drawback of these codes is that they are difficult to remember. Account holders are often faced with a choice between forgetting a code, which is inconvenient, and writing it down, which compromises security. In two studies, we test a new knowledge-based authentication method that does not impose memory load on the user. Psychological research on face recognition has revealed an important distinction between familiar and unfamiliar face perception: When a face is familiar to the observer, it can be identified across a wide range of images. However, when the face is unfamiliar, generalisation across images is poor. This contrast can be used as the basis for a personalised 'facelock', in which authentication succeeds or fails based on image-invariant recognition of faces that are familiar to the account holder. In Study 1, account holders authenticated easily by detecting familiar targets among other faces (97.5% success rate), even after a one-year delay (86.1% success rate). Zero-acquaintance attackers were reduced to guessing (<1% success rate). Even personal attackers who knew the account holder well were rarely able to authenticate (6.6% success rate). In Study 2, we found that shoulder-surfing attacks by strangers could be defeated by presenting different photos of the same target faces in observed and attacked grids (1.9% success rate). Our findings suggest that the contrast between familiar and unfamiliar face recognition may be useful for developers of graphical authentication systems.

  14. Factors contributing to the adaptation aftereffects of facial expression.

    PubMed

    Butler, Andrea; Oruc, Ipek; Fox, Christopher J; Barton, Jason J S

    2008-01-29

    Previous studies have demonstrated the existence of adaptation aftereffects for facial expressions. Here we investigated which aspects of facial stimuli contribute to these aftereffects. In Experiment 1, we examined the role of local adaptation to image elements such as curvature, shape and orientation, independent of expression, by using hybrid faces constructed from either the same or opposing expressions. While hybrid faces made with consistent expressions generated aftereffects as large as those with normal faces, there were no aftereffects from hybrid faces made from different expressions, despite the fact that these contained the same local image elements. In Experiment 2, we examined the role of facial features independent of the normal face configuration by contrasting adaptation with whole faces to adaptation with scrambled faces. We found that scrambled faces also generated significant aftereffects, indicating that expressive features without a normal facial configuration could generate expression aftereffects. In Experiment 3, we examined the role of facial configuration by using schematic faces made from line elements that in isolation do not carry expression-related information (e.g. curved segments and straight lines) but that convey an expression when arranged in a normal facial configuration. We obtained a significant aftereffect for facial configurations but not scrambled configurations of these line elements. We conclude that facial expression aftereffects are not due to local adaptation to image elements but due to high-level adaptation of neural representations that involve both facial features and facial configuration.

  15. Feeling Bad and Looking Worse: Negative Affect Is Associated with Reduced Perceptions of Face-Healthiness

    PubMed Central

    Mirams, Laura; Poliakoff, Ellen; Zandstra, Elizabeth H.; Hoeksma, Marco; Thomas, Anna; El-Deredy, Wael

    2014-01-01

    Some people perceive themselves to look more, or less attractive than they are in reality. We investigated the role of emotions in enhancement and derogation effects; specifically, whether the propensity to experience positive and negative emotions affects how healthy we perceive our own face to look and how we judge ourselves against others. A psychophysical method was used to measure healthiness of self-image and social comparisons of healthiness. Participants who self-reported high positive (N = 20) or negative affectivity (N = 20) judged themselves against healthy (red-tinged) and unhealthy looking (green-tinged) versions of their own and a stranger's face. An adaptive staircase procedure was used to measure perceptual thresholds. Participants high in positive affectivity were un-biased in their face health judgement. Participants high in negative affectivity, on the other hand, judged themselves as equivalent to less healthy looking versions of their own face and a stranger's face. Affective traits modulated self-image and social comparisons of healthiness. Face health judgement was also related to physical symptom perception and self-esteem; high physical symptom reports were associated with a less healthy self-image, and high self-reported (but not implicit) self-esteem was associated with more favourable social comparisons of healthiness. Subject to further validation, our novel face health judgement task could have utility as a perceptual measure of well-being. We are currently investigating whether face health judgement is sensitive to laboratory manipulations of mood. PMID:25259802
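
    The adaptive staircase used to estimate perceptual thresholds works roughly as follows. This is a generic 1-up-1-down sketch with a simulated deterministic observer, not the study's exact procedure; the step size, reversal count, and `respond` callback are all illustrative assumptions.

```python
def staircase(respond, start=0.5, step=0.05, n_reversals=6):
    """Simple 1-up-1-down adaptive staircase: decrease the stimulus level
    after a 'yes' response, increase it after a 'no', and estimate the
    threshold as the mean of the levels at which the direction reversed.
    `respond(level)` is a hypothetical observer returning True/False."""
    level, direction = start, 0
    reversals = []
    while len(reversals) < n_reversals:
        new_dir = -1 if respond(level) else 1
        if direction and new_dir != direction:   # direction flipped: a reversal
            reversals.append(level)
        direction = new_dir
        level = max(0.0, level + new_dir * step)
    return sum(reversals) / len(reversals)

# Deterministic observer with a true threshold of 0.3
thresh = staircase(lambda lv: lv >= 0.3)
print(round(thresh, 2))
```

    The staircase converges on the level where responses flip, so most trials are spent near the threshold rather than at uninformative extremes.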

  16. Determining the Molecular Growth Mechanisms of Protein Crystal Faces by Atomic Force Microscopy

    NASA Technical Reports Server (NTRS)

    Nadarajah, Arunan; Li, Huayu; Pusey, Marc L.

    1999-01-01

    A high resolution atomic force microscopy (AFM) study had shown that the molecular packing on the tetragonal lysozyme (110) face corresponded to only one of two possible packing arrangements, suggesting that growth layers on this face were of bimolecular height. Theoretical analyses of the packing also indicated that growth of this face should proceed by the addition of growth units of at least tetramer size corresponding to the 4₃ helices in the crystal. In this study an AFM linescan technique was devised to measure the dimensions of individual growth units on protein crystal faces as they were being incorporated into the lattice. Images of individual growth events on the (110) face of tetragonal lysozyme crystals were observed, shown by jump discontinuities in the growth step in the linescan images as shown in the figure. The growth unit dimension in the scanned direction was obtained from these images. A large number of scans in two directions on the (110) face were performed and the distribution of lysozyme growth unit sizes was obtained. A variety of unit sizes corresponding to 4₃ helices were shown to participate in the growth process, with the 4₃ tetramer being the minimum observed size. This technique represents a new application for AFM, allowing time-resolved studies of molecular processes to be carried out.

  17. Holistic face processing can inhibit recognition of forensic facial composites.

    PubMed

    McIntyre, Alex H; Hancock, Peter J B; Frowd, Charlie D; Langton, Stephen R H

    2016-04-01

    Facial composite systems help eyewitnesses to convey the appearance of criminals. However, likenesses created by unfamiliar witnesses will not be completely accurate, and people familiar with the target can find them difficult to identify. Faces are processed holistically; we explore whether this impairs identification of inaccurate composite images and whether recognition can be improved. In Experiment 1 (n = 64), an imaging technique was used to make composites of celebrity faces more accurate, and identification was contrasted with the original composite images. Corrected composites were better recognized, confirming that errors in production of the likenesses impair identification. The influence of holistic face processing was explored by misaligning the top and bottom parts of the composites (cf. Young, Hellawell, & Hay, 1987). Misalignment impaired recognition of corrected composites, but identification of the original, inaccurate composites significantly improved. This effect was replicated with facial composites of noncelebrities in Experiment 2 (n = 57). We conclude that, like real faces, facial composites are processed holistically: recognition is impaired because, unlike real faces, composites contain inaccuracies, and holistic face processing makes it difficult to perceive identifiable features. The effect was consistent across composites of celebrities and composites of personally familiar people. Our findings suggest that identification of forensic facial composites can be enhanced by presenting composites in a misaligned format. (c) 2016 APA, all rights reserved.

  18. Locally linear regression for pose-invariant face recognition.

    PubMed

    Chai, Xiujuan; Shan, Shiguang; Chen, Xilin; Gao, Wen

    2007-07-01

    The variation of facial appearance due to viewpoint (pose) degrades face recognition systems considerably and is one of the main bottlenecks in face recognition. One possible solution is to generate a virtual frontal view from any given nonfrontal view to obtain a virtual gallery/probe face. Following this idea, this paper proposes a simple but efficient novel locally linear regression (LLR) method, which generates the virtual frontal view from a given nonfrontal face image. We first justify the basic assumption of the paper that there exists an approximate linear mapping between a nonfrontal face image and its frontal counterpart. Then, by formulating the estimation of the linear mapping as a prediction problem, we present the regression-based solution, i.e., globally linear regression. To improve the prediction accuracy in the case of coarse alignment, LLR is further proposed. In LLR, we first perform dense sampling in the nonfrontal face image to obtain many overlapping local patches. Then, the linear regression technique is applied to each small patch to predict its virtual frontal patch. Through the combination of all these patches, the virtual frontal view is generated. The experimental results on the CMU PIE database show a distinct advantage of the proposed method over the Eigen light-field method.
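    The patch-wise scheme this abstract describes, dense overlapping patches, one linear regressor per patch location, and recombination of the predicted frontal patches, can be sketched as follows. This is an illustrative reconstruction under simplifying assumptions (tiny images, plain least squares, uniform averaging of overlaps), not the authors' implementation:

    ```python
    import numpy as np

    def extract_patches(img, patch, stride):
        """Return patch positions and the raveled patch vectors."""
        H, W = img.shape
        pos = [(i, j) for i in range(0, H - patch + 1, stride)
                      for j in range(0, W - patch + 1, stride)]
        return pos, [img[i:i + patch, j:j + patch].ravel() for i, j in pos]

    def train_llr(nonfrontal_imgs, frontal_imgs, patch=2, stride=2):
        """Learn one least-squares map per patch location: frontal ~ X @ W."""
        pos, _ = extract_patches(nonfrontal_imgs[0], patch, stride)
        Xs = [extract_patches(im, patch, stride)[1] for im in nonfrontal_imgs]
        Ys = [extract_patches(im, patch, stride)[1] for im in frontal_imgs]
        maps = []
        for k in range(len(pos)):
            X = np.stack([x[k] for x in Xs])            # (n_train, patch*patch)
            Y = np.stack([y[k] for y in Ys])
            W, *_ = np.linalg.lstsq(X, Y, rcond=None)   # per-patch linear map
            maps.append(W)
        return pos, maps

    def synthesize_frontal(img, pos, maps, patch):
        """Predict each frontal patch and average the overlapping regions."""
        out = np.zeros_like(img, dtype=float)
        cnt = np.zeros_like(img, dtype=float)
        for (i, j), W in zip(pos, maps):
            pred = img[i:i + patch, j:j + patch].ravel() @ W
            out[i:i + patch, j:j + patch] += pred.reshape(patch, patch)
            cnt[i:i + patch, j:j + patch] += 1.0
        return out / np.maximum(cnt, 1.0)
    ```

    In the paper the regressors are trained on aligned nonfrontal/frontal image pairs; here any consistent linear relation between the two views would be recovered per patch in the same way.
    
    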

  19. Automatic forensic face recognition from digital images.

    PubMed

    Peacock, C; Goode, A; Brett, A

    2004-01-01

    Digital image evidence is now widely available from criminal investigations and surveillance operations, often captured by security and surveillance CCTV. This has resulted in a growing demand from law enforcement agencies for automatic person-recognition based on image data. In forensic science, a fundamental requirement for such automatic face recognition is to evaluate the weight that can justifiably be attached to this recognition evidence in a scientific framework. This paper describes a pilot study carried out by the Forensic Science Service (UK) which explores the use of digital facial images in forensic investigation. For the purpose of the experiment a specific software package was chosen (Image Metrics Optasia). The paper does not describe the techniques used by the software to reach its decision of probabilistic matches to facial images, but accepts the output of the software as though it were a 'black box'. In this way, the paper lays a foundation for how face recognition systems can be compared in a forensic framework. The aim of the paper is to explore how reliably and under what conditions digital facial images can be presented in evidence.

  20. TDC-based readout electronics for real-time acquisition of high resolution PET bio-images

    NASA Astrophysics Data System (ADS)

    Marino, N.; Saponara, S.; Ambrosi, G.; Baronti, F.; Bisogni, M. G.; Cerello, P.; Ciciriello, F.; Corsi, F.; Fanucci, L.; Ionica, M.; Licciulli, F.; Marzocca, C.; Morrocchi, M.; Pennazio, F.; Roncella, R.; Santoni, C.; Wheadon, R.; Del Guerra, A.

    2013-02-01

    Positron emission tomography (PET) is a clinical and research tool for in vivo metabolic imaging. The demand for better image quality entails continuous research to improve PET instrumentation. In clinical applications, PET image quality benefits from the time-of-flight (TOF) feature. Indeed, by measuring the photon arrival times on the detectors with a resolution better than 100 ps, the annihilation point can be estimated with centimeter resolution. This leads to a better noise level, contrast and clarity of detail in the images, using either analytical or iterative reconstruction algorithms. This work discusses a silicon photomultiplier (SiPM)-based, magnetic-field compatible TOF-PET module with depth of interaction (DOI) correction. The detector features a 3D architecture with two tiles of SiPMs coupled to a single LYSO scintillator on both of its faces. The real-time front-end electronics is based on a current-mode ASIC in which a low-input-impedance, fast current buffer allows the required time resolution to be achieved. A pipelined time-to-digital converter (TDC) measures and digitizes the arrival time and the energy of the events with resolutions of 100 ps and 400 ps, respectively. An FPGA clusters the data and evaluates the DOI, with a simulated z resolution of the PET image of 1.4 mm FWHM.
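    The link this abstract draws between timing resolution and spatial localization (sub-100 ps timing giving centimeter positioning) follows from the speed of light: an arrival-time difference of Δt between the two photons shifts the annihilation point by c·Δt/2 along the line of response. A minimal illustration; the constant is rounded and the function name is ours:

    ```python
    C_MM_PER_PS = 0.2998  # speed of light: roughly 0.3 mm per picosecond

    def annihilation_offset_mm(t1_ps, t2_ps):
        """Offset of the annihilation point from the midpoint of the line
        of response, given the two photon arrival times in picoseconds."""
        return C_MM_PER_PS * (t1_ps - t2_ps) / 2.0

    # A 100 ps timing difference corresponds to about 15 mm, i.e. the
    # centimeter-scale localization quoted in the abstract.
    offset = annihilation_offset_mm(100.0, 0.0)
    ```

    This is why shaving tens of picoseconds off the TDC timestamp resolution translates directly into sharper constraints on the reconstructed position.
    
    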
