Walker, Mirella; Schönborn, Sandro; Greifeneder, Rainer; Vetter, Thomas
2018-01-01
Upon a first encounter, individuals spontaneously associate faces with certain personality dimensions. Such first impressions can strongly impact judgments and decisions and may prove highly consequential. Researchers investigating the impact of facial information often rely on (a) real photographs that have been selected to vary on the dimension of interest, (b) morphed photographs, or (c) computer-generated faces (avatars). All three approaches have distinct advantages. Here we present the Basel Face Database, which combines these advantages. In particular, the Basel Face Database consists of real photographs that are subtly, but systematically manipulated to show variations in the perception of the Big Two and the Big Five personality dimensions. To this end, the information specific to each psychological dimension is isolated and modeled in new photographs. Two studies serve as systematic validation of the Basel Face Database. The Basel Face Database opens a new pathway for researchers across psychological disciplines to investigate effects of perceived personality.
A Benchmark and Comparative Study of Video-Based Face Recognition on COX Face Database.
Huang, Zhiwu; Shan, Shiguang; Wang, Ruiping; Zhang, Haihong; Lao, Shihong; Kuerban, Alifu; Chen, Xilin
2015-12-01
Face recognition with still face images has been widely studied, while research on video-based face recognition is still relatively inadequate, especially in terms of benchmark datasets and comparisons. Real-world video-based face recognition applications require techniques for three distinct scenarios: 1) Video-to-Still (V2S); 2) Still-to-Video (S2V); and 3) Video-to-Video (V2V), respectively, taking video or still image as query or target. To the best of our knowledge, few datasets and evaluation protocols have been established to benchmark all three scenarios. In order to facilitate the study of this specific topic, this paper contributes a benchmarking and comparative study based on a newly collected still/video face database, named COX Face DB. Specifically, we make three contributions. First, we collect and release a large-scale still/video face database to simulate video surveillance with three different video-based face recognition scenarios (i.e., V2S, S2V, and V2V). Second, for benchmarking the three scenarios designed on our database, we review and experimentally compare a number of existing set-based methods. Third, we further propose a novel Point-to-Set Correlation Learning (PSCL) method, and experimentally show that it can be used as a promising baseline method for V2S/S2V face recognition on COX Face DB. Extensive experimental results clearly demonstrate that video-based face recognition needs more effort, and our COX Face DB is a good benchmark database for evaluation.
NASA Astrophysics Data System (ADS)
Cui, Chen; Asari, Vijayan K.
2014-03-01
Biometric features such as fingerprints, iris patterns, and face features help to identify people and restrict access to secure areas by performing advanced pattern analysis and matching. Face recognition is one of the most promising biometric methodologies for human identification in a non-cooperative security environment. However, the recognition results obtained by face recognition systems are affected by several variations that may happen to the patterns in an unrestricted environment. As a result, several algorithms have been developed for extracting different facial features for face recognition. Due to the various possible challenges of data captured at different lighting conditions, viewing angles, facial expressions, and partial occlusions in natural environmental conditions, automatic facial recognition still remains a difficult issue that needs to be resolved. In this paper, we propose a novel approach to tackling some of these issues by analyzing the local textural descriptions for facial feature representation. The textural information is extracted by an enhanced local binary pattern (ELBP) description of all the local regions of the face. The relationship of each pixel with respect to its neighborhood is extracted and employed to calculate the new representation. ELBP reconstructs a much better textural feature extraction vector from an original gray-level image in different lighting conditions. The dimensionality of the texture image is reduced by principal component analysis performed on each local face region. Each low-dimensional vector representing a local region is then weighted based on the significance of the sub-region. The weight of each sub-region is determined by employing the local variance estimate of the respective region, which represents the significance of the region. The final facial textural feature vector is obtained by concatenating the reduced-dimensional weight sets of all the modules (sub-regions) of the face image. Experiments conducted on various popular face databases show promising performance of the proposed algorithm in varying lighting, expression, and partial occlusion conditions. Four databases were used for testing the performance of the proposed system: the Yale Face database, the Extended Yale Face database B, the Japanese Female Facial Expression database, and the CMU AMP Facial Expression database. The experimental results on all four databases show the effectiveness of the proposed system. Also, the computation cost is lower because of the simplified calculation steps. Research work is progressing to investigate the effectiveness of the proposed face recognition method on pose-varying conditions as well. It is envisaged that a multilane approach of trained frameworks at different pose bins and an appropriate voting strategy would lead to a good recognition rate in such situations.
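The modular texture pipeline described in this abstract (local texture description per face region, per-region PCA, and variance-based region weighting) can be sketched as follows. This is a minimal illustration only: it uses the standard local binary pattern from scikit-image as a stand-in for the paper's enhanced LBP (ELBP), and the grid size, LBP parameters, and number of principal components are assumed values, not those of the paper.

```python
# Sketch: regional LBP texture + per-region PCA + variance-based region weighting.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA

def regional_lbp_features(images, grid=(4, 4), n_components=20):
    """images: array (n_samples, H, W) of gray-level face images."""
    n, H, W = images.shape
    rh, rw = H // grid[0], W // grid[1]
    feature_blocks = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            # Texture description of one sub-region for every image.
            region = images[:, i * rh:(i + 1) * rh, j * rw:(j + 1) * rw]
            lbp = np.stack([local_binary_pattern(r, P=8, R=1, method="uniform")
                            for r in region])
            flat = lbp.reshape(n, -1)
            # Dimensionality reduction per region, as in the abstract.
            k = min(n_components, n, flat.shape[1])
            reduced = PCA(n_components=k).fit_transform(flat)
            # Weight the region by its variance (a proxy for "significance").
            weight = flat.var()
            feature_blocks.append(weight * reduced)
    # Final descriptor: concatenation of all weighted regional vectors.
    return np.hstack(feature_blocks)
```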
Novel face-detection method under various environments
NASA Astrophysics Data System (ADS)
Jing, Min-Quan; Chen, Ling-Hwei
2009-06-01
We propose a method to detect a face with different poses under various environments. On the basis of skin color information, skin regions are first extracted from an input image. Next, the shoulder part is cut out by using shape information and the head part is then identified as a face candidate. For a face candidate, a set of geometric features is applied to determine if it is a profile face. If not, then a set of eyelike rectangles extracted from the face candidate and the lighting distribution are used to determine if the face candidate is a nonprofile face. Experimental results show that the proposed method is robust under a wide range of lighting conditions, different poses, and races. The detection rate for the HHI face database is 93.68%. For the Champion face database, the detection rate is 95.15%.
Heterogeneous Face Attribute Estimation: A Deep Multi-Task Learning Approach.
Han, Hu; K Jain, Anil; Shan, Shiguang; Chen, Xilin
2017-08-10
Face attribute estimation has many potential applications in video surveillance, face retrieval, and social media. While a number of methods have been proposed for face attribute estimation, most of them did not explicitly consider the attribute correlation and heterogeneity (e.g., ordinal vs. nominal and holistic vs. local) during feature representation learning. In this paper, we present a Deep Multi-Task Learning (DMTL) approach to jointly estimate multiple heterogeneous attributes from a single face image. In DMTL, we tackle attribute correlation and heterogeneity with convolutional neural networks (CNNs) consisting of shared feature learning for all the attributes, and category-specific feature learning for heterogeneous attributes. We also introduce an unconstrained face database (LFW+), an extension of the public-domain LFW, with heterogeneous demographic attributes (age, gender, and race) obtained via crowdsourcing. Experimental results on benchmarks with multiple face attributes (MORPH II, LFW+, CelebA, LFWA, and FotW) show that the proposed approach has superior performance compared to the state of the art. Finally, evaluations on a public-domain face database (LAP) with a single attribute show that the proposed approach has excellent generalization ability.
Face liveness detection using shearlet-based feature descriptors
NASA Astrophysics Data System (ADS)
Feng, Litong; Po, Lai-Man; Li, Yuming; Yuan, Fang
2016-07-01
Face recognition is a widely used biometric technology due to its convenience, but it is vulnerable to spoofing attacks made by nonreal faces such as photographs or videos of valid users. The antispoofing problem must be well resolved before face recognition can be widely applied in daily life. Face liveness detection is a core technology to make sure that the input face is a live person. However, this is still very challenging using conventional liveness detection approaches of texture analysis and motion detection. The aim of this paper is to propose a feature descriptor and an efficient framework that can be used to effectively deal with the face liveness detection problem. In this framework, new feature descriptors are defined using a multiscale directional transform (shearlet transform). Then, stacked autoencoders and a softmax classifier are concatenated to detect face liveness. We evaluated this approach using the CASIA Face Antispoofing database and the Replay-Attack database. The experimental results show that our approach performs better than the state-of-the-art techniques following the provided protocols of these databases, and it is possible to significantly enhance the security of the face recognition biometric system. In addition, the experimental results also demonstrate that this framework can be easily extended to classify different spoofing attacks.
Tensor discriminant color space for face recognition.
Wang, Su-Jing; Yang, Jian; Zhang, Na; Zhou, Chun-Guang
2011-09-01
Recent research efforts reveal that color may provide useful information for face recognition. For different visual tasks, the choice of a color space is generally different. How can a color space be sought for a specific face recognition problem? To address this problem, this paper represents a color image as a third-order tensor and presents the tensor discriminant color space (TDCS) model. The model can keep the underlying spatial structure of color images. With the definition of n-mode between-class scatter matrices and within-class scatter matrices, TDCS constructs an iterative procedure to obtain one color space transformation matrix and two discriminant projection matrices by maximizing the ratio of these two scatter matrices. The experiments are conducted on two color face databases, the AR and Georgia Tech face databases, and the results show that both the performance and the efficiency of the proposed method are better than those of the state-of-the-art color image discriminant model, which involves one color space transformation matrix and one discriminant projection matrix, specifically on a complicated face database with various pose variations.
NASA Astrophysics Data System (ADS)
Petpairote, Chayanut; Madarasmi, Suthep; Chamnongthai, Kosin
2018-01-01
The practical identification of individuals using facial recognition techniques requires matching faces with specific expressions to faces in a neutral-face database. A method for facial recognition under varied expressions against neutral face samples of individuals is proposed, based on expression recognition, expression warping, and the use of a virtual expression-face database. In this method, facial expressions are recognized and the input expression faces are classified into facial expression groups. To aid facial recognition, the virtual expression-face database is sorted into average facial-expression shapes and by coarse- and fine-featured facial textures. Wrinkle information is also employed in classification, using a masking process to adjust input faces to match the expression-face database. We evaluate the performance of the proposed method using the CMU Multi-PIE, Cohn-Kanade, and AR expression-face databases, and we find that it provides significantly improved results in terms of face recognition accuracy compared to conventional methods and is acceptable for facial recognition under expression variation.
Embedded wavelet-based face recognition under variable position
NASA Astrophysics Data System (ADS)
Cotret, Pascal; Chevobbe, Stéphane; Darouich, Mehdi
2015-02-01
For several years, face recognition has been a hot topic in the image processing field: this technique is applied in several domains such as CCTV and electronic device unlocking. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of subject position robustness and performance on various systems. The use of the wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale Face Database B), that subject position in 3D space can vary up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on the approximation coefficients of the image wavelet transform: results are still satisfying after 3 levels of decomposition. Furthermore, the face database size can be divided by a factor of 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed; that is the reason why compression techniques such as the wavelet transform are interesting. Furthermore, it leads to a low-complexity face detection stage compliant with the limited computation resources available on such systems. The approach described in this work is tested on three platforms, from a standard x86-based computer to nanocomputers such as the Raspberry Pi and SECO boards. For K = 3 and a database with 40 faces, the mean execution time for one frame is 0.64 ms on an x86-based computer, 9 ms on a SECO board, and 26 ms on a Raspberry Pi (model B).
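The storage-reduction argument above (keeping only the level-K approximation band of the wavelet transform, so each stored face template shrinks by roughly 2^(2K)) can be illustrated with a short sketch. The wavelet family and K = 3 are assumptions for illustration, not necessarily the authors' choices.

```python
# Sketch: level-K wavelet approximation band as a compact face template.
import numpy as np
import pywt

def wavelet_template(face, wavelet="haar", K=3):
    """face: 2-D gray-level array. Returns the level-K approximation band."""
    coeffs = pywt.wavedec2(face, wavelet=wavelet, level=K)
    approx = coeffs[0]                      # cA_K, the low-pass approximation
    return approx.astype(np.float32)

face = np.random.rand(128, 128)
tpl = wavelet_template(face)
print(face.size / tpl.size)                 # ~64 for K = 3, matching 2^(2K)
```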
The construction FACE database - Codifying the NIOSH FACE reports.
Dong, Xiuwen Sue; Largay, Julie A; Wang, Xuanwen; Cain, Chris Trahan; Romano, Nancy
2017-09-01
The National Institute for Occupational Safety and Health (NIOSH) has published reports detailing the results of investigations on selected work-related fatalities through the Fatality Assessment and Control Evaluation (FACE) program since 1982. Information from construction-related FACE reports was coded into the Construction FACE Database (CFD). Use of the CFD was illustrated by analyzing major CFD variables. A total of 768 construction fatalities were included in the CFD. Information on decedents, safety training, use of PPE, and FACE recommendations was coded. Analysis shows that one in five decedents in the CFD died within the first two months on the job; 75% and 43% of reports recommended having safety training or installing protective equipment, respectively. Comprehensive research using FACE reports may improve understanding of work-related fatalities and provide much-needed information on injury prevention. The CFD allows researchers to analyze the FACE reports quantitatively and efficiently. Copyright © 2017 Elsevier Ltd and National Safety Council. All rights reserved.
DeitY-TU face database: its design, multiple camera capturing, characteristics, and evaluation
NASA Astrophysics Data System (ADS)
Bhowmik, Mrinal Kanti; Saha, Kankan; Saha, Priya; Bhattacharjee, Debotosh
2014-10-01
The development of the latest face databases is providing researchers with different and realistic problems that play an important role in the development of efficient algorithms for solving the difficulties of automatic recognition of human faces. This paper presents the creation of a new visual face database, named the Department of Electronics and Information Technology-Tripura University (DeitY-TU) face database. It contains face images of 524 persons belonging to different nontribes and Mongolian tribes of north-east India, with their anthropometric measurements for identification. Database images are captured within a room with controlled variations in illumination, expression, and pose, along with variability in age, gender, accessories, make-up, and partial occlusion. Each image contains the combined primary challenges of face recognition, i.e., illumination, expression, and pose. This database also represents some new features: soft biometric traits such as moles, freckles, scars, etc., and facial anthropometric variations that may be helpful to researchers for biometric recognition. It also gives a comparative study of existing two-dimensional face image databases. The database has been tested using two baseline algorithms, linear discriminant analysis and principal component analysis, which may be used by other researchers as control algorithm performance scores.
The Dartmouth Database of Children’s Faces: Acquisition and Validation of a New Face Stimulus Set
Dalrymple, Kirsten A.; Gomez, Jesse; Duchaine, Brad
2013-01-01
Facial identity and expression play critical roles in our social lives. Faces are therefore frequently used as stimuli in a variety of areas of scientific research. Although several extensive and well-controlled databases of adult faces exist, few databases include children’s faces. Here we present the Dartmouth Database of Children’s Faces, a set of photographs of 40 male and 40 female Caucasian children between 6 and 16 years of age. Models posed eight facial expressions and were photographed from five camera angles under two lighting conditions. Models wore black hats and black gowns to minimize extra-facial variables. To validate the images, independent raters identified facial expressions, rated their intensity, and provided an age estimate for each model. The Dartmouth Database of Children’s Faces is freely available for research purposes and can be downloaded by contacting the corresponding author by email. PMID:24244434
Multiple Representations-Based Face Sketch-Photo Synthesis.
Peng, Chunlei; Gao, Xinbo; Wang, Nannan; Tao, Dacheng; Li, Xuelong; Li, Jie
2016-11-01
Face sketch-photo synthesis plays an important role in law enforcement and digital entertainment. Most of the existing methods use only pixel intensities as the feature. Since face images can be described using features from multiple aspects, this paper presents a novel multiple representations-based face sketch-photo synthesis method that adaptively combines multiple representations to represent an image patch. In particular, it combines multiple features from face images processed using multiple filters and deploys Markov networks to exploit the interacting relationships between neighboring image patches. The proposed framework can be solved using an alternating optimization strategy, and it normally converges in only five outer iterations in the experiments. Our experimental results on the Chinese University of Hong Kong (CUHK) face sketch database, celebrity photos, the CUHK Face Sketch FERET Database, the IIIT-D Viewed Sketch Database, and forensic sketches demonstrate the effectiveness of our method for face sketch-photo synthesis. In addition, cross-database and database-dependent style-synthesis evaluations demonstrate the generalizability of this novel method and suggest promising solutions for face identification in forensic science.
Multi-pose facial correction based on Gaussian process with combined kernel function
NASA Astrophysics Data System (ADS)
Shi, Shuyan; Ji, Ruirui; Zhang, Fan
2018-04-01
In order to improve the recognition rate across various poses, this paper proposes a facial correction method based on a Gaussian process that builds a nonlinear regression model between the frontal and the side face with a combined kernel function. Face images with horizontal angles from -45° to +45° can be properly corrected to frontal faces. Finally, a Support Vector Machine is employed for face recognition. Experiments on the CAS-PEAL-R1 face database show that the Gaussian process can weaken the influence of pose changes and improve the accuracy of face recognition to a certain extent.
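A rough sketch of the pose-correction idea, under stated assumptions: a Gaussian-process regression maps non-frontal face features to frontal ones using a combined (sum) kernel. The RBF + DotProduct + WhiteKernel combination, the synthetic data, and all hyperparameters are illustrative assumptions; the paper's exact kernel design may differ.

```python
# Sketch: GP regression from side-pose features to frontal features with a sum kernel.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, DotProduct, WhiteKernel

# X_side: features of rotated faces, Y_front: frontal features of the same identities
# (both hypothetical arrays of shape (n_samples, n_features), synthesized here).
rng = np.random.default_rng(0)
X_side = rng.normal(size=(100, 30))
Y_front = X_side @ rng.normal(size=(30, 30)) + 0.05 * rng.normal(size=(100, 30))

kernel = RBF(length_scale=1.0) + DotProduct() + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_side, Y_front)

# "Corrected" frontal features for a new side-pose face, to be fed to an SVM classifier.
corrected = gp.predict(X_side[:1])
```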
Uyghur face recognition method combining 2DDCT with POEM
NASA Astrophysics Data System (ADS)
Yi, Lihamu; Ya, Ermaimaiti
2017-11-01
In this paper, in light of the reduced recognition rate and poor robustness of Uyghur face recognition under illumination variation and partial occlusion, a Uyghur face recognition method combining the Two-Dimensional Discrete Cosine Transform (2DDCT) with Patterns of Oriented Edge Magnitudes (POEM) was proposed. Firstly, the Uyghur face images were divided into an 8×8 block matrix, and the blocked Uyghur face images were converted into the frequency domain using the 2DDCT; secondly, the Uyghur face images were compressed to exclude non-sensitive medium-frequency parts and non-high-frequency parts, reducing the feature dimensions necessary for the Uyghur face images and further reducing the amount of computation; thirdly, the corresponding POEM histograms of the Uyghur face images were obtained by calculating the POEM feature quantities; fourthly, the POEM histograms were cascaded together as the texture histogram of the central feature point to obtain the texture features of the Uyghur face feature points; finally, classification of the training samples was carried out using a deep learning algorithm. The simulation results showed that the proposed algorithm further improved the recognition rate on the self-built Uyghur face database, greatly improved the computing speed on the self-built Uyghur face database, and had strong robustness.
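The 8×8 block 2DDCT step can be sketched as below (the POEM descriptor and the deep-learning classifier are omitted). scipy's separable DCT-II is used; apart from the 8×8 block size taken from the abstract, the details are illustrative assumptions.

```python
# Sketch: per-block 2-D DCT of a face image, as the first stage of the pipeline above.
import numpy as np
from scipy.fftpack import dct

def block_dct2(face, block=8):
    """face: 2-D array with sides divisible by `block`. Returns per-block DCT coefficients."""
    H, W = face.shape
    out = np.zeros_like(face, dtype=float)
    for i in range(0, H, block):
        for j in range(0, W, block):
            patch = face[i:i + block, j:j + block]
            # Separable 2-D DCT-II: transform rows, then columns.
            out[i:i + block, j:j + block] = dct(dct(patch, axis=0, norm="ortho"),
                                                axis=1, norm="ortho")
    return out
```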
Development of Human Face Literature Database Using Text Mining Approach: Phase I.
Kaur, Paramjit; Krishan, Kewal; Sharma, Suresh K
2018-06-01
The face is an important part of the human body by which an individual communicates in society. Its importance can be highlighted by the fact that a person deprived of a face cannot sustain themselves in the living world. The number of experiments being performed and the number of research papers being published in the domain of the human face have surged in the past few decades. Several scientific disciplines conduct research on the human face, including Medical Science, Anthropology, Information Technology (Biometrics, Robotics, Artificial Intelligence, etc.), Psychology, Forensic Science, and Neuroscience. This highlights the need to collect and manage the data concerning the human face so that free public access to it can be provided to the scientific community. This can be attained by developing databases and tools on the human face using a bioinformatics approach. The current research focuses on creating a database of literature data on the human face. The database can be accessed on the basis of specific keywords, journal name, date of publication, author's name, etc. The collected research papers are stored in the form of a database. Hence, the database will be beneficial to the research community, as comprehensive information dedicated to the human face can be found in one place. Information related to facial morphologic features, facial disorders, facial asymmetry, facial abnormalities, and many other parameters can be extracted from this database. The front end has been developed using Hypertext Markup Language (HTML) and Cascading Style Sheets (CSS). The back end has been developed using the hypertext preprocessor (PHP), with JavaScript as the scripting language. MySQL is used for database development, as it is the most widely used relational database management system. The XAMPP (cross-platform, Apache, MySQL, PHP, Perl) open-source web application software has been used as the server. The database is still under development, and this paper discusses the initial steps of its creation and the work done to date.
Neuro-fuzzy model for estimating race and gender from geometric distances of human face across pose
NASA Astrophysics Data System (ADS)
Nanaa, K.; Rahman, M. N. A.; Rizon, M.; Mohamad, F. S.; Mamat, M.
2018-03-01
Classifying human faces based on race and gender is a vital process in face recognition. It contributes to an index database and eases 3D synthesis of the human face. Identifying race and gender based on intrinsic factors is problematic and is better suited to a nonlinear model for the estimation process. In this paper, we aim to estimate race and gender under varied head poses. For this purpose, we collect a dataset from the PICS and CAS-PEAL databases, detect the landmarks, and rotate them to the frontal pose. After geometric distances are calculated, all distance values are normalized. Implementation is carried out using a neural network model and a fuzzy logic model, which are combined using an adaptive neuro-fuzzy model. The experimental results showed that optimizing the fuzzy membership functions gives a better assessment rate, and that estimating race contributes to a more accurate gender assessment.
Abraham, Manoj T; Rousso, Joseph J; Hu, Shirley; Brown, Ryan F; Moscatello, Augustine L; Finn, J Charles; Patel, Neha A; Kadakia, Sameep P; Wood-Smith, Donald
2017-07-01
The American Academy of Facial Plastic and Reconstructive Surgery FACE TO FACE database was created to gather and organize patient data primarily from international humanitarian surgical mission trips, as well as local humanitarian initiatives. Similar to cloud-based Electronic Medical Records, this web-based user-generated database allows for more accurate tracking of provider and patient information and outcomes, regardless of site, and is useful when coordinating follow-up care for patients. The database is particularly useful on international mission trips as there are often different surgeons who may provide care to patients on subsequent missions, and patients who may visit more than 1 mission site. Ultimately, by pooling data across multiples sites and over time, the database has the potential to be a useful resource for population-based studies and outcome data analysis. The objective of this paper is to delineate the process involved in creating the AAFPRS FACE TO FACE database, to assess its functional utility, to draw comparisons to electronic medical records systems that are now widely implemented, and to explain the specific benefits and disadvantages of the use of the database as it was implemented on recent international surgical mission trips.
NASA Astrophysics Data System (ADS)
Poinsot, Audrey; Yang, Fan; Brost, Vincent
2011-02-01
Including multiple sources of information in personal identity recognition and verification gives the opportunity to greatly improve performance. We propose a contactless biometric system that combines two modalities: palmprint and face. Hardware implementations are proposed on Texas Instruments Digital Signal Processor and Xilinx Field-Programmable Gate Array (FPGA) platforms. The algorithmic chain consists of preprocessing (which includes palm extraction from hand images), Gabor feature extraction, comparison by Hamming distance, and score fusion. Fusion possibilities are discussed and tested first using a bimodal database of 130 subjects that we designed (the uB database), and then two common public biometric databases (AR for face and PolyU for palmprint). High performance has been obtained for both recognition and verification purposes: a recognition rate of 97.49% with the AR-PolyU database and an equal error rate of 1.10% on the uB database using only two training samples per subject. Hardware results demonstrate that preprocessing can easily be performed during the acquisition phase, and multimodal biometric recognition can be performed almost instantly (0.4 ms on the FPGA). We show the feasibility of a robust and efficient multimodal hardware biometric system that offers several advantages, such as user-friendliness and flexibility.
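A minimal sketch of the matching chain described above, under stated assumptions: Gabor filtering, binarized responses compared with a normalized Hamming distance, and a simple weighted-sum fusion of the face and palmprint scores. The filter frequency and the fusion weight are illustrative, not the values used in the DSP/FPGA implementation.

```python
# Sketch: Gabor code + Hamming distance + weighted score fusion of two modalities.
import numpy as np
from skimage.filters import gabor

def gabor_code(img, frequency=0.2):
    """Binarize the real and imaginary Gabor responses into a bit template."""
    real, imag = gabor(img, frequency=frequency)
    return np.concatenate([(real > 0).ravel(), (imag > 0).ravel()])

def hamming(code_a, code_b):
    return np.mean(code_a != code_b)        # normalized Hamming distance

def fused_score(face_a, face_b, palm_a, palm_b, w_face=0.5):
    d_face = hamming(gabor_code(face_a), gabor_code(face_b))
    d_palm = hamming(gabor_code(palm_a), gabor_code(palm_b))
    return w_face * d_face + (1.0 - w_face) * d_palm   # lower = better match
```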
The biometric-based module of smart grid system
NASA Astrophysics Data System (ADS)
Engel, E.; Kovalev, I. V.; Ermoshkina, A.
2015-10-01
Within the Smart Grid concept, a flexible biometric-based module based on Principal Component Analysis (PCA) and a selective Neural Network is developed. To form the selective Neural Network, the biometric-based module uses a method that includes three main stages: preliminary image processing, face localization, and face recognition. Experiments on the Yale face database show that (i) the selective Neural Network exhibits promising classification capability for face detection and recognition problems, and (ii) the proposed biometric-based module achieves near real-time face detection and recognition speed and competitive performance compared to some existing subspace-based methods.
Physiology-based face recognition in the thermal infrared spectrum.
Buddharaju, Pradeep; Pavlidis, Ioannis T; Tsiamyrtzis, Panagiotis; Bazakos, Mike
2007-04-01
The current dominant approaches to face recognition rely on facial characteristics that are on or over the skin. Some of these characteristics have low permanency and can be altered, and their phenomenology varies significantly with environmental factors (e.g., lighting). Many methodologies have been developed to address these problems to various degrees. However, the current framework of face recognition research has a potential weakness due to its very nature. We present a novel framework for face recognition based on physiological information. The motivation behind this effort is to capitalize on the permanency of innate characteristics that are under the skin. To establish feasibility, we propose a specific methodology to capture facial physiological patterns using the bioheat information contained in thermal imagery. First, the algorithm delineates the human face from the background using a Bayesian framework. Then, it localizes the superficial blood vessel network using image morphology. The extracted vascular network produces contour shapes that are characteristic of each individual. The branching points of the skeletonized vascular network are referred to as Thermal Minutia Points (TMPs) and constitute the feature database. To render the method robust to facial pose variations, we collect, for each subject to be stored in the database, five different pose images (center, mid-left profile, left profile, mid-right profile, and right profile). During the classification stage, the algorithm first estimates the pose of the test image. Then, it matches the local and global TMP structures extracted from the test image with those of the corresponding pose images in the database. We have conducted experiments on a multipose database of thermal facial images collected in our laboratory, as well as on the time-gap database of the University of Notre Dame. The experimental results show that the proposed methodology has merit, especially with respect to the problem of low permanence over time. More importantly, the results demonstrate the feasibility of the physiological framework in face recognition and open the way for further methodological and experimental research in the area.
False match elimination for face recognition based on SIFT algorithm
NASA Astrophysics Data System (ADS)
Gu, Xuyuan; Shi, Ping; Shao, Meide
2011-06-01
The SIFT (Scale Invariant Feature Transform) is a well-known algorithm used to detect and describe local features in images. It is invariant to image scale and rotation, and robust to noise and illumination changes. In this paper, a novel method for face recognition based on SIFT is proposed, which combines SIFT optimization, mutual matching, and Progressive Sample Consensus (PROSAC), and can effectively eliminate false matches in face recognition. Experiments on the ORL face database show that many false matches can be eliminated and a better recognition rate is achieved.
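The false-match elimination idea can be sketched with OpenCV as follows: SIFT keypoints, mutual (cross-check) matching, and a robust geometric consistency check. Note that OpenCV's RANSAC-based homography estimation stands in here for the paper's PROSAC step, and the thresholds are illustrative assumptions.

```python
# Sketch: SIFT + mutual matching + robust geometric filtering of false matches.
import cv2
import numpy as np

def match_faces(img_a, img_b):
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0
    # crossCheck=True keeps only mutual nearest-neighbour matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    if len(matches) < 4:
        return 0
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # Robust estimation discards geometrically inconsistent (false) matches.
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return int(mask.sum()) if mask is not None else 0   # surviving matches
```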
NASA Astrophysics Data System (ADS)
Zhao, Yiqun; Wang, Zhihui
2015-12-01
The Internet of Things (IoT) is a kind of intelligent network that can be used to locate, track, identify, and supervise people and objects. One of the important core technologies of the intelligent visual Internet of Things (IVIOT) is the intelligent visual tag system. In this paper, research is carried out into visual feature extraction and the establishment of visual tags for the human face based on the ORL face database. Firstly, we use the principal component analysis (PCA) algorithm for face feature extraction, then adopt a support vector machine (SVM) for classification and face recognition, and finally establish a visual tag for each face that has been classified. We conducted an experiment on a group of face images; the results show that the proposed algorithm has good performance and can display the visual tags of objects conveniently.
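The PCA-plus-SVM recognition stage described above can be sketched in a few lines; scikit-learn's Olivetti faces (the AT&T/ORL set) serve as the face database here, and the number of components and SVM settings are illustrative assumptions rather than the paper's configuration.

```python
# Sketch: PCA feature extraction + SVM classification on the ORL/Olivetti faces.
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

faces = fetch_olivetti_faces()
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

model = make_pipeline(PCA(n_components=50, whiten=True),
                      SVC(kernel="rbf", C=10, gamma="scale"))
model.fit(X_train, y_train)
print("recognition accuracy:", model.score(X_test, y_test))
```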
Performance analysis of different database in new internet mapping system
NASA Astrophysics Data System (ADS)
Yao, Xing; Su, Wei; Gao, Shuai
2017-03-01
In the Mapping System of the New Internet, massive numbers of mapping entries between AID and RID need to be stored, added, updated, and deleted. In order to better deal with a large number of mapping-entry update and query requests, the Mapping System of the New Internet must use a high-performance database. In this paper, we focus on the performance of three typical databases, Redis, SQLite, and MySQL; the results show that a Mapping System based on different databases can adapt to different needs according to the actual situation.
In search of the emotional face: anger versus happiness superiority in visual search.
Savage, Ruth A; Lipp, Ottmar V; Craig, Belinda M; Becker, Stefanie I; Horstmann, Gernot
2013-08-01
Previous research has provided inconsistent results regarding visual search for emotional faces, yielding evidence for either anger superiority (i.e., more efficient search for angry faces) or happiness superiority effects (i.e., more efficient search for happy faces), suggesting that these results do not reflect on emotional expression, but on emotion (un-)related low-level perceptual features. The present study investigated possible factors mediating anger/happiness superiority effects; specifically, search strategy (fixed vs. variable target search; Experiment 1), stimulus choice (Nimstim database vs. Ekman & Friesen database; Experiments 1 and 2), and emotional intensity (Experiments 3 and 3a). Angry faces were found faster than happy faces regardless of search strategy using faces from the Nimstim database (Experiment 1). By contrast, a happiness superiority effect was evident in Experiment 2 when using faces from the Ekman and Friesen database. Experiment 3 employed angry, happy, and exuberant expressions (Nimstim database) and yielded anger and happiness superiority effects, respectively, highlighting the importance of the choice of stimulus materials. Ratings of the stimulus materials collected in Experiment 3a indicate that differences in perceived emotional intensity, pleasantness, or arousal do not account for differences in search efficiency. Across three studies, the current investigation indicates that prior reports of anger or happiness superiority effects in visual search are likely to reflect on low-level visual features associated with the stimulus materials used, rather than on emotion. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Face antispoofing based on frame difference and multilevel representation
NASA Astrophysics Data System (ADS)
Benlamoudi, Azeddine; Aiadi, Kamal Eddine; Ouafi, Abdelkrim; Samai, Djamel; Oussalah, Mourad
2017-07-01
Due to advances in technology, today's biometric systems have become vulnerable to spoof attacks made with fake faces. These attacks occur when an intruder attempts to fool an established face-based recognition system by presenting a fake face (e.g., a print photo or replay attack) in front of the camera instead of the intruder's genuine face. For this reason, face antispoofing has become a hot topic in the face analysis literature, and several applications with an antispoofing task have emerged recently. We propose a solution for distinguishing between real faces and fake ones. Our approach is based on extracting features from the difference between successive frames instead of individual frames. We also use a multilevel representation that divides the frame difference into multiple blocks. Different texture descriptors (local binary patterns, local phase quantization, and binarized statistical image features) are then applied to each block. After the feature extraction step, a Fisher score is applied to sort the features in ascending order according to the associated weights. Finally, a support vector machine is used to differentiate between real and fake faces. We tested our approach on three publicly available databases: the CASIA Face Antispoofing database, the Replay-Attack database, and the MSU Mobile Face Spoofing database. The proposed approach outperforms the other state-of-the-art methods across different media and quality metrics.
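A hedged sketch of the feature chain above: a frame difference, a multiblock split, uniform-LBP histograms per block (the LPQ and BSIF descriptors and the Fisher-score ranking are omitted), and an SVM classifier at the end. The grid size and LBP parameters are assumptions for illustration.

```python
# Sketch: frame-difference multiblock LBP features for face anti-spoofing.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def frame_diff_features(prev_frame, next_frame, grid=(3, 3)):
    diff = np.abs(next_frame.astype(float) - prev_frame.astype(float))
    H, W = diff.shape
    bh, bw = H // grid[0], W // grid[1]
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = diff[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            lbp = local_binary_pattern(block, P=8, R=1, method="uniform")
            hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
            hists.append(hist)
    return np.concatenate(hists)

# features: one vector per video clip, labels: 1 = real face, 0 = spoof (hypothetical arrays).
# clf = SVC(kernel="rbf").fit(features, labels)
```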
A familiarity disadvantage for remembering specific images of faces.
Armann, Regine G M; Jenkins, Rob; Burton, A Mike
2016-04-01
Familiar faces are remembered better than unfamiliar faces. Furthermore, it is much easier to match images of familiar than unfamiliar faces. These findings could be accounted for by quantitative differences in the ease with which faces are encoded. However, it has been argued that there are also some qualitative differences in familiar and unfamiliar face processing. Unfamiliar faces are held to rely on superficial, pictorial representations, whereas familiar faces invoke more abstract representations. Here we present 2 studies that show, for 1 task, an advantage for unfamiliar faces. In recognition memory, viewers are better able to reject a new picture, if it depicts an unfamiliar face. This rare advantage for unfamiliar faces supports the notion that familiarity brings about some representational changes, and further emphasizes the idea that theoretical accounts of face processing should incorporate familiarity. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Low-level image properties in facial expressions.
Menzel, Claudia; Redies, Christoph; Hayn-Leichsenring, Gregor U
2018-06-04
We studied low-level image properties of face photographs and analyzed whether they change with different emotional expressions displayed by an individual. Differences in image properties were measured in three databases that depicted a total of 167 individuals. Face images were used either in their original form, cut to a standard format or superimposed with a mask. Image properties analyzed were: brightness, redness, yellowness, contrast, spectral slope, overall power and relative power in low, medium and high spatial frequencies. Results showed that image properties differed significantly between expressions within each individual image set. Further, specific facial expressions corresponded to patterns of image properties that were consistent across all three databases. In order to experimentally validate our findings, we equalized the luminance histograms and spectral slopes of three images from a given individual who showed two expressions. Participants were significantly slower in matching the expression in an equalized compared to an original image triad. Thus, existing differences in these image properties (i.e., spectral slope, brightness or contrast) facilitate emotion detection in particular sets of face images. Copyright © 2018. Published by Elsevier B.V.
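Three of the low-level properties named above can be measured on a gray-level face image roughly as follows: mean brightness, RMS contrast, and the slope of the log power spectrum over log spatial frequency. The radial averaging used here is one common implementation choice, not necessarily the exact procedure of the study.

```python
# Sketch: brightness, RMS contrast, and spectral slope of a gray-level face image.
import numpy as np

def image_properties(img):
    img = img.astype(float)
    brightness = img.mean()
    contrast = img.std()                               # RMS contrast
    # Rotationally averaged power spectrum and its log-log slope.
    power = np.abs(np.fft.fftshift(np.fft.fft2(img - brightness))) ** 2
    H, W = img.shape
    y, x = np.indices((H, W))
    r = np.hypot(y - H / 2, x - W / 2).astype(int)
    radial = np.bincount(r.ravel(), weights=power.ravel()) / np.bincount(r.ravel())
    freqs = np.arange(1, min(H, W) // 2)               # skip DC, stay below Nyquist
    slope = np.polyfit(np.log(freqs), np.log(radial[freqs]), 1)[0]
    return brightness, contrast, slope
```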
NASA Astrophysics Data System (ADS)
Thanos, Konstantinos-Georgios; Thomopoulos, Stelios C. A.
2014-06-01
The study in this paper is part of more general research on discovering facial sub-clusters in face databases of different ethnicities. These new sub-clusters, along with other metadata (such as race, sex, etc.), lead to a vector for each face in the database where each vector component represents the likelihood that a given face participates in each cluster. This vector is then used as a feature vector in a human identification and tracking system based on face and other biometrics. The first stage in this system involves a clustering method which evaluates and compares the clustering results of five different clustering algorithms (average-, complete-, and single-linkage hierarchical clustering, k-means, and DIGNET), and selects the best strategy for each data collection. In this paper we present the comparative performance of the clustering results of DIGNET and four clustering algorithms (average-, complete-, and single-linkage hierarchical clustering, and k-means) on fabricated 2D and 3D samples, and on actual face images from various databases, using four different standard metrics. These metrics are the silhouette figure, the mean silhouette coefficient, the Hubert test Γ coefficient, and the classification accuracy for each clustering result. The results showed that, in general, DIGNET gives more trustworthy results than the other algorithms when the metric values are above a specific acceptance threshold. However, when the evaluation metrics have values lower than the acceptance threshold but not too low (too low corresponds to ambiguous or false results), the clustering results need to be verified by the other algorithms.
Wavelet filtered shifted phase-encoded joint transform correlation for face recognition
NASA Astrophysics Data System (ADS)
Moniruzzaman, Md.; Alam, Mohammad S.
2017-05-01
A new wavelet-filtered shifted phase-encoded joint transform correlation (WPJTC) technique is proposed for efficient face recognition. The proposed technique uses discrete wavelet decomposition for preprocessing and can effectively accommodate various 3D facial distortions, effects of noise, and illumination variations. After analyzing different forms of wavelet basis functions, an optimal method is proposed by considering the discrimination capability and processing speed as performance trade-offs. The proposed technique yields better correlation discrimination compared to alternative pattern recognition techniques such as the phase-shifted phase-encoded fringe-adjusted joint transform correlator. The performance of the proposed WPJTC has been tested using the Yale face database and the Extended Yale face database under different environments such as illumination variation, noise, and 3D changes in facial expression. Test results show that the proposed WPJTC yields better performance compared to alternative JTC-based face recognition techniques.
Demirkus, Meltem; Precup, Doina; Clark, James J; Arbel, Tal
2016-06-01
Recent literature shows that facial attributes, i.e., contextual facial information, can be beneficial for improving the performance of real-world applications such as face verification, face recognition, and image search. Examples of face attributes include gender, skin color, facial hair, etc. How to robustly obtain these facial attributes (traits) is still an open problem, especially in the presence of the challenges of real-world environments: non-uniform illumination conditions, arbitrary occlusions, motion blur, and background clutter. What makes this problem even more difficult is the enormous variability presented by the same subject, due to arbitrary face scales, head poses, and facial expressions. In this paper, we focus on the problem of facial trait classification in real-world face videos. We have developed a fully automatic hierarchical and probabilistic framework that models the collective set of frame class distributions and feature spatial information over a video sequence. The experiments are conducted on a large real-world face video database that we have collected, labelled, and made publicly available. The proposed method is flexible enough to be applied to any facial classification problem. Experiments on the large, real-world McGillFaces video database [1] of 18,000 video frames reveal that the proposed framework outperforms alternative approaches by up to 16.96% and 10.13% for the facial attributes of gender and facial hair, respectively.
Perceptual dehumanization of faces is activated by norm violations and facilitates norm enforcement.
Fincher, Katrina M; Tetlock, Philip E
2016-02-01
This article uses methods drawn from perceptual psychology to answer a basic social psychological question: Do people process the faces of norm violators differently from those of others, and, if so, what is the functional significance? Seven studies suggest that people process these faces differently and that the differential processing makes it easier to punish norm violators. Studies 1 and 2 use a recognition-recall paradigm that manipulated facial inversion and spatial frequency to show that people rely upon face-typical processing less when they perceive norm violators' faces. Study 3 uses a facial composite task to demonstrate that the effect is actor dependent, not action dependent, and to suggest that configural processing is the mechanism of perceptual change. Studies 4 and 5 use offset faces to show that configural processing is only attenuated when the faces belong to perpetrators who are culpable. Studies 6 and 7 show that people find it easier to punish inverted faces and harder to punish faces displayed in low spatial frequency. Taken together, these data suggest a bidirectional flow of causality between lower-order perceptual and higher-order cognitive processes in norm enforcement. PsycINFO Database Record (c) 2016 APA, all rights reserved.
Person Authentication Using Learned Parameters of Lifting Wavelet Filters
NASA Astrophysics Data System (ADS)
Niijima, Koichi
2006-10-01
This paper proposes a method for identifying persons by using lifting wavelet parameters learned by kurtosis minimization. Our learning method uses desirable properties of the kurtosis and the wavelet coefficients of a facial image. Exploiting these properties, the lifting parameters are trained so as to minimize the kurtosis of the lifting wavelet coefficients computed for the facial image. Since this minimization problem is ill-posed, it is solved with the aid of Tikhonov's regularization method. Our learning algorithm is applied to each of the faces to be identified to generate a feature vector whose components consist of the learned parameters. The constructed feature vectors are stored together with the corresponding faces in a feature-vector database. Person authentication is performed by comparing the feature vector of a query face with those stored in the database. In numerical experiments, the lifting parameters are trained for each of the neutral faces of 132 persons (74 males and 58 females) in the AR face database. Person authentication is then executed using the smile and anger faces of the same persons in the database.
Emotion Words: Adding Face Value.
Fugate, Jennifer M B; Gendron, Maria; Nakashima, Satoshi F; Barrett, Lisa Feldman
2017-06-12
Despite a growing number of studies suggesting that emotion words affect perceptual judgments of emotional stimuli, little is known about how emotion words affect perceptual memory for emotional faces. In Experiments 1 and 2 we tested how emotion words (compared with control words) affected participants' abilities to select a target emotional face from among distractor faces. Participants were generally more likely to false alarm to distractor emotional faces when primed with an emotion word congruent with the face (compared with a control word). Moreover, participants showed both decreased sensitivity (d') to discriminate between target and distractor faces, as well as altered response biases (c; more likely to answer "yes") when primed with an emotion word (compared with a control word). In Experiment 3 we showed that emotion words had more of an effect on perceptual memory judgments when the structural information in the target face was limited, as well as when participants were only able to categorize the face with a partially congruent emotion word. The overall results are consistent with the idea that emotion words affect the encoding of emotional faces in perceptual memory. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Face recognition: database acquisition, hybrid algorithms, and human studies
NASA Astrophysics Data System (ADS)
Gutta, Srinivas; Huang, Jeffrey R.; Singh, Dig; Wechsler, Harry
1997-02-01
One of the most important technologies absent in traditional and emerging frontiers of computing is the management of visual information. Faces are accessible 'windows' into the mechanisms that govern our emotional and social lives. The corresponding face recognition tasks considered herein include: (1) surveillance, (2) CBIR, and (3) CBIR subject to a correct ID ('match') displaying specific facial landmarks, such as wearing glasses. We developed robust matching ('classification') and retrieval schemes based on hybrid classifiers and showed their feasibility using the FERET database. The hybrid classifier architecture consists of an ensemble of connectionist networks (radial basis functions) and decision trees. The specific characteristics of our hybrid architecture include (a) query by consensus, as provided by ensembles of networks, for coping with the inherent variability of the image formation and data acquisition process, and (b) flexible and adaptive thresholds as opposed to ad hoc and hard thresholds. Experimental results, proving the feasibility of our approach, yield (i) 96% accuracy, using cross validation (CV), for surveillance on a database consisting of 904 images; (ii) 97% accuracy for CBIR tasks on a database of 1084 images; and (iii) 93% accuracy, using CV, for CBIR subject to correct ID match tasks on a database of 200 images.
Qian, Jianjun; Yang, Jian; Xu, Yong
2013-09-01
This paper presents a robust but simple image feature extraction method, called image decomposition based on local structure (IDLS). It is assumed that in the local window of an image, the macro-pixel (patch) of the central pixel, and those of its neighbors, are locally linear. IDLS captures the local structural information by describing the relationship between the central macro-pixel and its neighbors. This relationship is represented with the linear representation coefficients determined using ridge regression. One image is actually decomposed into a series of sub-images (also called structure images) according to a local structure feature vector. All the structure images, after being down-sampled for dimensionality reduction, are concatenated into one super-vector. Fisher linear discriminant analysis is then used to provide a low-dimensional, compact, and discriminative representation for each super-vector. The proposed method is applied to face recognition and examined using our real-world face image database, NUST-RWFR, and five popular, publicly available, benchmark face image databases (AR, Extended Yale B, PIE, FERET, and LFW). Experimental results show the performance advantages of IDLS over state-of-the-art algorithms.
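The core IDLS step can be sketched as below: within a local window, the central macro-pixel (patch) is expressed as a ridge-regression combination of its neighbouring patches, and the resulting coefficients serve as the local structure feature. The patch handling and the regularization strength are illustrative assumptions.

```python
# Sketch: ridge-regression coefficients relating a central patch to its neighbours (IDLS idea).
import numpy as np

def local_structure_coefficients(neighbor_patches, center_patch, lam=0.1):
    """
    neighbor_patches: array (k, p) of k flattened neighbouring macro-pixels.
    center_patch:     array (p,)   the flattened central macro-pixel.
    Returns the k ridge-regression coefficients (the local structure vector).
    """
    A = neighbor_patches.T                          # shape (p, k)
    k = A.shape[1]
    # Ridge regression: (A^T A + lam * I)^-1 A^T b
    coeffs = np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ center_patch)
    return coeffs
```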
Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Hayward, William G; Ewing, Louise
2014-06-01
Despite their similarity as visual patterns, we can discriminate and recognize many thousands of faces. This expertise has been linked to 2 coding mechanisms: holistic integration of information across the face and adaptive coding of face identity using norms tuned by experience. Recently, individual differences in face recognition ability have been discovered and linked to differences in holistic coding. Here we show that they are also linked to individual differences in adaptive coding of face identity, measured using face identity aftereffects. Identity aftereffects correlated significantly with several measures of face-selective recognition ability. They also correlated marginally with own-race face recognition ability, suggesting a role for adaptive coding in the well-known other-race effect. More generally, these results highlight the important functional role of adaptive face-coding mechanisms in face expertise, taking us beyond the traditional focus on holistic coding mechanisms. PsycINFO Database Record (c) 2014 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Karam, Lina J.; Zhu, Tong
2015-03-01
The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.
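The distortion types named above can be generated for a face image along these lines; a minimal sketch using Pillow and NumPy, where the distortion levels are illustrative and not the levels used to build QLFW.

```python
# Sketch: generating JPEG compression, white noise, Gaussian blur, and contrast change.
import io
import numpy as np
from PIL import Image, ImageEnhance, ImageFilter

def distort(img: Image.Image):
    # JPEG compression at a low quality factor.
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=10)
    buf.seek(0)
    jpeg = Image.open(buf)
    # Additive white Gaussian noise.
    arr = np.asarray(img, dtype=float)
    noisy = Image.fromarray(
        np.clip(arr + np.random.normal(0, 15, arr.shape), 0, 255).astype(np.uint8))
    # Gaussian blur.
    blurred = img.filter(ImageFilter.GaussianBlur(radius=3))
    # Contrast reduction.
    low_contrast = ImageEnhance.Contrast(img).enhance(0.4)
    return jpeg, noisy, blurred, low_contrast
```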
Adaptive non-local smoothing-based weberface for illumination-insensitive face recognition
NASA Astrophysics Data System (ADS)
Yao, Min; Zhu, Changming
2017-07-01
Compensating the illumination of a face image is an important process for achieving effective face recognition under severe illumination conditions. This paper presents a novel illumination normalization method which specifically considers removing illumination boundaries as well as reducing regional illumination. We begin with an analysis of the commonly used reflectance model and then elaborate on the hybrid use of adaptive non-local smoothing and local information coding based on Weber's law. The effectiveness and advantages of this combination are demonstrated visually and experimentally. Results on the Extended Yale B database show better performance than several other well-known methods.
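The Weber's-law local coding referred to above can be sketched as a basic "Weberface" transform; the adaptive non-local smoothing stage of the paper is omitted here, and the alpha parameter is an illustrative assumption.

```python
# Sketch: basic Weberface illumination-insensitive representation.
import numpy as np

def weberface(img, alpha=2.0, eps=1.0):
    I = img.astype(float) + eps                     # avoid division by zero
    padded = np.pad(I, 1, mode="edge")
    acc = np.zeros_like(I)
    # Sum of relative intensity differences over the 8-neighbourhood (Weber's law).
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neighbor = padded[1 + dy:1 + dy + I.shape[0], 1 + dx:1 + dx + I.shape[1]]
            acc += (I - neighbor) / I
    return np.arctan(alpha * acc)                   # illumination-insensitive face
```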
Improvement of Janus Using Pegasus 1-Meter Resolution Database With a Transputer Network
1994-03-01
[Figure captions only; surrounding text lost to extraction: Figure 4.9: HSI-Card Link jacks (LINK0 to LINK3, DOWN, UP) facing the back of the Sun SPARC Station; Figure 4.20: The Connection Between Sun SPARC Station and Remote Tram Holder.]
Fusion of LBP and SWLD using spatio-spectral information for hyperspectral face recognition
NASA Astrophysics Data System (ADS)
Xie, Zhihua; Jiang, Peng; Zhang, Shuai; Xiong, Jinquan
2018-01-01
Hyperspectral imaging, which records intrinsic spectral information of the skin across different spectral bands, has become an important approach for robust face recognition. However, the main challenges for hyperspectral face recognition are high data dimensionality, low signal-to-noise ratio, and inter-band misalignment. In this paper, hyperspectral face recognition based on LBP (Local Binary Patterns) and SWLD (Simplified Weber Local Descriptor) is proposed to extract discriminative local features from spatio-spectral fusion information. Firstly, a spatio-spectral fusion strategy based on statistical information is used to obtain discriminative features of hyperspectral face images. Secondly, LBP is applied to extract the orientation of the fused face edges. Thirdly, SWLD is proposed to encode the intensity information in hyperspectral images. Finally, we adopt a symmetric Kullback-Leibler distance to compare the encoded face images. The hyperspectral face recognition is tested on the Hong Kong Polytechnic University Hyperspectral Face database (PolyUHSFD). Experimental results show that the proposed method has a higher recognition rate (92.8%) than state-of-the-art hyperspectral face recognition algorithms.
Yang, Jian; Zhang, David; Yang, Jing-Yu; Niu, Ben
2007-04-01
This paper develops an unsupervised discriminant projection (UDP) technique for dimensionality reduction of high-dimensional data in small sample size cases. UDP can be seen as a linear approximation of a multimanifolds-based learning framework which takes into account both the local and nonlocal quantities. UDP characterizes the local scatter as well as the nonlocal scatter, seeking to find a projection that simultaneously maximizes the nonlocal scatter and minimizes the local scatter. This characteristic makes UDP more intuitive and more powerful than the most up-to-date method, Locality Preserving Projection (LPP), which considers only the local scatter for clustering or classification tasks. The proposed method is applied to face and palm biometrics and is examined using the Yale, FERET, and AR face image databases and the PolyU palmprint database. The experimental results show that UDP consistently outperforms LPP and PCA and outperforms LDA when the training sample size per class is small. This demonstrates that UDP is a good choice for real-world biometrics applications.
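The description of UDP (maximize nonlocal scatter while minimizing local scatter) can be read as a generalized eigenvalue problem; the sketch below is one plausible reading, assuming a k-nearest-neighbour graph defines the local pairs and adding a small regularizer for small-sample cases. Details such as the preceding PCA step are omitted and the parameter values are assumptions.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def udp(X, n_components=10, k=5, reg=1e-3):
    """Sketch of unsupervised discriminant projection: maximize nonlocal
    scatter while minimizing local scatter.
    X: (n_samples, n_features), typically already reduced by PCA."""
    n, d = X.shape
    A = kneighbors_graph(X, k, mode="connectivity", include_self=False).toarray()
    A = np.maximum(A, A.T)                      # symmetric adjacency (local pairs)
    diffs = X[:, None, :] - X[None, :, :]       # pairwise differences
    S_total = np.einsum("ijk,ijl->kl", diffs, diffs) / (2 * n * n)
    S_local = np.einsum("ij,ijk,ijl->kl", A, diffs, diffs) / (2 * n * n)
    S_nonlocal = S_total - S_local
    S_local += reg * np.eye(d)                  # regularize for small-sample cases
    vals, vecs = eigh(S_nonlocal, S_local)      # generalized eigenproblem
    return vecs[:, np.argsort(vals)[::-1][:n_components]]
```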
Integrating image quality in 2nu-SVM biometric match score fusion.
Vatsa, Mayank; Singh, Richa; Noore, Afzel
2007-10-01
This paper proposes an intelligent 2nu-support vector machine based match score fusion algorithm to improve the performance of face and iris recognition by integrating the quality of images. The proposed algorithm applies redundant discrete wavelet transform to evaluate the underlying linear and non-linear features present in the image. A composite quality score is computed to determine the extent of smoothness, sharpness, noise, and other pertinent features present in each subband of the image. The match score and the corresponding quality score of an image are fused using 2nu-support vector machine to improve the verification performance. The proposed algorithm is experimentally validated using the FERET face database and the CASIA iris database. The verification performance and statistical evaluation show that the proposed algorithm outperforms existing fusion algorithms.
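The paper's 2nu-SVM is a specific variant; as a loose stand-in, the sketch below fuses a match score with a composite quality score using the standard nu-SVM available in scikit-learn. The feature layout and the toy numbers are hypothetical.

```python
import numpy as np
from sklearn.svm import NuSVC

# Hypothetical training data: each row is [match_score, quality_score];
# label 1 for genuine comparisons, 0 for impostor comparisons.
X_train = np.array([[0.92, 0.8], [0.88, 0.6], [0.35, 0.7], [0.40, 0.9]])
y_train = np.array([1, 1, 0, 0])

# Standard nu-SVM used as a stand-in for the paper's 2nu-SVM variant.
fusion = NuSVC(nu=0.5, kernel="rbf", gamma="scale").fit(X_train, y_train)
decision = fusion.decision_function(np.array([[0.75, 0.65]]))
```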
Face recognition using total margin-based adaptive fuzzy support vector machines.
Liu, Yi-Hung; Chen, Yen-Ting
2007-01-01
This paper presents a new classifier called total margin-based adaptive fuzzy support vector machines (TAF-SVM) that deals with several problems that may occur in support vector machines (SVMs) when applied to face recognition. The proposed TAF-SVM not only solves the overfitting problem caused by outliers through fuzzification of the penalty, but also corrects the skew of the optimal separating hyperplane caused by highly imbalanced data sets by using a different-cost algorithm. In addition, by introducing the total margin algorithm to replace the conventional soft margin algorithm, a lower generalization error bound can be obtained. These three functions are embodied in the traditional SVM, and the TAF-SVM is formulated for both linear and nonlinear cases. Using two databases, the Chung Yuan Christian University (CYCU) multiview and the Facial Recognition Technology (FERET) face databases, and using the kernel Fisher's discriminant analysis (KFDA) algorithm to extract discriminating face features, experimental results show that the proposed TAF-SVM is superior to SVM in terms of face-recognition accuracy. The results also indicate that the proposed TAF-SVM achieves smaller error variances than SVM over a number of tests, so that better recognition stability can be obtained.
Face recognition via edge-based Gabor feature representation for plastic surgery-altered images
NASA Astrophysics Data System (ADS)
Chude-Olisah, Chollette C.; Sulong, Ghazali; Chude-Okonkwo, Uche A. K.; Hashim, Siti Z. M.
2014-12-01
Plastic surgery procedures on the face introduce skin texture variations between images of the same person (intra-subject), thereby making the task of face recognition more difficult than in the normal scenario. Usually, in contemporary face recognition systems, the original gray-level face image is used as input to the Gabor descriptor, which translates to encoding some texture properties of the face image. The texture-encoding process significantly degrades the performance of such systems in the case of plastic surgery due to the presence of surgically induced intra-subject variations. Based on the proposition that the shape of significant facial components such as the eyes, nose, eyebrows, and mouth remains unchanged after plastic surgery, this paper employs an edge-based Gabor feature representation approach for the recognition of surgically altered face images. We use the edge information, which is dependent on the shapes of the significant facial components, to address the plastic surgery-induced texture variation problem. To ensure that the significant facial components represent useful edge information with little or no false edges, a simple illumination normalization technique is proposed for preprocessing. A Gabor wavelet is applied to the edge image to accentuate the uniqueness of the significant facial components for discriminating among different subjects. The performance of the proposed method is evaluated on the Georgia Tech (GT) and Labeled Faces in the Wild (LFW) databases, which exhibit illumination and expression problems, and on the plastic surgery database with texture changes. Results show that the proposed edge-based Gabor feature representation approach is robust against plastic surgery-induced face variations amidst expression and illumination problems and outperforms the existing plastic surgery face recognition methods reported in the literature.
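A minimal sketch of the general idea (an edge map computed first, then a Gabor filter bank applied to the edge image rather than to raw gray levels), assuming scikit-image's Canny and Gabor implementations; the paper's specific illumination normalization and descriptor layout are not reproduced, and the frequency/orientation settings are assumptions.

```python
import numpy as np
from skimage.feature import canny
from skimage.filters import gabor

def edge_gabor_features(gray_face, frequencies=(0.1, 0.2), n_orientations=4):
    """Edge map first (shape of facial components), then Gabor responses
    computed on the edge image rather than on raw gray levels."""
    edges = canny(gray_face, sigma=1.5).astype(float)
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(edges, frequency=f, theta=theta)
            feats.append(np.sqrt(real ** 2 + imag ** 2).ravel())
    return np.concatenate(feats)
```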
Men appear more lateralized when noticing emotion in male faces.
Rahman, Qazi; Anchassi, Tarek
2012-02-01
Empirical tests of the "right hemisphere dominance" versus "valence" theories of emotion processing are confounded by known sex differences in lateralization. Moreover, information about the sex of the person posing an emotion might be processed differently by men and women because of an adaptive male bias to notice expressions of threat and vigilance in other male faces. The purpose of this study was to investigate whether sex of poser and emotion displayed influenced lateralization in men and women by analyzing "laterality quotient" scores on a test which depicts vertically split chimeric faces, formed with one half showing a neutral expression and the other half showing an emotional expression. We found that men (N = 50) were significantly more lateralized for emotions indicative of vigilance and threat (happy, sad, angry, and surprised) in male faces relative to female faces and compared to women (N = 44). These data indicate that sex differences in functional cerebral lateralization for facial emotion may be specific to the emotion presented and the sex of face presenting it. PsycINFO Database Record (c) 2012 APA, all rights reserved
Centre-based restricted nearest feature plane with angle classifier for face recognition
NASA Astrophysics Data System (ADS)
Tang, Linlin; Lu, Huifen; Zhao, Liang; Li, Zuohua
2017-10-01
An improved classifier based on the nearest feature plane (NFP), called the centre-based restricted nearest feature plane with angle (RNFPA) classifier, is proposed here for face recognition problems. The well-known NFP uses the geometrical information of samples to increase the number of training samples, but it increases the computational complexity and also suffers from an inaccuracy problem caused by the extended feature plane. To solve these problems, RNFPA exploits a centre-based feature plane and utilizes an angle threshold to restrict the extended feature space. By choosing an appropriate angle threshold, RNFPA can improve performance and decrease computational complexity. Experiments on the AT&T face database, the AR face database and the FERET face database are used to evaluate the proposed classifier. Compared with the original NFP classifier, the nearest feature line (NFL) classifier, the nearest neighbour (NN) classifier and some other improved NFP classifiers, the proposed one achieves competitive performance.
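To make the nearest-feature-plane idea concrete, the sketch below computes the distance from a query feature to the plane spanned by three prototype features via least-squares projection, and adds a purely hypothetical angle restriction relative to a class centre in the spirit of RNFPA; the paper's exact construction of the centre-based plane and threshold is not specified in the abstract.

```python
import numpy as np

def feature_plane_distance(x, p1, p2, p3):
    """Distance from query x to the plane through prototypes p1, p2, p3
    (affine span), computed by least-squares projection."""
    B = np.stack([p2 - p1, p3 - p1], axis=1)          # plane basis
    coeffs, *_ = np.linalg.lstsq(B, x - p1, rcond=None)
    projection = p1 + B @ coeffs
    return np.linalg.norm(x - projection), projection

def restricted_distance(x, p1, p2, p3, centre, max_angle_deg=30.0):
    """Hypothetical angle restriction: reject projections that deviate too far
    from the direction of the class centre, as suggested by the RNFPA idea."""
    d, proj = feature_plane_distance(x, p1, p2, p3)
    cos = np.dot(proj - centre, x - centre) / (
        np.linalg.norm(proj - centre) * np.linalg.norm(x - centre) + 1e-12)
    angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return d if angle <= max_angle_deg else np.inf
```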
Fujimura, Tomomi; Umemura, Hiroyuki
2018-01-15
The present study describes the development and validation of a facial expression database comprising five different horizontal face angles in dynamic and static presentations. The database includes twelve expression types portrayed by eight Japanese models. This database was inspired by the dimensional and categorical model of emotions: surprise, fear, sadness, anger with open mouth, anger with closed mouth, disgust with open mouth, disgust with closed mouth, excitement, happiness, relaxation, sleepiness, and neutral (static only). The expressions were validated using emotion classification and Affect Grid rating tasks [Russell, Weiss, & Mendelsohn, 1989. Affect Grid: A single-item scale of pleasure and arousal. Journal of Personality and Social Psychology, 57(3), 493-502]. The results indicate that most of the expressions were recognised as the intended emotions and could systematically represent affective valence and arousal. Furthermore, face angle and facial motion information influenced emotion classification and valence and arousal ratings. Our database will be available online at the following URL: https://www.dh.aist.go.jp/database/face2017/.
Fusion of Visible and Thermal Descriptors Using Genetic Algorithms for Face Recognition Systems.
Hermosilla, Gabriel; Gallardo, Francisco; Farias, Gonzalo; San Martin, Cesar
2015-07-23
The aim of this article is to present a new face recognition system based on the fusion of visible and thermal features obtained from the most current local matching descriptors by maximizing face recognition rates through the use of genetic algorithms. The article considers a comparison of the performance of the proposed fusion methodology against five current face recognition methods and classic fusion techniques used commonly in the literature. These were selected by considering their performance in face recognition. The five local matching methods and the proposed fusion methodology are evaluated using the standard visible/thermal database, the Equinox database, along with a new database, the PUCV-VTF, designed for visible-thermal studies in face recognition and described for the first time in this work. The latter is created considering visible and thermal image sensors with different real-world conditions, such as variations in illumination, facial expression, pose, occlusion, etc. The main conclusions of this article are that two variants of the proposed fusion methodology surpass current face recognition methods and the classic fusion techniques reported in the literature, attaining recognition rates of over 97% and 99% for the Equinox and PUCV-VTF databases, respectively. The fusion methodology is very robust to illumination and expression changes, as it combines thermal and visible information efficiently by using genetic algorithms, thus allowing it to choose optimal face areas where one spectrum is more representative than the other.
Fusion of Visible and Thermal Descriptors Using Genetic Algorithms for Face Recognition Systems
Hermosilla, Gabriel; Gallardo, Francisco; Farias, Gonzalo; San Martin, Cesar
2015-01-01
The aim of this article is to present a new face recognition system based on the fusion of visible and thermal features obtained from the most current local matching descriptors by maximizing face recognition rates through the use of genetic algorithms. The article considers a comparison of the performance of the proposed fusion methodology against five current face recognition methods and classic fusion techniques used commonly in the literature. These were selected by considering their performance in face recognition. The five local matching methods and the proposed fusion methodology are evaluated using the standard visible/thermal database, the Equinox database, along with a new database, the PUCV-VTF, designed for visible-thermal studies in face recognition and described for the first time in this work. The latter is created considering visible and thermal image sensors with different real-world conditions, such as variations in illumination, facial expression, pose, occlusion, etc. The main conclusions of this article are that two variants of the proposed fusion methodology surpass current face recognition methods and the classic fusion techniques reported in the literature, attaining recognition rates of over 97% and 99% for the Equinox and PUCV-VTF databases, respectively. The fusion methodology is very robust to illumination and expression changes, as it combines thermal and visible information efficiently by using genetic algorithms, thus allowing it to choose optimal face areas where one spectrum is more representative than the other. PMID:26213932
Joint Sparse Representation for Robust Multimodal Biometrics Recognition
2012-01-01
described in Section III. Experimental evaluations on a comprehensive multimodal dataset and a face database are described in Section V. Finally, in ... WVU Multimodal Dataset: The WVU multimodal dataset is a comprehensive collection of different biometric modalities such as fingerprint, iris, palmprint ... Martínez and R. Benavente, "The AR face database," CVC Technical Report, June 1998. [29] U. Park and A. Jain, "Face matching and retrieval using soft ...
Remembering faces and scenes: The mixed-category advantage in visual working memory.
Jiang, Yuhong V; Remington, Roger W; Asaad, Anthony; Lee, Hyejin J; Mikkalson, Taylor C
2016-09-01
We examined the mixed-category memory advantage for faces and scenes to determine how domain-specific cortical resources constrain visual working memory. Consistent with previous findings, visual working memory for a display of 2 faces and 2 scenes was better than that for a display of 4 faces or 4 scenes. This pattern was unaffected by manipulations of encoding duration. However, the mixed-category advantage was carried solely by faces: Memory for scenes was not better when scenes were encoded with faces rather than with other scenes. The asymmetry between faces and scenes was found when items were presented simultaneously or sequentially, centrally, or peripherally, and when scenes were drawn from a narrow category. A further experiment showed a mixed-category advantage in memory for faces and bodies, but not in memory for scenes and objects. The results suggest that unique category-specific interactions contribute significantly to the mixed-category advantage in visual working memory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Holistic processing of static and moving faces.
Zhao, Mintao; Bülthoff, Isabelle
2017-07-01
Humans' face ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces that move most of the time. However, how facial movements affect 1 core aspect of face ability-holistic face processing-remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based face processing by manipulating the presence of facial motion during study and at test in a composite face task. The results showed that rigidly moving faces were processed as holistically as static faces (Experiment 1). Holistic processing of moving faces persisted whether facial motion was presented during study, at test, or both (Experiment 2). Moreover, when faces were inverted to eliminate the contributions of both an upright face template and observers' expertise with upright faces, rigid facial motion facilitated holistic face processing (Experiment 3). Thus, holistic processing represents a general principle of face perception that applies to both static and dynamic faces, rather than being limited to static faces. These results support an emerging view that both perceiver-based and face-based factors contribute to holistic face processing, and they offer new insights on what underlies holistic face processing, how information supporting holistic face processing interacts with each other, and why facial motion may affect face recognition and holistic face processing differently. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Individual differences in perceiving and recognizing faces-One element of social cognition.
Wilhelm, Oliver; Herzmann, Grit; Kunina, Olga; Danthiir, Vanessa; Schacht, Annekathrin; Sommer, Werner
2010-09-01
Recognizing faces swiftly and accurately is of paramount importance to humans as a social species. Individual differences in the ability to perform these tasks may therefore reflect important aspects of social or emotional intelligence. Although functional models of face cognition based on group and single case studies postulate multiple component processes, little is known about the ability structure underlying individual differences in face cognition. In 2 large individual differences experiments (N = 151 and N = 209), a broad variety of face-cognition tasks were tested and the component abilities of face cognition-face perception, face memory, and the speed of face cognition-were identified and then replicated. Experiment 2 also showed that the 3 face-cognition abilities are clearly distinct from immediate and delayed memory, mental speed, general cognitive ability, and object cognition. These results converge with functional and neuroanatomical models of face cognition by demonstrating the difference between face perception and face memory. The results also underline the importance of distinguishing between speed and accuracy of face cognition. Together our results provide a first step toward establishing face-processing abilities as an independent ability reflecting elements of social intelligence. (PsycINFO Database Record (c) 2010 APA, all rights reserved).
Local intensity area descriptor for facial recognition in ideal and noise conditions
NASA Astrophysics Data System (ADS)
Tran, Chi-Kien; Tseng, Chin-Dar; Chao, Pei-Ju; Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Lee, Tsair-Fwu
2017-03-01
We propose a local texture descriptor, local intensity area descriptor (LIAD), which is applied for human facial recognition in ideal and noisy conditions. Each facial image is divided into small regions from which LIAD histograms are extracted and concatenated into a single feature vector to represent the facial image. The recognition is performed using a nearest neighbor classifier with histogram intersection and chi-square statistics as dissimilarity measures. Experiments were conducted with LIAD using the ORL database of faces (Olivetti Research Laboratory, Cambridge), the Face94 face database, the Georgia Tech face database, and the FERET database. The results demonstrated the improvement in accuracy of our proposed descriptor compared to conventional descriptors [local binary pattern (LBP), uniform LBP, local ternary pattern, histogram of oriented gradients, and local directional pattern]. Moreover, the proposed descriptor was less sensitive to noise and had low histogram dimensionality. Thus, it is expected to be a powerful texture descriptor that can be used for various computer vision problems.
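The LIAD descriptor itself is not specified in the abstract; the sketch below covers only the matching stage named there, a nearest-neighbour classifier using chi-square statistics or histogram intersection as the dissimilarity measure. Histograms are assumed to be normalized.

```python
import numpy as np

def chi_square(h1, h2, eps=1e-10):
    """Chi-square dissimilarity between two normalized histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def hist_intersection_dissimilarity(h1, h2):
    """Histogram intersection is a similarity; turn it into a dissimilarity."""
    return 1.0 - np.sum(np.minimum(h1, h2))

def nearest_neighbour(query_hist, gallery_hists, gallery_labels, dist=chi_square):
    d = np.array([dist(query_hist, g) for g in gallery_hists])
    return gallery_labels[int(np.argmin(d))]
```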
Learning Deep Representations for Ground to Aerial Geolocalization (Open Access)
2015-10-15
proposed approach, Where-CNN, is inspired by deep learning success in face verification and achieves significant improvements over traditional hand-crafted features and existing deep features learned from other large-scale databases. We show the effectiveness of Where-CNN in finding matches
NASA Astrophysics Data System (ADS)
Feng, Guang; Li, Hengjian; Dong, Jiwen; Chen, Xi; Yang, Huiru
2018-04-01
In this paper, we propose a joint and collaborative representation with Volterra kernel convolution features (JCR-VK) for face recognition. Firstly, the candidate face images are divided into sub-blocks of equal size. Features are extracted from the blocks using two-dimensional Volterra kernel discriminant analysis, which can better capture the discriminative information of different faces. Next, the proposed joint and collaborative representation is employed to optimize and classify the local Volterra kernel features individually. JCR-VK is very efficient because its implementation depends only on matrix multiplication. Finally, recognition is completed by using the majority voting principle. Extensive experiments on the Extended Yale B and AR face databases are conducted, and the results show that the proposed approach can outperform other recently presented similar dictionary algorithms in recognition accuracy.
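The collaborative-representation part can be sketched independently of the Volterra kernel features: code the query over all training samples with ridge regression, then assign the class with the smallest class-wise reconstruction residual. The block below is a generic collaborative representation classifier under that assumption; the block-wise voting and the kernel features themselves are omitted, and the regularization value is an assumption.

```python
import numpy as np

def crc_classify(y, X, labels, lam=1e-3):
    """Collaborative representation: ridge-regression coding over all
    training samples, classification by class-wise reconstruction residual.
    X: (d, n) columns are training samples; y: (d,) query; labels: (n,)."""
    P = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)  # (n, d)
    alpha = P @ y
    best, best_label = np.inf, None
    for c in np.unique(labels):
        mask = labels == c
        residual = np.linalg.norm(y - X[:, mask] @ alpha[mask])
        if residual < best:
            best, best_label = residual, c
    return best_label
```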
Unconscious evaluation of faces on social dimensions.
Stewart, Lorna H; Ajina, Sara; Getov, Spas; Bahrami, Bahador; Todorov, Alexander; Rees, Geraint
2012-11-01
It has been proposed that two major axes, dominance and trustworthiness, characterize the social dimensions of face evaluation. Whether evaluation of faces on these social dimensions is restricted to conscious appraisal or happens at a preconscious level is unknown. Here we provide behavioral evidence that such preconscious evaluations exist and that they are likely to be interpretations arising from interactions between the face stimuli and observer-specific traits. Monocularly viewed faces that varied independently along two social dimensions of trust and dominance were rendered invisible by continuous flash suppression (CFS) when a flashing pattern was presented to the other eye. Participants pressed a button as soon as they saw the face emerge from suppression to indicate whether the previously hidden face was located slightly to the left or right of central fixation. Dominant and untrustworthy faces took significantly longer time to emerge (T2E) compared with neutral faces. A control experiment showed these findings could not reflect delayed motor responses to conscious faces. Finally, we showed that participants' self-reported propensity to trust was strongly predictive of untrust avoidance (i.e., difference in T2E for untrustworthy vs neutral faces) as well as dominance avoidance (i.e., difference in T2E for dominant vs neutral faces). Dominance avoidance was also correlated with submissive behavior. We suggest that such prolongation of suppression for threatening faces may result from a passive fear response, leading to slowed visual perception. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
From scores to face templates: a model-based approach.
Mohanty, Pranab; Sarkar, Sudeep; Kasturi, Rangachar
2007-12-01
Regeneration of templates from match scores has security and privacy implications related to any biometric authentication system. We propose a novel paradigm to reconstruct face templates from match scores using a linear approach. It proceeds by first modeling the behavior of the given face recognition algorithm by an affine transformation. The goal of the modeling is to approximate the distances computed by a face recognition algorithm between two faces by distances between points, representing these faces, in an affine space. Given this space, templates from an independent image set (break-in) are matched only once with the enrolled template of the targeted subject and match scores are recorded. These scores are then used to embed the targeted subject in the approximating affine (non-orthogonal) space. Given the coordinates of the targeted subject in the affine space, the original template of the targeted subject is reconstructed using the inverse of the affine transformation. We demonstrate our ideas using three, fundamentally different, face recognition algorithms: Principal Component Analysis (PCA) with Mahalanobis cosine distance measure, Bayesian intra-extrapersonal classifier (BIC), and a feature-based commercial algorithm. To demonstrate the independence of the break-in set with the gallery set, we select face templates from two different databases: Face Recognition Grand Challenge (FRGC) and Facial Recognition Technology (FERET) Database (FERET). With an operational point set at 1 percent False Acceptance Rate (FAR) and 99 percent True Acceptance Rate (TAR) for 1,196 enrollments (FERET gallery), we show that at most 600 attempts (score computations) are required to achieve a 73 percent chance of breaking in as a randomly chosen target subject for the commercial face recognition system. With similar operational set up, we achieve a 72 percent and 100 percent chance of breaking in for the Bayesian and PCA based face recognition systems, respectively. With three different levels of score quantization, we achieve 69 percent, 68 percent and 49 percent probability of break-in, indicating the robustness of our proposed scheme to score quantization. We also show that the proposed reconstruction scheme has 47 percent more probability of breaking in as a randomly chosen target subject for the commercial system as compared to a hill climbing approach with the same number of attempts. Given that the proposed template reconstruction method uses distinct face templates to reconstruct faces, this work exposes a more severe form of vulnerability than a hill climbing kind of attack where incrementally different versions of the same face are used. Also, the ability of the proposed approach to reconstruct actual face templates of the users increases privacy concerns in biometric systems.
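The reconstruction pipeline embeds faces in a space where distances approximate the recognizer's match scores. The sketch below shows only the generic embedding step, using classical multidimensional scaling on a distance matrix derived from match scores of the break-in set; the paper's affine modeling of a specific recognition algorithm and the final template inversion are not reproduced.

```python
import numpy as np

def classical_mds(D, dim=20):
    """Embed points from a pairwise distance matrix D (n x n) into a
    low-dimensional space whose Euclidean distances approximate D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                 # double-centred squared distances
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]
    vals, vecs = np.clip(vals[idx], 0, None), vecs[:, idx]
    return vecs * np.sqrt(vals)                  # coordinates of the embedded set
```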
[Establishment of the database of the 3D facial models for the plastic surgery based on network].
Liu, Zhe; Zhang, Hai-Lin; Zhang, Zheng-Guo; Qiao, Qun
2008-07-01
To collect the three-dimensional (3D) facial data of 30 facial deformity patients by the 3D scanner and establish a professional database based on Internet. It can be helpful for the clinical intervention. The primitive point data of face topography were collected by the 3D scanner. Then the 3D point cloud was edited by reverse engineering software to reconstruct the 3D model of the face. The database system was divided into three parts, including basic information, disease information and surgery information. The programming language of the web system is Java. The linkages between every table of the database are credibility. The query operation and the data mining are convenient. The users can visit the database via the Internet and use the image analysis system to observe the 3D facial models interactively. In this paper we presented a database and a web system adapt to the plastic surgery of human face. It can be used both in clinic and in basic research.
Applied learning-based color tone mapping for face recognition in video surveillance system
NASA Astrophysics Data System (ADS)
Yew, Chuu Tian; Suandi, Shahrel Azmin
2012-04-01
In this paper, we present an applied learning-based color tone mapping technique for video surveillance systems. This technique can be applied to both color and grayscale surveillance images. The basic idea is to learn the color or intensity statistics from a training dataset of photorealistic images of the candidates appearing in the surveillance images, and remap the color or intensity of the input image so that its statistics match those of the training dataset. It is well known that differences in commercial surveillance camera models and in the signal processing chipsets used by different manufacturers cause the color and intensity of the images to differ from one another, thus creating additional challenges for face recognition in video surveillance systems. Using multi-class support vector machines as the classifier on a publicly available video surveillance camera database, namely the SCface database, this approach is validated and compared to the results of a holistic approach on grayscale images. The results show that this technique is suitable for improving the color or intensity quality of video surveillance imagery for face recognition.
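A minimal sketch of the remapping idea, assuming the learned statistics are simply a global mean and standard deviation of intensity; the paper's actual learning procedure and the SVM classifier are separate, and the reference values below are hypothetical.

```python
import numpy as np

def match_statistics(image, ref_mean, ref_std, eps=1e-6):
    """Remap an image so that its global mean/std match statistics learned
    from a training set of photorealistic images of the same candidates."""
    img = image.astype(np.float64)
    out = (img - img.mean()) / (img.std() + eps) * ref_std + ref_mean
    return np.clip(out, 0, 255).astype(np.uint8)

# Usage (hypothetical statistics learned from a training dataset):
# normalized = match_statistics(surveillance_frame, ref_mean=120.0, ref_std=45.0)
```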
Accuracy and consensus in judgments of trustworthiness from faces: behavioral and neural correlates.
Rule, Nicholas O; Krendl, Anne C; Ivcevic, Zorana; Ambady, Nalini
2013-03-01
Perceivers' inferences about individuals based on their faces often show high interrater consensus and can even accurately predict behavior in some domains. Here we investigated the consensus and accuracy of judgments of trustworthiness. In Study 1, we showed that the type of photo judged makes a significant difference for whether an individual is judged as trustworthy. In Study 2, we found that inferences of trustworthiness made from the faces of corporate criminals did not differ from inferences made from the faces of noncriminal executives. In Study 3, we found that judgments of trustworthiness did not differ between the faces of military criminals and the faces of military heroes. In Study 4, we tempted undergraduates to cheat on a test. Although we found that judgments of intelligence from the students' faces were related to students' scores on the test and that judgments of students' extraversion were correlated with self-reported extraversion, there was no relationship between judgments of trustworthiness from the students' faces and students' cheating behavior. Finally, in Study 5, we examined the neural correlates of the accuracy of judgments of trustworthiness from faces. Replicating previous research, we found that perceptions of trustworthiness from the faces in Study 4 corresponded to participants' amygdala response. However, we found no relationship between the amygdala response and the targets' actual cheating behavior. These data suggest that judgments of trustworthiness may not be accurate but, rather, reflect subjective impressions for which people show high agreement. PsycINFO Database Record (c) 2013 APA, all rights reserved
Holistic processing of face configurations and components.
Hayward, William G; Crookes, Kate; Chu, Ming Hon; Favelle, Simone K; Rhodes, Gillian
2016-10-01
Although many researchers agree that faces are processed holistically, we know relatively little about what information holistic processing captures from a face. Most studies that assess the nature of holistic processing do so with changes to the face affecting many different aspects of face information (e.g., different identities). Does holistic processing affect every aspect of a face? We used the composite task, a common means of examining the strength of holistic processing, with participants making same-different judgments about configuration changes or component changes to 1 portion of a face. Configuration changes involved changes in spatial position of the eyes, whereas component changes involved lightening or darkening the eyebrows. Composites were either aligned or misaligned, and were presented either upright or inverted. Both configuration judgments and component judgments showed evidence of holistic processing, and in both cases it was strongest for upright face composites. These results suggest that holistic processing captures a broad range of information about the face, including both configuration-based and component-based information. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Performance evaluation of no-reference image quality metrics for face biometric images
NASA Astrophysics Data System (ADS)
Liu, Xinwei; Pedersen, Marius; Charrier, Christophe; Bours, Patrick
2018-03-01
The accuracy of face recognition systems is significantly affected by the quality of face sample images. The recently established standardization proposes several important aspects for the assessment of face sample quality. There are many existing no-reference image quality metrics (IQMs) that are able to assess natural image quality by taking into account image-based quality attributes similar to those introduced in the standardization. However, whether such metrics can assess face sample quality is rarely considered. We evaluate the performance of 13 selected no-reference IQMs on face biometrics. The experimental results show that several of them can assess face sample quality according to the system performance. We also analyze the strengths and weaknesses of different IQMs, as well as why some of them failed to assess face sample quality. Retraining an original IQM using a face database can improve the performance of such a metric. In addition, the contribution of this paper can be used for the evaluation of IQMs on other biometric modalities; furthermore, it can be used for the development of multimodal biometric IQMs.
FaceWarehouse: a 3D facial expression database for visual computing.
Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun
2014-03-01
We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
Maack, Jana K; Bohne, Agnes; Nordahl, Dag; Livsdatter, Lina; Lindahl, Åsne A W; Øvervoll, Morten; Wang, Catharina E A; Pfuhl, Gerit
2017-01-01
Newborns and infants are highly dependent on successfully communicating their needs, e.g., through crying and facial expressions. Although there is growing interest in the mechanisms of, and possible influences on, the recognition of facial expressions in infants, heretofore there existed no validated database of emotional infant faces. In the present article we introduce a standardized and freely available face database containing Caucasian infant face images from 18 infants 4 to 12 months old. The development and validation of the Tromsø Infant Faces (TIF) database is presented in Study 1. Over 700 adults categorized the photographs into seven emotion categories (happy, sad, disgusted, angry, afraid, surprised, neutral) and rated intensity, clarity and valence. In order to examine the relevance of TIF, we then present its first application in Study 2, investigating differences in emotion recognition across different stages of parenthood. We found a small gender effect in that women gave higher intensity and clarity ratings than men. Moreover, parents of young children rated the images as clearer than all the other groups, and parents rated "neutral" expressions as clearer and more intense. Our results suggest that caretaking experience provides an implicit advantage in the processing of emotional expressions in infant faces, especially for the more difficult, ambiguous expressions.
Li, Yuan Hang; Tottenham, Nim
2013-04-01
A growing literature suggests that the self-face is involved in processing the facial expressions of others. The authors experimentally activated self-face representations to assess its effects on the recognition of dynamically emerging facial expressions of others. They exposed participants to videos of either their own faces (self-face prime) or faces of others (nonself-face prime) prior to a facial expression judgment task. Their results show that experimentally activating self-face representations results in earlier recognition of dynamically emerging facial expression. As a group, participants in the self-face prime condition recognized expressions earlier (when less affective perceptual information was available) compared to participants in the nonself-face prime condition. There were individual differences in performance, such that poorer expression identification was associated with higher autism traits (in this neurocognitively healthy sample). However, when randomized into the self-face prime condition, participants with high autism traits performed as well as those with low autism traits. Taken together, these data suggest that the ability to recognize facial expressions in others is linked with the internal representations of our own faces. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Locating faces in color photographs using neural networks
NASA Astrophysics Data System (ADS)
Brown, Joe R.; Talley, Jim
1994-03-01
This paper summarizes a research effort in finding the locations and sizes of faces in color images (photographs, video stills, etc.) if, in fact, faces are present. Scenarios for using such a system include serving as a means of localizing skin for automatic color balancing during photo processing, or serving as a front end, in a customs port-of-entry context, for a system which identifies persona non grata given a database of known faces. The approach presented here is a hybrid system including a neural pre-processor, some conventional image processing steps, and a neural classifier as the final face/non-face discriminator. Neither the training (containing 17,655 faces) nor the test (containing 1,829 faces) imagery databases were constrained in their content or quality. The results for the pilot system are reported along with a discussion of how to improve the current system.
Face recognition based on two-dimensional discriminant sparse preserving projection
NASA Astrophysics Data System (ADS)
Zhang, Dawei; Zhu, Shanan
2018-04-01
In this paper, a supervised dimensionality reduction algorithm named two-dimensional discriminant sparse preserving projection (2DDSPP) is proposed for face recognition. In order to accurately model the manifold structure of the data, 2DDSPP constructs a within-class affinity graph and a between-class affinity graph by constrained least squares (LS) and an l1-norm minimization problem, respectively. Operating directly on image matrices, 2DDSPP integrates graph embedding (GE) with the Fisher criterion. The obtained projection subspace preserves the within-class neighborhood geometry of the samples while keeping samples from different classes apart. The experimental results on the PIE and AR face databases show that 2DDSPP can achieve better recognition performance.
Face Recognition Using Local Quantized Patterns and Gabor Filters
NASA Astrophysics Data System (ADS)
Khryashchev, V.; Priorov, A.; Stepanova, O.; Nikitin, A.
2015-05-01
The problem of face recognition in a natural or artificial environment has received a great deal of researchers' attention over the last few years. A lot of methods for accurate face recognition have been proposed. Nevertheless, these methods often fail to accurately recognize the person in difficult scenarios, e.g. low resolution, low contrast, pose variations, etc. We therefore propose an approach for accurate and robust face recognition that uses local quantized patterns and Gabor filters. Estimation of the eye centers is used as a preprocessing stage. The evaluation of our algorithm on different samples from the standardized FERET database shows that our method is invariant to general variations of lighting, expression, occlusion and aging. The proposed approach yields about a 20% increase in correct recognition accuracy compared with the known face recognition algorithms from the OpenCV library. The additional use of Gabor filters can significantly improve robustness to changes in lighting conditions.
Subject-specific and pose-oriented facial features for face recognition across poses.
Lee, Ping-Han; Hsu, Gee-Sern; Wang, Yun-Wen; Hung, Yi-Ping
2012-10-01
Most face recognition scenarios assume that frontal faces or mug shots are available for enrollment in the database, while faces of other poses are collected in the probe set. Given a face from the probe set, one needs to determine whether a match in the database exists. This is under the assumption that in forensic applications most suspects have their mug shots available in the database, and face recognition aims at recognizing the suspects when their faces of various poses are captured by a surveillance camera. This paper considers a different scenario: given a face with multiple poses available, which may or may not include a mug shot, develop a method to recognize the face in poses different from those captured. That is, given two disjoint sets of poses of a face, one for enrollment and the other for recognition, this paper reports a method best suited for handling such cases. The proposed method includes feature extraction and classification. For feature extraction, we first cluster the poses of each subject's face in the enrollment set into a few pose classes and then decompose the appearance of the face in each pose class using an Embedded Hidden Markov Model, which allows us to define a set of subject-specific and pose-oriented (SSPO) facial components for each subject. For classification, an AdaBoost weighting scheme is used to fuse the component classifiers with SSPO component features. The proposed method is shown to outperform other approaches, including a component-based classifier with local facial features cropped manually, in an extensive performance evaluation study.
Sugden, Nicole A; Marquis, Alexandra R
2017-11-01
Infants show facility for discriminating between individual faces within hours of birth. Over the first year of life, infants' face discrimination shows continued improvement with familiar face types, such as own-race faces, but not with unfamiliar face types, like other-race faces. The goal of this meta-analytic review is to provide an effect size for infants' face discrimination ability overall, with own-race faces, and with other-race faces within the first year of life, how this differs with age, and how it is influenced by task methodology. Inclusion criteria were (a) infant participants aged 0 to 12 months, (b) completing a human own- or other-race face discrimination task, (c) with discrimination being determined by infant looking. Our analysis included 30 works (165 samples, 1,926 participants participated in 2,623 tasks). The effect size for infants' face discrimination was small, 6.53% greater than chance (i.e., equal looking to the novel and familiar). There was a significant difference in discrimination by race, overall (own-race, 8.18%; other-race, 3.18%) and between ages (own-race: 0- to 4.5-month-olds, 7.32%; 5- to 7.5-month-olds, 9.17%; and 8- to 12-month-olds, 7.68%; other-race: 0- to 4.5-month-olds, 6.12%; 5- to 7.5-month-olds, 3.70%; and 8- to 12-month-olds, 2.79%). Multilevel linear (mixed-effects) models were used to predict face discrimination; infants' capacity to discriminate faces is sensitive to face characteristics including race, gender, and emotion as well as the methods used, including task timing, coding method, and visual angle. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Method for secure electronic voting system: face recognition based approach
NASA Astrophysics Data System (ADS)
Alim, M. Affan; Baig, Misbah M.; Mehboob, Shahzain; Naseem, Imran
2017-06-01
In this paper, we propose a framework for a low-cost secure electronic voting system based on face recognition. Essentially, Local Binary Patterns (LBP) are used to characterize face features in texture format, followed by a chi-square distance for image classification. Two parallel systems are developed, based on smart phone and web applications, for the face learning and verification modules. The proposed system has two tiers of security, using a person ID followed by face verification. A class-specific threshold is used to control the security level of the face verification. Our system is evaluated on three standard databases and one real home-based database and achieves satisfactory recognition accuracies. Consequently, the proposed system provides a secure, hassle-free voting process that is less intrusive than other biometric approaches.
Hepatitis Diagnosis Using Facial Color Image
NASA Astrophysics Data System (ADS)
Liu, Mingjia; Guo, Zhenhua
Facial color diagnosis is an important diagnostic method in traditional Chinese medicine (TCM). However, due to its qualitative, subjective and experience-based nature, traditional facial color diagnosis has a very limited application in clinical medicine. To circumvent the subjective and qualitative problems of facial color diagnosis in Traditional Chinese Medicine, in this paper we present a novel computer-aided facial color diagnosis method (CAFCDM). The method has three parts: a face image database, an image preprocessing module and a diagnosis engine. The face image database was collected from a group of 116 patients affected by 2 kinds of liver diseases and 29 healthy volunteers. Quantitative color features are extracted from the facial images using popular digital image processing techniques. Then, a KNN classifier is employed to model the relationship between the quantitative color features and the diseases. The results show that the method can properly identify three groups: healthy, severe hepatitis with jaundice, and severe hepatitis without jaundice, with accuracy higher than 73%.
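A minimal sketch of the classification stage, assuming the quantitative color feature is simply a per-channel mean over the face region and using scikit-learn's k-nearest-neighbours classifier; the feature values and labels below are invented for illustration only.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical color features (mean value per color channel in a face region)
# and labels: 0 = healthy, 1 = hepatitis with jaundice, 2 = hepatitis without jaundice.
X = np.array([[180, 150, 130], [200, 170, 120], [150, 160, 90], [145, 155, 85]])
y = np.array([0, 0, 1, 2])

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
prediction = knn.predict(np.array([[190, 165, 125]]))
```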
Walker, Mirella; Vetter, Thomas
2009-10-13
The social judgments people make on the basis of the facial appearance of strangers strongly affect their behavior in different contexts. However, almost nothing is known about the physical information underlying these judgments. In this article, we present a new technology (a) to quantify the information in faces that is used for social judgments and (b) to manipulate the image of a human face in a way which is almost imperceptible but changes the personality traits ascribed to the depicted person. This method was developed in a high-dimensional face space by identifying vectors that capture maximum variability in judgments of personality traits. Our method of manipulating the salience of these vectors in faces was successfully transferred to novel photographs from an independent database. We evaluated this method by showing pairs of face photographs which differed only in the salience of one of six personality traits. Subjects were asked to decide which face was more extreme with respect to the trait in question. Results show that the image manipulation produced the intended attribution effect. All response accuracies were significantly above chance level. This approach to understanding and manipulating how a person is socially perceived could be useful in psychological research and could also be applied in advertising or the film industries.
A Viola-Jones based hybrid face detection framework
NASA Astrophysics Data System (ADS)
Murphy, Thomas M.; Broussard, Randy; Schultz, Robert; Rakvic, Ryan; Ngo, Hau
2013-12-01
Improvements in face detection performance would benefit many applications. The OpenCV library implements a standard solution, the Viola-Jones detector, with a statistically boosted rejection cascade of binary classifiers. Empirical evidence has shown that Viola-Jones underdetects in some instances. This research shows that a truncated cascade augmented by a neural network could recover these undetected faces. A hybrid framework is constructed, with a truncated Viola-Jones cascade followed by an artificial neural network, used to refine the face decision. Optimally, a truncation stage that captured all faces and allowed the neural network to remove the false alarms is selected. A feedforward backpropagation network with one hidden layer is trained to discriminate faces based upon the thresholding (detection) values of intermediate stages of the full rejection cascade. A clustering algorithm is used as a precursor to the neural network, to group significant overlappings. Evaluated on the CMU/VASC Image Database, comparison with an unmodified OpenCV approach shows: (1) a 37% increase in detection rates if constrained by the requirement of no increase in false alarms, (2) a 48% increase in detection rates if some additional false alarms are tolerated, and (3) an 82% reduction in false alarms with no reduction in detection rates. These results demonstrate improved face detection and could address the need for such improvement in various applications.
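The sketch below shows only the detection front end with OpenCV's stock Viola-Jones cascade plus a placeholder for the refinement step; truncating the cascade and training the feedforward network on intermediate stage values require access to per-stage scores that the basic Python API does not expose, so that part is left as a hypothetical hook.

```python
import cv2

# Stage 1: standard OpenCV Viola-Jones cascade (full, untruncated detector).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_candidates(bgr_image):
    """Return candidate face boxes (x, y, w, h) from the cascade."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)

# Stage 2 (hypothetical): a refinement network would re-score each candidate
# window, keeping true faces and rejecting false alarms.
def refine(candidates, score_fn, threshold=0.5):
    return [box for box in candidates if score_fn(box) > threshold]
```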
ERIC Educational Resources Information Center
Dhar, Vasant
1998-01-01
Shows how counterfactuals and machine learning methods can be used to guide exploration of large databases that addresses some of the fundamental problems that organizations face in learning from data. Discusses data mining, particularly in the financial arena; generating useful knowledge from data; and the evaluation of counterfactuals. (LRW)
Reduced isothermal feature set for long wave infrared (LWIR) face recognition
NASA Astrophysics Data System (ADS)
Donoso, Ramiro; San Martín, Cesar; Hermosilla, Gabriel
2017-06-01
In this paper, we introduce a new concept in the thermal face recognition area: isothermal features. This consists of a feature vector built from a thermal signature that depends on the emission of the skin of the person and its temperature. A thermal signature is the appearance of the face to infrared sensors and is unique to each person. The infrared face is decomposed into isothermal regions that present the thermal features of the face. Each isothermal region is modeled as a set of circles whose centers are pixels of the image, and the feature vector is composed of the maximum radius of the circles in each isothermal region. This feature vector corresponds to the thermal signature of a person. The face recognition process is built using a modification of the Expectation Maximization (EM) algorithm in conjunction with a proposed probabilistic index for the classification process. Results obtained using an infrared database are compared with typical state-of-the-art techniques, showing better performance, especially in uncontrolled acquisition scenarios.
Hammer, Jennifer L; Marsh, Abigail A
2015-04-01
Despite communicating a "negative" emotion, fearful facial expressions predominantly elicit behavioral approach from perceivers. It has been hypothesized that this seemingly paradoxical effect may occur due to fearful expressions' resemblance to vulnerable, infantile faces. However, this hypothesis has not yet been tested. We used a combined approach-avoidance/implicit association test (IAT) to test this hypothesis. Participants completed an approach-avoidance lever task during which they responded to fearful and angry facial expressions as well as neutral infant and adult faces presented in an IAT format. Results demonstrated an implicit association between fearful facial expressions and infant faces and showed that both fearful expressions and infant faces primarily elicit behavioral approach. The dominance of approach responses to both fearful expressions and infant faces decreased as a function of psychopathic personality traits. Results suggest that the prosocial responses to fearful expressions observed in most individuals may stem from their associations with infantile faces. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Hierarchical ensemble of global and local classifiers for face recognition.
Su, Yu; Shan, Shiguang; Chen, Xilin; Gao, Wen
2009-08-01
In the literature of psychophysics and neurophysiology, many studies have shown that both global and local features are crucial for face representation and recognition. This paper proposes a novel face recognition method which exploits both global and local discriminative features. In this method, global features are extracted from the whole face images by keeping the low-frequency coefficients of Fourier transform, which we believe encodes the holistic facial information, such as facial contour. For local feature extraction, Gabor wavelets are exploited considering their biological relevance. After that, Fisher's linear discriminant (FLD) is separately applied to the global Fourier features and each local patch of Gabor features. Thus, multiple FLD classifiers are obtained, each embodying different facial evidences for face recognition. Finally, all these classifiers are combined to form a hierarchical ensemble classifier. We evaluate the proposed method using two large-scale face databases: FERET and FRGC version 2.0. Experiments show that the results of our method are impressively better than the best known results with the same evaluation protocol.
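As a rough illustration of the global stream, the sketch below keeps only the low-frequency Fourier coefficients of a face image and feeds them to a linear discriminant classifier; the block size, and the treatment of the local Gabor streams and the final ensemble, are assumptions not taken from the paper.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def global_fourier_features(gray_face, keep=16):
    """Keep only low-frequency Fourier coefficients (holistic information
    such as facial contour); 'keep' is the half-size of the retained block."""
    F = np.fft.fftshift(np.fft.fft2(gray_face))
    cy, cx = np.array(F.shape) // 2
    block = F[cy - keep:cy + keep, cx - keep:cx + keep]
    return np.concatenate([block.real.ravel(), block.imag.ravel()])

# One FLD classifier on the global features (local Gabor streams would be
# trained the same way and their outputs fused afterwards):
# X = np.stack([global_fourier_features(img) for img in training_faces])
# fld = LinearDiscriminantAnalysis().fit(X, training_labels)
```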
Databases in the United Kingdom.
ERIC Educational Resources Information Center
Chadwyck-Healey, Charles
This overview of the status of online databases in the United Kingdom describes online users' attitudes and practices in light of two surveys conducted in the past two years. The Online Information Centre at ASLIB sampled 325 users, and Chadwyck-Healey, Ltd., conducted a face-to-face survey of librarians in a broad cross-section of 76 libraries.…
Database Design Learning: A Project-Based Approach Organized through a Course Management System
ERIC Educational Resources Information Center
Dominguez, Cesar; Jaime, Arturo
2010-01-01
This paper describes an active method for database design learning through practical tasks development by student teams in a face-to-face course. This method integrates project-based learning, and project management techniques and tools. Some scaffolding is provided at the beginning that forms a skeleton that adapts to a great variety of…
High-resolution face verification using pore-scale facial features.
Li, Dong; Zhou, Huiling; Lam, Kin-Man
2015-08-01
Face recognition methods, which usually represent face images using holistic or local facial features, rely heavily on alignment. Their performance also suffers severe degradation under variations in expression or pose, especially when there is only one gallery image per subject. With the easy access to high-resolution (HR) face images nowadays, some HR face databases have recently been developed. However, few studies have tackled the use of HR information for face recognition or verification. In this paper, we propose a pose-invariant face-verification method, which is robust to alignment errors, using HR information based on pore-scale facial features. A new keypoint descriptor, namely pore-Principal Component Analysis (PCA)-Scale Invariant Feature Transform (PPCASIFT), adapted from PCA-SIFT, is devised for the extraction of a compact set of distinctive pore-scale facial features. Having matched the pore-scale features of two face regions, an effective robust-fitting scheme is proposed for the face-verification task. Experiments show that, with only one frontal-view gallery image per subject, our proposed method outperforms a number of standard verification methods and can achieve excellent accuracy even when the faces are under large variations in expression and pose.
Robust kernel collaborative representation for face recognition
NASA Astrophysics Data System (ADS)
Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong
2015-05-01
One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to show the varieties of high-dimensional face images caused by illuminations, facial expressions, and postures. When the test sample is significantly different from the training samples of the same subject, the recognition performance will be sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We consider that the virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. In order to further improve robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training samples can be used in our method. We use noised face images to obtain virtual face samples; the noise can be approximately viewed as a reflection of the varieties of illuminations, facial expressions, and postures. Our work provides a simple and feasible way to obtain virtual face samples: Gaussian noise (and other types of noise) is imposed on the original training samples to obtain possible variations of them. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and kernel CRC.
Content-based video indexing and searching with wavelet transformation
NASA Astrophysics Data System (ADS)
Stumpf, Florian; Al-Jawad, Naseer; Du, Hongbo; Jassim, Sabah
2006-05-01
Biometric databases form an essential tool in the fight against international terrorism, organised crime and fraud. Various government and law enforcement agencies have their own biometric databases consisting of combinations of fingerprints, iris codes, face images/videos and speech records for an increasing number of persons. In many cases personal data linked to biometric records are incomplete and/or inaccurate. Besides, biometric data in different databases for the same individual may be recorded with different personal details. Following the recent terrorist atrocities, law enforcement agencies collaborate more than before and rely more heavily on database sharing. In such an environment, reliable biometric-based identification must determine not only who you are but also who else you are. In this paper we propose a compact content-based video signature and indexing scheme that can facilitate retrieval of multiple records in face biometric databases that belong to the same person even if their associated personal data are inconsistent. We assess the performance of our system using a benchmark audio-visual face biometric database that has multiple videos for each subject but with different identity claims. We demonstrate that retrieval of a relatively small number of videos that are nearest, in terms of the proposed index, to any video in the database captures a significant proportion of that individual's biometric data.
Emotion-independent face recognition
NASA Astrophysics Data System (ADS)
De Silva, Liyanage C.; Esther, Kho G. P.
2000-12-01
Current face recognition techniques tend to work well when recognizing faces under small variations in lighting, facial expression and pose, but deteriorate under more extreme conditions. In this paper, a face recognition system to recognize faces of known individuals, despite variations in facial expression due to different emotions, is developed. The eigenface approach is used for feature extraction. Classification methods include Euclidean distance, back propagation neural network and generalized regression neural network. These methods yield 100% recognition accuracy when the training database is representative, containing one image representing the peak expression for each emotion of each person apart from the neutral expression. The feature vectors used for comparison in the Euclidean distance method and for training the neural network must be all the feature vectors of the training set. These results are obtained for a face database consisting of only four persons.
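The eigenface-plus-Euclidean-distance pipeline described above can be illustrated with a short sketch. The neural-network classifiers mentioned in the abstract are not reproduced; component count and the use of flattened grayscale images are assumptions.

```python
# Minimal eigenface sketch: PCA feature extraction + Euclidean-distance matching.
# Assumes train_imgs is an (n_images, n_pixels) array of flattened grayscale faces.
import numpy as np

def train_eigenfaces(train_imgs, n_components=20):
    mean = train_imgs.mean(axis=0)
    centered = train_imgs - mean
    # SVD on centered data; rows of Vt are the eigenfaces
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = Vt[:n_components]
    features = centered @ eigenfaces.T            # training feature vectors
    return mean, eigenfaces, features

def recognize(probe_img, mean, eigenfaces, features, labels):
    w = (probe_img - mean) @ eigenfaces.T          # project probe into face space
    dists = np.linalg.norm(features - w, axis=1)   # Euclidean distance to all training vectors
    return labels[np.argmin(dists)]
```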
Multi-stream face recognition for crime-fighting
NASA Astrophysics Data System (ADS)
Jassim, Sabah A.; Sellahewa, Harin
2007-04-01
Automatic face recognition (AFR) is a challenging task that is increasingly becoming the preferred biometric trait for identification and has the potential of becoming an essential tool in the fight against crime and terrorism. Closed-circuit television (CCTV) cameras have increasingly been used over the last few years for surveillance in public places such as airports, train stations and shopping centers. They are used to detect and prevent crime, shoplifting, public disorder and terrorism. The work of law-enforcement and intelligence agencies is becoming more reliant on the use of databases of biometric data for large sections of the population. The face is one of the most natural biometric traits that can be used for identification and surveillance. However, variations in lighting conditions, facial expressions, face size and pose are a great obstacle to AFR. This paper is concerned with using wavelet-based face recognition schemes in the presence of variations of expression and illumination. In particular, we investigate the use of a combination of wavelet frequency channels for multi-stream face recognition, using various wavelet subbands as different face signal streams. The proposed schemes extend our recently developed face verification scheme for implementation on mobile devices. We present experimental results on the performance of our proposed schemes for a number of face databases, including a new AV database recorded on a PDA. By analyzing the various experimental data, we demonstrate that the multi-stream approach is more robust against variations in illumination and facial expressions than the previous single-stream approach.
Knowledge representation in metabolic pathway databases.
Stobbe, Miranda D; Jansen, Gerbert A; Moerland, Perry D; van Kampen, Antoine H C
2014-05-01
The accurate representation of all aspects of a metabolic network in a structured format, such that it can be used for a wide variety of computational analyses, is a challenge faced by a growing number of researchers. Analysis of five major metabolic pathway databases reveals that each database has made widely different choices to address this challenge, including how to deal with knowledge that is uncertain or missing. In concise overviews, we show how concepts such as compartments, enzymatic complexes and the direction of reactions are represented in each database. Importantly, concepts that a database does not represent are also described. Which aspects of the metabolic network need to be available in a structured format, and in what detail, differs per application. For example, for in silico phenotype prediction, a detailed representation of gene-protein-reaction relations and the compartmentalization of the network is essential. Our analysis also shows that current databases are still limited in capturing all details of the biology of the metabolic network, further illustrated with a detailed analysis of three metabolic processes. Finally, we conclude that the conceptual differences between the databases, which make knowledge exchange and integration a challenge, have not been resolved, so far, by the exchange formats in which knowledge representation is standardized.
Deep neural network features for horses identity recognition using multiview horses' face pattern
NASA Astrophysics Data System (ADS)
Jarraya, Islem; Ouarda, Wael; Alimi, Adel M.
2017-03-01
To monitor the state of horses in the barn, breeders need a monitoring system with a surveillance camera that can identify and distinguish between horses. We proposed in [5] a method for identifying horses at a distance using the frontal facial biometric modality. Face recognition becomes more difficult when the viewpoint changes. In this paper, the number of images in our THoDBRL'2015 database (Tunisian Horses DataBase of Regim Lab) is augmented by adding images of other views, so we use frontal, right-profile and left-profile views of the face. Moreover, we suggest an approach for multiview face recognition. First, we propose to use the Gabor filter for face characterization. Next, because of the increased number of images and the large number of Gabor features, we propose to use a deep neural network with an auto-encoder to obtain more pertinent features and to reduce the size of the feature vector. Finally, we evaluate the proposed approach on our THoDBRL'2015 database, using a linear SVM for classification.
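A hedged sketch of the Gabor-feature and SVM portion of such a pipeline is given below. The deep auto-encoder reduction used in the paper is replaced here by PCA as an illustrative stand-in; filter-bank parameters and pooled statistics are assumptions.

```python
# Sketch: Gabor filter-bank features + linear SVM classification.
# The paper's deep auto-encoder reduction is approximated here by PCA (an assumption).
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

def gabor_features(gray, ksize=31, sigmas=(4.0,), thetas=np.arange(0, np.pi, np.pi / 4)):
    feats = []
    for sigma in sigmas:
        for theta in thetas:
            kern = cv2.getGaborKernel((ksize, ksize), sigma, theta, 10.0, 0.5, 0)
            resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
            feats.append(resp.mean())              # simple pooled statistics per filter
            feats.append(resp.std())
    return np.array(feats)

def train_identifier(images, labels, n_components=16):
    X = np.array([gabor_features(im) for im in images])
    pca = PCA(n_components=min(n_components, X.shape[1])).fit(X)  # stand-in for the auto-encoder
    clf = LinearSVC().fit(pca.transform(X), labels)
    return pca, clf
```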
Face detection on distorted images using perceptual quality-aware features
NASA Astrophysics Data System (ADS)
Gunasekar, Suriya; Ghosh, Joydeep; Bovik, Alan C.
2014-02-01
We quantify the degradation in performance of a popular and effective face detector when human-perceived image quality is degraded by distortions due to additive white Gaussian noise, Gaussian blur or JPEG compression. It is observed that, within a certain range of perceived image quality, a modest increase in image quality can drastically improve face detection performance. These results can be used to guide resource or bandwidth allocation in a communication/delivery system that is associated with face detection tasks. A new face detector based on QualHOG features is also proposed that augments face-indicative HOG features with perceptual quality-aware spatial Natural Scene Statistics (NSS) features, yielding improved tolerance against image distortions. The new detector provides statistically significant improvements over a strong baseline on a large database of face images representing a wide range of distortions. To facilitate this study, we created a new Distorted Face Database, containing face and non-face patches from images impaired by a variety of common distortion types and levels. This new dataset is available for download and further experimentation at www.ideal.ece.utexas.edu/˜suriya/DFD/.
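The HOG component of such a face/non-face patch classifier can be sketched as follows. The perceptual quality-aware NSS features that distinguish QualHOG are omitted; patch size, HOG parameters, and the linear SVM are assumptions.

```python
# Sketch: HOG-based face/non-face patch classifier (the NSS quality-aware features
# used in QualHOG are omitted here; parameters are illustrative assumptions).
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_descriptor(patch):
    # patch: 2-D grayscale array, e.g. 64x64
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')

def train_face_detector(face_patches, nonface_patches):
    # face_patches / nonface_patches: lists of 2-D grayscale patches
    X = np.array([hog_descriptor(p) for p in list(face_patches) + list(nonface_patches)])
    y = np.array([1] * len(face_patches) + [0] * len(nonface_patches))
    return LinearSVC(C=1.0).fit(X, y)
```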
Sorted Index Numbers for Privacy Preserving Face Recognition
NASA Astrophysics Data System (ADS)
Wang, Yongjin; Hatzinakos, Dimitrios
2009-12-01
This paper presents a novel approach for changeable and privacy preserving face recognition. We first introduce a new method of biometric matching using the sorted index numbers (SINs) of feature vectors. Since it is impossible to recover any of the exact values of the original features, the transformation from original features to the SIN vectors is noninvertible. To address the irrevocable nature of biometric signals whilst obtaining stronger privacy protection, a random projection-based method is employed in conjunction with the SIN approach to generate changeable and privacy preserving biometric templates. The effectiveness of the proposed method is demonstrated on a large generic data set, which contains images from several well-known face databases. Extensive experimentation shows that the proposed solution may improve the recognition accuracy.
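The sorted-index-number idea combined with a key-dependent random projection can be sketched as below. This is one plausible ordering of the two steps under stated assumptions; the projection size, the key mechanism, and the rank-correlation matcher are illustrative, not the authors' design.

```python
# Sketch: changeable, privacy-preserving template from Sorted Index Numbers (SIN)
# after a random projection. Projection size and the rank-correlation matcher are assumptions.
import numpy as np
from scipy.stats import spearmanr

def make_template(feature_vec, key_seed, proj_dim=64):
    rng = np.random.default_rng(key_seed)          # user/application-specific key
    P = rng.standard_normal((proj_dim, feature_vec.size))
    projected = P @ feature_vec                    # changeable random projection
    return np.argsort(projected)                   # sorted index numbers (non-invertible)

def match(template_a, template_b):
    # Compare rank orders; higher correlation means more similar templates
    rho, _ = spearmanr(template_a, template_b)
    return rho
```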
Face-blind for other-race faces: Individual differences in other-race recognition impairments.
Wan, Lulu; Crookes, Kate; Dawel, Amy; Pidcock, Madeleine; Hall, Ashleigh; McKone, Elinor
2017-01-01
We report the existence of a previously undescribed group of people, namely individuals who are so poor at recognition of other-race faces that they meet criteria for clinical-level impairment (i.e., they are "face-blind" for other-race faces). Testing 550 participants, and using the well-validated Cambridge Face Memory Test for diagnosing face blindness, results show the rate of other-race face blindness to be nontrivial, specifically 8.1% of Caucasians and Asians raised in majority own-race countries. Results also show risk factors for other-race face blindness to include: a lack of interracial contact; and being at the lower end of the normal range of general face recognition ability (i.e., even for own-race faces); but not applying less individuating effort to other-race than own-race faces. Findings provide a potential resolution of contradictory evidence concerning the importance of the other-race effect (ORE), by explaining how it is possible for the mean ORE to be modest in size (suggesting a genuine but minor problem), and simultaneously for individuals to suffer major functional consequences in the real world (e.g., eyewitness misidentification of other-race offenders leading to wrongful imprisonment). Findings imply that, in legal settings, evaluating an eyewitness's chance of having made an other-race misidentification requires information about the underlying face recognition abilities of the individual witness. Additionally, analogy with prosopagnosia (inability to recognize even own-race faces) suggests everyday social interactions with other-race people, such as those between colleagues in the workplace, will be seriously impacted by the ORE in some people. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Earlinet database: new design and new products for a wider use of aerosol lidar data
NASA Astrophysics Data System (ADS)
Mona, Lucia; D'Amico, Giuseppe; Amato, Francesco; Linné, Holger; Baars, Holger; Wandinger, Ulla; Pappalardo, Gelsomina
2018-04-01
The EARLINET database is undergoing a complete reshaping to meet the broad demand for more intuitive products and the even wider demands arising from new initiatives such as Copernicus, the European Earth observation programme. The new design has been carried out in continuity with the past, to take advantage of the long-term database. In particular, the new structure will provide information suitable for synergy with other instruments, near-real-time (NRT) applications, validation and process studies, and climate applications.
Cheng, Xue Jun; McCarthy, Callum J; Wang, Tony S L; Palmeri, Thomas J; Little, Daniel R
2018-06-01
Upright faces are thought to be processed more holistically than inverted faces. In the widely used composite face paradigm, holistic processing is inferred from interference in recognition performance from a to-be-ignored face half for upright and aligned faces compared with inverted or misaligned faces. We sought to characterize the nature of holistic processing in composite faces in computational terms. We use logical-rule models (Fifić, Little, & Nosofsky, 2010) and Systems Factorial Technology (Townsend & Nozawa, 1995) to examine whether composite faces are processed by pooling top and bottom face halves into a single processing channel (coactive processing), which is one common mechanistic definition of holistic processing. By specifically operationalizing holistic processing as the pooling of features into a single decision process in our task, we are able to distinguish it from other processing models that may underlie composite face processing. For instance, a failure of selective attention might result even when top and bottom components of composite faces are processed in serial or in parallel without processing the entire face coactively. Our results show that performance is best explained by a mixture of serial and parallel processing architectures across all 4 upright and inverted, aligned and misaligned face conditions. The results indicate multichannel, featural processing of composite faces in a manner inconsistent with the notion of coactivity. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Fatal falls and PFAS use in the construction industry: Findings from the NIOSH FACE reports.
Dong, Xiuwen Sue; Largay, Julie A; Choi, Sang D; Wang, Xuanwen; Cain, Chris Trahan; Romano, Nancy
2017-05-01
This study analyzed the Construction FACE Database (CFD), a quantitative database developed from reports of the Fatality Assessment and Control Evaluation (FACE) program conducted by the National Institute for Occupational Safety and Health (NIOSH). The CFD contains detailed data on 768 fatalities in the construction industry reported by NIOSH and individual states from 1982 through June 30, 2015. The results show that falls accounted for 42% (325) of the 768 fatalities included in the CFD. Personal fall arrest systems (PFAS) were not available to more than half of the fall decedents (54%); nearly one in four fall decedents (23%) had access to PFAS, but were not using it at the time of the fall. Lack of access to PFAS was particularly high among residential building contractors as well as roofing, siding, and sheet metal industry sectors (∼70%). Although the findings may not represent the entire construction industry today, they do provide strong evidence in favor of fall protection requirements by the Occupational Safety and Health Administration (OSHA). In addition to stronger enforcement, educating employers and workers about the importance and effectiveness of fall protection is crucial for compliance and fall prevention. Copyright © 2017 Elsevier Ltd. All rights reserved.
Probabilistic Elastic Part Model: A Pose-Invariant Representation for Real-World Face Verification.
Li, Haoxiang; Hua, Gang
2018-04-01
Pose variation remains a major challenge for real-world face recognition. We approach this problem through a probabilistic elastic part model. We extract local descriptors (e.g., LBP or SIFT) from densely sampled multi-scale image patches. By augmenting each descriptor with its location, a Gaussian mixture model (GMM) is trained to capture the spatial-appearance distribution of the face parts of all face images in the training corpus, namely the probabilistic elastic part (PEP) model. Each mixture component of the GMM is confined to be a spherical Gaussian to balance the influence of the appearance and the location terms, which naturally defines a part. Given one or multiple face images of the same subject, the PEP model builds its PEP representation by sequentially concatenating descriptors identified by each Gaussian component in a maximum likelihood sense. We further propose a joint Bayesian adaptation algorithm to adapt the universally trained GMM to better model the pose variations between the target pair of faces/face tracks, which consistently improves face verification accuracy. Our experiments show that we achieve state-of-the-art face verification accuracy with the proposed representations on the Labeled Faces in the Wild (LFW) dataset, the YouTube video face database, and the CMU MultiPIE dataset.
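The core PEP construction, a spherical GMM over location-augmented local descriptors followed by per-component maximum-likelihood descriptor selection, can be sketched as follows. Descriptor extraction and the joint Bayesian adaptation step are not reproduced; the component count is an assumption.

```python
# Sketch of the PEP idea: spherical GMM over (descriptor, location) vectors;
# the face representation concatenates, per Gaussian, the most likely descriptor.
# Dense LBP/SIFT extraction is assumed to be done elsewhere.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_pep(aug_descriptors, n_parts=64):
    # aug_descriptors: (N, d+2) array, each row = [local descriptor, x, y]
    return GaussianMixture(n_components=n_parts, covariance_type='spherical',
                           max_iter=200).fit(aug_descriptors)

def pep_representation(gmm, aug_descriptors):
    # Per-component (unnormalized) log density; normalization constants drop out
    # because we only take an argmax within each component.
    logdens = np.stack([
        -0.5 * np.sum((aug_descriptors - mu) ** 2, axis=1) / var
        for mu, var in zip(gmm.means_, gmm.covariances_)
    ])                                              # (n_parts, N)
    picks = logdens.argmax(axis=1)                  # max-likelihood descriptor per part
    return aug_descriptors[picks].ravel()           # concatenated PEP representation
```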
Gaze-fixation to happy faces predicts mood repair after a negative mood induction.
Sanchez, Alvaro; Vazquez, Carmelo; Gomez, Diego; Joormann, Jutta
2014-02-01
The present study tested the interplay between mood and attentional deployment by examining attention to positive (i.e., happy faces) and negative (i.e., angry and sad faces) stimuli in response to experimental inductions of sad and happy mood. Participants underwent a negative, neutral, or positive mood induction procedure (MIP) which was followed by an assessment of their attentional deployment to emotional faces using eye-tracking technology. Faces were selected from the Karolinska Directed Emotional Faces (KDEF) database (Lundqvist, Flykt, & Öhman, 1998). In the positive MIP condition, analyses revealed a mood-congruent relation between positive mood and greater attentional deployment to happy faces. In the negative MIP condition, however, analyses revealed a mood-incongruent relation between increased negative mood and greater attentional deployment to happy faces. Furthermore, attentional deployment to happy faces after the negative MIP predicted participants' mood recovery at the end of the experimental session. These results suggest that attentional processing of positive information may play a role in mood repair, which may have important clinical implications. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Beyond attentional bias: a perceptual bias in a dot-probe task.
Bocanegra, Bruno R; Huijding, Jorg; Zeelenberg, René
2012-12-01
Previous dot-probe studies indicate that threat-related face cues induce a bias in spatial attention. Independently of spatial attention, a recent psychophysical study suggests that a bilateral fearful face cue improves low spatial-frequency perception (LSF) and impairs high spatial-frequency perception (HSF). Here, we combine these separate lines of research within a single dot-probe paradigm. We found that a bilateral fearful face cue, compared with a bilateral neutral face cue, speeded up responses to LSF targets and slowed down responses to HSF targets. This finding is important, as it shows that emotional cues in dot-probe tasks not only bias where information is preferentially processed (i.e., an attentional bias in spatial location), but also bias what type of information is preferentially processed (i.e., a perceptual bias in spatial frequency). PsycINFO Database Record (c) 2012 APA, all rights reserved.
Foveation: an alternative method to simultaneously preserve privacy and information in face images
NASA Astrophysics Data System (ADS)
Alonso, Víctor E.; Enríquez-Caldera, Rogerio; Sucar, Luis Enrique
2017-03-01
This paper presents a real-time foveation technique proposed as an alternative method for image obfuscation that simultaneously preserves privacy in face de-identification. The relevance of the proposed technique is discussed through a comparative study of the most common distortion methods for face images and an assessment of the performance and effectiveness of privacy protection. All the techniques presented here are evaluated by running them through face recognition software. Data utility preservation was evaluated using gender and facial expression classification. Results quantifying the trade-off between privacy protection and image information preservation at different obfuscation levels are presented. Comparative results using the facial expression subset of the FERET database show that the technique achieves a good trade-off between privacy and awareness, with a 30% recognition rate and a classification accuracy as high as 88% obtained from the common figures of merit using the privacy-awareness map.
Qian, Miao K; Quinn, Paul C; Heyman, Gail D; Pascalis, Olivier; Fu, Genyue; Lee, Kang
2017-05-01
Two studies with preschool-age children examined the effectiveness of perceptual individuation training at reducing racial bias (Study 1, N = 32; Study 2, N = 56). We found that training preschool-age children to individuate other-race faces resulted in a reduction in implicit racial bias while mere exposure to other-race faces produced no such effect. We also showed that neither individuation training nor mere exposure reduced explicit racial bias. Theoretically, our findings provide strong evidence for a causal link between individual-level face processing and implicit racial bias, and are consistent with the newly proposed perceptual-social linkage hypothesis. Practically, our findings suggest that offering children experiences that allow them to increase their expertise in processing individual other-race faces will help reduce their implicit racial bias. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Interactive searching of facial image databases
NASA Astrophysics Data System (ADS)
Nicholls, Robert A.; Shepherd, John W.; Shepherd, Jean
1995-09-01
A set of psychological facial descriptors has been devised to enable computerized searching of criminal photograph albums. The descriptors have been used to encode image databases of up to twelve thousand images. Using a system called FACES, the databases are searched by translating a witness's verbal description into corresponding facial descriptors. Trials of FACES have shown that this coding scheme is more productive and efficient than searching traditional photograph albums. An alternative method of searching the encoded database using a genetic algorithm is currently being tested. The genetic search method does not require the witness to verbalize a description of the target but merely to indicate a degree of similarity between the target and a limited selection of images from the database. The major drawback of FACES is that it requires manual encoding of images. Research is being undertaken to automate the process; however, it will require an algorithm that can predict human descriptive values. Alternatives to human-derived coding schemes exist using statistical classifications of images. Since databases encoded using statistical classifiers do not have an obvious direct mapping to human-derived descriptors, a search method that does not require the entry of human descriptors is needed. A genetic search algorithm is being tested for this purpose.
Mere social categorization modulates identification of facial expressions of emotion.
Young, Steven G; Hugenberg, Kurt
2010-12-01
The ability of the human face to communicate emotional states via facial expressions is well known, and past research has established the importance and universality of emotional facial expressions. However, recent evidence has revealed that facial expressions of emotion are most accurately recognized when the perceiver and expresser are from the same cultural ingroup. The current research builds on this literature and extends this work. Specifically, we find that mere social categorization, using a minimal-group paradigm, can create an ingroup emotion-identification advantage even when the culture of the target and perceiver is held constant. Follow-up experiments show that this effect is supported by differential motivation to process ingroup versus outgroup faces and that this motivational disparity leads to more configural processing of ingroup faces than of outgroup faces. Overall, the results point to distinct processing modes for ingroup and outgroup faces, resulting in differential identification accuracy for facial expressions of emotion. PsycINFO Database Record (c) 2010 APA, all rights reserved.
Redesigning photo-ID to improve unfamiliar face matching performance.
White, David; Burton, A Mike; Jenkins, Rob; Kemp, Richard I
2014-06-01
Viewers find it difficult to match photos of unfamiliar faces for identity. Despite this, the use of photographic ID is widespread. In this study we ask whether it is possible to improve face matching performance by replacing single photographs on ID documents with multiple photos or an average image of the bearer. In 3 experiments we compare photo-to-photo matching with photo-to-average matching (where the average is formed from multiple photos of the same person) and photo-to-array matching (where the array comprises separate photos of the same person). We consistently find an accuracy advantage for average images and photo arrays over single photos, and show that this improvement is driven by performance in match trials. In the final experiment, we find a benefit of 4-image arrays relative to average images for unfamiliar faces, but not for familiar faces. We propose that conventional photo-ID format can be improved, and discuss this finding in the context of face recognition more generally. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Cross-modal face recognition using multi-matcher face scores
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Blasch, Erik
2015-05-01
The performance of face recognition can be improved using information fusion of multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, a probe face may be a thermal image (especially at nighttime), while only visible face images are available in the gallery database. Matching a thermal probe face onto the visible gallery faces requires cross-modal matching approaches. A few such studies have been implemented in facial feature space with medium recognition performance. In this paper, we propose a cross-modal recognition approach, where multimodal faces are cross-matched in feature space and the recognition performance is enhanced with stereo fusion at the image, feature and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed with the three cross-matched face scores from the aforementioned algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logistic regression [BLR]) is trained and then tested with the score vectors using 10-fold cross validation. The proposed approach was validated with a multispectral stereo face dataset from 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84% and FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces onto the fused visible faces using three face scores and the BLR classifier.
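The final classification step, a classifier trained on three-dimensional score vectors and evaluated with 10-fold cross-validation, can be sketched as below. Standard binomial logistic regression is used here as the BLR classifier; the score arrays and labels are placeholders.

```python
# Sketch: fuse three cross-matched face scores with a binomial logistic regression
# classifier, evaluated by 10-fold cross-validation (scores and labels are placeholders).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def evaluate_fusion(score_vectors, labels):
    # score_vectors: (n_pairs, 3) array of [CGF, FPB, LDA] cross-modal match scores
    # labels: 1 for genuine (same-subject) pairs, 0 for impostor pairs
    clf = LogisticRegression()
    acc = cross_val_score(clf, score_vectors, labels, cv=10, scoring='accuracy')
    return acc.mean(), acc.std()
```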
Infrared and visible fusion face recognition based on NSCT domain
NASA Astrophysics Data System (ADS)
Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan
2018-01-01
Visible-light face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near-infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and low signal-to-noise ratio (SNR). Therefore, near-infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In this paper, a novel fusion algorithm in the non-subsampled contourlet transform (NSCT) domain is proposed for infrared and visible fusion face recognition. Firstly, NSCT is used to process the infrared and visible face images respectively, which exploits the image information at multiple scales, orientations, and frequency bands. Then, to exploit the effective discriminant features and balance the power of the high and low frequency bands of the NSCT coefficients, the local Gabor binary pattern (LGBP) and local binary pattern (LBP) are applied respectively in different frequency parts to obtain a robust representation of the infrared and visible face images. Finally, score-level fusion is used to fuse all the features for final classification. The visible and near-infrared face recognition is tested on the HITSZ Lab2 visible and near-infrared face database. Experimental results show that the proposed method extracts the complementary features of near-infrared and visible-light images and improves the robustness of unconstrained face recognition.
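The LBP histogram descriptor applied to one frequency band, together with a simple score-level fusion, can be sketched as follows. The NSCT decomposition and the LGBP variant are not reproduced; the LBP parameters and the fusion weight are assumptions.

```python
# Sketch: uniform LBP histogram as a band-level face descriptor, plus a simple
# weighted score-level fusion of visible and near-infrared match scores.
# (The NSCT decomposition and LGBP features from the paper are not reproduced here.)
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(band, P=8, R=1.0):
    codes = local_binary_pattern(band, P, R, method='uniform')   # values 0..P+1
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def fused_score(score_visible, score_nir, w=0.5):
    # Score-level fusion; the weight w is an assumption
    return w * score_visible + (1.0 - w) * score_nir
```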
Face photo-sketch synthesis and recognition.
Wang, Xiaogang; Tang, Xiaoou
2009-11-01
In this paper, we propose a novel face photo-sketch synthesis and recognition method using a multiscale Markov Random Fields (MRF) model. Our system has three components: 1) given a face photo, synthesizing a sketch drawing; 2) given a face sketch drawing, synthesizing a photo; and 3) searching for face photos in the database based on a query sketch drawn by an artist. It has useful applications for both digital entertainment and law enforcement. We assume that faces to be studied are in a frontal pose, with normal lighting and neutral expression, and have no occlusions. To synthesize sketch/photo images, the face region is divided into overlapping patches for learning. The size of the patches decides the scale of local face structures to be learned. From a training set which contains photo-sketch pairs, the joint photo-sketch model is learned at multiple scales using a multiscale MRF model. By transforming a face photo to a sketch (or transforming a sketch to a photo), the difference between photos and sketches is significantly reduced, thus allowing effective matching between the two in face sketch recognition. After the photo-sketch transformation, in principle, most of the proposed face photo recognition approaches can be applied to face sketch recognition in a straightforward way. Extensive experiments are conducted on a face sketch database including 606 faces, which can be downloaded from our Web site (http://mmlab.ie.cuhk.edu.hk/facesketch.html).
Gandy, Jessica R; Fossett, Lela; Wong, Brian J F
2016-05-01
This study aims to: 1) determine the current consumer trends of over-the-counter (OTC) and custom-made face mask usage among National Collegiate Athletic Association (NCAA) Division I athletic programs; and 2) provide a literature review of OTC face guards and a classified database. Literature review and survey. Consumer trends were obtained by contacting all 352 NCAA Division I programs. Athletic trainers present in the office when called answered the following questions: 1) "When an athlete breaks his or her nose, is a custom or generic face guard used?" and 2) "What brand is the generic face guard that is used?" Data was analyzed to determine trends among athletic programs. Also, a database of OTC devices available was generated using PubMed, Google, and manufacturer Web sites. Among the 352 NCAA Division I athletic programs, 254 programs participated in the survey (72% response rate). The majority preferred custom-made guards (46%). Disadvantages included high cost and slow manufacture turnaround time. Only 20% of the programs strictly used generic brands. For the face mask database, 10 OTC products were identified and classified into four categories based on design, with pricing ranging between $35.99 and $69.95. Only a handful of face masks exist for U.S. consumers, but none of them have been reviewed or classified by product design, sport application, price, and collegiate consumer use. This project details usage trends among NCAA Division I athletic programs and provides a list of available devices that can be purchased to protect the nose and face during sports. NA. Laryngoscope, 126:1054-1060, 2016. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.
From Pixels to Response Maps: Discriminative Image Filtering for Face Alignment in the Wild.
Asthana, Akshay; Zafeiriou, Stefanos; Tzimiropoulos, Georgios; Cheng, Shiyang; Pantic, Maja
2015-06-01
We propose a face alignment framework that relies on the texture model generated by the responses of discriminatively trained part-based filters. Unlike standard texture models built from pixel intensities or responses generated by generic filters (e.g. Gabor), our framework has two important advantages. First, by virtue of discriminative training, invariance to external variations (like identity, pose, illumination and expression) is achieved. Second, we show that the responses generated by discriminatively trained filters (or patch-experts) are sparse and can be modeled using a very small number of parameters. As a result, the optimization methods based on the proposed texture model can better cope with unseen variations. We illustrate this point by formulating both part-based and holistic approaches for generic face alignment and show that our framework outperforms the state-of-the-art on multiple "wild" databases. The code and dataset annotations are available for research purposes from http://ibug.doc.ic.ac.uk/resources.
Super-resolution method for face recognition using nonlinear mappings on coherent features.
Huang, Hua; He, Huiting
2011-01-01
The low resolution (LR) of face images significantly decreases the performance of face recognition. To address this problem, we present a super-resolution method that uses nonlinear mappings to infer coherent features that favor higher recognition by nearest neighbor (NN) classifiers for recognition of a single LR face image. Canonical correlation analysis is applied to establish the coherent subspaces between the principal component analysis (PCA) based features of high-resolution (HR) and LR face images. Then, a nonlinear mapping between HR/LR features can be built by radial basis functions (RBFs) with lower regression errors in the coherent feature space than in the PCA feature space. Thus, we can compute super-resolved coherent features corresponding to an input LR image according to the trained RBF model efficiently and accurately. Face identity can then be obtained by feeding these super-resolved features to a simple NN classifier. Extensive experiments on the Facial Recognition Technology, University of Manchester Institute of Science and Technology, and Olivetti Research Laboratory databases show that the proposed method outperforms state-of-the-art face recognition algorithms for a single LR image in terms of both recognition rate and robustness to facial variations of pose and expression.
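The coherent-feature pipeline can be illustrated with the following sketch: CCA couples PCA features of HR and LR faces, a regressor maps LR coherent features toward the HR coherent space, and a 1-NN classifier recognizes. Kernel ridge regression with an RBF kernel is used here as a stand-in for the paper's RBF network; component counts and hyperparameters are assumptions.

```python
# Sketch: coherent-feature super-resolution for recognition.
# CCA couples PCA features of HR and LR faces; an RBF-kernel ridge regression
# (a stand-in for the paper's RBF network) maps LR features to the HR coherent space.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.kernel_ridge import KernelRidge
from sklearn.neighbors import KNeighborsClassifier

def train(hr_pca_feats, lr_pca_feats, labels, n_cca=30):
    cca = CCA(n_components=n_cca).fit(lr_pca_feats, hr_pca_feats)
    lr_c, hr_c = cca.transform(lr_pca_feats, hr_pca_feats)   # coherent subspaces
    mapper = KernelRidge(kernel='rbf', alpha=1.0, gamma=1e-3).fit(lr_c, hr_c)
    nn = KNeighborsClassifier(n_neighbors=1).fit(hr_c, labels)
    return cca, mapper, nn

def recognize(lr_probe_feat, cca, mapper, nn):
    lr_c = cca.transform(lr_probe_feat[None, :])              # project probe to coherent space
    hr_c_hat = mapper.predict(lr_c)                           # super-resolved coherent feature
    return nn.predict(hr_c_hat)[0]
```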
Methodology of the determination of the uncertainties by using the biometric device the broadway 3D
NASA Astrophysics Data System (ADS)
Jasek, Roman; Talandova, Hana; Adamek, Milan
2016-06-01
Biometric identification by face is one of the most widely used methods of biometric identification. Because it provides faster and more accurate identification, it has been implemented in the security domain. A 3D face reader from the manufacturer Broadway was used for the measurements. It is equipped with a 3D camera system that uses structured-light scanning and stores the template as a 3D model of the face. The obtained data were evaluated with the Turnstile Enrolment Application (TEA) software. The measurements were performed with the Broadway 3D face reader. First, the person was scanned and stored in the database. Thereafter, the person was compared with the stored template in the database for each method. Finally, a measure of reliability was evaluated for the Broadway 3D face reader.
A smart technique for attendance system to recognize faces through parallelism
NASA Astrophysics Data System (ADS)
Prabhavathi, B.; Tanuja, V.; Madhu Viswanatham, V.; Rajashekhara Babu, M.
2017-11-01
The face is a major part of recognising a person, and with the help of image processing techniques we can exploit a person's physical features. In the traditional approach used in schools and colleges, the professor calls each student's name and then marks attendance. In this paper we deviate from that approach and adopt a new one based on image processing techniques. We present automatic attendance marking for students in a classroom. First, a classroom image is captured and stored in the data record. To the images stored in the database we apply an algorithm that includes steps such as histogram classification, noise removal, face detection and face recognition. Using these steps we detect faces and compare them with the database; attendance is marked automatically when the system recognizes a face.
A robust human face detection algorithm
NASA Astrophysics Data System (ADS)
Raviteja, Thaluru; Karanam, Srikrishna; Yeduguru, Dinesh Reddy V.
2012-01-01
Human face detection plays a vital role in many applications such as video surveillance, managing face image databases, and human-computer interfaces, among others. This paper proposes a robust algorithm for face detection in still color images that works well even in a crowded environment. The algorithm uses a conjunction of skin color histogram, morphological processing and geometrical analysis for detecting human faces. To reinforce the accuracy of face detection, we further identify mouth and eye regions to establish the presence or absence of a face in a particular region of interest.
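The skin-color segmentation, morphological cleanup, and geometric filtering steps mentioned above can be sketched as follows. The threshold ranges, kernel size, and area/aspect-ratio filters are common heuristic values, not the authors' exact settings.

```python
# Sketch: skin-colour segmentation in YCrCb, morphological cleanup, and a simple
# geometric filter on connected regions (thresholds are common heuristics).
import cv2
import numpy as np

def detect_face_candidates(bgr_image):
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))        # skin-tone mask
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    skin = cv2.morphologyEx(skin, cv2.MORPH_OPEN, kernel)           # remove speckle
    skin = cv2.morphologyEx(skin, cv2.MORPH_CLOSE, kernel)          # fill small holes
    contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h > 1000 and 0.6 < w / float(h) < 1.6:                # rough face geometry
            boxes.append((x, y, w, h))
    return boxes
```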
Facial soft biometric features for forensic face recognition.
Tome, Pedro; Vera-Rodriguez, Ruben; Fierrez, Julian; Ortega-Garcia, Javier
2015-12-01
This paper proposes a functional feature-based approach useful for real forensic caseworks, based on the shape, orientation and size of facial traits, which can be considered as a soft biometric approach. The motivation of this work is to provide a set of facial features, which can be understood by non-experts such as judges and support the work of forensic examiners who, in practice, carry out a thorough manual comparison of face images paying special attention to the similarities and differences in shape and size of various facial traits. This new approach constitutes a tool that automatically converts a set of facial landmarks to a set of features (shape and size) corresponding to facial regions of forensic value. These features are furthermore evaluated in a population to generate statistics to support forensic examiners. The proposed features can also be used as additional information that can improve the performance of traditional face recognition systems. These features follow the forensic methodology and are obtained in a continuous and discrete manner from raw images. A statistical analysis is also carried out to study the stability, discrimination power and correlation of the proposed facial features on two realistic databases: MORPH and ATVS Forensic DB. Finally, the performance of both continuous and discrete features is analyzed using different similarity measures. Experimental results show high discrimination power and good recognition performance, especially for continuous features. A final fusion of the best systems configurations achieves rank 10 match results of 100% for ATVS database and 75% for MORPH database demonstrating the benefits of using this information in practice. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Le, T Hoang Ngan; Luu, Khoa; Savvides, Marios
2013-08-01
Robust facial hair detection and segmentation is a highly valued soft biometric attribute for carrying out forensic facial analysis. In this paper, we propose a novel and fully automatic system, called SparCLeS, for beard/moustache detection and segmentation in challenging facial images. SparCLeS uses the multiscale self-quotient (MSQ) algorithm to preprocess facial images and deal with illumination variation. Histogram of oriented gradients (HOG) features are extracted from the preprocessed images and a dynamic sparse classifier is built using these features to classify a facial region as either containing skin or facial hair. A level set based approach, which makes use of the advantages of both global and local information, is then used to segment the regions of a face containing facial hair. Experimental results demonstrate the effectiveness of our proposed system in detecting and segmenting facial hair regions in images drawn from three databases, i.e., the NIST Multiple Biometric Grand Challenge (MBGC) still face database, the NIST Color Facial Recognition Technology FERET database, and the Labeled Faces in the Wild (LFW) database.
The effect of patient-practitioner communication on pain: a systematic review.
Mistiaen, P; van Osch, M; van Vliet, L; Howick, J; Bishop, F L; Di Blasi, Z; Bensing, J; van Dulmen, S
2016-05-01
Communication between patients and health care practitioners is expected to benefit health outcomes. The objective of this review was to assess the effects of experimentally varied communication on clinical patients' pain. In July 2012 we searched 11 databases, supplemented with forward and backward searches, for (quasi-)randomized controlled trials in which face-to-face communication was manipulated. We updated the search in June 2015 using the four most relevant databases (CINAHL, Cochrane Central, Psychinfo, PubMed). Fifty-one studies covering 5079 patients were included. The interventions were separated into three categories: cognitive care, emotional care, and procedural preparation. In all but five studies the outcome concerned acute pain. We found that, in general, communication has a small effect on (acute) pain. The 19 cognitive care studies showed that a positive suggestion may reduce pain, whereas a negative suggestion may increase pain, but the effects are small. The 14 emotional care studies showed no evidence of a direct effect on pain, although four studies showed a tendency for emotional care to lower patients' pain. Some of the 23 procedural preparation interventions showed a weak to moderate effect on lowering pain. Different types of communication have a significant but small effect on (acute) pain. Positive suggestions and informational preparation seem to lower patients' pain. Communication interventions show a large variety in quality, complexity and methodological rigour; they often used multiple components and it remains unclear what the effective elements of communication are. Future research is warranted to identify the effective components. © 2015 European Pain Federation - EFIC®
2D DOST based local phase pattern for face recognition
NASA Astrophysics Data System (ADS)
Moniruzzaman, Md.; Alam, Mohammad S.
2017-05-01
A new two-dimensional (2-D) Discrete Orthogonal Stockwell Transform (DOST) based Local Phase Pattern (LPP) technique has been proposed for efficient face recognition. The proposed technique uses the 2-D DOST as preliminary preprocessing and the local phase pattern to form a robust feature signature which can effectively accommodate various 3D facial distortions and illumination variations. The S-transform, an extension of the ideas of the continuous wavelet transform (CWT), is known for its local spectral phase properties in time-frequency representation (TFR). It provides a frequency-dependent resolution of the time-frequency space and absolutely referenced local phase information while maintaining a direct relationship with the Fourier spectrum, which is unique in TFR. Utilizing the 2-D S-transform for preprocessing and building the local phase pattern from the extracted phase information yields a fast and efficient technique for face recognition. The proposed technique shows better correlation discrimination compared to alternate pattern recognition techniques such as wavelet or Gabor based face recognition. The performance of the proposed method has been tested using the Yale and extended Yale facial databases under different environments such as illumination variation and 3D changes in facial expression. Test results show that the proposed technique yields better performance compared to alternate time-frequency representation (TFR) based face recognition techniques.
Spatial Pyramid Covariance based Compact Video Code for Robust Face Retrieval in TV-series.
Li, Yan; Wang, Ruiping; Cui, Zhen; Shan, Shiguang; Chen, Xilin
2016-10-10
We address the problem of face video retrieval in TV-series, which searches video clips based on the presence of a specific character, given one face track of that character. This is tremendously challenging because, on one hand, faces in TV-series are captured in largely uncontrolled conditions with complex appearance variations, and on the other hand the retrieval task typically needs an efficient representation with low time and space complexity. To handle this problem, we propose a compact and discriminative representation for the huge body of video data, named Compact Video Code (CVC). Our method first models the face track by its sample (i.e., frame) covariance matrix to capture the video data variations in a statistical manner. To incorporate discriminative information and obtain a more compact video signature suitable for retrieval, the high-dimensional covariance representation is further encoded as a much lower-dimensional binary vector, which finally yields the proposed CVC. Specifically, each bit of the code, i.e., each dimension of the binary vector, is produced via supervised learning in a max margin framework, which aims to make a balance between the discriminability and stability of the code. Besides, we further extend the descriptive granularity of the covariance matrix from the traditional pixel level to the more general patch level, and proceed to propose a novel hierarchical video representation named Spatial Pyramid Covariance (SPC) along with a fast calculation method. Face retrieval experiments on two challenging TV-series video databases, i.e., the Big Bang Theory and Prison Break, demonstrate the competitiveness of the proposed CVC over state-of-the-art retrieval methods. In addition, as a general video matching algorithm, CVC is also evaluated in the traditional video face recognition task on a standard Internet database, i.e., YouTube Celebrities, showing quite promising performance with an extremely compact code of only 128 bits.
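The covariance-then-binarize pipeline can be sketched in a few lines. Random projections are used here as a stand-in for the max-margin learned projections; frame descriptors, the code length, and the Hamming similarity are assumptions.

```python
# Sketch: face-track signature from the frame-feature covariance matrix, binarized
# into a compact code. The projections here are random stand-ins for the paper's
# max-margin learned projections.
import numpy as np

def track_covariance(frame_features):
    # frame_features: (n_frames, d) array of per-frame descriptors
    return np.cov(frame_features, rowvar=False)             # (d, d) sample covariance

def compact_video_code(cov, n_bits=128, seed=0):
    v = cov[np.triu_indices_from(cov)]                       # vectorize upper triangle
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_bits, v.size))                # stand-in projections
    return (W @ v > 0).astype(np.uint8)                      # binary code

def hamming_similarity(code_a, code_b):
    return 1.0 - np.mean(code_a != code_b)
```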
Hjordt, Liv V; Stenbæk, Dea S; Madsen, Kathrine Skak; Mc Mahon, Brenda; Jensen, Christian G; Vestergaard, Martin; Hageman, Ida; Meder, David; Hasselbalch, Steen G; Knudsen, Gitte M
2017-04-01
Depressed individuals often exhibit impaired inhibition to negative input and identification of positive stimuli, but it is unclear whether this is a state or trait feature. We here exploited a naturalistic model, namely individuals with seasonal affective disorder (SAD), to study this feature longitudinally. The goal of this study was to examine seasonal changes in inhibitory control and identification of emotional faces in individuals with SAD. Twenty-nine individuals diagnosed with winter-SAD and 30 demographically matched controls with no seasonality symptoms completed an emotional Go/NoGo task, requiring inhibition of prepotent responses to emotional facial expressions and an emotional face identification task twice, in winter and summer. In winter, individuals with SAD showed impaired ability to inhibit responses to angry (p = .0006) and sad faces (p = .011), and decreased identification of happy faces (p = .032) compared with controls. In summer, individuals with SAD and controls performed similarly on these tasks (ps > .24). We provide novel evidence that inhibition of angry and sad faces and identification of happy faces are impaired in SAD in the symptomatic phase, but not in the remitted phase. The affective biases in cognitive processing constitute state-dependent features of SAD. Our data show that reinstatement of a normal affective cognition should be possible and would constitute a major goal in psychiatric treatment to improve the quality of life for these patients. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
The neurobiology of self-face recognition in depressed adolescents with low or high suicidality.
Quevedo, Karina; Ng, Rowena; Scott, Hannah; Martin, Jodi; Smyda, Garry; Keener, Matt; Oppenheimer, Caroline W
2016-11-01
This study sought to test whether the neurobiology of self-processing differentiated depressed adolescents with high suicidality (HS) from those with low suicidality (LS) and healthy controls (HC; N = 119, MAGE = 14.79, SD = 1.64, Min = 11.3, Max = 17.8). Participants completed a visual self-recognition task in the scanner during which they identified their own or an unfamiliar adolescent face across 3 emotional expressions (happy, neutral or sad). A 3-group (HS, LS, HC) by 2 within-subject factors (2 Self conditions [self, other] and 3 Emotions [happy, neutral, sad]) GLM yielded (a) a main effect of Self condition with all participants showing higher activity in the right occipital, precuneus and fusiform during the self- versus other-face conditions; (b) a main effect of Group where all depressed youth showed higher dorsolateral prefrontal cortex activity than HC across all conditions, and with HS showing higher cuneus and occipital activity versus both LS and HC; and (c) a Group by Self by Emotion interaction with HS showing lower activity in both mid parietal, limbic, and prefrontal areas in the Happy self versus other-face condition relative to the LS group, who in turn had less activity compared to HC youth. Covarying for depression severity replicated all results except the third finding; In this subsequent analysis, a Group by Self interaction showed that although HC had similar midline cortical structure (MCS) activity for all faces, LS showed higher MCS activity for the self versus other faces, whereas HS showed the opposite pattern. Results suggest that the neurophysiology of emotionally charged self-referential information can distinguish depressed, suicidal youth versus nonsuicidal depressed and healthy adolescents. Neurophysiological differences and implications for the prediction of suicidality in youth are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
A novel deep learning algorithm for incomplete face recognition: Low-rank-recovery network.
Zhao, Jianwei; Lv, Yongbiao; Zhou, Zhenghua; Cao, Feilong
2017-10-01
There have been many methods to address the recognition of complete face images. However, in real applications, the images to be recognized are usually incomplete, and such recognition is more difficult to realize. In this paper, a novel convolutional neural network framework, named the low-rank-recovery network (LRRNet), is proposed to conquer this difficulty effectively, inspired by matrix completion and deep learning techniques. The proposed LRRNet first recovers the incomplete face images via an approach of matrix completion with the truncated nuclear norm regularization solution, and then extracts some low-rank parts of the recovered images as the filters. With these filters, some important features are obtained by means of binarization and histogram algorithms. Finally, these features are classified with classical support vector machines (SVMs). The proposed LRRNet achieves a high face recognition rate for heavily corrupted images and performs well and efficiently, especially in the case of large databases. Extensive experiments on several benchmark databases demonstrate that the proposed LRRNet performs better than some other excellent robust face recognition methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
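The recovery stage can be illustrated with a standard singular value thresholding (SVT) loop for matrix completion; this is a simpler surrogate for the truncated-nuclear-norm solver used in the paper, and the threshold, step size, and iteration count are assumptions.

```python
# Sketch: recover missing face-image pixels by singular value thresholding (SVT),
# a simpler surrogate for the paper's truncated-nuclear-norm solver.
import numpy as np

def svt_complete(img, mask, tau=50.0, n_iter=200, step=1.2):
    # img: 2-D array (arbitrary values where mask == 0); mask: 1 = observed pixel
    X = np.zeros_like(img, dtype=float)
    Y = np.zeros_like(img, dtype=float)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt            # shrink singular values
        Y = Y + step * mask * (img - X)                    # enforce observed entries
    return X
```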
A self-organized learning strategy for object recognition by an embedded line of attraction
NASA Astrophysics Data System (ADS)
Seow, Ming-Jung; Alex, Ann T.; Asari, Vijayan K.
2012-04-01
For humans, a picture is worth a thousand words, but to a machine, it is just a seemingly random array of numbers. Although machines are very fast and efficient, they are vastly inferior to humans for everyday information processing. Algorithms that mimic the way the human brain computes and learns may be the solution. In this paper we present a theoretical model based on the observation that images of similar visual perceptions reside in a complex manifold in an image space. The perceived features are often highly structured and hidden in a complex set of relationships or high-dimensional abstractions. To model the pattern manifold, we present a novel learning algorithm using a recurrent neural network. The brain memorizes information using a dynamical system made of interconnected neurons. Retrieval of information is accomplished in an associative sense. It starts from an arbitrary state that might be an encoded representation of a visual image and converges to another state that is stable. The stable state is what the brain remembers. In designing a recurrent neural network, it is usually of prime importance to guarantee the convergence in the dynamics of the network. We propose to modify this picture: if the brain remembers by converging to the state representing familiar patterns, it should also diverge from such states when presented with an unknown encoded representation of a visual image belonging to a different category. That is, the identification of an instability mode is an indication that a presented pattern is far away from any stored pattern and therefore cannot be associated with current memories. These properties can be used to circumvent the plasticity-stability dilemma by using the fluctuating mode as an indicator to create new states. We capture this behavior using a novel neural architecture and learning algorithm, in which the system performs self-organization utilizing a stability mode and an instability mode for the dynamical system. Based on this observation we developed a self-organizing line attractor, which is capable of generating new lines in the feature space to learn unrecognized patterns. Experiments performed on the UMIST pose database and the CMU face expression variant database for face recognition have shown that the proposed nonlinear line attractor is able to successfully identify individuals, and it provided a better recognition rate compared to state-of-the-art face recognition techniques. Experiments on the FRGC version 2 database have also provided excellent recognition rates for images captured in complex lighting environments. Experiments performed on the Japanese female face expression database and the Essex Grimace database using the self-organizing line attractor have also shown successful expression-invariant face recognition. These results show that the proposed model is able to create nonlinear manifolds in a multidimensional feature space to distinguish complex patterns.
Alternative face models for 3D face registration
NASA Astrophysics Data System (ADS)
Salah, Albert Ali; Alyüz, Neşe; Akarun, Lale
2007-01-01
3D has become an important modality for face biometrics. The accuracy of a 3D face recognition system depends on a correct registration that aligns the facial surfaces and makes a comparison possible. The best results obtained so far use a one-to-all registration approach, which means each new facial surface is registered to all faces in the gallery, at a great computational cost. We explore the approach of registering the new facial surface to an average face model (AFM), which automatically establishes correspondence to the pre-registered gallery faces. Going one step further, we propose that using a couple of well-selected AFMs can trade off computation time against accuracy. Drawing on cognitive justifications, we propose to employ category-specific alternative average face models for registration, which is shown to increase the accuracy of the subsequent recognition. We inspect thin-plate spline (TPS) and iterative closest point (ICP) based registration schemes under realistic assumptions on manual or automatic landmark detection prior to registration. We evaluate several approaches for the coarse initialization of ICP. We propose a new algorithm for constructing an AFM, and show that it works better than a recent approach. Finally, we perform simulations with multiple AFMs that correspond to different clusters in the face shape space and compare these with gender and morphology based groupings. We report our results on the FRGC 3D face database.
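The ICP registration step referred to above can be sketched as a generic point-to-point ICP loop with an SVD-based rigid alignment. Coarse initialization, landmarking, and AFM construction are not covered, and the synthetic point clouds below merely stand in for facial surfaces.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, n_iters=30):
    """Minimal point-to-point ICP: rigidly align src (Nx3) to dst (Mx3)."""
    src = src.copy()
    tree = cKDTree(dst)
    for _ in range(n_iters):
        # 1. correspondence: nearest destination point for each source point
        _, idx = tree.query(src)
        matched = dst[idx]
        # 2. best rigid transform (Kabsch) between the matched sets
        mu_s, mu_d = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:        # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        # 3. apply the transform and iterate
        src = src @ R.T + t
    return src

# Toy example: align a rotated and translated copy of a random point cloud
rng = np.random.default_rng(1)
dst = rng.normal(size=(500, 3))
angle = np.deg2rad(10)
R0 = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle),  np.cos(angle), 0],
               [0, 0, 1]])
src = dst @ R0.T + np.array([0.2, -0.1, 0.05])
aligned = icp(src, dst)
print(np.abs(aligned - dst).mean())     # should be close to zero
```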
Yi, Jizheng; Mao, Xia; Chen, Lijiang; Xue, Yuli; Rovetta, Alberto; Caleanu, Catalin-Daniel
2015-01-01
Illumination normalization of face images for face recognition and facial expression recognition is one of the most frequent and difficult problems in image processing. In order to obtain a face image with normal illumination, our method first divides the input face image into sixteen local regions and calculates the edge level percentage in each of them. Second, three local regions that meet the requirements of lower complexity and larger average gray value are selected to calculate the final illuminant direction, according to the error function between the measured intensity and the calculated intensity and the constraint function for an infinite light source model. Once the final illuminant direction of the input face image is known, the Retinex algorithm is improved in two respects: (1) we optimize the surround function; and (2) we clip the values at both ends of the histogram of the face image, determine the range of gray levels, and stretch this range into the dynamic range of the display device. Finally, we achieve illumination normalization and obtain the final face image. Unlike previous illumination normalization approaches, the method proposed in this paper does not require any training step or any knowledge of a 3D face or reflective surface model. Experimental results on the extended Yale face database B and CMU-PIE show that our method achieves a better normalization effect compared with existing techniques.
Image preprocessing study on KPCA-based face recognition
NASA Astrophysics Data System (ADS)
Li, Xuan; Li, Dehua
2015-12-01
Face recognition, as an important biometric identification method with friendly, natural, and convenient advantages, has attracted more and more attention. This paper investigates a face recognition system comprising face detection, feature extraction, and recognition; it studies the related theory and the key preprocessing techniques in the face detection process and, using the KPCA method, focuses on how different preprocessing methods lead to different recognition results. We choose the YCbCr color space for skin segmentation and integral projection for face location. We preprocess the face images with erosion and dilation (the opening and closing operations) and an illumination compensation method, and then apply a face recognition method based on kernel principal component analysis. Experiments were carried out on a typical face database, with the algorithms implemented on the MATLAB platform. Experimental results show that integrating the kernel method with the PCA algorithm, i.e., using a nonlinear feature extraction method, can under certain conditions make the extracted features represent the original image information better and thus yield a higher recognition rate. In the image preprocessing stage, we found that different operations on the images may produce different results, leading to different recognition rates in the recognition stage. At the same time, in the kernel principal component analysis, the value of the power of the polynomial function can affect the recognition result.
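A minimal sketch of the KPCA stage follows, using scikit-learn's KernelPCA with a polynomial kernel and a nearest-neighbor matcher on the Olivetti faces. The skin segmentation, integral projection, and morphological preprocessing described in the abstract are not reproduced, and the parameter values are illustrative.

```python
from sklearn.datasets import fetch_olivetti_faces    # downloads a small dataset on first use
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

faces = fetch_olivetti_faces()
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

# Polynomial-kernel PCA: the abstract notes that the power of the polynomial
# kernel influences the recognition result, so a few degrees are compared here.
for degree in (2, 3, 4):
    kpca = KernelPCA(n_components=100, kernel="poly", degree=degree)
    clf = KNeighborsClassifier(n_neighbors=1)
    clf.fit(kpca.fit_transform(X_train), y_train)
    acc = clf.score(kpca.transform(X_test), y_test)
    print(f"degree={degree}: accuracy={acc:.3f}")
```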
Multi-stream face recognition on dedicated mobile devices for crime-fighting
NASA Astrophysics Data System (ADS)
Jassim, Sabah A.; Sellahewa, Harin
2006-09-01
Automatic face recognition is a useful tool in the fight against crime and terrorism. Technological advances in mobile communication systems and multi-application mobile devices enable the creation of hybrid platforms for active and passive surveillance. A dedicated mobile device that incorporates audio-visual sensors would not only complement existing networks of fixed surveillance devices (e.g. CCTV) but could also provide wide geographical coverage in almost any situation and anywhere. Such a device can hold a small portion of a law-enforcement agency's biometric database, consisting of audio and/or visual data of a number of suspects, wanted persons, or missing persons who are expected to be in a local geographical area. This will assist law-enforcement officers on the ground in identifying persons whose biometric templates are downloaded onto their devices. Biometric data on the device can be regularly updated, which will reduce the number of faces an officer has to remember. Such a dedicated device would act as an active/passive mobile surveillance unit that incorporates automatic identification. This paper is concerned with the feasibility of using wavelet-based face recognition schemes on such devices. The proposed schemes extend our recently developed face verification scheme for implementation on a currently available PDA. In particular, we investigate the use of a combination of wavelet frequency channels for multi-stream face recognition. We present experimental results on the performance of our proposed schemes for a number of publicly available face databases, including a new AV database of videos recorded on a PDA.
Gender classification system in uncontrolled environments
NASA Astrophysics Data System (ADS)
Zeng, Pingping; Zhang, Yu-Jin; Duan, Fei
2011-01-01
Most face analysis systems available today perform mainly on restricted databases of images in terms of size, age, and illumination. In addition, it is frequently assumed that all images are frontal and unconcealed. In practice, in non-guided real-time surveillance, the face pictures captured may often be partially covered and show varying degrees of head rotation. In this paper, a system intended for real-time surveillance with an un-calibrated camera and non-guided photography is described. It mainly consists of five parts: face detection, non-face filtering, best-angle face selection, texture normalization, and gender classification. Emphasis is placed on the non-face filtering and best-angle face selection parts as well as texture normalization. Best-angle faces are identified by PCA reconstruction, which amounts to an implicit face alignment and results in a large increase in gender classification accuracy. A dynamic skin model and a masked PCA reconstruction algorithm are applied to filter out faces detected in error. In order to fully include facial-texture and shape-outline features, a hybrid feature combining Gabor wavelets and PHoG (pyramid histogram of gradients) is proposed to balance inner texture and outer contour. A comparative study of the effects of different non-face filtering and texture masking methods on gender classification by SVM is reported through experiments on a set of UT (a company name) face images, a large number of internet images, and the CAS (Chinese Academy of Sciences) face database. Some encouraging results are obtained.
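The classification stage can be sketched with a plain HOG descriptor and an SVM on stand-in data. The authors' Gabor + PHoG hybrid feature, non-face filtering, and best-angle selection are not reproduced here; the labels and images below are purely illustrative.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Stand-in data: random 64x64 "face" crops with hypothetical gender labels.
rng = np.random.default_rng(0)
images = rng.random((60, 64, 64))
labels = rng.integers(0, 2, size=60)      # 0 = female, 1 = male (toy labels only)

# Texture/shape descriptor: plain HOG here, where the paper combines
# Gabor wavelet responses with a pyramid HOG (PHoG).
features = np.array([hog(img, orientations=8, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for img in images])

clf = SVC(kernel="rbf", C=1.0)
print(cross_val_score(clf, features, labels, cv=5).mean())   # chance level on random labels
```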
Neural networks related to dysfunctional face processing in autism spectrum disorder
Nickl-Jockschat, Thomas; Rottschy, Claudia; Thommes, Johanna; Schneider, Frank; Laird, Angela R.; Fox, Peter T.; Eickhoff, Simon B.
2016-01-01
One of the most consistent neuropsychological findings in autism spectrum disorders (ASD) is a reduced interest in and impaired processing of human faces. We conducted an activation likelihood estimation meta-analysis on 14 functional imaging studies on neural correlates of face processing enrolling a total of 164 ASD patients. Subsequently, normative whole-brain functional connectivity maps for the identified regions of significant convergence were computed for the task-independent (resting-state) and task-dependent (co-activations) state in healthy subjects. Quantitative functional decoding was performed by reference to the BrainMap database. Finally, we examined the overlap of the delineated network with the results of a previous meta-analysis on structural abnormalities in ASD as well as with brain regions involved in human action observation/imitation. We found a single cluster in the left fusiform gyrus showing significantly reduced activation during face processing in ASD across all studies. Both task-dependent and task-independent analyses indicated significant functional connectivity of this region with the temporo-occipital and lateral occipital cortex, the inferior frontal and parietal cortices, the thalamus and the amygdala. Quantitative reverse inference then indicated an association of these regions mainly with face processing, affective processing, and language-related tasks. Moreover, we found that the cortex in the region of right area V5 displaying structural changes in ASD patients showed consistent connectivity with the region showing aberrant responses in the context of face processing. Finally, this network was also implicated in the human action observation/imitation network. In summary, our findings thus suggest a functionally and structurally disturbed network of occipital regions related primarily to face (but potentially also language) processing, which interact with inferior frontal as well as limbic regions and may be the core of aberrant face processing and reduced interest in faces in ASD. PMID:24869925
Dawel, Amy; Wright, Luke; Irons, Jessica; Dumbleton, Rachael; Palermo, Romina; O'Kearney, Richard; McKone, Elinor
2017-08-01
In everyday social interactions, people's facial expressions sometimes reflect genuine emotion (e.g., anger in response to a misbehaving child) and sometimes do not (e.g., smiling for a school photo). There is increasing theoretical interest in this distinction, but little is known about perceived emotion genuineness for existing facial expression databases. We present a new method for rating perceived genuineness using a neutral-midpoint scale (-7 = completely fake; 0 = don't know; +7 = completely genuine) that, unlike previous methods, provides data on both relative and absolute perceptions. Normative ratings from typically developing adults for five emotions (anger, disgust, fear, sadness, and happiness) provide three key contributions. First, the widely used Pictures of Facial Affect (PoFA; i.e., "the Ekman faces") and the Radboud Faces Database (RaFD) are typically perceived as not showing genuine emotion. Also, in the only published set for which the actual emotional states of the displayers are known (via self-report; the McLellan faces), percepts of emotion genuineness often do not match actual emotion genuineness. Second, we provide genuine/fake norms for 558 faces from several sources (PoFA, RaFD, KDEF, Gur, FacePlace, McLellan, News media), including a list of 143 stimuli that are event-elicited (rather than posed) and, congruently, perceived as reflecting genuine emotion. Third, using the norms we develop sets of perceived-as-genuine (from event-elicited sources) and perceived-as-fake (from posed sources) stimuli, matched on sex, viewpoint, eye-gaze direction, and rated intensity. We also outline the many types of research questions that these norms and stimulus sets could be used to answer.
Hierarchical Representation Learning for Kinship Verification.
Kohli, Naman; Vatsa, Mayank; Singh, Richa; Noore, Afzel; Majumdar, Angshul
2017-01-01
Kinship verification has a number of applications such as organizing large collections of images and recognizing resemblances among humans. In this paper, first, a human study is conducted to understand the capabilities of human mind and to identify the discriminatory areas of a face that facilitate kinship-cues. The visual stimuli presented to the participants determine their ability to recognize kin relationship using the whole face as well as specific facial regions. The effect of participant gender and age and kin-relation pair of the stimulus is analyzed using quantitative measures such as accuracy, discriminability index d' , and perceptual information entropy. Utilizing the information obtained from the human study, a hierarchical kinship verification via representation learning (KVRL) framework is utilized to learn the representation of different face regions in an unsupervised manner. We propose a novel approach for feature representation termed as filtered contractive deep belief networks (fcDBN). The proposed feature representation encodes relational information present in images using filters and contractive regularization penalty. A compact representation of facial images of kin is extracted as an output from the learned model and a multi-layer neural network is utilized to verify the kin accurately. A new WVU kinship database is created, which consists of multiple images per subject to facilitate kinship verification. The results show that the proposed deep learning framework (KVRL-fcDBN) yields the state-of-the-art kinship verification accuracy on the WVU kinship database and on four existing benchmark data sets. Furthermore, kinship information is used as a soft biometric modality to boost the performance of face verification via product of likelihood ratio and support vector machine based approaches. Using the proposed KVRL-fcDBN framework, an improvement of over 20% is observed in the performance of face verification.
Tracking and recognition face in videos with incremental local sparse representation model
NASA Astrophysics Data System (ADS)
Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang
2013-10-01
This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First, a robust face tracking algorithm is proposed that employs a local sparse appearance model and a covariance pooling method. In the subsequent face recognition stage, a novel template update strategy incorporating incremental subspace learning allows our recognition algorithm to adapt the template to appearance changes and reduces the influence of occlusion and illumination variation. This leads to robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition in real-world noisy videos on the YouTube database, which includes 47 celebrities. Our proposed method produces a high face recognition rate of 95% across all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. On the challenging dataset in which faces undergo occlusion and illumination variation, and in tracking and recognition experiments under significant pose variation on the University of California, San Diego (Honda/UCSD) database, our proposed method also consistently demonstrates a high recognition rate.
NASA Astrophysics Data System (ADS)
Abdullah, Nurul Azma; Saidi, Md. Jamri; Rahman, Nurul Hidayah Ab; Wen, Chuah Chai; Hamid, Isredza Rahmi A.
2017-10-01
In practice, identification of criminals in Malaysia is done through thumbprint identification. However, this type of identification is limited, as criminals nowadays are increasingly careful not to leave their thumbprints at the scene. With the advent of security technology, cameras, especially CCTV, have been installed in many public and private areas to provide surveillance. CCTV footage can be used to identify suspects at the scene. However, because little software has been developed to automatically detect the similarity between a photo in the footage and the recorded photos of criminals, law enforcement still relies on thumbprint identification. In this paper, an automated facial recognition system for a criminal database is proposed using the well-known Principal Component Analysis approach. This system is able to detect and recognize faces automatically, which will help law enforcement identify a suspect when no thumbprint is present at the scene. The results show that about 80% of input photos can be matched with the template data.
A face and palmprint recognition approach based on discriminant DCT feature extraction.
Jing, Xiao-Yuan; Zhang, David
2004-12-01
In the field of image processing and recognition, the discrete cosine transform (DCT) and linear discrimination are two widely used techniques. Based on them, we present a new face and palmprint recognition approach in this paper. It first uses a two-dimensional separability judgment to select the DCT frequency bands with favorable linear separability. Then, from the selected bands, it extracts the linear discriminative features by an improved Fisherface method and performs classification with the nearest neighbor classifier. We analyze in detail the theoretical advantages of our approach in feature extraction. Experiments on face databases and a palmprint database demonstrate that, compared to state-of-the-art linear discrimination methods, our approach obtains better classification performance. It can significantly improve the recognition rates for face and palmprint data and effectively reduce the dimension of the feature space.
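A rough sketch of the band-selection-plus-discriminant idea: keep an upper-left block of 2-D DCT coefficients as a low-frequency band, then classify with Fisher LDA and a nearest-neighbor rule. The separability-based band selection and the improved Fisherface details are simplified away; the block size and component count are illustrative.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.datasets import fetch_olivetti_faces
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

faces = fetch_olivetti_faces()              # 400 images of 40 subjects, 64x64 each

def dct_band(img, size=8):
    """Keep the upper-left size x size block of 2-D DCT coefficients."""
    return dctn(img, norm="ortho")[:size, :size].ravel()

X = np.array([dct_band(img) for img in faces.images])
model = make_pipeline(LinearDiscriminantAnalysis(n_components=30),
                      KNeighborsClassifier(n_neighbors=1))
print(cross_val_score(model, X, faces.target, cv=5).mean())
```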
Bennett, Tellen D; Kaufman, Robert; Schiff, Melissa; Mock, Charles; Quan, Linda
2006-09-01
The mechanism, crash characteristics, and spectrum of lower extremity injuries in children restrained in forward-facing car seats during front and rear impacts have not been described. We identified in two databases children who sustained lower extremity injuries while restrained in forward-facing car seats. To identify the mechanism, we analyzed crash reconstructions from three frontal-impact cases from the Crash Injury Research and Engineering Network. To further describe the crash and injury characteristics we evaluated children between 1 and 4 years of age with lower extremity injuries from front or rear impacts in the National Automotive Sampling System (NASS) Crashworthiness Data System (CDS) database. Crash reconstruction data demonstrated that the likely mechanism of lower extremity injury was contact between the legs and the front seatbacks. In the CDS database, we identified 15 children with lower extremity injuries in a forward-facing child seat, usually (13 out of 15) placed in the rear seat, incurred in frontal impacts (11 out of 15). Several (5 out of 15) children were in unbelted or improperly secured forward-facing car seats. Injury Severity Scores varied widely (5-50). Children in forward-facing car seats involved in severe front or rear crashes may incur a range of lower extremity injury from impact with the car interior component in front of them. Crash scene photography can provide useful information about anatomic sites at risk for injury and alert emergency department providers to possible subtle injury.
Lavelli, Manuela; Fogel, Alan
2013-12-01
A microgenetic research design with a multiple case study method and a combination of quantitative and qualitative analyses was used to investigate interdyad differences in real-time dynamics and developmental change processes in mother-infant face-to-face communication over the first 3 months of life. Weekly observations of 24 mother-infant dyads with analyses performed dyad by dyad showed that most dyads go through 2 qualitatively different developmental phases of early face-to-face communication: After a phase of mutual attentiveness, mutual engagement begins in Weeks 7-8, with infant smiling and cooing bidirectionally linked with maternal mirroring. This gives rise to sequences of positive feedback that, by the 3rd month, dynamically stabilizes into innovative play routines. However, when there is a lack of bidirectional positive feedback between infant and maternal behaviors, and a lack of permeability of the early communicative patterns to incorporate innovations, the development of the mutual engagement phase is compromised. The findings contribute both to theories of relationship change processes and to clinical work with at-risk mother-infant interactions. PsycINFO Database Record (c) 2013 APA, all rights reserved.
The roles of perceptual and conceptual information in face recognition.
Schwartz, Linoy; Yovel, Galit
2016-11-01
The representation of familiar objects is comprised of perceptual information about their visual properties as well as the conceptual knowledge that we have about them. What is the relative contribution of perceptual and conceptual information to object recognition? Here, we examined this question by designing a face familiarization protocol during which participants were either exposed to rich perceptual information (viewing each face in different angles and illuminations) or with conceptual information (associating each face with a different name). Both conditions were compared with single-view faces presented with no labels. Recognition was tested on new images of the same identities to assess whether learning generated a view-invariant representation. Results showed better recognition of novel images of the learned identities following association of a face with a name label, but no enhancement following exposure to multiple face views. Whereas these findings may be consistent with the role of category learning in object recognition, face recognition was better for labeled faces only when faces were associated with person-related labels (name, occupation), but not with person-unrelated labels (object names or symbols). These findings suggest that association of meaningful conceptual information with an image shifts its representation from an image-based percept to a view-invariant concept. They further indicate that the role of conceptual information should be considered to account for the superior recognition that we have for familiar faces and objects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Maximal likelihood correspondence estimation for face recognition across pose.
Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang
2014-10-01
Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods have been proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems of previous image matching-based correspondence learning methods: 1) they fail to fully exploit face-specific structure information in correspondence estimation, and 2) they fail to learn a personalized correspondence for each probe image. To this end, we first build a model, termed the morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on a maximal likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using a linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of the novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in complex wild environments, i.e., the Labeled Faces in the Wild database.
West, Gordon; Patrician, Patricia A; Loan, Lori
2012-12-01
Data from the Military Nursing Outcomes Database (MilNOD) project demonstrate that inadequately staffed shifts can increase the likelihood of adverse events, such as falls with injury, medication errors, and needlestick injuries to nurses. Such evidence can be used to show that it takes not only the right number of nursing staff on every shift to ensure safe patient care, but also the right mix of expertise and experience. Based on findings from the MilNOD project, the authors present realistic scenarios of common dilemmas hospitals face in nurse staffing, illustrating the potential hazards for patients and nurses alike.
Pose Invariant Face Recognition Based on Hybrid Dominant Frequency Features
NASA Astrophysics Data System (ADS)
Wijaya, I. Gede Pasek Suta; Uchimura, Keiichi; Hu, Zhencheng
Face recognition is one of the most active research areas in pattern recognition, not only because the face is a biometric characteristic of human beings but also because there are many potential applications of face recognition, ranging from human-computer interaction to authentication, security, and surveillance. This paper presents an approach to pose-invariant human face image recognition. The proposed scheme is based on the analysis of discrete cosine transforms (DCT) and discrete wavelet transforms (DWT) of face images. From both the DCT and DWT domain coefficients, which describe the facial information, we build a compact and meaningful feature vector using simple statistical measures and quantization. This feature vector is called the hybrid dominant frequency feature. Then, we apply a combination of the L2 and Lq metrics to classify the hybrid dominant frequency features into a person's class. The aim of the proposed system is to overcome the high memory space requirement, the high computational load, and the retraining problems of previous methods. The proposed system is tested using several face databases and the experimental results are compared to a well-known Eigenface method. The proposed method shows good performance, robustness, stability, and accuracy without requiring geometrical normalization. Furthermore, the proposed method has low computational cost, requires little memory space, and can overcome the retraining problem.
A Comparative Survey of Methods for Remote Heart Rate Detection From Frontal Face Videos
Wang, Chen; Pun, Thierry; Chanel, Guillaume
2018-01-01
Remotely measuring physiological activity can provide substantial benefits for both the medical and the affective computing applications. Recent research has proposed different methodologies for the unobtrusive detection of heart rate (HR) using human face recordings. These methods are based on subtle color changes or motions of the face due to cardiovascular activities, which are invisible to human eyes but can be captured by digital cameras. Several approaches have been proposed such as signal processing and machine learning. However, these methods are compared with different datasets, and there is consequently no consensus on method performance. In this article, we describe and evaluate several methods defined in literature, from 2008 until present day, for the remote detection of HR using human face recordings. The general HR processing pipeline is divided into three stages: face video processing, face blood volume pulse (BVP) signal extraction, and HR computation. Approaches presented in the paper are classified and grouped according to each stage. At each stage, algorithms are analyzed and compared based on their performance using the public database MAHNOB-HCI. Results found in this article are limited on MAHNOB-HCI dataset. Results show that extracted face skin area contains more BVP information. Blind source separation and peak detection methods are more robust with head motions for estimating HR. PMID:29765940
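The final HR-computation stage of the pipeline described above can be sketched on a synthetic BVP-like trace: band-pass the mean green-channel signal of the face region and take the dominant spectral peak. This is a generic illustration of that stage, not any specific method from the survey; all values are synthetic.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 30.0                                   # camera frame rate (Hz)
t = np.arange(0, 30, 1 / fs)

# Synthetic "mean green channel of the face region": 72 bpm pulse + drift + noise
true_hr = 72.0
signal = (0.5 * np.sin(2 * np.pi * (true_hr / 60.0) * t)
          + 0.05 * t + np.random.randn(t.size) * 0.3)

# Band-pass to the plausible HR range (0.7-4 Hz, i.e. 42-240 bpm)
b, a = butter(3, [0.7, 4.0], btype="band", fs=fs)
bvp = filtfilt(b, a, signal)

# HR = dominant spectral peak of the filtered trace
freqs = np.fft.rfftfreq(bvp.size, 1 / fs)
spectrum = np.abs(np.fft.rfft(bvp))
band = (freqs >= 0.7) & (freqs <= 4.0)
hr_bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
print(f"estimated HR: {hr_bpm:.1f} bpm")    # should be close to 72
```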
Locally linear regression for pose-invariant face recognition.
Chai, Xiujuan; Shan, Shiguang; Chen, Xilin; Gao, Wen
2007-07-01
The variation of facial appearance due to the viewpoint (/pose) degrades face recognition systems considerably, which is one of the bottlenecks in face recognition. One of the possible solutions is generating virtual frontal view from any given nonfrontal view to obtain a virtual gallery/probe face. Following this idea, this paper proposes a simple, but efficient, novel locally linear regression (LLR) method, which generates the virtual frontal view from a given nonfrontal face image. We first justify the basic assumption of the paper that there exists an approximate linear mapping between a nonfrontal face image and its frontal counterpart. Then, by formulating the estimation of the linear mapping as a prediction problem, we present the regression-based solution, i.e., globally linear regression. To improve the prediction accuracy in the case of coarse alignment, LLR is further proposed. In LLR, we first perform dense sampling in the nonfrontal face image to obtain many overlapped local patches. Then, the linear regression technique is applied to each small patch for the prediction of its virtual frontal patch. Through the combination of all these patches, the virtual frontal view is generated. The experimental results on the CMU PIE database show distinct advantage of the proposed method over Eigen light-field method.
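The globally linear variant of the idea can be sketched as a ridge-regularized linear map from nonfrontal to frontal pixel vectors; LLR applies the same least-squares step patch by patch. The data and dimensions below are toy stand-ins, not face images.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy stand-ins: vectorized nonfrontal / frontal image pairs for the same identities
n_pairs, dim = 200, 16 * 16
nonfrontal = rng.random((n_pairs, dim))
frontal = nonfrontal @ rng.normal(scale=0.05, size=(dim, dim)) + 0.1 * rng.random((n_pairs, dim))

# Globally linear regression: predict a virtual frontal view from a nonfrontal view
mapper = Ridge(alpha=1.0).fit(nonfrontal, frontal)
virtual_frontal = mapper.predict(nonfrontal[:1])   # virtual gallery/probe image
print(virtual_frontal.shape)                       # (1, 256)
```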
Implementing a Dynamic Database-Driven Course Using LAMP
ERIC Educational Resources Information Center
Laverty, Joseph Packy; Wood, David; Turchek, John
2011-01-01
This paper documents the formulation of a database driven open source architecture web development course. The design of a web-based curriculum faces many challenges: a) relative emphasis of client and server-side technologies, b) choice of a server-side language, and c) the cost and efficient delivery of a dynamic web development, database-driven…
Content based information retrieval in forensic image databases.
Geradts, Zeno; Bijhold, Jurrien
2002-03-01
This paper gives an overview of the various available image databases and the ways of searching these databases on image content. Developments in research groups on searching image databases are evaluated and compared with the forensic databases that exist. Forensic image databases of fingerprints, faces, shoeprints, handwriting, cartridge cases, drug tablets, and tool marks are described. The developments in these fields appear to be valuable for forensic databases, especially the MPEG-7 framework, in which searching in image databases is standardized. In the future, the combination of these databases (including DNA databases) and the possibilities to link them can result in stronger forensic evidence.
Multi-Directional Multi-Level Dual-Cross Patterns for Robust Face Recognition.
Ding, Changxing; Choi, Jonghyun; Tao, Dacheng; Davis, Larry S
2016-03-01
To perform unconstrained face recognition robust to variations in illumination, pose and expression, this paper presents a new scheme to extract "Multi-Directional Multi-Level Dual-Cross Patterns" (MDML-DCPs) from face images. Specifically, the MDML-DCPs scheme exploits the first derivative of Gaussian operator to reduce the impact of differences in illumination and then computes the DCP feature at both the holistic and component levels. DCP is a novel face image descriptor inspired by the unique textural structure of human faces. It is computationally efficient and only doubles the cost of computing local binary patterns, yet is extremely robust to pose and expression variations. MDML-DCPs comprehensively yet efficiently encodes the invariant characteristics of a face image from multiple levels into patterns that are highly discriminative of inter-personal differences but robust to intra-personal variations. Experimental results on the FERET, CAS-PEAL-R1, FRGC 2.0, and LFW databases indicate that DCP outperforms the state-of-the-art local descriptors (e.g., LBP, LTP, LPQ, POEM, tLBP, and LGXP) for both face identification and face verification tasks. More impressively, the best performance is achieved on the challenging LFW and FRGC 2.0 databases by deploying MDML-DCPs in a simple recognition scheme.
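Since DCP is positioned relative to local binary patterns (it "only doubles the cost of computing local binary patterns"), a plain uniform-LBP histogram is sketched below as that baseline; it is not the DCP descriptor itself, and the test image is a random stand-in.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image, radius=1, n_points=8):
    """Uniform LBP histogram of a grayscale image (plain LBP, not DCP)."""
    codes = local_binary_pattern(gray_image, n_points, radius, method="uniform")
    n_bins = n_points + 2                   # uniform codes plus one "non-uniform" bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

rng = np.random.default_rng(0)
print(lbp_histogram(rng.random((64, 64))))
```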
Near infrared and visible face recognition based on decision fusion of LBP and DCT features
NASA Astrophysics Data System (ADS)
Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan
2018-03-01
Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and low signal-to-noise ratio (SNR). Therefore, near infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In order to extract the discriminative complementary features between near infrared and visible images, in this paper we propose a novel near infrared and visible face fusion recognition algorithm based on DCT and LBP features. Firstly, the effective features in the near-infrared face image are extracted from the low-frequency part of the DCT coefficients and the partition histograms of the LBP operator. Secondly, the LBP features of the visible-light face image are extracted to compensate for the lacking detail features of the near-infrared face image. Then, the LBP features of the visible-light face image and the DCT and LBP features of the near-infrared face image are sent to separate classifiers for labeling. Finally, a decision-level fusion strategy is used to obtain the final recognition result. The visible and near infrared face recognition is tested on the HITSZ Lab2 visible and near infrared face database. The experimental results show that the proposed method extracts the complementary features of near-infrared and visible face images and improves the robustness of unconstrained face recognition. Especially in the circumstance of small training samples, the recognition rate of the proposed method can reach 96.13%, a significant improvement over the 92.75% of the method based on statistical feature fusion.
Automated facial attendance logger for students
NASA Astrophysics Data System (ADS)
Krithika, L. B.; Kshitish, S.; Kishore, M. R.
2017-11-01
Over the past two decades, face recognition has become an essential tool in various spheres of activity. The complete face recognition process is composed of three stages: face detection, feature extraction, and recognition. In this paper, we make an effort to put forth a new application of face recognition and detection in education. The proposed system scans the classroom, detects the faces of the students in class, matches the scanned faces with the templates available in the database, and updates the attendance of the respective students.
Score Fusion and Decision Fusion for the Performance Improvement of Face Recognition
2013-07-01
A Hamming distance (HD) [7] is calculated with the FP-CGF to measure the similarities among faces; the matched face has the shortest HD from ... then put into a face pattern byte (FPB) pixel by pixel. An HD is calculated with the FPB to measure the similarities among faces, and recognition is ... all query users are included in the database), the recognition performance can be measured by a verification rate (VR), the percentage of ...
Face recognition using slow feature analysis and contourlet transform
NASA Astrophysics Data System (ADS)
Wang, Yuehao; Peng, Lingling; Zhe, Fuchuan
2018-04-01
In this paper we propose a novel face recognition approach based on slow feature analysis (SFA) in the contourlet transform domain. The method first uses the contourlet transform to decompose the face image into low-frequency and high-frequency parts, and then exploits slow feature analysis for facial feature extraction. We name the new method, which combines slow feature analysis with the contourlet transform, CT-SFA. Experimental results on international standard face databases demonstrate that the new face recognition method is effective and competitive.
A biologically inspired neural network model to transformation invariant object recognition
NASA Astrophysics Data System (ADS)
Iftekharuddin, Khan M.; Li, Yaqin; Siddiqui, Faraz
2007-09-01
Transformation invariant image recognition has been an active research area due to its widespread applications in a variety of fields such as military operations, robotics, medical practices, geographic scene analysis, and many others. The primary goal for this research is detection of objects in the presence of image transformations such as changes in resolution, rotation, translation, scale and occlusion. We investigate a biologically-inspired neural network (NN) model for such transformation-invariant object recognition. In a classical training-testing setup for NN, the performance is largely dependent on the range of transformation or orientation involved in training. However, an even more serious dilemma is that there may not be enough training data available for successful learning or even no training data at all. To alleviate this problem, a biologically inspired reinforcement learning (RL) approach is proposed. In this paper, the RL approach is explored for object recognition with different types of transformations such as changes in scale, size, resolution and rotation. The RL is implemented in an adaptive critic design (ACD) framework, which approximates the neuro-dynamic programming of an action network and a critic network, respectively. Two ACD algorithms such as Heuristic Dynamic Programming (HDP) and Dual Heuristic dynamic Programming (DHP) are investigated to obtain transformation invariant object recognition. The two learning algorithms are evaluated statistically using simulated transformations in images as well as with a large-scale UMIST face database with pose variations. In the face database authentication case, the 90° out-of-plane rotation of faces from 20 different subjects in the UMIST database is used. Our simulations show promising results for both designs for transformation-invariant object recognition and authentication of faces. Comparing the two algorithms, DHP outperforms HDP in learning capability, as DHP takes fewer steps to perform a successful recognition task in general. Further, the residual critic error in DHP is generally smaller than that of HDP, and DHP achieves a 100% success rate more frequently than HDP for individual objects/subjects. On the other hand, HDP is more robust than the DHP as far as success rate across the database is concerned when applied in a stochastic and uncertain environment, and the computational time involved in DHP is more.
Appearance-based multimodal human tracking and identification for healthcare in the digital home.
Yang, Mau-Tsuen; Huang, Shen-Yen
2014-08-05
There is an urgent need for intelligent home surveillance systems to provide home security, monitor health conditions, and detect emergencies of family members. One of the fundamental problems to realize the power of these intelligent services is how to detect, track, and identify people at home. Compared to RFID tags that need to be worn all the time, vision-based sensors provide a natural and nonintrusive solution. Observing that body appearance and body build, as well as face, provide valuable cues for human identification, we model and record multi-view faces, full-body colors and shapes of family members in an appearance database by using two Kinects located at a home's entrance. Then the Kinects and another set of color cameras installed in other parts of the house are used to detect, track, and identify people by matching the captured color images with the registered templates in the appearance database. People are detected and tracked by multisensor fusion (Kinects and color cameras) using a Kalman filter that can handle duplicate or partial measurements. People are identified by multimodal fusion (face, body appearance, and silhouette) using a track-based majority voting. Moreover, the appearance-based human detection, tracking, and identification modules can cooperate seamlessly and benefit from each other. Experimental results show the effectiveness of the human tracking across multiple sensors and human identification considering the information of multi-view faces, full-body clothes, and silhouettes. The proposed home surveillance system can be applied to domestic applications in digital home security and intelligent healthcare.
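The tracking component relies on a Kalman filter that fuses detections from multiple sensors; a generic constant-velocity Kalman filter for a 2-D position track is sketched below. It is not the authors' multi-sensor formulation, and the noise settings and simulated detections are illustrative.

```python
import numpy as np

dt = 1.0 / 30.0                                   # frame interval (seconds)
F = np.array([[1, 0, dt, 0],                      # constant-velocity motion model
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                       # only (x, y) is measured
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-3                              # process noise (illustrative)
R = np.eye(2) * 1e-2                              # measurement noise (illustrative)

x = np.zeros(4)                                   # state: [x, y, vx, vy]
P = np.eye(4)

def kalman_step(x, P, z):
    """One predict/update cycle; z is a noisy (x, y) detection from any sensor."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

for frame in range(90):                           # simulated detections on a straight walk
    z = np.array([0.01 * frame, 0.02 * frame]) + np.random.randn(2) * 0.05
    x, P = kalman_step(x, P, z)
print(x)                                          # filtered position and velocity
```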
Garrido, Margarida V; Prada, Marília
2017-01-01
The Karolinska Directed Emotional Faces (KDEF) is one of the most widely used human facial expression databases. Almost a decade after the original validation study (Goeleven et al., 2008), we present subjective rating norms for a subset of 210 pictures depicting 70 models (half female), each displaying angry, happy, and neutral facial expressions. Our main goals were to provide an additional and updated validation of this database, using a sample from a different nationality (N = 155 Portuguese students, M = 23.73 years old, SD = 7.24), and to extend the number of subjective dimensions used to evaluate each image. Specifically, participants reported emotional labeling (forced-choice task) and evaluated the emotional intensity and valence of the expression, as well as the attractiveness and familiarity of the model (7-point rating scales). Overall, results show that happy faces obtained the highest ratings across evaluative dimensions and emotion labeling accuracy. Female (vs. male) models were perceived as more attractive, familiar, and positive. The sex of the model also moderated the accuracy of emotional labeling and the ratings of the different facial expressions. Each picture in the set was categorized as low, moderate, or high on each dimension. Normative data for each stimulus (hit proportions, means, standard deviations, and confidence intervals per evaluative dimension) are available as supplementary material (available at https://osf.io/fvc4m/).
Supervised orthogonal discriminant subspace projects learning for face recognition.
Chen, Yu; Xu, Xiao-Hong
2014-02-01
In this paper, a new linear dimension reduction method called supervised orthogonal discriminant subspace projection (SODSP) is proposed, which addresses high-dimensionality of data and the small sample size problem. More specifically, given a set of data points in the ambient space, a novel weight matrix that describes the relationship between the data points is first built. And in order to model the manifold structure, the class information is incorporated into the weight matrix. Based on the novel weight matrix, the local scatter matrix as well as non-local scatter matrix is defined such that the neighborhood structure can be preserved. In order to enhance the recognition ability, we impose an orthogonal constraint into a graph-based maximum margin analysis, seeking to find a projection that maximizes the difference, rather than the ratio between the non-local scatter and the local scatter. In this way, SODSP naturally avoids the singularity problem. Further, we develop an efficient and stable algorithm for implementing SODSP, especially, on high-dimensional data set. Moreover, the theoretical analysis shows that LPP is a special instance of SODSP by imposing some constraints. Experiments on the ORL, Yale, Extended Yale face database B and FERET face database are performed to test and evaluate the proposed algorithm. The results demonstrate the effectiveness of SODSP. Copyright © 2013 Elsevier Ltd. All rights reserved.
The depth estimation of 3D face from single 2D picture based on manifold learning constraints
NASA Astrophysics Data System (ADS)
Li, Xia; Yang, Yang; Xiong, Hailiang; Liu, Yunxia
2018-04-01
The estimation of depth is vitally important in 3D face reconstruction. In this paper, we propose a t-SNE approach based on manifold learning constraints and introduce the K-means method to divide the original database into several subsets; reconstructing the 3D face depth information from the selected optimal subset greatly reduces the computational complexity. Firstly, we carry out the t-SNE operation to reduce the key feature points in each 3D face model from 1×249 to 1×2. Secondly, the K-means method is applied to divide the training 3D database into several subsets. Thirdly, the Euclidean distance between the 83 feature points of the image to be estimated and the feature point information of each cluster center before dimension reduction is calculated, and the category of the image to be estimated is judged according to the minimum Euclidean distance. Finally, the depth estimation method of Kong D is applied only within the optimal subset to estimate the depth values of the 83 feature points of the 2D face image, yielding the final depth estimation results; thus the computational complexity is greatly reduced. Compared with the traditional traversal-search estimation method, the error rate of the proposed method is reduced by 0.49, and the number of searches decreases with the category. In order to validate our approach, we use a public database to mimic the task of estimating the depth of face images from 2D images. The average number of searches decreased by 83.19%.
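The subset-selection stage can be sketched as follows: embed the training key-point vectors with t-SNE, cluster the embedding with K-means, keep per-cluster means in the original feature space, and route a query to the nearest cluster by Euclidean distance. The depth estimation itself (the method of Kong D) is not reproduced, and the feature vectors below are random stand-ins.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
train_feats = rng.random((300, 249))      # stand-in: 300 faces x 249-dim key-point vectors

# 1) reduce each training vector to 2-D with t-SNE
embedding = TSNE(n_components=2, random_state=0).fit_transform(train_feats)

# 2) partition the training set into subsets by clustering the embedding
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(embedding)

# 3) represent each cluster by its mean in the ORIGINAL feature space,
#    since a new query cannot be mapped through t-SNE directly
centers = np.array([train_feats[kmeans.labels_ == k].mean(axis=0)
                    for k in range(kmeans.n_clusters)])

# 4) route a query to the closest cluster and search only that subset
query = rng.random(249)
subset_id = np.argmin(np.linalg.norm(centers - query, axis=1))
subset = train_feats[kmeans.labels_ == subset_id]
print(subset_id, subset.shape)
```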
Medial temporal lobe contributions to short-term memory for faces.
Race, Elizabeth; LaRocque, Karen F; Keane, Margaret M; Verfaellie, Mieke
2013-11-01
The role of the medial temporal lobes (MTL) in short-term memory (STM) remains a matter of debate. Whereas imaging studies commonly show hippocampal activation during short-delay memory tasks, evidence from amnesic patients with MTL lesions is mixed. It has been argued that apparent STM impairments in amnesia may reflect long-term memory (LTM) contributions to performance. We challenge this conclusion by demonstrating that MTL amnesic patients show impaired delayed matching-to-sample (DMS) for faces in a task that meets both a traditional delay-based and a recently proposed distractor-based criterion for classification as an STM task. In Experiment 1, we demonstrate that our face DMS task meets the proposed distractor-based criterion for STM classification, in that extensive processing of delay-period distractor stimuli disrupts performance of healthy individuals. In Experiment 2, MTL amnesic patients with lesions extending into anterior subhippocampal cortex, but not patients with lesions limited to the hippocampus, show impaired performance on this task without distraction at delays as short as 8 s, within temporal range of delay-based STM classification, in the context of intact perceptual matching performance. Experiment 3 provides support for the hypothesis that STM for faces relies on configural processing by showing that the extent to which healthy participants' performance is disrupted by interference depends on the configural demands of the distractor task. Together, these findings are consistent with the notion that the amnesic impairment in STM for faces reflects a deficit in configural processing associated with subhippocampal cortices and provide novel evidence that the MTL supports cognition beyond the LTM domain. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Production does not improve memory for face-name associations.
Hourihan, Kathleen L; Smith, Alexis R S
2016-06-01
Strategies for learning face-name associations are generally difficult and time-consuming. However, research has shown that saying a word aloud improves our memory for that word relative to words from the same set that were read silently. Such production effects have been shown for words, pictures, text material, and even word pairs. Can production improve memory for face-name associations? In Experiment 1, participants studied face-name pairs by reading half of the names aloud and half of the names silently, and were tested with cued recall. In Experiment 2, names were repeated aloud (or silently) for the full trial duration. Neither experiment showed a production effect in cued recall. Bayesian analyses showed positive support for the null effect. One possibility is that participants spontaneously implemented more elaborate encoding strategies that overrode any influence of production. However, a more likely explanation for the null production effect is that only half of each stimulus pair was produced-the name, but not the face. Consistent with this explanation, in Experiment 3 a production effect was not observed in cued recall of word-word pairs in which only the target words were read aloud or silently. Averaged across all 3 experiments, aloud targets were more likely to be recalled than silent targets (though not associated with the correct cue). The production effect in associative memory appears to require both members of a pair to be produced. Surprisingly, production shows little promise as a strategy for improving memory for the names of people we have just met. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Face recognition using 3D facial shape and color map information: comparison and combination
NASA Astrophysics Data System (ADS)
Godil, Afzal; Ressler, Sandy; Grother, Patrick
2004-08-01
In this paper, we investigate the use of 3D surface geometry for face recognition and compare it to one based on color map information. The 3D surface and color map data are from the CAESAR anthropometric database. We find that the recognition performance is not very different between 3D surface and color map information using a principal component analysis algorithm. We also discuss the different techniques for the combination of the 3D surface and color map information for multi-modal recognition by using different fusion approaches and show that there is significant improvement in results. The effectiveness of various techniques is compared and evaluated on a dataset with 200 subjects in two different positions.
Modeling first impressions from highly variable facial images.
Vernon, Richard J W; Sutherland, Clare A M; Young, Andrew W; Hartley, Tom
2014-08-12
First impressions of social traits, such as trustworthiness or dominance, are reliably perceived in faces, and despite their questionable validity they can have considerable real-world consequences. We sought to uncover the information driving such judgments, using an attribute-based approach. Attributes (physical facial features) were objectively measured from feature positions and colors in a database of highly variable "ambient" face photographs, and then used as input for a neural network to model factor dimensions (approachability, youthful-attractiveness, and dominance) thought to underlie social attributions. A linear model based on this approach was able to account for 58% of the variance in raters' impressions of previously unseen faces, and factor-attribute correlations could be used to rank attributes by their importance to each factor. Reversing this process, neural networks were then used to predict facial attributes and corresponding image properties from specific combinations of factor scores. In this way, the factors driving social trait impressions could be visualized as a series of computer-generated cartoon face-like images, depicting how attributes change along each dimension. This study shows that despite enormous variation in ambient images of faces, a substantial proportion of the variance in first impressions can be accounted for through linear changes in objectively defined features.
Face recognition based on symmetrical virtual image and original training image
NASA Astrophysics Data System (ADS)
Ke, Jingcheng; Peng, Yali; Liu, Shigang; Li, Jun; Pei, Zhao
2018-02-01
In face representation-based classification methods, we are able to obtain a high recognition rate if enough training samples are available for each face. However, in practical applications we only have limited training samples to use. In order to obtain enough training samples, many methods simultaneously use the original training samples and corresponding virtual samples to strengthen the ability to represent the test sample. One approach directly uses the original training samples and the corresponding mirror samples to recognize the test sample. However, when the test sample is nearly symmetrical while the original training samples are not, the combination of the original training and mirror samples might not represent the test sample well. To tackle this problem, in this paper we propose a novel method that generates virtual samples by averaging the original training samples and their corresponding mirror samples. Then, the original training samples and the virtual samples are integrated to recognize the test sample. Experimental results on five face databases show that the proposed method is able to partly overcome the challenges posed by varying poses, facial expressions, and illuminations in the original face images.
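The virtual-sample construction itself reduces to averaging each training image with its horizontal mirror and appending the result to the training set, as in this minimal sketch.

```python
import numpy as np

def add_symmetry_virtual_samples(train_images):
    """Return the training set augmented with (image + mirror) / 2 virtual samples.

    train_images: array of shape (n_samples, height, width)
    """
    mirrored = train_images[:, :, ::-1]                 # horizontal mirror of each face
    virtual = (train_images + mirrored) / 2.0            # symmetrical virtual samples
    return np.concatenate([train_images, virtual], axis=0)

rng = np.random.default_rng(0)
train = rng.random((10, 64, 64))
print(add_symmetry_virtual_samples(train).shape)         # (20, 64, 64)
```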
Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression.
Gao, Guangwei; Yang, Jian; Jing, Xiaoyuan; Huang, Pu; Hua, Juliang; Yue, Dong
2016-01-01
In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms.
Guo, Jianming; Shang, Erxin; Zhao, Jinlong; Fan, Xinsheng; Duan, Jinao; Qian, Dawei; Tao, Weiwei; Tang, Yuping
2014-09-25
Licorice is the root of Glycyrrhiza uralensis Fisch. or Glycyrrhiza glabra L. (Leguminosae). It is described as the 'National Venerable Master' in Chinese medicine and plays paradoxical roles, i.e., detoxification/strengthening of efficacy and induction/enhancement of toxicity; licorice is therefore called a "Two-Face" herb in this paper. The aim of this study is to discuss the paradoxical roles and the prospective usage of this "Two-Face" herb using data mining and frequency analysis. More than 96,000 prescriptions from the Chinese Formulae Database were selected. The frequency and the prescription patterns were analyzed using Microsoft SQL Server 2000. Data mining methods (frequent itemsets) were used to analyze the regular patterns and compatibility laws of the constituent herbs in the selected prescriptions. The results showed that licorice (Radix glycyrrhizae) was the most frequently used herb in the Chinese Formulae Database; other frequently used herbs included Radix Angelicae Sinensis (Dang gui), Radix et rhizoma ginseng (Ren shen), etc. Radix aconiti lateralis praeparata (Fu zi), Rhizoma pinelliae (Ban xia) and Cinnabaris (Zhu sha) are the top three toxic herbs most frequently used in combination with licorice. Radix et rhizoma ginseng (Ren shen), Poria (Fu ling) and Radix Angelicae Sinensis (Dang gui) are the top three nontoxic herbs most frequently used in combination with licorice. Moreover, licorice was seldom used with sargassum (Hai Zao), Herba Cirsii Japonici (Da Ji), Euphorbia kansui (Gan Sui) and Flos genkwa (Yuan Hua), which supports the contradictory (incompatible) effect of Radix glycyrrhizae with these herbs as recorded in Chinese medicine theory. This study identified the principal patterns of Chinese herbal drugs used, or not used, in combination with licorice. The principal patterns and special compatibility laws reported here could be useful and instructive for the scientific usage of licorice in clinical application. Further pharmacological and chemical research is needed to evaluate the efficacy and combination patterns of these Chinese herbs. Whether these combinations produce additive, synergistic or antagonistic effects should also be investigated using in vitro or in vivo models. Copyright © 2014 Elsevier GmbH. All rights reserved.
Wang, Jian-Gang; Sung, Eric; Yau, Wei-Yun
2011-07-01
Facial age classification is an approach to classify face images into one of several predefined age groups. One of the difficulties in applying learning techniques to the age classification problem is the large amount of labeled training data required. Acquiring such training data is very costly in terms of age progress, privacy, human time, and effort. Although unlabeled face images can be obtained easily, it would be expensive to manually label them on a large scale and to obtain the ground truth. The frugal selection of unlabeled data for labeling, to quickly reach high classification performance with minimal labeling effort, is a challenging problem. In this paper, we present an active learning approach based on an online incremental bilateral two-dimensional linear discriminant analysis (IB2DLDA) which initially learns from a small pool of labeled data and then iteratively selects the most informative samples from the unlabeled set to incrementally improve the classifier. Specifically, we propose a novel data selection criterion called the furthest nearest-neighbor (FNN) that generalizes margin-based uncertainty to the multiclass case and is easy to compute, so that the proposed active learning algorithm can handle a large number of classes and large data sizes efficiently. Empirical experiments on the FG-NET and Morph databases, together with a large unlabeled data set for age categorization problems, show that the proposed approach can achieve results comparable to, or even better than, a conventionally trained classifier that requires much more labeling effort. Our IB2DLDA-FNN algorithm can achieve similar results much faster than random selection and with fewer samples for age categorization. It also achieves results comparable to active SVM but is much faster in terms of training because kernel methods are not needed. The results on a face recognition database and a palmprint/palm vein database showed that our approach can handle problems with a large number of classes. Our contributions in this paper are twofold. First, we proposed IB2DLDA-FNN, the FNN being our novel idea, as a generic online or active learning paradigm. Second, we showed that it can be another viable tool for active learning of facial age range classification.
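One plausible reading of the furthest nearest-neighbor (FNN) idea is to query the unlabeled samples whose closest labeled sample is furthest away. The sketch below implements that reading on generic feature vectors; it is an illustration only, not the authors' IB2DLDA-based formulation:

```python
# Hedged sketch of an FNN-style selection rule: pick unlabeled samples that are
# most "isolated" from the current labeled pool.
import numpy as np

def fnn_select(unlabeled, labeled, k=1):
    """unlabeled: (m, d) features; labeled: (n, d) features. Returns indices of k picks."""
    # distance from every unlabeled point to every labeled point
    d = np.linalg.norm(unlabeled[:, None, :] - labeled[None, :, :], axis=2)
    nearest = d.min(axis=1)            # distance to the closest labeled sample
    return np.argsort(nearest)[-k:]    # the samples whose nearest neighbor is furthest

# usage example with random feature vectors
picks = fnn_select(np.random.rand(100, 16), np.random.rand(10, 16), k=5)
print(picks)
```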
NASA Astrophysics Data System (ADS)
Song, Xiaoning; Feng, Zhen-Hua; Hu, Guosheng; Yang, Xibei; Yang, Jingyu; Qi, Yunsong
2015-09-01
This paper proposes a progressive sparse representation-based classification algorithm using local discrete cosine transform (DCT) evaluation to perform face recognition. Specifically, the sum of the contributions of all training samples of each subject is first taken as the contribution of this subject, then the redundant subject with the smallest contribution to the test sample is iteratively eliminated. Second, the progressive method aims at representing the test sample as a linear combination of all the remaining training samples, by which the representation capability of each training sample is exploited to determine the optimal "nearest neighbors" for the test sample. Third, the transformed DCT evaluation is constructed to measure the similarity between the test sample and each local training sample using cosine distance metrics in the DCT domain. The final goal of the proposed method is to determine an optimal weighted sum of nearest neighbors that are obtained under the local correlative degree evaluation, which is approximately equal to the test sample, and we can use this weighted linear combination to perform robust classification. Experimental results conducted on the ORL database of faces (created by the Olivetti Research Laboratory in Cambridge), the FERET face database (managed by the Defense Advanced Research Projects Agency and the National Institute of Standards and Technology), AR face database (created by Aleix Martinez and Robert Benavente in the Computer Vision Center at U.A.B), and USPS handwritten digit database (gathered at the Center of Excellence in Document Analysis and Recognition at SUNY Buffalo) demonstrate the effectiveness of the proposed method.
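The subject-elimination idea can be illustrated with a much-simplified stand-in that uses plain least squares instead of the paper's sparse representation and DCT-based evaluation; the function names and the least-squares substitution are assumptions of this sketch:

```python
# Sketch: iteratively drop the subject whose training samples contribute least to
# the representation of the test sample, then classify among the survivors by
# smallest reconstruction residual.
import numpy as np

def progressive_classify(train, labels, test, n_keep=3):
    """train: (d, n) columns are training samples; labels: (n,); test: (d,)."""
    labels = np.asarray(labels)
    subjects = list(np.unique(labels))
    while len(subjects) > n_keep:
        keep = np.isin(labels, subjects)
        A, y = train[:, keep], labels[keep]
        coef, *_ = np.linalg.lstsq(A, test, rcond=None)
        # contribution of a subject = norm of its part of the reconstruction
        contrib = {s: np.linalg.norm(A[:, y == s] @ coef[y == s]) for s in subjects}
        subjects.remove(min(contrib, key=contrib.get))   # drop the weakest subject
    resid = {}
    for s in subjects:
        A = train[:, labels == s]
        c, *_ = np.linalg.lstsq(A, test, rcond=None)
        resid[s] = np.linalg.norm(test - A @ c)
    return min(resid, key=resid.get)
```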
L.N. Hudson; T. Newbold; S. Contu
2014-01-01
Biodiversity continues to decline in the face of increasing anthropogenic pressures such as habitat destruction, exploitation, pollution and introduction of alien species. Existing global databases of species' threat status or population time series are dominated by charismatic species. The collation of datasets with broad taxonomic and biogeographic extents, and that...
Real-time teleophthalmology versus face-to-face consultation: A systematic review.
Tan, Irene J; Dobson, Lucy P; Bartnik, Stephen; Muir, Josephine; Turner, Angus W
2017-08-01
Introduction: Advances in imaging capabilities and the evolution of real-time teleophthalmology have the potential to provide increased coverage to areas with limited ophthalmology services. However, there is limited research assessing the diagnostic accuracy of face-to-face teleophthalmology consultation. This systematic review aims to determine if real-time teleophthalmology provides comparable accuracy to face-to-face consultation for the diagnosis of common eye health conditions. Methods: A search of PubMed, Embase, Medline and Cochrane databases and manual citation review was conducted on 6 February and 7 April 2016. Included studies involved real-time telemedicine in the field of ophthalmology or optometry, and assessed diagnostic accuracy against gold-standard face-to-face consultation. The revised quality assessment of diagnostic accuracy studies (QUADAS-2) tool assessed risk of bias. Results: Twelve studies were included, with participants ranging from four to 89 years old. A broad number of conditions were assessed, including corneal and retinal pathologies, strabismus, oculoplastics and post-operative review. Quality assessment identified a high or unclear risk of bias in patient selection (75%) due to undisclosed recruitment processes. The index test showed high risk of bias in the included studies, due to the varied interpretation and conduct of real-time teleophthalmology methods. Reference standard risk was overall low (75%), as was the risk due to flow and timing (75%). Conclusion: In terms of diagnostic accuracy, real-time teleophthalmology was considered superior to face-to-face consultation in one study and comparable in six studies. Store-and-forward image transmission coupled with real-time videoconferencing is a suitable alternative to overcome poor internet transmission speeds.
Minot, Thomas; Dury, Hannah L; Eguchi, Akihiro; Humphreys, Glyn W; Stringer, Simon M
2017-03-01
We use an established neural network model of the primate visual system to show how neurons might learn to encode the gender of faces. The model consists of a hierarchy of 4 competitive neuronal layers with associatively modifiable feedforward synaptic connections between successive layers. During training, the network was presented with many realistic images of male and female faces, during which the synaptic connections are modified using biologically plausible local associative learning rules. After training, we found that different subsets of output neurons have learned to respond exclusively to either male or female faces. With the inclusion of short range excitation within each neuronal layer to implement a self-organizing map architecture, neurons representing either male or female faces were clustered together in the output layer. This learning process is entirely unsupervised, as the gender of the face images is not explicitly labeled and provided to the network as a supervisory training signal. These simulations are extended to training the network on rotating faces. It is found that by using a trace learning rule incorporating a temporal memory trace of recent neuronal activity, neurons responding selectively to either male or female faces were also able to learn to respond invariantly over different views of the faces. This kind of trace learning has been previously shown to operate within the primate visual system by neurophysiological and psychophysical studies. The computer simulations described here predict that similar neurons encoding the gender of faces will be present within the primate visual system. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
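A hedged sketch of a trace learning update of the kind used in such models, where the postsynaptic term is a temporal trace of recent activity so that successive views of the same face strengthen the same output neurons; the winner-take-all competition and parameter values are illustrative choices, not the paper's exact model:

```python
# Sketch: associative (Hebbian) weight update with a temporal memory trace.
import numpy as np

def trace_update(w, x_seq, alpha=0.05, eta=0.8):
    """w: (n_out, n_in) weights; x_seq: (T, n_in) sequence of input patterns."""
    trace = np.zeros(w.shape[0])
    for x in x_seq:
        winner = np.argmax(w @ x)                 # crude winner-take-all competition
        y = np.zeros(w.shape[0])
        y[winner] = 1.0
        trace = (1 - eta) * y + eta * trace       # temporal memory trace of activity
        w += alpha * np.outer(trace, x)           # local associative update
        w /= np.linalg.norm(w, axis=1, keepdims=True)   # keep weights bounded
    return w

w = trace_update(np.random.rand(10, 64), np.random.rand(20, 64))
```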
Can we match ultraviolet face images against their visible counterparts?
NASA Astrophysics Data System (ADS)
Narang, Neeru; Bourlai, Thirimachos; Hornak, Lawrence A.
2015-05-01
In law enforcement and security applications, the acquisition of face images is critical in producing key trace evidence for the successful identification of potential threats. However, face recognition (FR) for face images captured using different camera sensors, under variable illumination conditions and with varying expressions, is very challenging. In this paper, we investigate the advantages and limitations of the heterogeneous problem of matching ultraviolet (UV; 100 nm to 400 nm in wavelength) face images against their visible (VIS) counterparts, when all face images are captured under controlled conditions. The contributions of our work are threefold: (i) we used a camera sensor designed with the capability to acquire UV images at short range, and generated a dual-band (VIS and UV) database composed of multiple, full-frontal face images of 50 subjects, collected in two sessions spanning a period of 2 months; (ii) for each dataset, we determined which set of face image pre-processing algorithms is more suitable for face matching; and, finally, (iii) we determined which FR algorithm better matches cross-band face images, resulting in high rank-1 identification rates. Experimental results show that our cross-spectral matching algorithms (the heterogeneous problem, where gallery and probe sets consist of face images acquired in different spectral bands) achieve sufficient identification performance. However, we also conclude that the problem under study is very challenging and requires further investigation to address real-world law enforcement or military applications. To the best of our knowledge, this is the first time in the open literature that the problem of cross-spectral matching of UV against VIS band face images has been investigated.
Bell, Raoul; Giang, Trang; Mund, Iris; Buchner, Axel
2013-12-01
How do younger and older adults remember reputational trait information about other people? In the present study, trustworthy-looking and untrustworthy-looking faces were paired with cooperation or cheating in a cooperation game. In a surprise source-memory test, participants were asked to rate the likability of the faces, and were required to remember whether the faces were associated with negative or positive outcomes. The social expectations of younger and older adults were clearly affected by a priori facial trustworthiness. Facial trustworthiness was associated with high cooperation-game investments, high likability ratings, and a tendency toward guessing that a face belonged to a cooperator instead of a cheater in both age groups. Consistent with previous results showing that emotional memory is spared from age-related decline, memory for the association between faces and emotional reputational information was well preserved in older adults. However, younger adults used a flexible encoding strategy to remember the social interaction partners. Source-memory was best for information that violated their (positive) expectations. Older adults, in contrast, showed a uniform memory bias for negative social information; their memory performance was not modulated by their expectations. This finding suggests that older adults are less likely to adjust their encoding strategies to their social expectations than younger adults. This may be in line with older adults' motivational goals to avoid risks in social interactions. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Visual search for emotional expressions: Effect of stimulus set on anger and happiness superiority.
Savage, Ruth A; Becker, Stefanie I; Lipp, Ottmar V
2016-01-01
Prior reports of preferential detection of emotional expressions in visual search have yielded inconsistent results, even for face stimuli that avoid obvious expression-related perceptual confounds. The current study investigated inconsistent reports of anger and happiness superiority effects using face stimuli drawn from the same database. Experiment 1 excluded procedural differences as a potential factor, replicating a happiness superiority effect in a procedure that previously yielded an anger superiority effect. Experiments 2a and 2b confirmed that image colour or poser gender did not account for prior inconsistent findings. Experiments 3a and 3b identified stimulus set as the critical variable, revealing happiness or anger superiority effects for two partially overlapping sets of face stimuli. The current results highlight the critical role of stimulus selection for the observation of happiness or anger superiority effects in visual search even for face stimuli that avoid obvious expression related perceptual confounds and are drawn from a single database.
Face recognition using tridiagonal matrix enhanced multivariance products representation
NASA Astrophysics Data System (ADS)
Özay, Evrim Korkmaz
2017-01-01
This study aims to retrieve face images from a database according to a target face image. For this purpose, Tridiagonal Matrix Enhanced Multivariance Products Representation (TMEMPR) is taken into consideration. TMEMPR is a recursive algorithm based on Enhanced Multivariance Products Representation (EMPR). It decomposes a matrix into three components: a matrix of left support terms, a tridiagonal matrix of weight parameters for each recursion, and a matrix of right support terms. In this sense, there is an analogy between Singular Value Decomposition (SVD) and TMEMPR. However, TMEMPR is a more flexible algorithm, since its initial support terms (or vectors) can be chosen as desired. Low computational complexity is another advantage of TMEMPR, because the algorithm is constructed from recursions of fixed arithmetic operations without requiring any iteration. The algorithm was trained and tested on the ORL face image database, which contains 400 grayscale images of 40 different people, and its performance is compared with that of SVD.
NASA Astrophysics Data System (ADS)
Gaspar Aparicio, R.; Gomez, D.; Coterillo Coz, I.; Wojcik, D.
2012-12-01
At CERN a number of key database applications run on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside the centralised Oracle-based database services. Database on Demand (DBoD) empowers users to perform certain actions traditionally done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, e.g. at present the open community version of MySQL and a single-instance Oracle database server. This article describes the technology approach taken to face this challenge, the service level agreement (SLA) that the project provides, and possible evolution scenarios.
Comparison of face types in Chinese women using three-dimensional computed tomography.
Zhou, Rong-Rong; Zhao, Qi-Ming; Liu, Miao
2015-04-01
This study compared inverted triangle and square faces of 21 young Chinese Han women (18-25 years old) using three-dimensional computed tomography images retrieved from a records database. In this study, 11 patients had inverted triangle faces and 10 had square faces. The anatomic features were examined and compared. There were significant differences in lower face width, lower face height, masseter thickness, middle/lower face width ratio, and lower face width/height ratio between the two facial types (p < 0.01). Lower face width was positively correlated with masseter thickness and negatively correlated with gonial angle. Lower face height was positively correlated with gonial angle and negatively correlated with masseter thickness, and gonial angle was negatively correlated with masseter thickness. In young Chinese Han women, inverted triangle faces and square faces differ significantly in masseter thickness and lower face height.
Database for propagation models
NASA Astrophysics Data System (ADS)
Kantak, Anil V.
1991-07-01
A propagation researcher or a systems engineer who intends to use the results of a propagation experiment is generally faced with various database tasks such as the selection of the computer software, the hardware, and the writing of the programs to pass the data through the models of interest. This task is repeated every time a new experiment is conducted or the same experiment is carried out at a different location, generating different data. Thus the users of this data have to spend a considerable portion of their time learning how to implement the computer hardware and software towards the desired end. This situation could be improved considerably if an easily accessible propagation database were created that has all the accepted (standardized) propagation phenomena models approved by the propagation research community. The handling of data would also become easier for the user. Such a database can only stimulate the growth of propagation research if it is available to all researchers, so that the results of an experiment conducted by one researcher can be examined independently by another, without different hardware and software being used. The database may be made flexible so that researchers need not be confined only to the contents of the database. Another way in which the database may help researchers is that they will not have to document the software and hardware tools used in their research, since the propagation research community will already know the database. The following sections show a possible database construction, as well as properties of the database for propagation research.
Dourson, Michael L
2018-05-03
The Integrated Risk Information System (IRIS) of the U.S. Environmental Protection Agency (EPA) has an important role in protecting public health. Originally it provided a single database listing official risk values equally valid for all Agency offices, and was an important tool for risk assessment communication across EPA. Started in 1986, IRIS achieved full standing in 1990 when it listed 500 risk values, the effort of two senior EPA groups over 5 years of monthly face-to-face meetings, to assess combined risk data from multiple Agency offices. Those groups were disbanded in 1995, and the lack of continuing face-to-face meetings meant that IRIS became no longer EPA's comprehensive database of risk values or their latest evaluations. As a remedy, a work group of the Agency's senior scientists should be re-established to evaluate new risks and to update older ones. Risk values to be reviewed would come from the same EPA offices now developing such information on their own. Still, this senior group would have the final authority on posting a risk value in IRIS, independently of individual EPA offices. This approach could also lay the groundwork for an all-government IRIS database, especially needed as more government Agencies, industries and non-governmental organizations are addressing evolving risk characterizations. Copyright © 2018. Published by Elsevier Inc.
Performance evaluation of wavelet-based face verification on a PDA recorded database
NASA Astrophysics Data System (ADS)
Sellahewa, Harin; Jassim, Sabah A.
2006-05-01
The rise of international terrorism and the rapid increase in fraud and identity theft have added urgency to the task of developing biometric-based person identification as a reliable alternative to conventional authentication methods. Human identification based on face images is a tough challenge compared with identification based on fingerprints or iris recognition. Yet, due to its unobtrusive nature, face recognition is the preferred method of identification for security-related applications. The success of such systems will depend on the support of massive infrastructures. Current mobile communication devices (3G smartphones) and PDAs are equipped with a camera, which can capture both still images and streaming video clips, and a touch-sensitive display panel. Besides convenience, such devices provide an adequately secure infrastructure for sensitive and financial transactions, by protecting against fraud and repudiation while ensuring accountability. Biometric authentication systems for mobile devices would have obvious advantages in conflict scenarios where communication from beyond enemy lines is essential to save soldier and civilian lives. In areas of conflict or disaster, the luxury of fixed infrastructure is not available or has been destroyed. In this paper, we present a wavelet-based face verification scheme that has been specifically designed and implemented on a currently available PDA. We report on its performance on the benchmark audio-visual BANCA database and on a newly developed PDA-recorded audio-visual database that includes indoor and outdoor recordings.
Kruskal-Wallis-based computationally efficient feature selection for face recognition.
Ali Khan, Sajid; Hussain, Ayyaz; Basit, Abdul; Akram, Sheeraz
2014-01-01
Face recognition and its applications attain ever more importance in today's technological world. Most of the existing work uses frontal face images to classify a face image; however, these techniques fail when applied to real-world face images. The proposed technique effectively extracts the prominent facial features. Many of the extracted features are redundant and do not contribute to representing the face. In order to eliminate those redundant features, a computationally efficient algorithm is used to select the more discriminative face features. The selected features are then passed to the classification step, in which different classifiers are combined into an ensemble to enhance the recognition accuracy, as a single classifier is unable to achieve high accuracy. Experiments are performed on standard face database images and the results are compared with existing techniques.
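The titular idea of Kruskal-Wallis feature selection can be sketched as ranking each feature by its Kruskal-Wallis H statistic across classes and keeping the top-scoring ones; the rest of the pipeline (feature extraction, classifier ensemble) is not reproduced here:

```python
# Sketch: rank features by how well they separate the classes according to the
# Kruskal-Wallis H statistic, then keep the top-scoring ones.
import numpy as np
from scipy.stats import kruskal

def kruskal_select(X, y, n_keep=50):
    """X: (n_samples, n_features); y: class labels. Assumes each feature varies across samples."""
    classes = np.unique(y)
    scores = []
    for j in range(X.shape[1]):
        groups = [X[y == c, j] for c in classes]
        h, _ = kruskal(*groups)          # larger H = better class separation for feature j
        scores.append(h)
    keep = np.argsort(scores)[-n_keep:]
    return X[:, keep], keep
```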
An embedded face-classification system for infrared images on an FPGA
NASA Astrophysics Data System (ADS)
Soto, Javier E.; Figueroa, Miguel
2014-10-01
We present a face-classification architecture for long-wave infrared (IR) images implemented on a Field Programmable Gate Array (FPGA). The circuit is fast, compact and low power; it can recognize faces in real time and be embedded in a larger image-processing and computer vision system operating locally on an IR camera. The algorithm uses Local Binary Patterns (LBP) to perform feature extraction on each IR image. First, each pixel in the image is represented as an LBP pattern that encodes the similarity between the pixel and its neighbors. Uniform LBP codes are then used to reduce the number of patterns to 59 while preserving more than 90% of the information contained in the original LBP representation. Then, the image is divided into 64 non-overlapping regions, and each region is represented as a 59-bin histogram of patterns. Finally, the algorithm concatenates all 64 regions to create a 3,776-bin spatially enhanced histogram. We reduce the dimensionality of this histogram using Linear Discriminant Analysis (LDA), which improves clustering and enables us to store an entire database of 53 subjects on-chip. During classification, the circuit applies LBP and LDA to each incoming IR image in real time, and compares the resulting feature vector to each pattern stored in the local database using the Manhattan distance. We implemented the circuit on a Xilinx Artix-7 XC7A100T FPGA and tested it with the UCHThermalFace database, which consists of 28 images (81 x 150 pixels) of each of 53 subjects in indoor and outdoor conditions. The circuit achieves a 98.6% hit ratio, trained with 16 images and tested with 12 images of each subject in the database. Using a 100 MHz clock, the circuit classifies 8,230 images per second, and consumes only 309 mW.
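A software-only sketch of the LBP-histogram-plus-LDA pipeline described above (the paper implements it in FPGA hardware; the 8x8 grid and 59-bin uniform codes follow the abstract, everything else is illustrative):

```python
# Sketch: uniform LBP codes -> 64 regional histograms -> LDA projection -> Manhattan match.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def lbp_histogram(image, grid=(8, 8)):
    """image: 2-D grayscale array. Returns the 64 x 59 = 3,776-bin enhanced histogram."""
    codes = local_binary_pattern(image, P=8, R=1, method='nri_uniform')  # 59 uniform labels
    feats = []
    for rows in np.array_split(codes, grid[0], axis=0):
        for block in np.array_split(rows, grid[1], axis=1):
            hist, _ = np.histogram(block, bins=59, range=(0, 59))
            feats.append(hist)
    return np.concatenate(feats).astype(float)

def train_and_match(train_imgs, train_ids, probe_img):
    """train_imgs: list of 2-D arrays; train_ids: array of subject labels; probe_img: 2-D array."""
    X = np.array([lbp_histogram(im) for im in train_imgs])
    lda = LinearDiscriminantAnalysis().fit(X, train_ids)      # dimensionality reduction
    gallery = lda.transform(X)
    probe = lda.transform(lbp_histogram(probe_img)[None, :])
    d = np.abs(gallery - probe).sum(axis=1)                    # Manhattan distance
    return train_ids[int(np.argmin(d))]
```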
Carter, Mary; Fletcher, Emily; Sansom, Anna; Warren, Fiona C; Campbell, John L
2018-01-01
Objectives: To evaluate the feasibility, acceptability and effectiveness of webGP as piloted by six general practices. Methods: Mixed-methods evaluation, including data extraction from practice databases, general practitioner (GP) completion of case reports, patient questionnaires and staff interviews. Setting: General practices in NHS Northern, Eastern and Western Devon Clinical Commissioning Group's area approximately 6 months after implementing webGP (February–July 2016). Participants: Six practices provided consultations data; 20 GPs completed case reports (regarding 61 e-consults); 81 patients completed questionnaires; 5 GPs and 5 administrators were interviewed. Outcome measures: Attitudes and experiences of practice staff and patients regarding webGP. Results: WebGP uptake during the evaluation was small, showing no discernible impact on practice workload. The completeness of cross-sectional data on consultation workload varied between practices. GPs judged 41/61 (72%) of webGP requests to require a face-to-face or telephone consultation. Introducing webGP appeared to be associated with shifts in responsibility and workload between practice staff and between practices and patients. 81/231 patients completed a postal survey (35.1% response rate). E-Consulters were somewhat younger and more likely to be employed than face-to-face respondents. WebGP appeared broadly acceptable to patients regarding timeliness and quality/experience of care provided. Similar problems were presented by all respondents. Both groups appeared equally familiar with other practice online services; e-consulters were somewhat more likely to have used them. From semistructured staff interviews, it appeared that, while largely acceptable within practice, introducing e-consults had potential for adverse interactions with pre-existing practice systems. Conclusions: There is potential to assess the impact of new systems on consultation patterns by extracting routine data from practice databases. Staff and patients noticed subtle changes to responsibilities associated with online options. Greater uptake requires good communication between practice and patients, and organisation of systems to avoid conflicts and misuse. Further research is required to evaluate the full potential of webGP in managing practice workload. PMID:29449293
1983-06-01
be registered on the agenda. At each step of analysis, the action with the highest score is executed and the database is changed. The agenda controls...activation of production rules according to changes in the database. The agenda is updated whenever the database is changed. Each time, the number of...views of an object. Total prediction has combinatorial complexity. For a polyhedron with n distinct faces, there are 2^n views. Instead, ACRONYM predicts
Huang, Charles Lung-Cheng; Hsiao, Sigmund; Hwu, Hai-Gwo; Howng, Shen-Long
2012-12-30
The Chinese Facial Emotion Recognition Database (CFERD), a computer-generated three-dimensional (3D) paradigm, was developed to measure the recognition of facial emotional expressions at different intensities. The stimuli consisted of 3D colour photographic images of six basic facial emotional expressions (happiness, sadness, disgust, fear, anger and surprise) and neutral faces of the Chinese. The purpose of the present study is to describe the development and validation of CFERD with nonclinical healthy participants (N=100; 50 men; age ranging between 18 and 50 years), and to generate normative data set. The results showed that the sensitivity index d' [d'=Z(hit rate)-Z(false alarm rate), where function Z(p), p∈[0,1
Modeling first impressions from highly variable facial images
Vernon, Richard J. W.; Sutherland, Clare A. M.; Young, Andrew W.; Hartley, Tom
2014-01-01
First impressions of social traits, such as trustworthiness or dominance, are reliably perceived in faces, and despite their questionable validity they can have considerable real-world consequences. We sought to uncover the information driving such judgments, using an attribute-based approach. Attributes (physical facial features) were objectively measured from feature positions and colors in a database of highly variable “ambient” face photographs, and then used as input for a neural network to model factor dimensions (approachability, youthful-attractiveness, and dominance) thought to underlie social attributions. A linear model based on this approach was able to account for 58% of the variance in raters’ impressions of previously unseen faces, and factor-attribute correlations could be used to rank attributes by their importance to each factor. Reversing this process, neural networks were then used to predict facial attributes and corresponding image properties from specific combinations of factor scores. In this way, the factors driving social trait impressions could be visualized as a series of computer-generated cartoon face-like images, depicting how attributes change along each dimension. This study shows that despite enormous variation in ambient images of faces, a substantial proportion of the variance in first impressions can be accounted for through linear changes in objectively defined features. PMID:25071197
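The modelling idea, a linear map from objectively measured attributes to a social-trait factor score with explained variance reported as R², can be sketched as follows; the attribute matrix and factor scores here are synthetic placeholders, not the study's measurements:

```python
# Sketch: fit a linear model from facial attributes to a trait factor and report
# the variance explained on held-out ("previously unseen") faces.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
attributes = rng.normal(size=(1000, 65))           # stand-in for measured feature positions/colours
approachability = attributes @ rng.normal(size=65) + rng.normal(scale=2.0, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(attributes, approachability, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
print("variance explained on unseen faces (R^2):", model.score(X_te, y_te))
```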
Facial expression identification using 3D geometric features from Microsoft Kinect device
NASA Astrophysics Data System (ADS)
Han, Dongxu; Al Jawad, Naseer; Du, Hongbo
2016-05-01
Facial expression identification is an important part of face recognition and closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. Microsoft Kinect device has been widely used for multimedia interactions. More recently, the device has been increasingly deployed for supporting scientific investigations. This paper explores the effectiveness of using the device in identifying emotional facial expressions such as surprise, smile, sad, etc. and evaluates the usefulness of 3D data points on a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component that is derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames starting and ending by neutral emotion represent a whole expression. The feature vector eliminates the need for complex face orientation correction, simplifying the feature extraction process and making it more efficient. We applied the kNN classifier that exploits a feature component based similarity measure following the principle of dynamic time warping to determine the closest neighbors. Preliminary tests on a small scale database of different facial expressions show promises of the newly developed features and the usefulness of the Kinect device in facial expression identification.
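A rough sketch of the distance-based geometric feature and a DTW-based nearest-neighbour match; mesh points, reference indices and data shapes are illustrative assumptions, not the actual Kinect pipeline:

```python
# Sketch: per-frame distances from mesh points to reference points, then a plain
# dynamic-time-warping distance and a kNN vote over whole expression sequences.
import numpy as np

def frame_features(mesh_points, ref_idx=(0, 1, 2)):
    """mesh_points: (n_points, 3). Distances from every mesh point to a few reference points."""
    refs = mesh_points[list(ref_idx)]
    return np.linalg.norm(mesh_points[:, None, :] - refs[None, :, :], axis=2).ravel()

def dtw(a, b):
    """Dynamic-time-warping distance between two sequences of feature vectors."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_label(query_seq, train_seqs, train_labels, k=3):
    d = [dtw(query_seq, s) for s in train_seqs]
    votes = [train_labels[i] for i in np.argsort(d)[:k]]
    return max(set(votes), key=votes.count)
```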
Lagattuta, Kristin Hansen; Kramer, Hannah J
2017-01-01
We used eye tracking to examine 4- to 10-year-olds' and adults' (N = 173) visual attention to negative (anger, fear, sadness, disgust) and neutral faces when paired with happy faces in 2 experimental conditions: free-viewing ("look at the faces") and directed ("look only at the happy faces"). Regardless of instruction, all age groups more often looked first to negative versus positive faces (no age differences), suggesting that initial orienting is driven by bottom-up processes. In contrast, biases in more sustained attention-last looks and looking duration-varied by age and could be modified by top-down instruction. On the free-viewing task, all age groups exhibited a negativity bias which attenuated with age and remained stable across trials. When told to look only at happy faces (directed task), all age groups shifted to a positivity bias, with linear age-related improvements. This ability to implement the "look only at the happy faces" instruction, however, fatigued over time, with the decrement stronger for children. Controlling for age, individual differences in executive function (working memory and inhibitory control) had no relation to the free-viewing task; however, these variables explained substantial variance on the directed task, with children and adults higher in executive function showing better skill at looking last and looking longer at happy faces. Greater anxiety predicted more first looks to angry faces on the directed task. These findings advance theory and research on normative development and individual differences in the bias to prioritize negative information, including contributions of bottom-up salience and top-down control. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Sub-pattern based multi-manifold discriminant analysis for face recognition
NASA Astrophysics Data System (ADS)
Dai, Jiangyan; Guo, Changlu; Zhou, Wei; Shi, Yanjiao; Cong, Lin; Yi, Yugen
2018-04-01
In this paper, we present a Sub-pattern based Multi-manifold Discriminant Analysis (SpMMDA) algorithm for face recognition. Unlike the existing Multi-manifold Discriminant Analysis (MMDA) approach, which is based on holistic information of the face image, SpMMDA operates on sub-images partitioned from the original face image and extracts discriminative local features from each sub-image separately. Moreover, the structure information of the different sub-images from the same face image is considered in the proposed method, with the aim of further improving the recognition performance. Extensive experiments on three standard face databases (Extended YaleB, CMU PIE and AR) demonstrate that the proposed method is effective and outperforms several other sub-pattern based face recognition methods.
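The sub-pattern idea, learning a separate discriminant projection per image block and concatenating the local features, can be sketched as below; the multi-manifold and structure-preserving parts of SpMMDA are not reproduced:

```python
# Sketch: partition each face into blocks, fit LDA per block, concatenate local features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def subpattern_features(images, labels, grid=(4, 4)):
    """images: (n, h, w) grayscale faces; labels: (n,). Returns concatenated local features."""
    n, h, w = images.shape
    feats = []
    for bi in range(grid[0]):
        for bj in range(grid[1]):
            block = images[:, bi * h // grid[0]:(bi + 1) * h // grid[0],
                              bj * w // grid[1]:(bj + 1) * w // grid[1]].reshape(n, -1)
            lda = LinearDiscriminantAnalysis().fit(block, labels)   # per-block projection
            feats.append(lda.transform(block))
    return np.hstack(feats)
```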
Age and gender estimation using Region-SIFT and multi-layered SVM
NASA Astrophysics Data System (ADS)
Kim, Hyunduk; Lee, Sang-Heon; Sohn, Myoung-Kyu; Hwang, Byunghun
2018-04-01
In this paper, we propose an age and gender estimation framework using the region-SIFT feature and a multi-layered SVM classifier. The suggested framework entails three processes. The first step is landmark-based face alignment. The second step is feature extraction: we introduce a region-SIFT feature extraction method based on facial landmarks, in which we first define sub-regions of the face and then extract SIFT features from each sub-region; to reduce the feature dimensionality, we employ Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). Finally, we classify age and gender using multi-layered Support Vector Machines (SVMs) for efficient classification. Rather than performing gender estimation and age estimation independently, the multi-layered SVM can improve the classification rate by constructing a classifier that estimates age according to gender. Moreover, we collect a dataset of face images, called DGIST_C, from the internet. A performance evaluation of the proposed method was performed on the FERET, CACD, and DGIST_C databases. The experimental results demonstrate that the proposed approach estimates age and gender very efficiently and accurately.
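A minimal sketch of the multi-layered SVM arrangement, a gender classifier followed by per-gender age classifiers; the feature vectors stand in for the PCA/LDA-reduced region-SIFT descriptors:

```python
# Sketch: first predict gender, then predict age group with the classifier
# trained only on samples of that gender.
import numpy as np
from sklearn.svm import SVC

def train_layered(X, gender, age_group):
    """X: (n, d) features; gender, age_group: (n,) label arrays."""
    gender_clf = SVC().fit(X, gender)
    age_clf = {g: SVC().fit(X[gender == g], age_group[gender == g])
               for g in np.unique(gender)}
    return gender_clf, age_clf

def predict_layered(gender_clf, age_clf, x):
    g = gender_clf.predict(x[None, :])[0]
    return g, age_clf[g].predict(x[None, :])[0]
```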
Visual cryptography for face privacy
NASA Astrophysics Data System (ADS)
Ross, Arun; Othman, Asem A.
2010-04-01
We discuss the problem of preserving the privacy of a digital face image stored in a central database. In the proposed scheme, a private face image is dithered into two host face images such that it can be revealed only when both host images are simultaneously available; at the same time, the individual host images do not reveal the identity of the original image. In order to accomplish this, we appeal to the field of Visual Cryptography. Experimental results confirm the following: (a) the possibility of hiding a private face image in two unrelated host face images; (b) the successful matching of face images that are reconstructed by superimposing the host images; and (c) the inability of the host images, known as sheets, to reveal the identity of the secret face image.
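For orientation, here is a toy (2, 2) visual cryptography scheme on a binary image, where stacking the two shares reveals the secret; the paper's scheme embeds the secret face into two natural host face images, which is considerably more elaborate:

```python
# Sketch: each secret pixel becomes a 2x2 sub-pixel block in two shares.
# White pixels get identical blocks, black pixels get complementary blocks,
# so OR-ing (stacking) the shares turns black regions fully black.
import numpy as np

def make_shares(secret, rng=np.random.default_rng(0)):
    """secret: 2-D array of 0/1 (1 = black). Returns two shares twice the size."""
    patterns = np.array([[[1, 0], [0, 1]], [[0, 1], [1, 0]]])   # complementary blocks
    h, w = secret.shape
    s1 = np.zeros((2 * h, 2 * w), dtype=int)
    s2 = np.zeros_like(s1)
    for i in range(h):
        for j in range(w):
            p = rng.integers(2)
            s1[2*i:2*i+2, 2*j:2*j+2] = patterns[p]
            s2[2*i:2*i+2, 2*j:2*j+2] = patterns[p if secret[i, j] == 0 else 1 - p]
    return s1, s2

secret = (np.random.rand(16, 16) > 0.5).astype(int)
s1, s2 = make_shares(secret)
revealed = np.maximum(s1, s2)   # stacking the shares reveals the secret pattern
```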
Folgerø, Per O; Hodne, Lasse; Johansson, Christer; Andresen, Alf E; Sætren, Lill C; Specht, Karsten; Skaar, Øystein O; Reber, Rolf
2016-01-01
This article explores the possibility of testing hypotheses about art production in the past by collecting data in the present. We call this enterprise "experimental art history". Why did medieval artists prefer to paint Christ with his face directed towards the beholder, while profane faces were noticeably more often painted in different degrees of profile? Is a preference for frontal faces motivated by deeper evolutionary and biological considerations? Head and gaze direction is a significant factor for detecting the intentions of others, and accurate detection of gaze direction depends on strong contrast between a dark iris and a bright sclera, a combination that is only found in humans among the primates. One uniquely human capacity is language acquisition, where the detection of shared or joint attention, for example through detection of gaze direction, contributes significantly to the ease of acquisition. The perceived face and gaze direction is also related to fundamental emotional reactions such as fear, aggression, empathy and sympathy. The fast-track modulator model presents a related fast and unconscious subcortical route that involves many central brain areas. Activity in this pathway mediates the affective valence of the stimulus. In particular, different sub-regions of the amygdala show specific activation as response to gaze direction, head orientation and the valence of facial expression. We present three experiments on the effects of face orientation and gaze direction on the judgments of social attributes. We observed that frontal faces with direct gaze were more highly associated with positive adjectives. Does this help to associate positive values to the Holy Face in a Western context? The formal result indicates that the Holy Face is perceived more positively than profiles with both direct and averted gaze. Two control studies, using a Brazilian and a Dutch database of photographs, showed a similar but weaker effect with a larger contrast between the gaze directions for profiles. Our findings indicate that many factors affect the impression of a face, and that eye contact in combination with face direction reinforce the general impression of portraits, rather than determine it.
Mapping correspondence between facial mimicry and emotion recognition in healthy subjects.
Ponari, Marta; Conson, Massimiliano; D'Amico, Nunzia Pina; Grossi, Dario; Trojano, Luigi
2012-12-01
We aimed at verifying the hypothesis that facial mimicry is causally and selectively involved in emotion recognition. For this purpose, in Experiment 1, we explored the effect of tonic contraction of muscles in upper or lower half of participants' face on their ability to recognize emotional facial expressions. We found that the "lower" manipulation specifically impaired recognition of happiness and disgust, the "upper" manipulation impaired recognition of anger, while both manipulations affected recognition of fear; recognition of surprise and sadness were not affected by either blocking manipulations. In Experiment 2, we verified whether emotion recognition is hampered by stimuli in which an upper or lower half-face showing an emotional expression is combined with a neutral half-face. We found that the neutral lower half-face interfered with recognition of happiness and disgust, whereas the neutral upper half impaired recognition of anger; recognition of fear and sadness was impaired by both manipulations, whereas recognition of surprise was not affected by either manipulation. Taken together, the present findings support simulation models of emotion recognition and provide insight into the role of mimicry in comprehension of others' emotional facial expressions. PsycINFO Database Record (c) 2012 APA, all rights reserved.
Log-Gabor Weber descriptor for face recognition
NASA Astrophysics Data System (ADS)
Li, Jing; Sang, Nong; Gao, Changxin
2015-09-01
The Log-Gabor transform, which is suitable for analyzing gradually changing data such as iris and face images, has been widely used in image processing, pattern recognition, and computer vision. In most cases, only the magnitude or phase information of the Log-Gabor transform is considered. However, the complementary effect of combining magnitude and phase information simultaneously for an image-feature extraction problem has not been systematically explored in existing works. We propose a local image descriptor for face recognition, called the Log-Gabor Weber descriptor (LGWD). The novelty of our LGWD is twofold: (1) to fully utilize the information from the magnitude or phase features of the multi-scale and multi-orientation Log-Gabor transform, we apply the Weber local binary pattern operator to each transform response. (2) The encoded Log-Gabor magnitude and phase information are fused at the feature level by utilizing a kernel canonical correlation analysis strategy, considering that feature-level information fusion is effective when the modalities are correlated. Experimental results on the AR, Extended Yale B, and UMIST face databases, compared with those available from recent experiments reported in the literature, show that our descriptor yields better performance than state-of-the-art methods.
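A sketch of a single log-Gabor filter built in the frequency domain and applied to an image, returning the magnitude and phase responses that the descriptor above encodes; the multi-scale/multi-orientation bank, Weber-pattern encoding and kernel CCA fusion are not reproduced, and the parameter values are illustrative:

```python
# Sketch: one log-Gabor filter (radial log-Gaussian x angular Gaussian) applied via FFT.
import numpy as np

def log_gabor_response(image, f0=0.1, sigma_ratio=0.55, theta0=0.0, sigma_theta=0.4):
    rows, cols = image.shape
    fy = np.fft.fftshift(np.fft.fftfreq(rows))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(cols))[None, :]
    radius = np.sqrt(fx**2 + fy**2)
    radius[rows // 2, cols // 2] = 1.0                      # avoid log(0) at the DC term
    theta = np.arctan2(fy, fx)
    radial = np.exp(-(np.log(radius / f0))**2 / (2 * np.log(sigma_ratio)**2))
    dtheta = np.arctan2(np.sin(theta - theta0), np.cos(theta - theta0))
    angular = np.exp(-dtheta**2 / (2 * sigma_theta**2))
    G = radial * angular
    G[rows // 2, cols // 2] = 0.0                           # zero DC component
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    response = np.fft.ifft2(np.fft.ifftshift(spectrum * G))
    return np.abs(response), np.angle(response)             # magnitude and phase responses

mag, phase = log_gabor_response(np.random.rand(64, 64))
```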
Face Hallucination with Linear Regression Model in Semi-Orthogonal Multilinear PCA Method
NASA Astrophysics Data System (ADS)
Asavaskulkiet, Krissada
2018-04-01
In this paper, we propose a new face hallucination technique that reconstructs face images in HSV color space with a semi-orthogonal multilinear principal component analysis (SO-MPCA) method. This novel hallucination technique operates directly on tensors via tensor-to-vector projection by imposing the orthogonality constraint in only one mode. In our experiments, we use facial images from the FERET database to test the hallucination approach, which is demonstrated through extensive experiments producing high-quality hallucinated color faces. The experimental results clearly demonstrate that photorealistic color face images can be generated using the SO-MPCA subspace with a linear regression model.
Online Survey Design and Development: A Janus-Faced Approach
ERIC Educational Resources Information Center
Lauer, Claire; McLeod, Michael; Blythe, Stuart
2013-01-01
In this article we propose a "Janus-faced" approach to survey design--an approach that encourages researchers to consider how they can design and implement surveys more effectively using the latest web and database tools. Specifically, this approach encourages researchers to look two ways at once; attending to both the survey interface…
ERIC Educational Resources Information Center
Bodily, Robert; Verbert, Katrien
2017-01-01
This article is a comprehensive literature review of student-facing learning analytics reporting systems that track learning analytics data and report it directly to students. This literature review builds on four previously conducted literature reviews in similar domains. Out of the 945 articles retrieved from databases and journals, 93 articles…
Novel dynamic Bayesian networks for facial action element recognition and understanding
NASA Astrophysics Data System (ADS)
Zhao, Wei; Park, Jeong-Seon; Choi, Dong-You; Lee, Sang-Woong
2011-12-01
In daily life, language is an important tool of communication between people. Besides language, facial action can also provide a great amount of information. Therefore, facial action recognition has become a popular research topic in the field of human-computer interaction (HCI). However, facial action recognition is quite a challenging task due to its complexity. In a literal sense, there are thousands of facial muscular movements, many of which have very subtle differences. Moreover, muscular movements always occur simultaneously when the pose is changed. To address this problem, we first build a fully automatic facial points detection system based on a local Gabor filter bank and principal component analysis. Then, novel dynamic Bayesian networks are proposed to perform facial action recognition using the junction tree algorithm over a limited number of feature points. In order to evaluate the proposed method, we have used the Korean face database for model training. For testing, we used the CUbiC FacePix, facial expressions and emotion database, Japanese female facial expression database, and our own database. Our experimental results clearly demonstrate the feasibility of the proposed approach.
Pose-Invariant Face Recognition via RGB-D Images.
Sang, Gaoli; Li, Jing; Zhao, Qijun
2016-01-01
Three-dimensional (3D) face models can intrinsically handle large pose face recognition problem. In this paper, we propose a novel pose-invariant face recognition method via RGB-D images. By employing depth, our method is able to handle self-occlusion and deformation, both of which are challenging problems in two-dimensional (2D) face recognition. Texture images in the gallery can be rendered to the same view as the probe via depth. Meanwhile, depth is also used for similarity measure via frontalization and symmetric filling. Finally, both texture and depth contribute to the final identity estimation. Experiments on Bosphorus, CurtinFaces, Eurecom, and Kiwi databases demonstrate that the additional depth information has improved the performance of face recognition with large pose variations and under even more challenging conditions.
Becoming a Lunari or Taiyo expert: learned attention to parts drives holistic processing of faces.
Chua, Kao-Wei; Richler, Jennifer J; Gauthier, Isabel
2014-06-01
Faces are processed holistically, but the locus of holistic processing remains unclear. We created two novel races of faces (Lunaris and Taiyos) to study how experience with face parts influences holistic processing. In Experiment 1, subjects individuated Lunaris wherein the top, bottom, or both face halves contained diagnostic information. Subjects who learned to attend to face parts exhibited no holistic processing. This suggests that individuation only leads to holistic processing when the whole face is attended. In Experiment 2, subjects individuated both Lunaris and Taiyos, with diagnostic information in complementary face halves of the two races. Holistic processing was measured with composites made of either diagnostic or nondiagnostic face parts. Holistic processing was only observed for composites made from diagnostic face parts, demonstrating that holistic processing can occur for diagnostic face parts that were never seen together. These results suggest that holistic processing is an expression of learned attention to diagnostic face parts. PsycINFO Database Record (c) 2014 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Balbin, Jessie R.; Pinugu, Jasmine Nadja J.; Basco, Abigail Joy S.; Cabanada, Myla B.; Gonzales, Patrisha Melrose V.; Marasigan, Juan Carlos C.
2017-06-01
The research aims to build a tool for assessing patients for post-traumatic stress disorder (PTSD). The parameters used are heart rate, skin conductivity, and facial gestures. Facial gestures are recorded using OpenFace, an open-source face recognition program that uses facial action units to track facial movements. Heart rate and skin conductivity are measured through sensors operated by a Raspberry Pi. Results are stored in a database for quick and easy access, and the databases are uploaded to a cloud platform so that doctors have direct access to the data. This research aims to analyze these parameters and give an accurate assessment of the patient.
NASA Astrophysics Data System (ADS)
Hsieh, Cheng-Ta; Huang, Kae-Horng; Lee, Chang-Hsing; Han, Chin-Chuan; Fan, Kuo-Chin
2017-12-01
Robust face recognition under illumination variations is an important and challenging task in a face recognition system, particularly for face recognition in the wild. In this paper, a face image preprocessing approach, called spatial adaptive shadow compensation (SASC), is proposed to eliminate shadows in the face image due to different lighting directions. First, spatial adaptive histogram equalization (SAHE), which uses a face intensity prior model, is proposed to enhance the contrast of each local face region without generating visible noise in smooth face areas. Adaptive shadow compensation (ASC), which performs shadow compensation in each local image block, is then used to produce a well-compensated face image appropriate for face feature extraction and recognition. Finally, null-space linear discriminant analysis (NLDA) is employed to extract discriminant features from SASC-compensated images. Experiments performed on the Yale B, extended Yale B, and CMU PIE face databases have shown that the proposed SASC always yields the best face recognition accuracy. That is, SASC is more robust for face recognition under illumination variations than other shadow compensation approaches.
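As a loosely related, off-the-shelf baseline only (not the paper's SAHE/ASC steps, which rely on a face intensity prior), block-wise contrast-limited adaptive histogram equalisation illustrates the general notion of spatially adaptive compensation:

```python
# Sketch: CLAHE applied in local tiles of a grayscale face image, as a rough
# stand-in for spatially adaptive contrast/shadow compensation.
import cv2

def local_compensate(gray_face, grid=(8, 8), clip=2.0):
    """gray_face: 8-bit single-channel image. Returns a locally equalised image."""
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=grid)
    return clahe.apply(gray_face)
```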
Meeting the Aims of Honors in the Online Environment
ERIC Educational Resources Information Center
Johnson, Melissa L.
2013-01-01
While little data-based research is available on the use of technology in the honors classroom, data on the nature of online honors courses are even rarer. In undergraduate education generally, enrollment in online courses has been increasing annually, outpacing enrollment in traditional, face-to-face environments. During fall 2011, more than 6.7…
Intelligent Virtual Assistant's Impact on Technical Proficiency within Virtual Teams
ERIC Educational Resources Information Center
Graham, Christian; Jones, Nory B.
2016-01-01
Information-systems development continues to be a difficult process, particularly for virtual teams that do not have the luxury of meeting face-to-face. The research literature on this topic reinforces this point: the greater part of database systems development projects ends in failure. The use of virtual teams to complete projects further…
Social Networking as a Platform for Role-Playing Scientific Case Studies
ERIC Educational Resources Information Center
Geyer, Andrea M.
2014-01-01
This work discusses the design and implementation of two online case studies in a face-to-face general chemistry course. The case studies were integrated into the course to emphasize the need for science literacy in general society, to enhance critical thinking, to introduce database searching, and to improve primary literature reading skills. An…
We look like our names: The manifestation of name stereotypes in facial appearance.
Zwebner, Yonat; Sellier, Anne-Laure; Rosenfeld, Nir; Goldenberg, Jacob; Mayo, Ruth
2017-04-01
Research demonstrates that facial appearance affects social perceptions. The current research investigates the reverse possibility: Can social perceptions influence facial appearance? We examine a social tag that is associated with us early in life-our given name. The hypothesis is that name stereotypes can be manifested in facial appearance, producing a face-name matching effect, whereby both a social perceiver and a computer are able to accurately match a person's name to his or her face. In 8 studies we demonstrate the existence of this effect, as participants examining an unfamiliar face accurately select the person's true name from a list of several names, significantly above chance level. We replicate the effect in 2 countries and find that it extends beyond the limits of socioeconomic cues. We also find the effect using a computer-based paradigm and 94,000 faces. In our exploration of the underlying mechanism, we show that existing name stereotypes produce the effect, as its occurrence is culture-dependent. A self-fulfilling prophecy seems to be at work, as initial evidence shows that facial appearance regions that are controlled by the individual (e.g., hairstyle) are sufficient to produce the effect, and socially using one's given name is necessary to generate the effect. Together, these studies suggest that facial appearance represents social expectations of how a person with a specific name should look. In this way a social tag may influence one's facial appearance. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Global Binary Continuity for Color Face Detection With Complex Background
NASA Astrophysics Data System (ADS)
Belavadi, Bhaskar; Mahendra Prashanth, K. V.; Joshi, Sujay S.; Suprathik, N.
2017-08-01
In this paper, we propose a method to detect human faces in color images with complex backgrounds. The proposed algorithm makes use of two color space models, HSV and YCgCr. The color-segmented image is filled uniformly with a single (binary) color and then all unwanted discontinuous lines are removed to obtain the final image. Experimental results on the Caltech database show that the proposed model achieves far better segmentation for faces of varying orientation, skin color and background environment.
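An illustrative skin-colour segmentation along these lines, using OpenCV; YCrCb (directly supported by OpenCV) stands in for the paper's YCgCr model, and all threshold ranges are rough illustrative values rather than the paper's:

```python
# Sketch: combine HSV and chroma-based skin masks, then fill/clean the binary mask
# with morphological operations to remove discontinuities.
import cv2
import numpy as np

def skin_mask(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    mask_hsv = cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))       # rough skin hue/sat/val range
    mask_chroma = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # rough Cr/Cb skin range
    mask = cv2.bitwise_and(mask_hsv, mask_chroma)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)          # fill small gaps
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)           # remove small noise blobs
    return mask
```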
Interfering effects of retrieval in learning new information.
Finn, Bridgid; Roediger, Henry L
2013-11-01
In 7 experiments, we explored the role of retrieval in associative updating, that is, in incorporating new information into an associative memory. We tested the hypothesis that retrieval would facilitate incorporating a new contextual detail into a learned association. Participants learned 3 pieces of information-a person's face, name, and profession (in Experiments 1-5). In the 1st phase, participants in all conditions learned faces and names. In the 2nd phase, participants either restudied the face-name pair (the restudy condition) or were given the face and asked to retrieve the name (the test condition). In the 3rd phase, professions were presented for study just after restudy or testing. Our prediction was that the new information (the profession) would be more readily learned following retrieval of the face-name association compared to restudy of the face-name association. However, we found that the act of retrieval generally undermined acquisition of new associations rather than facilitating them. This detrimental effect emerged on both immediate and delayed tests. Further, the effect was not due to selective attention to feedback because we found impairment whether or not feedback was provided after the Phase 2 test. The data are novel in showing that the act of retrieving information can inhibit the ability to learn new information shortly thereafter. The results are difficult to accommodate within current theories that mostly emphasize benefits of retrieval for learning. PsycINFO Database Record (c) 2013 APA, all rights reserved.
ERIC Educational Resources Information Center
Brown, Cecelia
2003-01-01
Discusses the growth in use and acceptance of Web-based genomic and proteomic databases (GPD) in scholarly communication. Confirms the role of GPD in the scientific literature cycle, suggests GPD are a storage and retrieval mechanism for molecular biology information, and recommends that existing models of scientific communication be updated to…
a Review on State-Of Face Recognition Approaches
NASA Astrophysics Data System (ADS)
Mahmood, Zahid; Muhammad, Nazeer; Bibi, Nargis; Ali, Tauseef
Automatic Face Recognition (FR) presents a challenging task in the field of pattern recognition, and despite the huge amount of research in the past several decades, it still remains an open research problem. This is primarily due to the variability in facial images, such as non-uniform illumination, low resolution, occlusion, and/or variation in pose. Due to its non-intrusive nature, FR is an attractive biometric modality and has gained a lot of attention in the biometric research community. Driven by the enormous number of potential application domains, many algorithms have been proposed for FR. This paper presents an overview of state-of-the-art FR algorithms, focusing on their performance on publicly available databases. We highlight the conditions of the image databases with regard to the recognition rate of each approach. This is useful as a quick research overview and also helps practitioners choose an algorithm for their specific FR application. To provide a comprehensive survey, the paper divides the FR algorithms into three categories: (1) intensity-based, (2) video-based, and (3) 3D-based FR algorithms. In each category, the most commonly used algorithms and their performance are reported on standard face databases, and a brief critical discussion is carried out.
VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases
NASA Technical Reports Server (NTRS)
Roussopoulos, N.; Sellis, Timos
1992-01-01
One of the biggest problems facing NASA today is to provide scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed data sets and directories. VIEWCACHE allows database browsing and search, performing inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing Astrophysics databases which are physically distributed all over the world. Once the search is complete, the set of collected pointers pointing to the desired data are cached. VIEWCACHE includes spatial access methods for accessing image data sets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate distributed database search.
Facial expressions of emotion (KDEF): identification under different display-duration conditions.
Calvo, Manuel G; Lundqvist, Daniel
2008-02-01
Participants judged which of seven facial expressions (neutrality, happiness, anger, sadness, surprise, fear, and disgust) were displayed by a set of 280 faces corresponding to 20 female and 20 male models of the Karolinska Directed Emotional Faces database (Lundqvist, Flykt, & Ohman, 1998). Each face was presented under free-viewing conditions (to 63 participants) and also for 25, 50, 100, 250, and 500 msec (to 160 participants), to examine identification thresholds. Measures of identification accuracy, types of errors, and reaction times were obtained for each expression. In general, happy faces were identified more accurately, earlier, and faster than other faces, whereas judgments of fearful faces were the least accurate, the latest, and the slowest. Norms for each face and expression regarding level of identification accuracy, errors, and reaction times may be downloaded from www.psychonomic.org/archive/.
Does it matter where we meet? The role of emotional context in evaluative first impressions.
Koji, Shahnaz; Fernandes, Myra
2010-06-01
We investigated how emotionality of visual background context influenced perceptual ratings of faces. In two experiments participants rated how positive or negative a face, with a neutral expression (Experiment 1), or unambiguous emotional expression (happy/angry; Experiment 2), appeared when viewed overlaid onto positive, negative, or neutral background context scenes. Faces viewed in a positive context were rated as appearing more positive than when in a neutral or negative context, and faces in negative contexts were rated more negative than when in a positive or neutral context, regardless of the emotional expression portrayed. Notably, congruency of valence in face expression and background context significantly influenced face ratings. These findings suggest that human judgements of faces are relative, and significantly influenced by contextual factors. (PsycINFO Database Record (c) 2010 APA, all rights reserved).
Formal implementation of a performance evaluation model for the face recognition system.
Shin, Yong-Nyuo; Kim, Jason; Lee, Yong-Jun; Shin, Woochang; Choi, Jin-Young
2008-01-01
Due to its usability features, practical applications, and lack of intrusiveness, face recognition technology, based on information derived from individuals' facial features, has been attracting considerable attention recently. Reported recognition rates of commercialized face recognition systems cannot be admitted as official recognition rates, as they are based on assumptions that are beneficial to the specific system and face database. Therefore, performance evaluation methods and tools are necessary to objectively measure the accuracy and performance of any face recognition system. In this paper, we propose and formalize a performance evaluation model for the biometric recognition system, implementing an evaluation tool for face recognition systems based on the proposed model. Furthermore, we performed evaluations objectively by providing guidelines for the design and implementation of a performance evaluation system, formalizing the performance test process.
Quantitative assessment of the facial features of a Mexican population dataset.
Farrera, Arodi; García-Velasco, Maria; Villanueva, Maria
2016-05-01
The present study describes the morphological variation of a large database of facial photographs. The database comprises frontal (386 females, 764 males) and lateral (312 females, 666 males) images of Mexican individuals aged 14-69 years that were obtained under controlled conditions. We used geometric morphometric methods and multivariate statistics to describe the phenotypic variation within the dataset as well as the variation regarding sex and age groups. In addition, we explored the correlation between facial traits in both views. We found a spectrum of variation that encompasses broad and narrow faces. In frontal view, the latter are associated with a longer nose, a thinner upper lip, a shorter lower face, and a longer upper face than broader faces. In lateral view, antero-posteriorly shortened faces are associated with a longer profile and a shortened helix compared with longer faces. Sexual dimorphism is found in all age groups except for individuals above 39 years old in lateral view. Likewise, age-related changes are significant for both sexes, except for females above 29 years old in both views. Finally, we observed that the pattern of covariation between views differs in males and females, mainly in the thickness of the upper lip and the angle of the facial profile and the auricle. The results of this study could contribute to forensic practice as a complement for the construction of biological profiles, for example, to improve facial reconstruction procedures. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Template protection and its implementation in 3D face recognition systems
NASA Astrophysics Data System (ADS)
Zhou, Xuebing
2007-04-01
As biometric recognition systems are widely applied in various application areas, security and privacy risks have recently attracted the attention of the biometric community. Template protection techniques prevent stored reference data from revealing private biometric information and enhance the security of biometric systems against attacks such as identity theft and cross matching. This paper concentrates on a template protection algorithm that merges methods from cryptography, error correction coding and biometrics. The key component of the algorithm is to convert biometric templates into binary vectors. It is shown that the binary vectors should be robust, uniformly distributed, statistically independent and collision-free so that authentication performance can be optimized and information leakage can be avoided. Depending on the statistical character of the biometric template, different approaches for transforming biometric templates into compact binary vectors are presented. The proposed methods are integrated into a 3D face recognition system and tested on the 3D facial images of the FRGC database. It is shown that the resulting binary vectors provide an authentication performance that is similar to the original 3D face templates. A high security level is achieved with reasonable false acceptance and false rejection rates of the system, based on an efficient statistical analysis. The algorithm estimates the statistical character of biometric templates from a number of biometric samples in the enrollment database. For the FRGC 3D face database, the small difference in robustness and discriminative power between the classification results under the assumption of uniformly distributed templates and those under the assumption of Gaussian distributed templates is shown in our tests.
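The central step described above is converting real-valued biometric templates into robust, uniformly distributed binary vectors. A minimal sketch of one common way to do this, thresholding each feature at its population median and comparing templates by Hamming distance, is shown below; the thresholding rule, feature dimensionality, and decision threshold are assumptions, not the paper's exact transform, and the error-correction stage it also uses is omitted.

```python
import numpy as np

def binarize_template(features, medians):
    """Threshold each real-valued component at its population median,
    one simple way to approximate robust, uniformly distributed bits."""
    return (features > medians).astype(np.uint8)

def hamming_distance(a, b):
    """Fraction of disagreeing bits between two binary templates."""
    return np.count_nonzero(a != b) / a.size

# Random vectors stand in for 3D face features; the thresholding rule and
# the 0.25 decision threshold are assumptions, not the paper's transform.
rng = np.random.default_rng(0)
enrollment = rng.normal(size=(100, 64))                # 100 samples, 64 features
medians = np.median(enrollment, axis=0)
reference = binarize_template(enrollment[0], medians)
probe = binarize_template(enrollment[0] + rng.normal(scale=0.1, size=64), medians)
accept = hamming_distance(reference, probe) < 0.25
print(accept)
```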
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bourham, Mohamed A.; Gilligan, John G.
Safety considerations in large future fusion reactors like ITER are important before licensing the reactor. Several scenarios are considered hazardous, which include safety of plasma-facing components during hard disruptions, high heat fluxes and thermal stresses during normal operation, accidental energy release, and aerosol formation and transport. Disruption events in large tokamaks like ITER are expected to produce local heat fluxes on plasma-facing components which may exceed 100 GW/m² over a period of about 0.1 ms. As a result, the surface temperature dramatically increases, which results in surface melting and vaporization, and produces thermal stresses and surface erosion. Plasma-facing component safety issues extend to cover a wide range of possible scenarios, including disruption severity and the impact of plasma-facing components on disruption parameters, accidental energy release and short/long term LOCAs, and formation of airborne particles by convective current transport during a LOVA (water/air ingress disruption) accident scenario. Study and evaluation of disruption-induced aerosol generation and mobilization is essential to characterize a database on particulate formation and distribution for a large future fusion tokamak reactor like ITER. In order to provide a database relevant to ITER, the SIRENS electrothermal plasma facility at NCSU has been modified to closely simulate heat fluxes expected in ITER.
3D facial expression recognition using maximum relevance minimum redundancy geometrical features
NASA Astrophysics Data System (ADS)
Rabiu, Habibu; Saripan, M. Iqbal; Mashohor, Syamsiah; Marhaban, Mohd Hamiruce
2012-12-01
In recent years, facial expression recognition (FER) has become an attractive research area which, besides the fundamental challenges it poses, finds application in areas such as human-computer interaction, clinical psychology, lie detection, pain assessment, and neurology. Generally, the approaches to FER consist of three main steps: face detection, feature extraction, and expression recognition. The recognition accuracy of FER hinges immensely on the relevance of the selected features in representing the target expressions. In this article, we present a person- and gender-independent 3D facial expression recognition method using maximum relevance minimum redundancy geometrical features. The aim is to detect a compact set of features that sufficiently represents the most discriminative features between the target classes. A multi-class one-against-one SVM classifier was employed to recognize the seven facial expressions: neutral, happy, sad, angry, fear, disgust, and surprise. An average recognition accuracy of 92.2% was recorded. Furthermore, inter-database homogeneity was investigated between two independent databases, the BU-3DFE and UPM-3DFE; the results showed a strong homogeneity between the two databases.
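As a rough illustration of the feature-selection-plus-classification pipeline described above, the sketch below greedily ranks features by mutual information with the class label while penalizing correlation with already-selected features, then trains a one-against-one SVM. This is a simplified stand-in for the paper's maximum relevance minimum redundancy criterion; the synthetic data, the redundancy measure, and the kernel choice are assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC

def mrmr_like_select(X, y, k):
    """Greedy max-relevance min-redundancy style selection.

    Relevance = mutual information with the class label; redundancy is
    approximated by mean absolute correlation with already-chosen features.
    """
    relevance = mutual_info_classif(X, y, random_state=0)
    corr = np.abs(np.corrcoef(X.T))                   # feature-feature correlation
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        remaining = [j for j in range(X.shape[1]) if j not in selected]
        scores = [relevance[j] - corr[j, selected].mean() for j in remaining]
        selected.append(remaining[int(np.argmax(scores))])
    return selected

# Synthetic stand-ins for geometric features (e.g., landmark distances)
# and the seven expression labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 40))
y = rng.integers(0, 7, size=200)
idx = mrmr_like_select(X, y, k=10)
clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(X[:, idx], y)
```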
Gender classification from video under challenging operating conditions
NASA Astrophysics Data System (ADS)
Mendoza-Schrock, Olga; Dong, Guozhu
2014-06-01
The literature is abundant with papers on gender classification research. However, the majority of such research is based on the assumption that there is enough resolution for the subject's face to be resolved; hence the majority of the research actually falls in the face recognition and facial feature area. A gap exists for gender classification under challenging operating conditions (different seasonal conditions, different clothing, etc.) and when the subject's face cannot be resolved due to lack of resolution. The Seasonal Weather and Gender (SWAG) Database is a novel database that contains subjects walking through a scene under operating conditions that span a calendar year. This paper exploits a subset of that database, the SWAG One dataset, using data mining techniques, traditional classifiers (e.g., Naïve Bayes, Support Vector Machine), and traditional (Canny edge detection) and non-traditional (height/width ratios) feature extractors to achieve high correct gender classification rates (greater than 85%). Another novelty includes exploiting frame differentials.
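A hedged sketch of the kind of pipeline the abstract describes, with simple ratio features fed to traditional classifiers, is given below. The feature definitions, synthetic stand-in data, and classifier settings are assumptions for illustration only; they are not the SWAG One features or results.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Synthetic silhouette measurements stand in for per-frame SWAG features:
# pixel height, width and stride length; labels are 0 = female, 1 = male.
rng = np.random.default_rng(2)
height = rng.normal(170, 10, size=300)
width = rng.normal(45, 6, size=300)
stride = rng.normal(70, 8, size=300)
X = np.column_stack([height / width, stride / height])   # simple ratio features
y = rng.integers(0, 2, size=300)

for clf in (GaussianNB(), SVC(kernel="linear")):
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(type(clf).__name__, round(score, 3))            # ~chance on random labels
```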
Carter, Mary; Fletcher, Emily; Sansom, Anna; Warren, Fiona C; Campbell, John L
2018-02-15
To evaluate the feasibility, acceptability and effectiveness of webGP as piloted by six general practices. Mixed-methods evaluation, including data extraction from practice databases, general practitioner (GP) completion of case reports, patient questionnaires and staff interviews. General practices in NHS Northern, Eastern and Western Devon Clinical Commissioning Group's area approximately 6 months after implementing webGP (February-July 2016). Six practices provided consultations data; 20 GPs completed case reports (regarding 61 e-consults); 81 patients completed questionnaires; 5 GPs and 5 administrators were interviewed. Attitudes and experiences of practice staff and patients regarding webGP. WebGP uptake during the evaluation was small, showing no discernible impact on practice workload. The completeness of cross-sectional data on consultation workload varied between practices. GPs judged 41/61 (72%) of webGP requests to require a face-to-face or telephone consultation. Introducing webGP appeared to be associated with shifts in responsibility and workload between practice staff and between practices and patients. 81/231 patients completed a postal survey (35.1% response rate). E-Consulters were somewhat younger and more likely to be employed than face-to-face respondents. WebGP appeared broadly acceptable to patients regarding timeliness and quality/experience of care provided. Similar problems were presented by all respondents. Both groups appeared equally familiar with other practice online services; e-consulters were somewhat more likely to have used them. From semistructured staff interviews, it appeared that, while largely acceptable within practice, introducing e-consults had potential for adverse interactions with pre-existing practice systems. There is potential to assess the impact of new systems on consultation patterns by extracting routine data from practice databases. Staff and patients noticed subtle changes to responsibilities associated with online options. Greater uptake requires good communication between practice and patients, and organisation of systems to avoid conflicts and misuse. Further research is required to evaluate the full potential of webGP in managing practice workload. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Friedman, Lee; Nixon, Mark S; Komogortsev, Oleg V
2017-01-01
We introduce the intraclass correlation coefficient (ICC) to the biometric community as an index of the temporal persistence, or stability, of a single biometric feature. It requires, as input, a feature on an interval or ratio scale that is reasonably normally distributed, and it can only be calculated if each subject is tested on 2 or more occasions. For a biometric system with multiple features available for selection, the ICC can be used to measure the relative stability of each feature. We show, for 14 distinct data sets (1 synthetic, 8 eye-movement-related, 2 gait-related, 2 face-recognition-related, and 1 brain-structure-related), that selecting the most stable features, based on the ICC, generally resulted in the best biometric performance. Analyses based on using only the most stable features produced superior Rank-1 Identification Rate (Rank-1-IR) performance in 12 of 14 databases (p = 0.0065, one-tailed), when compared to other sets of features, including the set of all features. For Equal Error Rate (EER), using a subset of only high-ICC features also produced superior performance in 12 of 14 databases (p = 0.0065, one-tailed). In general, then, for our databases, prescreening potential biometric features and choosing only highly reliable features yields better performance than choosing lower-ICC features or choosing all features combined. We also determined that, as the ICC of a group of features increases, the median of the genuine similarity score distribution increases and the spread of this distribution decreases. There were no statistically significant similar relationships for the impostor distributions. We believe that the ICC will find many uses in biometric research. In the case of eye-movement-driven biometrics, the use of reliable features, as measured by the ICC, allowed us to achieve an authentication performance with EER = 2.01%, which was not possible before.
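A minimal sketch of ICC-based feature screening is shown below, assuming the one-way random-effects ICC(1,1) formulation; the abstract does not state which ICC variant was used, and the synthetic two-session data are purely illustrative.

```python
import numpy as np

def icc_1_1(x):
    """One-way random-effects ICC(1,1) for one feature.

    x has shape (n_subjects, n_sessions); the abstract does not state which
    ICC variant was used, so this formulation is an assumption.
    """
    n, k = x.shape
    subject_means = x.mean(axis=1)
    msb = k * np.sum((subject_means - x.mean()) ** 2) / (n - 1)       # between-subject MS
    msw = np.sum((x - subject_means[:, None]) ** 2) / (n * (k - 1))   # within-subject MS
    return (msb - msw) / (msb + (k - 1) * msw)

# Rank synthetic features by temporal persistence and keep the most stable.
rng = np.random.default_rng(3)
subject_effect = rng.normal(size=(50, 1, 20))                         # 50 subjects, 20 features
sessions = subject_effect + rng.normal(scale=0.3, size=(50, 2, 20))   # 2 occasions each
iccs = np.array([icc_1_1(sessions[:, :, f]) for f in range(20)])
stable = np.argsort(iccs)[::-1][:10]    # indices of the 10 most persistent features
```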
Face biometrics with renewable templates
NASA Astrophysics Data System (ADS)
van der Veen, Michiel; Kevenaar, Tom; Schrijen, Geert-Jan; Akkermans, Ton H.; Zuo, Fei
2006-02-01
In recent literature, privacy protection technologies for biometric templates were proposed. Among these is the so-called helper-data system (HDS) based on reliable component selection. In this paper we integrate this approach with face biometrics such that we achieve a system in which the templates are privacy protected, and multiple templates can be derived from the same facial image for the purpose of template renewability. Extracting binary feature vectors forms an essential step in this process. Using the FERET and Caltech databases, we show that this quantization step does not significantly degrade the classification performance compared to, for example, traditional correlation-based classifiers. The binary feature vectors are integrated in the HDS leading to a privacy protected facial recognition algorithm with acceptable FAR and FRR, provided that the intra-class variation is sufficiently small. This suggests that a controlled enrollment procedure with a sufficient number of enrollment measurements is required.
Online counseling: a narrative and critical review of the literature.
Richards, Derek; Viganó, Noemi
2013-09-01
This article aimed to critically review the literature on online counseling. Database and hand-searches were made using search terms and eligibility criteria, yielding a total of 123 studies. The review begins with what characterizes online counseling. Outcome and process research in online counseling is reviewed. Features and cyberbehaviors of online counseling such as anonymity and disinhibition, convenience, time-delay, the loss of social signaling, and writing behavior in cyberspace are discussed. Ethical behavior, professional training, client suitability, and clients' and therapists' attitudes and experiences of online counseling are reviewed. A growing body of knowledge to date is positive in showing that online counseling can have a similar impact and is capable of replicating the facilitative conditions as face-to-face encounters. A need remains for stronger empirical evidence to establish efficacy and effectiveness and to understand better the unique mediating and facilitative variables. © 2013 Wiley Periodicals, Inc.
An improved SRC method based on virtual samples for face recognition
NASA Astrophysics Data System (ADS)
Fu, Lijun; Chen, Deyun; Lin, Kezheng; Li, Ao
2018-07-01
The sparse representation classifier (SRC) performs classification by evaluating which class leads to the minimum representation error. However, in the real world the number of available training samples is limited and, due to noise interference, the training samples cannot accurately represent the test sample linearly. Therefore, in this paper, we first produce virtual samples by exploiting the original training samples, with the aim of increasing the number of training samples. Then, we take the intra-class difference as a data representation of partial noise, and utilize the intra-class differences and training samples simultaneously to represent the test sample in a linear way according to the theory of the SRC algorithm. Using weighted score-level fusion, the respective representation scores of the virtual samples and the original training samples are fused together to obtain the final classification results. The experimental results on multiple face databases show that our proposed method has a very satisfactory classification performance.
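The sketch below illustrates the general SRC idea with virtual samples and weighted score-level fusion: a probe is coded sparsely over a dictionary of training samples, class residuals are computed, and the residuals from an original and a virtual dictionary are fused. The l1 solver, the way virtual samples are built from intra-class differences, and the fusion weight are assumptions rather than the paper's exact formulation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_scores(dictionary, labels, probe, alpha=0.01):
    """Sparse-representation class residuals; smaller means a better fit."""
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(dictionary, probe)
    coefs = coder.coef_
    return {c: np.linalg.norm(probe - dictionary[:, labels == c] @ coefs[labels == c])
            for c in np.unique(labels)}

# Toy gallery: 3 subjects x 2 images of 100-dimensional face vectors.
rng = np.random.default_rng(4)
gallery = rng.normal(size=(100, 6))
labels = np.array([0, 0, 1, 1, 2, 2])
# Virtual samples built by extrapolating along intra-class differences
# (an assumed construction, not the paper's exact one).
partner = gallery[:, [1, 0, 3, 2, 5, 4]]
virtual = gallery + 0.5 * (gallery - partner)
probe = gallery[:, 2] + rng.normal(scale=0.05, size=100)

orig = src_scores(gallery, labels, probe)
virt = src_scores(virtual, labels, probe)
w = 0.7                                   # assumed fusion weight
fused = {c: w * orig[c] + (1 - w) * virt[c] for c in orig}
print(min(fused, key=fused.get))          # predicted identity (expected: 1)
```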
VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases
NASA Technical Reports Server (NTRS)
Roussopoulos, N.; Sellis, Timos
1993-01-01
One of the biggest problems facing NASA today is to provide scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed datasets and directories. VIEWCACHE allows database browsing and search, performing inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing Astrophysics databases which are physically distributed all over the world. Once the search is complete, the set of collected pointers pointing to the desired data are cached. VIEWCACHE includes spatial access methods for accessing image datasets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate database search.
The CompTox Dashboard is a publicly accessible database provided by the National Center for Computational Toxicology at the US-EPA. The dashboard provides access to a database containing ~720,000 chemicals and integrates a number of our public-facing projects (e.g. ToxCast and Ex...
Federated Search Tools in Fusion Centers: Bridging Databases in the Information Sharing Environment
2012-09-01
considerable variation in how fusion centers plan for, gather requirements, select and acquire federated search tools to bridge disparate databases...centers, when considering integrating federated search tools; by evaluating the importance of the planning, requirements gathering, selection and...acquisition processes for integrating federated search tools; by acknowledging the challenges faced by some fusion centers during these integration processes
Franklin, Robert G; Adams, Reginald B; Steiner, Troy G; Zebrowitz, Leslie A
2018-05-14
Through 3 studies, we investigated whether the angularity and roundness present in faces contribute to the perception of angry and joyful expressions, respectively. First, in Study 1 we found that angry expressions naturally contain more inward-pointing lines, whereas joyful expressions contain more outward-pointing lines. Then, using image-processing techniques in Studies 2 and 3, we filtered images to contain only inward-pointing or outward-pointing lines as a way to approximate angularity and roundness. We found that filtering images to be more angular increased how threatening and angry a neutral face was rated, increased how intense angry expressions were rated, and enhanced the recognition of anger. Conversely, filtering images to be rounder increased how warm and joyful a neutral face was rated, increased the intensity of joyful expressions, and enhanced recognition of joy. Together these findings show that angularity and roundness play a direct role in the recognition of angry and joyful expressions. Given evidence that angularity and roundness may play a biological role in indicating threat and safety in the environment, this suggests that angularity and roundness represent primitive facial cues used to signal threat-anger and warmth-joy pairings. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Semisupervised kernel marginal Fisher analysis for face recognition.
Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun
2013-01-01
Dimensionality reduction is a key problem in face recognition due to the high dimensionality of face images. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labeled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it can successfully avoid the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold-adaptive nonparametric kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.
Linkage disequilibrium matches forensic genetic records to disjoint genomic marker sets.
Edge, Michael D; Algee-Hewitt, Bridget F B; Pemberton, Trevor J; Li, Jun Z; Rosenberg, Noah A
2017-05-30
Combining genotypes across datasets is central in facilitating advances in genetics. Data aggregation efforts often face the challenge of record matching: the identification of dataset entries that represent the same individual. We show that records can be matched across genotype datasets that have no shared markers based on linkage disequilibrium between loci appearing in different datasets. Using two datasets for the same 872 people, one with 642,563 genome-wide SNPs and the other with 13 short tandem repeats (STRs) used in forensic applications, we find that 90-98% of forensic STR records can be connected to corresponding SNP records and vice versa. Accuracy increases to 99-100% when ∼30 STRs are used. Our method expands the potential of data aggregation, but it also suggests privacy risks intrinsic in maintenance of databases containing even small numbers of markers, including databases of forensic significance.
Rhodes, Gillian; Lie, Hanne C; Ewing, Louise; Evangelista, Emma; Tanaka, James W
2010-01-01
Discrimination and recognition are often poorer for other-race than own-race faces. These other-race effects (OREs) have traditionally been attributed to reduced perceptual expertise, resulting from more limited experience, with other-race faces. However, recent findings suggest that sociocognitive factors, such as reduced motivation to individuate other-race faces, may also contribute. If the sociocognitive hypothesis is correct, then it should be possible to alter discrimination and memory performance for identical faces by altering their perceived race. We made identical ambiguous-race morphed faces look either Asian or Caucasian by presenting them in Caucasian or Asian face contexts, respectively. However, this perceived-race manipulation had no effect on either discrimination (Experiment 1) or memory (Experiment 2) for the ambiguous-race faces, despite the presence of the usual OREs in discrimination and recognition of unambiguous Asian and Caucasian faces in our participant population. These results provide no support for the sociocognitive hypothesis. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
Fusar-Poli, Paolo; Placentino, Anna; Carletti, Francesco; Landi, Paola; Allen, Paul; Surguladze, Simon; Benedetti, Francesco; Abbamonte, Marta; Gasparotti, Roberto; Barale, Francesco; Perez, Jorge; McGuire, Philip; Politi, Pierluigi
2009-01-01
Background: Most of our social interactions involve perception of emotional information from the faces of other people. Furthermore, such emotional processes are thought to be aberrant in a range of clinical disorders, including psychosis and depression. However, the exact neurofunctional maps underlying emotional facial processing are not well defined. Methods: Two independent researchers conducted separate comprehensive PubMed (1990 to May 2008) searches to find all functional magnetic resonance imaging (fMRI) studies using a variant of the emotional faces paradigm in healthy participants. The search terms were: “fMRI AND happy faces,” “fMRI AND sad faces,” “fMRI AND fearful faces,” “fMRI AND angry faces,” “fMRI AND disgusted faces” and “fMRI AND neutral faces.” We extracted spatial coordinates and inserted them in an electronic database. We performed activation likelihood estimation analysis for voxel-based meta-analyses. Results: Of the originally identified studies, 105 met our inclusion criteria. The overall database consisted of 1785 brain coordinates that yielded an overall sample of 1600 healthy participants. Quantitative voxel-based meta-analysis of brain activation provided neurofunctional maps for 1) main effect of human faces; 2) main effect of emotional valence; and 3) modulatory effect of age, sex, explicit versus implicit processing and magnetic field strength. Processing of emotional faces was associated with increased activation in a number of visual, limbic, temporoparietal and prefrontal areas; the putamen; and the cerebellum. Happy, fearful and sad faces specifically activated the amygdala, whereas angry or disgusted faces had no effect on this brain region. Furthermore, amygdala sensitivity was greater for fearful than for happy or sad faces. Insular activation was selectively reported during processing of disgusted and angry faces. However, insular sensitivity was greater for disgusted than for angry faces. Conversely, neural response in the visual cortex and cerebellum was observable across all emotional conditions. Limitations: Although the activation likelihood estimation approach is currently one of the most powerful and reliable meta-analytical methods in neuroimaging research, it is insensitive to effect sizes. Conclusion: Our study has detailed neurofunctional maps to use as normative references in future fMRI studies of emotional facial processing in psychiatric populations. We found selective differences between neural networks underlying the basic emotions in limbic and insular brain regions. PMID:19949718
Discriminative Projection Selection Based Face Image Hashing
NASA Astrophysics Data System (ADS)
Karabat, Cagatay; Erdogan, Hakan
Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.
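A hedged sketch of user-dependent projection selection is given below: rows of a random projection matrix are ranked by a per-dimension Fisher ratio computed from one user's samples against everyone else's, and the best rows are kept before quantization. The plain Fisher ratio and the sign quantizer stand in for the paper's criterion and its bimodal Gaussian mixture quantizer.

```python
import numpy as np

def select_discriminative_rows(P, user_feats, other_feats, m):
    """Keep the m rows of random projection P whose outputs best separate
    one enrolled user from everyone else, ranked by a per-dimension Fisher
    ratio; a plain stand-in for the paper's criterion."""
    u = user_feats @ P.T                   # user's projected samples
    o = other_feats @ P.T                  # everyone else's projected samples
    between = (u.mean(axis=0) - o.mean(axis=0)) ** 2
    within = u.var(axis=0) + o.var(axis=0) + 1e-12
    fisher = between / within
    return P[np.argsort(fisher)[::-1][:m]]

rng = np.random.default_rng(5)
P = rng.normal(size=(256, 128))            # random projection matrix
user = rng.normal(loc=0.3, size=(10, 128)) # enrolled user's feature vectors
others = rng.normal(size=(500, 128))
P_user = select_discriminative_rows(P, user, others, m=64)
hash_bits = (user[0] @ P_user.T > 0).astype(int)   # simple sign quantizer stand-in
```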
2011-04-01
to iris or face recognition. In addition, breathing the dust is prevented with something covering the mouth, which will cause problems with face...headgear that “obscures the hair or hairline”. [58] One database of face images has images of people with and without scarves over their mouth. [59... Electronic Systems Magazine. Vol 19, Number 9, September 2004. [58] U.S. Department of State. How to Apply [for a Passport] in Person. [On-line
van der Vaart, Rosalie; Witting, Marjon; Riper, Heleen; Kooistra, Lisa; Bohlmeijer, Ernst T; van Gemert-Pijnen, Lisette J E W C
2014-12-14
Blending online modules into face-to-face therapy offers opportunities to enhance patient self-management and to increase the (cost-)effectiveness of therapy, while still providing the support patients need. The aim of this study was to outline optimal usage of blended care for depression, according to patients and therapists. A Delphi method was used to find consensus on suitable blended protocols (content, sequence and ratio). Phase 1 was an explorative phase, conducted in two rounds of online questionnaires, in which patients' and therapists' preferences and opinions about online psychotherapy were surveyed. In phase 2, data from phase 1 were used in face-to-face interviews with therapists to investigate how blended therapy protocols could be set up and what essential preconditions would be. Twelve therapists and nine patients completed the surveys. Blended therapy was positively perceived among all respondents, especially to enhance the self-management of patients. According to most respondents, practical therapy components (assignments, diaries and psycho-education) may be provided via online modules, while process-related components (introduction, evaluation and discussing thoughts and feelings) should be supported face-to-face. The preferred blend of online and face-to-face sessions differs between therapists and patients; most therapists prefer 75% face-to-face sessions, most patients 50 to 60%. The interviews showed that tailoring treatment to individual patients is essential in secondary mental health care, due to the complexity of their problems. The amount and ratio of online modules needs to be adjusted according to the patient's problems, skills and characteristics. Therapists themselves should also develop skills to integrate online and face-to-face sessions. Blending online and face-to-face sessions in an integrated depression therapy is viewed as a positive innovation by patients and therapists. Following a standard blended protocol, however, would be difficult in secondary mental health care. A database of online modules could provide flexibility to tailor treatment to individual patients, which requires motivation and skills of both patients and therapists. Further research is necessary to determine the (cost-)effectiveness of blended care, but this study provides starting points and preconditions to blend online and face-to-face sessions and create a treatment combining the best of both worlds.
Combination of Face Regions in Forensic Scenarios.
Tome, Pedro; Fierrez, Julian; Vera-Rodriguez, Ruben; Ortega-Garcia, Javier
2015-07-01
This article presents an experimental analysis of the combination of different regions of the human face in various forensic scenarios, to generate scientific knowledge useful for forensic experts. Three scenarios of interest at different distances are considered, comparing mugshot and CCTV face images using the MORPH and SCface databases. One of the main findings is that inner facial regions combine better in mugshot and close CCTV scenarios and outer facial regions combine better in far CCTV scenarios. This means that, depending on the acquisition distance, the discriminative power of the facial regions changes, in some cases yielding better performance than the full face. This effect can be exploited by considering the fusion of facial regions, which results in a very significant improvement of the discriminative performance compared to just using the full face. © 2015 American Academy of Forensic Sciences.
Rocco, Philip; Kelly, Andrew S; Béland, Daniel; Kinane, Michael
2017-02-01
Prices are a significant driver of health care cost in the United States. Existing research on the politics of health system reform has emphasized the limited nature of policy entrepreneurs' efforts at solving the problem of rising prices through direct regulation at the state level. Yet this literature fails to account for how change agents in the states gradually reconfigured the politics of prices, forging new, transparency-based policy instruments called all-payer claims databases (APCDs), which are designed to empower consumers, purchasers, and states to make informed market and policy choices. Drawing on pragmatist institutional theory, this article shows how APCDs emerged as the dominant model for reforming health care prices. While APCD advocates faced significant institutional barriers to policy change, we show how they reconfigured existing ideas, tactical repertoires, and legal-technical infrastructures to develop a politically and technologically robust reform. Our analysis has important implications for theories of how change agents overcome structural barriers to health reform. Copyright © 2017 by Duke University Press.
Appearance-based first impressions and person memory.
Bell, Raoul; Mieth, Laura; Buchner, Axel
2015-03-01
Previous research has demonstrated that people preferentially remember reputational information that is emotionally incongruent to their expectations, but it has left open the question of the generality of this effect. Three conflicting hypotheses were proposed: (a) The effect is restricted to information relevant to reciprocal social exchange. (b) The effect is most pronounced for emotional (approach-and-avoidance-relevant) information. (c) The effect is due to a general tendency of the cognitive system to attend to unexpected and novel information regardless of its (emotional) content. Here, we varied the type of to-be-remembered person information across experiments. To stimulate expectations, we selected faces whose facial appearance was rated as pleasant or disgusting (Experiments 1 and 2), as intelligent or unintelligent (Experiment 3), or as being that of a lawyer or a farmer (Experiment 4). These faces were paired with behavior descriptions that violated or confirmed these appearance-based 1st impressions. Source memory for the association between the faces and the descriptions was assessed with surprise memory tests. The results show that people are willing to form various social expectations based on facial appearance alone, and they support the hypothesis that the classification of the faces in the memory test is biased by schema-congruent guessing. Source memory was generally enhanced for information violating appearance-based social expectations. In sum, the results show that person memory is consistently affected by different kinds of social expectations, supporting the idea that the mechanisms determining memory performance generalize beyond exchange-relevant reputational and emotional information. PsycINFO Database Record (c) 2015 APA, all rights reserved.
A Comparison of Global Indexing Schemes to Facilitate Earth Science Data Management
NASA Astrophysics Data System (ADS)
Griessbaum, N.; Frew, J.; Rilee, M. L.; Kuo, K. S.
2017-12-01
Recent advances in database technology have led to systems optimized for managing petabyte-scale multidimensional arrays. These array databases are a good fit for subsets of the Earth's surface that can be projected into a rectangular coordinate system with acceptable geometric fidelity. However, for global analyses, array databases must address the same distortions and discontinuities that apply to map projections in general. The array database SciDB supports enormous databases spread across thousands of computing nodes. Additionally, the following SciDB characteristics are particularly germane to the coordinate system problem: SciDB efficiently stores and manipulates sparse (i.e. mostly empty) arrays. SciDB arrays have 64-bit indexes. SciDB supports user-defined data types, functions, and operators. We have implemented two geospatial indexing schemes in SciDB. The simplest uses two array dimensions to represent longitude and latitude. For representation as 64-bit integers, the coordinates are multiplied by a scale factor large enough to yield an appropriate Earth surface resolution (e.g., a scale factor of 100,000 yields a resolution of approximately 1 m at the equator). Aside from the longitudinal discontinuity, the principal disadvantage of this scheme is its fixed scale factor. The second scheme uses a single array dimension to represent the bit-codes for locations in a hierarchical triangular mesh (HTM) coordinate system. An HTM maps the Earth's surface onto an octahedron, and then recursively subdivides each triangular face to the desired resolution. Earth surface locations are represented as the concatenation of an octahedron face code and a quadtree code within the face. Unlike our integerized lat-lon scheme, the HTM allows objects of different sizes (e.g., pixels with differing resolutions) to be represented in the same indexing scheme. We present an evaluation of the relative utility of these two schemes for managing and analyzing MODIS swath data.
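The integerized lat-lon scheme lends itself to a very small sketch. The snippet below maps geographic coordinates to 64-bit-safe integer indexes using the scale factor mentioned in the abstract; the offsets that shift latitude and longitude into a non-negative range are an assumption about how the array dimensions would be laid out, and the HTM scheme is not shown.

```python
def latlon_index(lat_deg, lon_deg, scale=100_000):
    """Map (lat, lon) in degrees to integer array indexes.

    A scale factor of 100,000 gives roughly 1 m resolution at the equator,
    as in the abstract; the +90/+180 offsets that shift coordinates into a
    non-negative range are an assumption about the dimension layout.
    """
    i = int(round((lat_deg + 90.0) * scale))    # 0 .. 18,000,000
    j = int(round((lon_deg + 180.0) * scale))   # 0 .. 36,000,000
    return i, j

# Example: a point near Santa Barbara, CA.
print(latlon_index(34.4208, -119.6982))         # (12442080, 6030180)
```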
ERIC Educational Resources Information Center
Feinberg, Daniel A.
2017-01-01
This study examined the supports that female students sought out and found of value in an online database design course in a health informatics master's program. A target outcome was to help inform the practice of faculty and administrators in similar programs. Health informatics is a growing field that has faced shortages of qualified workers who…
Three-Dimensional Anthropometric Database of Attractive Caucasian Women: Standards and Comparisons.
Galantucci, Luigi Maria; Deli, Roberto; Laino, Alberto; Di Gioia, Eliana; D'Alessio, Raoul; Lavecchia, Fulvio; Percoco, Gianluca; Savastano, Carmela
2016-10-01
The aim of this paper is to develop a database to determine a new biomorphometric standard of attractiveness. Sampling was carried out using noninvasive three-dimensional relief methods to measure the soft tissues of the face. These anthropometric measurements were analyzed to verify the existence of any canons with respect to shape, size, and measurement proportions which proved to be significant with regard to the aesthetics of the face. Finally, the anthropometric parameters obtained were compared with findings described in the international literature. The study sample was made up of competitors in the Miss Italy 2010 and 2009 beauty contests. The three-dimensional (3D) scanning of soft tissue surfaces allowed 3D digital models of the faces and the spatial 3D coordinates of 25 anthropometric landmarks to be obtained and used to calculate linear and angular measurements. A paired Student t test for the analysis of the means allowed 3 key questions in the study of biomorphometric parameters of the face to be addressed through comparison with the data available in the literature. The question of whether the samples analyzed are statistically representative of the population samples reported in the literature was also addressed. The critical analysis of the data helped to identify the anthropometric measurements of the upper, middle, and lower thirds of the face, variations in which have a major influence on the attractiveness of the face. These changes involve facial width, height, and depth. Changes in measurements of length, angles, and proportions found in the sample considered were also analyzed.
Automatic recognition of emotions from facial expressions
NASA Astrophysics Data System (ADS)
Xue, Henry; Gertner, Izidor
2014-06-01
In the human-computer interaction (HCI) process it is desirable to have an artificial intelligent (AI) system that can identify and categorize human emotions from facial expressions. Such systems can be used in security, in entertainment industries, and also to study visual perception, social interactions and disorders (e.g. schizophrenia and autism). In this work we survey and compare the performance of different feature extraction algorithms and classification schemes. We introduce a faster feature extraction method that resizes and applies a set of filters to the data images without sacrificing the accuracy. In addition, we have enhanced SVM to multiple dimensions while retaining the high accuracy rate of SVM. The algorithms were tested using the Japanese Female Facial Expression (JAFFE) Database and the Database of Faces (AT&T Faces).
Energy conservation using face detection
NASA Astrophysics Data System (ADS)
Deotale, Nilesh T.; Kalbande, Dhananjay R.; Mishra, Akassh A.
2011-10-01
Computerized face detection is concerned with the difficult task of locating human faces in a video signal. It has several applications, such as face recognition, simultaneous multiple face processing, biometrics, security, video surveillance, human-computer interfaces, image database management, autofocus in digital cameras, and selecting regions of interest in photo slideshows that use pan-and-scale effects. The present paper deals with energy conservation using face detection. Automating the process on a computer requires the use of various image processing techniques. There are various methods that can be used for face detection, such as contour tracking, template matching, controlled background, model-based, motion-based and color-based methods. Basically, the video of the subject is converted into images, which are further selected manually for processing. However, several factors like poor illumination, movement of the face, viewpoint-dependent physical appearance, acquisition geometry, imaging conditions, and compression artifacts make face detection difficult. This paper reports an algorithm for conservation of energy using face detection for various devices. The present paper suggests that energy conservation can be achieved by detecting the face, reducing the brightness of the complete image, and then adjusting the brightness of the particular area of the image where the face is located using histogram equalization.
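A rough Python/OpenCV sketch of the energy-saving idea, detect the face, darken the rest of the frame, and re-balance the face region with histogram equalization, is shown below. The Haar cascade file, the dimming factor, and the use of the luma channel for equalization are assumptions for illustration, not the paper's algorithm.

```python
import cv2
import numpy as np

def dim_except_faces(bgr, dim_factor=0.4):
    """Darken the whole frame, then restore and equalize detected face regions.

    The Haar cascade file, dim_factor and luma-only equalization are
    assumptions used to illustrate the idea, not the paper's algorithm.
    """
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    out = (bgr * dim_factor).astype(np.uint8)          # reduced-brightness frame
    for (x, y, w, h) in faces:
        roi = cv2.cvtColor(bgr[y:y + h, x:x + w], cv2.COLOR_BGR2YCrCb)
        roi[:, :, 0] = cv2.equalizeHist(roi[:, :, 0])  # histogram-equalize luma
        out[y:y + h, x:x + w] = cv2.cvtColor(roi, cv2.COLOR_YCrCb2BGR)
    return out

# Usage with a hypothetical frame:
# frame = cv2.imread("frame.png")
# cv2.imwrite("energy_saving_frame.png", dim_except_faces(frame))
```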
Energy in the urban environment. Proceedings of the 22. annual Illinois energy conference
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1994-12-31
The conference addressed the energy and environmental challenges facing large metropolitan areas. The topics included a comparison of the environmental status of cities twenty years ago with the challenges facing today's large cities, sustainable economic development, improving the energy and environmental infrastructure, and the changing urban transportation sector. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.
Massively-Parallel Architectures for Automatic Recognition of Visual Speech Signals
1988-10-12
…characteristics of speech from the visual speech signals. Neural networks have been trained on a database of vowels. The raw images of faces, aligned and preprocessed, were used as input to these networks, which were trained to estimate the corresponding envelope of the
Joint Sparse Representation for Robust Multimodal Biometrics Recognition
2014-01-01
comprehensive multimodal dataset and a face database are described in section V. Finally, in section VI, we discuss the computational complexity of...fingerprint, iris, palmprint, hand geometry and voice from subjects of different age, gender and ethnicity as described in Table I. It is a...Taylor, “Constructing nonlinear discriminants from multiple data views,” Machine Learning and Knowledge Discovery in Databases, pp. 328–343, 2010
NASA Technical Reports Server (NTRS)
1996-01-01
Topics considered include: New approach to turbulence modeling; Second moment closure analysis of the backstep flow database; Prediction of the backflow and recovery regions in the backward facing step at various Reynolds numbers; Turbulent flame propagation in partially premixed flames; Ensemble averaged dynamic modeling. Also included are a study of the turbulence structures of wall-bounded shear flows and a simulation and modeling study of the elliptic streamline flow.
Subtle perceptions of male sexual orientation influence occupational opportunities.
Rule, Nicholas O; Bjornsdottir, R Thora; Tskhay, Konstantin O; Ambady, Nalini
2016-12-01
Theories linking the literatures on stereotyping and human resource management have proposed that individuals may enjoy greater success obtaining jobs congruent with stereotypes about their social categories or traits. Here, we explored such effects for a detectable, but not obvious, social group distinction: male sexual orientation. Bridging previous work on prejudice and occupational success with that on social perception, we found that perceivers rated gay and straight men as more suited to professions consistent with stereotypes about their groups (nurses, pediatricians, and English teachers vs. engineers, managers, surgeons, and math teachers) from mere photos of their faces. Notably, distinct evaluations of the gay and straight men emerged based on perceptions of their faces with no explicit indication of sexual orientation. Neither perceivers' expertise with hiring decisions nor diagnostic information about the targets eliminated these biases, but encouraging fair decisions did contribute to partly ameliorating the differences. Mediation analysis further showed that perceptions of the targets' sexual orientations and facial affect accounted for these effects. Individuals may therefore infer characteristics about individuals' group memberships from their faces and use this information in a way that meaningfully influences evaluations of their suitability for particular jobs. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
A framework for the recognition of 3D faces and expressions
NASA Astrophysics Data System (ADS)
Li, Chao; Barreto, Armando
2006-04-01
Face recognition technology has been a focus both in academia and industry for the last couple of years because of its wide potential applications and its importance in meeting the security needs of today's world. Most of the systems developed are based on 2D face recognition technology, which uses pictures for data processing. With the development of 3D imaging technology, 3D face recognition emerges as an alternative to overcome the difficulties inherent in 2D face recognition, i.e., sensitivity to illumination conditions and to the orientation and positioning of the subject. But 3D face recognition still needs to tackle the problem of deformation of facial geometry that results from the expression changes of a subject. To deal with this issue, a 3D face recognition framework is proposed in this paper. It is composed of three subsystems: an expression recognition system, a system for the identification of faces with expression, and a neutral face recognition system. A system for the recognition of faces with one type of expression (happiness) and neutral faces was implemented and tested on a database of 30 subjects. The results proved the feasibility of this framework.
Spirituality in childhood cancer care
Lima, Nádia Nara Rolim; do Nascimento, Vânia Barbosa; de Carvalho, Sionara Melo Figueiredo; Neto, Modesto Leite Rolim; Moreira, Marcial Moreno; Brasil, Aline Quental; Junior, Francisco Telésforo Celestino; de Oliveira, Gislene Farias; Reis, Alberto Olavo Advíncula
2013-01-01
To deal with the suffering caused by childhood cancer, patients and their families use different coping strategies, among which spirituality appears as a way of minimizing possible damage. In this context, the purpose of the present study was to analyze the influence of spirituality in childhood cancer care, involving biopsychosocial aspects of the child, the family, and the health care team facing the disease. To accomplish this purpose, a nonsystematic review of literature of articles on national and international electronic databases (Scientific Electronic Library Online [SciELO], PubMed, and Latin American and Caribbean Health Sciences Literature [LILACS]) was conducted using the search terms “spirituality,” “child psychology,” “child,” and “cancer,” as well as on other available resources. After the search, 20 articles met the eligibility criteria and were included in the final sample. Our review showed that the relation between spirituality and health has lately become a subject of growing interest among researchers, as a positive influence of spirituality on people's welfare was noted. Studies that were retrieved using the mentioned search strategy in electronic databases, independently assessed by the authors according to the systematic review, showed that spirituality emerges as a driving force that helps pediatric patients and their families in coping with cancer. Health care workers have been increasingly attentive to this dimension of care. However, it is necessary to improve their knowledge regarding the subject. The search highlighted that spirituality is considered a source of comfort and hope, contributing to a better acceptance of his/her chronic condition by the child with cancer, as well as by the family. Further up-to-date studies addressing the subject are thus needed. It is also necessary to better train health care practitioners, so as to provide humanized care to the child with cancer. PMID:24133371
NASA Astrophysics Data System (ADS)
Chen, Cunjian; Ross, Arun
2013-05-01
Researchers in face recognition have been using Gabor filters for image representation due to their robustness to complex variations in expression and illumination. Numerous methods have been proposed to model the output of filter responses by employing either local or global descriptors. In this work, we propose a novel but simple approach for encoding Gradient information on Gabor-transformed images to represent the face, which can be used for identity, gender and ethnicity assessment. Extensive experiments on the standard face benchmark FERET (Visible versus Visible), as well as the heterogeneous face dataset HFB (Near-infrared versus Visible), suggest that the matching performance due to the proposed descriptor is comparable against state-of-the-art descriptor-based approaches in face recognition applications. Furthermore, the same feature set is used in the framework of a Collaborative Representation Classification (CRC) scheme for deducing soft biometric traits such as gender and ethnicity from face images in the AR, Morph and CAS-PEAL databases.
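As an illustration of the kind of Gabor-plus-gradient representation discussed above, the sketch below filters a face image with a small Gabor bank and encodes a gradient-orientation histogram per response. The kernel parameters, the number of orientations, and the 8-bin histogram are illustrative choices; the paper's actual descriptor and the collaborative representation classifier are not reproduced here.

```python
import cv2
import numpy as np

def gabor_gradient_descriptor(gray, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Filter with a small Gabor bank, then encode a gradient-orientation
    histogram per response. Kernel size, sigma, wavelength and the 8-bin
    histogram are illustrative choices, not the paper's exact descriptor."""
    features = []
    for theta in thetas:
        # args: ksize, sigma, theta, lambda (wavelength), gamma (aspect), psi (phase)
        kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5, 0)
        response = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)
        gx = cv2.Sobel(response, cv2.CV_32F, 1, 0)     # horizontal gradient
        gy = cv2.Sobel(response, cv2.CV_32F, 0, 1)     # vertical gradient
        hist, _ = np.histogram(np.arctan2(gy, gx), bins=8, range=(-np.pi, np.pi))
        features.append(hist / (hist.sum() + 1e-9))
    return np.concatenate(features)

# Synthetic stand-in for an aligned face crop.
gray = np.random.default_rng(7).integers(0, 256, size=(128, 128)).astype(np.uint8)
descriptor = gabor_gradient_descriptor(gray)           # length 4 x 8 = 32
```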
Wang, Dayong; Otto, Charles; Jain, Anil K
2017-06-01
Given the prevalence of social media websites, one challenge facing computer vision researchers is to devise methods to search for persons of interest among the billions of shared photos on these websites. Despite significant progress in face recognition, searching a large collection of unconstrained face images remains a difficult problem. To address this challenge, we propose a face search system which combines a fast search procedure, coupled with a state-of-the-art commercial off-the-shelf (COTS) matcher, in a cascaded framework. Given a probe face, we first filter the large gallery of photos to find the top-k most similar faces using features learned by a convolutional neural network. The k retrieved candidates are re-ranked by combining similarities based on deep features and those output by the COTS matcher. We evaluate the proposed face search system on a gallery containing 80 million web-downloaded face images. Experimental results demonstrate that while the deep features perform worse than the COTS matcher on a mugshot dataset (93.7 percent versus 98.6 percent TAR@FAR of 0.01 percent), fusing the deep features with the COTS matcher improves the overall performance (99.5 percent TAR@FAR of 0.01 percent). This shows that the learned deep features provide complementary information over representations used in state-of-the-art face matchers. On the unconstrained face image benchmarks, the performance of the learned deep features is competitive with reported accuracies. LFW database: 98.20 percent accuracy under the standard protocol and 88.03 percent TAR@FAR of 0.1 percent under the BLUFR protocol; IJB-A benchmark: 51.0 percent TAR@FAR of 0.1 percent (verification), rank 1 retrieval of 82.2 percent (closed-set search), 61.5 percent FNIR@FAR of 1 percent (open-set search). The proposed face search system offers an excellent trade-off between accuracy and scalability on galleries with millions of images. Additionally, in a face search experiment involving photos of the Tsarnaev brothers, convicted of the Boston Marathon bombing, the proposed cascade face search system could find the younger brother's (Dzhokhar Tsarnaev) photo at rank 1 in 1 second on a 5M gallery and at rank 8 in 7 seconds on an 80M gallery.
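The cascade can be summarized schematically: deep features shortlist candidates, and the shortlist is re-ranked by score fusion. In the sketch below, `cots_score` is a hypothetical placeholder for the commercial matcher's scoring call, and `alpha` is an assumed fusion weight.

```python
# A schematic sketch of the cascaded search idea: deep features shortlist the top-k
# gallery candidates, and the shortlist is re-ranked by fusing deep-feature similarity
# with scores from a commercial matcher. `cots_score` is a placeholder for the COTS SDK.
import numpy as np

def cascade_search(probe_feat, gallery_feats, probe_img, gallery_imgs,
                   cots_score, k=100, alpha=0.5):
    # Stage 1: cosine similarity on L2-normalized deep features.
    p = probe_feat / np.linalg.norm(probe_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    deep_sim = g @ p
    shortlist = np.argsort(-deep_sim)[:k]

    # Stage 2: fuse deep similarity with the COTS matcher score on the shortlist only.
    fused = []
    for idx in shortlist:
        s = alpha * deep_sim[idx] + (1 - alpha) * cots_score(probe_img, gallery_imgs[idx])
        fused.append((s, idx))
    fused.sort(reverse=True)
    return [idx for _, idx in fused]
```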
Jeddi, Fatemeh Rangraz; Farzandipoor, Mehrdad; Arabfard, Masoud; Hosseini, Azam Haj Mohammad
2014-04-01
The purpose of this study was to investigate the situation and present a conceptual model for a clinical governance information system, using UML, in two sample hospitals. Although the use of information is one of the fundamental components of clinical governance, information management unfortunately receives little attention. A cross-sectional study was conducted from October 2012 to May 2013. Data were gathered through questionnaires and interviews in two sample hospitals. The face and content validity of the questionnaire were confirmed by experts. Data were collected from a pilot hospital, revisions were made, and the final questionnaire was prepared. Data were analyzed with descriptive statistics using SPSS 16 software. Based on the scenario derived from the questionnaires, UML diagrams were drawn using Rational Rose 7 software. The results showed that 32.14 percent of the hospitals' indicators were calculated. No database had been designed, and 100 percent of the hospitals' clinical governance units required one. The clinical governance units of the hospitals do not have access to all the indicators needed to perform their mission. Defining processes, drawing models, and creating a database are essential for designing information systems.
Lewis, Amelia K; Porter, Melanie A; Williams, Tracey A; Bzishvili, Samantha; North, Kathryn N; Payne, Jonathan M
2017-05-01
This study aimed to investigate face scan paths and face perception abilities in children with Neurofibromatosis Type 1 (NF1) and how these might relate to emotion recognition abilities in this population. The authors investigated facial emotion recognition, face scan paths, and face perception in 29 children with NF1 compared to 29 chronological age-matched typically developing controls. Correlations between facial emotion recognition, face scan paths, and face perception in children with NF1 were examined. Children with NF1 displayed significantly poorer recognition of fearful expressions compared to controls, as well as a nonsignificant trend toward poorer recognition of anger. Although there was no significant difference between groups in time spent viewing individual core facial features (eyes, nose, mouth, and nonfeature regions), children with NF1 spent significantly less time than controls viewing the face as a whole. Children with NF1 also displayed significantly poorer face perception abilities than typically developing controls. Facial emotion recognition deficits were not significantly associated with aberrant face scan paths or face perception abilities in the NF1 group. These results suggest that impairments in the perception, identification, and interpretation of information from faces are important aspects of the social-cognitive phenotype of NF1. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Face recognition algorithm using extended vector quantization histogram features.
Yan, Yan; Lee, Feifei; Wu, Xueqian; Chen, Qiu
2018-01-01
In this paper, we propose a face recognition algorithm based on a combination of vector quantization (VQ) and Markov stationary features (MSF). The VQ algorithm has been shown to be an effective method for generating features; it extracts a codevector histogram as a facial feature representation for face recognition. Still, the VQ histogram features are unable to convey spatial structural information, which to some extent limits their usefulness in discrimination. To alleviate this limitation of VQ histograms, we utilize Markov stationary features (MSF) to extend the VQ histogram-based features so as to add spatial structural information. We demonstrate the effectiveness of our proposed algorithm by achieving recognition results superior to those of several state-of-the-art methods on publicly available face databases.
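The VQ-histogram part of such a pipeline can be illustrated with a k-means codebook over small image blocks. This is a simplified sketch under assumed block and codebook sizes; the Markov stationary feature extension is not shown.

```python
# A simplified sketch of the VQ-histogram part only (the Markov stationary feature
# extension is not shown): image blocks are quantized against a k-means codebook and
# the normalized histogram of codevector indices serves as the face representation.
import numpy as np
from sklearn.cluster import KMeans

def extract_blocks(gray, size=4, step=2):
    h, w = gray.shape
    blocks = [gray[y:y + size, x:x + size].ravel()
              for y in range(0, h - size + 1, step)
              for x in range(0, w - size + 1, step)]
    return np.array(blocks, dtype=np.float32)

def train_codebook(training_images, n_codevectors=64):
    data = np.vstack([extract_blocks(img) for img in training_images])
    return KMeans(n_clusters=n_codevectors, n_init=10, random_state=0).fit(data)

def vq_histogram(gray, codebook):
    labels = codebook.predict(extract_blocks(gray))
    hist = np.bincount(labels, minlength=codebook.n_clusters).astype(np.float64)
    return hist / hist.sum()
```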
Face verification system for Android mobile devices using histogram based features
NASA Astrophysics Data System (ADS)
Sato, Sho; Kobayashi, Kazuhiro; Chen, Qiu
2016-07-01
This paper proposes a face verification system that runs on Android mobile devices. In this system, a facial image is first captured by the device's built-in camera, and face detection is then performed using Haar-like features and the AdaBoost learning algorithm. The proposed system verifies the detected face using histogram-based features, which are generated from a binary Vector Quantization (VQ) histogram of DCT coefficients in the low-frequency domain, as well as an Improved Local Binary Pattern (Improved LBP) histogram in the spatial domain. Verification results with the different types of histogram-based features are first obtained separately and then combined by weighted averaging. We evaluate the proposed algorithm using the publicly available ORL database and facial images captured by an Android tablet.
Improve Performance of Data Warehouse by Query Cache
NASA Astrophysics Data System (ADS)
Gour, Vishal; Sarangdevot, S. S.; Sharma, Anand; Choudhary, Vinod
2010-11-01
The primary goal of a data warehouse is to free the information locked up in the operational database so that decision makers and business analysts can run queries, analyses, and planning regardless of data changes in the operational database. Because the number of queries is large, there is in many cases a reasonable probability that the same query is submitted by one or more users at different times. Each time a query is executed, all the warehouse data are analyzed to generate its result. In this paper we study how a query cache improves the performance of a data warehouse and examine the common problems faced by data warehouse administrators, namely minimizing response time and improving overall query efficiency, particularly when the data warehouse is updated at regular intervals.
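A minimal sketch of the caching idea follows; `run_query` is a hypothetical stand-in for the warehouse's execution engine, and the whole cache is simply cleared after each periodic load.

```python
# An illustrative sketch of a result cache keyed on the normalized query text; the whole
# cache is invalidated whenever the warehouse is refreshed. `run_query` stands in for the
# actual warehouse execution engine.
class QueryCache:
    def __init__(self, run_query):
        self.run_query = run_query
        self._cache = {}

    @staticmethod
    def _normalize(sql):
        # Collapse whitespace and case so equivalent query texts share one cache entry.
        return " ".join(sql.lower().split())

    def execute(self, sql):
        key = self._normalize(sql)
        if key not in self._cache:
            self._cache[key] = self.run_query(sql)  # cache miss: hit the warehouse
        return self._cache[key]

    def invalidate(self):
        # Call this after each periodic warehouse load so stale results are never served.
        self._cache.clear()
```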
Using Organizational Behavior To Increase the Efficiency of The Total Force Enterprise
2013-06-01
database and through personal interviews, the reader will be shown the common struggles faced by units undergoing change without following the...active, ARC, or hybrid may all play a role. Through this database and personal interviews, we can determine how the unit change was dealt with, if...organizational structures, as well as strategy. If we overlay the speed of technological change and the power of social media, military organizations failing
Technology survey on video face tracking
NASA Astrophysics Data System (ADS)
Zhang, Tong; Gomes, Herman Martins
2014-03-01
With the pervasiveness of monitoring cameras installed in public areas, schools, hospitals, workplaces and homes, video analytics technologies for interpreting these video contents are becoming increasingly relevant to people's lives. Among such technologies, human face detection and tracking (and face identification in many cases) are particularly useful in various application scenarios. While plenty of research has been conducted on face tracking and many promising approaches have been proposed, there are still significant challenges in recognizing and tracking people in videos with uncontrolled capturing conditions, largely due to pose and illumination variations, as well as occlusions and cluttered background. It is especially complex to track and identify multiple people simultaneously in real time due to the large amount of computation involved. In this paper, we present a survey of the literature and software published or developed in recent years on the face tracking topic. The survey covers the following topics: 1) mainstream and state-of-the-art face tracking methods, including features used to model the targets and metrics used for tracking; 2) face identification and face clustering from face sequences; and 3) software packages or demonstrations that are available for algorithm development or trial. A number of publicly available databases for face tracking are also introduced.
Robust kernel representation with statistical local features for face recognition.
Yang, Meng; Zhang, Lei; Shiu, Simon Chi-Keung; Zhang, David
2013-06-01
Factors such as misalignment, pose variation, and occlusion make robust face recognition a difficult problem. It is known that statistical features such as local binary pattern are effective for local feature extraction, whereas the recently proposed sparse or collaborative representation-based classification has shown interesting results in robust face recognition. In this paper, we propose a novel robust kernel representation model with statistical local features (SLF) for robust face recognition. Initially, multipartition max pooling is used to enhance the invariance of SLF to image registration error. Then, a kernel-based representation model is proposed to fully exploit the discrimination information embedded in the SLF, and robust regression is adopted to effectively handle the occlusion in face images. Extensive experiments are conducted on benchmark face databases, including extended Yale B, AR (A. Martinez and R. Benavente), multiple pose, illumination, and expression (multi-PIE), facial recognition technology (FERET), face recognition grand challenge (FRGC), and labeled faces in the wild (LFW), which have different variations of lighting, expression, pose, and occlusions, demonstrating the promising performance of the proposed method.
Neural architecture underlying classification of face perception paradigms.
Laird, Angela R; Riedel, Michael C; Sutherland, Matthew T; Eickhoff, Simon B; Ray, Kimberly L; Uecker, Angela M; Fox, P Mickle; Turner, Jessica A; Fox, Peter T
2015-10-01
We present a novel strategy for deriving a classification system of functional neuroimaging paradigms that relies on hierarchical clustering of experiments archived in the BrainMap database. The goal of our proof-of-concept application was to examine the underlying neural architecture of the face perception literature from a meta-analytic perspective, as these studies include a wide range of tasks. Task-based results exhibiting similar activation patterns were grouped as similar, while tasks activating different brain networks were classified as functionally distinct. We identified four sub-classes of face tasks: (1) Visuospatial Attention and Visuomotor Coordination to Faces, (2) Perception and Recognition of Faces, (3) Social Processing and Episodic Recall of Faces, and (4) Face Naming and Lexical Retrieval. Interpretation of these sub-classes supports an extension of a well-known model of face perception to include a core system for visual analysis and extended systems for personal information, emotion, and salience processing. Overall, these results demonstrate that a large-scale data mining approach can inform the evolution of theoretical cognitive models by probing the range of behavioral manipulations across experimental tasks. Copyright © 2015 Elsevier Inc. All rights reserved.
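The grouping step can be illustrated with standard hierarchical clustering; the sketch below uses random stand-in data rather than BrainMap content, and Ward linkage with a four-cluster cut is an assumption chosen purely for illustration.

```python
# A toy sketch of the meta-analytic grouping idea: experiments are described by binary
# activation vectors over brain regions and grouped by hierarchical (Ward) clustering.
# The activation matrix here is random stand-in data, not BrainMap content.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
activation = rng.integers(0, 2, size=(40, 25)).astype(float)  # 40 experiments x 25 regions

Z = linkage(activation, method="ward")
labels = fcluster(Z, t=4, criterion="maxclust")  # cut the dendrogram into four sub-classes
print(labels)
```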
Rejecting a bad option feels like choosing a good one.
Perfecto, Hannah; Galak, Jeff; Simmons, Joseph P; Nelson, Leif D
2017-11-01
Across 4,151 participants, the authors demonstrate a novel framing effect, attribute matching, whereby matching a salient attribute of a decision frame with that of a decision's options facilitates decision-making. This attribute matching is shown to increase decision confidence and, ultimately, consensus estimates by increasing feelings of metacognitive ease. In Study 1, participants choosing the more attractive of two faces or rejecting the less attractive face reported greater confidence in and perceived consensus around their decision. Using positive and negative words, Study 2 showed that the attribute's extremity moderates the size of the effect. Study 3 found decision ease mediates these changes in confidence and consensus estimates. Consistent with a misattribution account, when participants were warned about this external source of ease in Study 4, the effect disappeared. Study 5 extended attribute matching beyond valence to objective judgments. The authors conclude by discussing related psychological constructs as well as downstream consequences. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Facial expression recognition based on weber local descriptor and sparse representation
NASA Astrophysics Data System (ADS)
Ouyang, Yan
2018-03-01
Automatic facial expression recognition has been one of the research hotspots in computer vision for nearly ten years. During this decade, many state-of-the-art methods have been proposed that achieve very high accuracy on face images without any interference. Many researchers are now tackling the task of classifying facial expression images with corruptions and occlusions, and the Sparse Representation-based Classification (SRC) framework has been widely used because it is robust to such corruptions and occlusions. This paper therefore proposes a novel facial expression recognition method based on the Weber local descriptor (WLD) and sparse representation. The method includes three parts: first, the face images are divided into many local patches; then, the WLD histogram of each patch is extracted; finally, all the WLD histogram features are concatenated into a vector and combined with SRC to classify the facial expressions. Experimental results on the Cohn-Kanade database show that the proposed method is robust to occlusions and corruptions.
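The differential-excitation component of WLD is straightforward to compute. The sketch below shows only that part, under an assumed value of the scaling parameter, and omits the orientation component and the patch division.

```python
# A sketch of the differential-excitation part of the Weber Local Descriptor: for every
# pixel, the relative intensity change against its 3x3 neighborhood is mapped through
# arctan and histogrammed. The orientation component and patch division are omitted.
import numpy as np
from scipy.ndimage import convolve

def wld_excitation_histogram(gray, n_bins=32, alpha=3.0):
    gray = gray.astype(np.float64) + 1e-6          # avoid division by zero
    kernel = np.array([[1.0, 1.0, 1.0],
                       [1.0, -8.0, 1.0],
                       [1.0, 1.0, 1.0]])           # sum of (neighbor - center)
    diff_sum = convolve(gray, kernel, mode="reflect")
    excitation = np.arctan(alpha * diff_sum / gray)  # in (-pi/2, pi/2)
    hist, _ = np.histogram(excitation, bins=n_bins, range=(-np.pi / 2, np.pi / 2))
    return hist / hist.sum()
```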
Age differences in poignancy: Cognitive reappraisal as a moderator.
Zhang, Xin; Ersner-Hershfield, Hal; Fung, Helene H
2010-06-01
Poignancy is defined as a mixed emotional experience that arises when one faces meaningful endings. According to socioemotional selectivity theory (Carstensen, 2006), when people are aware of the finitude of time, they tend to experience more poignancy. In Study 1, we found that Chinese younger, but not older, participants experienced more poignancy under time limitations. In Study 2, we found that an emotion regulation strategy, namely cognitive reappraisal, moderated the relationship between limited time and poignancy, such that the increases in poignancy under time limitations were found only among older Chinese participants with lower levels of cognitive reappraisal but not among those with higher levels of cognitive reappraisal. These findings contribute to the existing literature on poignancy by showing that not every older adult exhibits poignancy in the face of an ending: The poignancy phenomenon may occur among only older adults who are less likely to use an emotion regulation strategy, such as cognitive reappraisal, to reinterpret the anticipated ending. (PsycINFO Database Record (c) 2010 APA, all rights reserved).
Education as an intergenerational process of human learning, teaching, and development.
Cole, Michael
2010-11-01
In this article I argue that the future of psychological research on educational processes would benefit from an interdisciplinary approach that enables psychologists to locate their objects of study within the cultural, social, and historical contexts of their research. To make this argument, I begin by examining anthropological accounts of the characteristics of education in small, face-to-face, preindustrial societies. I then turn to a sample of contemporary psychoeducational research that seeks to implement major, qualitative changes in modern educational practices by transforming them to have the properties of education in those self-same face-to-face societies. Next I examine the challenges faced by these modern approaches and briefly describe a multi-institutional, multidisciplinary system of education that responds to these challenges while offering a model for educating psychology students in a multigenerational system of activities with potential widespread benefits. PsycINFO Database Record (c) 2010 APA, all rights reserved.
Face Liveness Detection Using Defocus
Kim, Sooyeon; Ban, Yuseok; Lee, Sangyoun
2015-01-01
In order to develop security systems for identity authentication, face recognition (FR) technology has been applied. One of the main problems of applying FR technology is that the systems are especially vulnerable to attacks with spoofing faces (e.g., 2D pictures). To defend from these attacks and to enhance the reliability of FR systems, many anti-spoofing approaches have been recently developed. In this paper, we propose a method for face liveness detection using the effect of defocus. From two images sequentially taken at different focuses, three features, focus, power histogram and gradient location and orientation histogram (GLOH), are extracted. Afterwards, we detect forged faces through the feature-level fusion approach. For reliable performance verification, we develop two databases with a handheld digital camera and a webcam. The proposed method achieves a 3.29% half total error rate (HTER) at a given depth of field (DoF) and can be extended to camera-equipped devices, like smartphones. PMID:25594594
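The focus cue can be illustrated with a simple sharpness measure computed on the two differently focused shots. This is a simplified sketch, with the variance of the Laplacian standing in for the paper's focus feature; the power-histogram and GLOH features and the feature-level fusion are omitted.

```python
# A simplified sketch of the focus cue only (the power-histogram and GLOH features are
# omitted): sharpness is measured as the variance of the Laplacian for two shots taken at
# different focus settings, and the change in sharpness is used as a liveness feature.
# A flat printed photo tends to show a smaller focus difference than a real, 3D face.
import cv2
import numpy as np

def sharpness(gray):
    return cv2.Laplacian(np.float32(gray), cv2.CV_32F).var()

def defocus_feature(img_focus_near, img_focus_far):
    return abs(sharpness(img_focus_near) - sharpness(img_focus_far))

def is_live(img_focus_near, img_focus_far, threshold):
    # `threshold` would be tuned on a development set such as the authors' databases.
    return defocus_feature(img_focus_near, img_focus_far) > threshold
```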
Krupnick, Janice L; Green, Bonnie L; Amdur, Richard; Alaoui, Adil; Belouali, Anas; Roberge, Erika; Cueva, David; Roberts, Miguel; Melnikoff, Elizabeth; Dutton, Mary Ann
2017-07-01
[Correction Notice: An Erratum for this article was reported in Vol 9(4) of Psychological Trauma: Theory, Research, Practice, and Policy (see record 2016-54154-001). In the article, the names of authors Adil Alaoui and Anas Belouali were misspelled as Adil Aloui and Anas Beloui respectively. All versions of this article have been corrected.] Objective: Veterans suffering from posttraumatic stress disorder (PTSD) may avoid or fail to follow through with a full course of face-to-face mental health treatment for a variety of reasons. We conducted a pilot effectiveness trial of an online intervention for veterans with current PTSD to determine the feasibility, safety, and preliminary effectiveness of an online writing intervention (i.e., Warriors Internet Recovery & EDucation [WIRED]) as an adjunct to face-to-face psychotherapy. Method: Veterans ( N = 34) who had served in Iraq or Afghanistan with current PTSD subsequent to deployment-related trauma were randomized to Veterans Affairs (VA) mental health treatment as usual (TAU) or to treatment as usual plus the online intervention (TAU + WIRED). All research participants were recruited from the Trauma Services Program, VA Medical Center, Washington, DC. They completed baseline assessments as well as assessments 12 weeks and 24 weeks after the baseline assessment. The online intervention consisted of therapist-guided writing, using principles of prolonged exposure and cognitive therapy. The intervention was adapted from an evidence-based treatment used in The Netherlands and Germany for individuals who had been exposed to nonmilitary traumas. Results: In addition to showing that the online intervention was both feasible to develop and implement, as well as being safe, the results showed preliminary evidence of the effectiveness of the TAU + WIRED intervention in this patient population, with particular evidence in reducing PTSD symptoms of hyperarousal. Conclusion: With minor modifications to enhance the therapeutic alliance, this intervention should be tested in a larger clinical trial to determine whether this method of online intervention might provide another alternative to face-to-face treatment for veterans with PTSD. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Face Aging Effect Simulation Using Hidden Factor Analysis Joint Sparse Representation.
Yang, Hongyu; Huang, Di; Wang, Yunhong; Wang, Heng; Tang, Yuanyan
2016-06-01
Face aging simulation has received rising investigations nowadays, whereas it still remains a challenge to generate convincing and natural age-progressed face images. In this paper, we present a novel approach to such an issue using hidden factor analysis joint sparse representation. In contrast to the majority of tasks in the literature that integrally handle the facial texture, the proposed aging approach separately models the person-specific facial properties that tend to be stable in a relatively long period and the age-specific clues that gradually change over time. It then transforms the age component to a target age group via sparse reconstruction, yielding aging effects, which is finally combined with the identity component to achieve the aged face. Experiments are carried out on three face aging databases, and the results achieved clearly demonstrate the effectiveness and robustness of the proposed method in rendering a face with aging effects. In addition, a series of evaluations prove its validity with respect to identity preservation and aging effect generation.
Face recognition ability matures late: evidence from individual differences in young adults.
Susilo, Tirta; Germine, Laura; Duchaine, Bradley
2013-10-01
Does face recognition ability mature early in childhood (early maturation hypothesis) or does it continue to develop well into adulthood (late maturation hypothesis)? This fundamental issue in face recognition is typically addressed by comparing child and adult participants. However, the interpretation of such studies is complicated by children's inferior test-taking abilities and general cognitive functions. Here we examined the developmental trajectory of face recognition ability in an individual differences study of 18-33 year-olds (n = 2,032), an age interval in which participants are competent test takers with comparable general cognitive functions. We found a positive association between age and face recognition, controlling for nonface visual recognition, verbal memory, sex, and own-race bias. Our study supports the late maturation hypothesis in face recognition, and illustrates how individual differences investigations of young adults can address theoretical issues concerning the development of perceptual and cognitive abilities. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Recognizing Disguised Faces: Human and Machine Evaluation
Dhamecha, Tejas Indulal; Singh, Richa; Vatsa, Mayank; Kumar, Ajay
2014-01-01
Face verification, though an easy task for humans, is a long-standing open research area. This is largely due to the challenging covariates, such as disguise and aging, which make it very hard to accurately verify the identity of a person. This paper investigates human and machine performance for recognizing/verifying disguised faces. Performance is also evaluated under familiarity and match/mismatch with the ethnicity of observers. The findings of this study are used to develop an automated algorithm to verify the faces presented under disguise variations. We use automatically localized feature descriptors which can identify disguised face patches and account for this information to achieve improved matching accuracy. The performance of the proposed algorithm is evaluated on the IIIT-Delhi Disguise database that contains images pertaining to 75 subjects with different kinds of disguise variations. The experiments suggest that the proposed algorithm can outperform a popular commercial system, and its performance is also compared against that of humans in matching disguised face images. PMID:25029188
The review and results of different methods for facial recognition
NASA Astrophysics Data System (ADS)
Le, Yifan
2017-09-01
In recent years, facial recognition has drawn much attention due to its wide range of potential applications. As a unique technology in biometric identification, facial recognition represents a significant improvement because it can operate without the cooperation of the people under detection. Hence, facial recognition can be applied to defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method that achieves more accurate localization on specific databases; (2) a statistical face frontalization method that outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm that handles images with severe occlusion and large head poses; and (4) three methods for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method that regresses local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and their performance on various databases. In addition, some improvement measures and suggestions for potential applications are put forward.
Face verification with balanced thresholds.
Yan, Shuicheng; Xu, Dong; Tang, Xiaoou
2007-01-01
The process of face verification is guided by a pre-learned global threshold, which, however, is often inconsistent with class-specific optimal thresholds. It is, hence, beneficial to pursue a balance of the class-specific thresholds in the model-learning stage. In this paper, we present a new dimensionality reduction algorithm tailored to the verification task that ensures threshold balance. This is achieved by the following aspects. First, feasibility is guaranteed by employing an affine transformation matrix, instead of the conventional projection matrix, for dimensionality reduction, and, hence, we call the proposed algorithm threshold balanced transformation (TBT). Then, the affine transformation matrix, constrained as the product of an orthogonal matrix and a diagonal matrix, is optimized to improve the threshold balance and classification capability in an iterative manner. Unlike most algorithms for face verification which are directly transplanted from face identification literature, TBT is specifically designed for face verification and clarifies the intrinsic distinction between these two tasks. Experiments on three benchmark face databases demonstrate that TBT significantly outperforms the state-of-the-art subspace techniques for face verification.
Facial contrast is a cue for perceiving health from the face.
Russell, Richard; Porcheron, Aurélie; Sweda, Jennifer R; Jones, Alex L; Mauger, Emmanuelle; Morizot, Frederique
2016-09-01
How healthy someone appears has important social consequences. Yet the visual cues that determine perceived health remain poorly understood. Here we report evidence that facial contrast, that is, the luminance and color contrast between internal facial features and the surrounding skin, is a cue for the perception of health from the face. Facial contrast was measured from a large sample of Caucasian female faces, and was found to predict ratings of perceived health. Most aspects of facial contrast were positively related to perceived health, meaning that faces with higher facial contrast appeared healthier. In 2 subsequent experiments, we manipulated facial contrast and found that participants perceived faces with increased facial contrast as appearing healthier than faces with decreased facial contrast. These results support the idea that facial contrast is a cue for perceived health. This finding adds to the growing knowledge about perceived health from the face, and helps to ground our understanding of perceived health in terms of lower-level perceptual features such as contrast. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
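One way such a measure can be computed is sketched below, assuming binary masks for the internal features and the surrounding skin are already available. The Michelson-style ratio on the CIELAB lightness channel is an illustrative choice; the published measure also covers the color channels.

```python
# A sketch of one possible facial-contrast measurement, assuming binary masks for the
# internal features (eyes, brows, lips) and the surrounding skin are already available.
# Contrast is computed here as a Michelson-style ratio on the CIELAB lightness channel.
import cv2
import numpy as np

def luminance_contrast(bgr_image, feature_mask, skin_mask):
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    lightness = lab[:, :, 0].astype(np.float64)
    l_feature = lightness[feature_mask > 0].mean()
    l_skin = lightness[skin_mask > 0].mean()
    # Higher values mean darker internal features against lighter surrounding skin.
    return (l_skin - l_feature) / (l_skin + l_feature + 1e-8)
```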
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldberg, Louise F.; Harmon, Anna C.
2015-04-01
Thermal and moisture problems in existing basements create a unique challenge because the exterior face of the wall is not easily or inexpensively accessible. This approach addresses thermal and moisture management between the interior and exterior environments from the interior face of the wall, without disturbing the exterior soil and landscaping. It has the potential to improve durability, comfort, and indoor air quality. This project was funded jointly by the National Renewable Energy Laboratory (NREL) and Oak Ridge National Laboratory (ORNL). ORNL focused on developing a full basement wall system experimental database to enable others to validate hygrothermal simulation codes. NREL focused on testing the moisture durability of practical basement wall interior insulation retrofit solutions for cold climates. The project has produced a physically credible and reliable long-term hygrothermal performance database for retrofit foundation wall insulation systems in zone 6 and 7 climates that are fully compliant with the performance criteria in the 2009 Minnesota Energy Code. The experimental data were configured into a standard format that can be published online and that is compatible with standard commercially available spreadsheet and database software.
Russo, Frank A.
2018-01-01
The RAVDESS is a validated multimodal database of emotional speech and song. The database is gender balanced consisting of 24 professional actors, vocalizing lexically-matched statements in a neutral North American accent. Speech includes calm, happy, sad, angry, fearful, surprise, and disgust expressions, and song contains calm, happy, sad, angry, and fearful emotions. Each expression is produced at two levels of emotional intensity, with an additional neutral expression. All conditions are available in face-and-voice, face-only, and voice-only formats. The set of 7356 recordings were each rated 10 times on emotional validity, intensity, and genuineness. Ratings were provided by 247 individuals who were characteristic of untrained research participants from North America. A further set of 72 participants provided test-retest data. High levels of emotional validity and test-retest intrarater reliability were reported. Corrected accuracy and composite "goodness" measures are presented to assist researchers in the selection of stimuli. All recordings are made freely available under a Creative Commons license and can be downloaded at https://doi.org/10.5281/zenodo.1188976. PMID:29768426
Establishment of an Italian chronic migraine database: a multicenter pilot study.
Barbanti, Piero; Fofi, L; Cevoli, S; Torelli, P; Aurilia, C; Egeo, G; Grazzi, L; D'Amico, D; Manzoni, G C; Cortelli, P; Infarinato, F; Vanacore, N
2018-05-01
To optimize chronic migraine (CM) ascertainment and phenotype definition, provide adequate clinical management and health care procedures, and rationalize the allocation of economic resources, we performed an exploratory multicenter pilot study aimed at establishing a CM database, the first step toward a future Italian CM registry. We enrolled 63 consecutive CM patients in four tertiary headache centers, screened with face-to-face interviews using an ad hoc semi-structured questionnaire gathering detailed information on lifestyle, behavioral and socio-demographic factors, comorbidities, migraine features before and after chronicization, and health care resource use. Our pilot study provided useful insights, revealing that CM patients (1) presented in most cases with symptoms of peripheral trigeminal sensitization, a relatively unexpected feature that could help unravel different CM endophenotypes and predict responsiveness to trigeminal-targeted treatments; (2) had frequently been admitted to emergency departments; (3) had undergone, sometimes repeatedly, unnecessary or inappropriate investigations; and (4) had only rarely obtained illness benefit exemptions or disability allowances. We expect that the expansion of the database, which will shortly include many other Italian headache centers, will contribute to outlining CM endophenotypes more precisely, thereby improving management, treatment, and economic resource allocation and ultimately reducing the burden of CM on both patients and the health system.
On the Privacy Protection of Biometric Traits: Palmprint, Face, and Signature
NASA Astrophysics Data System (ADS)
Panigrahy, Saroj Kumar; Jena, Debasish; Korra, Sathya Babu; Jena, Sanjay Kumar
Biometrics are expected to add a new level of security to applications, as a person attempting access must prove who he or she really is by presenting a biometric to the system. Recent developments in the biometrics area have led to smaller, faster and cheaper systems, which in turn has increased the number of possible application areas for biometric identity verification. Biometric data, being derived from human bodies (and especially when used to identify or verify those bodies), is considered personally identifiable information (PII). The collection, use, and disclosure of biometric data, whether image or template, invoke rights on the part of an individual and obligations on the part of an organization. As biometric uses and databases grow, so do concerns that the personal data collected will not be used in reasonable and accountable ways. Privacy concerns arise when biometric data are used for secondary purposes, invoking function creep, data matching, aggregation, surveillance and profiling. Biometric data transmitted across networks and stored in various databases by others can also be stolen, copied, or otherwise misused in ways that can materially affect the individual involved. As biometric systems are vulnerable to replay, database, and brute-force attacks, such potential attacks must be analysed before these systems are massively deployed in security systems. Along with security, user privacy is an important factor, as the line structures of palmprints contain personal characteristics, a person can be recognised from face images, and fake signatures can be practised by carefully studying the signature images available in the database. We propose a cryptographic approach to encrypt the images of palmprints, faces, and signatures using an advanced Hill cipher technique to hide the information in the images. This also protects the images from the above-mentioned attacks. During feature extraction, the encrypted images are first decrypted; the features are then extracted and used for identification or verification.
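The basic Hill cipher idea, applied to pixel pairs modulo 256, can be sketched as follows. The key matrix shown is an arbitrary example with an odd (hence invertible) determinant; the paper's advanced variant would differ, so this is illustration only.

```python
# An illustrative Hill-cipher sketch over pixel pairs modulo 256 (the paper uses an
# advanced variant; this shows only the basic idea). The key matrix must have a
# determinant that is invertible mod 256, i.e., odd.
import numpy as np

KEY = np.array([[3, 3],
                [2, 5]])          # det = 9, odd, hence invertible mod 256

def _inverse_key_mod256(key):
    det = int(round(np.linalg.det(key))) % 256
    det_inv = pow(det, -1, 256)                      # modular inverse of the determinant
    adj = np.array([[key[1, 1], -key[0, 1]],
                    [-key[1, 0], key[0, 0]]])        # adjugate of a 2x2 matrix
    return (det_inv * adj) % 256

def hill_encrypt(gray, key=KEY):
    flat = gray.astype(np.int64).ravel()
    pairs = flat.reshape(-1, 2)                      # assumes an even number of pixels
    return ((pairs @ key.T) % 256).reshape(gray.shape).astype(np.uint8)

def hill_decrypt(cipher, key=KEY):
    return hill_encrypt(cipher, _inverse_key_mod256(key))
```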
Locality constrained joint dynamic sparse representation for local matching based face recognition.
Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun
2014-01-01
Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will decrease the performances of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards the local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC.
NASA Astrophysics Data System (ADS)
Huang, Jian; Yuen, Pong C.; Chen, Wen-Sheng; Lai, J. H.
2005-05-01
Many face recognition algorithms/systems have been developed in the last decade and excellent performances have also been reported when there is a sufficient number of representative training samples. In many real-life applications such as passport identification, only one well-controlled frontal sample image is available for training. Under this situation, the performance of existing algorithms will degrade dramatically or the algorithms may not even be applicable. We propose a component-based linear discriminant analysis (LDA) method to solve the one training sample problem. The basic idea of the proposed method is to construct local facial feature component bunches by moving each local feature region in four directions. In this way, we not only generate more samples with lower dimension than the original image, but also account for face detection localization error during training. After that, we propose a subspace LDA method, which is tailor-made for a small number of training samples, for the local feature projection to maximize the discrimination power. Theoretical analysis and experiment results show that our proposed subspace LDA is efficient and overcomes the limitations in existing LDA methods. Finally, we combine the contributions of each local component bunch with a weighted combination scheme to draw the recognition decision. The FERET database is used to evaluate the proposed method and the results are encouraging.
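The patch-augmentation idea can be sketched directly. In the sketch below, the region coordinates and the shift size are illustrative parameters, not values from the paper.

```python
# A sketch of the "component bunch" idea: each local feature region is extracted together
# with four copies shifted up, down, left, and right, so one training image yields several
# lower-dimensional local samples that also absorb small localization errors.
import numpy as np

def component_bunch(gray, top, left, height, width, shift=2):
    offsets = [(0, 0), (-shift, 0), (shift, 0), (0, -shift), (0, shift)]
    patches = []
    for dy, dx in offsets:
        y, x = top + dy, left + dx
        patch = gray[max(y, 0):y + height, max(x, 0):x + width]
        if patch.shape == (height, width):          # skip shifts that fall off the image
            patches.append(patch.astype(np.float64).ravel())
    return np.array(patches)
```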
Age synthesis and estimation via faces: a survey.
Fu, Yun; Guo, Guodong; Huang, Thomas S
2010-11-01
Human age, as an important personal trait, can be directly inferred by distinct patterns emerging from the facial appearance. Derived from rapid advances in computer graphics and machine vision, computer-based age synthesis and estimation via faces have become particularly prevalent topics recently because of their explosively emerging real-world applications, such as forensic art, electronic customer relationship management, security control and surveillance monitoring, biometrics, entertainment, and cosmetology. Age synthesis is defined as rerendering a face image aesthetically with natural aging and rejuvenating effects on the individual face. Age estimation is defined as automatically labeling a face image with the exact age (year) or the age group (year range) of the individual face. Because of their particularity and complexity, both problems are attractive yet challenging to computer-based application system designers. Large efforts from both academia and industry have been devoted in the last few decades. In this paper, we survey the complete state-of-the-art techniques in the face image-based age synthesis and estimation topics. Existing models, popular algorithms, system performances, technical difficulties, popular face aging databases, evaluation protocols, and promising future directions are also provided with systematic discussions.
Development of preference for conspecific faces in human infants.
Sanefuji, Wakako; Wada, Kazuko; Yamamoto, Tomoka; Mohri, Ikuko; Taniike, Masako
2014-04-01
Previous studies have proposed that humans may be born with mechanisms that attend to conspecifics. However, as previous studies have relied on stimuli featuring human adults, it remains unclear whether infants attend only to adult humans or to the entire human species. We found that 1-month-old infants (n = 23) were able to differentiate between human and monkey infants' faces; however, they exhibited no preference for human infants' faces over monkey infants' faces (n = 24) and discriminated individual differences only within the category of human infants' faces (n = 30). We successfully replicated previous findings that 1-month-old infants (n = 42) preferred adult humans, even adults of other races, to adult monkeys. Further, by 3 months of age, infants (n = 55) preferred human faces to monkey faces with both infant and adult stimuli. Human infants' spontaneous preference for conspecific faces appears to be initially limited to conspecific adults and afterward extended to conspecific infants. Future research should attempt to determine whether preference for human adults results from some innate tendency to attend to conspecific adults or from the impact of early experiences with adults. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Face sketch recognition based on edge enhancement via deep learning
NASA Astrophysics Data System (ADS)
Xie, Zhenzhu; Yang, Fumeng; Zhang, Yuming; Wu, Congzhong
2017-11-01
In this paper, we address the face sketch recognition problem. First, we utilize the eigenface algorithm to convert a sketch image into a synthesized sketch face image. Subsequently, considering the low-level vision problem in the synthesized face sketch image, a super-resolution reconstruction algorithm based on a CNN (convolutional neural network) is employed to improve the visual quality. Specifically, we use a lightweight super-resolution structure to learn a residual mapping instead of directly mapping the feature maps from the low-level space to high-level patch representations, which makes the networks easier to optimize and lowers their computational complexity. Finally, we adopt the LDA (Linear Discriminant Analysis) algorithm to perform face sketch recognition on the synthesized face images before and after super resolution, respectively. Extensive experiments on the face sketch database (CUFS) from CUHK demonstrate that the recognition rate of the SVM (Support Vector Machine) algorithm improves from 65% to 69% and the recognition rate of the LDA algorithm improves from 69% to 75%. What is more, the synthesized face image after super resolution not only better describes image details such as the hair, nose, and mouth, but also effectively improves the recognition accuracy.
Jeddi, Fatemeh Rangraz; Farzandipoor, Mehrdad; Arabfard, Masoud; Hosseini, Azam Haj Mohammad
2016-01-01
Objective: The purpose of this study was to investigate the situation and present a conceptual model for a clinical governance information system, using UML, in two sample hospitals. Background: Although the use of information is one of the fundamental components of clinical governance, information management unfortunately receives little attention. Material and Methods: A cross-sectional study was conducted from October 2012 to May 2013. Data were gathered through questionnaires and interviews in two sample hospitals. The face and content validity of the questionnaire were confirmed by experts. Data were collected from a pilot hospital, revisions were made, and the final questionnaire was prepared. Data were analyzed with descriptive statistics using SPSS 16 software. Results: Based on the scenario derived from the questionnaires, UML diagrams were drawn using Rational Rose 7 software. The results showed that 32.14 percent of the hospitals' indicators were calculated. No database had been designed, and 100 percent of the hospitals' clinical governance units required one. Conclusion: The clinical governance units of the hospitals do not have access to all the indicators needed to perform their mission. Defining processes, drawing models, and creating a database are essential for designing information systems. PMID:27147804
Human emotion detector based on genetic algorithm using lip features
NASA Astrophysics Data System (ADS)
Brown, Terrence; Fetanat, Gholamreza; Homaifar, Abdollah; Tsou, Brian; Mendoza-Schrock, Olga
2010-04-01
We predicted human emotion using a Genetic Algorithm (GA)-based lip feature extractor applied to facial images to classify all seven universal emotions: fear, happiness, dislike, surprise, anger, sadness and neutrality. First, we isolated the mouth from the input images using methods such as Region of Interest (ROI) acquisition, grayscaling, histogram equalization, filtering, and edge detection. Next, the GA determined the optimal or near-optimal ellipse parameters that enclose the mouth and separate it into upper and lower lips. The two ellipses then underwent fitness calculation, followed by training using a database of Japanese women's faces expressing all seven emotions. Finally, our proposed algorithm was tested using a published database consisting of emotions from several persons. The final results were then presented in confusion matrices. Our results showed an accuracy varying from 20% to 60% for each of the seven emotions. The errors were mainly due to inaccuracies in the classification, and also due to the different expressions in the given emotion database. Detailed analysis of these errors pointed to the limitation of detecting emotion based on the lip features alone. Similar work [1] has been done in the literature for emotion detection in only one person; we have successfully extended our GA-based solution to include several subjects.
Eye center localization and gaze gesture recognition for human-computer interaction.
Zhang, Wenhao; Smith, Melvyn L; Smith, Lyndon N; Farooq, Abdul
2016-03-01
This paper introduces an unsupervised modular approach for accurate and real-time eye center localization in images and videos, thus allowing a coarse-to-fine, global-to-regional scheme. The trajectories of eye centers in consecutive frames, i.e., gaze gestures, are further analyzed, recognized, and employed to boost the human-computer interaction (HCI) experience. This modular approach makes use of isophote and gradient features to estimate the eye center locations. A selective oriented gradient filter has been specifically designed to remove strong gradients from eyebrows, eye corners, and shadows, which sabotage most eye center localization methods. A real-world implementation utilizing these algorithms has been designed in the form of an interactive advertising billboard to demonstrate the effectiveness of our method for HCI. The eye center localization algorithm has been compared with 10 other algorithms on the BioID database and six other algorithms on the GI4E database. It outperforms all the other algorithms in the comparison in terms of localization accuracy. Further tests on the Extended Yale Face Database B and self-collected data have proved this algorithm to be robust against moderate head poses and poor illumination conditions. The interactive advertising billboard has manifested outstanding usability and effectiveness in our tests and shows great potential for benefiting a wide range of real-world HCI applications.
Door Security using Face Detection and Raspberry Pi
NASA Astrophysics Data System (ADS)
Bhutra, Venkatesh; Kumar, Harshav; Jangid, Santosh; Solanki, L.
2018-03-01
With the world moving towards advanced technologies, security forms a crucial part of daily life. Among the many techniques used for this purpose, face recognition stands as an effective means of authentication and security. This paper deals with the use of principal component analysis (PCA) for security. PCA is a statistical approach used to simplify a data set. The minimum Euclidean distance found with the PCA technique is used to recognize the face. A Raspberry Pi, a low-cost ARM-based computer on a small circuit board, controls the servo motor and other sensors. The servo motor is in turn attached to the door of the home and opens it when the face is recognized. The proposed work has been done using a self-made training database of students from B.K. Birla Institute of Engineering and Technology, Pilani, Rajasthan, India.
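The recognition step described above (PCA projection plus a minimum-Euclidean-distance decision) can be sketched as follows. The component count and the acceptance threshold are assumed tuning parameters, and the servo/GPIO call is left as a placeholder since the wiring is specific to the authors' setup.

```python
# A sketch of the recognition step under the stated approach: training faces are projected
# with PCA, and a probe is accepted when its minimum Euclidean distance to the training
# projections falls below a threshold. The servo/GPIO call is left as a placeholder.
import numpy as np
from sklearn.decomposition import PCA

def train(face_vectors, labels, n_components=50):
    # face_vectors: (n_samples, n_pixels) flattened grayscale faces
    pca = PCA(n_components=n_components).fit(face_vectors)
    return pca, pca.transform(face_vectors), np.asarray(labels)

def recognize(probe_vector, pca, projected_train, labels, threshold):
    probe = pca.transform(probe_vector.reshape(1, -1))[0]
    distances = np.linalg.norm(projected_train - probe, axis=1)
    best = int(np.argmin(distances))
    if distances[best] < threshold:
        # open_door()  # e.g. drive the servo through the Raspberry Pi GPIO pins
        return labels[best]
    return None
```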
Good match exploration for infrared face recognition
NASA Astrophysics Data System (ADS)
Yang, Changcai; Zhou, Huabing; Sun, Sheng; Liu, Renfeng; Zhao, Ji; Ma, Jiayi
2014-11-01
Establishing good feature correspondence is a critical prerequisite and a challenging task for infrared (IR) face recognition. Recent studies revealed that the scale invariant feature transform (SIFT) descriptor outperforms other local descriptors for feature matching. However, it only uses local appearance information for matching, and hence inevitably leads to a number of false matches. To address this issue, this paper explores global structure information (GSI) among SIFT correspondences, and proposes a new method SIFT-GSI for good match exploration. This is achieved by fitting a smooth mapping function for the underlying correct matches, which involves softassign and deterministic annealing. Quantitative comparisons with state-of-the-art methods on a publicly available IR human face database demonstrate that SIFT-GSI significantly outperforms other methods for feature matching, and hence it is able to improve the reliability of IR face recognition systems.
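The matching pipeline can be sketched with standard tools: SIFT correspondences filtered by Lowe's ratio test, followed by a global-consistency check. Note that the paper fits a smooth non-rigid mapping via softassign and deterministic annealing; the RANSAC homography below is only a much simpler stand-in for that global-structure step.

```python
# A sketch of the matching pipeline: SIFT correspondences with Lowe's ratio test, followed
# by a global-consistency filter. A RANSAC homography is used here as a simple stand-in
# for the paper's smooth non-rigid mapping fitted by softassign/deterministic annealing.
import cv2
import numpy as np

def match_with_global_filter(img1, img2, ratio=0.75):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    candidates = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                  if m.distance < ratio * n.distance]
    if len(candidates) < 4:
        return candidates

    src = np.float32([kp1[m.queryIdx].pt for m in candidates]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in candidates]).reshape(-1, 1, 2)
    _, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if inlier_mask is None:
        return candidates
    return [m for m, keep in zip(candidates, inlier_mask.ravel()) if keep]
```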
Rear-facing versus forward-facing child restraints: an updated assessment.
McMurry, Timothy L; Arbogast, Kristy B; Sherwood, Christopher P; Vaca, Federico; Bull, Marilyn; Crandall, Jeff R; Kent, Richard W
2018-02-01
The National Highway Traffic Safety Administration and the American Academy of Pediatrics recommend children be placed in rear-facing child restraint systems (RFCRS) until at least age 2. These recommendations are based on laboratory biomechanical tests and field data analyses. Due to concerns raised by an independent researcher, we re-evaluated the field evidence in favour of RFCRS using the National Automotive Sampling System Crashworthiness Data System (NASS-CDS) database. Children aged 0 or 1 year old (0-23 months) riding in either rear-facing or forward-facing child restraint systems (FFCRS) were selected from the NASS-CDS database, and injury rates were compared by seat orientation using survey-weighted χ2 tests. In order to compare with previous work, we analysed NASS-CDS years 1988-2003, and then updated the analyses to include all available data using NASS-CDS years 1988-2015. Years 1988-2015 of NASS-CDS contained 1107 children aged 0 or 1 year old meeting inclusion criteria, with 47 of these children sustaining injuries with Injury Severity Score of at least 9. Both 0-year-old and 1-year-old children in RFCRS had lower rates of injury than children in FFCRS, but the available sample size was too small for reasonable statistical power or to allow meaningful regression controlling for covariates. Non-US field data and laboratory tests support the recommendation that children be kept in RFCRS for as long as possible, but the US NASS-CDS field data are too limited to serve as a strong statistical basis for these recommendations. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
NASA Astrophysics Data System (ADS)
Watanabe, Eriko; Ishikawa, Mami; Ohta, Maiko; Kodate, Kashiko
2005-09-01
Face recognition is used in a wide range of security systems, such as monitoring credit card use, searching for individuals with street cameras via the Internet, and maintaining immigration control. There are still many technical subjects under study. For instance, the number of images that can be stored is limited under the current system, and the rate of recognition must be improved to account for photo shots taken at different angles under various conditions. We implemented a fully automatic Fast Face Recognition Optical Correlator (FARCO) system by using a 1000 frame/s optical parallel correlator designed and assembled by us. Operational speed for the 1:N (i.e., matching a pair of images among N, where N refers to the number of images in the database) identification experiment (4000 face images) amounts to less than 1.5 seconds, including the pre/post processing. From trial 1:N identification experiments using FARCO, we obtained low error rates: a 2.6% False Reject Rate and a 1.3% False Accept Rate. By making the most of the high-speed data-processing capability of this system, much more robustness can be achieved for various recognition conditions when large-category data are registered for a single person. We propose a face recognition algorithm for the FARCO that employs a temporal sequence of moving images. Applying this algorithm to natural postures, a recognition rate two times higher than that of our conventional system was achieved. The system has high potential for future use in a variety of purposes, such as searching for criminal suspects using street and airport video cameras, registering babies at hospitals, or handling an immense number of images in a database.
Data Dealers Face Stormy Weather.
ERIC Educational Resources Information Center
Tenopir, Carol; Barry, Jeff
1998-01-01
This report, the second annual Database Marketplace survey, analyzes information gathered from 29 companies that distribute and produce information available through online, Web, or CD-ROM systems. In addition to data, topics include company mergers, takeovers, sales, accomplishments, and future plans. (Author/LRW)
EUnetHTA information management system: development and lessons learned.
Chalon, Patrice X; Kraemer, Peter
2014-11-01
The aim of this study was to describe the techniques used in achieving consensus on common standards to be implemented in the EUnetHTA Information Management System (IMS); and to describe how interoperability between tools was explored. Three face-to-face meetings were organized to identify and agree on common standards for the development of online tools. Two tools were created to demonstrate the added value of implementing interoperability standards at local levels. Developers of tools outside EUnetHTA were identified and contacted. Four common standards have been agreed on by consensus; and consequently all EUnetHTA tools have been modified or designed accordingly. RDF Site Summary (RSS) has demonstrated a good potential to support rapid dissemination of HTA information. Contacts outside EUnetHTA resulted in direct collaboration (HTA glossary, HTAi Vortal), evaluation of options for interoperability between tools (CRD HTA database) or a formal framework to prepare cooperation on concrete projects (INAHTA projects database). Although entitled a project on IT infrastructure, the work program was also about people. When having to agree on complex topics, fostering a cohesive group dynamic and hosting face-to-face meetings brings added value and enhances understanding between partners. The adoption of widespread standards enhanced the homogeneity of the EUnetHTA tools and should thus contribute to their wider use and, therefore, to the general objective of EUnetHTA. The initiatives on interoperability of systems need to be developed further to support a general interoperable information system that could benefit the whole HTA community.
NASA Astrophysics Data System (ADS)
Hsu, Charles; Viazanko, Michael; O'Looney, Jimmy; Szu, Harold
2009-04-01
The Modularity Biometric System (MBS) is an approach to support AiTR of cooperative and/or non-cooperative standoff biometrics in area persistent surveillance. An advanced active and passive EOIR and RF sensor suite is not considered here, nor do we consider the ROC, PD vs. FAR, versus the standoff POT in this paper. Our goal is to catch the roughly two dozen "most wanted (MW)" individuals, further separating an ad hoc female MW class from a male MW class, given an archival sparse frontal-face database, by means of various new instantaneous inputs called probing faces. We present an advanced algorithm, the mini-Max classifier, a sparse-sample realization of the Cramer-Rao Fisher bound of the Maximum Likelihood classifier, which minimizes the dispersion within the same female classes and maximizes the separation among different male and female classes, based on the simple feature space of MIT Pentland eigenfaces. The original aspect consists of a modular, structured design approach at the system level with multi-level architectures, multiple computing paradigms, and adaptable/evolvable techniques that allow a scalable structure in terms of biometric algorithms, identification quality, sensors, database complexity, database integration, and component heterogeneity. MBS consists of a number of biometric technologies including fingerprints, vein maps, voice and face recognition with innovative DSP algorithms, and their hardware implementations, such as Field Programmable Gate Arrays (FPGAs). Biometric technologies and the composed modularity biometric system are significant for governmental agencies, enterprises, banks and all other organizations that need to protect people or control access to critical resources.
Facial Expression Recognition with Fusion Features Extracted from Salient Facial Areas.
Liu, Yanpeng; Li, Yibin; Ma, Xin; Song, Rui
2017-03-29
In the pattern recognition domain, deep architectures are currently widely used and have achieved fine results. However, these deep architectures make particular demands, especially in terms of their requirements for big datasets and GPUs. Aiming to gain better results without deep networks, we propose a simplified algorithm framework using fusion features extracted from the salient areas of faces. The proposed algorithm achieves better results than some deep architectures. To extract more effective features, this paper first defines the salient areas on the faces and normalizes the salient areas at the same location in different faces to the same size, so that more similar features can be extracted from different subjects. LBP and HOG features are extracted from the salient areas, the dimensionality of the fusion features is reduced by Principal Component Analysis (PCA), and several classifiers are applied to classify the six basic expressions at once. This paper proposes a method for defining salient areas that compares peak expression frames with neutral faces. This paper also proposes and applies the idea of normalizing the salient areas to align the specific areas that express the different expressions. As a result, the salient areas found in different subjects are the same size. In addition, the gamma correction method is, for the first time, applied to the LBP features in our algorithm framework, which improves our recognition rates significantly. With this algorithm framework, our research achieves state-of-the-art performance on the CK+ and JAFFE databases.
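A rough sketch of the kind of pipeline described above (salient-region LBP and HOG features, gamma correction, PCA, and a classifier). The region boxes, filter parameters and synthetic data are placeholders, not the paper's values.

# Illustrative LBP + HOG fusion pipeline with PCA and an SVM classifier.
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def region_features(gray, gamma=0.8):
    """Gamma-corrected LBP histogram plus HOG from one salient region."""
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp = np.power(lbp / lbp.max(), gamma)          # gamma correction on the LBP map
    hist, _ = np.histogram(lbp, bins=10, range=(0, 1), density=True)
    hog_vec = hog(gray, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return np.concatenate([hist, hog_vec])

def fused_features(face, regions):
    """Concatenate features over normalized salient regions (eyes, mouth, ...)."""
    return np.concatenate([region_features(face[y0:y1, x0:x1])
                           for (y0, y1, x0, x1) in regions])

rng = np.random.default_rng(0)
faces = rng.random((40, 64, 64))                    # toy 64x64 "faces"
labels = rng.integers(0, 6, size=40)                # six basic expressions
regions = [(8, 32, 8, 56), (40, 60, 16, 48)]        # placeholder eye/mouth boxes

X = np.array([fused_features(f, regions) for f in faces])
clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="linear"))
clf.fit(X, labels)
print(clf.score(X, labels))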
Walker, Mirella; Vetter, Thomas
2016-04-01
General, spontaneous evaluations of strangers based on their faces have been shown to reflect judgments of these persons' intention and ability to harm. These evaluations can be mapped onto a 2D space defined by the dimensions trustworthiness (intention) and dominance (ability). Here we go beyond general evaluations and focus on more specific personality judgments derived from the Big Two and Big Five personality concepts. In particular, we investigate whether Big Two/Big Five personality judgments can be mapped onto the 2D space defined by the dimensions trustworthiness and dominance. Results indicate that judgments of the Big Two personality dimensions almost perfectly map onto the 2D space. In contrast, at least 3 of the Big Five dimensions (i.e., neuroticism, extraversion, and conscientiousness) go beyond the 2D space, indicating that additional dimensions are necessary to describe more specific face-based personality judgments accurately. Building on this evidence, we model the Big Two/Big Five personality dimensions in real facial photographs. Results from 2 validation studies show that the Big Two/Big Five are perceived reliably across different samples of faces and participants. Moreover, results reveal that participants differentiate reliably between the different Big Two/Big Five dimensions. Importantly, this high level of agreement and differentiation in personality judgments from faces likely creates a subjective reality which may have serious consequences for those being perceived-notably, these consequences ensue because the subjective reality is socially shared, irrespective of the judgments' validity. The methodological approach introduced here might prove useful in various psychological disciplines. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Towards a method for determining age ranges from faces of juveniles on photographs.
Cummaudo, M; Guerzoni, M; Gibelli, D; Cigada, A; Obertovà, Z; Ratnayake, M; Poppa, P; Gabriel, P; Ritz-Timme, S; Cattaneo, C
2014-06-01
The steady increase in the distribution of juvenile pornographic material in recent years has created a strong need for valid methods of estimating the age of the victims. At present, forensic experts still commonly assess sexual characteristics by Tanner staging, although this has proven too subjective and misleading for age estimation. The objective of this study, inspired by a previous EU project involving Italy, Germany and Lithuania, is to verify the applicability of certain anthropometric indices of faces in order to determine age and to create a database of facial measurements on a population of children in order to improve face ageing techniques. In this study, 1924 standardized facial images in frontal view and 1921 in lateral view of individuals from 7 age groups (3-5 years, 6-8 years, 9-11 years, 12-14 years, 15-17 years, 18-20 years, 21-24 years) underwent metric analysis. Individuals were all of Caucasoid ancestry and Italian nationality. Eighteen anthropometric indices in the frontal view and five in the lateral view were then calculated from the obtained measurements. Indices showing a correlation with age were ch-ch/ex-ex, ch-ch/pu-pu, en-en/ch-ch and se-sto/ex-ex in the frontal view, and se-prn/se-sn, se-prn/se-sto and se-sn/se-sto in the lateral view. All the indices increased with age except for en-en/ch-ch, without relevant differences between males and females. These results provide an interesting starting point not only for placing a photographed face in an age range but also for refining the techniques of face ageing and personal identification. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
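For illustration, a small sketch of how a few of the named frontal-view indices could be computed from 2D landmark coordinates. The landmark positions are invented, and the point abbreviations (ch, ex, en, pu, se, sto) follow standard anthropometric usage but should be checked against the paper.

# Illustrative computation of facial indices such as ch-ch/ex-ex from 2D landmarks.
import numpy as np

def dist(a, b):
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

# hypothetical frontal-view landmarks (x, y) in pixels
lm = {
    "ch_l": (120, 300), "ch_r": (220, 300),   # mouth corners (cheilion)
    "ex_l": (100, 180), "ex_r": (240, 180),   # outer eye corners (exocanthion)
    "en_l": (150, 180), "en_r": (190, 180),   # inner eye corners (endocanthion)
    "pu_l": (125, 180), "pu_r": (215, 180),   # pupil centres
    "se":   (170, 190), "sto":  (170, 305),   # sellion, stomion
}

indices = {
    "ch-ch/ex-ex": dist(lm["ch_l"], lm["ch_r"]) / dist(lm["ex_l"], lm["ex_r"]),
    "ch-ch/pu-pu": dist(lm["ch_l"], lm["ch_r"]) / dist(lm["pu_l"], lm["pu_r"]),
    "en-en/ch-ch": dist(lm["en_l"], lm["en_r"]) / dist(lm["ch_l"], lm["ch_r"]),
    "se-sto/ex-ex": dist(lm["se"], lm["sto"]) / dist(lm["ex_l"], lm["ex_r"]),
}
for name, value in indices.items():
    print(f"{name}: {value:.3f}")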
A computer-generated animated face stimulus set for psychophysiological research
Naples, Adam; Nguyen-Phuc, Alyssa; Coffman, Marika; Kresse, Anna; Faja, Susan; Bernier, Raphael; McPartland, James
2014-01-01
Human faces are fundamentally dynamic, but experimental investigations of face perception traditionally rely on static images of faces. While naturalistic videos of actors have been used with success in some contexts, much research in neuroscience and psychophysics demands carefully controlled stimuli. In this paper, we describe a novel set of computer-generated dynamic face stimuli. These grayscale faces are tightly controlled for low- and high-level visual properties. All faces are standardized in terms of size, luminance, and location and size of facial features. Each face begins with a neutral pose and transitions to an expression over the course of 30 frames. Altogether there are 222 stimuli spanning 3 different categories of movement: (1) an affective movement (fearful face); (2) a neutral movement (close-lipped, puffed cheeks with open eyes); and (3) a biologically impossible movement (upward dislocation of eyes and mouth). To determine whether early brain responses sensitive to low-level visual features differed between expressions, we measured the occipital P100 event related potential (ERP), which is known to reflect differences in early stages of visual processing and the N170, which reflects structural encoding of faces. We found no differences between faces at the P100, indicating that different face categories were well matched on low-level image properties. This database provides researchers with a well-controlled set of dynamic faces controlled on low-level image characteristics that are applicable to a range of research questions in social perception. PMID:25028164
Motion facilitates face perception across changes in viewpoint and expression in older adults.
Maguinness, Corrina; Newell, Fiona N
2014-12-01
Faces are inherently dynamic stimuli. However, face perception in younger adults appears to be mediated by the ability to extract structural cues from static images and a benefit of motion is inconsistent. In contrast, static face processing is poorer and more image-dependent in older adults. We therefore compared the role of facial motion in younger and older adults to assess whether motion can enhance perception when static cues are insufficient. In our studies, older and younger adults learned faces presented in motion or in a sequence of static images, containing rigid (viewpoint) or nonrigid (expression) changes. Immediately following learning, participants matched a static test image to the learned face which varied by viewpoint (Experiment 1) or expression (Experiment 2) and was either learned or novel. First, we found an age effect with better face matching performance in younger than in older adults. However, we observed face matching performance improved in the older adult group, across changes in viewpoint and expression, when faces were learned in motion relative to static presentation. There was no benefit for facial (nonrigid) motion when the task involved matching inverted faces (Experiment 3), suggesting that the ability to use dynamic face information for the purpose of recognition reflects motion encoding which is specific to upright faces. Our results suggest that ageing may offer a unique insight into how dynamic cues support face processing, which may not be readily observed in younger adults' performance. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
Improving face image extraction by using deep learning technique
NASA Astrophysics Data System (ADS)
Xue, Zhiyun; Antani, Sameer; Long, L. R.; Demner-Fushman, Dina; Thoma, George R.
2016-03-01
The National Library of Medicine (NLM) has made a collection of over 1.2 million research articles containing 3.2 million figure images searchable using the Open-iSM multimodal (text+image) search engine. Many images are visible light photographs, some of which are images containing faces ("face images"). Some of these face images are acquired in unconstrained settings, while others are studio photos. To extract the face regions in the images, we first applied one of the most widely-used face detectors, a pre-trained Viola-Jones detector implemented in Matlab and OpenCV. The Viola-Jones detector was trained for unconstrained face image detection, but the results for the NLM database included many false positives, which resulted in a very low precision. To improve this performance, we applied a deep learning technique, which reduced the number of false positives and as a result, the detection precision was improved significantly. (For example, the classification accuracy for identifying whether the face regions output by this Viola-Jones detector are true positives or not in a test set is about 96%.) By combining these two techniques (Viola-Jones and deep learning) we were able to increase the system precision considerably, while avoiding the need to manually construct a large training set by manual delineation of the face regions.
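A sketch of the two-stage idea described above, assuming OpenCV's pretrained Viola-Jones cascade as the proposal stage. The second-stage verifier below is a stub standing in for a trained face/non-face classifier; NLM's actual deep model is not reproduced here, and the image path is illustrative.

# Viola-Jones proposals followed by a learned false-positive filter (stubbed).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def is_true_face(patch) -> bool:
    """Placeholder for a trained face/non-face classifier (e.g., a small CNN)."""
    return patch.size > 0  # accept everything in this stub

def detect_faces(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    candidates = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    kept = []
    for (x, y, w, h) in candidates:
        patch = image_bgr[y:y + h, x:x + w]
        if is_true_face(patch):          # second-stage filtering step
            kept.append((x, y, w, h))
    return kept

# example usage (path is illustrative):
# img = cv2.imread("article_figure.png")
# print(detect_faces(img))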
Video face recognition against a watch list
NASA Astrophysics Data System (ADS)
Abbas, Jehanzeb; Dagli, Charlie K.; Huang, Thomas S.
2007-10-01
Due to a large increase in video surveillance data recently, in an effort to maintain high security at public places, we need more robust systems to analyze this data and make tasks like face recognition a realistic possibility in challenging environments. In this paper we explore a watch-list scenario where we use an appearance-based model to classify query faces from low-resolution videos as either watch-list or non-watch-list faces. We then use our simple yet powerful face recognition system to recognize the faces classified as watch-list faces, where the watch-list includes those people that we are interested in recognizing. Our system uses simple feature machine algorithms from our previous work to match video faces against still images. To test our approach, we match video faces against a large database of still images obtained from previous work in the field from Yahoo News over a period of time. We do this matching in an efficient manner to come up with a faster and nearly real-time system. This system can be incorporated into a larger surveillance system equipped with advanced algorithms involving anomalous event detection and activity recognition. This is a step towards more secure and robust surveillance systems and efficient video data analysis.
Hargis, Mary B; Castel, Alan D
2017-06-01
The ability to associate items in memory is critical for social interactions. Older adults show deficits in remembering associative information but can sometimes remember high-value information. In two experiments, younger and older participants studied faces, names, and occupations that were of differing social value. There were no age differences in the recall of important information in Experiment 1, but age differences were present for less important information. In Experiment 2, when younger adults' encoding time was reduced, age differences were largely absent. These findings are considered in light of value-directed strategies when remembering social associative information. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
On Hunting Animals of the Biometric Menagerie for Online Signature.
Houmani, Nesma; Garcia-Salicetti, Sonia
2016-01-01
Individuals behave differently with respect to biometric authentication systems. This fact was formalized in the literature by the concept of the Biometric Menagerie, defining and labeling user groups with animal names in order to reflect their characteristics with respect to biometric systems. This concept was illustrated for face, fingerprint, iris, and speech modalities. The present study extends the Biometric Menagerie to online signatures by proposing a novel methodology that ties specific quality measures for signatures to categories of the Biometric Menagerie. Such measures are combined to automatically retrieve writer categories of the extended version of the Biometric Menagerie. Performance analysis with different types of classifiers shows the pertinence of our approach on the well-known MCYT-100 database.
Mapping Norway - a Method to Register and Survey the Status of Accessibility
NASA Astrophysics Data System (ADS)
Michaelis, Sven; Bögelsack, Kathrin
2018-05-01
The Norwegian mapping authority has developed a standard method for mapping accessibility, mostly for people with limited or no walking ability, in urban and recreational areas. We chose an object-oriented approach in which points, lines and polygons represent objects in the environment. All data are stored in a geospatial database, so they can be presented as a web map and analyzed using GIS software. By the end of 2016, more than 160 municipalities had been mapped using this method. The aim of this project is to establish a national standard for mapping and to provide a geodatabase that shows the status of accessibility throughout Norway. The data provide a useful tool for national statistics, local planning authorities and private users. First results show that accessibility is low and Norway still faces many challenges to meet the government's goals for Universal Design.
Dudding-Byth, Tracy; Baxter, Anne; Holliday, Elizabeth G; Hackett, Anna; O'Donnell, Sheridan; White, Susan M; Attia, John; Brunner, Han; de Vries, Bert; Koolen, David; Kleefstra, Tjitske; Ratwatte, Seshika; Riveros, Carlos; Brain, Steve; Lovell, Brian C
2017-12-19
Massively parallel genetic sequencing allows rapid testing of known intellectual disability (ID) genes. However, the discovery of novel syndromic ID genes requires molecular confirmation in at least a second or a cluster of individuals with an overlapping phenotype or similar facial gestalt. Using computer face-matching technology we report an automated approach to matching the faces of non-identical individuals with the same genetic syndrome within a database of 3681 images [1600 images of one of 10 genetic syndrome subgroups together with 2081 control images]. Using the leave-one-out method, two research questions were specified: 1) Using two-dimensional (2D) photographs of individuals with one of 10 genetic syndromes within a database of images, did the technology correctly identify more than expected by chance: i) a top match? ii) at least one match within the top five matches? or iii) at least one in the top 10 with an individual from the same syndrome subgroup? 2) Was there concordance between correct technology-based matches and whether two out of three clinical geneticists would have considered the diagnosis based on the image alone? The computer face-matching technology correctly identifies a top match, at least one correct match in the top five and at least one in the top 10 more than expected by chance (P < 0.00001). There was low agreement between the technology and clinicians, with higher accuracy of the technology when results were discordant (P < 0.01) for all syndromes except Kabuki syndrome. Although the accuracy of the computer face-matching technology was tested on images of individuals with known syndromic forms of intellectual disability, the results of this pilot study illustrate the potential utility of face-matching technology within deep phenotyping platforms to facilitate the interpretation of DNA sequencing data for individuals who remain undiagnosed despite testing the known developmental disorder genes.
Gabor-based kernel PCA with fractional power polynomial models for face recognition.
Liu, Chengjun
2004-05-01
This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the Gabor wavelet-based PCA method, and the Gabor wavelet-based kernel PCA method with polynomial kernels.
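A minimal sketch of the general idea, assuming illustrative Gabor filter parameters and a fractional power degree of 0.8: Gabor magnitude features feed a kernel PCA whose Gram matrix comes from a fractional power polynomial, and only eigenvectors with positive eigenvalues are retained, mirroring the strategy described above. This is not the paper's implementation.

# Gabor magnitude features + kernel PCA with a fractional power polynomial "kernel".
import numpy as np
from skimage.filters import gabor

def gabor_features(gray, freqs=(0.1, 0.2, 0.3), thetas=(0, np.pi / 4, np.pi / 2)):
    feats = []
    for f in freqs:
        for t in thetas:
            real, imag = gabor(gray, frequency=f, theta=t)
            feats.append(np.hypot(real, imag).mean())   # magnitude summary per filter
    return np.array(feats)

def frac_poly_kernel_pca(X, degree=0.8, n_components=5):
    K = np.sign(X @ X.T) * np.abs(X @ X.T) ** degree    # fractional power polynomial
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                      # centre the Gram matrix
    w, V = np.linalg.eigh(Kc)
    order = np.argsort(w)[::-1]
    w, V = w[order], V[:, order]
    keep = w > 1e-10                                    # positive eigenvalues only
    w, V = w[keep][:n_components], V[:, keep][:, :n_components]
    return Kc @ V / np.sqrt(w)                          # projected training features

rng = np.random.default_rng(0)
faces = rng.random((12, 32, 32))                        # toy face images
X = np.array([gabor_features(f) for f in faces])
print(frac_poly_kernel_pca(X).shape)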
Cardoso, Christopher; Ellenbogen, Mark A; Linnen, Anne-Marie
2014-02-01
Evidence suggests that intranasal oxytocin enhances the perception of emotion in facial expressions during standard emotion identification tasks. However, it is not clear whether this effect is desirable in people who do not show deficits in emotion perception. That is, a heightened perception of emotion in faces could lead to "oversensitivity" to the emotions of others in nonclinical participants. The goal of this study was to assess the effects of intranasal oxytocin on emotion perception using ecologically valid social and nonsocial visual tasks. Eighty-two participants (42 women) self-administered a 24 IU dose of intranasal oxytocin or a placebo in a double-blind, randomized experiment and then completed the perceiving and understanding emotion components of the Mayer-Salovey-Caruso Emotional Intelligence Test. In this test, emotion identification accuracy is based on agreement with a normative sample. As expected, participants administered intranasal oxytocin rated emotion in facial stimuli as expressing greater emotional intensity than those given a placebo. Consequently, accurate identification of emotion in faces, based on agreement with a normative sample, was impaired in the oxytocin group relative to placebo. No such effect was observed for tests using nonsocial stimuli. The results are consistent with the hypothesis that intranasal oxytocin enhances the salience of social stimuli in the environment, but not nonsocial stimuli. The present findings support a growing literature showing that the effects of intranasal oxytocin on social cognition can be negative under certain circumstances, in this case promoting "oversensitivity" to emotion in faces in healthy people. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Application of OpenCV in Asus Tinker Board for face recognition
NASA Astrophysics Data System (ADS)
Chen, Wei-Yu; Wu, Frank; Hu, Chung-Chiang
2017-06-01
The rise of the Internet of Things has promoted the development of single-board computers; as processor speed and memory capacity increase, more and more applications can be computed directly on the board before the information is sorted and sent over the network to the cloud for processing, so the development board is no longer simply a data-capture device. This study installs OpenCV on an Asus Tinker Board for real-time face detection and capture; the acquired face is sent to the Microsoft Cognitive Services cloud database for artificial-intelligence comparison to determine the mood the face expresses and the name of the corresponding person, and finally text-to-speech reads out that name to complete the identification. The study was developed on the Asus Tinker Board, which uses ARM-based CPUs with high efficiency and low power consumption, together with improved memory and hardware performance for a development board.
Ethnicity identification from face images
NASA Astrophysics Data System (ADS)
Lu, Xiaoguang; Jain, Anil K.
2004-08-01
Human facial images provide demographic information, such as ethnicity and gender. Conversely, ethnicity and gender also play an important role in face-related applications. The image-based ethnicity identification problem is addressed in a machine learning framework. A Linear Discriminant Analysis (LDA) based scheme is presented for the two-class (Asian vs. non-Asian) ethnicity classification task. Multiscale analysis is applied to the input facial images. An ensemble framework, which integrates the LDA analysis for the input face images at different scales, is proposed to further improve the classification performance. The product rule is used as the combination strategy in the ensemble. Experimental results based on a face database containing 263 subjects (2,630 face images, with equal balance between the two classes) are promising, indicating that LDA and the proposed ensemble framework have sufficient discriminative power for the ethnicity classification problem. The normalized ethnicity classification scores can be helpful in facial identity recognition. Used as a "soft" biometric, face matching scores can be updated based on the output of the ethnicity classification module. In other words, the ethnicity classifier does not have to be perfect to be useful in practice.
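A sketch of the multiscale LDA ensemble with product-rule fusion outlined above; the images, scales and labels are synthetic placeholders and the specific scale choices are assumptions.

# One LDA per image scale; per-scale posteriors are multiplied (the product rule).
import numpy as np
from skimage.transform import resize
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
faces = rng.random((60, 64, 64))
labels = rng.integers(0, 2, size=60)          # 0 = Asian, 1 = non-Asian (toy labels)
scales = [(64, 64), (32, 32), (16, 16)]

models = []
for s in scales:
    Xs = np.array([resize(f, s, anti_aliasing=True).ravel() for f in faces])
    models.append(LinearDiscriminantAnalysis().fit(Xs, labels))

def predict_product_rule(face):
    """Combine per-scale class posteriors by multiplying them."""
    posterior = np.ones(2)
    for s, m in zip(scales, models):
        x = resize(face, s, anti_aliasing=True).ravel()[None, :]
        posterior *= m.predict_proba(x)[0]
    return int(np.argmax(posterior))

print(predict_product_rule(faces[0]), labels[0])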
Deska, Jason C; Lloyd, E Paige; Hugenberg, Kurt
2018-04-01
The ability to rapidly and accurately decode facial expressions is adaptive for human sociality. Although judgments of emotion are primarily determined by musculature, static face structure can also impact emotion judgments. The current work investigates how facial width-to-height ratio (fWHR), a stable feature of all faces, influences perceivers' judgments of expressive displays of anger and fear (Studies 1a, 1b, & 2), and anger and happiness (Study 3). Across 4 studies, we provide evidence consistent with the hypothesis that perceivers more readily see anger on faces with high fWHR compared with those with low fWHR, which instead facilitates the recognition of fear and happiness. This bias emerges when participants are led to believe that targets displaying otherwise neutral faces are attempting to mask an emotion (Studies 1a & 1b), and is evident when faces display an emotion (Studies 2 & 3). Together, these studies suggest that target facial width-to-height ratio biases ascriptions of emotion with consequences for emotion recognition speed and accuracy. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Filliter, Jillian H; Glover, Jacqueline M; McMullen, Patricia A; Salmon, Joshua P; Johnson, Shannon A
2016-03-01
Houses have often been used as comparison stimuli in face-processing studies because of the many attributes they share with faces (e.g., distinct members of a basic category, consistent internal features, mono-orientation, and relative familiarity). Despite this, no large, well-controlled databases of photographs of houses that have been developed for research use currently exist. To address this gap, we photographed 100 houses and carefully edited these images. We then asked 41 undergraduate students (18 to 31 years of age) to rate each house on three dimensions: typicality, likeability, and face-likeness. The ratings had a high degree of face validity, and analyses revealed a significant positive correlation between typicality and likeability. We anticipate that this stimulus set (i.e., the DalHouses) and the associated ratings will prove useful to face-processing researchers by minimizing the effort required to acquire stimuli and allowing for easier replication and extension of studies. The photographs of all 100 houses and their ratings data can be obtained at http://dx.doi.org/10.6084/m9.figshare.1279430.
Ekas, Naomi V; Haltigan, John D; Messinger, Daniel S
2013-06-01
The still-face paradigm (SFP) was designed to assess infant expectations that parents will respond to infant communicative signals. During the still-face (SF) episode, the parent ceases interaction and maintains a neutral expression. Original, qualitative descriptions of infant behavior suggested changes within the SF episode: infants decrease bidding and disengage from their impassive parent. Research has documented changes in mean levels of infant behavior between episodes of the SFP. The hypothesis that infant behavior changes within the SF episode has not been empirically tested. In this study, hierarchical linear modeling indicated that infant gazing at the parent, smiling, and social bidding (smiling while gazing at the parent) decreased with time in the SF episode, while infant cry-face expressions increased. Changes in infant behaviors within the SF episode were associated with infant attachment and infant internalizing problems. The dynamic still-face effect quantifies infant initiation of interaction in the face of parental unresponsiveness and is a potential predictor of individual differences in development. PsycINFO Database Record (c) 2013 APA, all rights reserved
NASA Astrophysics Data System (ADS)
Uzbaş, Betül; Arslan, Ahmet
2018-04-01
Gender determination is an important step for human-computer interaction processes and for identification. The human face image is one of the important sources for determining gender. In the present study, gender classification is performed automatically from facial images. In order to classify gender, we propose a combination of features extracted from the face, eye and lip regions by using a hybrid method of Local Binary Pattern and Gray-Level Co-Occurrence Matrix. The features have been extracted from automatically obtained face, eye and lip regions. All of the extracted features have been combined and given as input parameters to classification methods (Support Vector Machine, Artificial Neural Networks, Naive Bayes and k-Nearest Neighbor methods) for gender classification. The Nottingham Scan face database, which consists of the frontal face images of 100 people (50 male and 50 female), is used for this purpose. In the experimental studies, the highest success rate, 98%, was achieved with the Support Vector Machine. The experimental results illustrate the efficacy of our proposed method.
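A sketch of LBP plus gray-level co-occurrence (GLCM) feature fusion with an SVM, assuming the face, eye and lip regions are already cropped (synthetic patches stand in for them here). Note that older scikit-image versions spell the co-occurrence functions greycomatrix/greycoprops.

# LBP + GLCM features per region, concatenated and classified with an SVM.
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops
from sklearn.svm import SVC

def lbp_glcm_features(patch_uint8):
    lbp = local_binary_pattern(patch_uint8, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    glcm = graycomatrix(patch_uint8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = np.concatenate([graycoprops(glcm, p).ravel()
                                 for p in ("contrast", "homogeneity", "energy")])
    return np.concatenate([lbp_hist, glcm_feats])

def sample_features(regions):
    """regions = [face, eye, lip] patches; concatenate their features."""
    return np.concatenate([lbp_glcm_features(r) for r in regions])

rng = np.random.default_rng(2)
X, y = [], []
for i in range(40):
    regions = [rng.integers(0, 256, (48, 48), dtype=np.uint8) for _ in range(3)]
    X.append(sample_features(regions))
    y.append(i % 2)                      # 0 = male, 1 = female (toy labels)

clf = SVC(kernel="rbf").fit(np.array(X), y)
print(clf.score(np.array(X), y))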
Adaptive error correction codes for face identification
NASA Astrophysics Data System (ADS)
Hussein, Wafaa R.; Sellahewa, Harin; Jassim, Sabah A.
2012-06-01
Face recognition in uncontrolled environments is greatly affected by fuzziness of face feature vectors as a result of extreme variation in recording conditions (e.g. illumination, poses or expressions) in different sessions. Many techniques have been developed to deal with these variations, resulting in improved performances. This paper aims to model template fuzziness as errors and investigate the use of error detection/correction techniques for face recognition in uncontrolled environments. Error correction codes (ECCs) have recently been used for biometric key generation but not on biometric templates. We have investigated error patterns in binary face feature vectors extracted from different image windows of differing sizes and for different recording conditions. By estimating statistical parameters for the intra-class and inter-class distributions of Hamming distances in each window, we encode with appropriate ECCs. The proposed approach is tested for binarised wavelet templates using two face databases: Extended Yale-B and Yale. We demonstrate that using different combinations of BCH-based ECCs for different blocks and different recording conditions leads to different accuracy rates, and that using ECCs results in significantly improved recognition results.
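A small sketch of the statistic underlying the approach above: estimating intra-class and inter-class Hamming distance distributions for binarised feature blocks, which then guide the choice of error-correcting-code strength. The templates are random placeholders and no actual BCH encoding is performed (that would typically come from a dedicated coding library).

# Intra-/inter-class Hamming distance statistics for binary templates.
import itertools
import numpy as np

rng = np.random.default_rng(3)
n_subjects, n_samples, n_bits = 5, 4, 256
prototypes = rng.integers(0, 2, (n_subjects, n_bits))
# each subject's samples are noisy copies of that subject's prototype
templates = {
    s: [(prototypes[s] ^ (rng.random(n_bits) < 0.05)).astype(int)
        for _ in range(n_samples)]
    for s in range(n_subjects)
}

def hamming(a, b):
    return int(np.sum(a != b))

intra = [hamming(a, b)
         for s in templates
         for a, b in itertools.combinations(templates[s], 2)]
inter = [hamming(a, b)
         for s, t in itertools.combinations(templates, 2)
         for a in templates[s] for b in templates[t]]

print(f"intra-class: mean={np.mean(intra):.1f}, std={np.std(intra):.1f}")
print(f"inter-class: mean={np.mean(inter):.1f}, std={np.std(inter):.1f}")
# an ECC correcting roughly mean(intra) bit errors per block would be chosen here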
Zhao, Xi; Dellandréa, Emmanuel; Chen, Liming; Kakadiaris, Ioannis A
2011-10-01
Three-dimensional face landmarking aims at automatically localizing facial landmarks and has a wide range of applications (e.g., face recognition, face tracking, and facial expression analysis). Existing methods assume neutral facial expressions and unoccluded faces. In this paper, we propose a general learning-based framework for reliable landmark localization on 3-D facial data under challenging conditions (i.e., facial expressions and occlusions). Our approach relies on a statistical model, called 3-D statistical facial feature model, which learns both the global variations in configurational relationships between landmarks and the local variations of texture and geometry around each landmark. Based on this model, we further propose an occlusion classifier and a fitting algorithm. Results from experiments on three publicly available 3-D face databases (FRGC, BU-3-DFE, and Bosphorus) demonstrate the effectiveness of our approach, in terms of landmarking accuracy and robustness, in the presence of expressions and occlusions.
Face-off: A new identification procedure for child eyewitnesses.
Price, Heather L; Fitzgerald, Ryan J
2016-09-01
In 2 experiments, we introduce a new "face-off" procedure for child eyewitness identifications. The new procedure, which is premised on reducing the stimulus set size, was compared with the showup and simultaneous procedures in Experiment 1 and with modified versions of the simultaneous and elimination procedures in Experiment 2. Several benefits of the face-off procedure were observed: it was significantly more diagnostic than the showup procedure; it led to significantly more correct rejections of target-absent lineups than the simultaneous procedures in both experiments, and it led to greater information gain than the modified elimination and simultaneous procedures. The face-off procedure led to consistently more conservative responding than the simultaneous procedures in both experiments. Given the commonly cited concern that children are too lenient in their decision criteria for identification tasks, the face-off procedure may offer a concrete technique to reduce children's high choosing rates. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Suspect identification by facial features.
Lee, Eric; Whalen, Thomas; Sakalauskas, John; Baigent, Glen; Bisesar, Chandra; McCarthy, Andrew; Reid, Glenda; Wotton, Cynthia
2004-06-10
Often during criminal investigations, witnesses must examine photographs of known offenders, colloquially called 'mug shots'. As witnesses view increasing numbers of mug shots that are presented in an arbitrary order, they become more likely to identify the wrong suspect. An alternative is a subjective feature-based mug shot retrieval system in which witnesses first complete a questionnaire about the appearance of the suspect, and then examine photographs in order of decreasing resemblance to their description. In the first experiment, this approach is found to be more efficient and more accurate than searching an album. The next three experiments show that it makes little difference if the witness has seen the suspect in person or only seen a photograph. In the last two experiments, it is shown that the feature-based retrieval system is effective even when the witness has seen the suspect in realistic natural settings. The results show that the main conclusions drawn from previous studies, where witnesses searched for faces seen only in photographs, also apply when witnesses are searching for a face that they saw live in naturalistic settings. Additionally, it is shown that it is better to have two raters rather than one create the database, but that more than two raters yield rapidly diminishing returns for the extra cost.
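A minimal sketch of the core retrieval idea: score each mug shot by similarity between its rated features and the witness's questionnaire answers, then present photographs in decreasing order of resemblance. The feature names, ratings and similarity measure below are invented for illustration.

# Rank mug shots by closeness of rated features to a witness description.
import numpy as np

features = ["age", "hair_darkness", "face_width", "nose_size"]   # each rated 1-7
database = {
    "photo_017": np.array([3, 6, 4, 5]),
    "photo_042": np.array([5, 2, 3, 4]),
    "photo_108": np.array([3, 5, 5, 5]),
}
witness_description = np.array([3, 6, 5, 5])

def resemblance(rating, description):
    """Negative mean absolute difference: higher means closer resemblance."""
    return -np.mean(np.abs(rating - description))

ranked = sorted(database,
                key=lambda pid: resemblance(database[pid], witness_description),
                reverse=True)
print(ranked)   # photos would be shown to the witness in this order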
Laptop Computer - Based Facial Recognition System Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. A. Cain; G. B. Singleton
2001-03-01
The objective of this project was to assess the performance of the leading commercial-off-the-shelf (COTS) facial recognition software package when used as a laptop application. We performed the assessment to determine the system's usefulness for enrolling facial images in a database from remote locations and conducting real-time searches against a database of previously enrolled images. The assessment involved creating a database of 40 images and conducting 2 series of tests to determine the product's ability to recognize and match subject faces under varying conditions. This report describes the test results and includes a description of the factors affecting the results. After an extensive market survey, we selected Visionics' FaceIt® software package for evaluation and a review of the Facial Recognition Vendor Test 2000 (FRVT 2000). This test was co-sponsored by the US Department of Defense (DOD) Counterdrug Technology Development Program Office, the National Institute of Justice, and the Defense Advanced Research Projects Agency (DARPA). Administered in May-June 2000, the FRVT 2000 assessed the capabilities of facial recognition systems that were currently available for purchase on the US market. Our selection of this Visionics product does not indicate that it is the "best" facial recognition software package for all uses. It was the most appropriate package based on the specific applications and requirements for this specific application. In this assessment, the system configuration was evaluated for effectiveness in identifying individuals by searching for facial images captured from video displays against those stored in a facial image database. An additional criterion was that the system be capable of operating discretely. For this application, an operational facial recognition system would consist of one central computer hosting the master image database with multiple standalone systems configured with duplicates of the master operating in remote locations. Remote users could perform real-time searches where network connectivity is not available. As images are enrolled at the remote locations, periodic database synchronization is necessary.
Nine-year-old children use norm-based coding to visually represent facial expression.
Burton, Nichola; Jeffery, Linda; Skinner, Andrew L; Benton, Christopher P; Rhodes, Gillian
2013-10-01
Children are less skilled than adults at making judgments about facial expression. This could be because they have not yet developed adult-like mechanisms for visually representing faces. Adults are thought to represent faces in a multidimensional face-space, and have been shown to code the expression of a face relative to the norm or average face in face-space. Norm-based coding is economical and adaptive, and may be what makes adults more sensitive to facial expression than children. This study investigated the coding system that children use to represent facial expression. An adaptation aftereffect paradigm was used to test 24 adults and 18 children (9 years 2 months to 9 years 11 months old). Participants adapted to weak and strong antiexpressions. They then judged the expression of an average expression. Adaptation created aftereffects that made the test face look like the expression opposite that of the adaptor. Consistent with the predictions of norm-based but not exemplar-based coding, aftereffects were larger for strong than weak adaptors for both age groups. Results indicate that, like adults, children's coding of facial expressions is norm-based. PsycINFO Database Record (c) 2013 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Morishima, Shigeo; Nakamura, Satoshi
2004-12-01
We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.
Detection of emotional faces: salient physical features guide effective visual search.
Calvo, Manuel G; Nummenmaa, Lauri
2008-08-01
In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features--especially the smiling mouth--is responsible for facilitated initial orienting, which thus shortens detection. (PsycINFO Database Record (c) 2008 APA, all rights reserved).
Fast and accurate face recognition based on image compression
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Blasch, Erik
2017-05-01
Image compression is desired for many image-related applications, especially for network-based applications with bandwidth and storage constraints. Typical reports from the face recognition community concentrate on the maximal compression rate that would not decrease the recognition accuracy. In general, the wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) are of high performance but run slowly due to their high computation demands. The PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, which is termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed with the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed with three compression ratios calculated from the probe, gallery and mixed images. Finally, the CCR values are compared and the largest CCR corresponds to the matched face. The time cost of each face matching is about the time of compressing the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression. On the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. JPEG-compression-based (JPEG-CPB) face recognition is standard and fast and may be integrated into a real-time imaging device.
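A sketch of the compression-based matching idea: compress the probe, each gallery image, and the probe+gallery mixture, then score by how much better the mixture compresses when probe and gallery show the same content. zlib stands in for JPEG, and the composite score below is an assumption in the spirit of a normalized-compression-distance measure, not the paper's exact CCR formula.

# Compression-based matching with zlib as a stand-in compressor.
import zlib
import numpy as np

def csize(img):
    return len(zlib.compress(img.tobytes(), level=9))

def match_score(probe, gallery):
    mixed = np.hstack([probe, gallery])          # form the mixed image
    c_p, c_g, c_m = csize(probe), csize(gallery), csize(mixed)
    return (c_p + c_g) / c_m                     # larger = more shared structure

rng = np.random.default_rng(4)
identity_a = rng.integers(0, 256, (64, 64), dtype=np.uint8)
identity_b = rng.integers(0, 256, (64, 64), dtype=np.uint8)

probe = identity_a.copy()                        # probe is a slightly corrupted view of A
rows, cols = rng.integers(0, 64, 20), rng.integers(0, 64, 20)
probe[rows, cols] = rng.integers(0, 256, 20)

gallery = {"person_a": identity_a, "person_b": identity_b}
scores = {name: match_score(probe, img) for name, img in gallery.items()}
print(max(scores, key=scores.get), scores)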
(abstract) Synthesis of Speaker Facial Movements to Match Selected Speech Sequences
NASA Technical Reports Server (NTRS)
Scott, Kenneth C.
1994-01-01
We are developing a system for synthesizing image sequences that simulate the facial motion of a speaker. To perform this synthesis, we are pursuing two major areas of effort. We are developing the necessary computer graphics technology to synthesize a realistic image sequence of a person speaking selected speech sequences. Next, we are developing a model that expresses the relation between spoken phonemes and face/mouth shape. A subject is videotaped speaking an arbitrary text that contains expression of the full list of desired database phonemes. The subject is videotaped from the front speaking normally, recording both audio and video detail simultaneously. Using the audio track, we identify the specific video frames on the tape relating to each spoken phoneme. From this range we digitize the video frame which represents the extreme of mouth motion/shape. Thus, we construct a database of images of face/mouth shape related to spoken phonemes. A selected audio speech sequence is recorded which is the basis for synthesizing a matching video sequence; the speaker need not be the same as used for constructing the database. The audio sequence is analyzed to determine the spoken phoneme sequence and the relative timing of the enunciation of those phonemes. Synthesizing an image sequence corresponding to the spoken phoneme sequence is accomplished using a graphics technique known as morphing. Image sequence keyframes necessary for this processing are based on the spoken phoneme sequence and timing. We have been successful in synthesizing the facial motion of a native English speaker for a small set of arbitrary speech segments. Our future work will focus on advancement of the face shape/phoneme model and independent control of facial features.
Characterization and recognition of mixed emotional expressions in thermal face image
NASA Astrophysics Data System (ADS)
Saha, Priya; Bhattacharjee, Debotosh; De, Barin K.; Nasipuri, Mita
2016-05-01
Facial expressions in infrared imaging have been introduced to solve the problem of illumination, which is an integral constituent of visual imagery. The paper investigates facial skin temperature distribution on mixed thermal facial expressions in our face database, in which six are basic expressions and the remaining 12 are mixtures of those basic expressions. Temperature analysis has been performed on three facial regions of interest (ROIs): periorbital, supraorbital and mouth. Temperature variability of the ROIs in different expressions has been measured using statistical parameters. The temperature variation measurement in the ROIs of a particular expression corresponds to a vector, which is later used in recognition of mixed facial expressions. Investigations show that facial features in mixed facial expressions can be characterized by features induced by positive emotions and features induced by negative emotions. The supraorbital region is a useful facial region that can differentiate basic expressions from mixed expressions. Analysis and interpretation of mixed expressions have been conducted with the help of box and whisker plots. A facial region containing a mixture of two expressions generally induces less temperature change than the corresponding facial region in a basic expression.
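A small sketch of the ROI temperature-statistics vector described above. The ROI bounding boxes and the synthetic "thermal image" are placeholders; the particular statistics used in the paper may differ.

# Build a per-image feature vector of temperature statistics over the three ROIs.
import numpy as np

rng = np.random.default_rng(5)
thermal = 30 + 5 * rng.random((120, 160))        # toy temperature map in deg C

rois = {
    "periorbital": (40, 60, 30, 130),            # (y0, y1, x0, x1), illustrative
    "supraorbital": (20, 35, 30, 130),
    "mouth": (85, 110, 55, 105),
}

def roi_stats(img, box):
    y0, y1, x0, x1 = box
    patch = img[y0:y1, x0:x1]
    return [patch.mean(), patch.std(), patch.min(), patch.max(), np.median(patch)]

# one feature vector per expression image, later compared across expressions
feature_vector = np.concatenate([roi_stats(thermal, b) for b in rois.values()])
print(feature_vector.round(2))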
When cheating would make you a cheater: implicating the self prevents unethical behavior.
Bryan, Christopher J; Adams, Gabrielle S; Monin, Benoît
2013-11-01
In 3 experiments using 2 different paradigms, people were less likely to cheat for personal gain when a subtle change in phrasing framed such behavior as diagnostic of an undesirable identity. Participants were given the opportunity to claim money they were not entitled to at the experimenters' expense; instructions referred to cheating with either language that was designed to highlight the implications of cheating for the actor's identity (e.g., "Please don't be a cheater") or language that focused on the action (e.g., "Please don't cheat"). Participants in the "cheating" condition claimed significantly more money than did participants in the "cheater" condition, who showed no evidence of having cheated at all. This difference occurred both in a face-to-face interaction (Experiment 1) and in a private online setting (Experiments 2 and 3). These results demonstrate the power of a subtle linguistic difference to prevent even private unethical behavior by invoking people's desire to maintain a self-image as good and honest. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Combining facial dynamics with appearance for age estimation.
Dibeklioglu, Hamdi; Alnajar, Fares; Ali Salah, Albert; Gevers, Theo
2015-06-01
Estimating the age of a human from the captured images of his/her face is a challenging problem. In general, the existing approaches to this problem use appearance features only. In this paper, we show that in addition to appearance information, facial dynamics can be leveraged in age estimation. We propose a method to extract and use dynamic features for age estimation, using a person's smile. Our approach is tested on a large, gender-balanced database with 400 subjects, with an age range between 8 and 76. In addition, we introduce a new database on posed disgust expressions with 324 subjects in the same age range, and evaluate the reliability of the proposed approach when used with another expression. State-of-the-art appearance-based age estimation methods from the literature are implemented as baseline. We demonstrate that for each of these methods, the addition of the proposed dynamic features results in statistically significant improvement. We further propose a novel hierarchical age estimation architecture based on adaptive age grouping. We test our approach extensively, including an exploration of spontaneous versus posed smile dynamics, and gender-specific age estimation. We show that using spontaneity information reduces the mean absolute error by up to 21%, advancing the state of the art for facial age estimation.
Tang, Xin; Feng, Guo-Can; Li, Xiao-Xin; Cai, Jia-Xin
2015-01-01
Face recognition is challenging especially when the images from different persons are similar to each other due to variations in illumination, expression, and occlusion. If we have sufficient training images of each person which can span the facial variations of that person under testing conditions, sparse representation based classification (SRC) achieves very promising results. However, in many applications, face recognition often encounters the small sample size problem arising from the small number of available training images for each person. In this paper, we present a novel face recognition framework by utilizing low-rank and sparse error matrix decomposition, and sparse coding techniques (LRSE+SC). Firstly, the low-rank matrix recovery technique is applied to decompose the face images per class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary and it captures the discriminative feature of this individual. The sparse error matrix represents the intra-class variations, such as illumination and expression changes. Secondly, we combine the low-rank part (representative basis) of each person into a supervised dictionary and integrate the sparse error matrices of all individuals into a within-individual variant dictionary which can be applied to represent the possible variations between the testing and training images. Then these two dictionaries are used to code the query image. The within-individual variant dictionary can be shared by all the subjects and only contributes to explaining the lighting conditions, expressions, and occlusions of the query image rather than to discrimination. Finally, a reconstruction-based scheme is adopted for face recognition. Since the within-individual dictionary is introduced, LRSE+SC can handle the problem of corrupted training data and the situation in which not all subjects have enough samples for training. Experimental results show that our method achieves state-of-the-art results on the AR, FERET, FRGC and LFW databases. PMID:26571112
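For the first stage of the framework above (decomposing a per-class image matrix into a low-rank part plus a sparse error matrix), a hedged sketch follows. It uses a plain alternating singular-value/entrywise shrinkage heuristic, not the authors' optimizer, and the regularization weight, thresholds and iteration count are illustrative.

# Heuristic low-rank + sparse decomposition of a matrix of vectorised face images.
import numpy as np

def soft_threshold(M, tau):
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def low_rank_sparse(D, lam=None, n_iter=100):
    """Approximate D ~ L (low rank) + S (sparse) by alternating shrinkage."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iter):
        # shrink singular values of the residual to update the low-rank part
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U * np.maximum(s - 1.0, 0.0)) @ Vt
        # soft-threshold the residual entrywise to update the sparse errors
        S = soft_threshold(D - L, lam)
    return L, S

rng = np.random.default_rng(6)
true_low_rank = rng.random((100, 3)) @ rng.random((3, 12))     # 100-pixel faces, 12 images
sparse_errors = (rng.random((100, 12)) < 0.05) * rng.normal(0, 2, (100, 12))
D = true_low_rank + sparse_errors

L, S = low_rank_sparse(D)
print(np.linalg.matrix_rank(L, tol=1e-6), float(np.mean(np.abs(S) > 1e-6)))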
Implementation of a high-speed face recognition system that uses an optical parallel correlator.
Watanabe, Eriko; Kodate, Kashiko
2005-02-10
We implement a fully automatic fast face recognition system by using a 1000 frame/s optical parallel correlator designed and assembled by us. The operational speed for the 1:N (i.e., matching one image against N, where N refers to the number of images in the database) identification experiment (4000 face images) amounts to less than 1.5 s, including the preprocessing and postprocessing times. The binary real-only matched filter is devised for the sake of face recognition, and the system is optimized by the false-rejection rate (FRR) and the false-acceptance rate (FAR), according to 300 samples selected by the biometrics guideline. From trial 1:N identification experiments with the optical parallel correlator, we acquired low error rates of 2.6% FRR and 1.3% FAR. Facial images of people wearing thin glasses or heavy makeup that rendered identification difficult were identified with this system.
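For readers unfamiliar with the error measures quoted above, the sketch below shows how FRR and FAR can be computed from genuine and impostor match scores at a fixed decision threshold; the score arrays are hypothetical and unrelated to the optical correlator hardware itself.

```python
import numpy as np

def frr_far(genuine_scores, impostor_scores, threshold):
    """False-rejection rate: fraction of genuine comparisons below the threshold.
    False-acceptance rate: fraction of impostor comparisons at or above it."""
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    return float(np.mean(genuine < threshold)), float(np.mean(impostor >= threshold))

# Hypothetical correlation-peak scores; sweep the threshold to trade FRR vs FAR.
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 300)
impostor = rng.normal(0.4, 0.1, 300)
print(frr_far(genuine, impostor, threshold=0.6))
```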
Automatic Detection of Frontal Face Midline by Chain-coded Merlin-Farber Hough Transform
NASA Astrophysics Data System (ADS)
Okamoto, Daichi; Ohyama, Wataru; Wakabayashi, Tetsushi; Kimura, Fumitaka
We propose a novel approach for detecting the facial midline (facial symmetry axis) from a frontal face image. The facial midline has several applications, for instance reducing the computational cost of facial feature extraction (FFE) and supporting postoperative assessment for cosmetic or dental surgery. The proposed method detects the facial midline of a frontal face from an edge image, treating it as the symmetry axis found by the Merlin-Farber Hough transform (MFHT). A new performance-improvement scheme for midline detection by the MFHT is also presented. The main idea of the proposed scheme is to suppress redundant votes in the Hough parameter space by introducing a chain-code representation of the binary edge image. Experimental results on an image dataset containing 2409 images from the FERET database indicate that the proposed algorithm improves the accuracy of midline detection from 89.9% to 95.1% for face images with different scales and rotations.
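As a rough illustration of the symmetry-axis voting idea, the sketch below restricts itself to vertical midlines: every pair of edge pixels on the same row votes for the column midway between them. The actual Merlin-Farber Hough transform votes in a full (rho, theta) parameter space, and the paper's chain-code scheme further suppresses redundant votes; neither is reproduced here, and the demo edge map is synthetic.

```python
import numpy as np

def vertical_midline(edge):
    """edge: binary (H, W) edge image. Every pair of edge pixels lying on the
    same row votes for the column halfway between them; the column collecting
    the most votes is returned as an estimate of a vertical symmetry axis."""
    votes = np.zeros(edge.shape[1])
    for row in edge:
        cols = np.flatnonzero(row)
        for i in range(len(cols)):
            for j in range(i + 1, len(cols)):
                votes[(cols[i] + cols[j]) // 2] += 1
    return int(np.argmax(votes))

# Hypothetical example: a synthetic edge map symmetric about column 40.
demo = np.zeros((60, 80), dtype=np.uint8)
demo[10:50, 25] = 1
demo[10:50, 55] = 1
print(vertical_midline(demo))   # 40
```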
Face recognition using an enhanced independent component analysis approach.
Kwak, Keun-Chang; Pedrycz, Witold
2007-03-01
This paper is concerned with an enhanced independent component analysis (ICA) and its application to face recognition. Typically, face representations obtained by ICA involve unsupervised learning and high-order statistics. In this paper, we develop an enhancement of the generic ICA by augmenting this method by the Fisher linear discriminant analysis (LDA); hence, its abbreviation, FICA. The FICA is systematically developed and presented along with its underlying architecture. A comparative analysis explores four distance metrics, as well as classification with support vector machines (SVMs). We demonstrate that the FICA approach leads to the formation of well-separated classes in low-dimension subspace and is endowed with a great deal of insensitivity to large variation in illumination and facial expression. The comprehensive experiments are completed for the facial-recognition technology (FERET) face database; a comparative analysis demonstrates that FICA comes with improved classification rates when compared with some other conventional approaches such as eigenface, fisherface, and the ICA itself.
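A hedged scikit-learn sketch of an ICA-then-LDA pipeline in the spirit of FICA, using the freely available Olivetti faces as a stand-in for the licensed FERET database; the component counts and the linear SVM are illustrative choices, not the paper's settings.

```python
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# ICA features followed by Fisher LDA, then an SVM classifier.
faces = fetch_olivetti_faces()                  # stand-in for FERET
X_tr, X_te, y_tr, y_te = train_test_split(
    faces.data, faces.target, test_size=0.25,
    stratify=faces.target, random_state=0)

model = make_pipeline(
    FastICA(n_components=60, random_state=0, max_iter=500),
    LinearDiscriminantAnalysis(n_components=30),
    SVC(kernel="linear"))
model.fit(X_tr, y_tr)
print("accuracy:", model.score(X_te, y_te))
```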
Face recognition in capuchin monkeys (Cebus apella).
Pokorny, Jennifer J; de Waal, Frans B M
2009-05-01
Primates live in complex social groups that necessitate recognition of the individuals with whom they interact. In humans, faces provide a visual means by which to gain information such as identity, allowing us to distinguish between both familiar and unfamiliar individuals. The current study used a computerized oddity task to investigate whether a New World primate, Cebus apella, can discriminate the faces of In-group and Out-group conspecifics based on identity. The current study, improved on past methodologies, demonstrates that capuchins recognize the faces of both familiar and unfamiliar conspecifics. Once a performance criterion had been reached, subjects successfully transferred to a large number of novel images within the first 100 trials thus ruling out performance based on previous conditioning. Capuchins can be added to a growing list of primates that appear to recognize two-dimensional facial images of conspecifics. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
Emotion-Color Associations in the Context of the Face.
Thorstenson, Christopher A; Elliot, Andrew J; Pazda, Adam D; Perrett, David I; Xiao, Dengke
2017-11-27
Facial expressions of emotion contain important information that is perceived and used by observers to understand others' emotional state. While there has been considerable research into perceptions of facial musculature and emotion, less work has been conducted to understand perceptions of facial coloration and emotion. The current research examined emotion-color associations in the context of the face. Across 4 experiments, participants were asked to manipulate the color of face, or shape, stimuli along 2 color axes (i.e., red-green, yellow-blue) for 6 target emotions (i.e., anger, disgust, fear, happiness, sadness, surprise). The results yielded a pattern that is consistent with physiological and psychological models of emotion. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
A service-oriented data access control model
NASA Astrophysics Data System (ADS)
Meng, Wei; Li, Fengmin; Pan, Juchen; Song, Song; Bian, Jiali
2017-01-01
The development of mobile computing, cloud computing and distributed computing meets growing individual service needs. Faced with complex application systems, ensuring real-time, dynamic, and fine-grained data access control is an urgent problem. By analyzing common data access control models and building on the mandatory access control model, the paper proposes a service-oriented access control model. By regarding system services as subjects and database data as objects, the model defines access levels and access identification for subjects and objects, and ensures that system services access databases securely.
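A toy Python sketch of the idea as summarized above: system services act as subjects and database data as objects, each carrying an access level and an access identification. The specific rule used here (reads need sufficient clearance within a matching domain, writes need equal levels) is an assumption for illustration, not the paper's formal model.

```python
from dataclasses import dataclass

@dataclass
class Subject:            # a system service
    name: str
    level: int            # clearance level
    ident: str            # access identification (e.g. service domain)

@dataclass
class Object:             # a unit of database data
    name: str
    level: int            # sensitivity level
    ident: str

def may_access(subject: Subject, obj: Object, mode: str) -> bool:
    """Mandatory-style check: the service identification must match the data
    domain; reads require clearance >= sensitivity, writes require equality."""
    if subject.ident != obj.ident:
        return False
    if mode == "read":
        return subject.level >= obj.level
    if mode == "write":
        return subject.level == obj.level
    return False

svc = Subject("billing-service", level=2, ident="finance")
row = Object("invoice_table", level=1, ident="finance")
print(may_access(svc, row, "read"))   # True
print(may_access(svc, row, "write"))  # False
```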
Wanyonyi, Kristina L; Themessl-Huber, Markus; Humphris, Gerry; Freeman, Ruth
2011-12-01
To conduct a systematic review of the effect of face-to-face delivered tailored health messages on patient behavior and applications for practice. A systematic literature review and meta-analysis. Systematic searches of a number of electronic databases were conducted and criteria for selection of studies were specified. Six experimental studies published between 2003 and 2009 were included. The studies were all randomized controlled trials evaluating the effectiveness of a face-to-face tailored messaging intervention. There was variation in their research designs and in the methods used to randomize. All participants were aged at least 18 years. All of the studies reported positive changes in participants' health behavior with varying degrees of effect size and duration. A meta-analysis of the available data also confirmed an overall positive effect of tailored messaging on participants' health behaviors. The systematic review and the meta-analysis demonstrate a significant and positive effect of face-to-face tailored messaging upon participants' health behaviors. Health practitioners should be encouraged to allot time in their work routines to discover their patients' psycho-social characteristics and felt needs so that they can provide a tailored health message to enable the patient to adopt health-promoting regimes into their lifestyle. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
ERIC Educational Resources Information Center
Leach, Ernest R.
The discipline of marketing, applied to higher education, has the potential for increasing enrollments, reducing attrition, and making college services more responsive to the needs of consumers. Faced with enrollments that were below projections, Prince George's Community College devised a four-stage marketing plan that focused on service,…
Pose-variant facial expression recognition using an embedded image system
NASA Astrophysics Data System (ADS)
Song, Kai-Tai; Han, Meng-Ju; Chang, Shuo-Hung
2008-12-01
In recent years, one of the most attractive research areas in human-robot interaction is automated facial expression recognition. By recognizing facial expressions, a pet robot can interact with humans in a more natural manner. In this study, we focus on the facial pose-variant problem. A novel method is proposed in this paper to recognize pose-variant facial expressions. After locating the face position in an image frame, the active appearance model (AAM) is applied to track facial features. Fourteen feature points are extracted to represent the variation of facial expressions. The distances between feature points are defined as the feature values. These feature values are sent to a support vector machine (SVM) for facial expression determination. The pose-variant facial expression is classified as happiness, neutral, sadness, surprise or anger. Furthermore, in order to evaluate the performance for practical applications, this study also built a low-resolution database (160×120 pixels) using a CMOS image sensor. Experimental results show that the recognition rate is 84% with the self-built database.
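A small sketch of the feature step described above: pairwise distances between 14 tracked points form the feature vector fed to an SVM. The landmark data and labels below are randomly generated placeholders; in the paper the points come from AAM tracking on low-resolution frames.

```python
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

EXPRESSIONS = ["happiness", "neutral", "sadness", "surprise", "anger"]

def distance_features(landmarks):
    """landmarks: (14, 2) array of tracked feature points (e.g. from an AAM).
    Returns the vector of all pairwise Euclidean distances (14 choose 2 = 91)."""
    pts = np.asarray(landmarks, dtype=float)
    return np.array([np.linalg.norm(pts[i] - pts[j])
                     for i, j in combinations(range(len(pts)), 2)])

# Hypothetical training data: one 14-point shape per frame, placeholder labels.
rng = np.random.default_rng(1)
X = np.stack([distance_features(rng.uniform(0, 120, (14, 2))) for _ in range(200)])
y = rng.integers(0, len(EXPRESSIONS), 200)

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
probe = distance_features(rng.uniform(0, 120, (14, 2)))
print(EXPRESSIONS[clf.predict(probe[None, :])[0]])
```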
NASA Astrophysics Data System (ADS)
Chan, V. S.; Wong, C. P. C.; McLean, A. G.; Luo, G. N.; Wirth, B. D.
2013-10-01
The Xolotl code under development by PSI-SciDAC will enhance predictive modeling capability of plasma-facing materials under burning plasma conditions. The availability and application of experimental data to compare to code-calculated observables are key requirements to validate the breadth and content of physics included in the model and ultimately gain confidence in its results. A dedicated effort has been in progress to collect and organize a) a database of relevant experiments and their publications as previously carried out at sample exposure facilities in US and Asian tokamaks (e.g., DIII-D DiMES, and EAST MAPES), b) diagnostic and surface analysis capabilities available at each device, and c) requirements for future experiments with code validation in mind. The content of this evolving database will serve as a significant resource for the plasma-material interaction (PMI) community. Work supported in part by the US Department of Energy under GA-DE-SC0008698, DE-AC52-07NA27344 and DE-AC05-00OR22725.
A lightweight approach for biometric template protection
NASA Astrophysics Data System (ADS)
Al-Assam, Hisham; Sellahewa, Harin; Jassim, Sabah
2009-05-01
Privacy and security are vital concerns for practical biometric systems. The concept of cancelable or revocable biometrics has been proposed as a solution for biometric template security. Revocable biometrics means that biometric templates are no longer fixed over time and can be revoked in the same way as lost or stolen credit cards. In this paper, we describe a novel and efficient approach to biometric template protection that meets the revocability property. This scheme can be incorporated into any biometric verification scheme while maintaining, if not improving, the accuracy of the original biometric system. We demonstrate the results of applying such transforms to face biometric templates and compare the efficiency of our approach with that of the well-known random projection techniques. We also present the results of experimental work on recognition accuracy before and after applying the proposed transform to feature vectors generated by wavelet transforms. These results are based on experiments conducted on a number of well-known face image databases, e.g. the Yale and ORL databases.
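A minimal sketch of one common cancelable-template construction, a key-dependent random (orthonormalized) projection applied to a wavelet-domain feature vector; revoking a template amounts to issuing a new key. The function names, dimensions and demo vectors are illustrative assumptions, not the authors' exact transform.

```python
import numpy as np

def user_projection(key: int, in_dim: int, out_dim: int) -> np.ndarray:
    """Generate a user-specific random orthonormal projection from a secret key.
    Revoking a template = issuing a new key, hence a new projection."""
    rng = np.random.default_rng(key)
    A = rng.standard_normal((in_dim, out_dim))
    Q, _ = np.linalg.qr(A)            # orthonormal columns preserve distances well
    return Q

def protect(template: np.ndarray, key: int, out_dim: int = 64) -> np.ndarray:
    """Transform a (e.g. wavelet-domain) face feature vector into a cancelable template."""
    Q = user_projection(key, template.size, out_dim)
    return template @ Q

# Hypothetical wavelet feature vector for one enrolment image.
rng = np.random.default_rng(7)
feat = rng.standard_normal(256)
stored = protect(feat, key=123456)                                  # what the system stores
fresh = protect(feat + 0.05 * rng.standard_normal(256), key=123456)  # new sample, same user & key
print(np.linalg.norm(stored - fresh))                                # small distance -> match possible
```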
Covariance analysis for evaluating head trackers
NASA Astrophysics Data System (ADS)
Kang, Donghoon
2017-10-01
Existing methods for evaluating the performance of head trackers usually rely on publicly available face databases, which contain facial images and the ground truths of their corresponding head orientations. However, most of the existing publicly available face databases are constructed by assuming that a frontal head orientation can be determined by compelling the person under examination to look straight ahead at the camera on the first video frame. Since nobody can accurately direct one's head toward the camera, this assumption may be unrealistic. Rather than obtaining estimation errors, we present a method for computing the covariance of estimation error rotations to evaluate the reliability of head trackers. As an uncertainty measure of estimators, the Schatten 2-norm of a square root of error covariance (or the algebraic average of relative error angles) can be used. The merit of the proposed method is that it does not disturb the person under examination by asking him to direct his head toward certain directions. Experimental results using real data validate the usefulness of our method.
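One plausible numerical reading of the proposal, sketched below: map each error rotation to its rotation vector, accumulate the 3x3 covariance, and report the Schatten 2-norm (Frobenius norm) of its matrix square root as the uncertainty score. The helper names and the demo data are assumptions; the paper may define the error statistics differently.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def error_covariance(estimated, reference):
    """estimated, reference: lists of 3x3 head-pose rotation matrices.
    Map each error rotation R_err = R_est @ R_ref^T to its rotation vector
    (axis * angle) and accumulate the 3x3 second-moment matrix of those vectors."""
    errs = np.asarray([R.from_matrix(Re @ Rr.T).as_rotvec()
                       for Re, Rr in zip(estimated, reference)])
    return errs.T @ errs / len(errs)

def uncertainty(cov):
    """Schatten 2-norm (Frobenius norm) of the matrix square root of the
    error covariance, used here as a scalar reliability score."""
    w, V = np.linalg.eigh(cov)
    sqrt_cov = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T
    return float(np.linalg.norm(sqrt_cov, "fro"))

# Hypothetical check: small random perturbations around ground-truth poses.
rng = np.random.default_rng(0)
refs = [R.random(random_state=rng).as_matrix() for _ in range(50)]
ests = [R.from_rotvec(0.02 * rng.standard_normal(3)).as_matrix() @ Rr for Rr in refs]
print(uncertainty(error_covariance(ests, refs)))
```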
Modulation of the composite face effect by unintended emotion cues.
Gray, Katie L H; Murphy, Jennifer; Marsh, Jade E; Cook, Richard
2017-04-01
When upper and lower regions from different emotionless faces are aligned to form a facial composite, observers 'fuse' the two halves together, perceptually. The illusory distortion induced by task-irrelevant ('distractor') halves hinders participants' judgements about task-relevant ('target') halves. This composite-face effect reveals a tendency to integrate feature information from disparate regions of intact upright faces, consistent with theories of holistic face processing. However, observers frequently perceive emotion in ostensibly neutral faces, contrary to the intentions of experimenters. This study sought to determine whether this 'perceived emotion' influences the composite-face effect. In our first experiment, we confirmed that the composite effect grows stronger as the strength of distractor emotion increased. Critically, effects of distractor emotion were induced by weak emotion intensities, and were incidental insofar as emotion cues hindered image matching, not emotion labelling per se . In Experiment 2, we found a correlation between the presence of perceived emotion in a set of ostensibly neutral distractor regions sourced from commonly used face databases, and the strength of illusory distortion they induced. In Experiment 3, participants completed a sequential matching composite task in which half of the distractor regions were rated high and low for perceived emotion, respectively. Significantly stronger composite effects were induced by the high-emotion distractor halves. These convergent results suggest that perceived emotion increases the strength of the composite-face effect induced by supposedly emotionless faces. These findings have important implications for the study of holistic face processing in typical and atypical populations.
Starrfelt, Randi; Klargaard, Solja K; Petersen, Anders; Gerlach, Christian
2018-02-01
Recent models suggest that face and word recognition may rely on overlapping cognitive processes and neural regions. In support of this notion, face recognition deficits have been demonstrated in developmental dyslexia. Here we test whether the opposite association can also be found, that is, impaired reading in developmental prosopagnosia. We tested 10 adults with developmental prosopagnosia and 20 matched controls. All participants completed the Cambridge Face Memory Test, the Cambridge Face Perception Test and a face recognition questionnaire used to quantify everyday face recognition experience. Reading was measured in four experimental tasks, testing different levels of letter, word, and text reading: (a) single word reading with words of varying length, (b) vocal response times in single letter and short word naming, (c) recognition of single letters and short words at brief exposure durations (targeting the word superiority effect), and (d) text reading. Participants with developmental prosopagnosia performed strikingly similarly to controls across the four reading tasks. Formal analysis revealed a significant dissociation between word and face recognition, as the difference in performance with faces and words was significantly greater for participants with developmental prosopagnosia than for controls. Adult developmental prosopagnosics read as quickly and fluently as controls, while they are seemingly unable to learn efficient strategies for recognizing faces. We suggest that this is due to the differing demands that face and word recognition place on the perceptual system. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
An Adult Developmental Approach to Perceived Facial Attractiveness and Distinctiveness
Ebner, Natalie C.; Luedicke, Joerg; Voelkle, Manuel C.; Riediger, Michaela; Lin, Tian; Lindenberger, Ulman
2018-01-01
Attractiveness and distinctiveness constitute facial features with high biological and social relevance. Bringing a developmental perspective to research on social-cognitive face perception, we used a large set of faces taken from the FACES Lifespan Database to examine effects of face and perceiver characteristics on subjective evaluations of attractiveness and distinctiveness in young (20–31 years), middle-aged (44–55 years), and older (70–81 years) men and women. We report novel findings supporting variations by face and perceiver age, in interaction with gender and emotion: although older and middle-aged compared to young perceivers generally rated faces of all ages as more attractive, young perceivers gave relatively higher attractiveness ratings to young compared to middle-aged and older faces. Controlling for variations in attractiveness, older compared to young faces were viewed as more distinctive by young and middle-aged perceivers. Age affected attractiveness more negatively for female than male faces. Furthermore, happy faces were rated as most attractive, while disgusted faces were rated as least attractive, particularly so by middle-aged and older perceivers and for young and female faces. Perceivers largely agreed on distinctiveness ratings for neutral and happy emotions, but older and middle-aged compared to young perceivers rated faces displaying negative emotions as more distinctive. These findings underscore the importance of a lifespan perspective on perception of facial characteristics and suggest possible effects of age on goal-directed perception, social motivation, and in-group bias. This publication makes available picture-specific normative data for experimental stimulus selection. PMID:29867620
Prevention and early intervention to improve mental health in higher education students: a review.
Reavley, Nicola; Jorm, Anthony F
2010-05-01
The age at which most young people are in higher education is also the age of peak onset for mental and substance use disorders, with these having their first onset before age 24 in 75% of cases. In most developed countries, over 50% of young people are in higher education. To review the evidence for prevention and early intervention in mental health problems in higher education students. The review was limited to interventions targeted to anxiety, depression and alcohol misuse. Interventions to review were identified by searching PubMed, PsycINFO and the Cochrane Database of Systematic Reviews. Interventions were included if they were designed to specifically prevent or intervene early in the general (non-health professional) higher education student population, in one or more of the following areas: anxiety, depression or alcohol misuse symptoms, mental health literacy, stigma and one or more behavioural outcomes. For interventions to prevent or intervene early for alcohol misuse, evidence of effectiveness is strongest for brief motivational interventions and for personalized normative interventions delivered using computers or in individual face-to-face sessions. Few interventions to prevent or intervene early with depression or anxiety were identified. These were mostly face-to-face, cognitive-behavioural/skill-based interventions. One social marketing intervention to raise awareness of depression and treatments showed some evidence of effectiveness. There is very limited evidence that interventions are effective in preventing or intervening early with depression and anxiety disorders in higher education students. Further studies, possibly involving interventions that have shown promise in other populations, are needed.
Content Based Image Retrieval based on Wavelet Transform coefficients distribution
Lamard, Mathieu; Cazuguel, Guy; Quellec, Gwénolé; Bekri, Lynda; Roux, Christian; Cochener, Béatrice
2007-01-01
In this paper we propose a content based image retrieval method for diagnosis aid in medical fields. We characterize images without extracting significant features, by building signatures from the distribution of wavelet transform coefficients. Retrieval is carried out by computing signature distances between the query and database images. Several signatures are proposed; they use a model of the wavelet coefficient distribution. To enhance results, a weighted distance between signatures is used and an adapted wavelet basis is proposed. Retrieval efficiency is given for different databases including a diabetic retinopathy, a mammography and a face database. Results are promising: the retrieval efficiency is higher than 95% for some cases using an optimization process. PMID:18003013
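A hedged PyWavelets sketch of one simple way to build such signatures: per-subband coefficient histograms concatenated into a vector, compared with a weighted L1 distance. The wavelet, level count, bin range (which assumes images normalised to [0, 1]) and weighting are illustrative choices, not the paper's model-based signatures.

```python
import numpy as np
import pywt

def wavelet_signature(image, wavelet="db2", levels=3, bins=16):
    """Build a signature from the distribution of wavelet coefficients:
    one normalised histogram per detail subband, concatenated."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    sig = []
    for detail in coeffs[1:]:                 # (cH, cV, cD) per level
        for band in detail:
            hist, _ = np.histogram(band, bins=bins, range=(-1.0, 1.0), density=True)
            sig.append(hist)
    return np.concatenate(sig)

def weighted_distance(sig_q, sig_db, weights=None):
    """Weighted L1 distance between query and database signatures."""
    d = np.abs(sig_q - sig_db)
    return float(np.sum(d if weights is None else weights * d))

# Hypothetical query against a tiny two-image "database": rank by ascending distance.
rng = np.random.default_rng(0)
query = rng.random((64, 64))
db = [rng.random((64, 64)) for _ in range(2)]
sig_q = wavelet_signature(query)
print([weighted_distance(sig_q, wavelet_signature(im)) for im in db])
```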
Integrating Scientific Array Processing into Standard SQL
NASA Astrophysics Data System (ADS)
Misev, Dimitar; Bachhuber, Johannes; Baumann, Peter
2014-05-01
We live in a time that is dominated by data. Data storage is cheap and more applications than ever accrue vast amounts of data. Storing the emerging multidimensional data sets efficiently, however, and allowing them to be queried by their inherent structure, is a challenge many databases have to face today. Despite the fact that multidimensional array data is almost always linked to additional, non-array information, array databases have mostly developed separately from relational systems, resulting in a disparity between the two database categories. The current SQL standard and SQL DBMSs support arrays - and, in an extension, also multidimensional arrays - but do so in a very rudimentary and inefficient way. This poster demonstrates the practicality of an SQL extension for array processing, implemented in a proof-of-concept multi-faceted system that manages a federation of array and relational database systems, providing transparent, efficient and scalable access to the heterogeneous data in them.
Choosing Assessments that Matter
ERIC Educational Resources Information Center
Abilock, Debbie, Ed.
2007-01-01
Professionally, school librarians are faced with an explosion of choices--search engines, online catalogs, media types, subscription databases, and Web tools--all requiring scrutiny, evaluation, and selection. In turn, this support "stuff" forms a basis for making additional choices about how and what they teach and what they assess. Whereas once…
Search Engines for Tomorrow's Scholars
ERIC Educational Resources Information Center
Fagan, Jody Condit
2011-01-01
Today's scholars face an outstanding array of choices when choosing search tools: Google Scholar, discipline-specific abstracts and index databases, library discovery tools, and more recently, Microsoft's re-launch of their academic search tool, now dubbed Microsoft Academic Search. What are these tools' strengths for the emerging needs of…
Seubert, Liza J; Whitelaw, Kerry; Hattingh, Laetitia; Watson, Margaret C; Clifford, Rhonda M
2017-12-13
Easy access to effective over-the-counter (OTC) treatments allows self-management of some conditions, however inappropriate or incorrect supply or use of OTC medicines can cause harm. Pharmacy personnel should support consumers in their health-seeking behaviour by utilising effective communication skills underpinned by clinical knowledge. To identify interventions targeted towards improving communication between consumers and pharmacy personnel during OTC consultations in the community pharmacy setting. Systematic review and narrative analysis. Databases searched were MEDLINE, EMBASE, Psycinfo, Cochrane Central Register and Cochrane Database of Systematic Reviews for literature published between 2000 and 30 October 2014, as well as reference lists of included articles. The search was re-run on 18 January 2016 and 25 September 2017 to maximise the currency. Two reviewers independently screened retrieved articles for inclusion, assessed study quality and extracted data. Full publications of intervention studies were included. Participants were community pharmacy personnel and/or consumers involved in OTC consultations. Interventions which aimed to improve communication during OTC consultations in the community pharmacy setting were included if they involved a direct measurable communication outcome. Studies reporting attitudes and measures not quantifiable were excluded. The protocol was published on Prospero Database of Systematic Reviews. Of 4978 records identified, 11 studies met inclusion criteria. Interventions evaluated were: face-to-face training sessions (n = 10); role-plays (n = 9); a software decision making program (n = 1); and simulated patient (SP) visits followed by immediate feedback (n = 1). Outcomes were measured using: SP methodology (n = 10) and a survey (n = 1), with most (n = 10) reporting a level of improvement in some communication behaviours. Empirical evaluation of interventions using active learning techniques such as face-to-face training with role-play can improve some communication skills. However interventions that are not fully described limit the ability for replication and/or generalisability. This review identified interventions targeting pharmacy personnel. Future interventions to improve communication should consider the consumer's role in OTC consultations. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
A comparative study of six European databases of medically oriented Web resources.
Abad García, Francisca; González Teruel, Aurora; Bayo Calduch, Patricia; de Ramón Frias, Rosa; Castillo Blasco, Lourdes
2005-10-01
The paper describes six European medically oriented databases of Web resources, pertaining to five quality-controlled subject gateways, and compares their performance. The characteristics, coverage, procedure for selecting Web resources, record structure, searching possibilities, and existence of user assistance were described for each database. Performance indicators for each database were obtained by means of searches carried out using the key words, "myocardial infarction." Most of the databases originated in the 1990s in an academic or library context and include all types of Web resources of an international nature. Five databases use Medical Subject Headings. The number of fields per record varies between three and nineteen. The language of the search interfaces is mostly English, and some of them allow searches in other languages. In some databases, the search can be extended to Pubmed. Organizing Medical Networked Information, Catalogue et Index des Sites Médicaux Francophones, and Diseases, Disorders and Related Topics produced the best results. The usefulness of these databases as quick reference resources is clear. In addition, their lack of content overlap means that, for the user, they complement each other. Their continued survival faces three challenges: the instability of the Internet, maintenance costs, and lack of use in spite of their potential usefulness.
Robinson, Amanda K; Plaut, David C; Behrmann, Marlene
2017-07-01
Words and faces have vastly different visual properties, but increasing evidence suggests that word and face processing engage overlapping distributed networks. For instance, fMRI studies have shown overlapping activity for face and word processing in the fusiform gyrus despite well-characterized lateralization of these objects to the left and right hemispheres, respectively. To investigate whether face and word perception influences perception of the other stimulus class and elucidate the mechanisms underlying such interactions, we presented images using rapid serial visual presentations. Across 3 experiments, participants discriminated 2 face, word, and glasses targets (T1 and T2) embedded in a stream of images. As expected, T2 discrimination was impaired when it followed T1 by 200 to 300 ms relative to longer intertarget lags, the so-called attentional blink. Interestingly, T2 discrimination accuracy was significantly reduced at short intertarget lags when a face was followed by a word (face-word) compared with glasses-word and word-word combinations, indicating that face processing interfered with word perception. The reverse effect was not observed; that is, word-face performance was no different than the other object combinations. EEG results indicated the left N170 to T1 was correlated with the word decrement for face-word trials, but not for other object combinations. Taken together, the results suggest face processing interferes with word processing, providing evidence for overlapping neural mechanisms of these 2 object types. Furthermore, asymmetrical face-word interference points to greater overlap of face and word representations in the left than the right hemisphere. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Automatic temporal segment detection via bilateral long short-term memory recurrent neural networks
NASA Astrophysics Data System (ADS)
Sun, Bo; Cao, Siming; He, Jun; Yu, Lejun; Li, Liandong
2017-03-01
Constrained by the physiology, the temporal factors associated with human behavior, irrespective of facial movement or body gesture, are described by four phases: neutral, onset, apex, and offset. Although they may benefit related recognition tasks, it is not easy to accurately detect such temporal segments. An automatic temporal segment detection framework using bilateral long short-term memory recurrent neural networks (BLSTM-RNN) to learn high-level temporal-spatial features, which synthesizes the local and global temporal-spatial information more efficiently, is presented. The framework is evaluated in detail over the face and body database (FABO). The comparison shows that the proposed framework outperforms state-of-the-art methods for solving the problem of temporal segment detection.
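A compact PyTorch sketch of a per-frame phase classifier built on a bidirectional LSTM, which is the usual reading of "bilateral LSTM"; the feature dimension, hidden size and layer count are placeholders, and the FABO feature extraction pipeline is not reproduced.

```python
import torch
import torch.nn as nn

class TemporalSegmentBLSTM(nn.Module):
    """Per-frame classifier over the four temporal phases (neutral, onset, apex,
    offset) using a bidirectional LSTM, so each frame's label can draw on both
    past and future context within the clip."""
    def __init__(self, feat_dim=136, hidden=128, n_phases=4):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_phases)

    def forward(self, x):                 # x: (batch, frames, feat_dim)
        out, _ = self.lstm(x)             # (batch, frames, 2 * hidden)
        return self.head(out)             # per-frame phase logits

# Hypothetical batch: 8 clips, 60 frames each, 136-D spatio-temporal features.
model = TemporalSegmentBLSTM()
logits = model(torch.randn(8, 60, 136))
print(logits.shape)                       # torch.Size([8, 60, 4])
```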
On Hunting Animals of the Biometric Menagerie for Online Signature
Houmani, Nesma; Garcia-Salicetti, Sonia
2016-01-01
Individuals behave differently with regard to biometric authentication systems. This fact was formalized in the literature by the concept of the Biometric Menagerie, which defines and labels user groups with animal names in order to reflect their characteristics with respect to biometric systems. This concept was illustrated for the face, fingerprint, iris, and speech modalities. The present study extends the Biometric Menagerie to online signatures by proposing a novel methodology that ties specific quality measures for signatures to categories of the Biometric Menagerie. Such measures are combined to automatically retrieve writer categories of the extended version of the Biometric Menagerie. Performance analysis with different types of classifiers shows the pertinence of our approach on the well-known MCYT-100 database. PMID:27054836
Borderline Intellectual Functioning: A Systematic Literature Review
ERIC Educational Resources Information Center
Peltopuro, Minna; Ahonen, Timo; Kaartinen, Jukka; Seppälä, Heikki; Närhi, Vesa
2014-01-01
The literature related to people with borderline intellectual functioning (BIF) was systematically reviewed in order to summarize the present knowledge. Database searches yielded 1,726 citations, and 49 studies were included in the review. People with BIF face a variety of hardships in life, including neurocognitive, social, and mental health…
Trials and Triumphs of Expanded Extension Programs.
ERIC Educational Resources Information Center
Leavengood, Scott; Love, Bob
1998-01-01
Oregon extension faced challenges in presenting programs in the wood products industry. Several traditional tactics, revised to suit a new audience, have proved successful: personal coaching, building partnerships, and providing a high level of service. Newer methods, such as database marketing and distance learning, are also proving to be…
Key questions dominating contemporary ecological research and management concern interactions between biodiversity, ecosystem processes, and ecosystem services provision in the face of global change. This is particularly salient for freshwater biodiversity and in the context of r...
Beach Advisory and Closing Online Notification (BEACON) system
Beach Advisory and Closing Online Notification system (BEACON) is a collection of state and local data reported to EPA about beach closings and advisories. BEACON is the public-facing query interface to the Program tracking, Beach Advisories, Water quality standards, and Nutrients database (PRAWN), which tracks beach closing and advisory information.
School Gardens Enhance Academic Performance and Dietary Outcomes in Children
ERIC Educational Resources Information Center
Berezowitz, Claire K.; Bontrager Yoder, Andrea B.; Schoeller, Dale A.
2015-01-01
Background: Schools face increasing demands to provide education on healthy living and improve core academic performance. Although these appear to be competing concerns, they may interact beneficially. This article focuses on school garden programs and their effects on students' academic and dietary outcomes. Methods: Database searches in CABI,…
Vicarious Social Touch Biases Gazing at Faces and Facial Emotions.
Schirmer, Annett; Ng, Tabitha; Ebstein, Richard P
2018-02-01
Research has suggested that interpersonal touch promotes social processing and other-concern, and that women may respond to it more sensitively than men. In this study, we asked whether this phenomenon would extend to third-party observers who experience touch vicariously. In an eye-tracking experiment, participants (N = 64, 32 men and 32 women) viewed prime and target images with the intention of remembering them. Primes comprised line drawings of dyadic interactions with and without touch. Targets comprised two faces shown side-by-side, with one being neutral and the other being happy or sad. Analysis of prime fixations revealed that faces in touch interactions attracted longer gazing than faces in no-touch interactions. In addition, touch enhanced gazing at the area of touch in women but not men. Analysis of target fixations revealed that touch priming increased looking at both faces immediately after target onset, and subsequently, at the emotional face in the pair. Sex differences in target processing were nonsignificant. Together, the present results imply that vicarious touch biases visual attention to faces and promotes emotion sensitivity. In addition, they suggest that, compared with men, women are more aware of tactile exchanges in their environment. As such, vicarious touch appears to share important qualities with actual physical touch. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Lundqvist, Daniel; Svärd, Joakim; Michelgård Palmquist, Åsa; Fischer, Håkan; Svenningsson, Per
2017-09-01
The literature on emotional processing in Parkinson's disease (PD) patients shows mixed results. This may be because of various methodological and/or patient-related differences, such as failing to adjust for cognitive functioning, depression, and/or mood. In the current study, we tested PD patients and healthy controls (HCs) using emotional stimuli across a variety of tasks, including visual search, short-term memory (STM), categorical perception, and emotional stimulus rating. The PD and HC groups were matched on cognitive ability, depression, and mood. We also explored possible relationships between task results and antiparkinsonian treatment effects, as measured by levodopa equivalent dosages (LED), in the PD group. The results show that PD patients use a larger emotional range compared with HCs when reporting their impression of emotional faces on rated emotional valence, arousal, and potency. The results also show that dopaminergic therapy was correlated with stimulus rating results such that PD patients with higher LED scores rated negative faces as less arousing, less negative, and less powerful. Finally, results also show that PD patients display a general slowing effect in the visual search tasks compared with HCs, indicating overall slowed responses. There were no group differences observed in the STM or categorical perception tasks. Our results indicate a relationship between emotional responses, PD, and dopaminergic therapy, in which PD per se is associated with stronger emotional responses, whereas LED levels are negatively correlated with the strength of emotional responses. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Face recognition via sparse representation of SIFT feature on hexagonal-sampling image
NASA Astrophysics Data System (ADS)
Zhang, Daming; Zhang, Xueyong; Li, Lu; Liu, Huayong
2018-04-01
This paper investigates a face recognition approach based on the Scale Invariant Feature Transform (SIFT) feature and sparse representation. The approach takes advantage of SIFT, which is a local feature rather than the holistic features used in the classical Sparse Representation based Classification (SRC) algorithm, and possesses strong robustness to expression, pose and illumination variations. Since hexagonal images have more inherent merits than square images for making the recognition process more efficient, we extract SIFT keypoints in hexagonally sampled images. Instead of matching SIFT features directly, the sparse representation of each SIFT keypoint is first computed with respect to the constructed dictionary; secondly these sparse vectors are quantized against the dictionary; finally each face image is represented by a histogram and these so-called Bag-of-Words vectors are classified by an SVM. Due to the use of local features, the proposed method achieves better results even when the number of training samples is small. In the experiments, the proposed method gave higher face recognition rates than other methods on the ORL and Yale B face databases; the effectiveness of the hexagonal sampling in the proposed method is also verified.
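A hedged OpenCV/scikit-learn sketch of this pipeline's flavour: SIFT descriptors are quantised against a codebook and pooled into a Bag-of-Words histogram classified by a linear SVM. Note the substitutions: a k-means codebook stands in for the paper's sparse-coding dictionary, and the hexagonal resampling step is omitted; the commented training lines assume hypothetical train_images and train_labels.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

sift = cv2.SIFT_create()

def sift_descriptors(gray):
    """Detect SIFT keypoints and return their 128-D descriptors (possibly empty)."""
    _, desc = sift.detectAndCompute(gray, None)
    return desc if desc is not None else np.zeros((0, 128), np.float32)

def bow_histogram(desc, codebook: KMeans):
    """Quantise descriptors against the codebook and return a normalised
    Bag-of-Words histogram for the image."""
    k = codebook.n_clusters
    if len(desc) == 0:
        return np.zeros(k)
    words = codebook.predict(desc.astype(np.float64))
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

# Hypothetical training flow (train_images: grayscale uint8 arrays, train_labels: ids):
# all_desc = np.vstack([sift_descriptors(im) for im in train_images])
# codebook = KMeans(n_clusters=256, random_state=0).fit(all_desc)
# X = np.stack([bow_histogram(sift_descriptors(im), codebook) for im in train_images])
# clf = LinearSVC().fit(X, train_labels)
```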
Error Rates in Users of Automatic Face Recognition Software
White, David; Dunn, James D.; Schmid, Alexandra C.; Kemp, Richard I.
2015-01-01
In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated ‘candidate lists’ selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared performance of student participants to trained passport officers–who use the system in their daily work–and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced “facial examiners” outperformed these groups by 20 percentage points. We conclude that human performance curtails accuracy of face recognition systems–potentially reducing benchmark estimates by 50% in operational settings. Mere practise does not attenuate these limits, but superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems. PMID:26465631
3D face recognition based on multiple keypoint descriptors and sparse representation.
Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei
2014-01-01
Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm.
Pea, Roy; Nass, Clifford; Meheula, Lyn; Rance, Marcus; Kumar, Aman; Bamford, Holden; Nass, Matthew; Simha, Aneesh; Stillerman, Benjamin; Yang, Steven; Zhou, Michael
2012-03-01
An online survey of 3,461 North American girls ages 8-12 conducted in the summer of 2010 through Discovery Girls magazine examined the relationships between social well-being and young girls' media use--including video, video games, music listening, reading/homework, e-mailing/posting on social media sites, texting/instant messaging, and talking on phones/video chatting--and face-to-face communication. This study introduced both a more granular measure of media multitasking and a new comparative measure of media use versus time spent in face-to-face communication. Regression analyses indicated that negative social well-being was positively associated with levels of uses of media that are centrally about interpersonal interaction (e.g., phone, online communication) as well as uses of media that are not (e.g., video, music, and reading). Video use was particularly strongly associated with negative social well-being indicators. Media multitasking was also associated with negative social indicators. Conversely, face-to-face communication was strongly associated with positive social well-being. Cell phone ownership and having a television or computer in one's room had little direct association with children's socioemotional well-being. We hypothesize possible causes for these relationships, call for research designs to address causality, and outline possible implications of such findings for the social well-being of younger adolescents. PsycINFO Database Record (c) 2012 APA, all rights reserved.
Neural Correlates of Covert Face Processing: fMRI Evidence from a Prosopagnosic Patient
Liu, Jiangang; Wang, Meiyun; Shi, Xiaohong; Feng, Lu; Li, Ling; Thacker, Justine Marie; Tian, Jie; Shi, Dapeng; Lee, Kang
2014-01-01
Brains can perceive or recognize a face even though we are subjectively unaware of the existence of that face. However, the exact neural correlates of such covert face processing remain unknown. Here, we compared the fMRI activities between a prosopagnosic patient and normal controls when they saw famous and unfamiliar faces. When compared with objects, the patient showed greater activation to famous faces in the fusiform face area (FFA) though he could not overtly recognize those faces. In contrast, the controls showed greater activation to both famous and unfamiliar faces in the FFA. Compared with unfamiliar faces, famous faces activated the controls', but not the patient's lateral prefrontal cortex (LPFC) known to be involved in familiar face recognition. In contrast, the patient showed greater activation in the bilateral medial frontal gyrus (MeFG). Functional connectivity analyses revealed that the patient's right middle fusiform gyrus (FG) showed enhanced connectivity to the MeFG, whereas the controls' middle FG showed enhanced connectivity to the LPFC. These findings suggest that the FFA may be involved in both covert and overt face recognition. The patient's impairment in overt face recognition may be due to the absence of the coupling between the right FG and the LPFC. PMID:23448870
Elguoshy, Amr; Hirao, Yoshitoshi; Xu, Bo; Saito, Suguru; Quadery, Ali F; Yamamoto, Keiko; Mitsui, Toshiaki; Yamamoto, Tadashi
2017-12-01
In an attempt to complete the human proteome project (HPP), the Chromosome-Centric Human Proteome Project (C-HPP) launched the journey of missing protein (MP) investigation in 2012. However, 2579 and 572 protein entries in neXtProt (2017-1) are still considered as missing and uncertain proteins, respectively. Thus, in this study, we proposed a pipeline to analyze, identify, and validate human missing and uncertain proteins in open-access transcriptomics and proteomics databases. Analysis of RNA expression patterns for missing proteins in the Human Protein Atlas showed that 28% of them, such as Olfactory receptor 1I1 (O60431), had no RNA expression, suggesting the necessity to consider uncommon tissues for transcriptomic and proteomic studies. Interestingly, 21% had elevated expression levels in a particular tissue (tissue-enriched proteins), indicating the importance of targeting such proteins in their elevated tissues. Additionally, the analysis of RNA expression levels for missing proteins showed that 95% had no or low expression levels (0-10 transcripts per million), indicating that low abundance is one of the major obstacles facing the detection of missing proteins. Moreover, missing proteins are predicted to generate fewer predicted unique tryptic peptides than the identified proteins. Searching for these predicted unique tryptic peptides that correspond to missing and uncertain proteins in the experimental peptide lists of open-access MS-based databases (PA, GPM) resulted in the detection of 402 missing and 19 uncertain proteins with at least two unique peptides (≥9 aa) at <(5 × 10⁻⁴)% FDR. Finally, matching the native spectra for the experimentally detected peptides with their SRMAtlas synthetic counterparts at three transition sources (QQQ, QTOF, QTRAP) gave us an opportunity to validate 41 missing proteins by ≥2 proteotypic peptides.
Access to digital library databases in higher education: design problems and infrastructural gaps.
Oswal, Sushil K
2014-01-01
After defining accessibility and usability, the author offers a broad survey of the research studies on digital content databases which have thus far primarily depended on data drawn from studies conducted by sighted researchers with non-disabled users employing screen readers and low vision devices. This article aims at producing a detailed description of the difficulties confronted by blind screen reader users with online library databases which now hold most of the academic, peer-reviewed journal and periodical content essential for research and teaching in higher education. The approach taken here is borrowed from descriptive ethnography which allows the author to create a complete picture of the accessibility and usability problems faced by an experienced academic user of digital library databases and screen readers. The author provides a detailed analysis of the different aspects of accessibility issues in digital databases under several headers with a special focus on full-text PDF files. The author emphasizes that long-term studies with actual, blind screen reader users employing both qualitative and computerized research tools can yield meaningful data for the designers and developers to improve these databases to a level that they begin to provide an equal access to the blind.
Human sex differences in emotional processing of own-race and other-race faces.
Ran, Guangming; Chen, Xu; Pan, Yangu
2014-06-18
There is evidence that women and men show differences in the perception of affective facial expressions. However, none of the previous studies directly investigated sex differences in emotional processing of own-race and other-race faces. The current study addressed this issue using high time resolution event-related potential techniques. In total, data from 25 participants (13 women and 12 men) were analyzed. It was found that women showed increased N170 amplitudes to negative White faces compared with negative Chinese faces over the right hemisphere electrodes. This result suggests that women show enhanced sensitivity to other-race faces showing negative emotions (fear or disgust), which may contribute toward evolution. However, the current data showed that men had increased N170 amplitudes to happy Chinese versus happy White faces over the left hemisphere electrodes, indicating that men show enhanced sensitivity to own-race faces showing positive emotions (happiness). In this respect, men might use past pleasant emotional experiences to boost recognition of own-race faces.
van Ballegooijen, Wouter; Cuijpers, Pim; van Straten, Annemieke; Karyotaki, Eirini; Andersson, Gerhard; Smit, Jan H.; Riper, Heleen
2014-01-01
Background Internet-based cognitive behavioural therapy (iCBT) is an effective and acceptable treatment for depression, especially when it includes guidance, but its treatment adherence has not yet been systematically studied. We conducted a meta-analysis, comparing the adherence to guided iCBT with the adherence to individual face-to-face CBT. Methods Studies were selected from a database of trials that investigate treatment for adult depression (see www.evidencebasedpsychotherapies.org), updated to January 2013. We identified 24 studies describing 26 treatment conditions (14 face-to-face CBT, 12 guided iCBT), by means of these inclusion criteria: targeting depressed adults, no comorbid somatic disorder or substance abuse, community recruitment, published in the year 2000 or later. The main outcome measure was the percentage of completed sessions. We also coded the percentage of treatment completers (separately coding for 100% or at least 80% of treatment completed). Results We did not find studies that compared guided iCBT and face-to-face CBT in a single trial that met our inclusion criteria. Face-to-face CBT treatments ranged from 12 to 28 sessions, guided iCBT interventions consisted of 5 to 9 sessions. Participants in face-to-face CBT completed on average 83.9% of their treatment, which did not differ significantly from participants in guided iCBT (80.8%, P = .59). The percentage of completers (total intervention) was significantly higher in face-to-face CBT (84.7%) than in guided iCBT (65.1%, P < .001), as was the percentage of completers of 80% or more of the intervention (face-to-face CBT: 85.2%, guided iCBT: 67.5%, P = .003). Non-completers of face-to-face CBT completed on average 24.5% of their treatment, while non-completers of guided iCBT completed on average 42.1% of their treatment. Conclusion We did not find studies that compared guided iCBT and face-to-face CBT in a single trial. Adherence to guided iCBT appears to be adequate and could be equal to adherence to face-to-face CBT. PMID:25029507
Ahern, Elayne; Kinsella, Stephen; Semkovska, Maria
2018-02-01
Leading cause of disability worldwide, depression is the most prevalent mental disorder with growing societal costs. As mental health services demand often outweighs provision, accessible treatment options are needed. Our systematic review and meta-analysis evaluated the clinical efficacy and economic evidence for the use of online cognitive behavioral therapy (oCBT) as an accessible treatment solution for depression. Areas covered: Electronic databases were searched for controlled trials published between 2006 and 2016. Of the reviewed 3,324 studies, 29 met the criteria for inclusion in the efficacy meta-analysis. The systematic review identified five oCBT economic evaluations. Therapist-supported oCBT was equivalent to face-to-face CBT at improving depressive symptoms and superior to treatment-as-usual, waitlist control, and attention control. Depression severity, number of sessions, or support did not affect efficacy. From a healthcare provider perspective, oCBT tended to show greater costs with greater benefits in the short term, relative to comparator treatments. Expert commentary: Although efficacious, further economic evidence is required to support the provision of oCBT as a cost-effective treatment for depression. Economic evaluations that incorporate a societal perspective will better account for direct and indirect treatment costs. Nevertheless, oCBT shows promise of effectively improving depressive symptoms, considering limited mental healthcare resources.
[Covering stoma in anterior rectum resection with TME for rectal cancer in elderly patients].
Cirocchi, Roberto; Grassi, Veronica; Barillaro, Ivan; Cacurri, Alban; Koltraka, Bledar; Coccette, Marco; Sciannameo, Francesco
2010-01-01
The aim of our study is to evaluate the advisability of a covering stoma in Anterior Rectum Resection with TME in elderly patients. A search of both the Ministry of Health and Terni Hospital databases was conducted to collect information about patients with rectal tumors. This search allowed us to identify the number of patients diagnosed with rectal cancer, the type of intervention, and the average hospitalization time. Between January 1997 and June 2008, 209 patients underwent surgery at Terni Hospital's General and Emergency Surgical Clinic. An Anterior Rectum Resection with TME was performed in 135 of these patients (64.59%). The average hospitalization time of geriatric patients does not show significant differences compared with that of younger patients. An age-cohort analysis was performed among patients who received a stoma and those who did not. The former were further split between those who underwent ileostomy and those who underwent colostomy. While ileostomy patients face a similar hospitalization time across all age cohorts, geriatric colostomy patients face longer hospitalizations than younger patients. Patients subject to Anterior Rectum Resection show no meaningful differences, in terms of hospitalization time, across age cohorts. In geriatric patients the construction of a covering stoma resulted in longer hospitalizations only when a loop colostomy was performed, as opposed to a loop ileostomy.
A unified classifier for robust face recognition based on combining multiple subspace algorithms
NASA Astrophysics Data System (ADS)
Ijaz Bajwa, Usama; Ahmad Taj, Imtiaz; Waqas Anwar, Muhammad
2012-10-01
Face recognition, being the fastest growing biometric technology, has expanded manifold in the last few years. Various new algorithms and commercial systems have been proposed and developed. However, none of the proposed or developed algorithms is a complete solution, because it may work very well on one set of images with, say, illumination changes but may not work properly on another set of image variations, such as expression variations. This study is motivated by the fact that no single classifier can claim to show generally better performance against all facial image variations. To overcome this shortcoming and achieve generality, combining several classifiers using various strategies has been studied extensively, also addressing the question of the suitability of any given classifier for this task. The study is based on the outcome of a comprehensive comparative analysis conducted on a combination of six subspace extraction algorithms and four distance metrics on three facial databases. The analysis leads to the selection of the most suitable classifiers, which perform better on one task or the other. These classifiers are then combined into an ensemble classifier by two different strategies, weighted sum and re-ranking. The results of the ensemble classifier show that these strategies can be effectively used to construct a single classifier that can successfully handle varying facial image conditions of illumination, aging and facial expressions.
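A minimal sketch of the weighted-sum fusion strategy mentioned above: per-classifier distance scores are min-max normalised and combined with fixed weights before picking the nearest gallery identity. The weights and score values are illustrative placeholders, and the re-ranking strategy is not shown.

```python
import numpy as np

def min_max_normalise(scores):
    """Map raw distance scores to [0, 1] so different classifiers are comparable."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def weighted_sum_fusion(score_lists, weights):
    """score_lists: one distance vector per base classifier (lower = better),
    all computed over the same gallery. Returns the gallery index with the
    smallest weighted combined distance."""
    combined = sum(w * min_max_normalise(s) for w, s in zip(weights, score_lists))
    return int(np.argmin(combined))

# Hypothetical distances from three subspace classifiers over a 5-person gallery.
pca_d = [0.9, 0.4, 0.7, 0.8, 0.6]
lda_d = [0.8, 0.5, 0.3, 0.9, 0.7]
ica_d = [0.7, 0.2, 0.6, 0.8, 0.9]
print(weighted_sum_fusion([pca_d, lda_d, ica_d], weights=[0.3, 0.4, 0.3]))
```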
Cultural differences in gaze and emotion recognition: Americans contrast more than Chinese.
Stanley, Jennifer Tehan; Zhang, Xin; Fung, Helene H; Isaacowitz, Derek M
2013-02-01
We investigated the influence of contextual expressions on emotion recognition accuracy and gaze patterns among American and Chinese participants. We expected Chinese participants would be more influenced by, and attend more to, contextual information than Americans. Consistent with our hypothesis, Americans were more accurate than Chinese participants at recognizing emotions embedded in the context of other emotional expressions. Eye-tracking data suggest that, for some emotions, Americans attended more to the target faces, and they made more gaze transitions to the target face than Chinese. For all emotions except anger and disgust, Americans appeared to use more of a contrasting strategy where each face was individually contrasted with the target face, compared with Chinese who used less of a contrasting strategy. Both cultures were influenced by contextual information, although the benefit of contextual information depended upon the perceptual dissimilarity of the contextual emotions to the target emotion and the gaze pattern employed during the recognition task. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Correlation based efficient face recognition and color change detection
NASA Astrophysics Data System (ADS)
Elbouz, M.; Alfalou, A.; Brosseau, C.; Alam, M. S.; Qasmi, S.
2013-01-01
Identifying the human face via correlation is a topic attracting widespread interest. At the heart of this technique lies the comparison of an unknown target image to a known reference database of images. However, the color information in the target image remains notoriously difficult to interpret. In this paper, we report a new technique which: (i) is robust against illumination change, (ii) offers discrimination ability to detect color change between faces having similar shape, and (iii) is specifically designed to detect red colored stains (i.e. facial bleeding). We adopt the Vanderlugt correlator (VLC) architecture with a segmented phase filter and we decompose the color target image using normalized red, green, and blue (RGB), and hue, saturation, and value (HSV) scales. We propose a new strategy to effectively utilize color information in signatures for further increasing the discrimination ability. The proposed algorithm has been found to be very efficient for discriminating face subjects with different skin colors, and those having color stains in different areas of the facial image.
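As a rough digital analogue of the correlation step, the sketch below uses plain FFT-based cross-correlation on each color channel; it is an assumption-laden stand-in and does not reproduce the VLC architecture, the segmented phase filter, or the HSV signatures described above. It only illustrates how per-channel correlation peaks could be combined so that color differences between otherwise similar faces lower the match score.

```python
import numpy as np

def correlation_peak(reference, target):
    """Peak of the circular cross-correlation of two same-size grayscale images,
    computed in the Fourier domain (a digital stand-in for an optical correlator)."""
    ref = reference - reference.mean()
    tgt = target - target.mean()
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(tgt))).real
    return corr.max() / (np.linalg.norm(ref) * np.linalg.norm(tgt) + 1e-12)

def multichannel_similarity(ref_rgb, tgt_rgb):
    """Average the per-channel correlation peaks so that color changes
    (e.g., red stains) reduce the overall similarity."""
    peaks = [correlation_peak(ref_rgb[..., c], tgt_rgb[..., c]) for c in range(3)]
    return float(np.mean(peaks))

# Hypothetical usage with random 64x64 RGB images.
rng = np.random.default_rng(1)
ref = rng.random((64, 64, 3))
print(multichannel_similarity(ref, ref))                       # ~1.0 for identical images
print(multichannel_similarity(ref, rng.random((64, 64, 3))))   # lower for a mismatch
```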
A comparative analysis of family adaptability and cohesion ratings among traumatized urban youth.
Bellantuono, Alessandro; Saigh, Philip A; Durham, Katherine; Dekis, Constance; Hackler, Dusty; McGuire, Leah A; Yasik, Anastasia E; Halamandaris, Phill V; Oberfield, Richard A
2018-03-01
Given the need to identify psychological risk factors among traumatized youth, this study examined the family functioning of traumatized youth with or without PTSD and a nonclinical sample. The Family Adaptability and Cohesion Evaluation Scales, second edition (FACES II; Olson, Portner, & Bell, 1982), scores of youth with posttraumatic stress disorder (PTSD; n = 29) were compared with the scores of trauma-exposed youth without PTSD (n = 48) and a nontraumatized comparison group (n = 44). Child diagnostic interviews determined that all participants were free of major comorbid disorders. The FACES II scores of the participants with PTSD were not significantly different from the scores of trauma-exposed youth without PTSD and the nontraumatized comparison group. FACES II scores were also not significantly different between the trauma-exposed youth without PTSD and the nontraumatized comparison group. PTSD and trauma-exposure without PTSD were not associated with variations in the perception of family functioning as measured by the FACES II. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
High precision automated face localization in thermal images: oral cancer dataset as test case
NASA Astrophysics Data System (ADS)
Chakraborty, M.; Raman, S. K.; Mukhopadhyay, S.; Patsa, S.; Anjum, N.; Ray, J. G.
2017-02-01
Automated face detection is the pivotal step in computer vision aided facial medical diagnosis and biometrics. This paper presents an automatic, subject-adaptive framework for accurate face detection in the long infrared spectrum on our database for oral cancer detection, consisting of malignant, precancerous and normal subjects of varied age groups. Previous work on oral cancer detection using Digital Infrared Thermal Imaging (DITI) reveals that patients and normal subjects differ significantly in their facial thermal distribution. Therefore, it is a challenging task to formulate a completely adaptive framework to veraciously localize the face from such a subject-specific modality. Our model consists of first extracting the most probable facial regions by minimum error thresholding, followed by adaptive methods that leverage the horizontal and vertical projections of the segmented thermal image. Additionally, the model incorporates our domain knowledge of exploiting temperature differences between strategic locations of the face. To the best of our knowledge, this is the pioneering work on detecting faces in thermal facial images comprising both patients and normal subjects. Previous works on face detection have not specifically targeted automated medical diagnosis; the face bounding boxes returned by those algorithms are thus loose and not apt for further medical automation. Our algorithm significantly outperforms contemporary face detection algorithms in terms of commonly used metrics for evaluating face detection accuracy. Since our method has been tested on a challenging dataset consisting of both patients and normal subjects of diverse age groups, it can be seamlessly adapted to any DITI-guided facial healthcare or biometric application.
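A minimal sketch of the thresholding-plus-projection idea follows. This is hypothetical code: a simple mean-plus-half-standard-deviation threshold stands in for minimum error thresholding, and the 10% projection cutoff is an illustrative choice, not the paper's adaptive rules.

```python
import numpy as np

def localize_face(thermal, threshold=None):
    """Crude face localization in a thermal image: threshold the warm pixels,
    then bound the face using horizontal and vertical projection profiles.

    thermal: 2D array of temperature-like intensities.
    Returns (top, bottom, left, right) of the estimated face bounding box.
    """
    if threshold is None:
        # Simple stand-in for the minimum error thresholding used in the paper.
        threshold = thermal.mean() + 0.5 * thermal.std()
    mask = thermal > threshold
    rows = mask.sum(axis=1)   # vertical projection (per-row count of warm pixels)
    cols = mask.sum(axis=0)   # horizontal projection (per-column count)
    active_rows = np.flatnonzero(rows > 0.1 * rows.max())
    active_cols = np.flatnonzero(cols > 0.1 * cols.max())
    return active_rows[0], active_rows[-1], active_cols[0], active_cols[-1]

# Hypothetical usage: a synthetic "warm" square in a cooler background.
img = np.zeros((120, 100))
img[30:90, 20:80] = 1.0
print(localize_face(img))   # roughly (30, 89, 20, 79)
```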
ERIC Educational Resources Information Center
Brooks, Sam; Dorst, Thomas J.
2002-01-01
Discusses the role of consortia in academic libraries, specifically the Illinois Digital Academic Library (IDAL), and describes a study conducted by the IDAL that investigated issues surrounding full text database research including stability of content, vendor communication, embargo periods, publisher concerns, quality of content, linking and…
Raising the Bar: Book Vendors and the New Realities of Service.
ERIC Educational Resources Information Center
Alessi, Dana L.
1999-01-01
Library book vendors are facing new realities in the 21st century. Changes in continuations, firm order placement, value-added services, approval plans, retrospective collection development, and database creation and maintenance are being effected in an effort to keep current customers and attract new ones. Those changes and the subsequent shift…
ERIC Educational Resources Information Center
Cook, Barbara J.
This document addresses the problems faced by families with a schizophrenic member, based on a survey of the literature from a variety of sources including: (1) the Alden Library of Ohio University; (2) the ERIC database; (3) the American Association of Counseling and Development (AACD); and (4) the National Alliance for the Mentally Ill (NAMI).…
The iCSS Chemistry Dashboard is a publicly accessible dashboard provided by the National Center for Computational Toxicology at the US-EPA. It serves a number of purposes, including providing a chemistry database underpinning many of our public-facing projects (e.g. ToxCast and Exp...
Chemical Tracking Systems: Not Your Usual Global Positioning System!
ERIC Educational Resources Information Center
Roy, Ken
2007-01-01
The haphazard storing and tracking of chemicals in the laboratory is a serious safety issue facing science teachers. To get control of your chemicals, try implementing a "chemical tracking system". A chemical tracking system (CTS) is a database of chemicals used in the laboratory. If implemented correctly, a CTS will reduce purchasing costs,…
ERIC Educational Resources Information Center
Wilkinson, Joanne; Lauer, Emily; Greenwood, Nechama W.; Freund, Karen M.; Rosen, Amy K.
2014-01-01
Though it is widely recognized that people with intellectual and developmental disabilities (IDD) face significant health disparities, the comprehensive data sets needed for population-level health surveillance of people with IDD are lacking. This paucity of data makes it difficult to track and accurately describe health differences, improvements,…
Facing Future Users--The Challenge of Transforming a Traditional Online Database into a Web Service.
ERIC Educational Resources Information Center
Tolonen, Eva
The Energy Technology Data Exchange (ETDE) agreement included 19 member countries spanning four continents: Japan and the Republic of Korea; Belgium, Denmark, Finland, France, Germany, Italy, The Netherlands, Norway, Poland, Spain, Sweden, Switzerland, and the United Kingdom; Canada, Mexico, and the United States; and Brazil. The participating…
The Problems Public Schools Face: High School Misbehaviour in 1990 and 2002
ERIC Educational Resources Information Center
Fish, Reva M.; Finn, Kristin V.; Finn, Jeremy D.
2011-01-01
Misbehaviour in high school impacts learning and instruction in the classroom as well as the educational climate of the institution. In this report, changes in administrators', teachers', and students' reports of misbehaviour between 1990 and 2002 were examined using two national US databases. There was little change in administrators'…
ERIC Educational Resources Information Center
Horn, Michael B.; Fisher, Julia Freeland
2017-01-01
The Clayton Christensen Institute maintains a database of more than 400 schools across the United States that have implemented some form of blended learning, which combines online learning with brick-and-mortar classrooms. Data the Institute has collected over the past six months suggests three trends as this model continues to evolve and mature.…
Unified framework for automated iris segmentation using distantly acquired face images.
Tan, Chun-Wei; Kumar, Ajay
2012-09-01
Remote human identification using iris biometrics has important civilian and surveillance applications, and its success requires the development of a robust segmentation algorithm to automatically extract the iris region. This paper presents a new iris segmentation framework which can robustly segment the iris images acquired using near infrared or visible illumination. The proposed approach exploits multiple higher order local pixel dependencies to robustly classify the eye region pixels into iris or noniris regions. Face and eye detection modules have been incorporated in the unified framework to automatically provide the localized eye region from the facial image for iris segmentation. We develop a robust postprocessing algorithm to effectively mitigate the noisy pixels caused by misclassification. Experimental results presented in this paper suggest significant improvement in the average segmentation errors over the previously proposed approaches, i.e., 47.5%, 34.1%, and 32.6% on UBIRIS.v2, FRGC, and CASIA.v4 at-a-distance databases, respectively. The usefulness of the proposed approach is also ascertained from recognition experiments on three different publicly available databases.
Example of monitoring measurements in a virtual eye clinic using 'big data'.
Jones, Lee; Bryan, Susan R; Miranda, Marco A; Crabb, David P; Kotecha, Aachal
2017-10-26
To assess the equivalence of measurement outcomes between patients attending a standard glaucoma care service, where patients see an ophthalmologist in a face-to-face setting, and a glaucoma monitoring service (GMS). The average mean deviation (MD) measurements on the visual field (VF) test for 250 patients attending a GMS were compared with a 'big data' repository of patients attending a standard glaucoma care service (reference database). In addition, the speed of VF progression between GMS patients and reference database patients was compared. Reference database patients were used to create expected outcomes against which GMS patients could be compared. For GMS patients falling outside of the expected limits, further analysis was carried out on the clinical management decisions for these patients. The average MD of patients in the GMS ranged from +1.6 dB to -18.9 dB between two consecutive appointments at the clinic. In the first analysis, 12 (4.8%; 95% CI 2.5% to 8.2%) GMS patients scored outside the 90% expected values based on the reference database. In the second analysis, 1.9% (95% CI 0.4% to 5.4%) of GMS patients had VF changes outside of the expected 90% limits. Using 'big data' collected in the standard glaucoma care service, we found that patients attending a GMS have equivalent outcomes on the VF test. Our findings provide support for the implementation of virtual healthcare delivery in the hospital eye service. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
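The comparison against 90% expected limits derived from a reference cohort can be illustrated with a small sketch. The code below is hypothetical (simulated dB changes, assumed function names); the abstract does not describe the actual analysis pipeline.

```python
import numpy as np

def expected_limits(reference_changes, coverage=0.90):
    """Lower/upper limits containing `coverage` of the reference changes in
    mean deviation (MD) between consecutive visits."""
    lo = np.percentile(reference_changes, 100 * (1 - coverage) / 2)
    hi = np.percentile(reference_changes, 100 * (1 + coverage) / 2)
    return lo, hi

def flag_outside_limits(gms_changes, reference_changes, coverage=0.90):
    """Mark monitoring-service patients whose MD change falls outside the
    expected limits; also return the fraction flagged."""
    lo, hi = expected_limits(reference_changes, coverage)
    flags = (gms_changes < lo) | (gms_changes > hi)
    return flags, flags.mean()

# Hypothetical usage with simulated dB changes between two appointments.
rng = np.random.default_rng(42)
reference = rng.normal(loc=-0.1, scale=1.0, size=10_000)  # reference cohort
gms = rng.normal(loc=-0.1, scale=1.0, size=250)           # monitoring-service cohort
flags, fraction = flag_outside_limits(gms, reference)
print(f"{flags.sum()} of {len(gms)} patients ({fraction:.1%}) outside 90% limits")
```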
Prevalence of face recognition deficits in middle childhood.
Bennetts, Rachel J; Murray, Ebony; Boyce, Tian; Bate, Sarah
2017-02-01
Approximately 2-2.5% of the adult population is believed to show severe difficulties with face recognition, in the absence of any neurological injury-a condition known as developmental prosopagnosia (DP). However, to date no research has attempted to estimate the prevalence of face recognition deficits in children, possibly because there are very few child-friendly, well-validated tests of face recognition. In the current study, we examined face and object recognition in a group of primary school children (aged 5-11 years), to establish whether our tests were suitable for children and to provide an estimate of face recognition difficulties in children. In Experiment 1 (n = 184), children completed a pre-existing test of child face memory, the Cambridge Face Memory Test-Kids (CFMT-K), and a bicycle test with the same format. In Experiment 2 (n = 413), children completed three-alternative forced-choice matching tasks with faces and bicycles. All tests showed good psychometric properties. The face and bicycle tests were well matched for difficulty and showed a similar developmental trajectory. Neither the memory nor the matching tests were suitable to detect impairments in the youngest groups of children, but both tests appear suitable to screen for face recognition problems in middle childhood. In the current sample, 1.2-5.2% of children showed difficulties with face recognition; 1.2-4% showed face-specific difficulties-that is, poor face recognition with typical object recognition abilities. This is somewhat higher than previous adult estimates: It is possible that face matching tests overestimate the prevalence of face recognition difficulties in children; alternatively, some children may "outgrow" face recognition difficulties.
Membership-degree preserving discriminant analysis with applications to face recognition.
Yang, Zhangjing; Liu, Chuancai; Huang, Pu; Qian, Jianjun
2013-01-01
In pattern recognition, feature extraction techniques have been widely employed to reduce the dimensionality of high-dimensional data. In this paper, we propose a novel feature extraction algorithm called membership-degree preserving discriminant analysis (MPDA), based on the Fisher criterion and fuzzy set theory, for face recognition. In the proposed algorithm, the membership degree of each sample to particular classes is first calculated by the fuzzy k-nearest neighbor (FKNN) algorithm to characterize the similarity between each sample and the class centers, and then the membership degree is incorporated into the definition of the between-class scatter and the within-class scatter. The feature extraction criterion of maximizing the ratio of the between-class scatter to the within-class scatter is applied. Experimental results on the ORL, Yale, and FERET face databases demonstrate the effectiveness of the proposed algorithm.
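The FKNN membership computation that such an algorithm builds on can be sketched as follows. This is a minimal illustration assuming the common Keller-style membership rule; the weighting of the scatter matrices by these memberships, which is the core of MPDA, is not shown.

```python
import numpy as np

def fuzzy_knn_memberships(X, y, k=5):
    """Membership degree of each sample to each class via a fuzzy k-NN rule:
    a sample keeps high membership in its own class, modulated by how many of
    its k nearest neighbours share each label."""
    n = len(X)
    classes = np.unique(y)
    U = np.zeros((n, len(classes)))
    # Pairwise Euclidean distances; the diagonal is excluded so a sample is
    # never its own neighbour.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    for i in range(n):
        nn = np.argsort(d[i])[:k]
        for j, c in enumerate(classes):
            frac = np.mean(y[nn] == c)
            U[i, j] = 0.51 + 0.49 * frac if y[i] == c else 0.49 * frac
    return U  # U can then weight the between/within-class scatter matrices

# Hypothetical usage with a tiny two-class toy set.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (10, 3)), rng.normal(3, 1, (10, 3))])
y = np.array([0] * 10 + [1] * 10)
print(fuzzy_knn_memberships(X, y, k=3).round(2))
```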
Binary zone-plate array for a parallel joint transform correlator applied to face recognition.
Kodate, K; Hashimoto, A; Thapliya, R
1999-05-10
Taking advantage of small aberrations, high efficiency, and compactness, we developed a new, to our knowledge, design procedure for a binary zone-plate array (BZPA) and applied it to a parallel joint transform correlator for the recognition of the human face. Pairs of reference and unknown images of faces are displayed on a liquid-crystal spatial light modulator (SLM), Fourier transformed by the BZPA, intensity recorded on an optically addressable SLM, and inversely Fourier transformed to obtain correlation signals. Consideration of the bandwidth allows the relations among the channel number, the numerical aperture of the zone plates, and the pattern size to be determined. Experimentally a five-channel parallel correlator was implemented and tested successfully with a 100-person database. The design and the fabrication of a 20-channel BZPA for phonetic character recognition are also included.
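The joint transform correlator principle can be simulated digitally. The sketch below is an illustrative numpy simulation, not the optical five-channel BZPA system: a reference and an unknown image are placed side by side, the intensity of the joint spectrum is recorded, and a second transform yields the off-axis correlation peaks.

```python
import numpy as np

def joint_transform_correlate(reference, target):
    """Digital simulation of a joint transform correlator: the strength of the
    off-axis correlation peak indicates how well the two images match."""
    h, w = reference.shape
    input_plane = np.zeros((h, 2 * w + 8))          # small gap between the images
    input_plane[:, :w] = reference - reference.mean()
    input_plane[:, w + 8:] = target - target.mean()
    joint_spectrum = np.abs(np.fft.fft2(input_plane)) ** 2   # intensity only
    output_plane = np.abs(np.fft.ifft2(joint_spectrum))
    # Suppress the zero-order (DC) term at the origin; the cross-correlation
    # peaks of interest lie off-axis at the separation of the two images.
    output_plane[0, 0] = 0
    return output_plane.max()

rng = np.random.default_rng(3)
face = rng.random((32, 32))
print(joint_transform_correlate(face, face))                  # strong peak: same image
print(joint_transform_correlate(face, rng.random((32, 32))))  # weaker peak: mismatch
```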
Martijn, Carolien; Vanderlinden, Marlies; Roefs, Anne; Huijding, Jorg; Jansen, Anita
2010-09-01
Many women show weight and body concerns that leave them vulnerable to body dissatisfaction, lowered self-esteem, psychological distress, and eating disorders. This study tested whether body satisfaction could be increased by means of evaluative conditioning. In the experimental condition (n = 26), women with low and high body concern completed a conditioning procedure in which pictures of their bodies were selectively linked to positive social stimuli (pictures of smiling faces). Pictures of control bodies were linked to neutral or negative social stimuli (neutral and frowning faces). In a control condition (n = 28), women with low and high body concern underwent a procedure in which pictures of their own body and of control bodies were randomly followed by positive, neutral, and negative social stimuli. Changes in body satisfaction and self-esteem were measured before and after the conditioning task. Women with high body concern demonstrated an increase in body satisfaction and global self-esteem when pictorial representations of their own bodies were associated with positive stimuli that signaled social acceptance. A simple conditioning procedure increased body satisfaction in healthy, normal-weight women who were concerned about their shape and weight. (PsycINFO Database Record (c) 2010 APA, all rights reserved).
Discrimination of emotional facial expressions by tufted capuchin monkeys (Sapajus apella).
Calcutt, Sarah E; Rubin, Taylor L; Pokorny, Jennifer J; de Waal, Frans B M
2017-02-01
Tufted or brown capuchin monkeys (Sapajus apella) have been shown to recognize conspecific faces as well as categorize them according to group membership. Little is known, though, about their capacity to differentiate between emotionally charged facial expressions or whether facial expressions are processed as a collection of features or configurally (i.e., as a whole). In 3 experiments, we examined whether tufted capuchins (a) differentiate photographs of neutral faces from either affiliative or agonistic expressions, (b) use relevant facial features to make such choices or view the expression as a whole, and (c) demonstrate an inversion effect for facial expressions suggestive of configural processing. Using an oddity paradigm presented on a computer touchscreen, we collected data from 9 adult and subadult monkeys. Subjects discriminated between emotional and neutral expressions with an exceptionally high success rate, including differentiating open-mouth threats from neutral expressions even when the latter contained varying degrees of visible teeth and mouth opening. They also showed an inversion effect for facial expressions, results that may indicate that quickly recognizing expressions does not originate solely from feature-based processing but likely a combination of relational processes. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Chen, Po-Hao; Loehfelm, Thomas W; Kamer, Aaron P; Lemmon, Andrew B; Cook, Tessa S; Kohli, Marc D
2016-12-01
The residency review committee of the Accreditation Council of Graduate Medical Education (ACGME) collects data on resident exam volume and sets minimum requirements. However, this data is not made readily available, and the ACGME does not share their tools or methodology. It is therefore difficult to assess the integrity of the data and determine if it truly reflects relevant aspects of the resident experience. This manuscript describes our experience creating a multi-institutional case log, incorporating data from three American diagnostic radiology residency programs. Each of the three sites independently established automated query pipelines from the various radiology information systems in their respective hospital groups, thereby creating a resident-specific database. Then, the three institutional resident case log databases were aggregated into a single centralized database schema. Three hundred thirty residents and 2,905,923 radiologic examinations over a 4-year span were catalogued using 11 ACGME categories. Our experience highlights big data challenges including internal data heterogeneity and external data discrepancies faced by informatics researchers.
Cornes, Katherine; Donnelly, Nick; Godwin, Hayward; Wenger, Michael J
2011-06-01
The Thatcher illusion (Thompson, 1980) is considered to be a prototypical illustration of the notion that face perception is dependent on configural processes and representations. We explored this idea by examining the relative contributions of perceptual and decisional processes to the ability of observers to identify the orientation of two classes of forms-faces and churches-and a set of their component features. Observers were presented with upright and inverted images of faces and churches in which the components (eyes, mouth, windows, doors) were presented either upright or inverted. Observers first rated the subjective grotesqueness of all of the images and then performed a complete identification task in which they had to identify the orientation of the overall form and the orientation of each of the interior features. Grotesqueness ratings for both classes of image showed the standard modulation of rated grotesqueness as a function of orientation. The complete identification results revealed violations of both perceptual and decisional separability but failed to reveal any violations of within-stimulus (perceptual) independence. In addition, exploration of a simple bivariate Gaussian signal detection model of the relationship between identification performance and judged grotesqueness suggests that within-stimulus violations of perceptual independence on their own are insufficient for producing the illusion. This lack of evidence for within-stimulus configurality suggests the need for a critical reevaluation of the role of configural processing in the Thatcher illusion. (PsycINFO Database Record (c) 2011 APA, all rights reserved).
Marsh, John E; Patel, Krupali; Labonté, Katherine; Threadgold, Emma; Skelton, Faye C; Fodarella, Cristina; Thorley, Rachel; Battersby, Kirsty L; Frowd, Charlie D; Ball, Linden J; Vachon, François
2017-09-01
Cell-phone conversation is ubiquitous within public spaces. The current study investigates whether ignored cell-phone conversation impairs eyewitness memory for a perpetrator. Participants viewed a video of a staged crime in the presence of 1 side of a comprehensible cell-phone conversation (meaningful halfalogue), 2 sides of a comprehensible cell-phone conversation (meaningful dialogue), 1 side of an incomprehensible cell-phone conversation (meaningless halfalogue), or quiet. Between 24 and 28 hr later, participants freely described the perpetrator's face, constructed a single composite image of the perpetrator from memory, and attempted to identify the perpetrator from a sequential lineup. Further, participants rated the likeness of the composites to the perpetrator. Face recall and lineup identification were impaired when participants witnessed the staged crime in the presence of a meaningful halfalogue compared to a meaningless halfalogue, meaningful dialogue, or quiet. Moreover, likeness ratings showed that the composites constructed after ignoring the meaningful halfalogue resembled the perpetrator less than did those constructed after experiencing quiet or ignoring a meaningless halfalogue or a meaningful dialogue. The unpredictability of the meaningful content of the halfalogue, rather than its acoustic unexpectedness, produces distraction. The results are novel in that they suggest that an everyday distraction, even when presented in a different modality to target information, can impair the long-term memory of an eyewitness. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Research of facial feature extraction based on MMC
NASA Astrophysics Data System (ADS)
Xue, Donglin; Zhao, Jiufen; Tang, Qinhong; Shi, Shaokun
2017-07-01
Based on the maximum margin criterion (MMC), a new algorithm for statistically uncorrelated optimal discriminant vectors and a new algorithm for orthogonal optimal discriminant vectors for feature extraction were proposed. The purpose of the maximum margin criterion is to maximize the inter-class scatter while simultaneously minimizing the intra-class scatter after the projection. Compared with the original MMC method and the principal component analysis (PCA) method, the proposed methods are better in terms of reducing or eliminating the statistical correlation between features and improving the recognition rate. The experimental results on the Olivetti Research Laboratory (ORL) face database show that the new statistically uncorrelated maximum margin criterion (SUMMC) feature extraction method is better in terms of recognition rate and stability. In addition, the relations between the maximum margin criterion and the Fisher criterion for feature extraction were revealed.
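The core MMC step, projecting onto the leading eigenvectors of the difference between the between-class and within-class scatter matrices, can be sketched as follows. This is illustrative code on toy data; the statistically uncorrelated and orthogonal constraints of the proposed SUMMC variant are not included.

```python
import numpy as np

def mmc_projection(X, y, n_components):
    """Maximum margin criterion feature extraction: project onto the leading
    eigenvectors of (Sb - Sw), the difference of the between-class and
    within-class scatter matrices."""
    overall_mean = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        diff = (mc - overall_mean)[:, None]
        Sb += len(Xc) * diff @ diff.T
        Sw += (Xc - mc).T @ (Xc - mc)
    # Sb - Sw is symmetric, so eigh returns real eigenvalues in ascending order.
    eigvals, eigvecs = np.linalg.eigh(Sb - Sw)
    W = eigvecs[:, ::-1][:, :n_components]   # leading eigenvectors
    return X @ W, W

# Hypothetical usage on toy data standing in for (PCA-reduced) face vectors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(i, 1.0, (20, 10)) for i in range(3)])
y = np.repeat([0, 1, 2], 20)
Z, W = mmc_projection(X, y, n_components=2)
print(Z.shape, W.shape)   # (60, 2) (10, 2)
```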
Max Wertheimer, Habilitation candidate at the Frankfurt Psychological Institute.
Gundlach, Horst
2014-05-01
Max Wertheimer told Edwin B. Newman that it was pure chance that on his way to the Rhineland he prematurely got off the train in Frankfurt, and that he did so because he had an inspiration for an experiment that he wanted to perform. Most historians of psychology accept this anecdote, but fail to mention that thereby Wertheimer also mastered the next and decisive step toward his academic career in accomplishing his Habilitation. Exposing the institutional, personal, and intellectual context of Wertheimer's going to Frankfurt and giving a detailed account of the procedure of Habilitation will show that Newman's and similar reports of the episode, even if verbatim to Wertheimer's own telling, are nevertheless too improbable to accept at face value. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Cloud parallel processing of tandem mass spectrometry based proteomics data.
Mohammed, Yassene; Mostovenko, Ekaterina; Henneman, Alex A; Marissen, Rob J; Deelder, André M; Palmblad, Magnus
2012-10-05
Data analysis in mass spectrometry based proteomics struggles to keep pace with the advances in instrumentation and the increasing rate of data acquisition. Analyzing this data involves multiple steps requiring diverse software, using different algorithms and data formats. The speed and performance of mass spectral search engines are continuously improving, although not necessarily at the pace needed to face the challenges of the acquired big data. Improving and parallelizing the search algorithms is one possibility; data decomposition presents another, simpler strategy for introducing parallelism. We describe a general method for parallelizing identification of tandem mass spectra using data decomposition that keeps the search engine intact and wraps the parallelization around it. We introduce two algorithms for decomposing mzXML files and recomposing resulting pepXML files. This makes the approach applicable to different search engines, including those relying on sequence databases and those searching spectral libraries. We use cloud computing to deliver the computational power and scientific workflow engines to interface and automate the different processing steps. We show how to leverage these technologies to achieve faster data analysis in proteomics and present three scientific workflows for parallel database as well as spectral library search using our data decomposition programs, X!Tandem and SpectraST.
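The data-decomposition idea of splitting the input, running an unmodified search on each chunk, and recomposing the results can be sketched generically. The code below is hypothetical: it does not reproduce the authors' mzXML/pepXML programs or cloud workflow engines, and `search_chunk` merely stands in for an external search engine invocation.

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import chain

def split_into_chunks(spectra, n_chunks):
    """Partition a list of spectra into roughly equal chunks (data decomposition)."""
    size = -(-len(spectra) // n_chunks)           # ceiling division
    return [spectra[i:i + size] for i in range(0, len(spectra), size)]

def search_chunk(chunk):
    """Stand-in for an unmodified search engine run on one chunk; in practice
    this would write the chunk to a file and invoke the external tool on it."""
    return [f"identification-for-{spectrum}" for spectrum in chunk]

def parallel_search(spectra, n_workers=4):
    chunks = split_into_chunks(spectra, n_workers)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        results = pool.map(search_chunk, chunks)
    return list(chain.from_iterable(results))     # recompose the per-chunk results

if __name__ == "__main__":
    spectra = [f"scan{i}" for i in range(20)]
    print(parallel_search(spectra, n_workers=4))
```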
Ahmad, Riaz; Naz, Saeeda; Afzal, Muhammad Zeshan; Amin, Sayed Hassan; Breuel, Thomas
2015-01-01
The presence of a large number of unique shapes called ligatures in cursive languages, along with variations due to scaling, orientation and location, provides one of the most challenging pattern recognition problems. Recognition of the large number of ligatures is often a complicated task in oriental languages such as Pashto, Urdu, Persian and Arabic. Research on cursive script recognition often ignores the fact that scaling, orientation, location and font variations are common in printed cursive text. Therefore, these variations are not included in image databases and in experimental evaluations. This research uncovers challenges faced by Arabic cursive script recognition in a holistic framework by considering Pashto as a test case, because the Pashto language has a larger alphabet set than Arabic, Persian and Urdu. A database containing 8000 images of 1000 unique ligatures with scaling, orientation and location variations is introduced. In this article, a feature space based on the scale invariant feature transform (SIFT), along with a segmentation framework, has been proposed for overcoming the above-mentioned challenges. The experimental results show a significantly improved performance of the proposed scheme over traditional feature extraction techniques such as principal component analysis (PCA). PMID:26368566
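For illustration, SIFT keypoints and descriptors for a ligature image can be extracted with OpenCV roughly as follows. This is a hedged sketch: it assumes a recent opencv-python build exposing `cv2.SIFT_create`, and the paper's segmentation framework and full matching pipeline are not shown.

```python
import cv2

def ligature_descriptors(image_path):
    """Extract SIFT keypoints and descriptors from a ligature image; SIFT is
    invariant to scale and rotation, matching the scaling/orientation
    variations described in the abstract."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()          # requires opencv-python >= 4.4
    keypoints, descriptors = sift.detectAndCompute(img, None)
    return keypoints, descriptors

def match_score(desc_query, desc_gallery, ratio=0.75):
    """Count Lowe-ratio-test matches between a query and a gallery ligature."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(desc_query, desc_gallery, k=2)
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    return len(good)
```

A query ligature would then be assigned the identity of the gallery ligature with the highest match score, with the score threshold tuned on validation data.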
NASA Astrophysics Data System (ADS)
Wijaya, Surya Li; Savvides, Marios; Vijaya Kumar, B. V. K.
2005-02-01
Face recognition on mobile devices, such as personal digital assistants and cell phones, is a big challenge owing to the limited computational resources available to run verifications on the devices themselves. One approach is to transmit the captured face images by use of the cell-phone connection and to run the verification on a remote station. However, owing to limitations in communication bandwidth, it may be necessary to transmit a compressed version of the image. We propose using the image compression standard JPEG2000, which is a wavelet-based compression engine used to compress the face images to low bit rates suitable for transmission over low-bandwidth communication channels. At the receiver end, the face images are reconstructed with a JPEG2000 decoder and are fed into the verification engine. We explore how advanced correlation filters, such as the minimum average correlation energy filter [Appl. Opt. 26, 3633 (1987)] and its variants, perform by using face images captured under different illumination conditions and encoded with different bit rates under the JPEG2000 wavelet-encoding standard. We evaluate the performance of these filters by using illumination variations from the Carnegie Mellon University's Pose, Illumination, and Expression (PIE) face database. We also demonstrate the tolerance of these filters to noisy versions of images with illumination variations.
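A rough end-to-end sketch of the compress-transmit-verify idea follows. This is hypothetical code: it assumes Pillow was built with OpenJPEG (JPEG 2000) support, and a simple normalized correlation stands in for the MACE-type correlation filters evaluated in the paper.

```python
import io
import numpy as np
from PIL import Image

def jpeg2000_roundtrip(face, rate=40):
    """Compress a grayscale face image to roughly 1/rate of its raw size with
    JPEG 2000 and decode it again, mimicking transmission over a
    low-bandwidth channel."""
    buf = io.BytesIO()
    Image.fromarray(face).save(buf, format="JPEG2000",
                               quality_mode="rates", quality_layers=[rate])
    buf.seek(0)
    return np.asarray(Image.open(buf))

def verification_score(enrolled, received):
    """Normalized correlation between the enrolled template and the decoded
    probe; a simple stand-in for an advanced correlation filter."""
    a = enrolled.astype(float) - enrolled.mean()
    b = received.astype(float) - received.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Hypothetical usage with a synthetic 8-bit image.
rng = np.random.default_rng(0)
face = (rng.random((128, 128)) * 255).astype(np.uint8)
decoded = jpeg2000_roundtrip(face, rate=40)
print(verification_score(face, decoded))
```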
Neural network face recognition using wavelets
NASA Astrophysics Data System (ADS)
Karunaratne, Passant V.; Jouny, Ismail I.
1997-04-01
The recognition of human faces is a phenomenon that has been mastered by the human visual system and that has been researched extensively in the domain of computer neural networks and image processing. This research studies neural networks and wavelet image processing techniques in the application of human face recognition. The objective of the system is to acquire a digitized still image of a human face, carry out pre-processing on the image as required, and then, given a prior database of images of possible individuals, be able to recognize the individual in the image. The pre-processing segment of the system includes several procedures, namely image compression, denoising, and feature extraction. The image processing is carried out using Daubechies wavelets. Once the images have been passed through the wavelet-based image processor they can be efficiently analyzed by means of a neural network. A back-propagation neural network is used for the recognition segment of the system. The main constraints of the system concern the characteristics of the images being processed. The system should be able to carry out effective recognition of human faces irrespective of the individual's facial expression, the presence of extraneous objects such as head-gear or spectacles, and face/head orientation. A potential application of this face recognition system would be as a secondary verification method in an automated teller machine.
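The wavelet pre-processing followed by neural-network recognition can be sketched with PyWavelets and scikit-learn. This is illustrative code on synthetic data; the original system used Daubechies wavelets with a back-propagation network, which the multilayer perceptron below only approximates.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_features(image, wavelet="db4", level=2):
    """Keep only the low-frequency approximation coefficients of a 2D
    Daubechies wavelet decomposition as a compact feature vector."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    return coeffs[0].ravel()          # approximation sub-band

# Hypothetical usage: two "identities" simulated as noisy variants of base images.
rng = np.random.default_rng(0)
bases = [rng.random((32, 32)) for _ in range(2)]
X = np.array([wavelet_features(b + 0.05 * rng.random((32, 32)))
              for b in bases for _ in range(20)])
y = np.repeat([0, 1], 20)

# Back-propagation-trained network on the wavelet features (illustrative only).
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)
print(clf.score(X, y))
```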
Identification using face regions: application and assessment in forensic scenarios.
Tome, Pedro; Fierrez, Julian; Vera-Rodriguez, Ruben; Ramos, Daniel
2013-12-10
This paper reports an exhaustive analysis of the discriminative power of the different regions of the human face in various forensic scenarios. In practice, when forensic examiners compare two face images, they focus their attention not only on the overall similarity of the two faces; they also carry out an exhaustive morphological comparison region by region (e.g., nose, mouth, eyebrows, etc.). In this scenario it is very important to know, based on scientific methods, to what extent each facial region can help in identifying a person. This knowledge, obtained using quantitative and statistical methods on given populations, can then be used by the examiner to support or tune his or her observations. In order to generate such scientific knowledge useful for the expert, several methodologies are compared, such as manual and automatic facial landmark extraction, different facial region extractors, and various distances between the subject and the acquisition camera. Also, three scenarios of interest for forensics are considered, comparing mugshot and Closed-Circuit TeleVision (CCTV) face images using the MORPH and SCface databases. One of the findings is that, depending on the acquisition distance, the discriminative power of the facial regions changes, in some cases performing better than the full face. Crown Copyright © 2013. Published by Elsevier Ireland Ltd. All rights reserved.
3D Face Recognition Based on Multiple Keypoint Descriptors and Sparse Representation
Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei
2014-01-01
Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm. PMID:24940876
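The sparse-representation classification step can be illustrated generically. The sketch below uses orthogonal matching pursuit on synthetic descriptors and assigns the class with the smallest class-wise reconstruction residual; the paper's multitask SRC over meshSIFT keypoint descriptors is more involved, so treat this as a simplified stand-in.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(gallery, labels, probe, n_nonzero=10):
    """Sparse-representation-based classification: code the probe over the
    gallery dictionary with a sparse solver, then assign the class whose
    atoms reconstruct the probe with the smallest residual."""
    # Columns of the dictionary are l2-normalized gallery descriptors.
    D = gallery / (np.linalg.norm(gallery, axis=0, keepdims=True) + 1e-12)
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False).fit(D, probe)
    coef = omp.coef_
    residuals = {}
    for c in np.unique(labels):
        coef_c = np.where(labels == c, coef, 0.0)   # keep only class-c coefficients
        residuals[c] = np.linalg.norm(probe - D @ coef_c)
    return min(residuals, key=residuals.get)

# Hypothetical usage: 64-D descriptors, 3 identities with 5 gallery samples each.
rng = np.random.default_rng(0)
prototypes = rng.random((64, 3))
gallery = np.hstack([prototypes[:, [c]] + 0.05 * rng.random((64, 5)) for c in range(3)])
labels = np.repeat([0, 1, 2], 5)
probe = prototypes[:, 1] + 0.05 * rng.random(64)
print(src_classify(gallery, labels, probe))   # should identify class 1
```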
Pruning or tuning? Maturational profiles of face specialization during typical development.
Zhu, Xun; Bhatt, Ramesh S; Joseph, Jane E
2016-06-01
Face processing undergoes significant developmental change with age. Two kinds of developmental changes in face specialization were examined in this study: specialized maturation, or the continued tuning of a region to faces but little change in the tuning to other categories; and competitive interactions, or the continued tuning to faces accompanied by decreased tuning to nonfaces (i.e., pruning). Using fMRI, in regions where adults showed a face preference, a face- and object-specialization index were computed for younger children (5-8 years), older children (9-12 years) and adults (18-45 years). The specialization index was scaled to each subject's maximum activation magnitude in each region to control for overall age differences in the activation level. Although no regions showed significant face specialization in the younger age group, regions strongly associated with social cognition (e.g., right posterior superior temporal sulcus, right inferior orbital cortex) showed specialized maturation, in which tuning to faces increased with age but there was no pruning of nonface responses. Conversely, regions that are associated with more basic perceptual processing or motor mirroring (right middle temporal cortex, right inferior occipital cortex, right inferior frontal opercular cortex) showed competitive interactions in which tuning to faces was accompanied by pruning of object responses with age. The overall findings suggest that cortical maturation for face processing is regional-specific and involves both increased tuning to faces and diminished response to nonfaces. Regions that show competitive interactions likely support a more generalized function that is co-opted for face processing with development, whereas regions that show specialized maturation increase their tuning to faces, potentially in an activity-dependent, experience-driven manner.
Chang, Allen; Murray, Elizabeth; Yassa, Michael A.
2016-01-01
Face recognition is an important component of successful social interactions in humans. A large literature in social psychology has focused on the phenomenon termed “the other race” (ORE) effect, the tendency to be more proficient with face recognition within one’s own ethnic group, as compared to other ethnic groups. Several potential hypotheses have been proposed for this effect including perceptual expertise, social grouping, and holistic face processing. Recent work on mnemonic discrimination (i.e. the ability to resolve mnemonic interference among similar experiences) may provide a mechanistic account for the ORE. In the current study, we examined how discrimination and generalization in the presence of mnemonic interference may contribute to the ORE. We developed a database of computerized faces divided evenly among ethnic origins (Black, Caucasian, East Asian, South Asian), as well as morphed face stimuli that varied in the amount of similarity to the original stimuli (30%, 40%, 50%, and 60% morphs). Participants first examined the original unmorphed stimuli during study, then during test were asked to judge the prior occurrence of repetitions (targets), morphed stimuli (lures), and new stimuli (foils). We examined participants’ ability to correctly reject similar morphed lures and found that it increased linearly as a function of face dissimilarity. We additionally found that Caucasian participants’ mnemonic discrimination/generalization functions were sharply tuned for Caucasian faces but considerably less tuned for East Asian and Black faces. These results suggest that expertise plays an important role in resolving mnemonic interference, which may offer a mechanistic account for the ORE. PMID:26413724
Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization
Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali
2014-01-01
Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
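A plain PSO loop for tuning the SVM parameters C and gamma gives a feel for the optimization being adapted. This is illustrative code on synthetic data: the adaptive acceleration coefficients of AAPSO, which the paper derives from particle fitness values, are only noted in a comment, and the fixed c1/c2 below correspond to the ordinary PSO baseline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def pso_tune_svm(X, y, n_particles=10, n_iters=20, seed=0):
    """Plain PSO over (log10 C, log10 gamma); fitness is 3-fold CV accuracy.
    An adaptive-acceleration variant would replace the fixed c1/c2 with
    values computed from each particle's fitness."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform([-2, -4], [3, 1], size=(n_particles, 2))
    vel = np.zeros_like(pos)

    def fitness(p):
        return cross_val_score(SVC(C=10 ** p[0], gamma=10 ** p[1]), X, y, cv=3).mean()

    pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia and acceleration coefficients
    for _ in range(n_iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, [-2, -4], [3, 1])
        fit = np.array([fitness(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    return 10 ** gbest[0], 10 ** gbest[1], pbest_fit.max()

# Hypothetical usage on a synthetic stand-in for extracted face/iris features.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)
print(pso_tune_svm(X, y, n_particles=8, n_iters=10))
```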
Segmentation of human face using gradient-based approach
NASA Astrophysics Data System (ADS)
Baskan, Selin; Bulut, M. Mete; Atalay, Volkan
2001-04-01
This paper describes a method for automatic segmentation of facial features such as eyebrows, eyes, nose, mouth and ears in color images. This work is an initial step for a wide range of applications based on feature-based approaches, such as face recognition, lip-reading, gender estimation, facial expression analysis, etc. The human face can be characterized by its skin color and nearly elliptical shape. For this purpose, face detection is performed using color and shape information. Uniform illumination is assumed. No restrictions on glasses, make-up, beard, etc. are imposed. Facial features are extracted using the vertically and horizontally oriented gradient projections. The gradient of a minimum with respect to its neighboring maxima gives the boundaries of a facial feature. Each facial feature has a different horizontal characteristic. These characteristics are derived by extensive experimentation with many face images. Using fuzzy set theory, the similarity between the candidate and the feature characteristic under consideration is calculated. The gradient-based method is complemented by anthropometrical information for robustness. Ear detection is performed using contour-based shape descriptors. This method detects the facial features and circumscribes each facial feature with the smallest rectangle possible. The AR database is used for testing. The developed method is also suitable for real-time systems.
The iCSS CompTox Dashboard is a publicly accessible dashboard provided by the National Center for Computational Toxicology at the US-EPA. It serves a number of purposes, including providing a chemistry database underpinning many of our public-facing projects (e.g. ToxCast and ExpoC...
ERIC Educational Resources Information Center
Cominole, Melissa; Wheeless, Sara; Dudley, Kristin; Franklin, Jeff; Wine, Jennifer
2007-01-01
The "2004/06 Beginning Postsecondary Students Longitudinal Study (BPS:04/06)" is sponsored by the U.S. Department of Education to respond to the need for a national, comprehensive database concerning issues students may face in enrollment, persistence, progress, and attainment in postsecondary education and in consequent early rates of…
ERIC Educational Resources Information Center
Murdock, Alan K.
2017-01-01
Forsyth Technical Community College (FTCC) face a shortage of funding to meet the demands of students, faculty, staff and businesses. Through this practitioner research, the utilization of the college's current customer relationship management (CRM) database advanced. By leveraging technology, the researcher assisted the college in meeting the…
ERIC Educational Resources Information Center
MacSuga, Ashley S.; Simonsen, Brandi
2011-01-01
Many classroom teachers are faced with challenging student behaviors that impact their ability to facilitate learning in productive, safe environments. At the same time, high-stakes testing, increased emphasis on evidence-based instruction, data-based decision making, and response-to-intervention models have put heavy demands on teacher time and…
Children of Parents with Intellectual Disability: Facing Poor Outcomes or Faring Okay?
ERIC Educational Resources Information Center
Collings, Susan; Llewellyn, Gwynnyth
2012-01-01
Background: Children of parents with intellectual disability are assumed to be at risk of poor outcomes but a comprehensive review of the literature has not previously been undertaken. Method: A database and reference search from March 2010 to March 2011 resulted in 26 studies for review. Results: Two groups of studies were identified. The first…
Challenges in the Use of Social Networking Sites to Trace Potential Research Participants
ERIC Educational Resources Information Center
Marsh, Jackie; Bishop, Julia C.
2014-01-01
This paper reports on a number of challenges faced in tracing contributors to research projects that were originally conducted many decades previously. The need to trace contributors in this way arises in projects which focus on involving research participants in previous studies who have not been maintained on a database, or with whom the…
You Got a Problem with That? Exploring Evaluators' Disagreements about Ethics.
ERIC Educational Resources Information Center
Morris, Michael; Jacobs, Lynette
Research has suggested that evaluators vary in the extent to which they interpret the challenges they face in ethical terms. The question of what accounts for these differences was explored through a survey completed by 391 individuals listed in the database of the American Evaluation Association. The first section of the questionnaire presented…
The US EPA is faced with long lists of chemicals that need to be assessed for hazard, and a gap in evaluating chemical risk is accounting for metabolic activation resulting in increased toxicity. The goals of this project are to develop a capability to predict metabolic maps of x...
3D face analysis by using Mesh-LBP feature
NASA Astrophysics Data System (ADS)
Wang, Haoyu; Yang, Fumeng; Zhang, Yuming; Wu, Congzhong
2017-11-01
Objective: Face recognition is one of the most widely used applications of image processing. The limitations of two-dimensional images, such as pose and illumination changes, have to a certain extent restricted its accuracy and further development. Overcoming pose and illumination changes and the effects of self-occlusion is a research hotspot and a difficult problem, attracting more and more experts and scholars. 3D face recognition fusing shape and texture descriptors has become a very promising research direction. Method: Our paper computes a mesh local binary pattern (Mesh-LBP) representation on a 3D point cloud and then performs feature extraction for 3D face recognition by fusing shape and texture descriptors. 3D Mesh-LBP not only retains the integrity of the 3D geometry, but also reduces the need for normalization steps in the recognition process, because the triangular Mesh-LBP descriptor is calculated on the 3D mesh. Moreover, in view of the advantage of multi-modal consistency in face recognition, the LBP construction can fuse shape and texture information on the triangular mesh. In this paper, several operators are used to extract the Mesh-LBP, such as the normal vectors of each triangular face and vertex, the Gaussian curvature, the mean curvature, the Laplace operator, and so on. Conclusion: First, a Kinect device acquires a 3D point cloud of the face; after preprocessing and normalization, the cloud is transformed into a triangular mesh, and mesh local binary pattern features are extracted from the key parts of the face. For each local facial region, its Mesh-LBP feature is calculated with the Gaussian curvature, mean curvature, Laplace operator, and so on. Experiments on our research database show that the method is robust and achieves high recognition accuracy.
Garrido, Margarida V; Lopes, Diniz; Prada, Marília; Rodrigues, David; Jerónimo, Rita; Mourão, Rui P
2017-08-01
This article presents subjective rating norms for a new set of Stills And Videos of facial Expressions-the SAVE database. Twenty nonprofessional models were filmed while posing in three different facial expressions (smile, neutral, and frown). After each pose, the models completed the PANAS questionnaire, and reported more positive affect after smiling and more negative affect after frowning. From the shooting material, stills and 5 s and 10 s videos were edited (total stimulus set = 180). A different sample of 120 participants evaluated the stimuli for attractiveness, arousal, clarity, genuineness, familiarity, intensity, valence, and similarity. Overall, facial expression had a main effect in all of the evaluated dimensions, with smiling models obtaining the highest ratings. Frowning expressions were perceived as being more arousing, clearer, and more intense, but also as more negative than neutral expressions. Stimulus presentation format only influenced the ratings of attractiveness, familiarity, genuineness, and intensity. The attractiveness and familiarity ratings increased with longer exposure times, whereas genuineness decreased. The ratings in the several dimensions were correlated. The subjective norms of facial stimuli presented in this article have potential applications to the work of researchers in several research domains. From our database, researchers may choose the most adequate stimulus presentation format for a particular experiment, select and manipulate the dimensions of interest, and control for the remaining dimensions. The full stimulus set and descriptive results (means, standard deviations, and confidence intervals) for each stimulus per dimension are provided as supplementary material.
Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro
2011-07-01
Global cloud frameworks for bioinformatics research databases become huge and heterogeneous; solutions face various diametric challenges comprising cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN published 192 mammalian, plant and protein life sciences databases having 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data this database integration framework covers is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life sciences data securely under the control of programming languages popularly used by bioinformaticians such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents like ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org.
Super-recognizers: People with extraordinary face recognition ability
Russell, Richard; Duchaine, Brad; Nakayama, Ken
2014-01-01
We tested four people who claimed to have significantly better than ordinary face recognition ability. Exceptional ability was confirmed in each case. On two very different tests of face recognition, all four experimental subjects performed beyond the range of control subject performance. They also scored significantly better than average on a perceptual discrimination test with faces. This effect was larger with upright than inverted faces, and the four subjects showed a larger ‘inversion effect’ than control subjects, who in turn showed a larger inversion effect than developmental prosopagnosics. This indicates an association between face recognition ability and the magnitude of the inversion effect. Overall, these ‘super-recognizers’ are about as good at face recognition and perception as developmental prosopagnosics are bad. Our findings demonstrate the existence of people with exceptionally good face recognition ability, and show that the range of face recognition and face perception ability is wider than previously acknowledged. PMID:19293090
Super-recognizers: people with extraordinary face recognition ability.
Russell, Richard; Duchaine, Brad; Nakayama, Ken
2009-04-01
We tested 4 people who claimed to have significantly better than ordinary face recognition ability. Exceptional ability was confirmed in each case. On two very different tests of face recognition, all 4 experimental subjects performed beyond the range of control subject performance. They also scored significantly better than average on a perceptual discrimination test with faces. This effect was larger with upright than with inverted faces, and the 4 subjects showed a larger "inversion effect" than did control subjects, who in turn showed a larger inversion effect than did developmental prosopagnosics. This result indicates an association between face recognition ability and the magnitude of the inversion effect. Overall, these "super-recognizers" are about as good at face recognition and perception as developmental prosopagnosics are bad. Our findings demonstrate the existence of people with exceptionally good face recognition ability and show that the range of face recognition and face perception ability is wider than has been previously acknowledged.
A 2D range Hausdorff approach to 3D facial recognition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koch, Mark William; Russ, Trina Denise; Little, Charles Quentin
2004-11-01
This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N²) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.
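The idea of evaluating a Hausdorff-style distance directly on registered 2D range images can be sketched as follows. This is hypothetical code: the window size, the robust 95th-percentile maximum, and the handling of missing data are illustrative choices, not the authors' exact formulation.

```python
import numpy as np

def directed_range_hausdorff(probe, template, window=2):
    """Directed (partial) Hausdorff distance between two registered range images:
    for every valid probe pixel, find the closest 3D point within a small window
    of the template, then take a robust maximum (95th percentile) of those minima.
    Searching only a local window of the 2D grid avoids an all-pairs comparison."""
    h, w = probe.shape
    di, dj = np.meshgrid(np.arange(-window, window + 1),
                         np.arange(-window, window + 1), indexing="ij")
    mins = []
    for i in range(window, h - window):
        for j in range(window, w - window):
            if np.isnan(probe[i, j]):
                continue                      # missing data (holes) are skipped
            patch = template[i - window:i + window + 1, j - window:j + window + 1]
            d = np.sqrt(di ** 2 + dj ** 2 + (patch - probe[i, j]) ** 2)
            mins.append(np.nanmin(d))
    return np.percentile(mins, 95)            # robust to outliers and self-occlusion

# Hypothetical usage with synthetic range images (same units assumed for grid and depth).
rng = np.random.default_rng(0)
face = rng.random((40, 40))
print(directed_range_hausdorff(face, face + 0.01 * rng.random((40, 40))))  # small distance
print(directed_range_hausdorff(face, rng.random((40, 40))))                # larger distance
```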
PEM-PCA: a parallel expectation-maximization PCA face recognition architecture.
Rujirakul, Kanokmon; So-In, Chakchai; Arnonkijpanich, Banchar
2014-01-01
Principal component analysis or PCA has been traditionally used as one of the feature extraction techniques in face recognition systems, yielding high accuracy while requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, resulting in a reduction of these stages' complexity. To improve the computational time, a novel parallel architecture was employed to utilize the benefits of parallelization of matrix computation during the feature extraction and classification stages, including parallel preprocessing, and their combinations, in a so-called Parallel Expectation-Maximization PCA architecture. Compared to a traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems, that is, speed-ups of over nine and three times over PCA and Parallel PCA, respectively.
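The Expectation-Maximization route to the principal subspace, which avoids forming and decomposing the covariance matrix, can be sketched as follows. This is illustrative serial code in the spirit of Roweis-style EM for PCA; the parallel preprocessing, feature extraction, and classification stages of the proposed architecture are not shown.

```python
import numpy as np

def em_pca(Y, n_components, n_iters=50, seed=0):
    """EM algorithm for PCA: alternately estimate the latent coordinates
    (E-step) and the projection matrix (M-step), avoiding an explicit
    covariance matrix and its eigen-decomposition.

    Y: (d, n) data matrix with the mean already removed (columns are face vectors).
    Returns an orthonormal basis whose span approximates the leading principal subspace.
    """
    d, n = Y.shape
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(d, n_components))
    for _ in range(n_iters):
        X = np.linalg.solve(W.T @ W, W.T @ Y)      # E-step: latent coordinates
        W = Y @ X.T @ np.linalg.inv(X @ X.T)       # M-step: update projection
    Q, _ = np.linalg.qr(W)                          # orthonormalize for convenience
    return Q

# Hypothetical usage: 100 centered 256-dimensional "face" vectors, 10 components.
rng = np.random.default_rng(1)
data = rng.normal(size=(256, 100))
data -= data.mean(axis=1, keepdims=True)
W = em_pca(data, n_components=10)
print(W.shape)   # (256, 10)
```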
Gupta, Vishal K; Han, Seonghee; Mortal, Sandra C; Silveri, Sabatino Dino; Turban, Daniel B
2018-02-01
We examine the glass cliff proposition that female CEOs receive more scrutiny than male CEOs, by investigating whether CEO gender is related to threats from activist investors in public firms. Activist investors are extraorganizational stakeholders who, when dissatisfied with some aspect of the way the firm is being managed, seek to change the strategy or operations of the firm. Although some have argued that women will be viewed more favorably than men in top leadership positions (so-called "female leadership" advantage logic), we build on role congruity theory to hypothesize that female CEOs are significantly more likely than male CEOs to come under threat from activist investors. Results support our predictions, suggesting that female CEOs may face additional challenges not faced by male CEOs. Practical implications and directions for future research are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Garrido, Lucia; Driver, Jon; Dolan, Raymond J.; Duchaine, Bradley C.; Furl, Nicholas
2016-01-01
Face processing is mediated by interactions between functional areas in the occipital and temporal lobe, and the fusiform face area (FFA) and anterior temporal lobe play key roles in the recognition of facial identity. Individuals with developmental prosopagnosia (DP), a lifelong face recognition impairment, have been shown to have structural and functional neuronal alterations in these areas. The present study investigated how face selectivity is generated in participants with normal face processing, and how functional abnormalities associated with DP, arise as a function of network connectivity. Using functional magnetic resonance imaging and dynamic causal modeling, we examined effective connectivity in normal participants by assessing network models that include early visual cortex (EVC) and face-selective areas and then investigated the integrity of this connectivity in participants with DP. Results showed that a feedforward architecture from EVC to the occipital face area, EVC to FFA, and EVC to posterior superior temporal sulcus (pSTS) best explained how face selectivity arises in both controls and participants with DP. In this architecture, the DP group showed reduced connection strengths on feedforward connections carrying face information from EVC to FFA and EVC to pSTS. These altered network dynamics in DP contribute to the diminished face selectivity in the posterior occipitotemporal areas affected in DP. These findings suggest a novel view on the relevance of feedforward projection from EVC to posterior occipitotemporal face areas in generating cortical face selectivity and differences in face recognition ability. SIGNIFICANCE STATEMENT Areas of the human brain showing enhanced activation to faces compared to other objects or places have been extensively studied. However, the factors leading to this face selectively have remained mostly unknown. We show that effective connectivity from early visual cortex to posterior occipitotemporal face areas gives rise to face selectivity. Furthermore, people with developmental prosopagnosia, a lifelong face recognition impairment, have reduced face selectivity in the posterior occipitotemporal face areas and left anterior temporal lobe. We show that this reduced face selectivity can be predicted by effective connectivity from early visual cortex to posterior occipitotemporal face areas. This study presents the first network-based account of how face selectivity arises in the human brain. PMID:27030766
Meyer, Denny; Blamey, Peter J; Pipingas, Andrew; Bhar, Sunil
2018-01-01
Background Sensorineural hearing loss is the most common sensory deficit among older adults. Some of the psychosocial consequences of this condition include difficulty in understanding speech, depression, and social isolation. Studies have shown that older adults with hearing loss show some age-related cognitive decline. Hearing aids have been proven as successful interventions to alleviate sensorineural hearing loss. In addition to hearing aid use, the positive effects of auditory training—formal listening activities designed to optimize speech perception—are now being documented among adults with hearing loss who use hearing aids, especially new hearing aid users. Auditory training has also been shown to produce prolonged cognitive performance improvements. However, there is still little evidence to support the benefits of simultaneous hearing aid use and individualized face-to-face auditory training on cognitive performance in adults with hearing loss. Objective This study will investigate whether using hearing aids for the first time will improve the impact of individualized face-to-face auditory training on cognition, depression, and social interaction for adults with sensorineural hearing loss. The rationale for this study is based on the hypothesis that, in adults with sensorineural hearing loss, using hearing aids for the first time in combination with individualized face-to-face auditory training will be more effective for improving cognition, depressive symptoms, and social interaction than auditory training on its own. Methods This is a crossover trial targeting 40 men and women between 50 and 90 years of age with either mild or moderate symmetric sensorineural hearing loss. Consented, willing participants will be recruited from either an independent living accommodation or via a community database to undergo a 6-month intensive face-to-face auditory training program (active control). Participants will be assigned in random order to receive a hearing aid (intervention) for either the first 3 or last 3 months of the 6-month auditory training program. Each participant will be tested at baseline, 3, and 6 months using a neuropsychological battery of computer-based cognitive assessments, together with a depression symptom instrument and a social interaction measure. The primary outcome will be cognitive performance with regard to spatial working memory. Secondary outcome measures include other cognition performance measures, depressive symptoms, social interaction, and hearing satisfaction. Results Data analysis is currently under way and the first results are expected to be submitted for publication in June 2018. Conclusions Results from the study will inform strategies for aural rehabilitation, hearing aid delivery, and future hearing loss intervention trials. Trial Registration ClinicalTrials.gov NCT03112850; https://clinicaltrials.gov/ct2/show/NCT03112850 (Archived by WebCite at http://www.webcitation.org/6xz12fD0B). PMID:29572201
Nkyekyer, Joanna; Meyer, Denny; Blamey, Peter J; Pipingas, Andrew; Bhar, Sunil
2018-03-23
Sensorineural hearing loss is the most common sensory deficit among older adults. Some of the psychosocial consequences of this condition include difficulty in understanding speech, depression, and social isolation. Studies have shown that older adults with hearing loss show some age-related cognitive decline. Hearing aids have been proven as successful interventions to alleviate sensorineural hearing loss. In addition to hearing aid use, the positive effects of auditory training (formal listening activities designed to optimize speech perception) are now being documented among adults with hearing loss who use hearing aids, especially new hearing aid users. Auditory training has also been shown to produce prolonged cognitive performance improvements. However, there is still little evidence to support the benefits of simultaneous hearing aid use and individualized face-to-face auditory training on cognitive performance in adults with hearing loss. This study will investigate whether using hearing aids for the first time will improve the impact of individualized face-to-face auditory training on cognition, depression, and social interaction for adults with sensorineural hearing loss. The rationale for this study is based on the hypothesis that, in adults with sensorineural hearing loss, using hearing aids for the first time in combination with individualized face-to-face auditory training will be more effective for improving cognition, depressive symptoms, and social interaction than auditory training on its own. This is a crossover trial targeting 40 men and women between 50 and 90 years of age with either mild or moderate symmetric sensorineural hearing loss. Consented, willing participants will be recruited from either an independent living accommodation or via a community database to undergo a 6-month intensive face-to-face auditory training program (active control). Participants will be assigned in random order to receive a hearing aid (intervention) for either the first 3 or last 3 months of the 6-month auditory training program. Each participant will be tested at baseline, 3, and 6 months using a neuropsychological battery of computer-based cognitive assessments, together with a depression symptom instrument and a social interaction measure. The primary outcome will be cognitive performance with regard to spatial working memory. Secondary outcome measures include other cognition performance measures, depressive symptoms, social interaction, and hearing satisfaction. Data analysis is currently under way and the first results are expected to be submitted for publication in June 2018. Results from the study will inform strategies for aural rehabilitation, hearing aid delivery, and future hearing loss intervention trials. ClinicalTrials.gov NCT03112850; https://clinicaltrials.gov/ct2/show/NCT03112850 (Archived by WebCite at http://www.webcitation.org/6xz12fD0B). ©Joanna Nkyekyer, Denny Meyer, Peter J Blamey, Andrew Pipingas, Sunil Bhar. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 23.03.2018.
NASA Astrophysics Data System (ADS)
Tanabe, T.
The CRD database, which has been accumulating financial data on SMEs over the ten years since its founding, has gathered approximately 12 million records for around 2 million SMEs and approximately 3 million records for around 900,000 sole proprietors, and has also collected default data on these companies and sole proprietors. The CRD database's weakness is anonymity. Going forward, therefore, the CRD Association faces questions concerning how it will enhance the attractiveness of its database and whether new knowledge can be gained by using econophysics or other research approaches. We have already seen several examples of knowledge gained through econophysical analyses using the CRD database, and I would like to express my hope that we will eventually see greater application of the SME credit information database and econophysical analysis in the development of Japan's SME policies, that is, scientific economic policies for avoiding moral hazard, and in elucidating risk scenarios for the global financial, natural-disaster, and other shocks expected to occur with greater frequency. The role played by econophysics will therefore become increasingly important, and we have high expectations for the contribution of the field.
Processing of configural and componential information in face-selective cortical areas.
Zhao, Mintao; Cheung, Sing-Hang; Wong, Alan C-N; Rhodes, Gillian; Chan, Erich K S; Chan, Winnie W L; Hayward, William G
2014-01-01
We investigated how face-selective cortical areas process configural and componential face information and how race of faces may influence these processes. Participants saw blurred (preserving configural information), scrambled (preserving componential information), and whole faces during fMRI scanning, and performed a post-scan face recognition task using blurred or scrambled faces. The fusiform face area (FFA) showed stronger activation to blurred than to scrambled faces, and equivalent responses to blurred and whole faces. The occipital face area (OFA) showed stronger activation to whole faces than to blurred faces, which elicited responses similar to scrambled faces. Therefore, the FFA may be more tuned to process configural than componential information, whereas the OFA similarly participates in perception of both. Differences in recognizing own- and other-race blurred faces were correlated with differences in FFA activation to those faces, suggesting that configural processing within the FFA may underlie the other-race effect in face recognition.
Eichert, Hans-Christoph; Riper, Heleen
2017-01-01
Background Many studies have provided evidence for the effectiveness of Internet-based stand-alone interventions for mental disorders. A newer form of intervention combines the strengths of face-to-face (f2f) and Internet approaches (blended interventions). Objective The aim of this review was to provide an overview of (1) the different formats of blended treatments for adults, (2) the stage of treatment in which these are applied, (3) their objective in combining face-to-face and Internet-based approaches, and (4) their effectiveness. Methods Studies on blended concepts were identified through systematic searches in the MEDLINE, PsycINFO, Cochrane, and PubMed databases. Keywords included terms indicating face-to-face interventions (“inpatient,” “outpatient,” “face-to-face,” or “residential treatment”), which were combined with terms indicating Internet treatment (“internet,” “online,” or “web”) and terms indicating mental disorders (“mental health,” “depression,” “anxiety,” or “substance abuse”). We focused on three of the most common mental disorders (depression, anxiety, and substance abuse). Results We identified 64 publications describing 44 studies, 27 of which were randomized controlled trials (RCTs). Results suggest that, compared with stand-alone face-to-face therapy, blended therapy may save clinician time, lead to lower dropout rates and greater abstinence rates of patients with substance abuse, or help maintain initially achieved changes within psychotherapy in the long-term effects of inpatient therapy. However, there is a lack of comparative outcome studies investigating the superiority of the outcomes of blended treatments in comparison with classic face-to-face or Internet-based treatments, as well as of studies identifying the optimal ratio of face-to-face and Internet sessions. Conclusions Several studies have shown that, for common mental health disorders, blended interventions are feasible and can be more effective compared with no treatment controls. However, more RCTs on effectiveness and cost-effectiveness of blended treatments, especially compared with nonblended treatments are necessary. PMID:28916506
Erbe, Doris; Eichert, Hans-Christoph; Riper, Heleen; Ebert, David Daniel
2017-09-15
Many studies have provided evidence for the effectiveness of Internet-based stand-alone interventions for mental disorders. A newer form of intervention combines the strengths of face-to-face (f2f) and Internet approaches (blended interventions). The aim of this review was to provide an overview of (1) the different formats of blended treatments for adults, (2) the stage of treatment in which these are applied, (3) their objective in combining face-to-face and Internet-based approaches, and (4) their effectiveness. Studies on blended concepts were identified through systematic searches in the MEDLINE, PsycINFO, Cochrane, and PubMed databases. Keywords included terms indicating face-to-face interventions ("inpatient," "outpatient," "face-to-face," or "residential treatment"), which were combined with terms indicating Internet treatment ("internet," "online," or "web") and terms indicating mental disorders ("mental health," "depression," "anxiety," or "substance abuse"). We focused on three of the most common mental disorders (depression, anxiety, and substance abuse). We identified 64 publications describing 44 studies, 27 of which were randomized controlled trials (RCTs). Results suggest that, compared with stand-alone face-to-face therapy, blended therapy may save clinician time, lead to lower dropout rates and greater abstinence rates of patients with substance abuse, or help maintain initially achieved changes within psychotherapy in the long-term effects of inpatient therapy. However, there is a lack of comparative outcome studies investigating the superiority of the outcomes of blended treatments in comparison with classic face-to-face or Internet-based treatments, as well as of studies identifying the optimal ratio of face-to-face and Internet sessions. Several studies have shown that, for common mental health disorders, blended interventions are feasible and can be more effective compared with no treatment controls. However, more RCTs on effectiveness and cost-effectiveness of blended treatments, especially compared with nonblended treatments are necessary. ©Doris Erbe, Hans-Christoph Eichert, Heleen Riper, David Daniel Ebert. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 15.09.2017.
Ferreira Junior, José Raniery; Oliveira, Marcelo Costa; de Azevedo-Marques, Paulo Mazzoncini
2016-12-01
Lung cancer is the leading cause of cancer-related deaths in the world, and its main manifestation is pulmonary nodules. Detection and classification of pulmonary nodules are challenging tasks that must be done by qualified specialists, but image interpretation errors make those tasks difficult. In order to aid radiologists in those hard tasks, it is important to integrate computer-based tools with the lesion detection, pathology diagnosis, and image interpretation processes. However, computer-aided diagnosis research faces the problem of not having enough shared medical reference data for the development, testing, and evaluation of computational methods for diagnosis. In order to minimize this problem, this paper presents a public nonrelational document-oriented cloud-based database of pulmonary nodules characterized by 3D texture attributes, identified by experienced radiologists and classified in nine different subjective characteristics by the same specialists. Our goal with the development of this database is to improve computer-aided lung cancer diagnosis and pulmonary nodule detection and classification research through the deployment of this database in a cloud Database as a Service framework. Pulmonary nodule data were provided by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), image descriptors were acquired by a volumetric texture analysis, and the database schema was developed using a document-oriented Not only Structured Query Language (NoSQL) approach. The proposed database now contains 379 exams, 838 nodules, and 8237 images, of which 4029 are CT scans and 4208 are manually segmented nodules, and it is hosted in a MongoDB instance on a cloud infrastructure.
The ATLAS conditions database architecture for the Muon spectrometer
NASA Astrophysics Data System (ADS)
Verducci, Monica; ATLAS Muon Collaboration
2010-04-01
The Muon System, facing challenging requirements for conditions data storage, has made extensive use of the conditions database project 'COOL' as the basis for all its conditions data storage, both at CERN and throughout the worldwide collaboration, as decided by the ATLAS Collaboration. The management of the Muon COOL conditions database will be one of the most challenging applications for the Muon System, both in terms of data volumes and rates and in terms of the variety of data stored. The Muon conditions database is responsible for almost all of the 'non-event' data and detector quality flags needed for debugging detector operations and for performing reconstruction and analysis. The COOL database allows database applications to be written independently of the underlying database technology and ensures long-term compatibility with the entire ATLAS software. COOL implements an interval-of-validity database, i.e., objects stored or referenced in COOL have an associated start and end time between which they are valid; the data are stored in folders, which are themselves arranged in a hierarchical structure of folder sets. The structure is simple and mainly optimized to store and retrieve the object(s) associated with a particular time. In this work, an overview of the entire Muon conditions database architecture is given, including the different sources of the data and the storage model used. In addition, the software interfaces used to access the conditions data are described, with emphasis on the offline reconstruction framework ATHENA and the services developed to provide the conditions data to the reconstruction.
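To make the interval-of-validity idea concrete, here is a toy folder that stores payloads with [start, end) validity ranges and retrieves the object valid at a given time. It only illustrates the concept described above and is not the COOL API; the example payload is invented.

```python
# Toy interval-of-validity (IOV) folder: conditions objects are valid between a
# start and an end time, and lookups return the object valid at time t.
import bisect

class IOVFolder:
    def __init__(self):
        self._starts, self._entries = [], []          # kept sorted by start time

    def store(self, start, end, payload):
        i = bisect.bisect_left(self._starts, start)
        self._starts.insert(i, start)
        self._entries.insert(i, (start, end, payload))

    def retrieve(self, t):
        i = bisect.bisect_right(self._starts, t) - 1  # last entry starting at or before t
        if i >= 0:
            start, end, payload = self._entries[i]
            if start <= t < end:
                return payload
        raise KeyError(f"no conditions object valid at t={t}")

# alignment = IOVFolder(); alignment.store(1000, 2000, {"chamber_shift_mm": 0.12})
# alignment.retrieve(1500)   # -> {"chamber_shift_mm": 0.12}
```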
The within-subjects design in the study of facial expressions.
Yik, Michelle; Widen, Sherri C; Russell, James A
2013-01-01
The common within-subjects design of studies on the recognition of emotion from facial expressions allows the judgement of one face to be influenced by previous faces, thus introducing the potential for artefacts. The present study (N=344) showed that the canonical "disgust face" was judged as disgusted, provided that the preceding set of faces included "anger expressions", but was judged as angry when the preceding set of faces excluded anger but instead included persons who looked sad or about to be sick. Chinese observers showed lower recognition of the "disgust face" than did American observers. Chinese observers also showed lower recognition of the "fear face" when responding in Chinese than in English.
RTS2: a powerful robotic observatory manager
NASA Astrophysics Data System (ADS)
Kubánek, Petr; Jelínek, Martin; Vítek, Stanislav; de Ugarte Postigo, Antonio; Nekola, Martin; French, John
2006-06-01
RTS2, or Remote Telescope System, 2nd Version, is an integrated package for remote telescope control under the Linux operating system. It is designed to run in fully autonomous mode, picking targets from a database table, storing image meta data to the database, processing images and storing their WCS coordinates in the database and offering Virtual-Observatory enabled access to them. It is currently running on various telescope setups world-wide. For control of devices from various manufacturers we developed an abstract device layer, enabling control of all possible combinations of mounts, CCDs, photometers, roof and cupola controllers. We describe the evolution of RTS2 from Python-based RTS to C and later C++ based RTS2, focusing on the problems we faced during development. The internal structure of RTS2, focusing on object layering, which is used to uniformly control various devices and provides uniform reporting layer, is also discussed.
Doors for memory: A searchable database.
Baddeley, Alan D; Hitch, Graham J; Quinlan, Philip T; Bowes, Lindsey; Stone, Rob
2016-11-01
The study of human long-term memory has for over 50 years been dominated by research on words. This is partly due to lack of suitable nonverbal materials. Experience in developing a clinical test suggested that door scenes can provide an ecologically relevant and sensitive alternative to the faces and geometrical figures traditionally used to study visual memory. In pursuing this line of research, we have accumulated over 2000 door scenes providing a database that is categorized on a range of variables including building type, colour, age, condition, glazing, and a range of other physical characteristics. We describe an illustrative study of recognition memory for 100 doors tested by yes/no, two-alternative, or four-alternative forced-choice paradigms. These stimuli, together with the full categorized database, are available through a dedicated website. We suggest that door scenes provide an ecologically relevant and participant-friendly source of material for studying the comparatively neglected field of visual long-term memory.
Cultural evolution of military camouflage.
Talas, Laszlo; Baddeley, Roland J; Cuthill, Innes C
2017-07-05
While one has evolved and the other been consciously created, animal and military camouflage are expected to show many similar design principles. Using a unique database of calibrated photographs of camouflage uniform patterns, processed using texture and colour analysis methods from computer vision, we show that the parallels with biology are deeper than design for effective concealment. Using two case studies we show that, like many animal colour patterns, military camouflage can serve multiple functions. Following the dissolution of the Warsaw Pact, countries that became more Western-facing in political terms converged on NATO patterns in camouflage texture and colour. Following the break-up of the former Yugoslavia, the resulting states diverged in design, becoming more similar to neighbouring countries than the ancestral design. None of these insights would have been obtained using extant military approaches to camouflage design, which focus solely on concealment. Moreover, our computational techniques for quantifying pattern offer new tools for comparative biologists studying animal coloration. This article is part of the themed issue 'Animal coloration: production, perception, function and application'. © 2017 The Author(s).
Information Theory for Gabor Feature Selection for Face Recognition
NASA Astrophysics Data System (ADS)
Shen, Linlin; Bai, Li
2006-12-01
A discriminative and robust feature—kernel-enhanced informative Gabor feature—is proposed in this paper for face recognition. Mutual information is applied to select a set of informative and nonredundant Gabor features, which are then further enhanced by kernel methods for recognition. Compared with one of the top-performing methods in the 2004 Face Verification Competition (FVC2004), our methods demonstrate a clear advantage over existing methods in accuracy, computation efficiency, and memory cost. The proposed method has been fully tested on the FERET database using the FERET evaluation protocol. Significant improvements on three of the test data sets are observed. Compared with the classical Gabor wavelet-based approaches using a huge number of features, our method requires less than 4 milliseconds to retrieve a few hundred features. Due to the substantially reduced feature dimension, only 4 seconds are required to recognize 200 face images. The paper also unifies different Gabor filter definitions and proposes a training sample generation algorithm to reduce the effects caused by the unbalanced number of samples available in different classes.
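A hedged sketch of the selection idea: extract a Gabor response vector per face and rank features by mutual information with the identity labels. The filter parameters below are illustrative, and the paper's redundancy elimination and kernel enhancement steps are omitted.

```python
# Gabor feature extraction plus mutual-information ranking (illustrative only).
import cv2
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def gabor_features(gray_face):
    feats = []
    for theta in np.arange(0, np.pi, np.pi / 8):              # 8 orientations
        for lambd in (4, 8, 16):                              # 3 wavelengths
            kern = cv2.getGaborKernel((21, 21), 4.0, theta, lambd, 0.5)
            resp = cv2.filter2D(gray_face, cv2.CV_32F, kern)
            feats.append(cv2.resize(np.abs(resp), (8, 8)).ravel())  # downsampled magnitudes
    return np.concatenate(feats)

def select_informative(X, y, k=200):
    mi = mutual_info_classif(X, y)                            # one score per feature
    return np.argsort(mi)[::-1][:k]                           # indices of the top-k features

# X = np.stack([gabor_features(img) for img in train_faces]); keep = select_informative(X, labels)
```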
Fusion of footsteps and face biometrics on an unsupervised and uncontrolled environment
NASA Astrophysics Data System (ADS)
Vera-Rodriguez, Ruben; Tome, Pedro; Fierrez, Julian; Ortega-Garcia, Javier
2012-06-01
This paper reports for the first time experiments on the fusion of footsteps and face in an unsupervised and uncontrolled environment for person authentication. Footstep recognition is a relatively new biometric based on signals extracted from people walking over floor sensors. The idea of the fusion between footsteps and face starts from the premise that in an area where footstep sensors are installed it is very simple to also place a camera to capture the face of the person who walks over the sensors. This setup may find application in scenarios like ambient assisted living, smart homes, eldercare, or security access. The paper reports a comparative assessment of both biometrics using the same database and experimental protocols. In the experimental work we consider two different applications: smart homes (a small group of users with a large set of training data) and security access (a larger group of users with a small set of training data), obtaining results of 0.9% and 5.8% EER, respectively, for the fusion of both modalities. This is a significant performance improvement compared with the results obtained by the individual systems.
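A minimal sketch of score-level fusion and an equal error rate (EER) estimate of the kind reported above; the min-max normalization, the fusion weight, and the variable names are assumptions rather than the authors' exact protocol.

```python
# Weighted-sum fusion of two matcher scores and a simple EER estimate.
import numpy as np

def minmax(s):
    s = np.asarray(s, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse(footstep_scores, face_scores, w_face=0.6):
    return w_face * minmax(face_scores) + (1 - w_face) * minmax(footstep_scores)

def equal_error_rate(genuine, impostor):
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])   # false accept rate
    frr = np.array([(genuine < t).mean() for t in thresholds])     # false reject rate
    i = int(np.argmin(np.abs(far - frr)))
    return (far[i] + frr[i]) / 2

# eer = equal_error_rate(fuse(fs_genuine, face_genuine), fuse(fs_impostor, face_impostor))
```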
Yang, Fan; Paindavoine, M
2003-01-01
This paper describes a real-time vision system that allows us to localize faces in video sequences and verify their identity. These processes are image processing techniques based on the radial basis function (RBF) neural network approach. The robustness of this system has been evaluated quantitatively on eight video sequences. We have adapted our model for an application of face recognition using the Olivetti Research Laboratory (ORL), Cambridge, UK, database so as to compare the performance against other systems. We also describe three hardware implementations of our model on embedded systems based on the field programmable gate array (FPGA), zero instruction set computer (ZISC) chips, and digital signal processor (DSP) TMS320C62, respectively. We analyze the algorithm complexity and present results of hardware implementations in terms of the resources used and processing speed. The success rates of face tracking and identity verification are 92% (FPGA), 85% (ZISC), and 98.2% (DSP), respectively. For the three embedded systems, the processing speeds for images of size 288 × 352 are 14 images/s, 25 images/s, and 4.8 images/s, respectively.
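For orientation only, a generic RBF classifier of the kind referred to above (k-means centres, Gaussian activations, linear readout) can be sketched as follows; this is a plain software sketch under assumed parameters, not the authors' FPGA/ZISC/DSP implementations.

```python
# Generic RBF classifier: labels are assumed to be integers 0..K-1.
import numpy as np
from sklearn.cluster import KMeans

class SimpleRBF:
    def __init__(self, n_centers=50, gamma=1e-3):
        self.n_centers, self.gamma = n_centers, gamma

    def _phi(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-self.gamma * d2)                 # Gaussian activations

    def fit(self, X, y):
        self.centers = KMeans(self.n_centers, n_init=10).fit(X).cluster_centers_
        Y = np.eye(y.max() + 1)[y]                      # one-hot targets
        self.W, *_ = np.linalg.lstsq(self._phi(X), Y, rcond=None)
        return self

    def predict_scores(self, X):
        return self._phi(X) @ self.W                    # per-class scores

# model = SimpleRBF().fit(train_vectors, train_labels); scores = model.predict_scores(probe_vectors)
```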
Nozadi, Sara S; Spinrad, Tracy L; Johnson, Scott P; Eisenberg, Nancy
2018-06-01
The current study examined whether an important temperamental characteristic, effortful control (EC), moderates the associations between dispositional anger and sadness, attention biases, and social functioning in a group of preschool-aged children (N = 77). Preschoolers' attentional biases toward angry and sad facial expressions were assessed using eye-tracking, and we obtained teachers' reports of children's temperament and social functioning. Associations of dispositional anger and sadness with time looking at relevant negative emotional stimuli were moderated by children's EC, but relations between time looking at emotional faces and indicators of social functioning, for the most part, were direct and not moderated by EC. In particular, time looking at angry faces (and low EC) predicted high levels of aggressive behaviors, whereas longer time looking at sad faces (and high EC) predicted higher social competence. Finally, latency to detect angry faces predicted aggressive behavior under conditions of average and low levels of EC. Findings are discussed in terms of the importance of differentiating between components of attention biases toward distinct negative emotions, and implications for attention training. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Can a Humanoid Face be Expressive? A Psychophysiological Investigation
Lazzeri, Nicole; Mazzei, Daniele; Greco, Alberto; Rotesi, Annalisa; Lanatà, Antonio; De Rossi, Danilo Emilio
2015-01-01
Non-verbal signals expressed through body language play a crucial role in multi-modal human communication during social relations. Indeed, in all cultures, facial expressions are the most universal and direct signs to express innate emotional cues. A human face conveys important information in social interactions and helps us to better understand our social partners and establish empathic links. Recent research shows that humanoid and social robots are becoming increasingly similar to humans, both esthetically and expressively. However, their visual expressiveness is a crucial issue that must be improved to make these robots more realistic and intuitively perceivable by humans as not different from them. This study concerns the capability of a humanoid robot to exhibit emotions through facial expressions. More specifically, emotional signs performed by a humanoid robot have been compared with corresponding human facial expressions in terms of recognition rate and response time. The set of stimuli included standardized human expressions taken from an Ekman-based database and the same facial expressions performed by the robot. Furthermore, participants' psychophysiological responses have been explored to investigate whether there could be differences induced by interpreting robot or human emotional stimuli. Preliminary results show a trend toward better recognition of expressions performed by the robot than of 2D photos or 3D models. Moreover, no significant differences in the subjects' psychophysiological state were found during the discrimination of facial expressions performed by the robot in comparison with the same task performed with 2D photos and 3D models. PMID:26075199
Supervised linear dimensionality reduction with robust margins for object recognition
NASA Astrophysics Data System (ADS)
Dornaika, F.; Assoum, A.
2013-01-01
Linear Dimensionality Reduction (LDR) techniques have been increasingly important in computer vision and pattern recognition since they permit a relatively simple mapping of data onto a lower-dimensional subspace, leading to simple and computationally efficient classification strategies. Recently, many linear discriminant methods have been developed in order to reduce the dimensionality of visual data and to enhance the discrimination between different groups or classes. Many existing linear embedding techniques have relied on the use of local margins in order to achieve good discrimination performance. However, dealing with outliers and within-class diversity has not been addressed by margin-based embedding methods. In this paper, we explore the use of different margin-based linear embedding methods. More precisely, we propose to use the concepts of Median miss and Median hit for building robust margin-based criteria. Based on such margins, we seek the projection directions (linear embedding) such that the sum of local margins is maximized. Our proposed approach has been applied to the problem of appearance-based face recognition. Experiments performed on four public face databases show that the proposed approach can give better generalization performance than the classic Average Neighborhood Margin Maximization (ANMM). Moreover, thanks to the use of robust margins, the proposed method degrades gracefully when label outliers contaminate the training data set. In particular, we show that the concept of Median hit was crucial in order to obtain robust performance in the presence of outliers.
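A simplified sketch of the robust-margin idea: for each sample, the "Median hit" is the median distance to same-class samples and the "Median miss" the median distance to other-class samples, and a candidate projection is scored by the summed margin (miss minus hit). The random search over directions below stands in for the paper's optimization and is only illustrative.

```python
# Robust margin scoring of a 1-D projection; assumes at least two classes and
# at least two samples per class.
import numpy as np

def robust_margin(Z, y):
    total = 0.0
    for i in range(len(Z)):
        d = np.linalg.norm(Z - Z[i], axis=1)
        same, other = (y == y[i]), (y != y[i])
        same[i] = False                                        # exclude the sample itself
        total += np.median(d[other]) - np.median(d[same])      # median miss - median hit
    return total

def best_direction(X, y, n_candidates=500, seed=0):
    rng = np.random.default_rng(seed)
    best, best_w = -np.inf, None
    for _ in range(n_candidates):
        w = rng.normal(size=X.shape[1]); w /= np.linalg.norm(w)
        m = robust_margin(X @ w[:, None], y)
        if m > best:
            best, best_w = m, w
    return best_w                                              # direction with the largest summed margin
```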
Nurmatov, Ulugbek B; Mullen, Stephen; Quinn-Scoggins, Harriet; Mann, Mala; Kemp, Alison
2018-05-01
To evaluate the effectiveness and cost-effectiveness of burns first-aid educational interventions given to caregivers of children. Systematic review of eligible studies from seven databases, international journals, and trials repositories; international experts were also contacted. Of 985 potential studies, four met the inclusion criteria. All had a high risk of bias and a weak global rating. Two studies identified a statistically significant increase in knowledge after a media campaign: King et al. (41.7% vs 63.2%, p<0.0001) and Skinner et al. (59% vs 40%, p=0.004). Skinner et al. also identified fewer admissions (64.4% vs 35.8%, p<0.001) and surgical procedures (25.6% vs 11.4%, p<0.001). Kua et al. identified a significant improvement in caregivers' knowledge (22.9% vs 78.3%, 95% CI 49.2, 61.4) after a face-to-face education intervention. Ozyazicioglu et al. evaluated the effect of a first-aid training program and showed a reduction in the use of harmful traditional methods for burns in children (29% vs 16.1%, p<0.001). No data on cost-effectiveness were identified. There is a paucity of high-quality research in this field and considerable heterogeneity across the included studies. Delivery and content of interventions varied. However, studies showed a positive effect on knowledge. No study evaluated the direct effect of the intervention on first-aid administration. High-quality clinical trials are needed. Copyright © 2017 Elsevier Ltd and ISBI. All rights reserved.
Recognizing Action Units for Facial Expression Analysis
Tian, Ying-li; Kanade, Takeo; Cohn, Jeffrey F.
2010-01-01
Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system recognizes fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper face AUs and 10 lower face AUs) are recognized whether they occur alone or in combinations. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower face AUs. The generalizability of the system has been tested by using independent image databases collected and FACS-coded for ground-truth by different research teams. PMID:25210210
Adaptive gamma correction-based expert system for nonuniform illumination face enhancement
NASA Astrophysics Data System (ADS)
Abdelhamid, Iratni; Mustapha, Aouache; Adel, Oulefki
2018-03-01
The image quality of a face recognition system suffers under severe lighting conditions. Thus, this study aims to develop an approach for nonuniform illumination adjustment based on an adaptive gamma correction (AdaptGC) filter that can solve the aforementioned issue. An approach for adaptive gain factor prediction was developed via neural network model-based cross-validation (NN-CV). To achieve this objective, a gamma correction function and its effects on face image quality with different gain values were examined first. Second, an orientation histogram (OH) algorithm was assessed as a facial feature descriptor. Subsequently, a density histogram module was developed for face label generation. During the NN-CV construction, the model was assessed on its ability to recognize the OH descriptor and predict the face label. The performance of the NN-CV model was evaluated by examining the statistical measures of root mean square error and coefficient of efficiency. Third, to evaluate the AdaptGC enhancement approach, image quality metrics were adopted, namely enhancement by entropy, contrast per pixel, second-derivative-like measure of enhancement, and sharpness, supported by visual inspection. The experimental results were examined using five face databases, namely, extended Yale-B, Carnegie Mellon University Pose, Illumination, and Expression (CMU-PIE), Mobio, FERET, and Oulu-CASIA NIR-VIS. The final results prove that the AdaptGC filter, compared with state-of-the-art methods, is the best choice in terms of contrast and nonuniform illumination adjustment. In summary, the benefits attained show that AdaptGC delivers a profitable enhancement rate, providing satisfying features for face recognition systems with high recognition rates.
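A minimal sketch of gamma correction with an adaptively chosen gain. Here the gamma value is derived from the image's mean brightness as a simple heuristic stand-in for the paper's NN-CV gain prediction, which is not reproduced.

```python
# Adaptive gamma correction of a grayscale face image (illustrative heuristic).
import numpy as np

def adaptive_gamma_correct(gray):
    """gray: 2D uint8 face image; returns an illumination-adjusted uint8 image."""
    norm = gray.astype(np.float32) / 255.0
    mean = norm.mean()
    # dark images (mean < 0.5) get gamma < 1 (brighten); bright ones get gamma > 1 (darken)
    gamma = np.clip(np.log(0.5) / np.log(mean + 1e-6), 0.3, 3.0)
    corrected = np.power(norm, gamma)
    return np.clip(corrected * 255.0, 0, 255).astype(np.uint8)

# enhanced = adaptive_gamma_correct(face_image)   # then feed into the recognition pipeline
```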
Clinical application of the FACES score for face transplantation.
Chopra, Karan; Susarla, Srinivas M; Goodrich, Danielle; Bernard, Steven; Zins, James E; Papay, Frank; Lee, W P Andrew; Gordon, Chad R
2014-01-01
This study aimed to systematically evaluate all reported outcomes of facial allotransplantation (FT) using the previously described FACES scoring instrument. This was a retrospective study of all consecutive face transplants to date (January 2012). Candidates were identified using medical and general internet database searches. Medical literature and media reports were reviewed for details regarding demographic, operative, anatomic, and psychosocial data, which were then used to formulate FACES scores. Pre-transplant and post-transplant scores for "functional status", "aesthetic deformity", "co-morbidities", "exposed tissue", and "surgical history" were calculated. Scores were statistically compared using paired-samples analyses. Twenty consecutive patients were identified, with 18 surviving recipients. The sample was composed of 3 females and 17 males, with a mean age of 35.0 ± 11.0 years (range: 19-57 years). Overall, data reporting for functional parameters was poor. Six subjects had complete pre-transplant and post-transplant data available for all 5 FACES domains. The mean pre-transplant FACES score was 33.5 ± 8.8 (range: 23-44); the mean post-transplant score was 21.5 ± 5.9 (range: 14-32) and was statistically significantly lower than the pre-transplant score (P = 0.02). Among the individual domains, FT conferred a statistically significant improvement in aesthetic defect scores and exposed tissue scores (P ≤ 0.01) while, at the same time, it displayed no significant increases in co-morbidity (P = 0.17). There is a significant deficiency in functional outcome reports thus far. Moreover, FT resulted in improved overall FACES score, with the most dramatic improvements noted in aesthetic defect and exposed tissue scores.
Callousness and affective face processing in adults: Behavioral and brain-potential indicators.
Brislin, Sarah J; Yancey, James R; Perkins, Emily R; Palumbo, Isabella M; Drislane, Laura E; Salekin, Randall T; Fanti, Kostas A; Kimonis, Eva R; Frick, Paul J; Blair, R James R; Patrick, Christopher J
2018-03-01
The investigation of callous-unemotional (CU) traits has been central to contemporary research on child behavior problems, and served as the impetus for inclusion of a specifier for conduct disorder in the latest edition of the official psychiatric diagnostic system. Here, we report results from 2 studies that evaluated the construct validity of callousness as assessed in adults, by testing for affiliated deficits in behavioral and neural processing of fearful faces, as have been shown in youthful samples. We hypothesized that scores on an established measure of callousness would predict reduced recognition accuracy and diminished electrocortical reactivity for fearful faces in adult participants. In Study 1, 66 undergraduate participants performed an emotion recognition task in which they viewed affective faces of different types and indicated the emotion expressed by each. In Study 2, electrocortical data were collected from 254 adult twins during viewing of fearful and neutral face stimuli, and scored for event-related response components. Analyses of Study 1 data revealed that higher callousness was associated with decreased recognition accuracy for fearful faces specifically. In Study 2, callousness was associated with reduced amplitude of both N170 and P200 responses to fearful faces. Current findings demonstrate for the first time that callousness in adults is associated with both behavioral and physiological deficits in the processing of fearful faces. These findings support the validity of the CU construct with adults and highlight the possibility of a multidomain measurement framework for continued study of this important clinical construct. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Getting older isn't all that bad: better decisions and coping when facing "sunk costs".
Bruine de Bruin, Wändi; Strough, JoNell; Parker, Andrew M
2014-09-01
Because people of all ages face decisions that affect their quality of life, decision-making competence is important across the life span. According to theories of rational decision making, one crucial decision skill involves the ability to discontinue failing commitments despite irrecoverable investments also referred to as "sunk costs." We find that older adults are better than younger adults at making decisions to discontinue such failing commitments especially when irrecoverable losses are large, as well as at coping with the associated irrecoverable losses. Our results are relevant to interventions that aim to promote better decision-making competence across the life span. PsycINFO Database Record (c) 2014 APA, all rights reserved.
What ASRS incident data tell about flight crew performance during aircraft malfunctions
NASA Technical Reports Server (NTRS)
Sumwalt, Robert L.; Watson, Alan W.
1995-01-01
This research examined 230 reports in NASA's Aviation Safety Reporting System (ASRS) database to develop a better understanding of factors that can affect flight crew performance when crews are faced with inflight aircraft malfunctions. Each report was placed into one of two categories, based on severity of the malfunction. Report analysis was then conducted to extract information regarding crew procedural issues, crew communications, and situational awareness. A comparison of these crew factors across malfunction type was then performed. This comparison revealed a significant difference in the ways that crews dealt with serious malfunctions compared to less serious malfunctions. The authors offer recommendations toward improving crew performance when faced with inflight aircraft malfunctions.
NASA Astrophysics Data System (ADS)
Tinti, S.; Armigliato, A.; Pagnoni, G.; Zaniboni, F.
2012-04-01
One of the most challenging goals that the geo-scientific community is facing after the catastrophic tsunami occurred on December 2004 in the Indian Ocean is to develop the so-called "next generation" Tsunami Early Warning Systems (TEWS). Indeed, the meaning of "next generation" does not refer to the aim of a TEWS, which obviously remains to detect whether a tsunami has been generated or not by a given source and, in the first case, to send proper warnings and/or alerts in a suitable time to all the countries and communities that can be affected by the tsunami. Instead, "next generation" identifies with the development of a Decision Support System (DSS) that, in general terms, relies on 1) an integrated set of seismic, geodetic and marine sensors whose objective is to detect and characterise the possible tsunamigenic sources and to monitor instrumentally the time and space evolution of the generated tsunami, 2) databases of pre-computed numerical tsunami scenarios to be suitably combined based on the information coming from the sensor environment and to be used to forecast the degree of exposition of different coastal places both in the near- and in the far-field, 3) a proper overall (software) system architecture. The EU-FP7 TRIDEC Project aims at developing such a DSS and has selected two test areas in the Euro-Mediterranean region, namely the western Iberian margin and the eastern Mediterranean (Turkish coasts). In this study, we discuss the strategies that are being adopted in TRIDEC to build the databases of pre-computed tsunami scenarios and we show some applications to the western Iberian margin. In particular, two different databases are being populated, called "Virtual Scenario Database" (VSDB) and "Matching Scenario Database" (MSDB). The VSDB contains detailed simulations of few selected earthquake-generated tsunamis. The cases provided by the members of the VSDB are computed "real events"; in other words, they represent the unknowns that the TRIDEC platform must be able to recognise and match during the early crisis management phase. The MSDB contains a very large number (order of thousands) of tsunami simulations performed starting from many different simple earthquake sources of different magnitudes and located in the "vicinity" of the virtual scenario earthquake. Examples from both databases will be presented.
Righi, Giulia; Westerlund, Alissa; Congdon, Eliza L.; Troller-Renfree, Sonya; Nelson, Charles A.
2013-01-01
The goal of the present study was to investigate infants’ processing of female and male faces. We used an event-related potential (ERP) priming task, as well as a visual-paired comparison (VPC) eye tracking task to explore how 7-month-old “female expert” infants differed in their responses to faces of different genders. Female faces elicited larger N290 amplitudes than male faces. Furthermore, infants showed a priming effect for female faces only, whereby the N290 was significantly more negative for novel females compared to primed female faces. The VPC experiment was designed to test whether infants could reliably discriminate between two female and two male faces. Analyses showed that infants were able to differentiate faces of both genders. The results of the present study suggest that 7-month olds with a large amount of female face experience show a processing advantage for forming a neural representation of female faces, compared to male faces. However, the enhanced neural sensitivity to the repetition of female faces is not due to the infants' inability to discriminate male faces. Instead, the combination of results from the two tasks suggests that the differential processing for female faces may be a signature of expert-level processing. PMID:24200421
Selecting fillers on emotional appearance improves lineup identification accuracy.
Flowe, Heather D; Klatt, Thimna; Colloff, Melissa F
2014-12-01
Mock witnesses sometimes report using criminal stereotypes to identify a face from a lineup, a tendency known as criminal face bias. Faces are perceived as criminal-looking if they appear angry. We tested whether matching the emotional appearance of the fillers to an angry suspect can reduce criminal face bias. In Study 1, mock witnesses (n = 226) viewed lineups in which the suspect had an angry, happy, or neutral expression, and we varied whether the fillers matched the expression. An additional group of participants (n = 59) rated the faces on criminal and emotional appearance. As predicted, mock witnesses tended to identify suspects who appeared angrier and more criminal-looking than the fillers. This tendency was reduced when the lineup fillers matched the emotional appearance of the suspect. Study 2 extended the results, testing whether the emotional appearance of the suspect and fillers affects recognition memory. Participants (n = 1,983) studied faces and took a lineup test in which the emotional appearance of the target and fillers was varied between subjects. Discrimination accuracy was enhanced when the fillers matched an angry target's emotional appearance. We conclude that lineup member emotional appearance plays a critical role in the psychology of lineup identification. The fillers should match an angry suspect's emotional appearance to improve lineup identification accuracy. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Abramson, Lior; Marom, Inbal; Petranker, Rotem; Aviezer, Hillel
2017-04-01
The majority of emotion perception studies utilize instructed and stereotypical expressions of faces or bodies. While such stimuli are highly standardized and well-recognized, their resemblance to real-life expressions of emotion remains unknown. Here we examined facial and body expressions of fear and anger during real-life situations and compared their recognition to that of instructed expressions of the same emotions. In order to examine the source of the affective signal, expressions of emotion were presented as faces alone, bodies alone, and naturally, as faces with bodies. The results demonstrated striking deviations between recognition of instructed and real-life stimuli, which differed as a function of the emotion expressed. In real-life fearful expressions of emotion, bodies were far better recognized than faces, a pattern not found with instructed expressions of emotion. Anger reactions were better recognized from the body than from the face in both real-life and instructed stimuli. However, the real-life stimuli were overall better recognized than their instructed counterparts. These results indicate that differences between instructed and real-life expressions of emotion are prevalent and raise caution against an overreliance of researchers on instructed affective stimuli. The findings also demonstrate that in real life, facial expression perception may rely heavily on information from the contextualizing body. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Hybrid generative-discriminative approach to age-invariant face recognition
NASA Astrophysics Data System (ADS)
Sajid, Muhammad; Shafique, Tamoor
2018-03-01
Age-invariant face recognition is still a challenging research problem due to the complex aging process involving types of facial tissues, skin, fat, muscles, and bones. Most of the related studies that have addressed the aging problem are focused on generative representation (aging simulation) or discriminative representation (feature-based approaches). Designing an appropriate hybrid approach taking into account both the generative and discriminative representations for age-invariant face recognition remains an open problem. We perform a hybrid matching to achieve robustness to aging variations. This approach automatically segments the eyes, nose-bridge, and mouth regions, which are relatively less sensitive to aging variations compared with the rest of the facial regions that are age-sensitive. The aging variations of age-sensitive facial parts are compensated using a demographic-aware generative model based on a bridged denoising autoencoder. The age-insensitive facial parts are represented by pixel average vector-based local binary patterns. Deep convolutional neural networks are used to extract the respective features of age-sensitive and age-insensitive facial parts. Finally, the feature vectors of age-sensitive and age-insensitive facial parts are fused to achieve the recognition results. Extensive experimental results on the morphological face database II (MORPH II), the face and gesture recognition network (FG-NET) database, and the verification subset of the cross-age celebrity dataset (CACD-VS) demonstrate the effectiveness of the proposed method for age-invariant face recognition.
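Only the final fusion step lends itself to a short sketch: normalized features from the age-sensitive (generatively compensated) and age-insensitive parts are concatenated and compared by cosine similarity. The upstream extractors and the autoencoder-based compensation are assumed to exist elsewhere; the fusion weight and function names are illustrative, not the paper's.

```python
# Feature-level fusion of age-sensitive and age-insensitive part descriptors.
import numpy as np

def l2norm(v):
    return v / (np.linalg.norm(v) + 1e-12)

def fuse_parts(age_sensitive_feat, age_insensitive_feat, w=0.5):
    return np.concatenate([w * l2norm(age_sensitive_feat),
                           (1 - w) * l2norm(age_insensitive_feat)])

def match_score(probe_parts, gallery_parts):
    p, g = fuse_parts(*probe_parts), fuse_parts(*gallery_parts)
    return float(np.dot(p, g) / (np.linalg.norm(p) * np.linalg.norm(g)))   # cosine similarity

# score = match_score((probe_sensitive, probe_insensitive), (gal_sensitive, gal_insensitive))
```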
Supporting reputation based trust management enhancing security layer for cloud service models
NASA Astrophysics Data System (ADS)
Karthiga, R.; Vanitha, M.; Sumaiya Thaseen, I.; Mangaiyarkarasi, R.
2017-11-01
In existing systems, trust between cloud providers and consumers is inadequate for establishing service-level agreements, even though consumer feedback is a good basis for assessing the overall reliability of cloud services. Investigators have recognized that trust can be managed, and security provided, on the basis of feedback collected from participants. In this work, a face recognition system is used to identify the user effectively: an image comparison algorithm compares the user's face, captured at registration time and stored in the database, with the sample image already stored there, and if the two images match, the user is identified. When confidential data are outsourced to the cloud, data owners become concerned about the confidentiality of their data. Encrypting the data before outsourcing has been regarded as an important means of preserving user data privacy against the cloud server, so an AES algorithm is used to keep the data secure. Symmetric-key algorithms rely on a shared key, so keeping the data secret requires keeping this key secret; only a user holding the key can decrypt the data.
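A hedged sketch of the two mechanisms mentioned above: a naive registration/verification check that compares hashes of the stored and freshly captured face images (a real system would use a face-matching metric rather than an exact hash), and AES-GCM encryption of data before outsourcing. The library choice and key handling are illustrative assumptions, not the authors' implementation.

```python
# Naive face template check plus AES-GCM encryption of outsourced data.
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def register_face(image_bytes, user_db, user_id):
    user_db[user_id] = hashlib.sha256(image_bytes).hexdigest()     # stored template hash

def verify_face(image_bytes, user_db, user_id):
    # Exact-hash comparison only illustrates the flow; real systems compare face features.
    return user_db.get(user_id) == hashlib.sha256(image_bytes).hexdigest()

def encrypt_for_cloud(plaintext: bytes, key: bytes):
    nonce = os.urandom(12)                                          # fresh nonce per message
    return nonce, AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_from_cloud(nonce: bytes, ciphertext: bytes, key: bytes):
    return AESGCM(key).decrypt(nonce, ciphertext, None)

# key = AESGCM.generate_key(bit_length=256)          # shared secret kept by the data owner
# nonce, blob = encrypt_for_cloud(b"confidential record", key)
```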
MPEG-7 audio-visual indexing test-bed for video retrieval
NASA Astrophysics Data System (ADS)
Gagnon, Langis; Foucher, Samuel; Gouaillier, Valerie; Brun, Christelle; Brousseau, Julie; Boulianne, Gilles; Osterrath, Frederic; Chapdelaine, Claude; Dutrisac, Julie; St-Onge, Francis; Champagne, Benoit; Lu, Xiaojian
2003-12-01
This paper reports on the development status of a Multimedia Asset Management (MAM) test-bed for content-based indexing and retrieval of audio-visual documents within the MPEG-7 standard. The project, called "MPEG-7 Audio-Visual Document Indexing System" (MADIS), specifically targets the indexing and retrieval of video shots and key frames from documentary film archives, based on audio-visual content like face recognition, motion activity, speech recognition and semantic clustering. The MPEG-7/XML encoding of the film database is done off-line. The description decomposition is based on a temporal decomposition into visual segments (shots), key frames and audio/speech sub-segments. The visible outcome will be a web site that allows video retrieval using a proprietary XQuery-based search engine and is accessible to members at the Canadian National Film Board (NFB) Cineroute site. For example, end users will be able to ask for movie shots in the database that were produced in a specific year, that contain the face of a specific actor speaking a specific word, and in which there is no motion activity. Video streaming is performed over the high-bandwidth CA*net network deployed by CANARIE, a public Canadian Internet development organization.
Face and body recognition show similar improvement during childhood.
Bank, Samantha; Rhodes, Gillian; Read, Ainsley; Jeffery, Linda
2015-09-01
Adults are proficient in extracting identity cues from faces. This proficiency develops slowly during childhood, with performance not reaching adult levels until adolescence. Bodies are similar to faces in that they convey identity cues and rely on specialized perceptual mechanisms. However, it is currently unclear whether body recognition mirrors the slow development of face recognition during childhood. Recent evidence suggests that body recognition develops faster than face recognition. Here we measured body and face recognition in 6- and 10-year-old children and adults to determine whether these two skills show different amounts of improvement during childhood. We found no evidence that they do. Face and body recognition showed similar improvement with age, and children, like adults, were better at recognizing faces than bodies. These results suggest that the mechanisms of face and body memory mature at a similar rate or that improvement of more general cognitive and perceptual skills underlies improvement of both face and body recognition. Copyright © 2015 Elsevier Inc. All rights reserved.
Kret, Mariska E; Tomonaga, Masaki
2016-01-01
For social species such as primates, the recognition of conspecifics is crucial for their survival. As demonstrated by the 'face inversion effect', humans are experts in recognizing faces and unlike objects, recognize their identity by processing it configurally. The human face, with its distinct features such as eye-whites, eyebrows, red lips and cheeks signals emotions, intentions, health and sexual attraction and, as we will show here, shares important features with the primate behind. Chimpanzee females show a swelling and reddening of the anogenital region around the time of ovulation. This provides an important socio-sexual signal for group members, who can identify individuals by their behinds. We hypothesized that chimpanzees process behinds configurally in a way humans process faces. In four different delayed matching-to-sample tasks with upright and inverted body parts, we show that humans demonstrate a face, but not a behind inversion effect and that chimpanzees show a behind, but no clear face inversion effect. The findings suggest an evolutionary shift in socio-sexual signalling function from behinds to faces, two hairless, symmetrical and attractive body parts, which might have attuned the human brain to process faces, and the human face to become more behind-like.
[The (in)visibility of psychological family violence in childhood and adolescence].
Abranches, Cecy Dunshee de; Assis, Simone Gonçalves de
2011-05-01
Psychological family violence in childhood and adolescence is still poorly studied, due to difficulties in its definition and detection. This article aims to examine how psychological family violence reported by children and adolescents has been addressed in academic studies, using a literature review (LILACS, MEDLINE, SciELO, PubMed, CAPES Portal, PsycINFO, and SCOPUS databases). Among 51 epidemiological studies, 16 articles met the review's objectives; some of the articles reported a high prevalence of such violence. The study showed that the issue has been studied more in the international literature than in Brazil, which has significantly increased its visibility in the last decade but still faces difficulties involving definition, conceptualization, and operationalization. Eliminating the invisibility of psychological violence in the family could help promote prevention of such violence and protection of children and adolescents.
Privacy and policy for genetic research.
DeCew, Judith Wagner
2004-01-01
I begin with a discussion of the value of privacy and what we lose without it. I then turn to the difficulties of preserving privacy for genetic information and other medical records in the face of advanced information technology. I suggest three alternative public policy approaches to the problem of protecting individual privacy and also preserving databases for genetic research: (1) governmental guidelines and centralized databases, (2) corporate self-regulation, and (3) my hybrid approach. None of these are unproblematic; I discuss strengths and drawbacks of each, emphasizing the importance of protecting the privacy of sensitive medical and genetic information as well as letting information technology flourish to aid patient care, public health and scientific research.
Mining Bug Databases for Unidentified Software Vulnerabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumidu Wijayasekara; Milos Manic; Jason Wright
2012-06-01
Identifying software vulnerabilities is becoming more important as critical and sensitive systems increasingly rely on complex software systems. It has been suggested in previous work that some bugs are only identified as vulnerabilities long after the bug has been made public. These vulnerabilities are known as hidden impact vulnerabilities. This paper discusses the feasibility and necessity to mine common publicly available bug databases for vulnerabilities that are yet to be identified. We present bug database analysis of two well known and frequently used software packages, namely Linux kernel and MySQL. It is shown that for both Linux and MySQL, a significant portion of vulnerabilities that were discovered for the time period from January 2006 to April 2011 were hidden impact vulnerabilities. It is also shown that the percentage of hidden impact vulnerabilities has increased in the last two years, for both software packages. We then propose an improved hidden impact vulnerability identification methodology based on text mining bug databases, and conclude by discussing a few potential problems faced by such a classifier.
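A minimal sketch of the kind of text-mining classifier over bug-report descriptions that the abstract alludes to; the TF-IDF plus logistic-regression pipeline and the toy report strings are assumptions for illustration, not the authors' methodology.

```python
# pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for bug-report descriptions; real inputs would come from a bug tracker.
reports = [
    "null pointer dereference when parsing crafted packet",
    "buffer overflow in string handling routine",
    "typo in help message",
    "documentation missing for new config option",
]
labels = [1, 1, 0, 0]   # 1 = potential hidden-impact vulnerability, 0 = ordinary bug

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(reports, labels)
print(clf.predict(["heap corruption triggered by malformed input"]))
```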
Emotional contexts modulate intentional memory suppression of neutral faces: Insights from ERPs.
Pierguidi, Lapo; Righi, Stefania; Gronchi, Giorgio; Marzi, Tessa; Caharel, Stephanie; Giovannelli, Fabio; Viggiano, Maria Pia
2016-08-01
The main goal of the present work is to gain new insight into the temporal dynamics underlying voluntary memory control for neutral faces associated with neutral, positive and negative contexts. A directed forgetting (DF) procedure was used during EEG recording to answer the question of whether it is possible to forget a face that has been encoded within a particular emotional context. A face-scene phase, in which a neutral face was shown in a neutral or emotional (positive, negative) scene, was followed by a voluntary memory cue (cue phase) indicating whether the face was to be remembered or to be forgotten (TBR and TBF). Memory for faces was then assessed with an old/new recognition task. Behaviorally, we found that it is harder to suppress faces-in-positive-scenes compared to faces-in-negative and neutral scenes. The temporal information obtained from the ERPs showed: 1) during the face-scene phase, the Late Positive Potential (LPP), which indexes motivated emotional attention, was larger for faces-in-negative-scenes compared to faces-in-neutral-scenes; 2) remarkably, during the cue phase, ERPs were significantly modulated by the emotional contexts. Faces-in-neutral-scenes showed an ERP pattern that has typically been associated with the DF effect, whereas faces-in-positive-scenes elicited the reverse ERP pattern. Faces-in-negative-scenes did not show differences in DF-related neural activities, but a larger N1 amplitude for TBF versus TBR faces may index early attentional deployment. These results support the hypothesis that the pleasantness or unpleasantness of the contexts (through attentional broadening and narrowing mechanisms, respectively) may modulate the effectiveness of intentional memory suppression for neutral information. Copyright © 2016 Elsevier B.V. All rights reserved.
Essentialist thinking predicts decrements in children's memory for racially ambiguous faces.
Gaither, Sarah E; Schultz, Jennifer R; Pauker, Kristin; Sommers, Samuel R; Maddox, Keith B; Ambady, Nalini
2014-02-01
Past research shows that adults often display poor memory for racially ambiguous and racial outgroup faces, with both face types remembered worse than own-race faces. In the present study, the authors examined whether children also show this pattern of results. They also examined whether emerging essentialist thinking about race predicts children's memory for faces. Seventy-four White children (ages 4-9 years) completed a face-memory task comprising White, Black, and racially ambiguous Black-White faces. Essentialist thinking about race was also assessed (i.e., thinking of race as immutable and biologically based). White children who used essentialist thinking showed the same bias as White adults: They remembered White faces significantly better than they remembered ambiguous and Black faces. However, children who did not use essentialist thinking remembered both White and racially ambiguous faces significantly better than they remembered Black faces. This finding suggests a specific shift in racial thinking wherein the boundaries between racial groups become more discrete, highlighting the importance of how race is conceptualized in judgments of racially ambiguous individuals.
The changing face of emotion: age-related patterns of amygdala activation to salient faces.
Todd, Rebecca M; Evans, Jennifer W; Morris, Drew; Lewis, Marc D; Taylor, Margot J
2011-01-01
The present study investigated age-related differences in the amygdala and other nodes of face-processing networks in response to facial expression and familiarity. fMRI data were analyzed from 31 children (3.5-8.5 years) and 14 young adults (18-33 years) who viewed pictures of familiar (mothers) and unfamiliar emotional faces. Results showed that amygdala activation for faces over a scrambled image baseline increased with age. Children, but not adults, showed greater amygdala activation to happy than angry faces; in addition, amygdala activation for angry faces increased with age. In keeping with growing evidence of a positivity bias in young children, our data suggest that children find happy faces to be more salient or meaningful than angry faces. Both children and adults showed preferential activation to mothers' over strangers' faces in a region of rostral anterior cingulate cortex associated with self-evaluation, suggesting that some nodes in frontal evaluative networks are active early in development. This study presents novel data on neural correlates of face processing in childhood and indicates that preferential amygdala activation for emotional expressions changes with age.
Category search speeds up face-selective fMRI responses in a non-hierarchical cortical face network.
Jiang, Fang; Badler, Jeremy B; Righi, Giulia; Rossion, Bruno
2015-05-01
The human brain is extremely efficient at detecting faces in complex visual scenes, but the spatio-temporal dynamics of this remarkable ability, and how it is influenced by category-search, remain largely unknown. In the present study, human subjects were shown gradually-emerging images of faces or cars in visual scenes, while neural activity was recorded using functional magnetic resonance imaging (fMRI). Category search was manipulated by the instruction to indicate the presence of either a face or a car, in different blocks, as soon as an exemplar of the target category was detected in the visual scene. The category selectivity of most face-selective areas was enhanced when participants were instructed to report the presence of faces in gradually decreasing noise stimuli. Conversely, the same regions showed much less selectivity when participants were instructed instead to detect cars. When "face" was the target category, the fusiform face area (FFA) showed consistently earlier differentiation of face versus car stimuli than did the "occipital face area" (OFA). When "car" was the target category, only the FFA showed differentiation of face versus car stimuli. These observations provide further challenges for hierarchical models of cortical face processing and show that during gradual revealing of information, selective category-search may decrease the required amount of information, enhancing and speeding up category-selective responses in the human brain. Copyright © 2015 Elsevier Ltd. All rights reserved.
Joint attention enhances visual working memory.
Gregory, Samantha E A; Jackson, Margaret C
2017-02-01
Joint attention-the mutual focus of 2 individuals on an item-speeds detection and discrimination of target information. However, what happens to that information beyond the initial perceptual episode? To fully comprehend and engage with our immediate environment also requires working memory (WM), which integrates information from second to second to create a coherent and fluid picture of our world. Yet, no research exists at present that examines how joint attention directly impacts WM. To investigate this, we created a unique paradigm that combines gaze cues with a traditional visual WM task. A central, direct gaze 'cue' face looked left or right, followed 500 ms later by 4, 6, or 8 colored squares presented on one side of the face for encoding. Crucially, the cue face either looked at the squares (valid cue) or looked away from them (invalid cue). A no shift (direct gaze) condition served as a baseline. After a blank 1,000 ms maintenance interval, participants stated whether a single test square color was present or not in the preceding display. WM accuracy was significantly greater for colors encoded in the valid versus invalid and direct conditions. Further experiments showed that an arrow cue and a low-level motion cue-both shown to reliably orient attention-did not reliably modulate WM, indicating that social cues are more powerful. This study provides the first direct evidence that sharing the focus of another individual establishes a point of reference from which information is advantageously encoded into WM. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Nonlinear correlations impair quantification of episodic memory by mesial temporal BOLD activity.
Klamer, Silke; Zeltner, Lena; Erb, Michael; Klose, Uwe; Wagner, Kathrin; Frings, Lars; Groen, Georg; Veil, Cornelia; Rona, Sabine; Lerche, Holger; Milian, Monika
2013-07-01
Episodic memory processes can be investigated using different functional MRI (fMRI) paradigms. The purpose of the present study was to examine correlations between neuropsychological memory test scores and BOLD signal changes during fMRI scanning using three different memory tasks. Twenty-eight right-handed healthy subjects underwent three paradigms, (a) a word pair, (b) a space-labyrinth, and (c) a face-name association paradigm. These paradigms were compared for their value in memory quantification and lateralization by calculating correlations between the BOLD signals in the mesial temporal lobe and behavioral data derived from a neuropsychological test battery. As expected, group analysis showed left-sided activation for the verbal, a tendency to right-sided activation for the spatial, and bilateral activation for the face-name paradigm. No linear correlations were observed between neuropsychological data and activation in the temporo-mesial region. However, we found significant u-shaped correlations between behavioral memory performance and activation in both the verbal and the face-name paradigms, that is, BOLD signal changes were greater not only among participants who performed best on the neuropsychological tests, but also among the poorest performers. The figural learning task did not correlate with the activations in the space-labyrinth paradigm at all. We interpreted the u-shaped correlations to be due to compensatory hippocampal activations associated with low performance when people try unsuccessfully to remember presented items. Because activation levels did not linearly increase with memory performance, the latter cannot be quantified by fMRI alone, but only be used in conjunction with neuropsychological testing. PsycINFO Database Record (c) 2013 APA, all rights reserved.
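A u-shaped relation of the kind reported above can be tested by adding a quadratic term to the regression of BOLD change on memory performance; the sketch below does this with synthetic stand-in data (the numbers are not the study's).

```python
import numpy as np

rng = np.random.default_rng(0)
memory_score = rng.uniform(0, 30, 50)                       # behavioural memory performance
# Synthetic BOLD change that is high for both poor and good performers (u-shape).
bold = 0.02 * (memory_score - 15) ** 2 + rng.normal(0, 0.5, 50)

# Fit BOLD ~ b0 + b1*score + b2*score^2; a reliably positive b2 indicates a u-shape.
b2, b1, b0 = np.polyfit(memory_score, bold, deg=2)
print(f"quadratic coefficient: {b2:.3f}")
```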
Slezak, Diego Fernandez; Sigman, Mariano
2012-08-01
The time spent making a decision and its quality define a widely studied trade-off. Some models suggest that the time spent is set to optimize reward, as verified empirically in simple decision-making experiments. However, in a more complex perspective comprising components of regulatory focus, ambitions, fear, risk and social variables, adjustment of the speed-accuracy trade-off may not be optimal. Specifically, regulatory focus theory shows that people can be set in a promotion mode, where focus is on seeking to approach a desired state (to win), or in a prevention mode, focusing to avoid undesired states (not to lose). In promotion, people are eager to take risks, increasing speed and decreasing accuracy. In prevention, strategic vigilance increases, decreasing speed and improving accuracy. When time and accuracy have to be compromised, one can ask which of these 2 strategies optimizes reward, leading to optimal performance. This is investigated here in a unique experimental environment. Decision making is studied in rapid-chess (180 s per game), in which the goal of a player is to mate the opponent in a finite amount of time or, alternatively, to time out the opponent with sufficient material to mate. In different games, players face strong and weak opponents. It was observed that (a) players adopt a more conservative strategy when facing strong opponents, with slower and more accurate moves, and (b) this strategy is suboptimal: players would increase their winning likelihood against strong opponents by using the policy they adopt when confronting opponents of similar strength. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
Prieto, Esther Alonso; Caharel, Stéphanie; Henson, Richard; Rossion, Bruno
2011-01-01
Compared to objects, pictures of faces elicit a larger early electromagnetic response at occipito-temporal sites on the human scalp, with an onset of 130 ms and a peak at about 170 ms. This N170 face effect is larger in the right than the left hemisphere and has been associated with the early categorization of the stimulus as a face. Here we tested whether this effect can be observed in the absence of some of the visual areas showing a preferential response to faces as typically identified in neuroimaging. Event-related potentials were recorded in response to faces, cars, and their phase-scrambled versions in a well-known brain-damaged case of prosopagnosia (PS). Despite the patient’s right inferior occipital gyrus lesion encompassing the most posterior cortical area showing preferential response to faces (“occipital face area”), we identified an early face-sensitive component over the right occipito-temporal hemisphere of the patient that was identified as the N170. A second experiment supported this conclusion, showing the typical N170 increase of latency and amplitude in response to inverted faces. In contrast, there was no N170 in the left hemisphere, where PS has a lesion to the middle fusiform gyrus and shows no evidence of face-preferential response in neuroimaging (no left “fusiform face area”). These results were replicated by a magnetoencephalographic investigation of the patient, disclosing a M170 component only in the right hemisphere. These observations indicate that face-preferential activation in the inferior occipital cortex is not necessary to elicit early visual responses associated with face perception (N170/M170) on the human scalp. These results further suggest that when the right inferior occipital cortex is damaged, the integrity of the middle fusiform gyrus and/or the superior temporal sulcus – two areas showing face-preferential responses in the patient’s right hemisphere – might be necessary to generate the N170 effect. PMID:22275889
The feasibility test of state-of-the-art face detection algorithms for vehicle occupant detection
NASA Astrophysics Data System (ADS)
Makrushin, Andrey; Dittmann, Jana; Vielhauer, Claus; Langnickel, Mirko; Kraetzer, Christian
2010-01-01
Vehicle seat occupancy detection systems are designed to prevent the deployment of airbags at unoccupied seats, thus avoiding the considerable cost imposed by the replacement of airbags. Occupancy detection can also improve passenger comfort, e.g. by activating air-conditioning systems. The most promising development perspectives are seen in optical sensing systems, which have become cheaper and smaller in recent years. The most plausible way to check seat occupancy is to detect the presence and location of heads, or more precisely, faces. This paper compares the detection performances of the three most commonly used and widely available face detection algorithms: Viola-Jones, Kienzle et al. and Nilsson et al. The main objective of this work is to identify whether one of these systems is suitable for use in a vehicle environment with variable and mostly non-uniform illumination conditions, and whether any one face detection system can be sufficient for seat occupancy detection. The evaluation of detection performance is based on a large database comprising 53,928 video frames containing proprietary data collected from 39 persons of both sexes and different ages and body heights as well as different objects such as bags and rearward/forward facing child restraint systems.
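For orientation, a minimal sketch of a Viola-Jones-style occupancy check using OpenCV's stock Haar cascade; the decision rule (any detected face counts as an occupied seat), the preprocessing, and the detector parameters are assumptions, not the evaluation setup of the paper.

```python
# pip install opencv-python
import cv2

# Viola-Jones style detector shipped with OpenCV (one of the algorithms compared above).
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def seat_occupied(frame_bgr) -> bool:
    """Very rough occupancy check: the seat counts as occupied if any face is detected."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)   # partially compensate uneven cabin lighting
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(40, 40))
    return len(faces) > 0
```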
Pose invariant face recognition: 3D model from single photo
NASA Astrophysics Data System (ADS)
Napoléon, Thibault; Alfalou, Ayman
2017-02-01
Face recognition is widely studied in the literature for its possibilities in surveillance and security. In this paper, we report a novel algorithm for the identification task. The technique is based on an optimized 3D modeling approach that allows faces to be reconstructed in different poses from a limited number of references (i.e., one image per class/person). In particular, we propose to use an active shape model to detect a set of keypoints on the face, which are needed to deform our synthetic model with our optimized finite element method. To further improve the deformation, we propose a regularization by distances on a graph. To perform the identification we use the VanderLugt correlator (VLC), which is well known to address this task effectively. In addition, we add a difference-of-Gaussians filtering step to highlight the edges and a description step based on local binary patterns. The experiments are performed on the PHPID database enhanced with our 3D reconstructed faces of each person, with azimuth and elevation ranging from -30° to +30°. The obtained results demonstrate the robustness of the new method, with 88.76% correct identification, whereas the classic 2D approach (based on the VLC) obtains just 44.97%.
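A minimal sketch of the difference-of-Gaussians edge-enhancement step mentioned above; the sigma values and the random stand-in image are assumptions, not the authors' settings.

```python
# pip install scipy
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma_small=1.0, sigma_large=2.0):
    """Highlight edges by subtracting two Gaussian-blurred copies of the image."""
    return gaussian_filter(image, sigma_small) - gaussian_filter(image, sigma_large)

face = np.random.rand(128, 128)     # stand-in for a reconstructed face image
edges = difference_of_gaussians(face)
```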
An online database for informing ecological network models: http://kelpforest.ucsc.edu.
Beas-Luna, Rodrigo; Novak, Mark; Carr, Mark H; Tinker, Martin T; Black, August; Caselle, Jennifer E; Hoban, Michael; Malone, Dan; Iles, Alison
2014-01-01
Ecological network models and analyses are recognized as valuable tools for understanding the dynamics and resiliency of ecosystems, and for informing ecosystem-based approaches to management. However, few databases exist that can provide the life history, demographic and species interaction information necessary to parameterize ecological network models. Faced with the difficulty of synthesizing the information required to construct models for kelp forest ecosystems along the West Coast of North America, we developed an online database (http://kelpforest.ucsc.edu/) to facilitate the collation and dissemination of such information. Many of the database's attributes are novel yet the structure is applicable and adaptable to other ecosystem modeling efforts. Information for each taxonomic unit includes stage-specific life history, demography, and body-size allometries. Species interactions include trophic, competitive, facilitative, and parasitic forms. Each data entry is temporally and spatially explicit. The online data entry interface allows researchers anywhere to contribute and access information. Quality control is facilitated by attributing each entry to unique contributor identities and source citations. The database has proven useful as an archive of species and ecosystem-specific information in the development of several ecological network models, for informing management actions, and for education purposes (e.g., undergraduate and graduate training). To facilitate adaptation of the database by other researchers for other ecosystems, the code and technical details on how to customize this database and apply it to other ecosystems are freely available and located at the following link (https://github.com/kelpforest-cameo/databaseui).
An Online Database for Informing Ecological Network Models: http://kelpforest.ucsc.edu
Beas-Luna, Rodrigo; Novak, Mark; Carr, Mark H.; Tinker, Martin T.; Black, August; Caselle, Jennifer E.; Hoban, Michael; Malone, Dan; Iles, Alison
2014-01-01
Ecological network models and analyses are recognized as valuable tools for understanding the dynamics and resiliency of ecosystems, and for informing ecosystem-based approaches to management. However, few databases exist that can provide the life history, demographic and species interaction information necessary to parameterize ecological network models. Faced with the difficulty of synthesizing the information required to construct models for kelp forest ecosystems along the West Coast of North America, we developed an online database (http://kelpforest.ucsc.edu/) to facilitate the collation and dissemination of such information. Many of the database's attributes are novel yet the structure is applicable and adaptable to other ecosystem modeling efforts. Information for each taxonomic unit includes stage-specific life history, demography, and body-size allometries. Species interactions include trophic, competitive, facilitative, and parasitic forms. Each data entry is temporally and spatially explicit. The online data entry interface allows researchers anywhere to contribute and access information. Quality control is facilitated by attributing each entry to unique contributor identities and source citations. The database has proven useful as an archive of species and ecosystem-specific information in the development of several ecological network models, for informing management actions, and for education purposes (e.g., undergraduate and graduate training). To facilitate adaptation of the database by other researchers for other ecosystems, the code and technical details on how to customize this database and apply it to other ecosystems are freely available and located at the following link (https://github.com/kelpforest-cameo/databaseui). PMID:25343723
An online database for informing ecological network models: http://kelpforest.ucsc.edu
Beas-Luna, Rodrigo; Tinker, M. Tim; Novak, Mark; Carr, Mark H.; Black, August; Caselle, Jennifer E.; Hoban, Michael; Malone, Dan; Iles, Alison C.
2014-01-01
Ecological network models and analyses are recognized as valuable tools for understanding the dynamics and resiliency of ecosystems, and for informing ecosystem-based approaches to management. However, few databases exist that can provide the life history, demographic and species interaction information necessary to parameterize ecological network models. Faced with the difficulty of synthesizing the information required to construct models for kelp forest ecosystems along the West Coast of North America, we developed an online database (http://kelpforest.ucsc.edu/) to facilitate the collation and dissemination of such information. Many of the database's attributes are novel yet the structure is applicable and adaptable to other ecosystem modeling efforts. Information for each taxonomic unit includes stage-specific life history, demography, and body-size allometries. Species interactions include trophic, competitive, facilitative, and parasitic forms. Each data entry is temporally and spatially explicit. The online data entry interface allows researchers anywhere to contribute and access information. Quality control is facilitated by attributing each entry to unique contributor identities and source citations. The database has proven useful as an archive of species and ecosystem-specific information in the development of several ecological network models, for informing management actions, and for education purposes (e.g., undergraduate and graduate training). To facilitate adaptation of the database by other researchers for other ecosystems, the code and technical details on how to customize this database and apply it to other ecosystems are freely available and located at the following link (https://github.com/kelpforest-cameo/databaseui).
Virtual reality simulation training in Otolaryngology.
Arora, Asit; Lau, Loretta Y M; Awad, Zaid; Darzi, Ara; Singh, Arvind; Tolley, Neil
2014-01-01
The aim was to conduct a systematic review of the validity data for the virtual reality surgical simulator platforms available in Otolaryngology. The Ovid and Embase databases were searched on July 13, 2013. Four hundred and nine abstracts were independently reviewed by 2 authors. Thirty-six articles that fulfilled the search criteria were retrieved and viewed in full text. These articles were assessed for quantitative data on at least one aspect of face, content, construct or predictive validity. Papers were stratified by simulator and sub-specialty, and further classified by the validation method used. There were 21 articles reporting applications for temporal bone surgery (n = 12), endoscopic sinus surgery (n = 6) and myringotomy (n = 3). Four different simulator platforms were validated for temporal bone surgery and two for each of the other surgical applications. Face/content validation represented the most frequent study type (9/21). Construct validation studies performed on temporal bone and endoscopic sinus surgery simulators showed that performance measures reliably discriminated between different experience levels. Simulation training improved cadaver temporal bone dissection skills and operating room performance in sinus surgery. Several simulator platforms, particularly in temporal bone surgery and endoscopic sinus surgery, are worthy of incorporation into training programmes. Standardised metrics are necessary to guide curriculum development in Otolaryngology. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
Children's Recognition of Emotional Facial Expressions Through Photographs and Drawings.
Brechet, Claire
2017-01-01
The author's purpose was to examine children's recognition of emotional facial expressions, by comparing two types of stimuli: photographs and drawings. The author aimed to investigate whether drawings could be considered a more evocative material than photographs, as a function of age and emotion. Five- and 7-year-old children were presented with photographs and drawings displaying facial expressions of 4 basic emotions (i.e., happiness, sadness, anger, and fear) and were asked to perform a matching task by pointing to the face corresponding to the target emotion labeled by the experimenter. The photographs we used were selected from the Radboud Faces Database and the drawings were designed on the basis of both the facial components involved in the expression of these emotions and the graphic cues children tend to use when asked to depict these emotions in their own drawings. Our results show that drawings are better recognized than photographs for sadness, anger, and fear (with no difference for happiness, due to a ceiling effect), and that the difference between the two types of stimuli tends to be larger for 5-year-olds than for 7-year-olds. These results are discussed in view of their implications, both for future research and for practical application.
Flannery, K B; Fenning, P; Kato, M McGrath; McIntosh, K
2014-06-01
High school is an important time in the educational career of students. It is also a time when adolescents face many behavioral, academic, and social-emotional challenges. Current statistics about the behavioral, academic, and social-emotional challenges faced by adolescents, and the impact on society through incarceration and dropout, have prompted high schools to direct their attention toward keeping students engaged and reducing high-risk behavioral challenges. The purpose of the study was to examine the effects of School-Wide Positive Behavioral Interventions and Supports (SW-PBIS) on the levels of individual student problem behaviors during a 3-year effectiveness trial without random assignment to condition. Participants were 36,653 students in 12 high schools. Eight schools implemented SW-PBIS, and four schools served as comparison schools. Results of a multilevel latent growth model showed statistically significant decreases in student office discipline referrals in SW-PBIS schools, with increases in comparison schools, when controlling for enrollment and percent of students receiving free or reduced price meals. In addition, as fidelity of implementation increased, office discipline referrals significantly decreased. Results are discussed in terms of effectiveness of a SW-PBIS approach in high schools and considerations to enhance fidelity of implementation. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Hudson, Lawrence N; Newbold, Tim; Contu, Sara; Hill, Samantha L L; Lysenko, Igor; De Palma, Adriana; Phillips, Helen R P; Senior, Rebecca A; Bennett, Dominic J; Booth, Hollie; Choimes, Argyrios; Correia, David L P; Day, Julie; Echeverría-Londoño, Susy; Garon, Morgan; Harrison, Michelle L K; Ingram, Daniel J; Jung, Martin; Kemp, Victoria; Kirkpatrick, Lucinda; Martin, Callum D; Pan, Yuan; White, Hannah J; Aben, Job; Abrahamczyk, Stefan; Adum, Gilbert B; Aguilar-Barquero, Virginia; Aizen, Marcelo A; Ancrenaz, Marc; Arbeláez-Cortés, Enrique; Armbrecht, Inge; Azhar, Badrul; Azpiroz, Adrián B; Baeten, Lander; Báldi, András; Banks, John E; Barlow, Jos; Batáry, Péter; Bates, Adam J; Bayne, Erin M; Beja, Pedro; Berg, Åke; Berry, Nicholas J; Bicknell, Jake E; Bihn, Jochen H; Böhning-Gaese, Katrin; Boekhout, Teun; Boutin, Céline; Bouyer, Jérémy; Brearley, Francis Q; Brito, Isabel; Brunet, Jörg; Buczkowski, Grzegorz; Buscardo, Erika; Cabra-García, Jimmy; Calviño-Cancela, María; Cameron, Sydney A; Cancello, Eliana M; Carrijo, Tiago F; Carvalho, Anelena L; Castro, Helena; Castro-Luna, Alejandro A; Cerda, Rolando; Cerezo, Alexis; Chauvat, Matthieu; Clarke, Frank M; Cleary, Daniel F R; Connop, Stuart P; D'Aniello, Biagio; da Silva, Pedro Giovâni; Darvill, Ben; Dauber, Jens; Dejean, Alain; Diekötter, Tim; Dominguez-Haydar, Yamileth; Dormann, Carsten F; Dumont, Bertrand; Dures, Simon G; Dynesius, Mats; Edenius, Lars; Elek, Zoltán; Entling, Martin H; Farwig, Nina; Fayle, Tom M; Felicioli, Antonio; Felton, Annika M; Ficetola, Gentile F; Filgueiras, Bruno K C; Fonte, Steven J; Fraser, Lauchlan H; Fukuda, Daisuke; Furlani, Dario; Ganzhorn, Jörg U; Garden, Jenni G; Gheler-Costa, Carla; Giordani, Paolo; Giordano, Simonetta; Gottschalk, Marco S; Goulson, Dave; Gove, Aaron D; Grogan, James; Hanley, Mick E; Hanson, Thor; Hashim, Nor R; Hawes, Joseph E; Hébert, Christian; Helden, Alvin J; Henden, John-André; Hernández, Lionel; Herzog, Felix; Higuera-Diaz, Diego; Hilje, Branko; Horgan, Finbarr G; Horváth, Roland; Hylander, Kristoffer; Isaacs-Cubides, Paola; Ishitani, Masahiro; Jacobs, Carmen T; Jaramillo, Víctor J; Jauker, Birgit; Jonsell, Mats; Jung, Thomas S; Kapoor, Vena; Kati, Vassiliki; Katovai, Eric; Kessler, Michael; Knop, Eva; Kolb, Annette; Kőrösi, Ádám; Lachat, Thibault; Lantschner, Victoria; Le Féon, Violette; LeBuhn, Gretchen; Légaré, Jean-Philippe; Letcher, Susan G; Littlewood, Nick A; López-Quintero, Carlos A; Louhaichi, Mounir; Lövei, Gabor L; Lucas-Borja, Manuel Esteban; Luja, Victor H; Maeto, Kaoru; Magura, Tibor; Mallari, Neil Aldrin; Marin-Spiotta, Erika; Marshall, E J P; Martínez, Eliana; Mayfield, Margaret M; Mikusinski, Grzegorz; Milder, Jeffrey C; Miller, James R; Morales, Carolina L; Muchane, Mary N; Muchane, Muchai; Naidoo, Robin; Nakamura, Akihiro; Naoe, Shoji; Nates-Parra, Guiomar; Navarrete Gutierrez, Dario A; Neuschulz, Eike L; Noreika, Norbertas; Norfolk, Olivia; Noriega, Jorge Ari; Nöske, Nicole M; O'Dea, Niall; Oduro, William; Ofori-Boateng, Caleb; Oke, Chris O; Osgathorpe, Lynne M; Paritsis, Juan; Parra-H, Alejandro; Pelegrin, Nicolás; Peres, Carlos A; Persson, Anna S; Petanidou, Theodora; Phalan, Ben; Philips, T Keith; Poveda, Katja; Power, Eileen F; Presley, Steven J; Proença, Vânia; Quaranta, Marino; Quintero, Carolina; Redpath-Downing, Nicola A; Reid, J Leighton; Reis, Yana T; Ribeiro, Danilo B; Richardson, Barbara A; Richardson, Michael J; Robles, Carolina A; Römbke, Jörg; Romero-Duque, Luz Piedad; Rosselli, Loreta; Rossiter, Stephen J; Roulston, T'ai H; Rousseau, Laurent; 
Sadler, Jonathan P; Sáfián, Szabolcs; Saldaña-Vázquez, Romeo A; Samnegård, Ulrika; Schüepp, Christof; Schweiger, Oliver; Sedlock, Jodi L; Shahabuddin, Ghazala; Sheil, Douglas; Silva, Fernando A B; Slade, Eleanor M; Smith-Pardo, Allan H; Sodhi, Navjot S; Somarriba, Eduardo J; Sosa, Ramón A; Stout, Jane C; Struebig, Matthew J; Sung, Yik-Hei; Threlfall, Caragh G; Tonietto, Rebecca; Tóthmérész, Béla; Tscharntke, Teja; Turner, Edgar C; Tylianakis, Jason M; Vanbergen, Adam J; Vassilev, Kiril; Verboven, Hans A F; Vergara, Carlos H; Vergara, Pablo M; Verhulst, Jort; Walker, Tony R; Wang, Yanping; Watling, James I; Wells, Konstans; Williams, Christopher D; Willig, Michael R; Woinarski, John C Z; Wolf, Jan H D; Woodcock, Ben A; Yu, Douglas W; Zaitsev, Andrey S; Collen, Ben; Ewers, Rob M; Mace, Georgina M; Purves, Drew W; Scharlemann, Jörn P W; Purvis, Andy
2014-01-01
Biodiversity continues to decline in the face of increasing anthropogenic pressures such as habitat destruction, exploitation, pollution and introduction of alien species. Existing global databases of species’ threat status or population time series are dominated by charismatic species. The collation of datasets with broad taxonomic and biogeographic extents, and that support computation of a range of biodiversity indicators, is necessary to enable better understanding of historical declines and to project – and avert – future declines. We describe and assess a new database of more than 1.6 million samples from 78 countries representing over 28,000 species, collated from existing spatial comparisons of local-scale biodiversity exposed to different intensities and types of anthropogenic pressures, from terrestrial sites around the world. The database contains measurements taken in 208 (of 814) ecoregions, 13 (of 14) biomes, 25 (of 35) biodiversity hotspots and 16 (of 17) megadiverse countries. The database contains more than 1% of the total number of all species described, and more than 1% of the described species within many taxonomic groups – including flowering plants, gymnosperms, birds, mammals, reptiles, amphibians, beetles, lepidopterans and hymenopterans. The dataset, which is still being added to, is therefore already considerably larger and more representative than those used by previous quantitative models of biodiversity trends and responses. The database is being assembled as part of the PREDICTS project (Projecting Responses of Ecological Diversity In Changing Terrestrial Systems – http://www.predicts.org.uk). We make site-level summary data available alongside this article. The full database will be publicly available in 2015. PMID:25558364
Hudson, Lawrence N; Newbold, Tim; Contu, Sara; Hill, Samantha L L; Lysenko, Igor; De Palma, Adriana; Phillips, Helen R P; Senior, Rebecca A; Bennett, Dominic J; Booth, Hollie; Choimes, Argyrios; Correia, David L P; Day, Julie; Echeverría-Londoño, Susy; Garon, Morgan; Harrison, Michelle L K; Ingram, Daniel J; Jung, Martin; Kemp, Victoria; Kirkpatrick, Lucinda; Martin, Callum D; Pan, Yuan; White, Hannah J; Aben, Job; Abrahamczyk, Stefan; Adum, Gilbert B; Aguilar-Barquero, Virginia; Aizen, Marcelo A; Ancrenaz, Marc; Arbeláez-Cortés, Enrique; Armbrecht, Inge; Azhar, Badrul; Azpiroz, Adrián B; Baeten, Lander; Báldi, András; Banks, John E; Barlow, Jos; Batáry, Péter; Bates, Adam J; Bayne, Erin M; Beja, Pedro; Berg, Åke; Berry, Nicholas J; Bicknell, Jake E; Bihn, Jochen H; Böhning-Gaese, Katrin; Boekhout, Teun; Boutin, Céline; Bouyer, Jérémy; Brearley, Francis Q; Brito, Isabel; Brunet, Jörg; Buczkowski, Grzegorz; Buscardo, Erika; Cabra-García, Jimmy; Calviño-Cancela, María; Cameron, Sydney A; Cancello, Eliana M; Carrijo, Tiago F; Carvalho, Anelena L; Castro, Helena; Castro-Luna, Alejandro A; Cerda, Rolando; Cerezo, Alexis; Chauvat, Matthieu; Clarke, Frank M; Cleary, Daniel F R; Connop, Stuart P; D'Aniello, Biagio; da Silva, Pedro Giovâni; Darvill, Ben; Dauber, Jens; Dejean, Alain; Diekötter, Tim; Dominguez-Haydar, Yamileth; Dormann, Carsten F; Dumont, Bertrand; Dures, Simon G; Dynesius, Mats; Edenius, Lars; Elek, Zoltán; Entling, Martin H; Farwig, Nina; Fayle, Tom M; Felicioli, Antonio; Felton, Annika M; Ficetola, Gentile F; Filgueiras, Bruno K C; Fonte, Steven J; Fraser, Lauchlan H; Fukuda, Daisuke; Furlani, Dario; Ganzhorn, Jörg U; Garden, Jenni G; Gheler-Costa, Carla; Giordani, Paolo; Giordano, Simonetta; Gottschalk, Marco S; Goulson, Dave; Gove, Aaron D; Grogan, James; Hanley, Mick E; Hanson, Thor; Hashim, Nor R; Hawes, Joseph E; Hébert, Christian; Helden, Alvin J; Henden, John-André; Hernández, Lionel; Herzog, Felix; Higuera-Diaz, Diego; Hilje, Branko; Horgan, Finbarr G; Horváth, Roland; Hylander, Kristoffer; Isaacs-Cubides, Paola; Ishitani, Masahiro; Jacobs, Carmen T; Jaramillo, Víctor J; Jauker, Birgit; Jonsell, Mats; Jung, Thomas S; Kapoor, Vena; Kati, Vassiliki; Katovai, Eric; Kessler, Michael; Knop, Eva; Kolb, Annette; Kőrösi, Ádám; Lachat, Thibault; Lantschner, Victoria; Le Féon, Violette; LeBuhn, Gretchen; Légaré, Jean-Philippe; Letcher, Susan G; Littlewood, Nick A; López-Quintero, Carlos A; Louhaichi, Mounir; Lövei, Gabor L; Lucas-Borja, Manuel Esteban; Luja, Victor H; Maeto, Kaoru; Magura, Tibor; Mallari, Neil Aldrin; Marin-Spiotta, Erika; Marshall, E J P; Martínez, Eliana; Mayfield, Margaret M; Mikusinski, Grzegorz; Milder, Jeffrey C; Miller, James R; Morales, Carolina L; Muchane, Mary N; Muchane, Muchai; Naidoo, Robin; Nakamura, Akihiro; Naoe, Shoji; Nates-Parra, Guiomar; Navarrete Gutierrez, Dario A; Neuschulz, Eike L; Noreika, Norbertas; Norfolk, Olivia; Noriega, Jorge Ari; Nöske, Nicole M; O'Dea, Niall; Oduro, William; Ofori-Boateng, Caleb; Oke, Chris O; Osgathorpe, Lynne M; Paritsis, Juan; Parra-H, Alejandro; Pelegrin, Nicolás; Peres, Carlos A; Persson, Anna S; Petanidou, Theodora; Phalan, Ben; Philips, T Keith; Poveda, Katja; Power, Eileen F; Presley, Steven J; Proença, Vânia; Quaranta, Marino; Quintero, Carolina; Redpath-Downing, Nicola A; Reid, J Leighton; Reis, Yana T; Ribeiro, Danilo B; Richardson, Barbara A; Richardson, Michael J; Robles, Carolina A; Römbke, Jörg; Romero-Duque, Luz Piedad; Rosselli, Loreta; Rossiter, Stephen J; Roulston, T'ai H; Rousseau, Laurent; 
Sadler, Jonathan P; Sáfián, Szabolcs; Saldaña-Vázquez, Romeo A; Samnegård, Ulrika; Schüepp, Christof; Schweiger, Oliver; Sedlock, Jodi L; Shahabuddin, Ghazala; Sheil, Douglas; Silva, Fernando A B; Slade, Eleanor M; Smith-Pardo, Allan H; Sodhi, Navjot S; Somarriba, Eduardo J; Sosa, Ramón A; Stout, Jane C; Struebig, Matthew J; Sung, Yik-Hei; Threlfall, Caragh G; Tonietto, Rebecca; Tóthmérész, Béla; Tscharntke, Teja; Turner, Edgar C; Tylianakis, Jason M; Vanbergen, Adam J; Vassilev, Kiril; Verboven, Hans A F; Vergara, Carlos H; Vergara, Pablo M; Verhulst, Jort; Walker, Tony R; Wang, Yanping; Watling, James I; Wells, Konstans; Williams, Christopher D; Willig, Michael R; Woinarski, John C Z; Wolf, Jan H D; Woodcock, Ben A; Yu, Douglas W; Zaitsev, Andrey S; Collen, Ben; Ewers, Rob M; Mace, Georgina M; Purves, Drew W; Scharlemann, Jörn P W; Purvis, Andy
2014-12-01
Biodiversity continues to decline in the face of increasing anthropogenic pressures such as habitat destruction, exploitation, pollution and introduction of alien species. Existing global databases of species' threat status or population time series are dominated by charismatic species. The collation of datasets with broad taxonomic and biogeographic extents, and that support computation of a range of biodiversity indicators, is necessary to enable better understanding of historical declines and to project - and avert - future declines. We describe and assess a new database of more than 1.6 million samples from 78 countries representing over 28,000 species, collated from existing spatial comparisons of local-scale biodiversity exposed to different intensities and types of anthropogenic pressures, from terrestrial sites around the world. The database contains measurements taken in 208 (of 814) ecoregions, 13 (of 14) biomes, 25 (of 35) biodiversity hotspots and 16 (of 17) megadiverse countries. The database contains more than 1% of the total number of all species described, and more than 1% of the described species within many taxonomic groups - including flowering plants, gymnosperms, birds, mammals, reptiles, amphibians, beetles, lepidopterans and hymenopterans. The dataset, which is still being added to, is therefore already considerably larger and more representative than those used by previous quantitative models of biodiversity trends and responses. The database is being assembled as part of the PREDICTS project (Projecting Responses of Ecological Diversity In Changing Terrestrial Systems - http://www.predicts.org.uk). We make site-level summary data available alongside this article. The full database will be publicly available in 2015.
3D landmarking in multiexpression face analysis: a preliminary study on eyebrows and mouth.
Vezzetti, Enrico; Marcolin, Federica
2014-08-01
The application of three-dimensional (3D) facial analysis and landmarking algorithms in the field of maxillofacial surgery and other medical applications, such as diagnosis of diseases by facial anomalies and dysmorphism, has gained a lot of attention. In a previous work, we used a geometric approach to automatically extract some 3D facial key points, called landmarks, working in the differential geometry domain, through the coefficients of fundamental forms, principal curvatures, mean and Gaussian curvatures, derivatives, shape and curvedness indexes, and tangent map. In this article we describe the extension of our previous landmarking algorithm, which is now able to extract eyebrows and mouth landmarks using both old and new meshes. The algorithm has been tested on our face database and on the public Bosphorus 3D database. We chose to work on the mouth and eyebrows as a separate study because of the role that these parts play in facial expressions. In fact, since the mouth is the part of the face that moves the most and affects mainly facial expressions, extracting mouth landmarks from various facial poses means that the newly developed algorithm is pose-independent. This journal requires that authors assign a level of evidence to each submission to which Evidence-Based Medicine rankings are applicable. This excludes Review Articles, Book Reviews, and manuscripts that concern Basic Science, Animal Studies, Cadaver Studies, and Experimental Studies. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors http://www.springer.com/00266 .
Automatic pattern localization across layout database and photolithography mask
NASA Astrophysics Data System (ADS)
Morey, Philippe; Brault, Frederic; Beisser, Eric; Ache, Oliver; Röth, Klaus-Dieter
2016-03-01
Advanced process photolithography masks require more and more controls for registration versus design and critical dimension uniformity (CDU). The measurement points should be distributed over the whole mask and may be denser in areas critical to wafer overlay requirements. This means that some, if not many, of these controls should be made inside the customer die and may use non-dedicated patterns. It is then mandatory to access the original layout database to select patterns for the metrology process. Finding hundreds of relevant patterns in a database containing billions of polygons may be possible, but in addition, it is mandatory to create the complete metrology job quickly and reliably. Combining, on the one hand, software expertise in mask database processing and, on the other hand, advanced skills in control and registration equipment, we have developed a Mask Dataprep Station able to select an appropriate number of measurement targets and their positions in a huge database and automatically create measurement jobs on the corresponding areas on the mask for the registration metrology system. In addition, the required design clips are generated from the database in order to perform the rendering procedure on the metrology system. This new methodology has been validated on a real production line for the most advanced processes. This paper presents the main challenges that we have faced, as well as some results on the overall performance.
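As a rough illustration of spreading measurement targets over the whole mask, the sketch below keeps at most one candidate pattern per cell of a coarse grid; the grid size and selection rule are assumptions and not the vendor tool's actual algorithm.

```python
import numpy as np

def spread_targets(candidates, mask_size, grid=(8, 8)):
    """Pick at most one candidate pattern per grid cell so measurement sites
    cover the whole mask area (illustrative selection rule only)."""
    cell = (mask_size[0] / grid[0], mask_size[1] / grid[1])
    chosen, seen = [], set()
    for x, y in candidates:
        key = (int(x // cell[0]), int(y // cell[1]))
        if key not in seen:
            seen.add(key)
            chosen.append((x, y))
    return chosen

rng = np.random.default_rng(1)
sites = spread_targets(rng.uniform(0, 100, (500, 2)), mask_size=(100.0, 100.0))
print(len(sites))   # at most 64 well-spread sites
```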
Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro
2011-01-01
Global cloud frameworks for bioinformatics research databases become huge and heterogeneous; solutions face various diametric challenges comprising cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN published 192 mammalian, plant and protein life sciences databases having 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data this database integration framework covers is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life sciences data securely under the control of programming languages popularly used by bioinformaticians such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents like ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org. PMID:21632604
A meta-synthesis on parenting a child with autism
Ooi, Khim Lynn; Ong, Yin Sin; Jacob, Sabrina Anne; Khan, Tahir Mehmood
2016-01-01
Background The lifelong nature of autism in a child has deep implications on parents as they are faced with a range of challenges and emotional consequences in raising the child. The aim of this meta-synthesis was to explore the perspectives of parents in raising a child with autism in the childhood period to gain an insight of the adaptations and beliefs of parents toward autism, their family and social experiences, as well as their perceptions toward health and educational services. Methods A systematic search of six databases (PubMed, EMBASE, PsychInfo, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, and Database of Abstracts of Reviews of Effects [DARE]) was conducted from inception up to September 30, 2014. Full-text English articles of qualitative studies describing parents’ perceptions relating to the care of children younger than 12 years of age and diagnosed with a sole disorder of autism were included. Results A total of 50 eligible articles were appraised and analyzed, identifying four core themes encompassing all thoughts, emotions, and experiences commonly expressed by parents: 1) The Parent, 2) Impact on the Family, 3) Social Impact, and 4) Health and Educational Services. Findings revealed that parents who have a child with autism experienced multiple challenges in different aspects of care, impacting on parents’ stress and adaptation. Conclusion Health care provision should be family centered, addressing and supporting the needs of the whole family and not just the affected child, to ensure the family’s well-being and quality of life in the face of a diagnosis of autism. PMID:27103804
Zhou, Guomei; Cheng, Zhijie; Yue, Zhenzhu; Tredoux, Colin; He, Jibo; Wang, Ling
2015-01-01
Studies have shown that people are better at recognizing human faces from their own-race than from other-races, an effect often termed the Own-Race Advantage. The current study investigates whether there is an Own-Race Advantage in attention and its neural correlates. Participants were asked to search for a human face among animal faces. Experiment 1 showed a classic Own-Race Advantage in response time both for Chinese and Black South African participants. Using event-related potentials (ERPs), Experiment 2 showed a similar Own-Race Advantage in response time for both upright faces and inverted faces. Moreover, the latency of N2pc for own-race faces was earlier than that for other-race faces. These results suggested that own-race faces capture attention more efficiently than other-race faces.
NASA Astrophysics Data System (ADS)
Horii, Steven C.; Kim, Woojin; Boonn, William; Iyoob, Christopher; Maston, Keith; Coleman, Beverly G.
2011-03-01
When the first quarter of 2010 Department of Radiology statistics were provided to the Section Chiefs, the authors (SH, BC) were alarmed to discover that Ultrasound showed a decrease of 2.5 percent in billed examinations. This seemed to be in direct contradistinction to the experience of the ultrasound faculty members and sonographers. Their experience was that they were far busier than during the same quarter of 2009. The one exception that all acknowledged was the month of February, 2010 when several major winter storms resulted in a much decreased Hospital admission and Emergency Department visit rate. Since these statistics in part help establish priorities for capital budget items, professional and technical staffing levels, and levels of incentive salary, they are taken very seriously. The availability of a desktop, Web-based RIS database search tool developed by two of the authors (WK, WB) and built-in database functions of the ultrasound miniPACS, made it possible for us very rapidly to develop and test hypotheses for why the number of billable examinations was declining in the face of what experience told the authors was an increasing number of examinations being performed. Within a short time, we identified the major cause as errors on the part of the company retained to verify billable Current Procedural Terminology (CPT) codes against ultrasound reports. This information is being used going forward to recover unbilled examinations and take measures to reduce or eliminate the types of coding errors that resulted in the problem.
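The cross-check the abstract describes amounts to reconciling performed examinations against billed CPT codes; a minimal pandas sketch with hypothetical column names and toy rows is shown below.

```python
# pip install pandas
import pandas as pd

# Toy stand-ins for RIS exam records and the billing company's coded claims.
performed = pd.DataFrame({"accession": ["A1", "A2", "A3", "A4"],
                          "exam":      ["OB US", "Renal US", "Thyroid US", "OB US"]})
billed    = pd.DataFrame({"accession": ["A1", "A3"],
                          "cpt":       ["76805", "76536"]})

# Exams performed but never billed are candidates for recovered charges / coding review.
unbilled = performed.merge(billed, on="accession", how="left")
print(unbilled[unbilled["cpt"].isna()])
```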
Essentialist Thinking Predicts Decrements in Children's Memory for Racially Ambiguous Faces
ERIC Educational Resources Information Center
Gaither, Sarah E.; Schultz, Jennifer R.; Pauker, Kristin; Sommers, Samuel R.; Maddox, Keith B.; Ambady, Nalini
2014-01-01
Past research shows that adults often display poor memory for racially ambiguous and racial outgroup faces, with both face types remembered worse than own-race faces. In the present study, the authors examined whether children also show this pattern of results. They also examined whether emerging essentialist thinking about race predicts…
Unconscious processing of facial attractiveness: invisible attractive faces orient visual attention
Hung, Shao-Min; Nieh, Chih-Hsuan; Hsieh, Po-Jang
2016-01-01
Past research has proven human’s extraordinary ability to extract information from a face in the blink of an eye, including its emotion, gaze direction, and attractiveness. However, it remains elusive whether facial attractiveness can be processed and influences our behaviors in the complete absence of conscious awareness. Here we demonstrate unconscious processing of facial attractiveness with three distinct approaches. In Experiment 1, the time taken for faces to break interocular suppression was measured. The results showed that attractive faces enjoyed the privilege of breaking suppression and reaching consciousness earlier. In Experiment 2, we further showed that attractive faces had lower visibility thresholds, again suggesting that facial attractiveness could be processed more easily to reach consciousness. Crucially, in Experiment 3, a significant decrease of accuracy on an orientation discrimination task subsequent to an invisible attractive face showed that attractive faces, albeit suppressed and invisible, still exerted an effect by orienting attention. Taken together, for the first time, we show that facial attractiveness can be processed in the complete absence of consciousness, and an unconscious attractive face is still capable of directing our attention. PMID:27848992
Unconscious processing of facial attractiveness: invisible attractive faces orient visual attention.
Hung, Shao-Min; Nieh, Chih-Hsuan; Hsieh, Po-Jang
2016-11-16
Past research has proven humans' extraordinary ability to extract information from a face in the blink of an eye, including its emotion, gaze direction, and attractiveness. However, it remains elusive whether facial attractiveness can be processed, and can influence our behaviors, in the complete absence of conscious awareness. Here we demonstrate unconscious processing of facial attractiveness with three distinct approaches. In Experiment 1, the time taken for faces to break interocular suppression was measured. The results showed that attractive faces enjoyed the privilege of breaking suppression and reaching consciousness earlier. In Experiment 2, we further showed that attractive faces had lower visibility thresholds, again suggesting that facial attractiveness could be processed more easily to reach consciousness. Crucially, in Experiment 3, a significant decrease of accuracy on an orientation discrimination task subsequent to an invisible attractive face showed that attractive faces, albeit suppressed and invisible, still exerted an effect by orienting attention. Taken together, for the first time, we show that facial attractiveness can be processed in the complete absence of consciousness, and an unconscious attractive face is still capable of directing our attention.
From evidence based bioethics to evidence based social policies.
Bonneux, Luc
2007-01-01
In this issue, Norwegian authors demonstrate that the causes of early expulsion from the workforce are rooted in childhood. They reconstruct individual biographies in administrative databases linked by a unique national identification number, looking forward 15 years into early adulthood and back 20 years to birth, with close to negligible loss to follow-up. Evidence based bioethics suggests that it is better to live in a country that allows biographies to be reconstructed from administrative databases than in one that forbids such access through restrictive legislation based on privacy considerations. The benefits of knowledge gained from existing and accessible information are tangible, particularly for the weak and the poor, while the harms of theoretical privacy invasion have not yet materialised. The study shows once again that disadvantage runs in families. Low parental education, parental disability and unstable marital unions predict early disability pensions and premature expulsion from gainful employment. The effect of low parental education is mediated by low education of the index person. However, amid a feast of descriptive studies of socio-economic causes of ill health, we still face a famine of evaluative intervention studies. An evidence based social policy should be built on effective interventions that can break the vicious circles of disability handed down from generation to generation.
Manifold Regularized Experimental Design for Active Learning.
Zhang, Lining; Shum, Hubert P H; Shao, Ling
2016-12-02
Various machine learning and data mining tasks in classification require abundant labeled samples for training. Conventional active learning methods aim to label the most informative samples so as to reduce the labeling effort of the user. Many previous studies in active learning select one sample after another in a greedy manner, which is inefficient because the classification model must be retrained after each newly labeled sample. Moreover, many popular active learning approaches select the most uncertain samples by leveraging the classification hyperplane of the classifier, which is problematic because the hyperplane is inaccurate when the training set is small. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel active learning method called manifold regularized experimental design (MRED), which can select multiple informative samples at one time for labeling. In addition, MRED gives an explicit geometric explanation of the samples selected for the user to label. Unlike existing active learning methods, it avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Experiments on synthetic datasets, the Yale face database and the Corel image database show that MRED outperforms existing methods.
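The abstract describes the method only at a high level. The sketch below is not the authors' MRED algorithm; it illustrates, under stated assumptions, the general flavor of batch-mode experimental design with a manifold (graph-Laplacian) regularizer: the kernel is deformed by the Laplacian of a kNN graph, and a batch of samples is then chosen greedily by a log-determinant criterion that rewards informative, mutually diverse points.

```python
# Illustrative sketch only -- not the MRED algorithm from the paper.
# Batch selection on a Laplacian-deformed kernel with a greedy log-det criterion.
import numpy as np
from scipy.spatial.distance import cdist

def select_batch(X, k, n_neighbors=10, gamma=1.0, ridge=1e-3):
    n = X.shape[0]
    d2 = cdist(X, X, "sqeuclidean")
    K = np.exp(-d2 / (2.0 * np.median(d2)))                 # RBF Gram matrix
    # kNN graph Laplacian L = D - W (symmetrised adjacency)
    W = np.zeros((n, n))
    for i, nbrs in enumerate(np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]):
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)
    L = np.diag(W.sum(axis=1)) - W
    # Deform the kernel with the Laplacian so selection respects the data manifold
    Kt = K - K @ np.linalg.solve(np.eye(n) + gamma * L @ K, gamma * L @ K)
    # Greedy batch selection: each new point maximises the log-det of the
    # selected kernel sub-block, i.e. it is informative yet non-redundant
    selected = []
    for _ in range(k):
        best_j, best_val = None, -np.inf
        for j in range(n):
            if j in selected:
                continue
            idx = selected + [j]
            sub = Kt[np.ix_(idx, idx)] + ridge * np.eye(len(idx))
            val = np.linalg.slogdet(sub)[1]
            if val > best_val:
                best_val, best_j = val, j
        selected.append(best_j)
    return selected                                          # indices to send for labeling

# Toy usage: pick 5 points to label from 200 random 2-D samples
print(select_batch(np.random.default_rng(0).normal(size=(200, 2)), k=5))
```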
Fusiform gyrus face selectivity relates to individual differences in facial recognition ability.
Furl, Nicholas; Garrido, Lúcia; Dolan, Raymond J; Driver, Jon; Duchaine, Bradley
2011-07-01
Regions of the occipital and temporal lobes, including a region in the fusiform gyrus (FG), have been proposed to constitute a "core" visual representation system for faces, in part because they show face selectivity and face repetition suppression. But recent fMRI studies of developmental prosopagnosics (DPs) raise questions about whether these measures relate to face processing skills. Although DPs manifest deficient face processing, most studies to date have not shown unequivocal reductions of functional responses in the proposed core regions. We scanned 15 DPs and 15 non-DP control participants with fMRI while employing factor analysis to derive behavioral components related to face identification or other processes. Repetition suppression specific to facial identities in FG or to expression in FG and STS did not show compelling relationships with face identification ability. However, we identified robust relationships between face selectivity and face identification ability in FG across our sample for several convergent measures, including voxel-wise statistical parametric mapping, peak face selectivity in individually defined "fusiform face areas" (FFAs), and anatomical extents (cluster sizes) of those FFAs. None of these measures showed associations with behavioral expression or object recognition ability. As a group, DPs had reduced face-selective responses in bilateral FFA when compared with non-DPs. Individual DPs were also more likely than non-DPs to lack expected face-selective activity in core regions. These findings associate individual differences in face processing ability with selectivity in core face processing regions. This confirms that face selectivity can provide a valid marker for neural mechanisms that contribute to face identification ability.
Holistic face training enhances face processing in developmental prosopagnosia
Cohan, Sarah; Nakayama, Ken
2014-01-01
Prosopagnosia has largely been regarded as an untreatable disorder. However, recent case studies using cognitive training have shown that it is possible to enhance face recognition abilities in individuals with developmental prosopagnosia. Our goal was to determine whether this approach could be effective in a larger population of developmental prosopagnosics. We trained 24 developmental prosopagnosics using a 3-week online face-training program targeting holistic face processing. Twelve subjects with developmental prosopagnosia were assessed before and after training; the other 12 were assessed before and after a waiting period, then performed the training, and were assessed again. The assessments included measures of front-view face discrimination, face discrimination with viewpoint changes, measures of holistic face processing, and a 5-day diary to quantify potential real-world improvements. Compared with the waiting period, developmental prosopagnosics showed moderate but significant overall training-related improvements on measures of front-view face discrimination. Those who reached the more difficult levels of training (‘better’ trainees) showed the strongest improvements in front-view face discrimination and significantly increased holistic face processing, to the point of being similar to unimpaired control subjects. Despite challenges in characterizing developmental prosopagnosics’ everyday face recognition and potential biases in self-report, results also showed modest but consistent self-reported diary improvements. In summary, we demonstrate that cognitive training targeting holistic processing can enhance face perception across a group of developmental prosopagnosics, and we further suggest that those who improved the most on the training task received the greatest benefits. PMID:24691394
Active Authentication: Beyond Passwords
2011-11-18
[Slide-extraction fragments: data-breach figures sourced from www.privacyrights.org/data-breach, including a report that hackers broke into a Gannett Co database containing personal information; candidate biometric modalities listed for continuous authentication include knuckle pattern, lip pattern, nail bed pattern, nose pattern, oto-acoustic emissions, palmprint, retina pattern, skin pattern, pulse, electrocardiogram, electroencephalogram, and face geometry, some flagged as potentially suitable for continuous monitoring.]
Air Land Sea Bulletin. Issue No. 2013-1
2013-01-01
[Extraction fragments: biometric modalities include face, fingerprint, iris, DNA, and palm print; biometric capabilities may achieve enabling effects such as the ability to separate and identify persons of interest (POIs); the SEEK II device (Figure 1) is used to obtain forensic-quality fingerprints, latent fingerprints, iris images, photos, and other biometric data, and matches fingerprints and iris images, along with biographical contextual data of POIs, against an internal biometrics enrollment database.]
The other-race effect in children from a multiracial population: A cross-cultural comparison.
Tham, Diana Su Yun; Bremner, J Gavin; Hay, Dennis
2017-03-01
The role of experience with other-race faces in the development of the other-race effect was investigated through a cross-cultural comparison between 5- and 6-year-olds and 13- and 14-year-olds raised in a monoracial (British White, n=83) population and a multiracial (Malaysian Chinese, n=68) population. British White children showed an other-race effect for three other-race faces (Chinese, Malay, and African Black) that was stable across age. Malaysian Chinese children showed a recognition deficit for less experienced faces (African Black) but showed a recognition advantage for faces with which they had direct or indirect experience. Interestingly, younger (Malaysian Chinese) children showed no other-race effect for female faces, such that they could recognize all female faces regardless of race. These findings point to the importance of early race and gender experiences in reorganizing the face representation to accommodate changes in experience across development. Copyright © 2016 Elsevier Inc. All rights reserved.
The changing face of emotion: age-related patterns of amygdala activation to salient faces
Evans, Jennifer W.; Morris, Drew; Lewis, Marc D.; Taylor, Margot J.
2011-01-01
The present study investigated age-related differences in the amygdala and other nodes of face-processing networks in response to facial expression and familiarity. fMRI data were analyzed from 31 children (3.5–8.5 years) and 14 young adults (18–33 years) who viewed pictures of familiar (mothers) and unfamiliar emotional faces. Results showed that amygdala activation for faces over a scrambled image baseline increased with age. Children, but not adults, showed greater amygdala activation to happy than angry faces; in addition, amygdala activation for angry faces increased with age. In keeping with growing evidence of a positivity bias in young children, our data suggest that children find happy faces to be more salient or meaningful than angry faces. Both children and adults showed preferential activation to mothers’ over strangers’ faces in a region of rostral anterior cingulate cortex associated with self-evaluation, suggesting that some nodes in frontal evaluative networks are active early in development. This study presents novel data on neural correlates of face processing in childhood and indicates that preferential amygdala activation for emotional expressions changes with age. PMID:20194512
Learning toward practical head pose estimation
NASA Astrophysics Data System (ADS)
Sang, Gaoli; He, Feixiang; Zhu, Rong; Xuan, Shibin
2017-08-01
Head pose is useful information for many face-related tasks, such as face recognition, behavior analysis, human-computer interfaces, etc. Existing head pose estimation methods usually assume that the face images have been well aligned or that sufficient and precise training data are available. In practical applications, however, these assumptions are very likely to be invalid. This paper first investigates the impact of the failure of these assumptions, i.e., misalignment of face images, uncertainty and undersampling of training data, on head pose estimation accuracy of state-of-the-art methods. A learning-based approach is then designed to enhance the robustness of head pose estimation to these factors. To cope with misalignment, instead of using hand-crafted features, it seeks suitable features by learning from a set of training data with a deep convolutional neural network (DCNN), such that the training data can be best classified into the correct head pose categories. To handle uncertainty and undersampling, it employs multivariate labeling distributions (MLDs) with dense sampling intervals to represent the head pose attributes of face images. The correlation between the features and the dense MLD representations of face images is approximated by a maximum entropy model, whose parameters are optimized on the given training data. To estimate the head pose of a face image, its MLD representation is first computed according to the model based on the features extracted from the image by the trained DCNN, and its head pose is then assumed to be the one corresponding to the peak in its MLD. Evaluation experiments on the Pointing'04, FacePix, Multi-PIE, and CASIA-PEAL databases prove the effectiveness and efficiency of the proposed method.
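As a rough illustration of the label-distribution idea described above (not the paper's exact model or features), the sketch below encodes each training face's yaw angle as a Gaussian distribution over a dense grid of poses, fits a simple maximum-entropy (softmax) predictor by minimizing cross-entropy against those distributions, and reads the estimated pose off the peak of the predicted distribution. The feature vectors here stand in for whatever a trained DCNN would produce.

```python
# Illustrative sketch only -- not the paper's DCNN + MLD + max-entropy model.
# Encodes yaw as a label distribution over a dense pose grid and predicts the peak.
import numpy as np

POSE_GRID = np.arange(-90, 91, 5).astype(float)   # densely sampled yaw angles, degrees

def pose_to_distribution(yaw, sigma=10.0):
    """Turn a (possibly uncertain) ground-truth yaw into a soft label distribution."""
    d = np.exp(-(POSE_GRID - yaw) ** 2 / (2.0 * sigma ** 2))
    return d / d.sum()

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_maxent(features, yaws, lr=0.1, epochs=300, sigma=10.0):
    """Fit a linear max-entropy model by cross-entropy against label distributions.
    `features` is an (n, d) array standing in for learned DCNN descriptors."""
    Y = np.stack([pose_to_distribution(y, sigma) for y in yaws])
    W = np.zeros((features.shape[1], POSE_GRID.size))
    for _ in range(epochs):
        P = softmax(features @ W)
        W -= lr * features.T @ (P - Y) / len(features)   # gradient of cross-entropy
    return W

def predict_pose(features, W):
    """Estimated pose = grid angle at the peak of the predicted distribution."""
    return POSE_GRID[softmax(features @ W).argmax(axis=1)]
```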
NASA Astrophysics Data System (ADS)
Shanafield, Harold; Shamblin, Stephanie; Devarakonda, Ranjeet; McMurry, Ben; Walker Beaty, Tammy; Wilson, Bruce; Cook, Robert B.
2011-02-01
The FLUXNET global network of regional flux tower networks serves to coordinate the regional and global analysis of eddy covariance based CO2, water vapor and energy flux measurements taken at more than 500 sites in continuous long-term operation. The FLUXNET database presently contains information about the location, characteristics, and data availability of each of these sites. To facilitate the coordination and distribution of this information, we redesigned the underlying database and associated web site. We chose the PostgreSQL database as a platform based on its performance, stability and GIS extensions. PostgreSQL allows us to enhance our search and presentation capabilities, which will in turn provide increased functionality for users seeking to understand the FLUXNET data. The redesigned database will also significantly decrease the burden of managing such highly varied data. The website is being developed using the Drupal content management system, which provides many community-developed modules and a robust framework for custom feature development. In parallel, we are working with the regional networks to ensure that the information in the FLUXNET database is identical to that in the regional networks. Going forward, we also plan to develop an automated way to synchronize information with the regional networks.
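As an example of the kind of spatial search the GIS extensions make possible, a PostGIS-backed query might be issued from Python roughly as follows; the table and column names (fluxnet_sites, geom, igbp_class) are invented for illustration and are not the actual FLUXNET schema.

```python
# Hypothetical sketch of a PostGIS-backed site search; schema names are assumptions.
import psycopg2

query = """
    SELECT site_id, site_name
    FROM fluxnet_sites
    WHERE igbp_class = %s
      AND ST_DWithin(
            geom::geography,
            ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography,
            %s)          -- search radius in metres
"""

with psycopg2.connect("dbname=fluxnet") as conn, conn.cursor() as cur:
    # e.g. evergreen needleleaf forest sites within 500 km of Oak Ridge, TN
    cur.execute(query, ("ENF", -84.27, 35.96, 500_000))
    for site_id, site_name in cur.fetchall():
        print(site_id, site_name)
```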
The effect of the buccal corridor and tooth display on smile attractiveness.
Niaki, Esfandiar Akhavan; Arab, Sepideh; Shamshiri, Ahmadreza; Imani, Mohammad Moslem
2015-11-01
The aim of the present study was to evaluate the lay perception of the effect of the buccal corridor and amount of tooth-gingival display on the attractiveness of a smile in different facial types. Using Adobe Photoshop CS3 software, frontal facial images of two smiling Iranian female subjects (one short-faced and one long-faced) were altered to create different magnitudes of buccal corridor display (5, 10, 15, 20 and 25%) and tooth-gingival display (2 mm central incisor show, 6 mm central incisor show, total central incisor show, total tooth show with 2 mm gingival show and total tooth show with 4 mm gingival show). Sixty Iranians (30 males and 30 females) rated the attractiveness of the pictures on a 5-point scale. Narrower smiles were preferred in long-faced subjects compared with short-faced subjects. Minimal tooth show was more attractive than excessive gingival display in short-faced subjects. No statistically significant gender differences were found in the ratings given by the lay assessors. Harmonious geometry of the smile and face in both the vertical and transverse dimensions influences smile attractiveness, and this should be considered in orthodontic treatment planning.
Music to my ears: Age-related decline in musical and facial emotion recognition.
Sutcliffe, Ryan; Rendell, Peter G; Henry, Julie D; Bailey, Phoebe E; Ruffman, Ted
2017-12-01
We investigated young-old differences in emotion recognition using music and face stimuli and tested explanatory hypotheses regarding older adults' typically worse emotion recognition. In Experiment 1, young and older adults labeled emotions in an established set of faces, and in classical piano stimuli that we pilot-tested on other young and older adults. Older adults were worse at detecting anger, sadness, fear, and happiness in music. Performance on the music and face emotion tasks was not correlated for either age group. Because musical expressions of fear were not equated for age groups in the pilot study of Experiment 1, we conducted a second experiment in which we created a novel set of music stimuli that included more accessible musical styles, and which we again pilot-tested on young and older adults. In this pilot study, all musical emotions were identified similarly by young and older adults. In Experiment 2, participants also made age estimations in another set of faces to examine whether potential relations between the face and music emotion tasks would be shared with the age estimation task. Older adults did worse in each of the tasks, and had specific difficulty recognizing happy, sad, peaceful, angry, and fearful music clips. Older adults' difficulties in each of the 3 tasks (music emotion, face emotion, and face age) were not correlated with each other. General cognitive decline did not appear to explain our results as increasing age predicted emotion performance even after fluid IQ was controlled for within the older adult group. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Jones, Taryn M; Dean, Catherine M; Hush, Julia M; Dear, Blake F; Titov, Nickolai
2015-04-19
Individuals living with acquired brain injury, typically caused by stroke or trauma, are far less likely to achieve recommended levels of physical activity for optimal health and well-being. With a growing number of people living with chronic disease and disability globally, self-management programs are seen as integral to the management of these conditions and the prevention of secondary health conditions. However, to date, there has been no systematic review of the literature examining the efficacy of self-management programs specifically on physical activity in individuals with acquired brain injury, whether delivered face-to-face or remotely. Therefore, the purpose of this review is to evaluate the efficacy of self-management programs in increasing physical activity levels in adults living in the community following acquired brain injury. The efficacy of remote versus face-to-face delivery was also examined. A systematic review of the literature was conducted. Electronic databases were searched. Two independent reviewers screened all studies for eligibility, assessed risk of bias, and extracted relevant data. Five studies met the inclusion criteria for this review. Studies were widely heterogeneous with respect to program content, delivery characteristics, and outcomes, although all programs utilized behavioral change principles. Four of the five studies examined interventions in which physical activity was a component of a multifaceted intervention, where the depth to which physical activity-specific content was covered, and the extent to which skills were taught and practiced, could not be clearly established. Three studies showed favorable physical activity outcomes following self-management interventions for stroke; however, risk of bias was high, and overall efficacy remains unclear. Although not used in isolation from face-to-face delivery, remote delivery via telephone was the predominant form of delivery in two studies, supporting its inclusion in self-management programs for individuals following stroke. The efficacy of self-management programs in increasing physical activity levels in community-dwelling adults following acquired brain injury (ABI) is therefore still unknown, and research specifically aimed at improving physical activity in this population is needed. The efficacy of remote delivery methods also warrants further investigation. PROSPERO CRD42013006748.
Hassett, Afton L; Li, Tracy; Buyske, Steven; Savage, Shantal V; Gignac, Monique A M
2008-05-01
The aim was to consider the feasibility of assessing multiple facets of independence in rheumatoid arthritis (RA) using a measure developed from existing items, and to examine its face validity, construct validity and responsiveness to change. The ATTAIN (Abatacept Trial in Treatment of Anti-tumor necrosis factor [TNF] Inadequate responders) database was used. Patients with RA were randomized 2:1 to abatacept (n = 258) and placebo (n = 133). A multi-faceted scale to measure physical and psychosocial independence was constructed using items from the Health Assessment Questionnaire (HAQ) and Short Form 36 Health Survey (SF-36). Questions assessing activity limitations and need for outside caregiver help were also examined. Interviews with 20 RA patients assessed face validity. Item Response Theory analysis yielded two traits: 'Psychosocial Independence', derived from the number of days with activity limitations plus the Role Emotional, Social Functioning and Role Physical subscale items from the SF-36; and 'Physical Independence', derived from 15 HAQ items assessing need for help from another. The two traits showed no significant differential item functioning for age or gender and demonstrated good face validity. Changes over 169 days on Psychosocial Independence were greater (mean 0.46 units, 95% confidence interval [CI]: 0.17-0.75) for the abatacept group than for placebo (p = 0.002). Changes in Physical Independence were greater (mean 0.59 units, 95% CI: 0.35-0.82) for the abatacept group than for placebo (p < 0.001). The multi-faceted assessment of independence in RA based on items from commonly used instruments is feasible, suggesting promise for evaluating independence in future clinical trials. This approach demonstrated good face and construct validity and responsiveness in RA patients who had previously failed anti-TNF therapy. However, we caution against interpreting these data as showing that abatacept improves independence, because the component parts of this assessment came from instruments used in the ATTAIN trial, where the data had previously been analyzed.
Brand, S; Otte, D; Stübig, T; Petri, M; Ettinger, M; Mueller, C W; Krettek, C; Haasper, C; Probst, C
2013-12-01
Patients involved in motor vehicle crashes (MVCs) who suffer burns are challenging for the rescue team and the admitting hospital, and they often face worse outcomes than crash patients with trauma only. Our analysis of the German In-depth Accident Study (GIDAS) database examines the detailed crash mechanisms to identify potential prevention measures. We analyzed the 2011 GIDAS database comprising 14,072 MVC patients and compared individuals with (Burns) and without (NoBurns) burns. Only complete data sets were included. Patients whose burns obviously resulted from air bag deployment alone were excluded from the Burns group. Data acquisition by an on-call team of medical and technical researchers starts at the crash scene immediately after the crash and comprises technical data as well as medical information until discharge from the hospital. Statistical analysis was done using the Mann-Whitney U test, with the level of significance set at p < 0.05. Of the 14,072 MVC patients with complete data sets included in the analysis, 99 individuals suffered burns (0.7%; group "Burns"). Demographic data and injury severity, measured using the Injury Severity Score (ISS), showed no statistically significant difference between the Burns and NoBurns groups. Direct frontal impact (Burns: 48.5% vs. NoBurns: 33%; p < 0.05) and high-energy impacts, as represented by delta-v in m/s (Burns: 33.5 ± 21.4 vs. NoBurns: 25.2 ± 15.9; p < 0.05), differed significantly between groups, as did mortality (Burns: 12.5% vs. NoBurns: 2.1%; p < 0.05). Type of patients' motor vehicles and type of crash opponent showed no differences. Our results show that frontal and high-energy impacts are associated with an increased frequency of burns. This may help automobile manufacturers improve burn safety by preventing flames from spreading from the motor compartment to the passenger compartment, and communities may impose speed limits at local crash hot spots. Copyright © 2013 Elsevier Ltd and ISBI. All rights reserved.
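For readers unfamiliar with the test used, the sketch below shows a Mann-Whitney U comparison of delta-v between two groups in Python; the simulated arrays merely mimic the reported means and standard deviations and are not the GIDAS data.

```python
# Minimal sketch of the group comparison; simulated data, not the GIDAS records.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
delta_v_burns = rng.normal(33.5, 21.4, size=99).clip(min=0)       # m/s, Burns group
delta_v_noburns = rng.normal(25.2, 15.9, size=500).clip(min=0)    # m/s, NoBurns subsample

stat, p = mannwhitneyu(delta_v_burns, delta_v_noburns, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.4f}")   # p < 0.05 -> groups differ in impact severity
```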