Sample records for recognizing human faces

1. Getting to the Bottom of Face Processing. Species-Specific Inversion Effects for Faces and Behinds in Humans and Chimpanzees (Pan troglodytes).

    PubMed

    Kret, Mariska E; Tomonaga, Masaki

    2016-01-01

For social species such as primates, the recognition of conspecifics is crucial for survival. As demonstrated by the 'face inversion effect', humans are experts at recognizing faces and, unlike objects, recognize their identity by processing them configurally. The human face, with its distinct features such as eye-whites, eyebrows, red lips and cheeks, signals emotions, intentions, health and sexual attraction and, as we will show here, shares important features with the primate behind. Chimpanzee females show a swelling and reddening of the anogenital region around the time of ovulation. This provides an important socio-sexual signal for group members, who can identify individuals by their behinds. We hypothesized that chimpanzees process behinds configurally in the way humans process faces. In four different delayed matching-to-sample tasks with upright and inverted body parts, we show that humans demonstrate a face, but not a behind, inversion effect, and that chimpanzees show a behind, but no clear face, inversion effect. The findings suggest an evolutionary shift in socio-sexual signalling function from behinds to faces, two hairless, symmetrical and attractive body parts, which might have attuned the human brain to process faces, and the human face to become more behind-like.

  2. Recognizing Disguised Faces: Human and Machine Evaluation

    PubMed Central

    Dhamecha, Tejas Indulal; Singh, Richa; Vatsa, Mayank; Kumar, Ajay

    2014-01-01

Face verification, though an easy task for humans, is a long-standing open research area. This is largely due to challenging covariates, such as disguise and aging, which make it very hard to accurately verify the identity of a person. This paper investigates human and machine performance for recognizing/verifying disguised faces. Performance is also evaluated under familiarity and match/mismatch with the ethnicity of observers. The findings of this study are used to develop an automated algorithm to verify faces presented under disguise variations. We use automatically localized feature descriptors which can identify disguised face patches and account for this information to achieve improved matching accuracy. The performance of the proposed algorithm is evaluated on the IIIT-Delhi Disguise database that contains images pertaining to 75 subjects with different kinds of disguise variations. The experiments suggest that the proposed algorithm outperforms a popular commercial system; we also evaluate it against human performance in matching disguised face images. PMID:25029188

  3. Facial recognition in education system

    NASA Astrophysics Data System (ADS)

    Krithika, L. B.; Venkatesh, K.; Rathore, S.; Kumar, M. Harish

    2017-11-01

Human beings rely extensively on emotions to convey messages and to interpret one another. Emotion detection and face recognition can provide an interface between individuals and technology. Face recognition is among the most successful applications of recognition analysis. Many different techniques have been used to recognize facial expressions and to detect emotion under varying poses. In this paper, we present an efficient method for recognizing facial expressions by tracking face points and the distances between them. The method automatically identifies an observer's face movements and facial expression in an image, capturing different aspects of emotion and facial expression.

  4. Recognizing an individual face: 3D shape contributes earlier than 2D surface reflectance information.

    PubMed

    Caharel, Stéphanie; Jiang, Fang; Blanz, Volker; Rossion, Bruno

    2009-10-01

The human brain recognizes faces by means of two main diagnostic sources of information: three-dimensional (3D) shape and two-dimensional (2D) surface reflectance. Here we used event-related potentials (ERPs) in a face adaptation paradigm to examine the time-course of processing for these two types of information. With a 3D morphable model, we generated pairs of faces that were either identical, varied in 3D shape only, in 2D surface reflectance only, or in both. Sixteen human observers discriminated individual faces in these 4 types of pairs, in which a first (adapting) face was followed shortly by a second (test) face. Behaviorally, observers were as accurate and as fast for discriminating individual faces based on either 3D shape or 2D surface reflectance alone, but were faster when both sources of information were present. As early as the face-sensitive N170 component (approximately 160 ms following the test face), there was larger amplitude for changes in 3D shape relative to the repetition of the same face, especially over the right occipito-temporal electrodes. However, changes in 2D reflectance between the adapter and target face did not increase the N170 amplitude. At about 250 ms, both 3D shape and 2D reflectance contributed equally, and the largest difference in amplitude compared to the repetition of the same face was found when both 3D shape and 2D reflectance were combined, in line with observers' behavior. These observations indicate that evidence to recognize individual faces accumulates faster in the right hemisphere human visual cortex from diagnostic 3D shape information than from 2D surface reflectance information.

  5. Facial detection using deep learning

    NASA Astrophysics Data System (ADS)

    Sharma, Manik; Anuradha, J.; Manne, H. K.; Kashyap, G. S. C.

    2017-11-01

In the recent past, we have observed that Facebook has developed an uncanny ability to recognize people in photographs. Previously, we had to tag people in photos by clicking on them and typing their name. Now, as soon as we upload a photo, Facebook tags everyone on its own. Facebook can recognize faces with 98% accuracy, which is roughly as good as humans can do. This technology is called face detection. Face detection is a popular topic in biometrics, and we have surveillance cameras in public places for video capture as well as security purposes. The main advantages of this approach over others are uniqueness and user acceptance, and both speed and accuracy are needed for identification. But face detection is really a series of several related problems: first, look at a picture and find all the faces in it. Second, focus on each face and understand that even if a face is turned in a weird direction or seen in bad lighting, it is still the same person. Third, select features that can be used to identify each face uniquely, such as the size of the eyes, the shape of the face, and so on. Finally, compare these features to the data we have to find the person's name. As a human, your brain is wired to do all of this automatically and instantly; in fact, humans are exceptionally good at recognizing faces. Computers are not capable of this kind of high-level generalization, so we must teach them how to do each step in the process separately. The growth of face detection is largely driven by growing applications such as credit card verification, surveillance video images, and authentication for banking and security system access.

  6. Holistic Processing of Static and Moving Faces

    ERIC Educational Resources Information Center

    Zhao, Mintao; Bülthoff, Isabelle

    2017-01-01

    Humans' face ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces that move most of the time. However, how facial movements affect 1 core aspect of face ability--holistic face processing--remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based…

  7. Recognizing Age-Separated Face Images: Humans and Machines

    PubMed Central

    Yadav, Daksha; Singh, Richa; Vatsa, Mayank; Noore, Afzel

    2014-01-01

Humans utilize facial appearance, gender, expression, aging pattern, and other ancillary information to recognize individuals. It is interesting to observe how humans perceive facial age. Analyzing these properties can help in understanding the phenomenon of facial aging and incorporating the findings can help in designing effective algorithms. Such a study has two components - facial age estimation and age-separated face recognition. Age estimation involves predicting the age of an individual given his/her facial image. On the other hand, age-separated face recognition consists of recognizing an individual given his/her age-separated images. In this research, we investigate which facial cues are utilized by humans for estimating the age of people belonging to various age groups along with analyzing the effect of one's gender, age, and ethnicity on age estimation skills. We also analyze how various facial regions such as binocular and mouth regions influence age estimation and recognition capabilities. Finally, we propose an age-invariant face recognition algorithm that incorporates the knowledge learned from these observations. Key observations of our research are: (1) the age group of newborns and toddlers is easiest to estimate, (2) gender and ethnicity do not affect the judgment of age group estimation, (3) the face, as a global feature, is essential to achieve good performance in age-separated face recognition, and (4) the proposed algorithm yields improved recognition performance compared to existing algorithms and also outperforms a commercial system in the young image as probe scenario. PMID:25474200

  8. Recognizing age-separated face images: humans and machines.

    PubMed

    Yadav, Daksha; Singh, Richa; Vatsa, Mayank; Noore, Afzel

    2014-01-01

Humans utilize facial appearance, gender, expression, aging pattern, and other ancillary information to recognize individuals. It is interesting to observe how humans perceive facial age. Analyzing these properties can help in understanding the phenomenon of facial aging and incorporating the findings can help in designing effective algorithms. Such a study has two components--facial age estimation and age-separated face recognition. Age estimation involves predicting the age of an individual given his/her facial image. On the other hand, age-separated face recognition consists of recognizing an individual given his/her age-separated images. In this research, we investigate which facial cues are utilized by humans for estimating the age of people belonging to various age groups along with analyzing the effect of one's gender, age, and ethnicity on age estimation skills. We also analyze how various facial regions such as binocular and mouth regions influence age estimation and recognition capabilities. Finally, we propose an age-invariant face recognition algorithm that incorporates the knowledge learned from these observations. Key observations of our research are: (1) the age group of newborns and toddlers is easiest to estimate, (2) gender and ethnicity do not affect the judgment of age group estimation, (3) the face, as a global feature, is essential to achieve good performance in age-separated face recognition, and (4) the proposed algorithm yields improved recognition performance compared to existing algorithms and also outperforms a commercial system in the young image as probe scenario.

  9. Face Processing: Models For Recognition

    NASA Astrophysics Data System (ADS)

    Turk, Matthew A.; Pentland, Alexander P.

    1990-03-01

    The human ability to process faces is remarkable. We can identify perhaps thousands of faces learned throughout our lifetime and read facial expression to understand such subtle qualities as emotion. These skills are quite robust, despite sometimes large changes in the visual stimulus due to expression, aging, and distractions such as glasses or changes in hairstyle or facial hair. Computers which model and recognize faces will be useful in a variety of applications, including criminal identification, human-computer interface, and animation. We discuss models for representing faces and their applicability to the task of recognition, and present techniques for identifying faces and detecting eye blinks.

  10. From face processing to face recognition: Comparing three different processing levels.

    PubMed

    Besson, G; Barragan-Jason, G; Thorpe, S J; Fabre-Thorpe, M; Puma, S; Ceccaldi, M; Barbeau, E J

    2017-01-01

Verifying that a face is from a target person (e.g. finding someone in a crowd) is a critical ability of the human face processing system, yet how fast this can be performed is unknown. The 'entry-level shift due to expertise' hypothesis suggests that - since humans are face experts - processing faces should be as fast, or even faster, at the individual than at superordinate levels. In contrast, the 'superordinate advantage' hypothesis suggests that faces are processed from coarse to fine, so that the opposite pattern should be observed. To clarify this debate, three different face processing levels were compared: (1) a superordinate face categorization level (i.e. detecting human faces among animal faces), (2) a face familiarity level (i.e. recognizing famous faces among unfamiliar ones) and (3) verifying that a face is from a target person, our condition of interest. The minimal speed at which faces can be categorized (∼260ms) or recognized as familiar (∼360ms) has largely been documented in previous studies and thus provides boundaries against which to compare our condition of interest. Twenty-seven participants were included. The recent Speed and Accuracy Boosting (SAB) procedure was used, since it constrains participants to use their fastest strategy. Stimuli were presented either upright or inverted. Results revealed that verifying that a face is from a target person (minimal RT at ∼260ms) was remarkably fast, though longer than the face categorization level (∼240ms), and was more sensitive to face inversion. In contrast, it was much faster than recognizing a face as familiar (∼380ms), a level severely affected by face inversion. Face recognition corresponding to finding a specific person in a crowd thus appears achievable in only a quarter of a second. In favor of the 'superordinate advantage' hypothesis, or coarse-to-fine account of the face visual hierarchy, these results suggest a graded engagement of the face processing system across processing levels, as reflected by the face inversion effects. Furthermore, they underline how verifying that a face is from a target person and detecting a face as familiar - both often referred to as "face recognition" - in fact differ. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Learning to recognize face shapes through serial exploration.

    PubMed

    Wallraven, Christian; Whittingstall, Lisa; Bülthoff, Heinrich H

    2013-05-01

Human observers are experts at visual face recognition due to specialized visual mechanisms for face processing that evolve with perceptual expertise. Such expertise has long been attributed to the use of configural processing, enabled by fast, parallel encoding of the visual information in the face. Here we tested whether participants can learn to efficiently recognize faces that are serially encoded - that is, when only partial visual information about the face is available at any given time. For this, ten participants were trained in gaze-restricted face recognition, in which face masks were viewed through a small aperture controlled by the participant. Tests comparing trained with untrained performance revealed (1) a marked improvement in terms of speed and accuracy, (2) a gradual development of configural processing strategies, and (3) participants' ability to rapidly learn and accurately recognize novel exemplars. This performance pattern demonstrates that participants were able to learn new strategies to compensate for the serial nature of information encoding. The results are discussed in terms of expertise acquisition and relevance for other sensory modalities relying on serial encoding.

  12. Are readers of our face readers of our minds? Dogs (Canis familiaris) show situation-dependent recognition of human's attention.

    PubMed

    Gácsi, Márta; Miklósi, Adám; Varga, Orsolya; Topál, József; Csányi, Vilmos

    2004-07-01

The ability of animals to use behavioral/facial cues in the detection of human attention has been widely investigated. In this test series we studied the ability of dogs to recognize human attention in different experimental situations (ball-fetching game, fetching objects on command, begging from humans). The attentional state of the humans was varied along two variables: (1) facing versus not facing the dog; (2) visible versus non-visible eyes. In the first set of experiments (fetching) the owners were told to take up different body positions (facing or not facing the dog) and to either cover or not cover their eyes with a blindfold. In the second set of experiments (begging) dogs had to choose between two eating humans based on either the visibility of the eyes or the direction of the face. Our results show that the efficiency of dogs in discriminating between "attentive" and "inattentive" humans depended on the context of the test, but they could rely on the orientation of the body, the orientation of the head and the visibility of the eyes. With the exception of the fetching-game situation, they brought the object to the front of the human (even if he/she turned his/her back towards the dog), and preferentially begged from the facing (or seeing) human. There were also indications that dogs were sensitive to the visibility of the eyes, because they showed increased hesitant behavior when approaching a blindfolded owner, and they also preferred to beg from the person with visible eyes. We conclude that dogs are able to rely on the same set of human facial cues for the detection of attention that forms the behavioral basis of understanding attention in humans. By showing the ability to recognize human attention across different situations, dogs proved to be more flexible than chimpanzees investigated in similar circumstances.

  13. Super-Memorizers Are Not Super-Recognizers

    PubMed Central

    Ramon, Meike; Miellet, Sebastien; Dzieciol, Anna M.; Konrad, Boris Nikolai

    2016-01-01

    Humans have a natural expertise in recognizing faces. However, the nature of the interaction between this critical visual biological skill and memory is yet unclear. Here, we had the unique opportunity to test two individuals who have had exceptional success in the World Memory Championships, including several world records in face-name association memory. We designed a range of face processing tasks to determine whether superior/expert face memory skills are associated with distinctive perceptual strategies for processing faces. Superior memorizers excelled at tasks involving associative face-name learning. Nevertheless, they were as impaired as controls in tasks probing the efficiency of the face system: face inversion and the other-race effect. Super memorizers did not show increased hippocampal volumes, and exhibited optimal generic eye movement strategies when they performed complex multi-item face-name associations. Our data show that the visual computations of the face system are not malleable and are robust to acquired expertise involving extensive training of associative memory. PMID:27008627

  14. Super-Memorizers Are Not Super-Recognizers.

    PubMed

    Ramon, Meike; Miellet, Sebastien; Dzieciol, Anna M; Konrad, Boris Nikolai; Dresler, Martin; Caldara, Roberto

    2016-01-01

    Humans have a natural expertise in recognizing faces. However, the nature of the interaction between this critical visual biological skill and memory is yet unclear. Here, we had the unique opportunity to test two individuals who have had exceptional success in the World Memory Championships, including several world records in face-name association memory. We designed a range of face processing tasks to determine whether superior/expert face memory skills are associated with distinctive perceptual strategies for processing faces. Superior memorizers excelled at tasks involving associative face-name learning. Nevertheless, they were as impaired as controls in tasks probing the efficiency of the face system: face inversion and the other-race effect. Super memorizers did not show increased hippocampal volumes, and exhibited optimal generic eye movement strategies when they performed complex multi-item face-name associations. Our data show that the visual computations of the face system are not malleable and are robust to acquired expertise involving extensive training of associative memory.

  15. Functional specialization and convergence in the occipito-temporal cortex supporting haptic and visual identification of human faces and body parts: an fMRI study.

    PubMed

    Kitada, Ryo; Johnsrude, Ingrid S; Kochiyama, Takanori; Lederman, Susan J

    2009-10-01

    Humans can recognize common objects by touch extremely well whenever vision is unavailable. Despite its importance to a thorough understanding of human object recognition, the neuroscientific study of this topic has been relatively neglected. To date, the few published studies have addressed the haptic recognition of nonbiological objects. We now focus on haptic recognition of the human body, a particularly salient object category for touch. Neuroimaging studies demonstrate that regions of the occipito-temporal cortex are specialized for visual perception of faces (fusiform face area, FFA) and other body parts (extrastriate body area, EBA). Are the same category-sensitive regions activated when these components of the body are recognized haptically? Here, we use fMRI to compare brain organization for haptic and visual recognition of human body parts. Sixteen subjects identified exemplars of faces, hands, feet, and nonbiological control objects using vision and haptics separately. We identified two discrete regions within the fusiform gyrus (FFA and the haptic face region) that were each sensitive to both haptically and visually presented faces; however, these two regions differed significantly in their response patterns. Similarly, two regions within the lateral occipito-temporal area (EBA and the haptic body region) were each sensitive to body parts in both modalities, although the response patterns differed. Thus, although the fusiform gyrus and the lateral occipito-temporal cortex appear to exhibit modality-independent, category-sensitive activity, our results also indicate a degree of functional specialization related to sensory modality within these structures.

  16. Own-race faces capture attention faster than other-race faces: evidence from response time and the N2pc.

    PubMed

    Zhou, Guomei; Cheng, Zhijie; Yue, Zhenzhu; Tredoux, Colin; He, Jibo; Wang, Ling

    2015-01-01

Studies have shown that people are better at recognizing human faces from their own race than from other races, an effect often termed the Own-Race Advantage. The current study investigates whether there is an Own-Race Advantage in attention and its neural correlates. Participants were asked to search for a human face among animal faces. Experiment 1 showed a classic Own-Race Advantage in response time both for Chinese and Black South African participants. Using event-related potentials (ERPs), Experiment 2 showed a similar Own-Race Advantage in response time for both upright faces and inverted faces. Moreover, the latency of the N2pc for own-race faces was shorter than that for other-race faces. These results suggest that own-race faces capture attention more efficiently than other-race faces.

  17. Mapping the emotional face. How individual face parts contribute to successful emotion recognition.

    PubMed

    Wegrzyn, Martin; Vogt, Maria; Kireclioglu, Berna; Schneider, Julia; Kissler, Johanna

    2017-01-01

Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have frequently been shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, on a fine-grained level, which physical features are most relied on when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and assign it the correct label. For each part of the face, its contribution to successful recognition was computed, allowing us to visualize the importance of different face areas for each expression. Overall, observers mostly relied on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed us to group the expressions in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (mouth). The face parts with the highest diagnostic value for expression identification were typically located in areas corresponding to action units from the facial action coding system. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eyes or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation.

  18. Mapping the emotional face. How individual face parts contribute to successful emotion recognition

    PubMed Central

    Wegrzyn, Martin; Vogt, Maria; Kireclioglu, Berna; Schneider, Julia; Kissler, Johanna

    2017-01-01

Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have frequently been shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, on a fine-grained level, which physical features are most relied on when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and assign it the correct label. For each part of the face, its contribution to successful recognition was computed, allowing us to visualize the importance of different face areas for each expression. Overall, observers mostly relied on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed us to group the expressions in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (mouth). The face parts with the highest diagnostic value for expression identification were typically located in areas corresponding to action units from the facial action coding system. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eyes or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation. PMID:28493921

  19. Individual differences in perceiving and recognizing faces-One element of social cognition.

    PubMed

    Wilhelm, Oliver; Herzmann, Grit; Kunina, Olga; Danthiir, Vanessa; Schacht, Annekathrin; Sommer, Werner

    2010-09-01

    Recognizing faces swiftly and accurately is of paramount importance to humans as a social species. Individual differences in the ability to perform these tasks may therefore reflect important aspects of social or emotional intelligence. Although functional models of face cognition based on group and single case studies postulate multiple component processes, little is known about the ability structure underlying individual differences in face cognition. In 2 large individual differences experiments (N = 151 and N = 209), a broad variety of face-cognition tasks were tested and the component abilities of face cognition-face perception, face memory, and the speed of face cognition-were identified and then replicated. Experiment 2 also showed that the 3 face-cognition abilities are clearly distinct from immediate and delayed memory, mental speed, general cognitive ability, and object cognition. These results converge with functional and neuroanatomical models of face cognition by demonstrating the difference between face perception and face memory. The results also underline the importance of distinguishing between speed and accuracy of face cognition. Together our results provide a first step toward establishing face-processing abilities as an independent ability reflecting elements of social intelligence. (PsycINFO Database Record (c) 2010 APA, all rights reserved).

  20. Recognizing Action Units for Facial Expression Analysis

    PubMed Central

    Tian, Ying-li; Kanade, Takeo; Cohn, Jeffrey F.

    2010-01-01

    Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system recognizes fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper face AUs and 10 lower face AUs) are recognized whether they occur alone or in combinations. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower face AUs. The generalizability of the system has been tested by using independent image databases collected and FACS-coded for ground-truth by different research teams. PMID:25210210

21. Dogs recognize dog and human emotions.

    PubMed

    Albuquerque, Natalia; Guo, Kun; Wilkinson, Anna; Savalli, Carine; Otta, Emma; Mills, Daniel

    2016-01-01

    The perception of emotional expressions allows animals to evaluate the social intentions and motivations of each other. This usually takes place within species; however, in the case of domestic dogs, it might be advantageous to recognize the emotions of humans as well as other dogs. In this sense, the combination of visual and auditory cues to categorize others' emotions facilitates the information processing and indicates high-level cognitive representations. Using a cross-modal preferential looking paradigm, we presented dogs with either human or dog faces with different emotional valences (happy/playful versus angry/aggressive) paired with a single vocalization from the same individual with either a positive or negative valence or Brownian noise. Dogs looked significantly longer at the face whose expression was congruent to the valence of vocalization, for both conspecifics and heterospecifics, an ability previously known only in humans. These results demonstrate that dogs can extract and integrate bimodal sensory emotional information, and discriminate between positive and negative emotions from both humans and dogs. © 2016 The Author(s).

  2. A Face Attention Technique for a Robot Able to Interpret Facial Expressions

    NASA Astrophysics Data System (ADS)

    Simplício, Carlos; Prado, José; Dias, Jorge

    Automatic recognition of facial expressions using vision is an important step towards human-robot interaction. We propose a human-face focus-of-attention technique and a facial expression classifier (a Dynamic Bayesian Network) to incorporate in an autonomous mobile agent whose hardware comprises a robotic platform and a robotic head. The focus-of-attention technique is based on the symmetry presented by human faces. Using the output of this module, the autonomous agent keeps the human face targeted frontally at all times. To accomplish this, the robot platform performs an arc centered on the human, while the robotic head, when necessary, moves in synchrony. In the proposed probabilistic classifier, information is propagated from the previous instant, in a lower level of the network, to the current instant. Moreover, not only positive but also negative evidence is used to recognize facial expressions.

  3. Robust representations of individual faces in chimpanzees (Pan troglodytes) but not monkeys (Macaca mulatta).

    PubMed

    Taubert, Jessica; Weldon, Kimberly B; Parr, Lisa A

    2017-03-01

    Being able to recognize the faces of our friends and family members no matter where we see them represents a substantial challenge for the visual system because the retinal image of a face can be degraded by both changes in the person (age, expression, pose, hairstyle, etc.) and changes in the viewing conditions (direction and degree of illumination). Yet most of us are able to recognize familiar people effortlessly. A popular theory for how face recognition is achieved has argued that the brain stabilizes facial appearance by building average representations that enhance diagnostic features that reliably vary between people while diluting features that vary between instances of the same person. This explains why people find it easier to recognize average images of people, created by averaging multiple images of the same person together, than single instances (i.e. photographs). Although this theory is gathering momentum in the psychological and computer sciences, there is no evidence of whether this mechanism represents a unique specialization for individual recognition in humans. Here we tested two species, chimpanzees (Pan troglodytes) and rhesus monkeys (Macaca mulatta), to determine whether average images of different familiar individuals were easier to discriminate than photographs of familiar individuals. Using a two-alternative forced-choice, match-to-sample procedure, we report a behaviour response profile that suggests chimpanzees encode the faces of conspecifics differently than rhesus monkeys and in a manner similar to humans.

  4. Spatiotemporal dynamics in human visual cortex rapidly encode the emotional content of faces.

    PubMed

    Dima, Diana C; Perry, Gavin; Messaritaki, Eirini; Zhang, Jiaxiang; Singh, Krish D

    2018-06-08

    Recognizing emotion in faces is important in human interaction and survival, yet existing studies do not paint a consistent picture of the neural representation supporting this task. To address this, we collected magnetoencephalography (MEG) data while participants passively viewed happy, angry and neutral faces. Using time-resolved decoding of sensor-level data, we show that responses to angry faces can be discriminated from happy and neutral faces as early as 90 ms after stimulus onset and only 10 ms later than faces can be discriminated from scrambled stimuli, even in the absence of differences in evoked responses. Time-resolved relevance patterns in source space track expression-related information from the visual cortex (100 ms) to higher-level temporal and frontal areas (200-500 ms). Together, our results point to a system optimised for rapid processing of emotional faces and preferentially tuned to threat, consistent with the important evolutionary role that such a system must have played in the development of human social interactions. © 2018 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
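The time-resolved decoding idea above, training and testing a classifier separately at each time point, can be sketched on synthetic data. A cross-validated nearest-centroid classifier stands in for the paper's actual decoder, and the array sizes and the injected condition effect at "time point 30" are invented:

```python
# Time-resolved decoding sketch on synthetic "MEG" data: trials x sensors x
# time points, with a condition-specific pattern appearing from t=30 onward.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 40, 10, 60
X = rng.normal(size=(n_trials, n_sensors, n_times))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1, :, 30:] += 1.0            # inject the decodable signal

def decode_timepoint(X, y, t, folds=5):
    """Cross-validated nearest-centroid accuracy at one time point."""
    acc = []
    idx = np.arange(len(y))
    for f in range(folds):
        test = idx % folds == f
        train = ~test
        c0 = X[train & (y == 0), :, t].mean(axis=0)   # class centroids
        c1 = X[train & (y == 1), :, t].mean(axis=0)
        d0 = np.linalg.norm(X[test, :, t] - c0, axis=1)
        d1 = np.linalg.norm(X[test, :, t] - c1, axis=1)
        acc.append(np.mean((d1 < d0) == (y[test] == 1)))
    return float(np.mean(acc))

accuracy = [decode_timepoint(X, y, t) for t in range(n_times)]
# Accuracy hovers near chance (0.5) before t=30 and rises sharply after.
print(round(np.mean(accuracy[:30]), 2), round(np.mean(accuracy[30:]), 2))
```

The onset latencies reported in the abstract correspond to the first time point at which this per-timepoint accuracy curve rises reliably above chance.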

  5. Human face recognition using eigenface in cloud computing environment

    NASA Astrophysics Data System (ADS)

    Siregar, S. T. M.; Syahputra, M. F.; Rahmat, R. F.

    2018-02-01

    Recognizing one single face does not take long to process, but an attendance or security system in a company with many faces to recognize will take a long time. Cloud computing is a computing service performed not on a local device but on a data-center infrastructure connected to the Internet. Cloud computing also provides a scalability solution, increasing the resources needed when doing larger data processing. This research applies the eigenface method; training data are collected using the REST concept to provide resources, so that the server can process the data through the existing stages. After research and development of this application, it can be concluded that face recognition can be implemented with eigenfaces, applying the REST concept as an endpoint for giving or receiving the related information used as a resource in forming the model for face recognition.
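The eigenface core of such a system can be sketched with plain PCA. The synthetic 8x8 "face" vectors below are invented, and the cloud/REST layer described in the abstract is omitted; this shows only the recognition step:

```python
# Minimal eigenface sketch: PCA over a tiny synthetic gallery, then
# nearest-neighbour matching in eigenface space. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic gallery: 3 identities, 4 noisy training images each (64-pixel vectors).
identities = [rng.normal(size=64) for _ in range(3)]
train = np.array([iden + 0.1 * rng.normal(size=64)
                  for iden in identities for _ in range(4)])
labels = np.repeat(np.arange(3), 4)

# Eigenfaces: principal components of the mean-centred training set.
mean_face = train.mean(axis=0)
centred = train - mean_face
_, _, vt = np.linalg.svd(centred, full_matrices=False)
eigenfaces = vt[:5]                       # keep the top 5 components

def recognize(img):
    """Label of the nearest training face in eigenface space."""
    w = eigenfaces @ (img - mean_face)
    dists = np.linalg.norm((centred @ eigenfaces.T) - w, axis=1)
    return labels[int(np.argmin(dists))]

probe = identities[1] + 0.1 * rng.normal(size=64)
print(recognize(probe))
```

In a deployed system the gallery projections would be precomputed server-side, which is where the scalability argument of the abstract comes in.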

  6. The many faces of research on face perception.

    PubMed

    Little, Anthony C; Jones, Benedict C; DeBruine, Lisa M

    2011-06-12

    Face perception is fundamental to human social interaction. Many different types of important information are visible in faces and the processes and mechanisms involved in extracting this information are complex and can be highly specialized. The importance of faces has long been recognized by a wide range of scientists. Importantly, the range of perspectives and techniques that this breadth has brought to face perception research has, in recent years, led to many important advances in our understanding of face processing. The articles in this issue on face perception each review a particular arena of interest in face perception, variously focusing on (i) the social aspects of face perception (attraction, recognition and emotion), (ii) the neural mechanisms underlying face perception (using brain scanning, patient data, direct stimulation of the brain, visual adaptation and single-cell recording), and (iii) comparative aspects of face perception (comparing adult human abilities with those of chimpanzees and children). Here, we introduce the central themes of the issue and present an overview of the articles.

  7. 'Faceness' and affectivity: evidence for genetic contributions to distinct components of electrocortical response to human faces.

    PubMed

    Shannon, Robert W; Patrick, Christopher J; Venables, Noah C; He, Sheng

    2013-12-01

    The ability to recognize a variety of different human faces is undoubtedly one of the most important and impressive functions of the human perceptual system. Neuroimaging studies have revealed multiple brain regions (including the FFA, STS, OFA) and electrophysiological studies have identified differing brain event-related potential (ERP) components (e.g., N170, P200) possibly related to distinct types of face information processing. To evaluate the heritability of ERP components associated with face processing, including N170, P200, and LPP, we examined ERP responses to fearful and neutral face stimuli in monozygotic (MZ) and dizygotic (DZ) twins. Concordance levels for early brain response indices of face processing (N170, P200) were found to be stronger for MZ than DZ twins, providing evidence of a heritable basis to each. These findings support the idea that certain key neural mechanisms for face processing are genetically coded. Implications for understanding individual differences in recognition of facial identity and the emotional content of faces are discussed. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Recognizing Facial Slivers.

    PubMed

    Gilad-Gutnick, Sharon; Harmatz, Elia Samuel; Tsourides, Kleovoulos; Yovel, Galit; Sinha, Pawan

    2018-07-01

    We report here an unexpectedly robust ability of healthy human individuals (n = 40) to recognize extremely distorted needle-like facial images, challenging the well-entrenched notion that veridical spatial configuration is necessary for extracting facial identity. In face identification tasks of parametrically compressed internal and external features, we found that the sum of performances on each cue falls significantly short of performance on full faces, despite the equal visual information available from both measures (with full faces essentially being a superposition of internal and external features). We hypothesize that this large deficit stems from the use of positional information about how the internal features are positioned relative to the external features. To test this, we systematically changed the relations between internal and external features and found preferential encoding of vertical but not horizontal spatial relationships in facial representations (n = 20). Finally, we employ magnetoencephalography imaging (n = 20) to demonstrate a close mapping between the behavioral psychometric curve and the amplitude of the M250 face-familiarity evoked response field component, but not the M170 face-sensitive component, providing evidence that the M250 can be modulated by faces that are perceptually identifiable, irrespective of extreme distortions to the face's veridical configuration. We theorize that the tolerance to compressive distortions has evolved from the need to recognize faces across varying viewpoints. Our findings help clarify the important, but poorly defined, concept of facial configuration and also enable an association between behavioral performance and previously reported neural correlates of face perception.

  9. Effect of familiarity and viewpoint on face recognition in chimpanzees

    PubMed Central

    Parr, Lisa A; Siebert, Erin; Taubert, Jessica

    2012-01-01

    Numerous studies have shown that familiarity strongly influences how well humans recognize faces. This is particularly true when faces are encountered across a change in viewpoint. In this situation, recognition may be accomplished by matching partial or incomplete information about a face to a stored representation of the known individual, whereas such representations are not available for unknown faces. Chimpanzees, our closest living relatives, share many of the same behavioral specializations for face processing as humans, but the influence of familiarity and viewpoint have never been compared in the same study. Here, we examined the ability of chimpanzees to match the faces of familiar and unfamiliar conspecifics in their frontal and 3/4 views using a computerized task. Results showed that, while chimpanzees were able to accurately match both familiar and unfamiliar faces in their frontal orientations, performance was significantly impaired only when unfamiliar faces were presented across a change in viewpoint. Therefore, like in humans, face processing in chimpanzees appears to be sensitive to individual familiarity. We propose that familiarization is a robust mechanism for strengthening the representation of faces and has been conserved in primates to achieve efficient individual recognition over a range of natural viewing conditions. PMID:22128558

  10. From Caregivers to Peers: Puberty Shapes Human Face Perception.

    PubMed

    Picci, Giorgia; Scherf, K Suzanne

    2016-11-01

    Puberty prepares mammals to sexually reproduce during adolescence. It is also hypothesized to invoke a social metamorphosis that prepares adolescents to take on adult social roles. We provide the first evidence to support this hypothesis in humans and show that pubertal development retunes the face-processing system from a caregiver bias to a peer bias. Prior to puberty, children exhibit enhanced recognition for adult female faces. With puberty, superior recognition emerges for peer faces that match one's pubertal status. As puberty progresses, so does the peer recognition bias. Adolescents become better at recognizing faces with a pubertal status similar to their own. These findings reconceptualize the adolescent "dip" in face recognition by showing that it is a recalibration of the face-processing system away from caregivers toward peers. Thus, in addition to preparing the physical body for sexual reproduction, puberty shapes the perceptual system for processing the social world in new ways. © The Author(s) 2016.

  11. Face recognition in capuchin monkeys (Cebus apella).

    PubMed

    Pokorny, Jennifer J; de Waal, Frans B M

    2009-05-01

    Primates live in complex social groups that necessitate recognition of the individuals with whom they interact. In humans, faces provide a visual means by which to gain information such as identity, allowing us to distinguish between both familiar and unfamiliar individuals. The current study used a computerized oddity task to investigate whether a New World primate, Cebus apella, can discriminate the faces of in-group and out-group conspecifics based on identity. The current study, which improved on past methodologies, demonstrates that capuchins recognize the faces of both familiar and unfamiliar conspecifics. Once a performance criterion had been reached, subjects successfully transferred to a large number of novel images within the first 100 trials, thus ruling out performance based on previous conditioning. Capuchins can be added to a growing list of primates that appear to recognize two-dimensional facial images of conspecifics. (PsycINFO Database Record (c) 2009 APA, all rights reserved).

  12. Recognizing Facial Expressions Automatically from Video

    NASA Astrophysics Data System (ADS)

    Shan, Caifeng; Braspenning, Ralph

    Facial expressions, resulting from movements of the facial muscles, are the changes in the face that occur in response to a person's internal emotional states, intentions, or social communications. There is a considerable history associated with the study of facial expressions. Darwin [22] was the first to describe in detail the specific facial expressions associated with emotions in animals and humans, arguing that all mammals show emotions reliably in their faces. Since then, facial expression analysis has been an area of great research interest for behavioral scientists [27]. Psychological studies [48, 3] suggest that facial expressions, as the main mode of nonverbal communication, play a vital role in human face-to-face communication. For illustration, we show some examples of facial expressions in Fig. 1.

  13. Monocular Advantage for Face Perception Implicates Subcortical Mechanisms in Adult Humans

    PubMed Central

    Gabay, Shai; Nestor, Adrian; Dundas, Eva; Behrmann, Marlene

    2014-01-01

    The ability to recognize faces accurately and rapidly is an evolutionarily adaptive process. Most studies examining the neural correlates of face perception in adult humans have focused on a distributed cortical network of face-selective regions. There is, however, robust evidence from phylogenetic and ontogenetic studies that implicates subcortical structures, and recently, some investigations in adult humans indicate subcortical correlates of face perception as well. The questions addressed here are whether low-level subcortical mechanisms for face perception (in the absence of changes in expression) are conserved in human adults, and if so, what is the nature of these subcortical representations. In a series of four experiments, we presented pairs of images to the same or different eyes. Participants’ performance demonstrated that subcortical mechanisms, indexed by monocular portions of the visual system, play a functional role in face perception. These mechanisms are sensitive to face-like configurations and afford a coarse representation of a face, comprised of primarily low spatial frequency information, which suffices for matching faces but not for more complex aspects of face perception such as sex differentiation. Importantly, these subcortical mechanisms are not implicated in the perception of other visual stimuli, such as cars or letter strings. These findings suggest a conservation of phylogenetically and ontogenetically lower-order systems in adult human face perception. The involvement of subcortical structures in face recognition provokes a reconsideration of current theories of face perception, which are reliant on cortical level processing, inasmuch as it bolsters the cross-species continuity of the biological system for face recognition. PMID:24236767

  14. Anxiety disorders in adolescence are associated with impaired facial expression recognition to negative valence.

    PubMed

    Jarros, Rafaela Behs; Salum, Giovanni Abrahão; Belem da Silva, Cristiano Tschiedel; Toazza, Rudineia; de Abreu Costa, Marianna; Fumagalli de Salles, Jerusa; Manfro, Gisele Gus

    2012-02-01

    The aim of the present study was to test the ability of adolescents with a current anxiety diagnosis to recognize facial affective expressions, compared to those without an anxiety disorder. Forty cases and 27 controls were selected from a larger cross-sectional community sample of adolescents, aged from 10 to 17 years old. Adolescents' recognition of six human emotions (sadness, anger, disgust, happiness, surprise and fear) and neutral faces was assessed through a facial labeling test using Ekman's Pictures of Facial Affect (POFA). Adolescents with anxiety disorders had a higher mean number of errors in angry faces as compared to controls: 3.1 (SD=1.13) vs. 2.5 (SD=2.5), OR=1.72 (CI95% 1.02 to 2.89; p=0.040). However, they named neutral faces more accurately than adolescents without an anxiety diagnosis: 15% of cases vs. 37.1% of controls presented at least one error in neutral faces, OR=3.46 (CI95% 1.02 to 11.7; p=0.047). No differences were found for other human emotions or in the distribution of errors in each emotional face between the groups. Our findings support an anxiety-mediated influence on the recognition of facial expressions in adolescence. This difficulty in recognizing angry faces, together with greater accuracy in naming neutral faces, may lead to misinterpretation of social cues and can explain some aspects of the impairment in social interactions in adolescents with anxiety disorders. Copyright © 2011 Elsevier Ltd. All rights reserved.
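For reference, odds ratios with 95% confidence intervals like those reported above are typically computed from a 2x2 table with a Wald interval on the log odds ratio. The counts below are illustrative only, not the study's data:

```python
# Odds ratio with a 95% Wald confidence interval from a 2x2 table.
# Counts are illustrative, not taken from the study.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b: exposed cases/non-cases; c/d: unexposed cases/non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)    # SE of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(34, 6, 17, 10)
print(f"OR={or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```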

  15. The evolution of face processing in primates

    PubMed Central

    Parr, Lisa A.

    2011-01-01

    The ability to recognize faces is an important socio-cognitive skill that is associated with a number of cognitive specializations in humans. While numerous studies have examined the presence of these specializations in non-human primates, species where face recognition would confer distinct advantages in social situations, results have been mixed. The majority of studies in chimpanzees support homologous face-processing mechanisms with humans, but results from monkey studies appear largely dependent on the type of testing methods used. Studies that employ passive viewing paradigms, like the visual paired comparison task, report evidence of similarities between monkeys and humans, but tasks that use more stringent, operant response tasks, like the matching-to-sample task, often report species differences. Moreover, the data suggest that monkeys may be less sensitive than chimpanzees and humans to the precise spacing of facial features, in addition to the surface-based cues reflected in those features, information that is critical for the representation of individual identity. The aim of this paper is to provide a comprehensive review of the available data from face-processing tasks in non-human primates with the goal of understanding the evolution of this complex cognitive skill. PMID:21536559

  16. Recognizing Faces Like Humans

    DTIC Science & Technology

    2010-02-01

    Cited references include: B. Kamgar-Parsi and B. Kamgar-Parsi, Methods of facial recognition, US patent pending, 2007; and www.frvt.org/FRGC/, homepage of the Facial Recognition Grand Challenge project (last accessed 4 February 2010). © 2010 SPIE

  17. Design of embedded intelligent monitoring system based on face recognition

    NASA Astrophysics Data System (ADS)

    Liang, Weidong; Ding, Yan; Zhao, Liangjin; Li, Jia; Hu, Xuemei

    2017-01-01

    In this paper, a new embedded intelligent monitoring system based on face recognition is proposed. The system uses a Raspberry Pi as the central processor. A sensor group with a Zigbee module has been designed to help the system work better, and two alarm modes are provided, using the Internet and a 3G modem. The experimental results show that the system can recognize human faces under various light intensities and send alarm information in real time.

  18. Hierarchical Representation Learning for Kinship Verification.

    PubMed

    Kohli, Naman; Vatsa, Mayank; Singh, Richa; Noore, Afzel; Majumdar, Angshul

    2017-01-01

    Kinship verification has a number of applications, such as organizing large collections of images and recognizing resemblances among humans. In this paper, first, a human study is conducted to understand the capabilities of the human mind and to identify the discriminatory areas of a face that provide kinship cues. The visual stimuli presented to the participants assess their ability to recognize kin relationships using the whole face as well as specific facial regions. The effects of participant gender and age and of the kin-relation pair of the stimulus are analyzed using quantitative measures such as accuracy, discriminability index d', and perceptual information entropy. Utilizing the information obtained from the human study, a hierarchical kinship verification via representation learning (KVRL) framework is used to learn the representation of different face regions in an unsupervised manner. We propose a novel approach for feature representation termed filtered contractive deep belief networks (fcDBN). The proposed feature representation encodes relational information present in images using filters and a contractive regularization penalty. A compact representation of facial images of kin is extracted as an output from the learned model, and a multi-layer neural network is utilized to verify kinship accurately. A new WVU kinship database is created, which consists of multiple images per subject to facilitate kinship verification. The results show that the proposed deep learning framework (KVRL-fcDBN) yields state-of-the-art kinship verification accuracy on the WVU kinship database and on four existing benchmark data sets. Furthermore, kinship information is used as a soft biometric modality to boost the performance of face verification via product-of-likelihood-ratio and support vector machine based approaches. Using the proposed KVRL-fcDBN framework, an improvement of over 20% is observed in the performance of face verification.

  19. The surprisingly high human efficiency at learning to recognize faces

    PubMed Central

    Peterson, Matthew F.; Abbey, Craig K.; Eckstein, Miguel P.

    2009-01-01

    We investigated the ability of humans to optimize face recognition performance through rapid learning of individual relevant features. We created artificial faces with discriminating visual information heavily concentrated in single features (nose, eyes, chin or mouth). In each of 2500 learning blocks a feature was randomly selected and retained over the course of four trials, during which observers identified randomly sampled, noisy face images. Observers learned the discriminating feature through indirect feedback, leading to large performance gains. Performance was compared to a learning Bayesian ideal observer, resulting in unexpectedly high learning compared to previous studies with simpler stimuli. We explore various explanations and conclude that the higher learning measured with faces cannot be driven by adaptive eye movement strategies but can be mostly accounted for by suboptimalities in human face discrimination when observers are uncertain about the discriminating feature. We show that an initial bias of humans to use specific features to perform the task even though they are informed that each of four features is equally likely to be the discriminatory feature would lead to seemingly supra-optimal learning. We also examine the possibility of inefficient human integration of visual information across the spatially distributed facial features. Together, the results suggest that humans can show large performance improvement effects in discriminating faces as they learn to identify the feature containing the discriminatory information. PMID:19000918
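The ideal-observer side of this design can be sketched as a Bayesian posterior over which of the four features is the discriminating one, updated across a block's trials. The observation values, signal size, and noise level below are invented for illustration:

```python
# Bayesian update over "which feature carries the signal", across four trials.
# Observations are fixed illustrative values; feature index 2 carries the signal.
import numpy as np

observations = np.array([
    [ 0.3, -0.5,  2.1,  0.4],
    [-0.2,  0.8,  1.7, -1.0],
    [ 0.5,  0.1,  2.4,  0.2],
    [-0.4, -0.3,  1.9,  0.6],
])
signal, sigma, n_features = 1.0, 1.0, 4

posterior = np.full(n_features, 1.0 / n_features)   # uniform prior
for obs in observations:
    # Hypothesis k: the signal is added to feature k, Gaussian noise elsewhere.
    means = signal * np.eye(n_features)             # row k = mean under hypothesis k
    like = np.exp(-0.5 * ((obs - means) / sigma) ** 2).prod(axis=1)
    posterior *= like
    posterior /= posterior.sum()

print(posterior.round(2), int(posterior.argmax()))
```

After a few trials the posterior concentrates on the feature with consistently elevated values, which is the sense in which the ideal observer "learns" the discriminating feature within a block.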

  20. Neural network face recognition using wavelets

    NASA Astrophysics Data System (ADS)

    Karunaratne, Passant V.; Jouny, Ismail I.

    1997-04-01

    The recognition of human faces is a phenomenon that has been mastered by the human visual system and that has been researched extensively in the domains of computer neural networks and image processing. This research studies neural networks and wavelet image processing techniques as applied to human face recognition. The objective of the system is to acquire a digitized still image of a human face, carry out pre-processing on the image as required, and then, given a prior database of images of possible individuals, recognize the individual in the image. The pre-processing segment of the system includes several procedures, namely image compression, denoising, and feature extraction. The image processing is carried out using Daubechies wavelets. Once the images have been passed through the wavelet-based image processor, they can be efficiently analyzed by means of a neural network. A back-propagation neural network is used for the recognition segment of the system. The main constraint of the system concerns the characteristics of the images being processed: the system should be able to recognize human faces effectively irrespective of the individual's facial expression, the presence of extraneous objects such as head-gear or spectacles, and face/head orientation. A potential application of this face recognition system would be as a secondary verification method in an automated teller machine.
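The wavelet-then-classify pipeline can be sketched compactly. As stand-ins, a one-level Haar approximation replaces the paper's Daubechies wavelets and nearest-neighbour matching replaces the back-propagation network; the gallery names and image sizes are invented:

```python
# Sketch: wavelet-style compression (one-level Haar approximation) followed
# by matching. Gallery images are synthetic; names are hypothetical.
import numpy as np

def haar_approx(img):
    """One-level 2-D Haar approximation: 2x2 block averages (4x compression)."""
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

rng = np.random.default_rng(0)
gallery = {name: rng.normal(size=(16, 16)) for name in ("alice", "bob")}
features = {name: haar_approx(img).ravel() for name, img in gallery.items()}

def identify(img):
    """Nearest gallery identity in the compressed feature space."""
    f = haar_approx(img).ravel()
    return min(features, key=lambda n: np.linalg.norm(features[n] - f))

probe = gallery["bob"] + 0.2 * rng.normal(size=(16, 16))
print(identify(probe))
```

The design point is that matching happens in the compressed wavelet domain, so the downstream classifier (here a nearest-neighbour rule, in the paper a neural network) sees a smaller, denoised input.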

  1. The lasting effects of process-specific versus stimulus-specific learning during infancy.

    PubMed

    Hadley, Hillary; Pickron, Charisse B; Scott, Lisa S

    2015-09-01

    The capacity to tell the difference between two faces within an infrequently experienced face group (e.g. other species, other race) declines from 6 to 9 months of age unless infants learn to match these faces with individual-level names. Similarly, the use of individual-level labels can also facilitate differentiation of a group of non-face objects (strollers). This early learning leads to increased neural specialization for previously unfamiliar face or object groups. The current investigation aimed to determine whether early conceptual learning between 6 and 9 months leads to sustained behavioral advantages and neural changes in these same children at 4-6 years of age. Results suggest that relative to a control group of children with no previous training and to children with infant category-level naming experience, children with early individual-level training exhibited faster response times to human faces. Further, individual-level training with a face group - but not an object group - led to more adult-like neural responses for human faces. These results suggest that early individual-level learning results in long-lasting process-specific effects, which benefit categories that continue to be perceived and recognized at the individual level (e.g. human faces). © 2014 John Wiley & Sons Ltd.

  2. Lateralization of kin recognition signals in the human face

    PubMed Central

    Dal Martello, Maria F.; Maloney, Laurence T.

    2010-01-01

    When human subjects view photographs of faces, their judgments of identity, gender, emotion, age, and attractiveness depend more on one side of the face than the other. We report an experiment testing whether allocentric kin recognition (the ability to judge the degree of kinship between individuals other than the observer) is also lateralized. One hundred and twenty-four observers judged whether or not pairs of children were biological siblings by looking at photographs of their faces. In three separate conditions, (1) the right hemi-face was masked, (2) the left hemi-face was masked, or (3) the face was fully visible. The d′ measures for the masked left hemi-face and masked right hemi-face were 1.024 and 1.004, respectively (no significant difference), and the d′ measure for the unmasked face was 1.079, not significantly greater than that for either of the masked conditions. We conclude, first, that there is no superiority of one or the other side of the observed face in kin recognition, second, that the information present in the left and right hemi-faces relevant to recognizing kin is completely redundant, and last that symmetry cues are not used for kin recognition. PMID:20884584
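The d′ sensitivity measure used above is computed from hit and false-alarm rates via the inverse normal CDF; the rates in this example are illustrative, not the study's values:

```python
# d-prime (d') from hit and false-alarm rates: d' = z(H) - z(FA),
# where z is the inverse standard normal CDF. Rates are illustrative.
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# e.g. 70% "siblings" responses to true sibling pairs, 33% to unrelated pairs
print(round(d_prime(0.70, 0.33), 3))
```

A d′ near 1.0, as reported in all three conditions above, corresponds to moderate discriminability; d′ = 0 would mean observers cannot tell sibling from unrelated pairs at all.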

  3. The neural code for face orientation in the human fusiform face area.

    PubMed

    Ramírez, Fernando M; Cichy, Radoslaw M; Allefeld, Carsten; Haynes, John-Dylan

    2014-09-03

    Humans recognize faces and objects with high speed and accuracy regardless of their orientation. Recent studies have proposed that orientation invariance in face recognition involves an intermediate representation where neural responses are similar for mirror-symmetric views. Here, we used fMRI, multivariate pattern analysis, and computational modeling to investigate the neural encoding of faces and vehicles at different rotational angles. Corroborating previous studies, we demonstrate a representation of face orientation in the fusiform face-selective area (FFA). We go beyond these studies by showing that this representation is category-selective and tolerant to retinal translation. Critically, by controlling for low-level confounds, we found the representation of orientation in FFA to be compatible with a linear angle code. Aspects of mirror-symmetric coding cannot be ruled out when FFA mean activity levels are considered as a dimension of coding. Finally, we used a parametric family of computational models, involving a biased sampling of view-tuned neuronal clusters, to compare different face angle encoding models. The best fitting model exhibited a predominance of neuronal clusters tuned to frontal views of faces. In sum, our findings suggest a category-selective and monotonic code of face orientation in the human FFA, in line with primate electrophysiology studies that observed mirror-symmetric tuning of neural responses at higher stages of the visual system, beyond the putative homolog of human FFA. Copyright © 2014 the authors.

  4. It is all in the face: carotenoid skin coloration loses attractiveness outside the face.

    PubMed

    Lefevre, C E; Ewbank, M P; Calder, A J; von dem Hagen, E; Perrett, D I

    2013-01-01

    Recently, the importance of skin colour for facial attractiveness has been recognized. In particular, dietary carotenoid-induced skin colour has been proposed as a signal of health and therefore attractiveness. While perceptual results are highly consistent, it is currently not clear whether carotenoid skin colour is preferred because it poses a cue to current health condition in humans or whether it is simply seen as a more aesthetically pleasing colour, independently of skin-specific signalling properties. Here, we tested this question by comparing attractiveness ratings of faces to corresponding ratings of meaningless scrambled face images matching the colours and contrasts found in the face. We produced sets of face and non-face stimuli with either healthy (high-carotenoid coloration) or unhealthy (low-carotenoid coloration) colour and asked participants for attractiveness ratings. Results showed that, while for faces increased carotenoid coloration significantly improved attractiveness, there was no equivalent effect on perception of scrambled images. These findings are consistent with a specific signalling system of current condition through skin coloration in humans and indicate that preferences are not caused by sensory biases in observers.

  5. Life Cycle Impacts of a Commercial Rainwater Harvesting System and Sustainability

    EPA Science Inventory

    A sustainability paradigm is being recognized globally as a path forward for human prosperity and ecological health in the face of climate change and challenges of the water-energy-food nexus. Rainwater harvesting (RWH) and related green infrastructure practices are receiving ren...

  6. Comparison of emotion recognition from facial expression and music.

    PubMed

    Gaspar, Tina; Labor, Marina; Jurić, Iva; Dumancić, Dijana; Ilakovac, Vesna; Heffer, Marija

    2011-01-01

    The recognition of basic emotions in everyday communication involves interpretation of different visual and auditory clues. The ability to recognize emotions is not clearly determined as their presentation is usually very short (micro expressions), whereas the recognition itself does not have to be a conscious process. We assumed that the recognition from facial expressions is selected over the recognition of emotions communicated through music. In order to compare the success rate in recognizing emotions presented as facial expressions or in classical music works we conducted a survey which included 90 elementary school and 87 high school students from Osijek (Croatia). The participants had to match 8 photographs of different emotions expressed on the face and 8 pieces of classical music works with 8 offered emotions. The recognition of emotions expressed through classical music pieces was significantly less successful than the recognition of emotional facial expressions. The high school students were significantly better at recognizing facial emotions than the elementary school students, whereas girls were better than boys. The success rate in recognizing emotions from music pieces was associated with higher grades in mathematics. Basic emotions are far better recognized if presented on human faces than in music, possibly because the understanding of facial emotions is one of the oldest communication skills in human society. Female advantage in emotion recognition was selected due to the necessity of their communication with the newborns during early development. The proficiency in recognizing emotional content of music and mathematical skills probably share some general cognitive skills like attention, memory and motivation. Music pieces were processed differently in the brain than facial expressions and, consequently, were probably evaluated differently as relevant emotional clues.

  7. Cultural similarities and differences in perceiving and recognizing facial expressions of basic emotions.

    PubMed

    Yan, Xiaoqian; Andrews, Timothy J; Young, Andrew W

    2016-03-01

    The ability to recognize facial expressions of basic emotions is often considered a universal human ability. However, recent studies have suggested that this commonality has been overestimated and that people from different cultures use different facial signals to represent expressions (Jack, Blais, Scheepers, Schyns, & Caldara, 2009; Jack, Caldara, & Schyns, 2012). We investigated this possibility by examining similarities and differences in the perception and categorization of facial expressions between Chinese and white British participants using whole-face and partial-face images. Our results showed no cultural difference in the patterns of perceptual similarity of expressions from whole-face images. When categorizing the same expressions, however, both British and Chinese participants were slightly more accurate with whole-face images of their own ethnic group. To further investigate potential strategy differences, we repeated the perceptual similarity and categorization tasks with presentation of only the upper or lower half of each face. Again, the perceptual similarity of facial expressions was similar between Chinese and British participants for both the upper and lower face regions. However, participants were slightly better at categorizing facial expressions of their own ethnic group for the lower face regions, indicating that the way in which culture shapes the categorization of facial expressions is largely driven by differences in information decoding from this part of the face.

  8. Unaware person recognition from the body when face identification fails.

    PubMed

    Rice, Allyson; Phillips, P Jonathon; Natu, Vaidehi; An, Xiaobo; O'Toole, Alice J

    2013-11-01

    How does one recognize a person when face identification fails? Here, we show that people rely on the body but are unaware of doing so. State-of-the-art face-recognition algorithms were used to select images of people with almost no useful identity information in the face. Recognition of the face alone in these cases was near chance level, but recognition of the person was accurate. Accuracy in identifying the person without the face was identical to that in identifying the whole person. Paradoxically, people reported relying heavily on facial features over noninternal face and body features in making their identity decisions. Eye movements indicated otherwise, with gaze duration and fixations shifting adaptively toward the body and away from the face when the body was a better indicator of identity than the face. This shift occurred with no cost to accuracy or response time. Human identity processing may be partially inaccessible to conscious awareness.

  9. Unconstrained face detection and recognition based on RGB-D camera for the visually impaired

    NASA Astrophysics Data System (ADS)

    Zhao, Xiangdong; Wang, Kaiwei; Yang, Kailun; Hu, Weijian

    2017-02-01

    It is highly important for visually impaired people (VIP) to be aware of the human beings around them, so correctly recognizing people in VIP-assisting apparatus provides great convenience. However, in classical face recognition technology, the faces used in training and prediction procedures are usually frontal, and acquiring face images requires subjects to come close to the camera so that a frontal pose and adequate illumination are guaranteed. Meanwhile, labels of faces are defined manually rather than automatically; most of the time, labels belonging to different classes need to be input one by one. These constraints hinder practical assistive applications for VIP. In this article, a face recognition system for unconstrained environments is proposed. Specifically, it requires neither the frontal pose nor the uniform illumination demanded by previous algorithms. The contributions of this work lie in three aspects. First, real-time frontal-face synthesis enhancement is implemented, and the synthesized frontal faces help to increase the recognition rate, as experimental results confirm. Second, an RGB-D camera plays a significant role in our system: both color and depth information are utilized to achieve real-time face tracking, which not only raises the detection rate but also makes it possible to label faces automatically. Finally, we propose using neural networks to train a face recognition system, with Principal Component Analysis (PCA) applied to pre-refine the input data. This system is expected to help VIP become familiar with others and enable them to recognize people once the system is sufficiently trained.
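    The PCA pre-refinement step mentioned in this record can be sketched as an eigenface-style projection followed by a simple classifier. This is a minimal illustration on synthetic data, not the authors' system; in particular, the nearest-neighbour identification below stands in for their neural network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for aligned, flattened face images (n_samples x n_pixels).
faces = rng.normal(size=(40, 256))
labels = np.repeat(np.arange(8), 5)  # 8 identities, 5 images each

# PCA pre-refinement: project onto the top-k principal axes ("eigenfaces")
# of the mean-centred data before classification.
mean = faces.mean(axis=0)
X = faces - mean
_, _, Vt = np.linalg.svd(X, full_matrices=False)  # rows of Vt = principal axes
k = 10

def project(imgs):
    return (imgs - mean) @ Vt[:k].T

train = project(faces)  # shape (40, k): refined, low-dimensional inputs

def identify(img):
    """Nearest-neighbour identity in PCA space (placeholder classifier)."""
    d = np.linalg.norm(train - project(img[None, :]), axis=1)
    return labels[np.argmin(d)]

assert identify(faces[7]) == labels[7]  # a training image matches its own label
```

The dimensionality reduction (256 pixels down to k = 10 components here) is what makes the downstream classifier cheaper to train, which is the usual motivation for PCA pre-refinement.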

  10. Reconstructing human evolution: Achievements, challenges, and opportunities

    PubMed Central

    Wood, Bernard

    2010-01-01

    This contribution reviews the evidence that has resolved the branching structure of the higher primate part of the tree of life and the substantial body of fossil evidence for human evolution. It considers some of the problems faced by those who try to interpret the taxonomy and systematics of the human fossil record. How do you tell an early human taxon from one in a closely related clade? How do you determine the number of taxa represented in the human clade? How can homoplasy be recognized and factored into attempts to recover phylogeny? PMID:20445105

  11. Recognition of own-race and other-race faces by three-month-old infants.

    PubMed

    Sangrigoli, Sandy; De Schonen, Scania

    2004-10-01

    People are better at recognizing faces of their own race than faces of another race. Such race specificity may be due to differential expertise in the two races. In order to find out whether this other-race effect develops as early as face-recognition skills or whether it is a long-term effect of acquired expertise, we tested face recognition in 3-month-old Caucasian infants by conducting two experiments using Caucasian and Asiatic faces and a visual pair-comparison task. We hypothesized that if the other race effect develops together with face processing skills during the first months of life, the ability to recognize own-race faces will be greater than the ability to recognize other-race faces: 3-month-old Caucasian infants should be better at recognizing Caucasian faces than Asiatic faces. If, on the contrary, the other-race effect is the long-term result of acquired expertise, no difference between recognizing own- and other-race faces will be observed at that age. In Experiment 1, Caucasian infants were habituated to a single face. Recognition was assessed by a novelty preference paradigm. The infants' recognition performance was better for Caucasian than for Asiatic faces. In Experiment 2, Caucasian infants were familiarized with three individual faces. Recognition was demonstrated with both Caucasian and Asiatic faces. These results suggest that (i) the representation of face information by 3-month-olds may be race-experience-dependent (Experiment 1), and (ii) short-term familiarization with exemplars of another race group is sufficient to reduce the other-race effect and to extend the power of face processing (Experiment 2).

  12. Unpacking Intuition

    PubMed Central

    Seligman, Martin E.P.; Kahana, Michael

    2009-01-01

    Can intuition be taught? The way in which faces are recognized, the structure of natural classes, and the architecture of intuition may all be instances of the same process. The conjecture that intuition is a species of recognition memory implies that human intuitive decision making can be enormously enhanced by virtual simulation. PMID:20300491

  13. Gene conservation in California's forests

    Treesearch

    Constance I. Millar

    1986-01-01

    The University of California's Wildland Resources Center has established a new program of forest gene conservation to ensure that California's rich and diverse forests maintain their vigor and productivity in the face of human activities. At an international level, conservation biologists recognize the importance not only of protecting rare species from...

  14. Facial Expressions and Ability to Recognize Emotions From Eyes or Mouth in Children

    PubMed Central

    Guarnera, Maria; Hichy, Zira; Cascio, Maura I.; Carrubba, Stefano

    2015-01-01

    This research aims to contribute to the literature on the ability to recognize anger, happiness, fear, surprise, sadness, disgust and neutral emotions from facial information. By investigating children’s performance in detecting these emotions from a specific face region, we were interested to know whether children would show differences in recognizing these expressions from the upper or lower face, and if any difference between specific facial regions depended on the emotion in question. For this purpose, a group of 6-7 year-old children was selected. Participants were asked to recognize emotions by using a labeling task with three stimulus types (region of the eyes, of the mouth, and full face). The findings seem to indicate that children correctly recognize basic facial expressions when pictures represent the whole face, except for a neutral expression, which was recognized from the mouth, and sadness, which was recognized from the eyes. Children are also able to identify anger from the eyes as well as from the whole face. With respect to gender differences, there is no female advantage in emotional recognition. The results indicate a significant interaction ‘gender x face region’ only for anger and neutral emotions. PMID:27247651

  15. Neurons in the human amygdala selective for perceived emotion

    PubMed Central

    Wang, Shuo; Tudusciuc, Oana; Mamelak, Adam N.; Ross, Ian B.; Adolphs, Ralph; Rutishauser, Ueli

    2014-01-01

    The human amygdala plays a key role in recognizing facial emotions and neurons in the monkey and human amygdala respond to the emotional expression of faces. However, it remains unknown whether these responses are driven primarily by properties of the stimulus or by the perceptual judgments of the perceiver. We investigated these questions by recording from over 200 single neurons in the amygdalae of 7 neurosurgical patients with implanted depth electrodes. We presented degraded fear and happy faces and asked subjects to discriminate their emotion by button press. During trials where subjects responded correctly, we found neurons that distinguished fear vs. happy emotions as expressed by the displayed faces. During incorrect trials, these neurons indicated the patients’ subjective judgment. Additional analysis revealed that, on average, all neuronal responses were modulated most by increases or decreases in response to happy faces, and driven predominantly by judgments about the eye region of the face stimuli. Following the same analyses, we showed that hippocampal neurons, unlike amygdala neurons, only encoded emotions but not subjective judgment. Our results suggest that the amygdala specifically encodes the subjective judgment of emotional faces, but that it plays less of a role in simply encoding aspects of the image array. The conscious percept of the emotion shown in a face may thus arise from interactions between the amygdala and its connections within a distributed cortical network, a scheme also consistent with the long response latencies observed in human amygdala recordings. PMID:24982200

  16. Newborns' Face Recognition over Changes in Viewpoint

    ERIC Educational Resources Information Center

    Turati, Chiara; Bulf, Hermann; Simion, Francesca

    2008-01-01

    The study investigated the origins of the ability to recognize faces despite rotations in depth. Four experiments are reported that tested, using the habituation technique, whether 1-to-3-day-old infants are able to recognize the invariant aspects of a face over changes in viewpoint. Newborns failed to recognize facial perceptual invariances…

  17. Recognizing Faces

    ERIC Educational Resources Information Center

    Ellis, Hadyn D.

    1975-01-01

    The proposition that the mechanisms underlying facial recognition are different from those involved in recognizing other classes of pictorial material was assessed following a general review of the literature concerned with recognizing faces. (Author/RK)

  18. You Look Familiar: How Malaysian Chinese Recognize Faces

    PubMed Central

    Tan, Chrystalle B. Y.; Stephen, Ian D.; Whitehead, Ross; Sheppard, Elizabeth

    2012-01-01

    East Asian and white Western observers employ different eye movement strategies for a variety of visual processing tasks, including face processing. Recent eye tracking studies on face recognition found that East Asians tend to integrate information holistically by focusing on the nose while white Westerners perceive faces featurally by moving between the eyes and mouth. The current study examines the eye movement strategy that Malaysian Chinese participants employ when recognizing East Asian, white Western, and African faces. Rather than adopting the Eastern or Western fixation pattern, Malaysian Chinese participants use a mixed strategy by focusing on the eyes and nose more than the mouth. The combination of Eastern and Western strategies proved advantageous in participants' ability to recognize East Asian and white Western faces, suggesting that individuals learn to use fixation patterns that are optimized for recognizing the faces with which they are more familiar. PMID:22253762

  19. Decoding representations of face identity that are tolerant to rotation.

    PubMed

    Anzellotti, Stefano; Fairhall, Scott L; Caramazza, Alfonso

    2014-08-01

    In order to recognize the identity of a face we need to distinguish very similar images (specificity) while also generalizing identity information across image transformations such as changes in orientation (tolerance). Recent studies investigated the representation of individual faces in the brain, but it remains unclear whether the human brain regions that were found encode representations of individual images (specificity) or face identity (specificity plus tolerance). In the present article, we use multivoxel pattern analysis in the human ventral stream to investigate the representation of face identity across rotations in depth, a kind of transformation in which no point in the face image remains unchanged. The results reveal representations of face identity that are tolerant to rotations in depth in occipitotemporal cortex and in anterior temporal cortex, even when the similarity between mirror symmetrical views cannot be used to achieve tolerance. Converging evidence from different analysis techniques shows that the right anterior temporal lobe encodes a comparable amount of identity information to occipitotemporal regions, but this information is encoded over a smaller extent of cortex.

  20. Robust Point Set Matching for Partial Face Recognition.

    PubMed

    Weng, Renliang; Lu, Jiwen; Tan, Yap-Peng

    2016-03-01

    Over the past three decades, a number of face recognition methods have been proposed in computer vision, and most of them use holistic face images for person identification. In many real-world scenarios, especially some unconstrained environments, human faces might be occluded by other objects, and it is difficult to obtain fully holistic face images for recognition. To address this, we propose a new partial face recognition approach to recognize persons of interest from their partial faces. Given a gallery image and a probe face patch, we first detect keypoints and extract their local textural features. Then, we propose a robust point set matching method to discriminatively match the two extracted local feature sets, where both the textural and geometrical information of local features are explicitly and simultaneously used for matching. Finally, the similarity of the two faces is computed from the distance between the two aligned feature sets. Experimental results on four public face data sets show the effectiveness of the proposed approach.
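    The combined use of textural and geometrical information can be illustrated with a cost matrix that sums descriptor distance and keypoint-location distance, solved as an assignment problem. This is a sketch of the general idea, not the authors' robust matching algorithm; the weighting `lam` and the use of a Hungarian-algorithm solver are assumptions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_cost(desc_a, pts_a, desc_b, pts_b, lam=0.5):
    """Face-to-face distance from jointly matching local features.
    Cost of pairing feature i with feature j combines textural distance
    (descriptors) and geometrical distance (keypoint locations)."""
    tex = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    geo = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
    cost = tex + lam * geo
    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one matching
    return cost[rows, cols].mean()            # mean matched cost = face distance

rng = np.random.default_rng(1)
desc = rng.normal(size=(12, 32))         # 12 keypoints with 32-d descriptors
pts = rng.uniform(0, 100, size=(12, 2))  # keypoint locations in the image

# The same face (a slightly perturbed patch) should score far closer
# than an unrelated face with different features and geometry.
same = match_cost(desc, pts, desc + 0.05 * rng.normal(size=desc.shape), pts)
diff = match_cost(desc, pts, rng.normal(size=(12, 32)), rng.uniform(0, 100, (12, 2)))
assert same < diff
```

Using both terms in one cost matrix is what lets geometry veto texturally similar but spatially implausible pairings, which is the intuition the record describes.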

  1. Lean Production as Promoter of Thinkers to Achieve Companies' Agility

    ERIC Educational Resources Information Center

    Alves, Anabela C.; Dinis-Carvalho, Jose; Sousa, Rui M.

    2012-01-01

    Purpose: This paper aims to explore the lean production paradigm as promoter of workers' creativity and thinking potential, and recognize this human potential as a fundamental asset for companies' growth and success, being a major factor to face the disturbing and unpredictable needs of current markets, providing companies with the necessary…

  2. Sustainability analysis and life-cycle ecological impacts of rainwater harvesting systems using holistic analysis and a modified eco-efficiency framework

    EPA Science Inventory

    Background/Question/Methods A sustainability paradigm is being recognized globally as a path forward for human prosperity and ecological health in the face of climate change and meeting challenges of the water-energy-food nexus. Rainfall shortages for drinking water and crop pro...

  3. Is facial skin tone sufficient to produce a cross-racial identification effect?

    PubMed

    Alley, T R; Schultheis, J A

    2001-06-01

    Research clearly supports the existence of an other-race effect for human faces whereby own-race faces are more accurately perceived and recognized. Why this occurs remains unclear. A computerized program (Mac-a-Mug Pro) for face composition was used to create pairs of target and distractor faces that differed only in skin tone. The six target faces were rated on honesty and aggressiveness by 72 university students, with just one 'Black' and one 'White' face viewed by each student. One week later, they attempted to identify these faces in four lineups: two with target-present and two with target-absent. The order of presentation of targets, lineups, and faces within lineups was varied. Own-race identification was slightly better than cross-racial identification. There was no significant difference in the confidence of responses to own- versus other-race faces. These results indicate that neither morphological variation nor differential confidence is necessary for a cross-racial identification effect.

  4. Face recognition in newly hatched chicks at the onset of vision.

    PubMed

    Wood, Samantha M W; Wood, Justin N

    2015-04-01

    How does face recognition emerge in the newborn brain? To address this question, we used an automated controlled-rearing method with a newborn animal model: the domestic chick (Gallus gallus). This automated method allowed us to examine chicks' face recognition abilities at the onset of both face experience and object experience. In the first week of life, newly hatched chicks were raised in controlled-rearing chambers that contained no objects other than a single virtual human face. In the second week of life, we used an automated forced-choice testing procedure to examine whether chicks could distinguish that familiar face from a variety of unfamiliar faces. Chicks successfully distinguished the familiar face from most of the unfamiliar faces-for example, chicks were sensitive to changes in the face's age, gender, and orientation (upright vs. inverted). Thus, chicks can build an accurate representation of the first face they see in their life. These results show that the initial state of face recognition is surprisingly powerful: Newborn visual systems can begin encoding and recognizing faces at the onset of vision.

  5. Individual differences in cortical face selectivity predict behavioral performance in face recognition

    PubMed Central

    Huang, Lijie; Song, Yiying; Li, Jingguang; Zhen, Zonglei; Yang, Zetian; Liu, Jia

    2014-01-01

    In functional magnetic resonance imaging studies, object selectivity is defined as a higher neural response to an object category than other object categories. Importantly, object selectivity is widely considered as a neural signature of a functionally-specialized area in processing its preferred object category in the human brain. However, the behavioral significance of the object selectivity remains unclear. In the present study, we used the individual differences approach to correlate participants' face selectivity in the face-selective regions with their behavioral performance in face recognition measured outside the scanner in a large sample of healthy adults. Face selectivity was defined as the z score of activation with the contrast of faces vs. non-face objects, and the face recognition ability was indexed as the normalized residual of the accuracy in recognizing previously-learned faces after regressing out that for non-face objects in an old/new memory task. We found that the participants with higher face selectivity in the fusiform face area (FFA) and the occipital face area (OFA), but not in the posterior part of the superior temporal sulcus (pSTS), possessed higher face recognition ability. Importantly, the association of face selectivity in the FFA and face recognition ability cannot be accounted for by FFA response to objects or behavioral performance in object recognition, suggesting that the association is domain-specific. Finally, the association is reliable, confirmed by the replication from another independent participant group. In sum, our finding provides empirical evidence on the validity of using object selectivity as a neural signature in defining object-selective regions in the human brain. PMID:25071513
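    The residualization used in this record to index face recognition ability (face accuracy after regressing out object accuracy) can be sketched as follows. The data are simulated and the effect sizes are invented purely for illustration:

```python
import numpy as np

def residualize(y, x):
    """Residual of y after regressing out x (with intercept): the record's
    way of isolating face-specific ability from general object recognition."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

rng = np.random.default_rng(2)
n = 60
object_acc = rng.normal(0.8, 0.05, n)      # accuracy for non-face objects
ffa_selectivity = rng.normal(2.0, 0.5, n)  # z score of the faces > objects contrast
# Simulated face accuracy: a shared object-recognition component plus a
# selectivity-linked component plus noise (all coefficients invented).
face_acc = 0.5 * object_acc + 0.03 * ffa_selectivity + rng.normal(0, 0.02, n)

face_ability = residualize(face_acc, object_acc)  # domain-specific score
r = np.corrcoef(face_ability, ffa_selectivity)[0, 1]
assert r > 0  # higher FFA selectivity tracks better face-specific recognition
```

Regressing out the object score before correlating is what licenses the record's claim that the selectivity-ability association is domain-specific rather than driven by general recognition skill.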

  6. Compensation for Blur Requires Increase in Field of View and Viewing Time

    PubMed Central

    Kwon, MiYoung; Liu, Rong; Chien, Lillian

    2016-01-01

    Spatial resolution is an important factor for human pattern recognition. In particular, low resolution (blur) is a defining characteristic of low vision. Here, we examined spatial (field of view) and temporal (stimulus duration) requirements for blurry object recognition. The spatial resolution of an image, such as a letter or a face, was manipulated with a low-pass filter. In experiment 1, studying the spatial requirement, observers viewed a fixed-size object through a window of varying sizes, which was repositioned until object identification (moving window paradigm). The field of view requirement, quantified as the number of “views” (window repositions) for correct recognition, was obtained for three blur levels, including no blur. In experiment 2, studying the temporal requirement, we determined threshold viewing time, the stimulus duration yielding criterion recognition accuracy, at six blur levels, including no blur. For letter and face recognition, we found blur significantly increased the number of views, suggesting a larger field of view is required to recognize blurry objects. We also found blur significantly increased threshold viewing time, suggesting longer temporal integration is necessary to recognize blurry objects. The temporal integration reflects the tradeoff between stimulus intensity and time. While humans excel at recognizing blurry objects, our findings suggest compensating for blur requires increased field of view and viewing time. The need for larger spatial and longer temporal integration for recognizing blurry objects may further challenge object recognition in low vision. Thus, interactions between blur and field of view should be considered for developing low vision rehabilitation or assistive aids. PMID:27622710
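    Low-pass filtering as a blur manipulation can be sketched with a Gaussian filter. The sigma values below are arbitrary stand-ins for the study's blur levels, and the high-frequency-energy measure is just one way to verify that stronger filtering removes fine detail:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
# Stand-in for a letter/face image: random binary texture with sharp edges.
img = (rng.random((64, 64)) > 0.5).astype(float)

def high_freq_energy(im):
    """Spectral energy outside a small low-frequency band (via 2-D FFT)."""
    f = np.fft.fftshift(np.fft.fft2(im))
    c = im.shape[0] // 2
    f[c - 4:c + 4, c - 4:c + 4] = 0  # discard the low-pass band
    return float(np.sum(np.abs(f) ** 2))

# Increasing Gaussian sigma = stronger low-pass filter = blurrier stimulus.
energies = [high_freq_energy(gaussian_filter(img, sigma=s)) for s in (0, 1, 2, 4)]
assert energies == sorted(energies, reverse=True)  # blur removes fine detail
```

Because a Gaussian low-pass filter attenuates every nonzero spatial frequency more as sigma grows, high-frequency energy falls monotonically across the blur levels, mirroring the graded blur conditions described in the record.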

  7. Toward an integrative view of human pain and suffering. Reply to comments on “Facing the experience of pain: A neuropsychological perspective”

    NASA Astrophysics Data System (ADS)

    Fabbro, Franco; Crescentini, Cristiano

    2014-09-01

    We would like to begin this response by recognizing the important contribution made by Grant [1], Pagnoni and Porro [2], Avenanti, Vicario and Borgomaneri [3], Masataka [4], Gard [5], and De Anna [6] to our review [7]. Through their thought-provoking and insightful commentaries, and with their diverse expertise, all commentators have contributed to enrich the discussion on human pain and suffering.

  8. High-emulation mask recognition with high-resolution hyperspectral video capture system

    NASA Astrophysics Data System (ADS)

    Feng, Jiao; Fang, Xiaojing; Li, Shoufeng; Wang, Yongjin

    2014-11-01

    We present a method for distinguishing a human face from a high-emulation mask, which is increasingly used by criminals for activities such as stealing card numbers and passwords at ATMs. Traditional facial recognition techniques have difficulty detecting such camouflaged criminals. In this paper, we use a high-resolution hyperspectral video capture system to detect high-emulation masks. An RGB camera is used for traditional facial recognition. A prism and a gray scale camera are used to capture spectral information of the observed face. Experiments show that a mask made of silica gel has different spectral reflectance compared with human skin. As multispectral imaging offers additional spectral information about physical characteristics, a high-emulation mask can be easily recognized.
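    The spectral discrimination described here can be sketched as comparing a measured reflectance spectrum against reference spectra for skin and mask material. All spectra below are synthetic placeholders, not measured silicone or skin reflectances, and correlation matching is only one simple classifier choice:

```python
import numpy as np

rng = np.random.default_rng(4)
wavelengths = np.linspace(400, 1000, 61)  # nm, visible through near-infrared

# Synthetic reference spectra (invented shapes, not real material data).
skin_ref = 0.2 + 0.5 / (1 + np.exp(-(wavelengths - 600) / 40))  # rising profile
mask_ref = 0.45 + 0.05 * np.sin(wavelengths / 80)               # flatter profile

def classify(spectrum):
    """Label a measured spectrum by its correlation with each reference."""
    r_skin = np.corrcoef(spectrum, skin_ref)[0, 1]
    r_mask = np.corrcoef(spectrum, mask_ref)[0, 1]
    return "skin" if r_skin > r_mask else "mask"

noisy_skin = skin_ref + rng.normal(0, 0.01, skin_ref.shape)
noisy_mask = mask_ref + rng.normal(0, 0.01, mask_ref.shape)
assert classify(noisy_skin) == "skin"
assert classify(noisy_mask) == "mask"
```

The point of the sketch is only that per-pixel spectra carry shape information an RGB camera collapses into three numbers, which is why the hyperspectral channel can separate materials that look identical in color.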

  9. Touching the Challenge: Embodied Solutions Enabling Humanistic Moral Education

    ERIC Educational Resources Information Center

    Schwarz-Franco, Orit

    2016-01-01

    One of the main educational challenges we still face today--more than ever--is the humanistic challenge, namely how to promote humanistic moral values, how to strengthen in students the motivation to be morally active, and especially how to help them recognize the other as a human subject. I adopt Nel Noddings' approach of relational ethics of…

  10. Information and Communication Technologies and Social Mobilization: The Case of the Indigenous Movement in Ecuador, 2007-2011

    ERIC Educational Resources Information Center

    Green-Barber, Lindsay N.

    2012-01-01

    Over the last three decades Indigenous people in Ecuador have faced government policies threatening their internationally recognized Indigenous human rights. Although a national social movement emerged in Ecuador in 1990, the level of mobilization has since varied. This dissertation project proposes to address the question, under what conditions…

  11. Infants' Recognition of Objects Using Canonical Color

    ERIC Educational Resources Information Center

    Kimura, Atsushi; Wada, Yuji; Yang, Jiale; Otsuka, Yumiko; Dan, Ippeita; Masuda, Tomohiro; Kanazawa, So; Yamaguchi, Masami K.

    2010-01-01

    We explored infants' ability to recognize the canonical colors of daily objects, including two color-specific objects (human face and fruit) and a non-color-specific object (flower), by using a preferential looking technique. A total of 58 infants between 5 and 8 months of age were tested with a stimulus composed of two color pictures of an object…

  12. Recognition and context memory for faces from own and other ethnic groups: a remember-know investigation.

    PubMed

    Horry, Ruth; Wright, Daniel B; Tredoux, Colin G

    2010-03-01

    People are more accurate at recognizing faces from their own ethnic group than at recognizing faces from other ethnic groups. This other-ethnicity effect (OEE) in recognition may be produced by a deficit in recollective memory for other-ethnicity faces. In a single study, White and Black participants saw White and Black faces presented within several different visual contexts. The participants were then given an old/new recognition task. Old responses were followed by remember-know-guess judgments and context judgments. Own-ethnicity faces were recognized more accurately, were given more remember responses, and produced more accurate context judgments than did other-ethnicity faces. These results are discussed in a dual-process framework, and implications for eyewitness memory are considered.

  13. Colour detection thresholds in faces and colour patches.

    PubMed

    Tan, Kok Wei; Stephen, Ian D

    2013-01-01

    Human facial skin colour reflects individuals' underlying health (Stephen et al 2011 Evolution & Human Behavior 32 216-227); and enhanced facial skin CIELab b* (yellowness), a* (redness), and L* (lightness) are perceived as healthy (also Stephen et al 2009a International Journal of Primatology 30 845-857). Here, we examine Malaysian Chinese participants' detection thresholds for CIELab L* (lightness), a* (redness), and b* (yellowness) colour changes in Asian, African, and Caucasian faces and skin coloured patches. Twelve face photos and three skin coloured patches were transformed to produce four pairs of images of each individual face and colour patch with different amounts of red, yellow, or lightness, from very subtle (deltaE = 1.2) to quite large differences (deltaE = 9.6). Participants were asked to decide which of sequentially displayed, paired same-face images or colour patches were lighter, redder, or yellower. Changes in facial redness, followed by changes in yellowness, were more easily discriminated than changes in luminance. However, visual sensitivity was not greater for redness and yellowness in nonface stimuli, suggesting that red facial skin colour has special salience. Participants were also significantly better at recognizing colour differences in own-race (Asian) and Caucasian faces than in African faces, suggesting the existence of a cross-race effect in discriminating facial colours. Humans' colour vision may have been selected for skin colour signalling (Changizi et al 2006 Biology Letters 2 217-221), enabling individuals to perceive subtle changes in skin colour, reflecting health and emotional status.
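    The deltaE colour-difference steps reported in this record (1.2 to 9.6) can be illustrated with the simplest formula, CIE76, which is plain Euclidean distance in CIELab. Whether the authors used CIE76 or a later formula is an assumption here, and the skin-tone values are hypothetical:

```python
import numpy as np

def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in CIELab space.
    (Later formulas such as CIEDE2000 add perceptual weighting terms.)"""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

skin = (65.0, 15.0, 18.0)         # hypothetical skin tone in CIELab (L*, a*, b*)
redder = (65.0, 16.2, 18.0)       # a* raised by 1.2: the subtlest step reported
much_redder = (65.0, 24.6, 18.0)  # a* raised by 9.6: the largest step reported

assert abs(delta_e_cie76(skin, redder) - 1.2) < 1e-9
assert abs(delta_e_cie76(skin, much_redder) - 9.6) < 1e-9
```

Shifting a single axis by a fixed amount, as above, is one straightforward way to generate stimulus pairs at a chosen deltaE along the redness, yellowness, or lightness dimension.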

  14. Recognition profile of emotions in natural and virtual faces.

    PubMed

    Dyck, Miriam; Winbeck, Maren; Leiberg, Susanne; Chen, Yuhan; Gur, Ruben C; Mathiak, Klaus

    2008-01-01

    Computer-generated virtual faces are becoming increasingly realistic, including the simulation of emotional expressions. These faces can be used as well-controlled, realistic and dynamic stimuli in emotion research. However, the validity of virtual facial expressions in comparison to natural emotion displays still needs to be shown for the different emotions and different age groups. Thirty-two healthy volunteers between the ages of 20 and 60 rated pictures of natural human faces and faces of virtual characters (avatars) with respect to the expressed emotions: happiness, sadness, anger, fear, disgust, and neutral. Results indicate that virtual emotions were recognized comparably to natural ones. Recognition differences between virtual and natural faces depended on the specific emotion: whereas disgust was difficult to convey with the current avatar technology, virtual sadness and fear achieved better recognition results than natural faces. Furthermore, emotion recognition rates decreased for virtual but not natural faces in participants over the age of 40. This specific age effect suggests that media exposure has an influence on emotion recognition. Virtual and natural facial displays of emotion may be equally effective. Improved technology (e.g. better modelling of the naso-labial area) may lead to even better results as compared to trained actors. Due to the ease with which virtual human faces can be animated and manipulated, validated artificial emotional expressions will be of major relevance in future research and therapeutic applications.

  15. Recognition Profile of Emotions in Natural and Virtual Faces

    PubMed Central

    Dyck, Miriam; Winbeck, Maren; Leiberg, Susanne; Chen, Yuhan; Gur, Ruben C.; Mathiak, Klaus

    2008-01-01

    Background Computer-generated virtual faces are becoming increasingly realistic, including the simulation of emotional expressions. These faces can be used as well-controlled, realistic and dynamic stimuli in emotion research. However, the validity of virtual facial expressions in comparison to natural emotion displays still needs to be shown for the different emotions and different age groups. Methodology/Principal Findings Thirty-two healthy volunteers between the ages of 20 and 60 rated pictures of natural human faces and faces of virtual characters (avatars) with respect to the expressed emotions: happiness, sadness, anger, fear, disgust, and neutral. Results indicate that virtual emotions were recognized comparably to natural ones. Recognition differences between virtual and natural faces depended on the specific emotion: whereas disgust was difficult to convey with the current avatar technology, virtual sadness and fear achieved better recognition results than natural faces. Furthermore, emotion recognition rates decreased for virtual but not natural faces in participants over the age of 40. This specific age effect suggests that media exposure has an influence on emotion recognition. Conclusions/Significance Virtual and natural facial displays of emotion may be equally effective. Improved technology (e.g. better modelling of the naso-labial area) may lead to even better results as compared to trained actors. Due to the ease with which virtual human faces can be animated and manipulated, validated artificial emotional expressions will be of major relevance in future research and therapeutic applications. PMID:18985152

  16. Recognition-induced forgetting of faces in visual long-term memory.

    PubMed

    Rugo, Kelsi F; Tamler, Kendall N; Woodman, Geoffrey F; Maxcey, Ashleigh M

    2017-10-01

    Despite more than a century of evidence that long-term memory for pictures and words are different, much of what we know about memory comes from studies using words. Recent research examining visual long-term memory has demonstrated that recognizing an object induces the forgetting of objects from the same category. This recognition-induced forgetting has been shown with a variety of everyday objects. However, unlike everyday objects, faces are objects of expertise. As a result, faces may be immune to recognition-induced forgetting. However, despite excellent memory for such stimuli, we found that faces were susceptible to recognition-induced forgetting. Our findings have implications for how models of human memory account for recognition-induced forgetting and represent objects of expertise, and they carry consequences for eyewitness testimony and the justice system.

  17. When May a Child Who Is Visually Impaired Recognize a Face?

    ERIC Educational Resources Information Center

    Markham, R.; Wyver, S.

    1996-01-01

    The ability of 16 school-age children with visual impairments and their sighted peers to recognize faces was compared. Although no intergroup differences were found in ability to identify entire faces, the visually impaired children were at a disadvantage when part of the face, especially the eyes, was not visible. Degree of visual acuity also…

  18. The role of external features in face recognition with central vision loss: A pilot study

    PubMed Central

    Bernard, Jean-Baptiste; Chung, Susana T.L.

    2016-01-01

    Purpose We evaluated how the performance for recognizing familiar face images depends on the internal (eyebrows, eyes, nose, mouth) and external face features (chin, outline of face, hairline) in individuals with central vision loss. Methods In Experiment 1, we measured eye movements for four observers with central vision loss to determine whether they fixated more often on the internal or the external features of face images while attempting to recognize the images. We then measured the accuracy for recognizing face images that contained only the internal, only the external, or both internal and external features (Experiment 2), and for hybrid images where the internal and external features came from two different source images (Experiment 3), for five observers with central vision loss and four age-matched control observers. Results When recognizing familiar face images, approximately 40% of the fixations of observers with central vision loss were centered on the external features of faces. The recognition accuracy was higher for images containing only external features (66.8±3.3% correct) than for images containing only internal features (35.8±15.0%), a finding contradicting that of control observers. For hybrid face images, observers with central vision loss responded more accurately to the external features (50.4±17.8%) than to the internal features (9.3±4.9%), while control observers did not show the same bias toward responding to the external features. Conclusions Contrary to people with normal vision who rely more on the internal features of face images for recognizing familiar faces, individuals with central vision loss show a higher dependence on using external features of face images. PMID:26829260

  19. The Role of External Features in Face Recognition with Central Vision Loss.

    PubMed

    Bernard, Jean-Baptiste; Chung, Susana T L

    2016-05-01

    We evaluated how the performance of recognizing familiar face images depends on the internal (eyebrows, eyes, nose, mouth) and external face features (chin, outline of face, hairline) in individuals with central vision loss. In experiment 1, we measured eye movements for four observers with central vision loss to determine whether they fixated more often on the internal or the external features of face images while attempting to recognize the images. We then measured the accuracy for recognizing face images that contained only the internal, only the external, or both internal and external features (experiment 2) and for hybrid images where the internal and external features came from two different source images (experiment 3) for five observers with central vision loss and four age-matched control observers. When recognizing familiar face images, approximately 40% of the fixations of observers with central vision loss were centered on the external features of faces. The recognition accuracy was higher for images containing only external features (66.8 ± 3.3% correct) than for images containing only internal features (35.8 ± 15.0%), a finding contradicting that of control observers. For hybrid face images, observers with central vision loss responded more accurately to the external features (50.4 ± 17.8%) than to the internal features (9.3 ± 4.9%), whereas control observers did not show the same bias toward responding to the external features. Contrary to people with normal vision who rely more on the internal features of face images for recognizing familiar faces, individuals with central vision loss show a higher dependence on using external features of face images.

  20. Children's ability to recognize other children's faces.

    PubMed

    Feinman, S; Entwisle, D R

    1976-06-01

    Facial recognition ability was studied with 288 children from four grades--first, second, third, and sixth--who also varied by sex, race, and school type, the last being segregated or integrated. Children judged whether each of 40 pictures of children's faces had been present in a set of 20 pictures viewed earlier. Facial recognition ability increased significantly with each grade but leveled off between ages 8 and 11. Blacks' performance was significantly better than whites', and blacks were better at recognizing faces of whites than whites were at recognizing blacks. Children from an integrated school showed smaller differences in recognizing black or white faces than children from segregated schools, but the effect appeared only for children of the integrated school who also lived in mixed-race neighborhoods.

  1. Recognition memory of newly learned faces.

    PubMed

    Ishai, Alumit; Yago, Elena

    2006-12-11

    We used event-related fMRI to study recognition memory of newly learned faces. Caucasian subjects memorized unfamiliar, neutral and happy South Korean faces and 4 days later performed a memory retrieval task in the MR scanner. We predicted that previously seen faces would be recognized faster and more accurately and would elicit stronger neural activation than novel faces. Consistent with our hypothesis, novel faces were recognized more slowly and less accurately than previously seen faces. We found activation in a distributed cortical network that included face-responsive regions in the visual cortex, parietal and prefrontal regions, and the hippocampus. Within all regions, correctly recognized, previously seen faces evoked stronger activation than novel faces. Additionally, in parietal and prefrontal cortices, stronger activation was observed during correct than incorrect trials. Finally, in the hippocampus, false alarms to happy faces elicited stronger responses than false alarms to neutral faces. Our findings suggest that face recognition memory is mediated by stimulus-specific representations stored in extrastriate regions; parietal and prefrontal regions where old and new items are classified; and the hippocampus where veridical memory traces are recovered.

  2. Holistic processing of static and moving faces.

    PubMed

    Zhao, Mintao; Bülthoff, Isabelle

    2017-07-01

    Humans' face processing ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces, which move most of the time. However, how facial movements affect one core aspect of this ability, holistic face processing, remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based face processing by manipulating the presence of facial motion during study and at test in a composite face task. The results showed that rigidly moving faces were processed as holistically as static faces (Experiment 1). Holistic processing of moving faces persisted whether facial motion was presented during study, at test, or both (Experiment 2). Moreover, when faces were inverted to eliminate the contributions of both an upright face template and observers' expertise with upright faces, rigid facial motion facilitated holistic face processing (Experiment 3). Thus, holistic processing represents a general principle of face perception that applies to both static and dynamic faces, rather than being limited to static faces. These results support an emerging view that both perceiver-based and face-based factors contribute to holistic face processing, and they offer new insights on what underlies holistic face processing, how the sources of information supporting holistic face processing interact, and why facial motion may affect face recognition and holistic face processing differently. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. Neural Decoding Reveals Impaired Face Configural Processing in the Right Fusiform Face Area of Individuals with Developmental Prosopagnosia

    PubMed Central

    Zhang, Jiedong; Liu, Jia

    2015-01-01

    Most human daily social interactions rely on the ability to successfully recognize faces. Yet ∼2% of the human population suffers from face blindness without any acquired brain damage [also known as developmental prosopagnosia (DP) or congenital prosopagnosia]. Despite the presence of severe behavioral face recognition deficits, surprisingly, a majority of DP individuals exhibit normal face selectivity in the right fusiform face area (FFA), a key brain region involved in face configural processing. This finding, together with evidence showing impairments downstream from the right FFA in DP individuals, has led some to argue that perhaps the right FFA is largely intact in DP individuals. Using fMRI multivoxel pattern analysis, here we report the discovery of a neural impairment in the right FFA of DP individuals that may play a critical role in mediating their face-processing deficits. In seven individuals with DP, we discovered that, despite the right FFA's preference for faces and its decoding of the different face parts, it exhibited impaired face configural decoding and did not contain distinct neural response patterns for the intact and the scrambled face configurations. This abnormality was not present throughout the ventral visual cortex, as normal neural decoding was found in an adjacent object-processing region. To our knowledge, this is the first direct neural evidence of impaired face configural processing in the right FFA in individuals with DP. The discovery of this neural impairment provides a new clue to our understanding of the neural basis of DP. PMID:25632131

  4. Quality of life differences in patients with right- versus left-sided facial paralysis: Universal preference of right-sided human face recognition.

    PubMed

    Ryu, Nam Gyu; Lim, Byung Woo; Cho, Jae Keun; Kim, Jin

    2016-09-01

    We investigated, in a preliminary study using hybrid hemi-facial photos, whether experiencing right- or left-sided facial paralysis affects an individual's ability to recognize one side of the human face. Further investigation examined the relationship between facial recognition ability, stress, and quality of life. To investigate the predominance of one side of the human face in face recognition, 100 normal participants (right-handed: n = 97, left-handed: n = 3, right brain dominance: n = 56, left brain dominance: n = 44) answered a questionnaire that included hybrid hemi-facial photos developed to determine the superiority of one side for human face recognition. To determine differences in stress level and quality of life between individuals experiencing right- and left-sided facial paralysis, 100 patients (right side: 50, left side: 50, not including traumatic facial nerve paralysis) answered a questionnaire comprising the facial disability index test and a quality of life measure (SF-36, Korean version). Regardless of handedness or hemispheric dominance, the proportion of participants showing right-side predominance in human face recognition was larger than for the left side (71% versus 12%; neutral: 17%). The facial disability index of patients with right-sided facial paralysis was lower than that of left-sided patients (68.8 ± 9.42 versus 76.4 ± 8.28), and the SF-36 scores of right-sided patients were lower than those of left-sided patients (119.07 ± 15.24 versus 123.25 ± 16.48; total score: 166). Given the universal preference for the right side in human face recognition, patients with right-sided facial paralysis showed worse psychological mood and social interaction than those with left-sided paralysis. This information is helpful to clinicians in that psychological and social factors should be considered when treating patients with facial paralysis. Copyright © 2016 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  5. Familiarity Detection is an Intrinsic Property of Cortical Microcircuits with Bidirectional Synaptic Plasticity.

    PubMed

    Zhang, Xiaoyu; Ju, Han; Penney, Trevor B; VanDongen, Antonius M J

    2017-01-01

    Humans instantly recognize a previously seen face as "familiar." To deepen our understanding of familiarity-novelty detection, we simulated biologically plausible neural network models of generic cortical microcircuits consisting of spiking neurons with random recurrent synaptic connections. NMDA receptor (NMDAR)-dependent synaptic plasticity was implemented to allow for unsupervised learning and bidirectional modifications. Network spiking activity evoked by sensory inputs consisting of face images altered synaptic efficacy, which resulted in the network responding more strongly to a previously seen face than a novel face. Network size determined how many faces could be accurately recognized as familiar. When the simulated model became sufficiently complex in structure, multiple familiarity traces could be retained in the same network by forming partially-overlapping subnetworks that differ slightly from each other, thereby resulting in a high storage capacity. Fisher's discriminant analysis was applied to identify critical neurons whose spiking activity predicted familiar input patterns. Intriguingly, as sensory exposure was prolonged, the selected critical neurons tended to appear at deeper layers of the network model, suggesting recruitment of additional circuits in the network for incremental information storage. We conclude that generic cortical microcircuits with bidirectional synaptic plasticity have an intrinsic ability to detect familiar inputs. This ability does not require a specialized wiring diagram or supervision and can therefore be expected to emerge naturally in developing cortical circuits.
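The study above used Fisher's discriminant analysis to identify "critical" neurons whose spiking predicted familiar inputs. A minimal sketch of the underlying idea, using the one-dimensional Fisher criterion (between-class mean separation over summed within-class variance) on toy, hypothetical spike counts; the names and values below are illustrative, not from the paper:

```python
from statistics import mean, pvariance

def fisher_score(familiar, novel):
    """1-D Fisher criterion: (mean difference)^2 / (sum of class variances)."""
    return (mean(familiar) - mean(novel)) ** 2 / (pvariance(familiar) + pvariance(novel))

# Toy spike counts per neuron under familiar vs. novel face inputs.
responses = {
    "n1": ([12, 14, 13, 15], [5, 6, 4, 5]),  # strongly familiarity-selective
    "n2": ([8, 9, 7, 10], [7, 9, 8, 8]),     # barely selective
}
ranked = sorted(responses, key=lambda n: fisher_score(*responses[n]), reverse=True)
print(ranked)  # neurons ordered by how well they separate familiar from novel
```

Ranking units by this score is a simple stand-in for the full multivariate discriminant analysis, but it captures why a neuron with a large, consistent rate difference between familiar and novel inputs counts as "critical".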

  6. Familiarity Detection is an Intrinsic Property of Cortical Microcircuits with Bidirectional Synaptic Plasticity

    PubMed Central

    2017-01-01

    Abstract Humans instantly recognize a previously seen face as “familiar.” To deepen our understanding of familiarity-novelty detection, we simulated biologically plausible neural network models of generic cortical microcircuits consisting of spiking neurons with random recurrent synaptic connections. NMDA receptor (NMDAR)-dependent synaptic plasticity was implemented to allow for unsupervised learning and bidirectional modifications. Network spiking activity evoked by sensory inputs consisting of face images altered synaptic efficacy, which resulted in the network responding more strongly to a previously seen face than a novel face. Network size determined how many faces could be accurately recognized as familiar. When the simulated model became sufficiently complex in structure, multiple familiarity traces could be retained in the same network by forming partially-overlapping subnetworks that differ slightly from each other, thereby resulting in a high storage capacity. Fisher’s discriminant analysis was applied to identify critical neurons whose spiking activity predicted familiar input patterns. Intriguingly, as sensory exposure was prolonged, the selected critical neurons tended to appear at deeper layers of the network model, suggesting recruitment of additional circuits in the network for incremental information storage. We conclude that generic cortical microcircuits with bidirectional synaptic plasticity have an intrinsic ability to detect familiar inputs. This ability does not require a specialized wiring diagram or supervision and can therefore be expected to emerge naturally in developing cortical circuits. PMID:28534043

  7. A cross-race effect in metamemory: Predictions of face recognition are more accurate for members of our own race

    PubMed Central

    Hourihan, Kathleen L.; Benjamin, Aaron S.; Liu, Xiping

    2012-01-01

    The Cross-Race Effect (CRE) in face recognition is the well-replicated finding that people are better at recognizing faces from their own race, relative to other races. The CRE reveals systematic limitations on eyewitness identification accuracy and suggests that some caution is warranted in evaluating cross-race identification. The CRE is a problem because jurors value eyewitness identification highly in verdict decisions. In the present paper, we explore how accurate people are in predicting their ability to recognize own-race and other-race faces. Caucasian and Asian participants viewed photographs of Caucasian and Asian faces, and made immediate judgments of learning during study. An old/new recognition test replicated the CRE: both groups displayed superior discriminability of own-race faces, relative to other-race faces. Importantly, relative metamnemonic accuracy was also greater for own-race faces, indicating that the accuracy of predictions about face recognition is influenced by race. This result indicates another source of concern when eliciting or evaluating eyewitness identification: people are less accurate in judging whether they will or will not recognize a face when that face is of a different race than they are. This new result suggests that a witness’s claim of being likely to recognize a suspect from a lineup should be interpreted with caution when the suspect is of a different race than the witness. PMID:23162788
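The "superior discriminability" reported above is conventionally summarized in old/new recognition tasks by the signal-detection statistic d' = z(hit rate) - z(false-alarm rate). A minimal sketch with purely hypothetical rates chosen to illustrate a cross-race pattern:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection discriminability: z(hits) - z(false alarms)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates illustrating better discrimination of own-race faces.
own_race = d_prime(hit_rate=0.80, fa_rate=0.20)    # ≈ 1.68
other_race = d_prime(hit_rate=0.70, fa_rate=0.35)  # ≈ 0.91
assert own_race > other_race
```

Because d' separates discrimination from response bias, a CRE in d' means other-race faces are genuinely harder to distinguish, not merely that observers answer "old" more or less liberally for them.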

  8. Do congenital prosopagnosia and the other-race effect affect the same face recognition mechanisms?

    PubMed Central

    Esins, Janina; Schultz, Johannes; Wallraven, Christian; Bülthoff, Isabelle

    2014-01-01

    Congenital prosopagnosia (CP), an innate impairment in recognizing faces, as well as the other-race effect (ORE), a disadvantage in recognizing faces of foreign races, both affect face recognition abilities. Are the same face processing mechanisms affected in both situations? To investigate this question, we tested three groups of 21 participants: German congenital prosopagnosics, South Korean participants and German controls on three different tasks involving faces and objects. First we tested all participants on the Cambridge Face Memory Test in which they had to recognize Caucasian target faces in a 3-alternative-forced-choice task. German controls performed better than Koreans who performed better than prosopagnosics. In the second experiment, participants rated the similarity of Caucasian faces that differed parametrically in either features or second-order relations (configuration). Prosopagnosics were less sensitive to configuration changes than both other groups. In addition, while all groups were more sensitive to changes in features than in configuration, this difference was smaller in Koreans. In the third experiment, participants had to learn exemplars of artificial objects, natural objects, and faces and recognize them among distractors of the same category. Here prosopagnosics performed worse than participants in the other two groups only when they were tested on face stimuli. In sum, Koreans and prosopagnosic participants differed from German controls in different ways in all tests. This suggests that German congenital prosopagnosics perceive Caucasian faces differently than do Korean participants. Importantly, our results suggest that different processing impairments underlie the ORE and CP. PMID:25324757

  9. Functional dissociation of the left and right fusiform gyrus in self-face recognition.

    PubMed

    Ma, Yina; Han, Shihui

    2012-10-01

    It is well known that the fusiform gyrus is engaged in face perception, such as the processes of face familiarity and identity. However, the functional role of the fusiform gyrus in face processing related to high-level social cognition remains unclear. The current study assessed the functional role of individually defined fusiform face area (FFA) in the processing of self-face physical properties and self-face identity. We used functional magnetic resonance imaging to monitor neural responses to rapidly presented face stimuli drawn from morph continua between self-face (Morph 100%) and a gender-matched friend's face (Morph 0%) in a face recognition task. Contrasting Morph 100% versus Morph 60% that differed in self-face physical properties but were both recognized as the self uncovered neural activity sensitive to self-face physical properties in the left FFA. Contrasting Morphs 50% that were recognized as the self versus a friend on different trials revealed neural modulations associated with self-face identity in the right FFA. Moreover, the right FFA activity correlated with the frequency of recognizing Morphs 50% as the self. Our results provide evidence for functional dissociations of the left and right FFAs in the representations of self-face physical properties and self-face identity. Copyright © 2011 Wiley Periodicals, Inc.

  10. Recognizing Dynamic Faces in Malaysian Chinese Participants.

    PubMed

    Tan, Chrystalle B Y; Sheppard, Elizabeth; Stephen, Ian D

    2016-03-01

    The high performance levels found in face recognition studies do not seem to be replicable in real-life situations, possibly because of the artificial nature of laboratory studies. Recognizing faces in natural social situations may be a more challenging task, as it involves constant examination of dynamic facial motions that may alter facial structure vital to the recognition of unfamiliar faces. Because of these inconsistencies in recognition performance, the current study developed stimuli that closely represent natural social situations to yield results that more accurately reflect observers' performance in real-life settings. Naturalistic stimuli of African, East Asian, and Western Caucasian actors introducing themselves were presented to investigate Malaysian Chinese participants' recognition sensitivity and looking strategies when performing a face recognition task. When perceiving dynamic facial stimuli, participants fixated most on the nose, followed by the mouth, then the eyes. Focusing on the nose may have enabled participants to gain a more holistic view of actors' facial and head movements, which proved to be beneficial in recognizing identities. Participants recognized all three races of faces equally well. The current results, which differed from a previous static face recognition study, may be a more accurate reflection of observers' recognition abilities and looking strategies. © The Author(s) 2015.

  11. 2013 Schroth faces of the future symposium to highlight early career professionals in Mycology

    USDA-ARS?s Scientific Manuscript database

    The 2013 Schroth Faces of the Future symposium was created to recognize early career professionals (those within 10 years of graduation) who represent the future in their field via innovative research. For this year, future faces in mycology research were recognized. Drs. Jason Slot, Erica Goss, Jam...

  12. Facial Asymmetry-Based Age Group Estimation: Role in Recognizing Age-Separated Face Images.

    PubMed

    Sajid, Muhammad; Taj, Imtiaz Ahmad; Bajwa, Usama Ijaz; Ratyal, Naeem Iqbal

    2018-04-23

    Face recognition aims to establish the identity of a person based on facial characteristics. On the other hand, age group estimation is the automatic calculation of an individual's age range based on facial features. Recognizing age-separated face images is still a challenging research problem due to complex aging processes involving different types of facial tissues, skin, fat, muscles, and bones. Certain holistic and local facial features are used to recognize age-separated face images. However, most of the existing methods recognize face images without incorporating the knowledge learned from age group estimation. In this paper, we propose an age-assisted face recognition approach to handle aging variations. Inspired by the observation that facial asymmetry is an age-dependent intrinsic facial feature, we first use asymmetric facial dimensions to estimate the age group of a given face image. Deeply learned asymmetric facial features are then extracted for face recognition using a deep convolutional neural network (dCNN). Finally, we integrate the knowledge learned from the age group estimation into the face recognition algorithm using the same dCNN. This integration results in a significant improvement in overall performance compared to using the face recognition algorithm alone. The experimental results on two large facial aging datasets, the MORPH and FERET sets, show that the proposed face recognition approach based on age group estimation yields superior performance compared to some existing state-of-the-art methods. © 2018 American Academy of Forensic Sciences.

  13. Covert face recognition in congenital prosopagnosia: a group study.

    PubMed

    Rivolta, Davide; Palermo, Romina; Schmalzl, Laura; Coltheart, Max

    2012-03-01

    Even though people with congenital prosopagnosia (CP) never develop a normal ability to "overtly" recognize faces, some individuals show indices of "covert" (or implicit) face recognition. The aim of this study was to demonstrate covert face recognition in CP when participants could not overtly recognize the faces. Eleven people with CP completed three tasks assessing their overt face recognition ability, and three tasks assessing their "covert" face recognition: a Forced choice familiarity task, a Forced choice cued task, and a Priming task. Evidence of covert recognition was observed with the Forced choice familiarity task, but not the Priming task. In addition, we propose that the Forced choice cued task does not measure covert processing as such, but instead "provoked-overt" recognition. Our study clearly shows that people with CP demonstrate covert recognition for faces that they cannot overtly recognize, and that behavioural tasks vary in their sensitivity to detect covert recognition in CP. Copyright © 2011 Elsevier Srl. All rights reserved.

  14. Gently does it: Humans outperform a software classifier in recognizing subtle, nonstereotypical facial expressions.

    PubMed

    Yitzhak, Neta; Giladi, Nir; Gurevich, Tanya; Messinger, Daniel S; Prince, Emily B; Martin, Katherine; Aviezer, Hillel

    2017-12-01

    According to dominant theories of affect, humans innately and universally express a set of emotions using specific configurations of prototypical facial activity. Accordingly, thousands of studies have tested emotion recognition using sets of highly intense and stereotypical facial expressions, yet their incidence in real life is virtually unknown. In fact, a commonplace experience is that emotions are expressed in subtle and nonprototypical forms. Such facial expressions are at the focus of the current study. In Experiment 1, we present the development and validation of a novel stimulus set consisting of dynamic and subtle emotional facial displays conveyed without constraining expressers to using prototypical configurations. Although these subtle expressions were more challenging to recognize than prototypical dynamic expressions, they were still well recognized by human raters, and perhaps most importantly, they were rated as more ecological and naturalistic than the prototypical expressions. In Experiment 2, we examined the characteristics of subtle versus prototypical expressions by subjecting them to a software classifier, which used prototypical basic emotion criteria. Although the software was highly successful at classifying prototypical expressions, it performed very poorly at classifying the subtle expressions. Further validation was obtained from human expert face coders: Subtle stimuli did not contain many of the key facial movements present in prototypical expressions. Together, these findings suggest that emotions may be successfully conveyed to human viewers using subtle nonprototypical expressions. Although classic prototypical facial expressions are well recognized, they appear less naturalistic and may not capture the richness of everyday emotional communication. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  15. Encoding deficit during face processing within the right fusiform face area in schizophrenia.

    PubMed

    Walther, Sebastian; Federspiel, Andrea; Horn, Helge; Bianchi, Piero; Wiest, Roland; Wirth, Miranka; Strik, Werner; Müller, Thomas Jörg

    2009-06-30

    Face processing is crucial to social interaction, but is impaired in schizophrenia patients, who experience delays in face recognition, difficulties identifying others, and misperceptions of affective content. The right fusiform face area plays an important role in the early stages of human face processing and thus may be affected in schizophrenia. The aim of the study was therefore to investigate whether face processing deficits are related to dysfunctions of the right fusiform face area in schizophrenia patients compared with controls. In a rapid, event-related functional magnetic resonance imaging (fMRI) design, we investigated the encoding of new faces, as well as the recognition of newly learned, famous, and unfamiliar faces, in 13 schizophrenia patients and 21 healthy controls. We applied region of interest analysis to each individual's right fusiform face area and tested for group differences. Controls displayed higher blood oxygenation level dependent (BOLD) activation during the memorization of faces that were later successfully recognized. In schizophrenia patients, this effect was not observed. During the recognition task, schizophrenia patients exhibited lower BOLD responses, less accuracy, and longer reaction times to famous and unfamiliar faces. Our results support the hypothesis that impaired face processing in schizophrenia is related to early-stage deficits during the encoding and recognition of faces.

  16. The Goals of American Agriculture from Thomas Jefferson to the 21st Century. Faculty Paper Series 86-3.

    ERIC Educational Resources Information Center

    Thompson, Paul B.

    Although the practice of agriculture is a universal component of all human societies, the purposes and goals that a society hopes to achieve through agriculture have varied. If the crisis facing agriculture today is to be resolved, a clear sense of agriculture's purpose and goals within American society must be achieved. It must be recognized that…

  17. Global-Local Precedence in the Perception of Facial Age and Emotional Expression by Children with Autism and Other Developmental Disabilities

    ERIC Educational Resources Information Center

    Gross, Thomas F.

    2005-01-01

    Global information processing and perception of facial age and emotional expression was studied in children with autism, language disorders, mental retardation, and a clinical control group. Children were given a global-local task and asked to recognize age and emotion in human and canine faces. Children with autism made fewer global responses and…

  18. Do Infants Recognize the Arcimboldo Images as Faces? Behavioral and Near-Infrared Spectroscopic Study

    ERIC Educational Resources Information Center

    Kobayashi, Megumi; Otsuka, Yumiko; Nakato, Emi; Kanazawa, So; Yamaguchi, Masami K.; Kakigi, Ryusuke

    2012-01-01

    Arcimboldo images induce the perception of faces when shown upright despite the fact that only nonfacial objects such as vegetables and fruits are painted. In the current study, we examined whether infants recognize a face in the Arcimboldo images by using the preferential looking technique and near-infrared spectroscopy (NIRS). In the first…

  19. Effect of Partial Occlusion on Newborns' Face Preference and Recognition

    ERIC Educational Resources Information Center

    Gava, Lucia; Valenza, Eloisa; Turati, Chiara; de Schonen, Scania

    2008-01-01

    Many studies have shown that newborns prefer (e.g. Goren, Sarty & Wu, 1975; Valenza, Simion, Macchi Cassia & Umilta, 1996) and recognize (e.g. Bushnell, Say & Mullin, 1989; Pascalis & de Schonen, 1994) faces. However, it is not known whether, at birth, faces are still preferred and recognized when some of their parts are not visible because…

  20. Technology survey on video face tracking

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Gomes, Herman Martins

    2014-03-01

    With the pervasiveness of monitoring cameras installed in public areas, schools, hospitals, work places and homes, video analytics technologies for interpreting these video contents are becoming increasingly relevant to people's lives. Among such technologies, human face detection and tracking (and face identification in many cases) are particularly useful in various application scenarios. While plenty of research has been conducted on face tracking and many promising approaches have been proposed, there are still significant challenges in recognizing and tracking people in videos with uncontrolled capturing conditions, largely due to pose and illumination variations, as well as occlusions and cluttered backgrounds. It is especially complex to track and identify multiple people simultaneously in real time due to the large amount of computation involved. In this paper, we present a survey of the literature and software published or developed in recent years on face tracking. The survey covers the following topics: 1) mainstream and state-of-the-art face tracking methods, including features used to model the targets and metrics used for tracking; 2) face identification and face clustering from face sequences; and 3) software packages or demonstrations that are available for algorithm development or trial. A number of publicly available databases for face tracking are also introduced.

  1. Study of recognizing multiple persons' complicated hand gestures from the video sequence acquired by a moving camera

    NASA Astrophysics Data System (ADS)

    Dan, Luo; Ohya, Jun

    2010-02-01

    Recognizing hand gestures from a video sequence acquired by a moving camera could provide a useful interface between humans and mobile robots. We developed a state-based approach to extracting and recognizing hand gestures from moving-camera images. We improved the Human-Following Local Coordinate (HFLC) system, a simple and stable method for extracting hand-motion trajectories, which is obtained from the located human face, body part, and hand-blob changing factor. A Condensation algorithm and a PCA-based algorithm were applied to recognize the extracted hand trajectories. In our previous research, the Condensation-based method was applied only to one person's hand gestures. In this paper, we propose a principal component analysis (PCA) based approach to improve recognition accuracy. For further improvement, temporal changes in the observed hand-area changing factor are used as new image features, which are analyzed by PCA and stored in the database. Every hand-gesture trajectory in the database is classified as a one-hand gesture, a two-hand gesture, or a temporal change in the hand blob. We demonstrate the effectiveness of the proposed method through experiments on 45 kinds of Japanese and American Sign Language gestures obtained from 5 people. Our experimental results show that the PCA-based approach performs better than the Condensation-based method.
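
    The PCA-plus-nearest-neighbour matching step described above can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: HFLC trajectory extraction and the Condensation tracker are omitted, and `fit_pca`, `classify`, and the toy feature vectors are hypothetical stand-ins for the paper's trajectory features.

```python
import numpy as np

def fit_pca(X, n_components):
    """Fit PCA on rows of X (each row: one flattened trajectory feature vector)."""
    mean = X.mean(axis=0)
    # SVD of the centered data gives the principal axes in Vt's rows
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project(x, mean, axes):
    """Coordinates of one feature vector in the PCA subspace."""
    return (x - mean) @ axes.T

def classify(x, mean, axes, db_coords, db_labels):
    """Nearest-neighbour gesture label in the PCA subspace."""
    z = project(x, mean, axes)
    d = np.linalg.norm(db_coords - z, axis=1)
    return db_labels[int(np.argmin(d))]
```

    A usage pattern would be to project every stored trajectory into the subspace once (`db_coords`) and match each new trajectory against that database.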

  2. Does cortisol modulate emotion recognition and empathy?

    PubMed

    Duesenberg, Moritz; Weber, Juliane; Schulze, Lars; Schaeuffele, Carmen; Roepke, Stefan; Hellmann-Regen, Julian; Otte, Christian; Wingenfeld, Katja

    2016-04-01

    Emotion recognition and empathy are important aspects of interacting with and understanding other people's behaviors and feelings. The human environment comprises stressful situations that affect social interactions on a daily basis. The aim of the study was to examine the effects of the stress hormone cortisol on emotion recognition and empathy. In this placebo-controlled study, 40 healthy men and 40 healthy women (mean age 24.5 years) received either 10 mg of hydrocortisone or placebo. We used the Multifaceted Empathy Test to measure emotional and cognitive empathy. Furthermore, we examined emotion recognition from facial expressions, which contained two emotions (anger and sadness) at two intensities (40% and 80%). We did not find a main effect of treatment or sex on either empathy or emotion recognition, but we did find a sex × emotion interaction on emotion recognition. The main result was a four-way interaction on emotion recognition involving treatment, sex, emotion, and task difficulty. At 40% intensity, women recognized angry faces better than men in the placebo condition. Furthermore, in the placebo condition, men recognized sadness better than anger. At 80% intensity, men and women performed equally well in recognizing sad faces, but men performed worse than women on angry faces. Our results thus do not support the hypothesis that increases in cortisol concentration alone influence empathy and emotion recognition in healthy young individuals. However, sex and task difficulty appear to be important variables in emotion recognition from facial expressions.

  3. Zoonoses-With Friends Like This, Who Needs Enemies?

    PubMed Central

    Baum, Stephen G.

    2008-01-01

    Zoonoses are infections that are spread from animals to humans. Most often, humans are “dead-end” hosts, meaning that there is no subsequent human-to-human transmission. If one considers most of the emerging infections that were recognized at the end of the last century and the beginning of this century, they would fall into the category of zoonoses. One of the most important common traits exhibited by infections that have been or can be eliminated from the face of the earth (e.g. smallpox, measles, polio) is the absence of any host other than humans. Therefore, zoonoses represent infections that can never be eliminated and must be considered as permanent and recurrent factors to be dealt with in protecting human health. PMID:18596867

  4. Zoonoses-with friends like this, who needs enemies?

    PubMed

    Baum, Stephen G

    2008-01-01

    Zoonoses are infections that are spread from animals to humans. Most often, humans are "dead-end" hosts, meaning that there is no subsequent human-to-human transmission. If one considers most of the emerging infections that were recognized at the end of the last century and the beginning of this century, they would fall into the category of zoonoses. One of the most important common traits exhibited by infections that have been or can be eliminated from the face of the earth (e.g. smallpox, measles, polio) is the absence of any host other than humans. Therefore, zoonoses represent infections that can never be eliminated and must be considered as permanent and recurrent factors to be dealt with in protecting human health.

  5. Fast and Famous: Looking for the Fastest Speed at Which a Face Can be Recognized

    PubMed Central

    Barragan-Jason, Gladys; Besson, Gabriel; Ceccaldi, Mathieu; Barbeau, Emmanuel J.

    2012-01-01

    Face recognition is supposed to be fast. However, the actual speed at which faces can be recognized remains unknown. To address this issue, we report two experiments run with speed constraints. In both experiments, famous faces had to be recognized among unknown ones using a large set of stimuli to prevent pre-activation of features which would speed up recognition. In the first experiment (31 participants), recognition of famous faces was investigated using a rapid go/no-go task. In the second experiment, 101 participants performed a highly time constrained recognition task using the Speed and Accuracy Boosting procedure. Results indicate that the fastest speed at which a face can be recognized is around 360–390 ms. Such latencies are about 100 ms longer than the latencies recorded in similar tasks in which subjects have to detect faces among other stimuli. We discuss which model of activation of the visual ventral stream could account for such latencies. These latencies are not consistent with a purely feed-forward pass of activity throughout the visual ventral stream. An alternative is that face recognition relies on the core network underlying face processing identified in fMRI studies (OFA, FFA, and pSTS) and reentrant loops to refine face representation. However, the model of activation favored is that of an activation of the whole visual ventral stream up to anterior areas, such as the perirhinal cortex, combined with parallel and feed-back processes. Further studies are needed to assess which of these three models of activation can best account for face recognition. PMID:23460051

  6. Implicit face prototype learning from geometric information.

    PubMed

    Or, Charles C-F; Wilson, Hugh R

    2013-04-19

    There is evidence that humans implicitly learn an average or prototype of previously studied faces, as the unseen face prototype is falsely recognized as having been learned (Solso & McCarthy, 1981). Here we investigated the extent and nature of face prototype formation where observers' memory was tested after they studied synthetic faces defined purely in geometric terms in a multidimensional face space. We found a strong prototype effect: The basic results showed that the unseen prototype averaged from the studied faces was falsely identified as learned at a rate of 86.3%, whereas individual studied faces were identified correctly 66.3% of the time and the distractors were incorrectly identified as having been learned only 32.4% of the time. This prototype learning lasted at least 1 week. Face prototype learning occurred even when the studied faces were further from the unseen prototype than the median variation in the population. Prototype memory formation was evident in addition to memory formation of studied face exemplars as demonstrated in our models. Additional studies showed that the prototype effect can be generalized across viewpoints, and head shape and internal features separately contribute to prototype formation. Thus, implicit face prototype extraction in a multidimensional space is a very general aspect of geometric face learning.

  7. Mere social categorization modulates identification of facial expressions of emotion.

    PubMed

    Young, Steven G; Hugenberg, Kurt

    2010-12-01

    The ability of the human face to communicate emotional states via facial expressions is well known, and past research has established the importance and universality of emotional facial expressions. However, recent evidence has revealed that facial expressions of emotion are most accurately recognized when the perceiver and expresser are from the same cultural ingroup. The current research builds on this literature and extends this work. Specifically, we find that mere social categorization, using a minimal-group paradigm, can create an ingroup emotion-identification advantage even when the culture of the target and perceiver is held constant. Follow-up experiments show that this effect is supported by differential motivation to process ingroup versus outgroup faces and that this motivational disparity leads to more configural processing of ingroup faces than of outgroup faces. Overall, the results point to distinct processing modes for ingroup and outgroup faces, resulting in differential identification accuracy for facial expressions of emotion.

  8. Face and emotion expression processing and the serotonin transporter polymorphism 5-HTTLPR/rs25531.

    PubMed

    Hildebrandt, A; Kiy, A; Reuter, M; Sommer, W; Wilhelm, O

    2016-06-01

    Face cognition, including face identity and facial expression processing, is a crucial component of socio-emotional abilities, characterizing humans as the most highly developed social beings. However, for these trait domains molecular genetic studies investigating gene-behavior associations based on well-founded phenotype definitions are still rare. We examined the relationship between 5-HTTLPR/rs25531 polymorphisms - related to serotonin reuptake - and the ability to perceive and recognize faces and emotional expressions in human faces. To this end, we conducted structural equation modeling on data from 230 young adults, obtained by using a comprehensive, multivariate task battery with maximal effort tasks. By additionally modeling fluid intelligence and immediate and delayed memory factors, we aimed to address the discriminant relationships of the 5-HTTLPR/rs25531 polymorphisms with socio-emotional abilities. We found a robust association between the 5-HTTLPR/rs25531 polymorphism and facial emotion perception. Carriers of two long (L) alleles outperformed carriers of one or two short (S) alleles. Weaker associations were present for face identity perception and memory for emotional facial expressions. There was no association between the 5-HTTLPR/rs25531 polymorphism and non-social abilities, demonstrating the discriminant validity of the relationships. We discuss the implications and possible neural mechanisms underlying these novel findings.

  9. Diabetes, Insulin Resistance, and Metabolic Syndrome in Horses

    PubMed Central

    Johnson, Philip J.; Wiedmeyer, Charles E.; LaCarrubba, Alison; Ganjam, V. K. (Seshu); Messer, Nat T.

    2012-01-01

    Analogous to the situation in human medicine, contemporary practices in horse management, which incorporate lengthy periods of physical inactivity coupled with provision of nutritional rations characterized by inappropriately high sugar and starch, have led to obesity being more commonly recognized by practitioners of equine veterinary practice. In many of these cases, obesity is associated with insulin resistance (IR) and glucose intolerance. An equine metabolic syndrome (MS) has been described that is similar to the human MS in that both IR and aspects of obesity represent cornerstones of its definition. Unlike its human counterpart, identification of the equine metabolic syndrome (EMS) portends greater risk for development of laminitis, a chronic, crippling affliction of the equine hoof. When severe, laminitis sometimes necessitates euthanasia. Unlike the human condition, the risk of developing type 2 diabetes mellitus and many other chronic conditions, for which the risk is recognized as increased in the face of MS, is less likely in horses. The equine veterinary literature has been replete with reports of scientific investigations regarding the epidemiology, pathophysiology, and treatment of EMS. PMID:22768883

  10. Diabetes, insulin resistance, and metabolic syndrome in horses.

    PubMed

    Johnson, Philip J; Wiedmeyer, Charles E; LaCarrubba, Alison; Ganjam, V K; Messer, Nat T

    2012-05-01

    Analogous to the situation in human medicine, contemporary practices in horse management, which incorporate lengthy periods of physical inactivity coupled with provision of nutritional rations characterized by inappropriately high sugar and starch, have led to obesity being more commonly recognized by practitioners of equine veterinary practice. In many of these cases, obesity is associated with insulin resistance (IR) and glucose intolerance. An equine metabolic syndrome (MS) has been described that is similar to the human MS in that both IR and aspects of obesity represent cornerstones of its definition. Unlike its human counterpart, identification of the equine metabolic syndrome (EMS) portends greater risk for development of laminitis, a chronic, crippling affliction of the equine hoof. When severe, laminitis sometimes necessitates euthanasia. Unlike the human condition, the risk of developing type 2 diabetes mellitus and many other chronic conditions, for which the risk is recognized as increased in the face of MS, is less likely in horses. The equine veterinary literature has been replete with reports of scientific investigations regarding the epidemiology, pathophysiology, and treatment of EMS.

  11. The man who mistook his neuropsychologist for a popstar: when configural processing fails in acquired prosopagnosia

    PubMed Central

    Jansari, Ashok; Miller, Scott; Pearce, Laura; Cobb, Stephanie; Sagiv, Noam; Williams, Adrian L.; Tree, Jeremy J.; Hanley, J. Richard

    2015-01-01

    We report the case of an individual with acquired prosopagnosia who experiences extreme difficulties in recognizing familiar faces in everyday life despite excellent object recognition skills. Formal testing indicates that he is also severely impaired at remembering pre-experimentally unfamiliar faces and that he takes an extremely long time to identify famous faces and to match unfamiliar faces. Nevertheless, he performs as accurately and quickly as controls at identifying inverted familiar and unfamiliar faces and can recognize famous faces from their external features. He also performs as accurately as controls at recognizing famous faces when fracturing conceals the configural information in the face. He shows evidence of impaired global processing but normal local processing of Navon figures. This case appears to reflect the clearest example yet of an acquired prosopagnosic patient whose familiar face recognition deficit is caused by a severe configural processing deficit in the absence of any problems in featural processing. These preserved featural skills together with apparently intact visual imagery for faces allow him to identify a surprisingly large number of famous faces when unlimited time is available. The theoretical implications of this pattern of performance for understanding the nature of acquired prosopagnosia are discussed. PMID:26236212

  12. Inverting faces elicits sensitivity to race on the N170 component: a cross-cultural study.

    PubMed

    Vizioli, Luca; Foreman, Kay; Rousselet, Guillaume A; Caldara, Roberto

    2010-01-29

    Human beings are natural experts at processing faces, with some notable exceptions. Same-race faces are better recognized than other-race faces: the so-called other-race effect (ORE). Inverting faces impairs recognition more than for any other inverted visual object: the so-called face inversion effect (FIE). Interestingly, the FIE is stronger for same- compared to other-race faces. At the electrophysiological level, inverted faces elicit consistently delayed and often larger N170 compared to upright faces. However, whether the N170 component is sensitive to race is still a matter of ongoing debate. Here we investigated the N170 sensitivity to race in the framework of the FIE. We recorded EEG from Western Caucasian and East Asian observers while presented with Western Caucasian, East Asian and African American faces in upright and inverted orientations. To control for potential confounds in the EEG signal that might be evoked by the intrinsic and salient differences in the low-level properties of faces from different races, we normalized their amplitude-spectra, luminance and contrast. No differences on the N170 were observed for upright faces. Critically, inverted same-race faces lead to greater recognition impairment and elicited larger N170 amplitudes compared to inverted other-race faces. Our results indicate a finer-grained neural tuning for same-race faces at early stages of processing in both groups of observers.
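
    Stimulus normalization of the kind described — equating luminance, contrast, and the Fourier amplitude spectra across face images while keeping each image's phase structure — can be sketched as follows. This is a hedged illustration of the general technique, not the authors' exact pipeline; `normalize_images` is a hypothetical helper.

```python
import numpy as np

def normalize_images(imgs):
    """Equate luminance (mean), contrast (std), and the Fourier amplitude
    spectrum across grayscale images, preserving each image's phase."""
    # luminance/contrast normalization: zero mean, unit standard deviation
    imgs = [(im - im.mean()) / (im.std() + 1e-12) for im in imgs]
    specs = [np.fft.fft2(im) for im in imgs]
    # shared amplitude spectrum: mean magnitude across the image set
    mean_amp = np.mean([np.abs(s) for s in specs], axis=0)
    out = []
    for s in specs:
        phase = np.angle(s)
        # recombine the shared amplitude with each image's own phase
        out.append(np.real(np.fft.ifft2(mean_amp * np.exp(1j * phase))))
    return out
```

    After this step, low-level amplitude differences between face images from different races can no longer drive differences in the evoked EEG signal.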

  13. Recognizing the Face of Johnny, Suzy, and Me: Insensitivity to the Spacing Among Features at 4 Years of Age

    ERIC Educational Resources Information Center

    Mondloch, Catherine J.; Leis, Anishka; Maurer, Daphne

    2006-01-01

    Four-year-olds were tested for their ability to use differences in the spacing among features to recognize familiar faces. They were given a storybook depicting multiple views of 2 children. They returned to the laboratory 2 weeks later and used a "magic wand" to play a computer game that tested their ability to recognize the familiarized faces…

  14. Arguments Against a Configural Processing Account of Familiar Face Recognition.

    PubMed

    Burton, A Mike; Schweinberger, Stefan R; Jenkins, Rob; Kaufmann, Jürgen M

    2015-07-01

    Face recognition is a remarkable human ability, which underlies a great deal of people's social behavior. Individuals can recognize family members, friends, and acquaintances over a very large range of conditions, and yet the processes by which they do this remain poorly understood, despite decades of research. Although a detailed understanding remains elusive, face recognition is widely thought to rely on configural processing, specifically an analysis of spatial relations between facial features (so-called second-order configurations). In this article, we challenge this traditional view, raising four problems: (1) configural theories are underspecified; (2) large configural changes leave recognition unharmed; (3) recognition is harmed by nonconfigural changes; and (4) in separate analyses of face shape and face texture, identification tends to be dominated by texture. We review evidence from a variety of sources and suggest that failure to acknowledge the impact of familiarity on facial representations may have led to an overgeneralization of the configural account. We argue instead that second-order configural information is remarkably unimportant for familiar face recognition.

  15. Neural Trade-Offs between Recognizing and Categorizing Own- and Other-Race Faces

    PubMed Central

    Liu, Jiangang; Wang, Zhe; Feng, Lu; Li, Jun; Tian, Jie; Lee, Kang

    2015-01-01

    Behavioral research has suggested a trade-off relationship between individual recognition and race categorization of own- and other-race faces, which is an important behavioral marker of face processing expertise. However, little is known about the neural mechanisms underlying this trade-off. Using functional magnetic resonance imaging (fMRI) methodology, we concurrently asked participants to recognize and categorize own- and other-race faces to examine the neural correlates of this trade-off relationship. We found that for other-race faces, the fusiform face area (FFA) and occipital face area (OFA) responded more to recognition than categorization, whereas for own-race faces, the responses were equal for the 2 tasks. The right superior temporal sulcus (STS) responses were the opposite to those of the FFA and OFA. Further, recognition enhanced the functional connectivity from the right FFA to the right STS, whereas categorization enhanced the functional connectivity from the right OFA to the right STS. The modulatory effects of these 2 couplings were negatively correlated. Our findings suggested that within the core face processing network, although recognizing and categorizing own- and other-race faces activated the same neural substrates, there existed neural trade-offs whereby their activations and functional connectivities were modulated by face race type and task demand due to one's differential processing expertise with own- and other-race faces. PMID:24591523

  16. A system for tracking and recognizing pedestrian faces using a network of loosely coupled cameras

    NASA Astrophysics Data System (ADS)

    Gagnon, L.; Laliberté, F.; Foucher, S.; Branzan Albu, A.; Laurendeau, D.

    2006-05-01

    A face recognition module has been developed for an intelligent multi-camera video surveillance system. The module can recognize a pedestrian face in terms of six basic emotions and the neutral state. Face and facial features detection (eyes, nasal root, nose and mouth) are first performed using cascades of boosted classifiers. These features are used to normalize the pose and dimension of the face image. Gabor filters are then sampled on a regular grid covering the face image to build a facial feature vector that feeds a nearest neighbor classifier with a cosine distance similarity measure for facial expression interpretation and face model construction. A graphical user interface allows the user to adjust the module parameters.
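
    The feature-extraction and matching stages described — Gabor responses sampled on a regular grid over the normalized face image, fed to a nearest-neighbour classifier with a cosine distance — can be sketched as follows. This is a minimal illustration under stated assumptions, not the module's actual code: face and facial-feature detection, pose normalization, and the boosted cascades are omitted, and the kernel parameters are placeholders.

```python
import numpy as np

def gabor_kernel(size=9, theta=0.0, freq=0.25, sigma=2.0):
    """Real Gabor kernel: a cosine carrier under a Gaussian envelope."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def gabor_grid_features(img, kernels, step=8):
    """Sample Gabor responses on a regular grid over a normalized face image."""
    half = kernels[0].shape[0] // 2
    feats = []
    for gy in range(half, img.shape[0] - half, step):
        for gx in range(half, img.shape[1] - half, step):
            patch = img[gy - half:gy + half + 1, gx - half:gx + half + 1]
            for k in kernels:
                feats.append(float((patch * k).sum()))
    return np.asarray(feats)

def cosine_nn(feat, gallery):
    """Nearest neighbour under cosine distance; gallery maps label -> feature."""
    def cosd(a, b):
        return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return min(gallery, key=lambda lbl: cosd(feat, gallery[lbl]))
```

    In use, each enrolled face model is one feature vector in `gallery`, and the same pipeline scores incoming pedestrian faces for expression or identity matching.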

  17. Developmental prosopagnosia and super-recognition: no special role for surface reflectance processing.

    PubMed

    Russell, Richard; Chatterjee, Garga; Nakayama, Ken

    2012-01-01

    Face recognition by normal subjects depends in roughly equal proportions on shape and surface reflectance cues, while object recognition depends predominantly on shape cues. It is possible that developmental prosopagnosics are deficient not in their ability to recognize faces per se, but rather in their ability to use reflectance cues. Similarly, super-recognizers' exceptional ability with face recognition may be a result of superior surface reflectance perception and memory. We tested this possibility by administering tests of face perception and face recognition in which only shape or reflectance cues are available to developmental prosopagnosics, super-recognizers, and control subjects. Face recognition ability and the relative use of shape and pigmentation were unrelated in all the tests. Subjects who were better at using shape or reflectance cues were also better at using the other type of cue. These results do not support the proposal that variation in surface reflectance perception ability is the underlying cause of variation in face recognition ability. Instead, these findings support the idea that face recognition ability is related to neural circuits using representations that integrate shape and pigmentation information.

  18. Integrating conventional and inverse representation for face recognition.

    PubMed

    Xu, Yong; Li, Xuelong; Yang, Jian; Lai, Zhihui; Zhang, David

    2014-10-01

    Representation-based classification methods are all constructed on the basis of the conventional representation, which first expresses the test sample as a linear combination of the training samples and then exploits the deviation between the test sample and the expression result of every class to perform classification. However, this deviation does not always well reflect the difference between the test sample and each class. In this paper, we propose a novel representation-based classification method for face recognition. This method integrates conventional and inverse representation-based classification to better recognize the face. It first produces the conventional representation of the test sample, i.e., uses a linear combination of the training samples to represent the test sample. Then it obtains the inverse representation, i.e., provides an approximate representation of each training sample of a subject by exploiting the test sample and the training samples of the other subjects. Finally, the proposed method exploits the conventional and inverse representations to generate two kinds of scores of the test sample with respect to each class and combines them to recognize the face. The paper shows the theoretical foundation and rationale of the proposed method. Moreover, this paper for the first time shows that a basic property of the human face, i.e., its symmetry, can be exploited to generate new training and test samples. As these new samples reflect possible appearances of the face, using them enables higher accuracy. The experiments show that the proposed conventional and inverse representation-based linear regression classification (CIRLRC), an improvement to linear regression classification (LRC), can obtain very high accuracy and greatly outperforms naive LRC and other state-of-the-art conventional representation-based face recognition methods; the accuracy of CIRLRC can be 10% greater than that of LRC.
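
    The conventional-representation baseline that CIRLRC builds on, plain linear regression classification (LRC), can be sketched as follows: represent the test sample as a linear combination of each class's training samples and pick the class with the smallest residual. This is a minimal illustration of LRC only; the inverse-representation scoring and score fusion that define CIRLRC, and the symmetry-based sample generation, are not shown.

```python
import numpy as np

def lrc_classify(y, class_samples):
    """Linear Regression Classification: least-squares fit of the test vector y
    on each class's training samples (columns of X); return the class whose
    reconstruction leaves the smallest residual."""
    best, best_err = None, np.inf
    for label, X in class_samples.items():   # X has shape (dim, n_samples)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        err = np.linalg.norm(y - X @ beta)   # deviation of y from class subspace
        if err < best_err:
            best, best_err = label, err
    return best
```

    A test face lying (nearly) in the span of one class's training faces yields a near-zero residual for that class, which is exactly the deviation criterion the abstract describes.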

  19. Super-recognizers: people with extraordinary face recognition ability.

    PubMed

    Russell, Richard; Duchaine, Brad; Nakayama, Ken

    2009-04-01

    We tested 4 people who claimed to have significantly better than ordinary face recognition ability. Exceptional ability was confirmed in each case. On two very different tests of face recognition, all 4 experimental subjects performed beyond the range of control subject performance. They also scored significantly better than average on a perceptual discrimination test with faces. This effect was larger with upright than with inverted faces, and the 4 subjects showed a larger "inversion effect" than did control subjects, who in turn showed a larger inversion effect than did developmental prosopagnosics. This result indicates an association between face recognition ability and the magnitude of the inversion effect. Overall, these "super-recognizers" are about as good at face recognition and perception as developmental prosopagnosics are bad. Our findings demonstrate the existence of people with exceptionally good face recognition ability and show that the range of face recognition and face perception ability is wider than has been previously acknowledged.

  20. Are 6-month-old human infants able to transfer emotional information (happy or angry) from voices to faces? An eye-tracking study.

    PubMed

    Palama, Amaya; Malsert, Jennifer; Gentaz, Edouard

    2018-01-01

    The present study examined whether 6-month-old infants could transfer amodal information (i.e., information independent of sensory modality) from emotional voices to emotional faces. Sequences of successive emotional stimuli moving from one sensory modality (auditory) to another (visual), corresponding to a cross-modal transfer, were displayed to 24 infants. Each sequence presented a single emotional (angry or happy) or neutral voice, followed by the simultaneous presentation of two static emotional faces (angry or happy, congruent or incongruent with the emotional voice). Eye movements in response to the visual stimuli were recorded with an eye-tracker. Results suggested no difference in infants' looking times to the happy or angry face after listening to the neutral or the angry voice. After listening to the happy voice, however, infants looked longer at the incongruent angry face (the mouth area in particular) than at the congruent happy face. These results reveal that a cross-modal transfer (from the auditory to the visual modality) is possible for 6-month-old infants only after the presentation of a happy voice, suggesting that they recognize this emotion amodally.

  1. Prosopography, prosoporecognography and the Prosoporecognographical Chart.

    PubMed

    Santos-Filho, E F; Pereira, H B B

    2017-11-01

    Recognizing and identifying an individual based on his or her face is a technical and scientific challenge and the objective of our investigation. This article's goal is to establish a method, a foundation and an instrument for carrying out the process of recognizing and identifying an individual. The article describes both the construction of the term and the conceptualization and epistemology of the process of describing and representing the face through a particular method of recognizing and identifying individuals. The proposed Prosoporecognographical Chart is an important step in the facial-identification process, establishing taxonomic parameters for the phenotypic manifestations of the elements constituting the face. Based on the proposal presented here, a protocol for the process of recognizing and identifying an individual can be implemented computationally. Copyright © 2017. Published by Elsevier Ltd.

  2. Non-egalitarian allocations among preschool peers in a face-to-face bargaining task.

    PubMed

    Melis, Alicia P; Floedl, Anja; Tomasello, Michael

    2015-01-01

    In face-to-face bargaining tasks, human adults almost always agree on an equal split of resources. This is due to mutually recognized fairness and equality norms. Early developmental studies on sharing and equality norms found that egalitarian allocations of resources are not common before children are 5 or 6 years old. However, recent studies have shown that in some face-to-face collaborative situations, or when recipients express their desires, children at much younger ages choose equal allocations. We investigated the ability of 3.5- and 5-year-olds to negotiate face-to-face whether to collaborate to obtain an equal or an unequal distribution of rewards. We hypothesized that the face-to-face interaction and interdependency between partners would facilitate egalitarian outcomes at both ages. In the first experiment we found that 5-year-olds were more egalitarian than 3.5-year-olds, but neither age group shared equally. In the second experiment, in which we increased the magnitude of the inequality, we found that children at both ages mostly agreed on the unequal distribution. These results show that communication and face-to-face interaction are not sufficient to guarantee equal allocations at 3-5 years of age. These results add to previous findings suggesting that in the context of non-collaboratively produced resources it is only after 5 years of age that children use equality norms to allocate resources.

  3. MorphoHawk: Geometric-based Software for Manufacturing and More

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keith Arterburn

    2001-04-01

    Hollywood movies portray facial recognition as a perfected technology, but the reality is that sophisticated computers and algorithmic calculations are far from perfect. In fact, the most sophisticated and successful computer for recognizing faces and other imagery is still the human brain, with more than 10 billion nerve cells. Beginning at birth, humans process data and connect optical and sensory experiences, creating an unparalleled accumulation of data that lets people associate images with life experiences, emotions and knowledge. Computers are powerful, rapid and tireless, but still cannot compare to the highly sophisticated relational calculations and associations that the human computer can produce in connecting 'what we see with what we know.'

  4. Infant perceptual development for faces and spoken words: An integrated approach

    PubMed Central

    Watson, Tamara L; Robbins, Rachel A; Best, Catherine T

    2014-01-01

    There are obvious differences between recognizing faces and recognizing spoken words or phonemes that might suggest development of each capability requires different skills. Recognizing faces and perceiving spoken language, however, are in key senses extremely similar endeavors. Both perceptual processes are based on richly variable, yet highly structured input from which the perceiver needs to extract categorically meaningful information. This similarity could be reflected in the perceptual narrowing that occurs within the first year of life in both domains. We take the position that the perceptual and neurocognitive processes by which face and speech recognition develop are based on a set of common principles. One common principle is the importance of systematic variability in the input as a source of information rather than noise. Experience of this variability leads to perceptual tuning to the critical properties that define individual faces or spoken words versus their membership in larger groupings of people and their language communities. We argue that parallels can be drawn directly between the principles responsible for the development of face and spoken language perception. PMID:25132626

  5. Infants' understanding of false labeling events: the referential roles of words and the speakers who use them.

    PubMed

    Koenig, Melissa A; Echols, Catharine H

    2003-04-01

    The four studies reported here examine whether 16-month-old infants' responses to true and false utterances interact with their knowledge of human agents. In Study 1, infants heard repeated instances either of true or false labeling of common objects; labels came from an active human speaker seated next to the infant. In Study 2, infants experienced the same stimuli and procedure; however, we replaced the human speaker of Study 1 with an audio speaker in the same location. In Study 3, labels came from a hidden audio speaker. In Study 4, a human speaker labeled the objects while facing away from them. In Study 1, infants looked significantly longer to the human agent when she falsely labeled than when she truthfully labeled the objects. Infants did not show a similar pattern of attention for the audio speaker of Study 2, the silent human of Study 3 or the facing-backward speaker of Study 4. In fact, infants who experienced truthful labeling looked significantly longer to the facing-backward labeler of Study 4 than to true labelers of the other three contexts. Additionally, infants were more likely to correct false labels when produced by the human labeler of Study 1 than in any of the other contexts. These findings suggest, first, that infants are developing a critical conception of other human speakers as truthful communicators, and second, that infants understand that human speakers may provide uniquely useful information when a word fails to match its referent. These findings are consistent with the view that infants can recognize differences in knowledge and that such differences can be based on differences in the availability of perceptual experience.

  6. Who is the Usual Suspect? Evidence of a Selection Bias Toward Faces That Make Direct Eye Contact in a Lineup Task

    PubMed Central

    van Golde, Celine; Verstraten, Frans A. J.

    2017-01-01

    The speed and ease with which we recognize the faces of our friends and family members belies the difficulty we have recognizing less familiar individuals. Nonetheless, overconfidence in our ability to recognize faces has carried over into various aspects of our legal system; for instance, eyewitness identification serves a critical role in criminal proceedings. For this reason, understanding the perceptual and psychological processes that underlie false identification is of the utmost importance. Gaze direction is a salient social signal, and direct eye contact in particular is thought to capture attention. Here, we tested the hypothesis that differences in gaze direction may influence difficult decisions in a lineup context. In a series of experiments, we show that when a group of faces differed in their gaze direction, the faces that were making eye contact with the participants were more likely to be misidentified. Interestingly, this bias disappeared when the faces were presented with their eyes closed. These findings open a critical conversation between social neuroscience and forensic psychology, and imply that direct eye contact may (wrongly) increase the perceived familiarity of a face. PMID:28203355

  7. Is fear in your head? A comparison of instructed and real-life expressions of emotion in the face and body.

    PubMed

    Abramson, Lior; Marom, Inbal; Petranker, Rotem; Aviezer, Hillel

    2017-04-01

    The majority of emotion perception studies utilize instructed and stereotypical expressions of faces or bodies. While such stimuli are highly standardized and well-recognized, their resemblance to real-life expressions of emotion remains unknown. Here we examined facial and body expressions of fear and anger during real-life situations and compared their recognition to that of instructed expressions of the same emotions. In order to examine the source of the affective signal, expressions of emotion were presented as faces alone, bodies alone, and naturally, as faces with bodies. The results demonstrated striking deviations between recognition of instructed and real-life stimuli, which differed as a function of the emotion expressed. In real-life fearful expressions of emotion, bodies were far better recognized than faces, a pattern not found with instructed expressions of emotion. Anger reactions were better recognized from the body than from the face in both real-life and instructed stimuli. However, the real-life stimuli were overall better recognized than their instructed counterparts. These results indicate that differences between instructed and real-life expressions of emotion are prevalent and raise caution against an overreliance of researchers on instructed affective stimuli. The findings also demonstrate that in real life, facial expression perception may rely heavily on information from the contextualizing body. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. The development of emotion perception in face and voice during infancy.

    PubMed

    Grossmann, Tobias

    2010-01-01

    Interacting with others by reading their emotional expressions is an essential social skill in humans. How this ability develops during infancy, and what brain processes underpin infants' perception of emotion in different modalities, are the questions dealt with in this paper, which takes the form of a literature review. The first part provides a systematic review of behavioral findings on infants' developing emotion-reading abilities. The second part presents a set of new electrophysiological studies that provide insights into the brain processes underlying infants' developing abilities. Throughout, evidence from unimodal (face or voice) and multimodal (face and voice) processing of emotion is considered. The implications of the reviewed findings for our understanding of developmental models of emotion processing are discussed. The reviewed infant data suggest that (a) early in development, emotion enhances the sensory processing of faces and voices, (b) infants' ability to allocate increased attentional resources to negative emotional information develops earlier in the vocal domain than in the facial domain, and (c) at least by the age of 7 months, infants reliably match and recognize emotional information across face and voice.

  9. Equipping African American Clergy to Recognize Depression.

    PubMed

    Anthony, Jean Spann; Morris, Edith; Collins, Charles W; Watson, Albert; Williams, Jennifer E; Ferguson, Bʼnai; Ruhlman, Deborah L

    2016-01-01

    Many African Americans (AAs) use clergy as their primary source of help for depression, with few being referred to mental health providers. This study used face-to-face workshops to train AA clergy to recognize the symptoms and levels of severity of depression. A pretest/posttest format was used to test knowledge (N = 42) of depression symptoms. Results showed that participation improved the clergy's ability to recognize depression symptoms. Faith community nurses can develop workshops for clergy to improve recognition and treatment of depression.

  10. From Birdsong to Human Speech Recognition: Bayesian Inference on a Hierarchy of Nonlinear Dynamical Systems

    PubMed Central

    Yildiz, Izzet B.; von Kriegstein, Katharina; Kiebel, Stefan J.

    2013-01-01

    Our knowledge about the computational mechanisms underlying human learning and recognition of sound sequences, especially speech, is still very limited. One difficulty in deciphering the exact means by which humans recognize speech is that there are scarce experimental findings at a neuronal, microscopic level. Here, we show that our neuronal-computational understanding of speech learning and recognition may be vastly improved by looking at an animal model, i.e., the songbird, which faces the same challenge as humans: to learn and decode complex auditory input, in an online fashion. Motivated by striking similarities between the human and songbird neural recognition systems at the macroscopic level, we assumed that the human brain uses the same computational principles at a microscopic level and translated a birdsong model into a novel human sound learning and recognition model with an emphasis on speech. We show that the resulting Bayesian model with a hierarchy of nonlinear dynamical systems can learn speech samples such as words rapidly and recognize them robustly, even in adverse conditions. In addition, we show that recognition can be performed even when words are spoken by different speakers and with different accents—an everyday situation in which current state-of-the-art speech recognition models often fail. The model can also be used to qualitatively explain behavioral data on human speech learning and derive predictions for future experiments. PMID:24068902

  11. From birdsong to human speech recognition: bayesian inference on a hierarchy of nonlinear dynamical systems.

    PubMed

    Yildiz, Izzet B; von Kriegstein, Katharina; Kiebel, Stefan J

    2013-01-01

    Our knowledge about the computational mechanisms underlying human learning and recognition of sound sequences, especially speech, is still very limited. One difficulty in deciphering the exact means by which humans recognize speech is that there are scarce experimental findings at a neuronal, microscopic level. Here, we show that our neuronal-computational understanding of speech learning and recognition may be vastly improved by looking at an animal model, i.e., the songbird, which faces the same challenge as humans: to learn and decode complex auditory input, in an online fashion. Motivated by striking similarities between the human and songbird neural recognition systems at the macroscopic level, we assumed that the human brain uses the same computational principles at a microscopic level and translated a birdsong model into a novel human sound learning and recognition model with an emphasis on speech. We show that the resulting Bayesian model with a hierarchy of nonlinear dynamical systems can learn speech samples such as words rapidly and recognize them robustly, even in adverse conditions. In addition, we show that recognition can be performed even when words are spoken by different speakers and with different accents, an everyday situation in which current state-of-the-art speech recognition models often fail. The model can also be used to qualitatively explain behavioral data on human speech learning and derive predictions for future experiments.

  12. Geometric distortions affect face recognition in chimpanzees (Pan troglodytes) and monkeys (Macaca mulatta).

    PubMed

    Taubert, Jessica; Parr, Lisa A

    2011-01-01

    All primates can recognize faces and do so by analyzing the subtle variation that exists between faces. Through a series of three experiments, we attempted to clarify the nature of second-order information processing in nonhuman primates. Experiment one showed that both chimpanzees (Pan troglodytes) and rhesus monkeys (Macaca mulatta) tolerate geometric distortions along the vertical axis, suggesting that information about absolute position of features does not contribute to accurate face recognition. Chimpanzees differed from monkeys, however, in that they were more sensitive to distortions along the horizontal axis, suggesting that when building a global representation of facial identity, horizontal relations between features are more diagnostic of identity than vertical relations. Two further experiments were performed to determine whether the monkeys were simply less sensitive to horizontal relations compared to chimpanzees or were instead relying on local features. The results of these experiments confirm that monkeys can utilize a holistic strategy when discriminating between faces regardless of familiarity. In contrast, our data show that chimpanzees, like humans, use a combination of holistic and local features when the faces are unfamiliar, but primarily holistic information when the faces become familiar. We argue that our comparative approach to the study of face recognition reveals the impact that individual experience and social organization has on visual cognition.

  13. Oxytocin eliminates the own-race bias in face recognition memory

    PubMed Central

    Blandón-Gitlin, Iris; Pezdek, Kathy; Saldivar, Sesar; Steelman, Erin

    2015-01-01

    The neuropeptide Oxytocin influences a number of social behaviors, including the processing of faces. We examined whether Oxytocin facilitates the processing of out-group faces and reduces the own-race bias (ORB). The ORB is a robust phenomenon characterized by poor recognition memory for other-race faces compared with same-race faces. In Experiment 1, participants received intranasal solutions of Oxytocin or placebo prior to viewing White and Black faces. On a subsequent recognition test, whereas in the placebo condition same-race faces were better recognized than other-race faces, in the Oxytocin condition Black and White faces were equally well recognized, effectively eliminating the ORB. In Experiment 2, Oxytocin was administered after the study phase. The ORB was observed, but Oxytocin did not significantly reduce the effect. This study is the first to show that Oxytocin can enhance face memory of out-group members, and it underscores the importance of social encoding mechanisms underlying the own-race bias. PMID:23872107

  14. Memory for faces: the effect of facial appearance and the context in which the face is encountered.

    PubMed

    Mattarozzi, Katia; Todorov, Alexander; Codispoti, Maurizio

    2015-03-01

    We investigated the effects of appearance of emotionally neutral faces and the context in which the faces are encountered on incidental face memory. To approximate real-life situations as closely as possible, faces were embedded in a newspaper article, with a headline that specified an action performed by the person pictured. We found that facial appearance affected memory so that faces perceived as trustworthy or untrustworthy were remembered better than neutral ones. Furthermore, the memory of untrustworthy faces was slightly better than that of trustworthy faces. The emotional context of encoding affected the details of face memory. Faces encountered in a neutral context were more likely to be recognized as only familiar. In contrast, emotionally relevant contexts of encoding, whether pleasant or unpleasant, increased the likelihood of remembering semantic and even episodic details associated with faces. These findings suggest that facial appearance (i.e., perceived trustworthiness) affects face memory. Moreover, the findings support prior evidence that the engagement of emotion processing during memory encoding increases the likelihood that events are not only recognized but also remembered.

  15. System for face recognition under expression variations of neutral-sampled individuals using recognized expression warping and a virtual expression-face database

    NASA Astrophysics Data System (ADS)

    Petpairote, Chayanut; Madarasmi, Suthep; Chamnongthai, Kosin

    2018-01-01

    The practical identification of individuals using facial recognition techniques requires matching faces with specific expressions to faces in a neutral-face database. We propose a method for facial recognition under varied expressions against neutral face samples of individuals, via recognized-expression warping and the use of a virtual expression-face database. In this method, facial expressions are recognized and the input expression faces are classified into facial expression groups. To aid facial recognition, the virtual expression-face database is sorted by average facial-expression shape and by coarse- and fine-featured facial textures. Wrinkle information is also employed in classification, using a masking process to adjust input faces to match the expression-face database. We evaluate the performance of the proposed method using the CMU Multi-PIE, Cohn-Kanade, and AR expression-face databases, and find that it provides significantly improved face recognition accuracy compared to conventional methods and is acceptable for facial recognition under expression variation.

  16. Facial Expression Recognition using Multiclass Ensemble Least-Square Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Lawi, Armin; Sya'Rani Machrizzandi, M.

    2018-03-01

    Facial expression is one of the behavioral characteristics of human beings. The use of a biometric technology system with facial expression characteristics makes it possible to recognize a person's mood or emotion. The basic components of a facial expression analysis system are face detection, face image extraction, facial classification and facial expression recognition. This paper uses the Principal Component Analysis (PCA) algorithm to extract facial features with expression parameters, i.e., happy, sad, neutral, angry, fear, and disgust. Then a Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM) is used for the classification of facial expressions. The MELS-SVM model, applied to 185 different expression images of 10 persons, showed a high accuracy level of 99.998% using the RBF kernel.
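The PCA feature-extraction stage of such a pipeline can be sketched as follows. This is a hedged illustration: the eigenface-style PCA is standard, but the classifier is a simple nearest-centroid stand-in rather than the paper's multiclass ensemble LS-SVM, and the function names and the toy 4x4 "faces" are ours.

```python
import numpy as np

def pca_features(images, k):
    """Learn the top-k principal components ("eigenfaces") of a set of
    flattened face images and return the training projections."""
    X = np.array([im.ravel() for im in images], dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt = axes
    comps = Vt[:k]
    return Xc @ comps.T, mean, comps

def project(image, mean, comps):
    """Project a new image into the learned PCA space."""
    return (np.ravel(image) - mean) @ comps.T

def nearest_class(feat, feats, labels):
    """Stand-in classifier: nearest class centroid in PCA space (the
    paper trains a multiclass ensemble LS-SVM on these features)."""
    best, best_d = None, float("inf")
    for c in set(labels):
        idx = [i for i, l in enumerate(labels) if l == c]
        d = np.linalg.norm(feat - feats[idx].mean(axis=0))
        if d < best_d:
            best, best_d = c, d
    return best

# Tiny demo: 4x4 synthetic "faces" for two expressions.
happy = np.zeros((4, 4)); happy[:2] = 1.0        # energy in the top half
sad = np.zeros((4, 4)); sad[2:] = 1.0            # energy in the bottom half
happy2 = happy.copy(); happy2[0, 0] = 0.9
sad2 = sad.copy(); sad2[3, 3] = 0.9
feats, mean, comps = pca_features([happy, happy2, sad, sad2], k=2)
labels = ["happy", "happy", "sad", "sad"]
test_img = happy.copy(); test_img[0, 1] = 0.95
pred = nearest_class(project(test_img, mean, comps), feats, labels)
```

With real data, each `im` would be a grayscale face crop and `k` in the tens; the projections then feed whichever multiclass classifier is chosen.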

  17. The Ability of Visually Impaired Children to Read Expressions and Recognize Faces.

    ERIC Educational Resources Information Center

    Ellis, H. D.; And Others

    1987-01-01

    Seventeen visually impaired children, aged 7-11 years, were compared with sighted children on a test of facial recognition and a test of expression identification. The visually impaired children were less able to recognize faces successfully but showed no disadvantage in discerning facial expressions such as happiness, anger, surprise, or fear.…

  18. Do Dynamic Facial Expressions Convey Emotions to Children Better than Do Static Ones?

    ERIC Educational Resources Information Center

    Widen, Sherri C.; Russell, James A.

    2015-01-01

    Past research has shown that children recognize emotions from facial expressions poorly and improve only gradually with age, but the stimuli in such studies have been static faces. Because dynamic faces include more information, it may well be that children more readily recognize emotions from dynamic facial expressions. The current study of…

  19. Hypervigilance for fear after basolateral amygdala damage in humans

    PubMed Central

    Terburg, D; Morgan, B E; Montoya, E R; Hooge, I T; Thornton, H B; Hariri, A R; Panksepp, J; Stein, D J; van Honk, J

    2012-01-01

    Recent rodent research has shown that the basolateral amygdala (BLA) inhibits unconditioned, or innate, fear. It is, however, unknown whether the BLA acts in similar ways in humans. In a group of five subjects with a rare genetic syndrome, that is, Urbach–Wiethe disease (UWD), we used a combination of structural and functional neuroimaging, and established focal, bilateral BLA damage, while other amygdala sub-regions are functionally intact. We tested the translational hypothesis that these BLA-damaged UWD-subjects are hypervigilant to facial expressions of fear, which are prototypical innate threat cues in humans. Our data indeed repeatedly confirm fear hypervigilance in these UWD subjects. They show hypervigilant responses to unconsciously presented fearful faces in a modified Stroop task. They attend longer to the eyes of dynamically displayed fearful faces in an eye-tracked emotion recognition task, and in that task recognize facial fear significantly better than control subjects. These findings provide the first direct evidence in humans in support of an inhibitory function of the BLA on the brain's threat vigilance system, which has important implications for the understanding of the amygdala's role in the disorders of fear and anxiety. PMID:22832959

  20. Conjunction Faces Alter Confidence-Accuracy Relations for Old Faces

    ERIC Educational Resources Information Center

    Reinitz, Mark Tippens; Loftus, Geoffrey R.

    2017-01-01

    The authors used a state-trace methodology to investigate the informational dimensions used to recognize old and conjunction faces (made by combining parts of separately studied faces). Participants in 3 experiments saw faces presented for 1 s each. They then received a recognition test; faces were presented for varying brief durations and…

  1. Gender-Based Prototype Formation in Face Recognition

    ERIC Educational Resources Information Center

    Baudouin, Jean-Yves; Brochard, Renaud

    2011-01-01

    The role of gender categories in prototype formation during face recognition was investigated in 2 experiments. The participants were asked to learn individual faces and then to recognize them. During recognition, individual faces were mixed with blended faces of the same or different genders. The results of the 2 experiments showed…

  2. Down Syndrome and Automatic Processing of Familiar and Unfamiliar Emotional Faces

    ERIC Educational Resources Information Center

    Morales, Guadalupe E.; Lopez, Ernesto O.

    2010-01-01

    Participants with Down syndrome (DS) were required to participate in a face recognition experiment to recognize familiar (DS faces) and unfamiliar emotional faces (non DS faces), by using an affective priming paradigm. Pairs of emotional facial stimuli were presented (one face after another) with a short Stimulus Onset Asynchrony of 300…

  3. Non-Egalitarian Allocations among Preschool Peers in a Face-to-Face Bargaining Task

    PubMed Central

    Melis, Alicia P.; Floedl, Anja; Tomasello, Michael

    2015-01-01

    In face-to-face bargaining tasks, human adults almost always agree on an equal split of resources. This is due to mutually recognized fairness and equality norms. Early developmental studies on sharing and equality norms found that egalitarian allocations of resources are not common before children are 5 or 6 years old. However, recent studies have shown that in some face-to-face collaborative situations, or when recipients express their desires, children at much younger ages choose equal allocations. We investigated the ability of 3.5- and 5-year-olds to negotiate face-to-face whether to collaborate to obtain an equal or an unequal distribution of rewards. We hypothesized that the face-to-face interaction and interdependency between partners would facilitate egalitarian outcomes at both ages. In the first experiment we found that 5-year-olds were more egalitarian than 3.5-year-olds, but neither age group shared equally. In the second experiment, in which we increased the magnitude of the inequality, we found that children at both ages mostly agreed on the unequal distribution. These results show that communication and face-to-face interaction are not sufficient to guarantee equal allocations at 3–5 years of age. These results add to previous findings suggesting that in the context of non-collaboratively produced resources it is only after 5 years of age that children use equality norms to allocate resources. PMID:25786250

  4. The challenge of localizing the anterior temporal face area: a possible solution.

    PubMed

    Axelrod, Vadim; Yovel, Galit

    2013-11-01

    Humans recognize faces exceptionally well. However, the neural correlates of face recognition are still elusive. Evidence accumulated in recent years suggests that the anterior temporal lobe (ATL), in particular the face-selective region in the ATL, is a probable locus of face recognition. Unfortunately, functional MRI (fMRI) studies encounter severe signal drop-out in the ventral ATL, where the ATL face area resides. Consequently, all previous studies localized this region in no more than half of their subjects, and its volume was relatively small. Thus, systematic exploration of the properties of the ATL face area is scarce. In the current high-resolution fMRI study we used a coronal slice orientation, which permitted us to localize the ATL face area in all subjects. Furthermore, the volume of the area was much larger than reported in previous studies. A direct within-subjects comparison with data collected using the commonly used axial slice orientation confirmed the advantage of the coronal slice orientation in revealing a reliable and larger face-selective area in the ATL. Finally, by displaying the face-selective activations resulting from coronal and axial scanning together, we demonstrate an organizational principle of a chain of face-selective regions along the posterior-anterior axis of the ventral temporal lobe that is highly reproducible across subjects. By using the procedure proposed here, significant progress can be made in studying the neural correlates of face recognition. Copyright © 2013 Elsevier Inc. All rights reserved.

  5. How do schizophrenia patients use visual information to decode facial emotion?

    PubMed

    Lee, Junghee; Gosselin, Frédéric; Wynn, Jonathan K; Green, Michael F

    2011-09-01

    Impairment in recognizing facial emotions is a prominent feature of schizophrenia, but the underlying mechanism of this impairment remains unclear. This study investigated the specific aspects of visual information that are critical for schizophrenia patients to recognize emotional expression. Using the Bubbles technique, we probed the use of visual information during a facial emotion discrimination task (fear vs. happy) in 21 schizophrenia patients and 17 healthy controls. Visual information was sampled through randomly located Gaussian apertures (or "bubbles") at 5 spatial frequency scales. Online calibration of the amount of face exposed through bubbles was used to ensure 75% overall accuracy for each subject. Least-squares multiple linear regression analyses between sampled information and accuracy were performed to identify the critical visual information used to identify emotional expression. To accurately identify emotional expression, schizophrenia patients required exposure of more facial area (i.e., more bubbles) compared with healthy controls. To identify fearful faces, schizophrenia patients relied less on bilateral eye regions at high spatial frequency compared with healthy controls. For identification of happy faces, schizophrenia patients relied on the mouth and eye regions; healthy controls did not utilize the eyes and used the mouth much less than patients did. Schizophrenia patients thus needed more facial information to recognize the emotional expression of faces. In addition, patients differed from controls in their use of high-spatial-frequency information from eye regions to identify fearful faces. This study provides direct evidence that schizophrenia patients employ an atypical strategy of using visual information to recognize emotional faces.
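
    The bubble-sampling step described above can be sketched in a few lines of Python. This is a minimal single-scale illustration (the study sampled at five spatial-frequency scales), and all function names, parameter values, and the mid-gray background choice are our own for illustration, not the authors' code:

```python
import numpy as np

def bubble_mask(shape, n_bubbles, sigma, seed=None):
    """Build a revealing mask as a sum of randomly placed Gaussian apertures
    ("bubbles"): values near 1 expose the underlying image, values near 0 hide it."""
    rng = np.random.default_rng(seed)
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    # Accumulate one Gaussian aperture per randomly chosen centre.
    for cy, cx in zip(rng.integers(0, h, n_bubbles), rng.integers(0, w, n_bubbles)):
        mask += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

def sample_stimulus(face, n_bubbles, sigma, seed=None):
    """Reveal the face (float image in [0, 1]) only through the bubbles,
    blending toward mid-gray elsewhere."""
    mask = bubble_mask(face.shape, n_bubbles, sigma, seed)
    return mask * face + (1.0 - mask) * 0.5
```

    In the actual task, `n_bubbles` would be recalibrated online per subject to hold overall accuracy near 75%, and regressing trial-by-trial masks against accuracy identifies the diagnostic facial regions.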

  6. The influence of nationality on the accuracy of face and voice recognition.

    PubMed

    Doty, N D

    1998-01-01

    Sixty English and U.S. citizens were tested to determine the effect of nationality on accuracy in recognizing previously witnessed faces and voices. Subjects viewed a frontal facial photograph and were then asked to select that face from a set of 10 oblique facial photographs. Subjects listened to a recorded voice and were then asked to select the same voice from a set of 10 voice recordings. This process was repeated 7 more times, such that subjects identified a male and a female face and voice from England, France, Belize, and the United States. Subjects demonstrated better accuracy recognizing faces and voices of their own nationality. Subgroup analyses further supported the other-nationality effect as well as the previously documented other-race effect.

  7. Face-selective neurons maintain consistent visual responses across months

    PubMed Central

    McMahon, David B. T.; Jones, Adam P.; Bondar, Igor V.; Leopold, David A.

    2014-01-01

    Face perception in both humans and monkeys is thought to depend on neurons clustered in discrete, specialized brain regions. Because primates are frequently called upon to recognize and remember new individuals, the neuronal representation of faces in the brain might be expected to change over time. The functional properties of neurons in behaving animals are typically assessed over time periods ranging from minutes to hours, which amounts to a snapshot compared to a lifespan of a neuron. It therefore remains unclear how neuronal properties observed on a given day predict that same neuron's activity months or years later. Here we show that the macaque inferotemporal cortex contains face-selective cells that show virtually no change in their patterns of visual responses over time periods as long as one year. Using chronically implanted microwire electrodes guided by functional MRI targeting, we obtained distinct profiles of selectivity for face and nonface stimuli that served as fingerprints for individual neurons in the anterior fundus (AF) face patch within the superior temporal sulcus. Longitudinal tracking over a series of daily recording sessions revealed that face-selective neurons maintain consistent visual response profiles across months-long time spans despite the influence of ongoing daily experience. We propose that neurons in the AF face patch are specialized for aspects of face perception that demand stability as opposed to plasticity. PMID:24799679

  9. A new face of sleep: The impact of post-learning sleep on recognition memory for face-name associations

    PubMed Central

    Maurer, Leonie; Zitting, Kirsi-Marja; Elliott, Kieran; Czeisler, Charles A.; Ronda, Joseph M.; Duffy, Jeanne F.

    2015-01-01

    Sleep has been demonstrated to improve consolidation of many types of new memories. However, few prior studies have examined how sleep impacts learning of face-name associations. The recognition of a new face along with the associated name is an important human cognitive skill. Here we investigated whether post-presentation sleep impacts recognition memory of new face-name associations in healthy adults. Fourteen participants were tested twice. Each time, they were presented 20 photos of faces with a corresponding name. Twelve hours later, they were shown each face twice, once with the correct and once with an incorrect name, and asked if each face-name combination was correct and to rate their confidence. In one condition the 12-hour interval between presentation and recall included an 8-hour nighttime sleep opportunity (“Sleep”), while in the other condition they remained awake (“Wake”). There were more correct and highly confident correct responses when the interval between presentation and recall included a sleep opportunity, although improvement between the “Wake” and “Sleep” conditions was not related to duration of sleep or any sleep stage. These data suggest that a nighttime sleep opportunity improves the ability to correctly recognize face-name associations. Further studies investigating the mechanism of this improvement are important, as this finding has implications for individuals with sleep disturbances and/or memory impairments. PMID:26549626

  11. Gender-based prototype formation in face recognition.

    PubMed

    Baudouin, Jean-Yves; Brochard, Renaud

    2011-07-01

    The role of gender categories in prototype formation during face recognition was investigated in 2 experiments. Participants were asked to learn individual faces and then to recognize them. During recognition, the individual faces were mixed with blended faces, created by averaging faces of the same or different genders. The results of the 2 experiments showed that blended faces made from learned individual faces were recognized, even though they had never been seen before. In Experiment 1, this effect was stronger when the blended faces belonged to the same gender category (same-sex blends), but it also emerged across gender categories (cross-sex blends). Experiment 2 further showed that the prototype effect for same-sex blends was not affected by presentation order: the effect was equally strong whether the faces were presented one after the other during learning or alternated with faces of the opposite gender. By contrast, the prototype effect across gender categories was highly sensitive to the temporal proximity of the faces entering a blend and almost disappeared when other faces were intermixed. These results indicate that distinct neural populations code for female and male faces, although the formation of a facial representation can also be mediated by both neural populations. The implications for face-space properties and face-encoding processes are discussed.

  12. Activation of the right fronto-temporal cortex during maternal facial recognition in young infants.

    PubMed

    Carlsson, Jakob; Lagercrantz, Hugo; Olson, Linus; Printz, Gordana; Bartocci, Marco

    2008-09-01

    Within the first days of life, infants can already recognize their mother. This ability is based on several sensory mechanisms and increases during the first year of life, with its most crucial phase between 6 and 9 months, when cortical circuits develop. The underlying cortical structures involved in this process are still unknown. Herein we report how the prefrontal cortices of healthy 6- to 9-month-old infants react to the sight of their mother's face compared with that of an unknown female face. Concentrations of oxygenated haemoglobin [HbO2] and deoxygenated haemoglobin [HHb] were measured using near-infrared spectroscopy (NIRS) in both fronto-temporal and occipital areas on the right side during exposure to maternal and unfamiliar faces. The infants exhibited a distinct and significantly higher activation-related haemodynamic response in the right fronto-temporal cortex following exposure to the image of their mother's face, [HbO2] (0.75 micromol/L, p < 0.001), as compared with that of an unknown face (0.25 micromol/L, p < 0.001). Event-related haemodynamic changes, suggesting cortical activation in response to the sight of human faces, were detected in 6- to 9-month-old children. The right fronto-temporal cortex appears to be involved in face recognition processes at this age.

  13. Karen and George: Face Recognition by Visually Impaired Children.

    ERIC Educational Resources Information Center

    Ellis, Hadyn D.; And Others

    1988-01-01

    Two visually impaired children, aged 8 and 10, appeared to have severe difficulty in recognizing faces. After assessment, it became apparent that only one had unusually poor facial recognition skills. After training, which included matching face photographs, schematic faces, and digitized faces, there was no evidence of any improvement.…

  14. Emotion processing in chimeric faces: hemispheric asymmetries in expression and recognition of emotions.

    PubMed

    Indersmitten, Tim; Gur, Ruben C

    2003-05-01

    Since the discovery of facial asymmetries in emotional expressions of humans and other primates, hypotheses have related the greater left-hemiface intensity to right-hemispheric dominance in emotion processing. However, the difficulty of creating true frontal views of facial expressions in two-dimensional photographs has confounded efforts to better understand the phenomenon. We have recently described a method for obtaining three-dimensional photographs of posed and evoked emotional expressions and used these stimuli to investigate both intensity of expression and accuracy of recognizing emotion in chimeric faces constructed from only left- or right-side composites. The participant population included 38 (19 male, 19 female) African-American, Caucasian, and Asian adults. They were presented with chimeric composites generated from faces of eight actors and eight actresses showing four emotions: happiness, sadness, anger, and fear, each in posed and evoked conditions. We replicated the finding that emotions are expressed more intensely in the left hemiface for all emotions and conditions, with the exception of evoked anger, which was expressed more intensely in the right hemiface. In contrast, the results indicated that emotional expressions are recognized more efficiently in the right hemiface, indicating that the right hemiface expresses emotions more accurately. The double dissociation between the laterality of expression intensity and that of recognition efficiency supports the notion that the two kinds of processes may have distinct neural substrates. Evoked anger is uniquely expressed more intensely and accurately on the side of the face that projects to the viewer's right hemisphere, dominant in emotion recognition.

  15. False memory for face in short-term memory and neural activity in human amygdala.

    PubMed

    Iidaka, Tetsuya; Harada, Tokiko; Sadato, Norihiro

    2014-12-03

    Human memory is often inaccurate. Similar to words and figures, new faces are often recognized as seen or studied items in long- and short-term memory tests; however, the neural mechanisms underlying this false memory remain elusive. In a previous fMRI study using morphed faces and a standard false memory paradigm, we found that there was a U-shaped response curve of the amygdala to old, new, and lure items. This indicates that the amygdala is more active in response to items that are salient (hit and correct rejection) compared to items that are less salient (false alarm), in terms of memory retrieval. In the present fMRI study, we determined whether the false memory for faces occurs within the short-term memory range (a few seconds), and assessed which neural correlates are involved in veridical and illusory memories. Nineteen healthy participants were scanned by 3T MRI during a short-term memory task using morphed faces. The behavioral results indicated that the occurrence of false memories was within the short-term range. We found that the amygdala displayed a U-shaped response curve to memory items, similar to those observed in our previous study. These results suggest that the amygdala plays a common role in both long- and short-term false memory for faces. We made the following conclusions: First, the amygdala is involved in detecting the saliency of items, in addition to fear, and supports goal-oriented behavior by modulating memory. Second, amygdala activity and response time might be related with a subject's response criterion for similar faces. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Emotion-independent face recognition

    NASA Astrophysics Data System (ADS)

    De Silva, Liyanage C.; Esther, Kho G. P.

    2000-12-01

    Current face recognition techniques tend to work well when recognizing faces under small variations in lighting, facial expression and pose, but deteriorate under more extreme conditions. In this paper, a face recognition system to recognize faces of known individuals, despite variations in facial expression due to different emotions, is developed. The eigenface approach is used for feature extraction. Classification methods include Euclidean distance, back propagation neural network and generalized regression neural network. These methods yield 100% recognition accuracy when the training database is representative, containing one image representing the peak expression for each emotion of each person apart from the neutral expression. The feature vectors used for comparison in the Euclidean distance method and for training the neural network must be all the feature vectors of the training set. These results are obtained for a face database consisting of only four persons.
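
    The eigenface pipeline the paper builds on, PCA feature extraction followed by nearest-neighbour Euclidean matching, can be sketched as follows. This is a generic sketch under our own assumptions (function names, component count, and synthetic data are illustrative), not the authors' implementation, and it omits the neural-network classifiers:

```python
import numpy as np

def train_eigenfaces(images, n_components):
    """images: (n_samples, n_pixels) flattened training faces.
    Returns the mean face and the top principal components (eigenfaces)."""
    mean = images.mean(axis=0)
    centered = images - mean
    # Rows of vt are the principal axes of the centered training set.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(image, mean, eigenfaces):
    """Feature vector: coordinates of the face in eigenface space."""
    return eigenfaces @ (image - mean)

def classify(probe, gallery_features, labels, mean, eigenfaces):
    """Assign the label of the nearest gallery face by Euclidean distance."""
    f = project(probe, mean, eigenfaces)
    dists = np.linalg.norm(gallery_features - f, axis=1)
    return labels[int(np.argmin(dists))]
```

    As the abstract notes, the gallery must contain a feature vector for every training image (one per peak expression per person) for the Euclidean-distance method to remain accurate across emotions.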

  17. Robust representation and recognition of facial emotions using extreme sparse learning.

    PubMed

    Shojaeilangari, Seyedehsamaneh; Yau, Wei-Yun; Nandakumar, Karthik; Li, Jun; Teoh, Eam Khwang

    2015-07-01

    Recognition of natural emotions from human faces is an interesting topic with a wide range of potential applications, such as human-computer interaction, automated tutoring systems, image and video retrieval, smart environments, and driver warning systems. Traditionally, facial emotion recognition systems have been evaluated on laboratory controlled data, which is not representative of the environment faced in real-world applications. To robustly recognize the facial emotions in real-world natural situations, this paper proposes an approach called extreme sparse learning, which has the ability to jointly learn a dictionary (set of basis) and a nonlinear classification model. The proposed approach combines the discriminative power of extreme learning machine with the reconstruction property of sparse representation to enable accurate classification when presented with noisy signals and imperfect data recorded in natural settings. In addition, this paper presents a new local spatio-temporal descriptor that is distinctive and pose-invariant. The proposed framework is able to achieve the state-of-the-art recognition accuracy on both acted and spontaneous facial emotion databases.

  18. Training with Own-Race Faces Can Improve Processing of Other-Race Faces: Evidence from Developmental Prosopagnosia

    ERIC Educational Resources Information Center

    DeGutis, Joseph; DeNicola, Cristopher; Zink, Tyler; McGlinchey, Regina; Milberg, William

    2011-01-01

    Faces of one's own race are discriminated and recognized more accurately than faces of an other race (other-race effect--ORE). Studies have employed several methods to enhance individuation and recognition of other-race faces and reduce the ORE, including intensive perceptual training with other-race faces and explicitly instructing participants…

  19. Oxytocin eliminates the own-race bias in face recognition memory.

    PubMed

    Blandón-Gitlin, Iris; Pezdek, Kathy; Saldivar, Sesar; Steelman, Erin

    2014-09-11

    The neuropeptide Oxytocin influences a number of social behaviors, including the processing of faces. We examined whether Oxytocin facilitates the processing of out-group faces and reduces the own-race bias (ORB). The ORB is a robust phenomenon characterized by poorer recognition memory for other-race faces than for same-race faces. In Experiment 1, participants received intranasal solutions of Oxytocin or placebo prior to viewing White and Black faces. On a subsequent recognition test, whereas in the placebo condition same-race faces were better recognized than other-race faces, in the Oxytocin condition Black and White faces were equally well recognized, effectively eliminating the ORB. In Experiment 2, Oxytocin was administered after the study phase. The ORB emerged, and Oxytocin did not significantly reduce the effect. This study is the first to show that Oxytocin can enhance face memory of out-group members, and it underscores the importance of social encoding mechanisms underlying the own-race bias. This article is part of a Special Issue entitled Oxytocin and Social Behav. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. Dissociable roles of internal feelings and face recognition ability in facial expression decoding.

    PubMed

    Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia

    2016-05-15

    The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Children with Autism Spectrum Disorder scan own-race faces differently from other-race faces.

    PubMed

    Yi, Li; Quinn, Paul C; Fan, Yuebo; Huang, Dan; Feng, Cong; Joseph, Lisa; Li, Jiao; Lee, Kang

    2016-01-01

    It has been well documented that people recognize and scan other-race faces differently from faces of their own race. The current study examined whether this cross-racial difference in face processing found in the typical population also exists in individuals with Autism Spectrum Disorder (ASD). Participants included 5- to 10-year-old children with ASD (n=29), typically developing (TD) children matched on chronological age (n=29), and TD children matched on nonverbal IQ (n=29). Children completed a face recognition task in which they were asked to memorize and recognize both own- and other-race faces while their eye movements were tracked. We found no recognition advantage for own-race faces relative to other-race faces in any of the three groups. However, eye-tracking results indicated that, similar to TD children, children with ASD exhibited a cross-racial face-scanning pattern: they looked at the eyes of other-race faces longer than at those of own-race faces, whereas they looked at the mouth of own-race faces longer than at that of other-race faces. The findings suggest that although children with ASD have difficulty with processing some aspects of faces, their ability to process face race information is relatively spared. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. The utility of multiple synthesized views in the recognition of unfamiliar faces.

    PubMed

    Jones, Scott P; Dwyer, Dominic M; Lewis, Michael B

    2017-05-01

    The ability to recognize an unfamiliar individual on the basis of prior exposure to a photograph is notoriously poor and prone to errors, but recognition accuracy is improved when multiple photographs are available. In applied situations, when only limited real images are available (e.g., from a mugshot or CCTV image), the generation of new images might provide a technological prosthesis for otherwise fallible human recognition. We report two experiments examining the effects of providing computer-generated additional views of a target face. In Experiment 1, provision of computer-generated views supported better target face recognition than exposure to the target image alone and equivalent performance to that for exposure of multiple photograph views. Experiment 2 replicated the advantage of providing generated views, but also indicated an advantage for multiple viewings of the single target photograph. These results strengthen the claim that identifying a target face can be improved by providing multiple synthesized views based on a single target image. In addition, our results suggest that the degree of advantage provided by synthesized views may be affected by the quality of synthesized material.

  3. Exposure to the self-face facilitates identification of dynamic facial expressions: influences on individual differences.

    PubMed

    Li, Yuan Hang; Tottenham, Nim

    2013-04-01

    A growing literature suggests that the self-face is involved in processing the facial expressions of others. The authors experimentally activated self-face representations to assess its effects on the recognition of dynamically emerging facial expressions of others. They exposed participants to videos of either their own faces (self-face prime) or faces of others (nonself-face prime) prior to a facial expression judgment task. Their results show that experimentally activating self-face representations results in earlier recognition of dynamically emerging facial expression. As a group, participants in the self-face prime condition recognized expressions earlier (when less affective perceptual information was available) compared to participants in the nonself-face prime condition. There were individual differences in performance, such that poorer expression identification was associated with higher autism traits (in this neurocognitively healthy sample). However, when randomized into the self-face prime condition, participants with high autism traits performed as well as those with low autism traits. Taken together, these data suggest that the ability to recognize facial expressions in others is linked with the internal representations of our own faces. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  4. The Pandora Effect: The Power and Peril of Curiosity.

    PubMed

    Hsee, Christopher K; Ruan, Bowen

    2016-05-01

    Curiosity-the desire for information-underlies many human activities, from reading celebrity gossip to developing nuclear science. Curiosity is well recognized as a human blessing. Is it also a human curse? Tales about such things as Pandora's box suggest that it is, but scientific evidence is lacking. In four controlled experiments, we demonstrated that curiosity could lead humans to expose themselves to aversive stimuli (even electric shocks) for no apparent benefits. The research suggests that humans possess an inherent desire, independent of consequentialist considerations, to resolve uncertainty; when facing something uncertain and feeling curious, they will act to resolve the uncertainty even if they expect negative consequences. This research reveals the potential perverse side of curiosity, and is particularly relevant to the current epoch, the epoch of information, and to the scientific community, a community with high curiosity. © The Author(s) 2016.

  5. How distinct is the coding of face identity and expression? Evidence for some common dimensions in face space.

    PubMed

    Rhodes, Gillian; Pond, Stephen; Burton, Nichola; Kloth, Nadine; Jeffery, Linda; Bell, Jason; Ewing, Louise; Calder, Andrew J; Palermo, Romina

    2015-09-01

    Traditional models of face perception emphasize distinct routes for processing face identity and expression. These models have been highly influential in guiding neural and behavioural research on the mechanisms of face perception. However, it is becoming clear that specialised brain areas for coding identity and expression may respond to both attributes and that identity and expression perception can interact. Here we use perceptual aftereffects to demonstrate the existence of dimensions in perceptual face space that code both identity and expression, further challenging the traditional view. Specifically, we find a significant positive association between face identity aftereffects and expression aftereffects, which dissociates from other face (gaze) and non-face (tilt) aftereffects. Importantly, individual variation in the adaptive calibration of these common dimensions significantly predicts ability to recognize both identity and expression. These results highlight the role of common dimensions in our ability to recognize identity and expression, and show why the high-level visual processing of these attributes is not entirely distinct. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Newborns' Face Recognition: Role of Inner and Outer Facial Features

    ERIC Educational Resources Information Center

    Turati, Chiara; Macchi Cassia, Viola; Simion, Francesca; Leo, Irene

    2006-01-01

    Existing data indicate that newborns are able to recognize individual faces, but little is known about what perceptual cues drive this ability. The current study showed that either the inner or outer features of the face can act as sufficient cues for newborns' face recognition (Experiment 1), but the outer part of the face enjoys an advantage…

  7. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras

    PubMed Central

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-01

    Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, accessing control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, through various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition through a comparison of recognition rates with conventional systems. PMID:26828487

  8. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras.

    PubMed

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-27

    Gender information has many useful applications in computer vision systems, such as surveillance, counting the number of males and females in a shopping mall, access control in restricted areas, and human-computer interaction. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results with various feature extraction and fusion methods, compared against the recognition rates of conventional systems, show that our approach is effective for gender recognition.

  9. Neural Correlates of Covert Face Processing: fMRI Evidence from a Prosopagnosic Patient

    PubMed Central

    Liu, Jiangang; Wang, Meiyun; Shi, Xiaohong; Feng, Lu; Li, Ling; Thacker, Justine Marie; Tian, Jie; Shi, Dapeng; Lee, Kang

    2014-01-01

    Brains can perceive or recognize a face even though we are subjectively unaware of the existence of that face. However, the exact neural correlates of such covert face processing remain unknown. Here, we compared the fMRI activities between a prosopagnosic patient and normal controls when they saw famous and unfamiliar faces. When compared with objects, the patient showed greater activation to famous faces in the fusiform face area (FFA) though he could not overtly recognize those faces. In contrast, the controls showed greater activation to both famous and unfamiliar faces in the FFA. Compared with unfamiliar faces, famous faces activated the controls', but not the patient's lateral prefrontal cortex (LPFC) known to be involved in familiar face recognition. In contrast, the patient showed greater activation in the bilateral medial frontal gyrus (MeFG). Functional connectivity analyses revealed that the patient's right middle fusiform gyrus (FG) showed enhanced connectivity to the MeFG, whereas the controls' middle FG showed enhanced connectivity to the LPFC. These findings suggest that the FFA may be involved in both covert and overt face recognition. The patient's impairment in overt face recognition may be due to the absence of the coupling between the right FG and the LPFC. PMID:23448870

  10. Identification and Classification of Facial Familiarity in Directed Lying: An ERP Study

    PubMed Central

    Sun, Delin; Chan, Chetwyn C. H.; Lee, Tatia M. C.

    2012-01-01

    Recognizing familiar faces is essential to social functioning, but little is known about how people identify human faces and classify them in terms of familiarity. Face identification involves discriminating familiar faces from unfamiliar faces, whereas face classification involves making an intentional decision to classify faces as “familiar” or “unfamiliar.” This study used a directed-lying task to explore the differentiation between identification and classification processes involved in the recognition of familiar faces. To explore this issue, the participants in this study were shown familiar and unfamiliar faces. They responded to these faces (i.e., as familiar or unfamiliar) in accordance with the instructions they were given (i.e., to lie or to tell the truth) while their EEG activity was recorded. Familiar faces (regardless of lying vs. truth) elicited significantly less negative-going N400f in the middle and right parietal and temporal regions than unfamiliar faces. Regardless of their actual familiarity, the faces that the participants classified as “familiar” elicited more negative-going N400f in the central and right temporal regions than those classified as “unfamiliar.” The P600 was related primarily with the facial identification process. Familiar faces (regardless of lying vs. truth) elicited more positive-going P600f in the middle parietal and middle occipital regions. The results suggest that N400f and P600f play different roles in the processes involved in facial recognition. The N400f appears to be associated with both the identification (judgment of familiarity) and classification of faces, while it is likely that the P600f is only associated with the identification process (recollection of facial information). Future studies should use different experimental paradigms to validate the generalizability of the results of this study. PMID:22363597

  11. Effects of aging on identifying emotions conveyed by point-light walkers.

    PubMed

    Spencer, Justine M Y; Sekuler, Allison B; Bennett, Patrick J; Giese, Martin A; Pilz, Karin S

    2016-02-01

    The visual system is able to recognize human motion simply from point lights attached to the major joints of an actor. Moreover, it has been shown that younger adults are able to recognize emotions from such dynamic point-light displays. Previous research has suggested that the ability to perceive emotional stimuli changes with age. For example, it has been shown that older adults are impaired in recognizing emotional expressions from static faces. In addition, it has been shown that older adults have difficulties perceiving visual motion, which might be helpful for recognizing emotions from point-light displays. In the current study, 4 experiments were completed in which older and younger adults were asked to identify 3 emotions (happy, sad, and angry) displayed by 4 types of point-light walkers: upright and inverted normal walkers, which contained both local motion and global form information; upright scrambled walkers, which contained only local motion information; and upright random-position walkers, which contained only global form information. Overall, emotion discrimination accuracy was lower in older participants compared with younger participants, specifically when identifying sad and angry point-light walkers. In addition, observers in both age groups were able to recognize emotions from all types of point-light walkers, suggesting that both older and younger adults are able to recognize emotions from point-light walkers on the basis of local motion or global form. (c) 2016 APA, all rights reserved.

  12. Multi-texture local ternary pattern for face recognition

    NASA Astrophysics Data System (ADS)

    Essa, Almabrok; Asari, Vijayan

    2017-05-01

    In the imagery and pattern analysis domain, a variety of descriptors have been proposed and employed for computer vision applications such as face detection and recognition. Many of them are affected by conditions during the image acquisition process, such as variations in illumination and the presence of noise, because they rely entirely on image intensity values to encode the image information. To overcome these problems, a novel technique named Multi-Texture Local Ternary Pattern (MTLTP) is proposed in this paper. MTLTP combines edges and corners based on the local ternary pattern strategy to extract the local texture features of the input image, and returns a spatial histogram feature vector as the descriptor for each image, which we use to recognize a person. Experimental results using a k-nearest neighbors (k-NN) classifier on two publicly available datasets demonstrate that our algorithm achieves efficient face recognition in the presence of extreme variations in illumination/lighting environments and slight variations in pose.
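The record builds on the local ternary pattern (LTP). Below is a minimal sketch of plain LTP encoding and its histogram descriptor; the multi-texture part of MTLTP (combining edge and corner texture) is not reproduced, and the threshold `t`, the plain 8-neighbourhood, and the absence of spatial grid blocks are illustrative simplifications.

```python
import numpy as np

def ltp_codes(img, t=5):
    """Basic local ternary pattern over the 8-neighbourhood.
    Each neighbour is +1 if above center+t, -1 if below center-t, else 0;
    the ternary pattern is split into 'upper' (+1) and 'lower' (-1)
    binary code maps, computed here for interior pixels only."""
    img = img.astype(np.int32)
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros_like(c)
    lower = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        upper |= (n > c + t).astype(np.int32) << bit
        lower |= (n < c - t).astype(np.int32) << bit
    return upper, lower

def ltp_histogram(img, t=5):
    """Concatenated 256-bin histograms of upper and lower codes --
    the spatial-histogram descriptor idea, minus the grid blocks."""
    upper, lower = ltp_codes(img, t)
    hu = np.bincount(upper.ravel(), minlength=256)
    hl = np.bincount(lower.ravel(), minlength=256)
    return np.concatenate([hu, hl])
```

Descriptors like this are then compared with a k-NN classifier, e.g. by Euclidean or chi-square distance between histograms.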

  13. The contribution of local features to familiarity judgments in music.

    PubMed

    Bigand, Emmanuel; Gérard, Yannick; Molin, Paul

    2009-07-01

    The contributions of local and global features to object identification depend upon the context. For example, while local features play an essential role in identification of words and objects, the global features are more influential in face recognition. In order to evaluate the respective strengths of local and global features for face recognition, researchers usually ask participants to recognize human faces (famous or learned) in normal and scrambled pictures. In this paper, we address a similar issue in music. We present the results of an experiment in which musically untrained participants were asked to differentiate famous from unknown musical excerpts that were presented in normal or scrambled ways. Manipulating the size of the temporal window on which the scrambling procedure was applied allowed us to evaluate the minimal length of time necessary for participants to make a familiarity judgment. Quite surprisingly, the minimum duration for differentiation of famous from unknown pieces is extremely short. This finding highlights the contribution of very local features to music memory.

  14. Age-related increase of image-invariance in the fusiform face area.

    PubMed

    Nordt, Marisa; Semmelmann, Kilian; Genç, Erhan; Weigelt, Sarah

    2018-06-01

    Face recognition undergoes prolonged development from childhood to adulthood, raising the question of which neural underpinnings drive this development. Here, we address the development of the neural foundation of the ability to recognize a face across naturally varying images. Fourteen children (ages 7-10) and 14 adults (ages 20-23) watched images of either the same or different faces in a functional magnetic resonance imaging adaptation paradigm. The same face was either presented in exact image repetitions or in varying images. Additionally, a subset of participants completed a behavioral task, in which they decided if the face in consecutively presented images belonged to the same person. Results revealed age-related increases in neural sensitivity to face identity in the fusiform face area. Importantly, ventral temporal face-selective regions exhibited more image-invariance - as indicated by stronger adaptation for different images of the same person - in adults compared to children. Crucially, the amount of adaptation to face identity across varying images was correlated with the ability to recognize individual faces in different images. These results suggest that the increase of image-invariance in face-selective regions might be related to the development of face recognition skills. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  15. Appropriate technology and climate change adaptation

    NASA Astrophysics Data System (ADS)

    Bandala, Erick R.; Patiño-Gomez, Carlos

    2016-02-01

    Climate change is emerging as the most significant environmental problem of the 21st century and the most important global challenge faced by humankind. Based on evidence recognized by the international scientific community, climate change is already an unquestionable reality, whose first effects are beginning to be measured. Available climate projections and models can assist in anticipating potential far-reaching consequences for development processes. Climatic transformations will impact the environment, biodiversity and water resources, putting several productive processes at risk; and will represent a threat to public health and water availability in quantity and quality.

  16. How the brain assigns a neural tag to arbitrary points in a high-dimensional space

    NASA Astrophysics Data System (ADS)

    Stevens, Charles

    Brains in almost all organisms need to deal with very complex stimuli. For example, most mammals are very good at face recognition, and faces are very complex objects indeed: modern face recognition software represents a face as a point in a 10,000 dimensional space. Every human must be able to learn to recognize any of the 7 billion faces in the world, and can recognize familiar faces after viewing a display of the face for only a few hundred milliseconds. Because we do not understand how faces are assigned locations in a high-dimensional space by the brain, attacking the problem of how face recognition is accomplished is very difficult. But a much easier problem of the same sort can be studied for odor recognition. For the mouse, each odor is assigned a point in a 1000 dimensional space, and the fruit fly assigns any odor a location in only a 50 dimensional space. A fly has about 50 distinct types of odorant receptor neurons (ORNs), each of which produces nerve impulses at a specific rate for each different odor. This pattern of firing produced across 50 ORNs is called `a combinatorial odor code', and this code assigns every odor a point in a 50 dimensional space that is used to identify the odor. In order to learn the odor, the brain must alter the strength of synapses. The combinatorial code cannot itself be used to change synaptic strength because all odors use the same neurons to form the code, and so all synapses would be changed for any odor and the odors could not be distinguished. In order to learn an odor, the brain must assign a set of neurons - the odor tag - with two properties: (1) the tag should make use of all of the information available about the odor, and (2) any two tags should overlap as little as possible (so one odor does not modify synapses used by other odors). In the talk, I will explain how the olfactory systems of both the fruit fly and the mouse produce a tag for each odor that has these two properties.
Supported by NSF.
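One way to obtain a tag with the two stated properties, loosely modeled on the fly's expansion from ~50 ORN channels onto many Kenyon-cell-like units, is a sparse random projection followed by winner-take-all sparsification. The dimensions, connection sparsity, and `k` below are illustrative assumptions, not figures from the talk.

```python
import numpy as np

def odor_tag(orn_rates, n_cells=2000, k=100, seed=0):
    """Sketch of a sparse odor tag: expand the 50-dim ORN firing-rate
    vector through a fixed sparse random matrix, then keep only the
    top-k most active units. The random mixing uses all ORN channels
    (property 1); the top-k sparsity keeps overlap between tags of
    different odors small (property 2)."""
    rng = np.random.default_rng(seed)
    # each expansion unit samples a random ~10% subset of the ORNs
    proj = (rng.random((n_cells, orn_rates.size)) < 0.1).astype(float)
    activity = proj @ orn_rates
    # winner-take-all: the indices of the k most active units form the tag
    tag = np.zeros(n_cells, dtype=bool)
    tag[np.argsort(activity)[-k:]] = True
    return tag
```

Because the projection is fixed, the same odor always maps to the same tag, while distinct odors land on largely disjoint sets of units.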

  17. Individuation Experience Predicts Other-Race Effects in Holistic Processing for Both Caucasian and Black Participants

    ERIC Educational Resources Information Center

    Bukach, Cindy M.; Cottle, Jasmine; Ubiwa, JoAnna; Miller, Jessica

    2012-01-01

    Same-race (SR) faces are recognized better than other-race (OR) faces, and this other-race effect (ORE) is correlated with experience. SR faces are also processed more holistically than OR faces, suggesting one possible mechanism for poorer performance on OR faces. Studies of object expertise have shown that individuating experiences are necessary…

  18. Developmental Changes in Face Recognition during Childhood: Evidence from Upright and Inverted Faces

    ERIC Educational Resources Information Center

    de Heering, Adelaide; Rossion, Bruno; Maurer, Daphne

    2012-01-01

    Adults are experts at recognizing faces but there is controversy about how this ability develops with age. We assessed 6- to 12-year-olds and adults using a digitized version of the Benton Face Recognition Test, a sensitive tool for assessing face perception abilities. Children's response times for correct responses did not decrease between ages 6…

  19. Developmental prosopagnosia and super-recognition: no special role for surface reflectance processing

    PubMed Central

    Russell, Richard; Chatterjee, Garga; Nakayama, Ken

    2011-01-01

    Face recognition by normal subjects depends in roughly equal proportions on shape and surface reflectance cues, while object recognition depends predominantly on shape cues. It is possible that developmental prosopagnosics are deficient not in their ability to recognize faces per se, but rather in their ability to use reflectance cues. Similarly, super-recognizers’ exceptional ability with face recognition may be a result of superior surface reflectance perception and memory. We tested this possibility by administering tests of face perception and face recognition in which only shape or reflectance cues are available to developmental prosopagnosics, super-recognizers, and control subjects. Face recognition ability and the relative use of shape and pigmentation were unrelated in all the tests. Subjects who were better at using shape or reflectance cues were also better at using the other type of cue. These results do not support the proposal that variation in surface reflectance perception ability is the underlying cause of variation in face recognition ability. Instead, these findings support the idea that face recognition ability is related to neural circuits using representations that integrate shape and pigmentation information. PMID:22192636

  20. Pose-variant facial expression recognition using an embedded image system

    NASA Astrophysics Data System (ADS)

    Song, Kai-Tai; Han, Meng-Ju; Chang, Shuo-Hung

    2008-12-01

    In recent years, one of the most attractive research areas in human-robot interaction is automated facial expression recognition. Through recognizing the facial expression, a pet robot can interact with humans in a more natural manner. In this study, we focus on the facial pose-variant problem. A novel method is proposed in this paper to recognize pose-variant facial expressions. After locating the face position in an image frame, the active appearance model (AAM) is applied to track facial features. Fourteen feature points are extracted to represent the variation of facial expressions. The distances between feature points are defined as the feature values. These feature values are sent to a support vector machine (SVM) for facial expression determination. The pose-variant facial expression is classified into happiness, neutral, sadness, surprise or anger. Furthermore, in order to evaluate the performance for practical applications, this study also built a low resolution database (160x120 pixels) using a CMOS image sensor. Experimental results show that the recognition rate is 84% with the self-built database.
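The feature-value step described in this record (distances between tracked feature points) can be sketched directly. The classifier below is a nearest-centroid stand-in for the SVM stage, and the point sets and expression labels are hypothetical.

```python
import numpy as np
from itertools import combinations

def expression_features(points):
    """Pairwise Euclidean distances between tracked feature points,
    following the paper's feature definition (14 AAM points would
    yield a 91-dimensional distance vector)."""
    points = np.asarray(points, dtype=float)
    return np.array([np.linalg.norm(points[i] - points[j])
                     for i, j in combinations(range(len(points)), 2)])

def classify_expression(feat, centroids):
    """Nearest-centroid stand-in for the SVM stage; `centroids` is a
    hypothetical map of expression label -> mean feature vector."""
    return min(centroids, key=lambda k: np.linalg.norm(feat - centroids[k]))
```

In the paper the distance vectors would instead be fed to a trained SVM, which separates the five expression classes with learned decision boundaries.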

  1. Can a Humanoid Face be Expressive? A Psychophysiological Investigation

    PubMed Central

    Lazzeri, Nicole; Mazzei, Daniele; Greco, Alberto; Rotesi, Annalisa; Lanatà, Antonio; De Rossi, Danilo Emilio

    2015-01-01

    Non-verbal signals expressed through body language play a crucial role in multi-modal human communication during social relations. Indeed, in all cultures, facial expressions are the most universal and direct signs to express innate emotional cues. A human face conveys important information in social interactions and helps us to better understand our social partners and establish empathic links. Recent research shows that humanoid and social robots are becoming increasingly similar to humans, both esthetically and expressively. However, their visual expressiveness is a crucial issue that must be improved to make these robots more realistic and intuitively perceived by humans as no different from themselves. This study concerns the capability of a humanoid robot to exhibit emotions through facial expressions. More specifically, emotional signs performed by a humanoid robot have been compared with corresponding human facial expressions in terms of recognition rate and response time. The set of stimuli included standardized human expressions taken from an Ekman-based database and the same facial expressions performed by the robot. Furthermore, participants’ psychophysiological responses have been explored to investigate whether there could be differences induced by interpreting robot or human emotional stimuli. Preliminary results show a trend to better recognize expressions performed by the robot than 2D photos or 3D models. Moreover, no significant differences in the subjects’ psychophysiological state have been found during the discrimination of facial expressions performed by the robot in comparison with the same task performed with 2D photos and 3D models. PMID:26075199

  2. Quantifying facial expression recognition across viewing conditions.

    PubMed

    Goren, Deborah; Wilson, Hugh R

    2006-04-01

    Facial expressions are key to social interactions and to assessment of potential danger in various situations. Therefore, our brains must be able to recognize facial expressions when they are transformed in biologically plausible ways. We used synthetic happy, sad, angry and fearful faces to determine the amount of geometric change required to recognize these emotions during brief presentations. Five-alternative forced choice conditions involving central viewing, peripheral viewing and inversion were used to study recognition among the four emotions. Two-alternative forced choice was used to study affect discrimination when spatial frequency information in the stimulus was modified. The results show an emotion- and task-dependent pattern of detection. Facial expressions presented with low peak frequencies are much harder to discriminate from neutral than faces defined by either mid or high peak frequencies. Peripheral presentation of faces also makes recognition much more difficult, except for happy faces. Differences between fearful detection and recognition tasks are probably due to common confusions with sadness when recognizing fear from among other emotions. These findings further support the idea that these emotions are processed separately from each other.

  3. A human rights approach to the health implications of food and nutrition insecurity.

    PubMed

    Ayala, Ana; Meier, Benjamin Mason

    2017-01-01

    Food and nutrition insecurity continues to pose a serious global challenge, reflecting government shortcomings in meeting international obligations to ensure the availability, accessibility, and quality of food and to ensure the highest attainable standard of health of their peoples. With global drivers like climate change, urbanization, greater armed conflict, and the globalization of unhealthy diet, particularly in under-resourced countries, food insecurity is rapidly becoming an even greater challenge for those living in poverty. International human rights law can serve a critical role in guiding governments that are struggling to protect the health of their populations, particularly among the most susceptible groups, in responding to food and nutrition insecurity. This article explores and advocates for a human rights approach to food and nutrition security, specifically identifying legal mechanisms to "domesticate" relevant international human rights standards through national policy. Recognizing nutrition security as a determinant of public health, this article recognizes the important links between the four main elements of food security (i.e., availability, stability, utilization, and access) and the normative attributes of the right to health and the right to food (i.e., availability, accessibility, affordability, and quality). In drawing from the evolution of international human rights instruments, official documents issued by international human rights treaty bodies, as well as past scholarship at the intersection of the right to health and right to food, this article interprets and articulates the intersectional rights-based obligations of national governments in the face of food and nutrition insecurity.

  4. Facial Emotion Recognition in Bipolar Disorder and Healthy Aging.

    PubMed

    Altamura, Mario; Padalino, Flavia A; Stella, Eleonora; Balzotti, Angela; Bellomo, Antonello; Palumbo, Rocco; Di Domenico, Alberto; Mammarella, Nicola; Fairfield, Beth

    2016-03-01

    Emotional face recognition is impaired in bipolar disorder, but it is not clear whether this is specific for the illness. Here, we investigated how aging and bipolar disorder influence dynamic emotional face recognition. Twenty older adults, 16 bipolar patients, and 20 control subjects performed a dynamic affective facial recognition task and a subsequent rating task. Participants pressed a key as soon as they were able to discriminate whether the neutral face was assuming a happy or angry facial expression and then rated the intensity of each facial expression. Results showed that older adults recognized happy expressions faster, whereas bipolar patients recognized angry expressions faster. Furthermore, both groups rated emotional faces more intensely than did the control subjects. This study is one of the first to compare how aging and clinical conditions influence emotional facial recognition and underlines the need to consider the role of specific and common factors in emotional face recognition.

  5. Independent Influences of Verbalization and Race on the Configural and Featural Processing of Faces: A Behavioral and Eye Movement Study

    ERIC Educational Resources Information Center

    Nakabayashi, Kazuyo; Lloyd-Jones, Toby J.; Butcher, Natalie; Liu, Chang Hong

    2012-01-01

    Describing a face in words can either hinder or help subsequent face recognition. Here, the authors examined the relationship between the benefit from verbally describing a series of faces and the same-race advantage (SRA) whereby people are better at recognizing unfamiliar faces from their own race as compared with those from other races.…

  6. Evidence for view-invariant face recognition units in unfamiliar face learning.

    PubMed

    Etchells, David B; Brooks, Joseph L; Johnston, Robert A

    2017-05-01

    Many models of face recognition incorporate the idea of a face recognition unit (FRU), an abstracted representation formed from each experience of a face which aids recognition under novel viewing conditions. Some previous studies have failed to find evidence of this FRU representation. Here, we report three experiments which investigated this theoretical construct by modifying the face learning procedure from that in previous work. During learning, one or two views of previously unfamiliar faces were shown to participants in a serial matching task. Later, participants attempted to recognize both seen and novel views of the learned faces (recognition phase). Experiment 1 tested participants' recognition of a novel view, a day after learning. Experiment 2 was identical, but tested participants on the same day as learning. Experiment 3 repeated Experiment 1, but tested participants on a novel view that was outside the rotation of those views learned. Results revealed a significant advantage, across all experiments, for recognizing a novel view when two views had been learned compared to single view learning. The observed view invariance supports the notion that an FRU representation is established during multi-view face learning under particular learning conditions.

  7. Emotional Cues during Simultaneous Face and Voice Processing: Electrophysiological Insights

    PubMed Central

    Liu, Taosheng; Pinheiro, Ana; Zhao, Zhongxin; Nestor, Paul G.; McCarley, Robert W.; Niznikiewicz, Margaret A.

    2012-01-01

    Both facial expression and tone of voice represent key signals of emotional communication but their brain processing correlates remain unclear. Accordingly, we constructed a novel implicit emotion recognition task consisting of simultaneously presented human faces and voices with neutral, happy, and angry valence, within the context of a monkey face and voice recognition task. To investigate the temporal unfolding of the processing of affective information from human face-voice pairings, we recorded event-related potentials (ERPs) to these audiovisual test stimuli in 18 normal healthy subjects; N100, P200, N250, P300 components were observed at electrodes in the frontal-central region, while P100, N170, P270 were observed at electrodes in the parietal-occipital region. Results indicated a significant audiovisual stimulus effect on the amplitudes and latencies of components in the frontal-central (P200, P300, and N250) but not the parietal-occipital region (P100, N170 and P270). Specifically, P200 and P300 amplitudes were more positive for emotional relative to neutral audiovisual stimuli, irrespective of valence, whereas N250 amplitude was more negative for neutral relative to emotional stimuli. No differentiation was observed between angry and happy conditions. The results suggest that the general effect of emotion on audiovisual processing can emerge as early as 200 msec (P200 peak latency) post stimulus onset, in spite of implicit affective processing task demands, and that such an effect is mainly distributed in the frontal-central region. PMID:22383987

  8. A smart technique for attendance system to recognize faces through parallelism

    NASA Astrophysics Data System (ADS)

    Prabhavathi, B.; Tanuja, V.; Madhu Viswanatham, V.; Rajashekhara Babu, M.

    2017-11-01

    A major part of recognizing a person is the face, and with image processing techniques we can exploit a person's physical features. In the old approach used in schools and colleges, the professor calls each student's name and then the attendance for that student is marked. In this paper we deviate from the old approach by using techniques from image processing. We present a system for spontaneously recording student presence in the classroom. First, an image of the classroom is taken and stored in a data record. To the images stored in the database we apply an algorithm that includes steps such as histogram classification, noise removal, face detection, and face recognition. Using these steps we detect the faces and compare them with the database; the attendance is marked automatically if the system recognizes the faces.
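The recognition end of a pipeline like this can be sketched as histogram equalization followed by template comparison against the enrolled database. Face detection is omitted here, and the mean-absolute-difference matcher, its threshold, and the `database` layout are illustrative stand-ins for the paper's actual steps.

```python
import numpy as np

def equalize(img):
    """Histogram equalization (one of the preprocessing steps named in
    the pipeline) for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    # map intensities through the normalized cumulative histogram
    cdf = (cdf - cdf.min()) * 255 // max(cdf.max() - cdf.min(), 1)
    return cdf[img].astype(np.uint8)

def mark_attendance(face_crops, database, threshold=40.0):
    """Compare each detected face crop against enrolled templates and
    return the set of recognized student IDs. `face_crops` are assumed
    to be pre-cropped grayscale faces; `database` maps a hypothetical
    student ID -> enrolled template image."""
    present = set()
    for crop in face_crops:
        crop_eq = equalize(crop)
        for student, template in database.items():
            diff = np.mean(np.abs(crop_eq.astype(float)
                                  - equalize(template).astype(float)))
            if diff < threshold:
                present.add(student)
    return present
```

A real system would insert noise removal and a face detector before this matching stage, and use a stronger recognizer than raw pixel differences.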

  9. The Foundations of Social Cognition: Studies on Face/Voice Integration in Newborn Infants

    ERIC Educational Resources Information Center

    Streri, Arlette; Coulon, Marion; Guellai, Bahia

    2013-01-01

    A series of studies on newborns' abilities for recognizing speaking faces has been performed in order to identify the fundamental cues of social cognition. We used audiovisual dynamic faces rather than photographs or patterns of faces. Direct eye gaze and speech addressed to newborns, in interactive situations, appear to be two good candidates for…

  10. Exploring the Perceptual Spaces of Faces, Cars and Birds in Children and Adults

    ERIC Educational Resources Information Center

    Tanaka, James W.; Meixner, Tamara L.; Kantner, Justin

    2011-01-01

    While much developmental research has focused on the strategies that children employ to recognize faces, less is known about the principles governing the organization of face exemplars in perceptual memory. In this study, we tested a novel, child-friendly paradigm for investigating the organization of face, bird and car exemplars. Children ages…

  11. Observing real-time social interaction via telecommunication methods in budgerigars (Melopsittacus undulatus).

    PubMed

    Ikkatai, Yuko; Okanoya, Kazuo; Seki, Yoshimasa

    2016-07-01

    Humans communicate with one another not only face-to-face but also via modern telecommunication methods such as television and video conferencing. We readily detect the difference between people actively communicating with us and people merely acting via a broadcasting system. We developed an animal model of this novel communication method seen in humans to determine whether animals also make this distinction. We built a system for two animals to interact via audio-visual equipment in real-time, to compare behavioral differences between two conditions, an "interactive two-way condition" and a "non-interactive (one-way) condition." We measured birds' responses to stimuli which appeared in these two conditions. We used budgerigars, which are small, gregarious birds, and found that the frequency of vocal interaction with other individuals did not differ between the two conditions. However, body synchrony between the two birds was observed more often in the interactive condition, suggesting budgerigars recognized the difference between these interactive and non-interactive conditions on some level. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Neural Correlates of the In-Group Memory Advantage on the Encoding and Recognition of Faces

    PubMed Central

    Herzmann, Grit; Curran, Tim

    2013-01-01

    People have a memory advantage for faces that belong to the same group, for example, that attend the same university or have the same personality type. Faces from such in-group members are assumed to receive more attention during memory encoding and are therefore recognized more accurately. Here we use event-related potentials related to memory encoding and retrieval to investigate the neural correlates of the in-group memory advantage. Using the minimal group procedure, subjects were classified based on a bogus personality test as belonging to one of two personality types. While the electroencephalogram was recorded, subjects studied and recognized faces supposedly belonging to the subject’s own and the other personality type. Subjects recognized in-group faces more accurately than out-group faces but the effect size was small. Using the individual behavioral in-group memory advantage in multivariate analyses of covariance, we determined neural correlates of the in-group advantage. During memory encoding (300 to 1000 ms after stimulus onset), subjects with a high in-group memory advantage elicited more positive amplitudes for subsequently remembered in-group than out-group faces, showing that in-group faces received more attention and elicited more neural activity during initial encoding. Early during memory retrieval (300 to 500 ms), frontal brain areas were more activated for remembered in-group faces indicating an early detection of group membership. Surprisingly, the parietal old/new effect (600 to 900 ms) thought to indicate recollection processes differed between in-group and out-group faces independent from the behavioral in-group memory advantage. This finding suggests that group membership affects memory retrieval independent of memory performance. 
Comparisons with a previous study on the other-race effect, another memory phenomenon influenced by social classification of faces, suggested that the in-group memory advantage is dominated by top-down processing whereas the other-race effect is also influenced by extensive perceptual experience. PMID:24358226

  13. A bimodal biometric identification system

    NASA Astrophysics Data System (ADS)

    Laghari, Mohammad S.; Khuwaja, Gulzar A.

    2013-03-01

    Biometrics consists of methods for uniquely recognizing humans based upon one or more intrinsic physical or behavioral traits. Physical traits relate to the shape of the body; behavioral traits relate to the behavior of a person. However, biometric authentication systems suffer from imprecision and difficulty in person recognition for a number of reasons, and no single biometric is expected to effectively satisfy the requirements of all verification and/or identification applications. Bimodal biometric systems are expected to be more reliable due to the presence of two pieces of evidence, and also to be able to meet the severe performance requirements imposed by various applications. This paper presents a neural-network-based bimodal biometric identification system using human face and handwritten signature features.
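    The abstract does not specify how the two modalities are combined; a common approach in bimodal systems is score-level fusion, where each matcher's per-identity similarity scores are normalized and combined by a weighted sum. The sketch below illustrates that general idea only — the function names, weight, and min-max normalization are illustrative assumptions, not details from the paper:

```python
import numpy as np

def min_max_normalize(scores):
    """Map raw matcher scores to [0, 1] so the two modalities are comparable."""
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def fuse_and_identify(face_scores, signature_scores, w_face=0.6):
    """Score-level fusion: weighted sum of normalized per-identity
    similarity scores from a face matcher and a signature matcher.
    Returns the index of the best-matching enrolled identity."""
    f = min_max_normalize(face_scores)
    s = min_max_normalize(signature_scores)
    fused = w_face * f + (1 - w_face) * s
    return int(np.argmax(fused))
```

    In a real system the weight would be tuned on validation data, and the normalization chosen to match each matcher's score distribution.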

  14. Comparing the visual spans for faces and letters

    PubMed Central

    He, Yingchen; Scholz, Jennifer M.; Gage, Rachel; Kallie, Christopher S.; Liu, Tingting; Legge, Gordon E.

    2015-01-01

    The visual span—the number of adjacent text letters that can be reliably recognized on one fixation—has been proposed as a sensory bottleneck that limits reading speed (Legge, Mansfield, & Chung, 2001). Like reading, searching for a face is an important daily task that involves pattern recognition. Is there a similar limitation on the number of faces that can be recognized in a single fixation? Here we report on a study in which we measured and compared the visual-span profiles for letter and face recognition. A serial two-stage model for pattern recognition was developed to interpret the data. The first stage is characterized by factors limiting recognition of isolated letters or faces, and the second stage represents the interfering effect of nearby stimuli on recognition. Our findings show that the visual span for faces is smaller than that for letters. Surprisingly, however, when differences in first-stage processing for letters and faces are accounted for, the two visual spans become nearly identical. These results suggest that the concept of visual span may describe a common sensory bottleneck that underlies different types of pattern recognition. PMID:26129858

  15. Face engagement during infancy predicts later face recognition ability in younger siblings of children with autism.

    PubMed

    de Klerk, Carina C J M; Gliga, Teodora; Charman, Tony; Johnson, Mark H

    2014-07-01

    Face recognition difficulties are frequently documented in children with autism spectrum disorders (ASD). It has been hypothesized that these difficulties result from a reduced interest in faces early in life, leading to decreased cortical specialization and atypical development of the neural circuitry for face processing. However, a recent study by our lab demonstrated that infants at increased familial risk for ASD, irrespective of their diagnostic status at 3 years, exhibit a clear orienting response to faces. The present study was conducted as a follow-up on the same cohort to investigate how measures of early engagement with faces relate to face-processing abilities later in life. We also investigated whether face recognition difficulties are specifically related to an ASD diagnosis, or whether they are present at a higher rate in all those at familial risk. At 3 years we found a reduced ability to recognize unfamiliar faces in the high-risk group that was not specific to those children who received an ASD diagnosis, consistent with face recognition difficulties being an endophenotype of the disorder. Furthermore, we found that longer looking at faces at 7 months was associated with poorer performance on the face recognition task at 3 years in the high-risk group. These findings suggest that longer looking at faces in infants at risk for ASD might reflect early face-processing difficulties and predicts difficulties with recognizing faces later in life. © 2013 The Authors. Developmental Science Published by John Wiley & Sons Ltd.

  16. Pupillary Responses to Robotic and Human Emotions: The Uncanny Valley and Media Equation Confirmed.

    PubMed

    Reuten, Anne; van Dam, Maureen; Naber, Marnix

    2018-01-01

    Physiological responses during human-robot interaction are useful alternatives to subjective measures of uncanny feelings for nearly humanlike robots (uncanny valley) and comparable emotional responses between humans and robots (media equation). However, no studies have employed the easily accessible measure of pupillometry to confirm the uncanny valley and media equation hypotheses; evidence in favor of these hypotheses in interaction with emotional robots is scarce; and previous studies have not controlled for low-level image statistics across robot appearances. We therefore recorded the pupil size of 40 participants who viewed and rated pictures of robotic and human faces that expressed a variety of basic emotions. The robotic faces varied along the dimension of human likeness from cartoonish to humanlike. We strictly controlled for confounding factors by removing backgrounds, hair, and color, and by equalizing low-level image statistics. After the presentation phase, participants indicated to what extent the robots appeared uncanny and humanlike, and whether they could imagine social interaction with the robots in real-life situations. The results show that robots rated as nearly humanlike scored higher on uncanniness, scored lower on imagined social interaction, evoked weaker pupil dilations, and their emotional expressions were more difficult to recognize. Pupils dilated most strongly to negative expressions, and the pattern of pupil responses across emotions was highly similar between robot and human stimuli. These results highlight the usefulness of pupillometry in emotion studies and robot design by confirming the uncanny valley and media equation hypotheses.

  17. Virtual faces expressing emotions: an initial concomitant and construct validity study.

    PubMed

    Joyal, Christian C; Jacob, Laurence; Cigna, Marie-Hélène; Guay, Jean-Pierre; Renaud, Patrice

    2014-01-01

    Facial expressions of emotions represent classic stimuli for the study of social cognition. Developing virtual dynamic facial expressions of emotions, however, would open up possibilities for both fundamental and clinical research. For instance, virtual faces allow real-time human-computer feedback between physiological measures and the virtual agent. The goal of this study was to provide an initial assessment of the concomitant and construct validity of a newly developed set of virtual faces expressing six fundamental emotions (happiness, surprise, anger, sadness, fear, and disgust). Recognition rates, facial electromyography (zygomatic major and corrugator supercilii muscles), and regional gaze fixation latencies (eye and mouth regions) were compared in 41 adult volunteers (20 ♂, 21 ♀) during the presentation of video clips depicting real vs. virtual adults expressing emotions. Emotions expressed by each set of stimuli were recognized equally well by men and women. Accordingly, both sets of stimuli elicited similar activation of facial muscles and similar ocular fixation times on the eye regions in both male and female participants. Further validation studies can be performed with these virtual faces among clinical populations known to present social cognition difficulties. Brain-computer interface studies with feedback-feedforward interactions based on facial emotion expressions can also be conducted with these stimuli.

  18. A stable biologically motivated learning mechanism for visual feature extraction to handle facial categorization.

    PubMed

    Rajaei, Karim; Khaligh-Razavi, Seyed-Mahdi; Ghodrati, Masoud; Ebrahimpour, Reza; Shiri Ahmad Abadi, Mohammad Ebrahim

    2012-01-01

    The brain mechanism of extracting visual features for recognizing various objects has consistently been a controversial issue in computational models of object recognition. To extract visual features, we introduce a new, biologically motivated model for facial categorization, which is an extension of the Hubel and Wiesel simple-to-complex cell hierarchy. To address the synaptic stability versus plasticity dilemma, we apply the Adaptive Resonance Theory (ART) for extracting informative intermediate level visual features during the learning process, which also makes this model stable against the destruction of previously learned information while learning new information. Such a mechanism has been suggested to be embedded within known laminar microcircuits of the cerebral cortex. To reveal the strength of the proposed visual feature learning mechanism, we show that when we use this mechanism in the training process of a well-known biologically motivated object recognition model (the HMAX model), it performs better than the HMAX model in face/non-face classification tasks. Furthermore, we demonstrate that our proposed mechanism is capable of following similar trends in performance as humans in a psychophysical experiment using a face versus non-face rapid categorization task.

  19. Visual scanning and recognition of Chinese, Caucasian, and racially ambiguous faces: Contributions from bottom-up facial physiognomic information and top-down knowledge of racial categories

    PubMed Central

    Wang, Qiandong; Xiao, Naiqi G.; Quinn, Paul C.; Hu, Chao S.; Qian, Miao; Fu, Genyue; Lee, Kang

    2014-01-01

    Recent studies have shown that participants use different eye movement strategies when scanning own- and other-race faces. However, it is unclear (1) whether this effect is related to face recognition performance, and (2) to what extent this effect is influenced by top-down or bottom-up facial information. In the present study, Chinese participants performed a face recognition task with Chinese faces, Caucasian faces, and racially ambiguous morphed face stimuli. For the racially ambiguous faces, we led participants to believe that they were viewing either own-race Chinese faces or other-race Caucasian faces. Results showed that (1) Chinese participants scanned the nose of the true Chinese faces more than that of the true Caucasian faces, whereas they scanned the eyes of the Caucasian faces more than those of the Chinese faces; (2) they scanned the eyes, nose, and mouth equally for the ambiguous faces in the Chinese condition compared with those in the Caucasian condition; (3) when recognizing the true Chinese target faces, but not the true target Caucasian faces, the greater the fixation proportion on the nose, the faster the participants correctly recognized these faces. The same was true when racially ambiguous face stimuli were thought to be Chinese faces. These results provide the first evidence to show that (1) visual scanning patterns of faces are related to own-race face recognition response time, and (2) it is bottom-up facial physiognomic information of racial categories that mainly contributes to face scanning. However, top-down knowledge of racial categories can influence the relationship between face scanning patterns and recognition response time. PMID:25497461

  20. Natural Products: An Alternative to Conventional Therapy for Dermatophytosis?

    PubMed

    Lopes, Graciliana; Pinto, Eugénia; Salgueiro, Lígia

    2017-02-01

    The increased incidence of fungal infections, associated with the widespread use of antifungal drugs, has resulted in the development of resistance, making it necessary to discover new therapeutic alternatives. Among fungal infections, dermatophytoses constitute a serious public health problem, affecting 20-25% of the world population. Medicinal plants represent an endless source of bioactive molecules, and their volatile and non-volatile extracts are clearly recognized as the historical basis of therapeutic health care. Because of this, research on natural products with antifungal activity against dermatophytes has increased considerably in recent years. However, despite the recognized anti-dermatophytic potential of natural products, often advantageous compared to commercial drugs, there is still a long way to go before they can be used in therapeutics. This review attempts to summarize the current status of anti-dermatophytic natural products, focusing on their mechanism of action, the developed pharmaceutical formulations and their effectiveness in human and animal models of infection.

  1. The Perception of Faces in Different Poses by One-Month-Olds.

    ERIC Educational Resources Information Center

    Sai, F.; Bushnell, I. W. R.

    The ability of 1-month-old infants to recognize their mothers visually was explored with the live faces of mother and stranger presented in three different poses: en face (full face), half-profile, and profile. Subjects were 16 infants with normal Apgar scores at birth who were volunteered by their parents after an initial contact in a maternity…

  2. Visual scanning and recognition of Chinese, Caucasian, and racially ambiguous faces: contributions from bottom-up facial physiognomic information and top-down knowledge of racial categories.

    PubMed

    Wang, Qiandong; Xiao, Naiqi G; Quinn, Paul C; Hu, Chao S; Qian, Miao; Fu, Genyue; Lee, Kang

    2015-02-01

    Recent studies have shown that participants use different eye movement strategies when scanning own- and other-race faces. However, it is unclear (1) whether this effect is related to face recognition performance, and (2) to what extent this effect is influenced by top-down or bottom-up facial information. In the present study, Chinese participants performed a face recognition task with Chinese, Caucasian, and racially ambiguous faces. For the racially ambiguous faces, we led participants to believe that they were viewing either own-race Chinese faces or other-race Caucasian faces. Results showed that (1) Chinese participants scanned the nose of the true Chinese faces more than that of the true Caucasian faces, whereas they scanned the eyes of the Caucasian faces more than those of the Chinese faces; (2) they scanned the eyes, nose, and mouth equally for the ambiguous faces in the Chinese condition compared with those in the Caucasian condition; (3) when recognizing the true Chinese target faces, but not the true target Caucasian faces, the greater the fixation proportion on the nose, the faster the participants correctly recognized these faces. The same was true when racially ambiguous face stimuli were thought to be Chinese faces. These results provide the first evidence to show that (1) visual scanning patterns of faces are related to own-race face recognition response time, and (2) it is bottom-up facial physiognomic information that mainly contributes to face scanning. However, top-down knowledge of racial categories can influence the relationship between face scanning patterns and recognition response time. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Door Security using Face Detection and Raspberry Pi

    NASA Astrophysics Data System (ADS)

    Bhutra, Venkatesh; Kumar, Harshav; Jangid, Santosh; Solanki, L.

    2018-03-01

    With the world moving towards advanced technologies, security forms a crucial part of daily life. Among the many techniques used for this purpose, face recognition stands as an effective means of authentication and security. This paper deals with the use of principal component analysis (PCA) for security. PCA is a statistical approach used to simplify a data set. The minimum Euclidean distance found with the PCA technique is used to recognize the face. A Raspberry Pi, a low-cost ARM-based computer on a small circuit board, controls the servo motor and other sensors. The servo motor is in turn attached to the door of the home and opens it when the face is recognized. The proposed work has been done using a self-made training database of students from B.K. Birla Institute of Engineering and Technology, Pilani, Rajasthan, India.
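    The recognition step the abstract describes — project flattened face images into a PCA subspace, then pick the gallery face at minimum Euclidean distance — can be sketched as follows. This is a minimal eigenfaces-style illustration under assumed interfaces (function names and the synthetic data are not from the paper):

```python
import numpy as np

def pca_basis(train, k):
    """Compute a k-dimensional PCA basis from flattened grayscale faces.

    train: (n_samples, n_pixels) array, one flattened face per row.
    Returns the mean face and the top-k principal components (rows)."""
    mean = train.mean(axis=0)
    centered = train - mean
    # SVD of the centered data: rows of vt are the principal directions
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(face, mean, components):
    """Project one flattened face into the PCA subspace."""
    return components @ (face - mean)

def recognize(query, gallery, labels, mean, components):
    """Return the label of the gallery face whose PCA projection has
    minimum Euclidean distance to the query's projection."""
    q = project(query, mean, components)
    dists = [np.linalg.norm(q - project(g, mean, components)) for g in gallery]
    return labels[int(np.argmin(dists))]
```

    In the paper's setting, a match below some distance threshold would trigger the Raspberry Pi to drive the servo motor; that control logic is hardware-specific and omitted here.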

  4. Movement cues aid face recognition in developmental prosopagnosia.

    PubMed

    Bennetts, Rachel J; Butcher, Natalie; Lander, Karen; Udale, Robert; Bate, Sarah

    2015-11-01

    Seeing a face in motion can improve face recognition in the general population, and studies of face matching indicate that people with face recognition difficulties (developmental prosopagnosia; DP) may be able to use movement cues as a supplementary strategy to help them process faces. However, the use of facial movement cues in DP has not been examined in the context of familiar face recognition. This study examined whether people with DP were better at recognizing famous faces presented in motion, compared to static. Nine participants with DP and 14 age-matched controls completed a famous face recognition task. Each face was presented twice across 2 blocks: once in motion and once as a still image. Discriminability (A) was calculated for each block. Participants with DP showed a significant movement advantage overall. This was driven by a movement advantage in the first block, but not in the second block. Participants with DP were significantly worse than controls at identifying faces from static images, but there was no difference between those with DP and controls for moving images. Seeing a familiar face in motion can improve face recognition in people with DP, at least in some circumstances. The mechanisms behind this effect are unclear, but these results suggest that some people with DP are able to learn and recognize patterns of facial motion, and movement can act as a useful cue when face recognition is impaired. (c) 2015 APA, all rights reserved.

  5. The "parts and wholes" of face recognition: A review of the literature.

    PubMed

    Tanaka, James W; Simonyi, Diana

    2016-10-01

    It has been claimed that faces are recognized as a "whole" rather than by the recognition of individual parts. In a paper published in the Quarterly Journal of Experimental Psychology in 1993, Martha Farah and I attempted to operationalize the holistic claim using the part/whole task. In this task, participants studied a face and then their memory for a face part was tested in isolation and in the whole face. Consistent with the holistic view, recognition of the part was superior when tested in the whole-face condition compared to when it was tested in isolation. The "whole face" or holistic advantage was not found for faces that were inverted, or scrambled, nor for non-face objects, suggesting that holistic encoding was specific to normal, intact faces. In this paper, we reflect on the part/whole paradigm and how it has contributed to our understanding of what it means to recognize a face as a "whole" stimulus. We describe the value of part/whole task for developing theories of holistic and non-holistic recognition of faces and objects. We discuss the research that has probed the neural substrates of holistic processing in healthy adults and people with prosopagnosia and autism. Finally, we examine how experience shapes holistic face recognition in children and recognition of own- and other-race faces in adults. The goal of this article is to summarize the research on the part/whole task and speculate on how it has informed our understanding of holistic face processing.

  6. Human Rights and the Global Fund to Fight AIDS, Tuberculosis and Malaria

    PubMed Central

    Jürgens, Ralf; Lim, Hyeyoung; Timberlake, Susan; Smith, Matthew

    2017-01-01

    Abstract The Global Fund to Fight AIDS, Tuberculosis and Malaria was created to greatly expand access to basic services to address the three diseases in its name. From its beginnings, its governance embodied some human rights principles: civil society is represented on its board, and the country coordination mechanisms that oversee funding requests to the Global Fund include representatives of people affected by the diseases. The Global Fund’s core strategies recognize that the health services it supports would not be effective or cost-effective without efforts to reduce human rights-related barriers to access and utilization of health services, particularly those faced by socially marginalized and criminalized persons. Basic human rights elements were written into Global Fund grant agreements, and various technical support measures encouraged the inclusion in funding requests of programs to reduce human rights-related barriers. A five-year initiative to provide intensive technical and financial support for the scaling up of programs to reduce these barriers in 20 countries is ongoing. PMID:29302175

  7. Inversion and contrast polarity reversal affect both encoding and recognition processes of unfamiliar faces: a repetition study using ERPs.

    PubMed

    Itier, Roxane J; Taylor, Margot J

    2002-02-01

    Using ERPs in a face recognition task, we investigated whether inversion and contrast reversal, which seem to disrupt different aspects of face configuration, differentially affected encoding and memory for faces. Upright, inverted, and negative (contrast-reversed) unknown faces were either immediately repeated (0-lag) or repeated after 1 intervening face (1-lag). The encoding condition (new) consisted of the first presentation of items correctly recognized in the two repeated conditions. 0-lag faces were recognized better and faster than 1-lag faces. Inverted and negative pictures elicited longer reaction times, lower hit rates, and higher false alarm rates than upright faces. ERP analyses revealed that negative and inverted faces affected both early (encoding) and late (recognition) stages of face processing. Early components (N170, VPP) were delayed and enhanced by both inversion and contrast reversal, which also affected P1 and P2 components. Amplitudes were higher for inverted faces at frontal and parietal sites from 350 to 600 ms. Priming effects were seen at encoding stages, revealed by shorter latencies and smaller amplitudes of N170 for repeated stimuli, which did not differ depending on face type. Repeated faces yielded more positive amplitudes than new faces from 250 to 450 ms frontally and from 400 to 600 ms parietally. However, ERP differences revealed that the magnitude of this repetition effect was smaller for negative and inverted than upright faces in the 0-lag but not the 1-lag condition. Thus, face encoding and recognition processes were affected differently by inversion and contrast reversal.

  8. Offenders become the victim in virtual reality: impact of changing perspective in domestic violence.

    PubMed

    Seinfeld, S; Arroyo-Palacios, J; Iruretagoyena, G; Hortensius, R; Zapata, L E; Borland, D; de Gelder, B; Slater, M; Sanchez-Vives, M V

    2018-02-09

    The role of empathy and perspective-taking in preventing aggressive behaviors has been highlighted in several theoretical models. In this study, we used immersive virtual reality to induce a full body ownership illusion that allows offenders to be in the body of a victim of domestic abuse. A group of male domestic violence offenders and a control group without a history of violence experienced a virtual scene of abuse in first-person perspective. During the virtual encounter, the participants' real bodies were replaced with a life-sized virtual female body that moved synchronously with their own real movements. Participants' emotion recognition skills were assessed before and after the virtual experience. Our results revealed that offenders have a significantly lower ability to recognize fear in female faces compared to controls, with a bias towards classifying fearful faces as happy. After being embodied in a female victim, offenders improved their ability to recognize fearful female faces and reduced their bias towards recognizing fearful faces as happy. For the first time, we demonstrate that changing the perspective of an aggressive population through immersive virtual reality can modify socio-perceptual processes such as emotion recognition, thought to underlie this specific form of aggressive behaviors.

  9. Non-contact multi-radar smart probing of body orientation based on micro-Doppler signatures.

    PubMed

    Li, Yiran; Pal, Ranadip; Li, Changzhi

    2014-01-01

    Micro-Doppler signatures carry useful information about body movements and have been widely applied to different applications such as human activity recognition and gait analysis. In this paper, micro-Doppler signatures are used to identify body orientation. Four AC-coupled continuous-wave (CW) smart radar sensors were used to form a multiple-radar network to carry out the experiments in this paper. 162 tests were performed in total. The experimental results showed 100% accuracy in recognizing eight body orientations, i.e., facing north, northeast, east, southeast, south, southwest, west, and northwest.

  10. Concept and design philosophy of a person-accompanying robot

    NASA Astrophysics Data System (ADS)

    Mizoguchi, Hiroshi; Shigehara, Takaomi; Goto, Yoshiyasu; Hidai, Ken-ichi; Mishima, Taketoshi

    1999-01-01

    This paper proposes a person-accompanying robot as a novel human-collaborative robot: a legged mobile robot that can follow a person using its vision. In the coming aging society, human collaboration and human support are required as novel applications of robots. Such human-collaborative robots share the same space with humans, but conventional robots are isolated from humans and lack the capability to observe them. To collaborate with and support humans properly, a human-collaborative robot must be able to observe and recognize humans; study of this human-observing function is crucial to realizing novel robots such as service and pet robots. The authors are currently implementing a prototype of the proposed accompanying robot. As a basis for the prototype's human-observing function, we have realized face tracking utilizing skin-color extraction and correlation-based tracking. We have also developed a method for the robot to pick up human voices clearly and remotely by utilizing microphone arrays. Results of these preliminary studies suggest the feasibility of the proposed robot.

  11. Video face recognition against a watch list

    NASA Astrophysics Data System (ADS)

    Abbas, Jehanzeb; Dagli, Charlie K.; Huang, Thomas S.

    2007-10-01

    Due to the recent large increase in video surveillance data, collected in an effort to maintain high security at public places, we need more robust systems to analyze this data and make tasks like face recognition a realistic possibility in challenging environments. In this paper we explore a watch-list scenario in which we use an appearance-based model to classify query faces from low-resolution videos as either watch-list or non-watch-list faces, where the watch-list comprises the people we are interested in recognizing. We then use our simple yet powerful face recognition system to recognize the faces classified as watch-list faces. Our system uses simple feature machine algorithms from our previous work to match video faces against still images. To test our approach, we match video faces against a large database of still images obtained in previous work in the field from Yahoo News over a period of time. We do this matching in an efficient manner to obtain a faster, nearly real-time system. This system can be incorporated into a larger surveillance system equipped with advanced algorithms for anomalous event detection and activity recognition. This is a step towards more secure and robust surveillance systems and efficient video data analysis.
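    A watch-list scenario is an open-set problem: most query faces belong to no one on the list, so the matcher must be able to reject as well as identify. One standard way to express this, sketched below with illustrative names and a made-up threshold (the paper's actual feature-machine algorithm is not reproduced here), is to accept the best cosine match only if its similarity clears an acceptance threshold:

```python
import numpy as np

def watchlist_match(query_feat, watchlist_feats, watchlist_ids, threshold=0.8):
    """Open-set matching: return the watch-list identity whose feature
    vector is most similar (cosine similarity) to the query, or None
    when even the best similarity falls below the acceptance threshold,
    i.e. the query is treated as a non-watch-list face."""
    q = np.asarray(query_feat, dtype=float)
    q = q / np.linalg.norm(q)
    sims = [q @ (np.asarray(f, float) / np.linalg.norm(f))
            for f in watchlist_feats]
    best = int(np.argmax(sims))
    return watchlist_ids[best] if sims[best] >= threshold else None
```

    The threshold trades off false accepts of bystanders against missed detections of watch-list members, and in practice is set from a validation set rather than fixed a priori.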

  12. The Development of Facial Emotion Recognition: The Role of Configural Information

    ERIC Educational Resources Information Center

    Durand, Karine; Gallay, Mathieu; Seigneuric, Alix; Robichon, Fabrice; Baudouin, Jean-Yves

    2007-01-01

    The development of children's ability to recognize facial emotions and the role of configural information in this development were investigated. In the study, 100 5-, 7-, 9-, and 11-year-olds and 26 adults needed to recognize the emotion displayed by upright and upside-down faces. The same participants needed to recognize the emotion displayed by…

  13. Is That Me or My Twin? Lack of Self-Face Recognition Advantage in Identical Twins

    PubMed Central

    Martini, Matteo; Bufalari, Ilaria; Stazi, Maria Antonietta; Aglioti, Salvatore Maria

    2015-01-01

    Despite the increasing interest in twin studies and the stunning amount of research on face recognition, the ability of adult identical twins to discriminate their own faces from those of their co-twins has been scarcely investigated. One’s own face is the most distinctive feature of the bodily self, and people typically show a clear advantage in recognizing their own face even more than other very familiar identities. Given the very high level of resemblance of their faces, monozygotic twins represent a unique model for exploring self-face processing. Herein we examined the ability of monozygotic twins to distinguish their own face from the face of their co-twin and of a highly familiar individual. Results show that twins equally recognize their own face and their twin’s face. This lack of self-face advantage was negatively predicted by how much they felt physically similar to their co-twin and by their anxious or avoidant attachment style. We speculate that in monozygotic twins, the visual representation of the self-face overlaps with that of the co-twin. Thus, to distinguish the self from the co-twin, monozygotic twins have to rely much more than control participants on the multisensory integration processes upon which the sense of bodily self is based. Moreover, in keeping with the notion that attachment style influences perception of self and significant others, we propose that the observed self/co-twin confusion may depend upon insecure attachment. PMID:25853249

  14. Neural Correlates of Perceiving Emotional Faces and Bodies in Developmental Prosopagnosia: An Event-Related fMRI-Study

    PubMed Central

    Van den Stock, Jan; van de Riet, Wim A. C.; Righart, Ruthger; de Gelder, Beatrice

    2008-01-01

    Many people experience transient difficulties in recognizing faces, but only a small number of them cannot recognize their family members when meeting them unexpectedly. Such face blindness is associated with serious problems in everyday life. A better understanding of the neuro-functional basis of impaired face recognition may be achieved by a careful comparison with an equally unique object category and by adding a more realistic setting involving neutral faces as well as facial expressions. We used event-related functional magnetic resonance imaging (fMRI) to investigate the neuro-functional basis of perceiving faces and bodies in three developmental prosopagnosics (DP) and matched healthy controls. Our approach involved materials consisting of neutral faces and bodies as well as faces and bodies expressing fear or happiness. The first main result is that the presence of emotional information has a different effect in the patient vs. the control group in the fusiform face area (FFA). Neutral faces trigger lower activation in the DP group, compared to the control group, while activation for facial expressions is the same in both groups. The second main result is that compared to controls, DPs have increased activation for bodies in the inferior occipital gyrus (IOG) and for neutral faces in the extrastriate body area (EBA), indicating that body- and face-sensitive processes are less categorically segregated in DP. Taken together our study shows the importance of using naturalistic emotional stimuli for a better understanding of developmental face deficits. PMID:18797499

  15. The “parts and wholes” of face recognition: a review of the literature

    PubMed Central

    Tanaka, James W.; Simonyi, Diana

    2016-01-01

    It has been claimed that faces are recognized as a “whole” rather than by the recognition of individual parts. In a paper published in the Quarterly Journal of Experimental Psychology in 1993, Martha Farah and I attempted to operationalize the holistic claim using the part/whole task. In this task, participants studied a face and then their memory for a face part was tested in isolation and in the whole face. Consistent with the holistic view, recognition of the part was superior when tested in the whole-face condition compared to when it was tested in isolation. The “whole face” or holistic advantage was not found for faces that were inverted or scrambled, nor for non-face objects, suggesting that holistic encoding was specific to normal, intact faces. In this paper, we reflect on the part/whole paradigm and how it has contributed to our understanding of what it means to recognize a face as a “whole” stimulus. We describe the value of the part/whole task for developing theories of holistic and non-holistic recognition of faces and objects. We discuss the research that has probed the neural substrates of holistic processing in healthy adults and people with prosopagnosia and autism. Finally, we examine how experience shapes holistic face recognition in children and recognition of own- and other-race faces in adults. The goal of this article is to summarize the research on the part/whole task and speculate on how it has informed our understanding of holistic face processing. PMID:26886495

  16. Recognition of Own-Race and Other-Race Faces by Three-Month-Old Infants

    ERIC Educational Resources Information Center

    Sangrigoli, Sandy; De Schonen, Scania

    2004-01-01

    Background: People are better at recognizing faces of their own race than faces of another race. Such race specificity may be due to differential expertise in the two races. Method: In order to find out whether this other-race effect develops as early as face-recognition skills or whether it is a long-term effect of acquired expertise, we tested…

  17. Visual Search Efficiency is Greater for Human Faces Compared to Animal Faces

    PubMed Central

    Simpson, Elizabeth A.; Mertins, Haley L.; Yee, Krysten; Fullerton, Alison; Jakobsen, Krisztina V.

    2015-01-01

    The Animate Monitoring Hypothesis proposes that humans and animals were the most important categories of visual stimuli for ancestral humans to monitor, as they presented important challenges and opportunities for survival and reproduction; however, it remains unknown whether animal faces are located as efficiently as human faces. We tested this hypothesis by examining whether human, primate, and mammal faces elicit similarly efficient searches, or whether human faces are privileged. In the first three experiments, participants located a target (human, primate, or mammal face) among distractors (non-face objects). We found that human faces were located faster and more accurately than primate faces, even when controlling for search category specificity. A final experiment revealed that, even when task-irrelevant, human faces slowed searches for non-faces, suggesting some bottom-up processing may be responsible for the human face search efficiency advantage. PMID:24962122

  18. The "Eye Avoidance" Hypothesis of Autism Face Processing

    ERIC Educational Resources Information Center

    Tanaka, James W.; Sung, Andrew

    2016-01-01

    Although a growing body of research indicates that children with autism spectrum disorder (ASD) exhibit selective deficits in their ability to recognize facial identities and expressions, the source of their face impairment is, as yet, undetermined. In this paper, we consider three possible accounts of the autism face deficit: (1) the holistic…

  19. Brief Report: Face-Specific Recognition Deficits in Young Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Bradshaw, Jessica; Shic, Frederick; Chawarska, Katarzyna

    2011-01-01

    This study used eyetracking to investigate the ability of young children with autism spectrum disorders (ASD) to recognize social (faces) and nonsocial (simple objects and complex block patterns) stimuli using the visual paired comparison (VPC) paradigm. Typically developing (TD) children showed evidence for recognition of faces and simple…

  20. Neural Activation to Emotional Faces in Adolescents with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Weng, Shih-Jen; Carrasco, Melisa; Swartz, Johnna R.; Wiggins, Jillian Lee; Kurapati, Nikhil; Liberzon, Israel; Risi, Susan; Lord, Catherine; Monk, Christopher S.

    2011-01-01

    Background: Autism spectrum disorders (ASD) involve a core deficit in social functioning and impairments in the ability to recognize face emotions. In an emotional faces task designed to constrain group differences in attention, the present study used functional MRI to characterize activation in the amygdala, ventral prefrontal cortex (vPFC), and…

  1. Development of the other-race effect during infancy: evidence toward universality?

    PubMed

    Kelly, David J; Liu, Shaoying; Lee, Kang; Quinn, Paul C; Pascalis, Olivier; Slater, Alan M; Ge, Liezhong

    2009-09-01

    The other-race effect in face processing develops within the first year of life in Caucasian infants. It is currently unknown whether the developmental trajectory observed in Caucasian infants can be extended to other cultures. This is an important issue to investigate because recent findings from cross-cultural psychology have suggested that individuals from Eastern and Western backgrounds tend to perceive the world in fundamentally different ways. To this end, the current study investigated 3-, 6-, and 9-month-old Chinese infants' ability to discriminate faces within their own racial group and within two other racial groups (African and Caucasian). The 3-month-olds demonstrated recognition in all conditions, whereas the 6-month-olds recognized Chinese faces and displayed marginal recognition for Caucasian faces but did not recognize African faces. The 9-month-olds' recognition was limited to Chinese faces. This pattern of development is consistent with the perceptual narrowing hypothesis that our perceptual systems are shaped by experience to be optimally sensitive to stimuli most commonly encountered in one's unique cultural environment.

  2. Colloquium paper: uniquely human evolution of sialic acid genetics and biology.

    PubMed

    Varki, Ajit

    2010-05-11

    Darwinian evolution of humans from our common ancestors with nonhuman primates involved many gene-environment interactions at the population level, and the resulting human-specific genetic changes must contribute to the "Human Condition." Recent data indicate that the biology of sialic acids (which directly involves less than 60 genes) shows more than 10 uniquely human genetic changes in comparison with our closest evolutionary relatives. Known outcomes are tissue-specific changes in abundant cell-surface glycans, changes in specificity and/or expression of multiple proteins that recognize these glycans, and novel pathogen regimes. Specific events include Alu-mediated inactivation of the CMAH gene, resulting in loss of synthesis of the Sia N-glycolylneuraminic acid (Neu5Gc) and increase in expression of the precursor N-acetylneuraminic acid (Neu5Ac); increased expression of alpha2-6-linked Sias (likely because of changed expression of ST6GALI); and multiple changes in SIGLEC genes encoding Sia-recognizing Ig-like lectins (Siglecs). The last includes binding specificity changes (in Siglecs -5, -7, -9, -11, and -12); expression pattern changes (in Siglecs -1, -5, -6, and -11); gene conversion (SIGLEC11); and deletion or pseudogenization (SIGLEC13, SIGLEC14, and SIGLEC16). A nongenetic outcome of the CMAH mutation is human metabolic incorporation of foreign dietary Neu5Gc, in the face of circulating anti-Neu5Gc antibodies, generating a novel "xeno-auto-antigen" situation. Taken together, these data suggest that both the genes associated with Sia biology and the related impacts of the environment comprise a relative "hot spot" of genetic and physiological changes in human evolution, with implications for uniquely human features both in health and disease.

  3. On the other side of the fence: effects of social categorization and spatial grouping on memory and attention for own-race and other-race faces.

    PubMed

    Kloth, Nadine; Shields, Susannah E; Rhodes, Gillian

    2014-01-01

    The term "own-race bias" refers to the phenomenon that humans are typically better at recognizing faces from their own than a different race. The perceptual expertise account assumes that our face perception system has adapted to the faces we are typically exposed to, equipping it poorly for the processing of other-race faces. Sociocognitive theories assume that other-race faces are initially categorized as out-group, decreasing motivation to individuate them. Supporting sociocognitive accounts, a recent study has reported improved recognition for other-race faces when these were categorized as belonging to the participants' in-group on a second social dimension, i.e., their university affiliation. Faces were studied in groups, containing both own-race and other-race faces, half of each labeled as in-group and out-group, respectively. When study faces were spatially grouped by race, participants showed a clear own-race bias. When faces were grouped by university affiliation, recognition of other-race faces from the social in-group was indistinguishable from own-race face recognition. The present study aimed at extending this singular finding to other races of faces and participants. Forty Asian and 40 European Australian participants studied Asian and European faces for a recognition test. Faces were presented in groups, containing an equal number of own-university and other-university Asian and European faces. Between participants, faces were grouped either according to race or university affiliation. Eye tracking was used to study the distribution of spatial attention to individual faces in the display. The race of the study faces significantly affected participants' memory, with better recognition of own-race than other-race faces. However, memory was unaffected by the university affiliation of the faces and by the criterion for their spatial grouping on the display. Eye tracking revealed strong looking biases towards both own-race and own-university faces. Results are discussed in light of the theoretical accounts of the own-race bias.

  4. On the Other Side of the Fence: Effects of Social Categorization and Spatial Grouping on Memory and Attention for Own-Race and Other-Race Faces

    PubMed Central

    Kloth, Nadine; Shields, Susannah E.; Rhodes, Gillian

    2014-01-01

    The term “own-race bias” refers to the phenomenon that humans are typically better at recognizing faces from their own than a different race. The perceptual expertise account assumes that our face perception system has adapted to the faces we are typically exposed to, equipping it poorly for the processing of other-race faces. Sociocognitive theories assume that other-race faces are initially categorized as out-group, decreasing motivation to individuate them. Supporting sociocognitive accounts, a recent study has reported improved recognition for other-race faces when these were categorized as belonging to the participants' in-group on a second social dimension, i.e., their university affiliation. Faces were studied in groups, containing both own-race and other-race faces, half of each labeled as in-group and out-group, respectively. When study faces were spatially grouped by race, participants showed a clear own-race bias. When faces were grouped by university affiliation, recognition of other-race faces from the social in-group was indistinguishable from own-race face recognition. The present study aimed at extending this singular finding to other races of faces and participants. Forty Asian and 40 European Australian participants studied Asian and European faces for a recognition test. Faces were presented in groups, containing an equal number of own-university and other-university Asian and European faces. Between participants, faces were grouped either according to race or university affiliation. Eye tracking was used to study the distribution of spatial attention to individual faces in the display. The race of the study faces significantly affected participants' memory, with better recognition of own-race than other-race faces. However, memory was unaffected by the university affiliation of the faces and by the criterion for their spatial grouping on the display. Eye tracking revealed strong looking biases towards both own-race and own-university faces. Results are discussed in light of the theoretical accounts of the own-race bias. PMID:25180902

  5. Pupillary Responses to Robotic and Human Emotions: The Uncanny Valley and Media Equation Confirmed

    PubMed Central

    Reuten, Anne; van Dam, Maureen; Naber, Marnix

    2018-01-01

    Physiological responses during human–robot interaction are useful alternatives to subjective measures of uncanny feelings for nearly humanlike robots (uncanny valley) and comparable emotional responses between humans and robots (media equation). However, no studies have employed the easily accessible measure of pupillometry to confirm the uncanny valley and media equation hypotheses, evidence in favor of these hypotheses in interaction with emotional robots is scarce, and previous studies have not controlled for low-level image statistics across robot appearances. We therefore recorded pupil size of 40 participants who viewed and rated pictures of robotic and human faces that expressed a variety of basic emotions. The robotic faces varied along the dimension of human likeness from cartoonish to humanlike. We strictly controlled for confounding factors by removing backgrounds, hair, and color, and by equalizing low-level image statistics. After the presentation phase, participants indicated to what extent the robots appeared uncanny and humanlike, and whether they could imagine social interaction with the robots in real-life situations. The results show that robots rated as nearly humanlike scored higher on uncanniness, scored lower on imagined social interaction, evoked weaker pupil dilations, and their emotional expressions were more difficult to recognize. Pupils dilated most strongly to negative expressions, and the pattern of pupil responses across emotions was highly similar between robot and human stimuli. These results highlight the usefulness of pupillometry in emotion studies and robot design by confirming the uncanny valley and media equation hypotheses. PMID:29875722

  6. Assessing the Impact of Human Activities on British Columbia’s Estuaries

    PubMed Central

    Robb, Carolyn K.

    2014-01-01

    The world’s marine and coastal ecosystems are under threat and single-sector management efforts have failed to address those threats. Scientific consensus suggests that management should evolve to focus on ecosystems and their human, ecological, and physical components. Estuaries are recognized globally as one of the world’s most productive and most threatened ecosystems and many estuarine areas in British Columbia (BC) have been lost or degraded. To help prioritize activities and areas for regional management efforts, spatial information on human activities that adversely affect BC’s estuaries was compiled. Using statistical analyses, estuaries were assigned to groups facing related threats that could benefit from similar management. The results show that estuaries in the most populated marine ecosections have the highest biological importance but also the highest impacts and the lowest levels of protection. This research is timely, as it will inform ongoing marine planning, land acquisition, and stewardship efforts in BC. PMID:24937486

  7. Emotion-attention interactions in recognition memory for distractor faces.

    PubMed

    Srinivasan, Narayanan; Gupta, Rashmi

    2010-04-01

    Effective filtering of distractor information has been shown to be dependent on perceptual load. Given the salience of emotional information and the presence of emotion-attention interactions, we wanted to explore the recognition memory for emotional distractors especially as a function of focused attention and distributed attention by manipulating load and the spatial spread of attention. We performed two experiments to study emotion-attention interactions by measuring recognition memory performance for distractor neutral and emotional faces. Participants performed a color discrimination task (low-load) or letter identification task (high-load) with a letter string display in Experiment 1 and a high-load letter identification task with letters presented in a circular array in Experiment 2. The stimuli were presented against a distractor face background. The recognition memory results show that happy faces were recognized better than sad faces under conditions of less focused or distributed attention. When attention is more spatially focused, sad faces were recognized better than happy faces. The study provides evidence for emotion-attention interactions in which specific emotional information like sad or happy is associated with focused or distributed attention respectively. Distractor processing with emotional information also has implications for theories of attention. Copyright 2010 APA, all rights reserved.

  8. Characterizing the spatio-temporal dynamics of the neural events occurring prior to and up to overt recognition of famous faces.

    PubMed

    Jemel, Boutheina; Schuller, Anne-Marie; Goffaux, Valérie

    2010-10-01

    Although it is generally acknowledged that familiar face recognition is fast, mandatory, and proceeds outside conscious control, it is still unclear whether processes leading to familiar face recognition occur in a linear (i.e., gradual) or a nonlinear (i.e., all-or-none) manner. To test these two alternative accounts, we recorded scalp ERPs while participants indicated whether they recognize as familiar the faces of famous and unfamiliar persons gradually revealed in a descending sequence of frames, from the noisier to the least noisy. This presentation procedure allowed us to characterize the changes in scalp ERP responses occurring prior to and up to overt recognition. Our main finding is that gradual and all-or-none processes are possibly involved during overt recognition of familiar faces. Although the N170 and the N250 face-sensitive responses displayed an abrupt activity change at the moment of overt recognition of famous faces, later ERPs encompassing the N400 and late positive component exhibited an incremental increase in amplitude as the point of recognition approached. In addition, famous faces that were not overtly recognized at one trial before recognition elicited larger ERP potentials than unfamiliar faces, probably reflecting a covert recognition process. Overall, these findings present evidence that recognition of familiar faces implicates spatio-temporally complex neural processes exhibiting differential pattern activity changes as a function of recognition state.

  9. Caricature generalization benefits for faces learned with enhanced idiosyncratic shape or texture.

    PubMed

    Itz, Marlena L; Schweinberger, Stefan R; Kaufmann, Jürgen M

    2017-02-01

    Recent findings show benefits for learning and subsequent recognition of faces caricatured in shape or texture, but there is little evidence on whether this caricature learning advantage generalizes to recognition of veridical counterparts at test. Moreover, it has been reported that there is a relatively higher contribution of texture information, at the expense of shape information, for familiar compared to unfamiliar face recognition. The aim of this study was to examine whether veridical faces are recognized better when they were learned as caricatures compared to when they were learned as veridicals-what we call a caricature generalization benefit. Photorealistic facial stimuli derived from a 3-D camera system were caricatured selectively in either shape or texture by 50 %. Faces were learned across different images either as veridicals, shape caricatures, or texture caricatures. At test, all learned and novel faces were presented as previously unseen frontal veridicals, and participants performed an old-new task. We assessed accuracies, reaction times, and face-sensitive event-related potentials (ERPs). Faces learned as caricatures were recognized more accurately than faces learned as veridicals. At learning, N250 and LPC were largest for shape caricatures, suggesting encoding advantages of distinctive facial shape. At test, LPC was largest for faces that had been learned as texture caricatures, indicating the importance of texture for familiar face recognition. Overall, our findings demonstrate that caricature learning advantages can generalize to and, importantly, improve recognition of veridical versions of faces.

  10. Human perceptual decision making: disentangling task onset and stimulus onset.

    PubMed

    Cardoso-Leite, Pedro; Waszak, Florian; Lepsien, Jöran

    2014-07-01

    The left dorsolateral prefrontal cortex (ldlPFC) has been highlighted as a key actor in human perceptual decision-making (PDM): It is theorized to support decision-formation independently of stimulus type or motor response. PDM studies however generally confound stimulus onset and task onset: when the to-be-recognized stimulus is presented, subjects know that a stimulus is shown and can set up processing resources-even when they do not know which stimulus is shown. We hypothesized that the ldlPFC might be involved in task preparation rather than decision-formation. To test this, we asked participants to report whether sequences of noisy images contained a face or a house within an experimental design that decorrelates stimulus and task onset. Decision-related processes should yield a sustained response during the task, whereas preparation-related areas should yield transient responses at its beginning. The results show that the brain activation pattern at task onset is strikingly similar to that observed in previous PDM studies. In particular, they contradict the idea that ldlPFC forms an abstract decision and suggest instead that its activation reflects preparation for the upcoming task. We further investigated the role of the fusiform face areas and parahippocampal place areas which are thought to be face and house detectors, respectively, that feed their signals to higher level decision areas. The response patterns within these areas suggest that this interpretation is unlikely and that the decisions about the presence of a face or a house in a noisy image might instead already be computed within these areas without requiring higher-order areas. Copyright © 2013 Wiley Periodicals, Inc.

  11. Personal relevance and the human right hemisphere.

    PubMed

    Van Lancker, D

    1991-09-01

    Brain damage can selectively disrupt or distort information and ability across the range of human behaviors. One domain that has not been considered as an independent attribute consists of the acquisition and maintenance of personally relevant entities such as "familiar" faces, persons, voices, names, linguistic expressions, handwriting, topography, and so on. In experimental studies of normal mentation, personal relevance is revealed in studies of emotion, arousal, affect, preference and familiarity judgments, and memory. Following focal brain damage, deficits and distortions in the experience of personal relevance, as well as in recognizing formerly personally relevant phenomena, are well known to occur. A review and interpretation of these data lead to a proposal that the right hemisphere has a special role in establishing, maintaining, and processing personally relevant aspects of the individual's world.

  12. Conceptual Barriers to Progress Within Evolutionary Biology

    PubMed Central

    Laland, Kevin N.; Odling-Smee, John; Feldman, Marcus W.; Kendal, Jeremy

    2011-01-01

    In spite of its success, Neo-Darwinism is faced with major conceptual barriers to further progress, deriving directly from its metaphysical foundations. Most importantly, neo-Darwinism fails to recognize a fundamental cause of evolutionary change, “niche construction”. This failure restricts the generality of evolutionary theory, and introduces inaccuracies. It also hinders the integration of evolutionary biology with neighbouring disciplines, including ecosystem ecology, developmental biology, and the human sciences. Ecology is forced to become a divided discipline, developmental biology is stubbornly difficult to reconcile with evolutionary theory, and the majority of biologists and social scientists are still unhappy with evolutionary accounts of human behaviour. The incorporation of niche construction as both a cause and a product of evolution removes these disciplinary boundaries while greatly generalizing the explanatory power of evolutionary theory. PMID:21572912

  13. Conceptual Barriers to Progress Within Evolutionary Biology.

    PubMed

    Laland, Kevin N; Odling-Smee, John; Feldman, Marcus W; Kendal, Jeremy

    2009-08-01

    In spite of its success, Neo-Darwinism is faced with major conceptual barriers to further progress, deriving directly from its metaphysical foundations. Most importantly, neo-Darwinism fails to recognize a fundamental cause of evolutionary change, "niche construction". This failure restricts the generality of evolutionary theory, and introduces inaccuracies. It also hinders the integration of evolutionary biology with neighbouring disciplines, including ecosystem ecology, developmental biology, and the human sciences. Ecology is forced to become a divided discipline, developmental biology is stubbornly difficult to reconcile with evolutionary theory, and the majority of biologists and social scientists are still unhappy with evolutionary accounts of human behaviour. The incorporation of niche construction as both a cause and a product of evolution removes these disciplinary boundaries while greatly generalizing the explanatory power of evolutionary theory.

  14. Oral dirofilariasis.

    PubMed

    Janardhanan, Mahija; Rakesh, S; Savithri, Vindhya

    2014-01-01

    Filariasis affecting animals can rarely cause infections in human beings through the accidental bite of potential vectors. The resulting infection in man, known as zoonotic filariasis, occurs worldwide. Human dirofilariasis, the most common zoonotic filariasis, is caused by the filarial worm belonging to the genus Dirofilaria. Dirofilarial worms, which are recognized as pathogenic in man, can cause nodular lesions in the lung, subcutaneous tissue, peritoneal cavity or eyes. Oral dirofilariasis is extremely rare and only a few cases have been documented. We report an interesting case of dirofilariasis due to Dirofilaria repens involving the buccal mucosa in a patient who presented with a facial swelling. The clinical features, diagnostic issues and treatment aspects are discussed. This paper stresses the importance of considering dirofilariasis as a differential diagnosis for subcutaneous swelling of the face, especially in areas where it is endemic.

  15. Human single-neuron responses at the threshold of conscious recognition

    PubMed Central

    Quiroga, R. Quian; Mukamel, R.; Isham, E. A.; Malach, R.; Fried, I.

    2008-01-01

    We studied the responses of single neurons in the human medial temporal lobe while subjects viewed familiar faces, animals, and landmarks. By progressively shortening the duration of stimulus presentation, coupled with backward masking, we show two striking properties of these neurons. (i) Their responses are not statistically different for the 33-ms, 66-ms, and 132-ms stimulus durations, and only for the 264-ms presentations there is a significantly higher firing. (ii) These responses follow conscious perception, as indicated by the subjects' recognition report. Remarkably, when recognized, a single snapshot as brief as 33 ms was sufficient to trigger strong single-unit responses far outlasting stimulus presentation. These results suggest that neurons in the medial temporal lobe can reflect conscious recognition by “all-or-none” responses. PMID:18299568

  16. Looking but Not Seeing: Atypical Visual Scanning and Recognition of Faces in 2 and 4-Year-Old Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Chawarska, Katarzyna; Shic, Frederick

    2009-01-01

    This study used eye-tracking to examine visual scanning and recognition of faces by 2- and 4-year-old children with autism spectrum disorder (ASD) (N = 44) and typically developing (TD) controls (N = 30). TD toddlers at both age levels scanned and recognized faces similarly. Toddlers with ASD looked increasingly away from faces with age,…

  17. Meeting of the Minds: Recognizing Styles of Conflict Management Helps Students Develop "People Skills."

    ERIC Educational Resources Information Center

    McFarland, William P.

    1992-01-01

    When faced with conflict, people respond in one of three styles: dominating, appeasing, or cooperating. Teaching students to recognize styles and choose appropriate responses can help them deal with conflict in the workplace. (SK)

  18. Spatial Frequency and Face Processing in Children with Autism and Asperger Syndrome

    ERIC Educational Resources Information Center

    Deruelle, Christine; Rondan, Cecilie; Gepner, Bruno; Tardif, Carole

    2004-01-01

    Two experiments were designed to investigate possible abnormal face processing strategies in children with autistic spectrum disorders. A group of 11 children with autism was compared to two groups of normally developing children matched on verbal mental age and on chronological age. In the first experiment, participants had to recognize faces on…

  19. The Relation of Facial Affect Recognition and Empathy to Delinquency in Youth Offenders

    ERIC Educational Resources Information Center

    Carr, Mary B.; Lutjemeier, John A.

    2005-01-01

    Associations among facial affect recognition, empathy, and self-reported delinquency were studied in a sample of 29 male youth offenders at a probation placement facility. Youth offenders were asked to recognize facial expressions of emotions from adult faces, child faces, and cartoon faces. Youth offenders also responded to a series of statements…

  20. Designing Flight-Deck Procedures

    NASA Technical Reports Server (NTRS)

    Degani, Asaf; Wiener, L.; Shafto, Mike (Technical Monitor)

    1995-01-01

    A complex human-machine system consists of more than merely one or more human operators and a collection of hardware components. In order to operate a complex system successfully, the human-machine system must be supported by an organizational infrastructure of operating concepts, rules, guidelines, and documents. The coherency of such operating concepts, in terms of consistency and logic, is vitally important for the efficiency and safety of any complex system. In high-risk endeavors such as aircraft operations, space flight, nuclear power production, manufacturing process control, and military operations, it is essential that such support be flawless, as the price of operational error can be high. When operating rules are not adhered to, or the rules are inadequate for the task at hand, not only will the system's goals be thwarted, but there may also be tragic human and material consequences. To ensure safe and predictable operations, support to the operators, in this case flight crews, often comes in the form of standard operating procedures. These provide the crew with step-by-step guidance for carrying out their operations. Standard procedures do indeed promote uniformity, but they do so at the risk of reducing the role of human operators to a lower level. Management, however, must recognize the danger of over-procedurization, which fails to exploit one of the most valuable assets in the system, the intelligent operator who is "on the scene." The alert system designer and operations manager recognize that there cannot be a procedure for everything, and the time will come in which the operators of a complex system will face a situation for which there is no written procedure. Procedures, whether executed by humans or machines, have their place, but so does human cognition.

  1. My Brain Reads Pain in Your Face, Before Knowing Your Gender.

    PubMed

    Czekala, Claire; Mauguière, François; Mazza, Stéphanie; Jackson, Philip L; Frot, Maud

    2015-12-01

    Humans are expert at recognizing facial features whether they are variable (emotions) or unchangeable (gender). Because of its huge communicative value, pain might be detected faster in faces than unchangeable features. Based on this assumption, we aimed to find a presentation time that enables subliminal discrimination of pain facial expression without permitting gender discrimination. For 80 individuals, we compared the time needed (50, 100, 150, or 200 milliseconds) to discriminate masked static pain faces among anger and neutral faces with the time needed to discriminate male from female faces. Whether these discriminations were associated with conscious reportability was tested with confidence measures on 40 other individuals. The results showed that, at 100 milliseconds, 75% of participants discriminated pain above chance level, whereas only 20% of participants discriminated the gender. Moreover, this pain discrimination appeared to be subliminal. This priority of pain over gender might exist because, even if pain faces are complex stimuli encoding both the sensory and the affective component of pain, they signal a danger. This supports the evolution theory relating to the necessity of quickly reading aversive emotions to ensure survival but might also be at the basis of altruistic behavior such as help and compassion. This study shows that pain facial expression can be processed subliminally after brief presentation times, which might be helpful for critical emergency situations in clinical settings. Copyright © 2015 American Pain Society. Published by Elsevier Inc. All rights reserved.

  2. Face Perception and Test Reliabilities in Congenital Prosopagnosia in Seven Tests

    PubMed Central

    Esins, Janina; Schultz, Johannes; Stemper, Claudia; Kennerknecht, Ingo

    2016-01-01

    Congenital prosopagnosia, the innate impairment in recognizing faces, is a very heterogeneous disorder with different phenotypical manifestations. To investigate the nature of prosopagnosia in more detail, we tested 16 prosopagnosics and 21 controls with an extended test battery addressing various aspects of face recognition. Our results show that prosopagnosics exhibited significant impairments in several face recognition tasks: impaired holistic processing (tested, among other measures, with the Cambridge Face Memory Test (CFMT)) as well as reduced processing of configural information of faces. This test battery also revealed some new findings. While controls recognized moving faces better than static faces, prosopagnosics did not exhibit this effect. Furthermore, prosopagnosics had significantly impaired gender recognition, which our study shows on a groupwise level for the first time. There was no difference between groups in the automatic extraction of face identity information or in object recognition as tested with the Cambridge Car Memory Test. In addition, a methodological analysis of the tests revealed reduced reliability for holistic face processing tests in prosopagnosics. To our knowledge, this is the first study to show that prosopagnosics have a significantly reduced reliability coefficient (Cronbach’s alpha) in the CFMT compared to controls. We suggest that compensatory strategies employed by the prosopagnosics might be the cause of the vast variety of response patterns revealed by the reduced test reliability. This finding raises the question whether classical face tests measure the same perceptual processes in controls and prosopagnosics. PMID:27482369

  3. Implications of holistic face processing in autism and schizophrenia

    PubMed Central

    Watson, Tamara L.

    2013-01-01

    People with autism and schizophrenia have been shown to have a local bias in sensory processing and face recognition difficulties. A global or holistic processing strategy is known to be important when recognizing faces. Studies investigating face recognition in these populations are reviewed and show that holistic processing is employed despite lower overall performance in the tasks used. This implies that holistic processing is necessary but not sufficient for optimal face recognition and new avenues for research into face recognition based on network models of autism and schizophrenia are proposed. PMID:23847581

  4. Human Facial Expressions as Adaptations: Evolutionary Questions in Facial Expression Research

    PubMed Central

    SCHMIDT, KAREN L.; COHN, JEFFREY F.

    2007-01-01

    The importance of the face in social interaction and social intelligence is widely recognized in anthropology. Yet the adaptive functions of human facial expression remain largely unknown. An evolutionary model of human facial expression as behavioral adaptation can be constructed, given the current knowledge of the phenotypic variation, ecological contexts, and fitness consequences of facial behavior. Studies of facial expression are available, but results are not typically framed in an evolutionary perspective. This review identifies the relevant physical phenomena of facial expression and integrates the study of this behavior with the anthropological study of communication and sociality in general. Anthropological issues with relevance to the evolutionary study of facial expression include: facial expressions as coordinated, stereotyped behavioral phenotypes, the unique contexts and functions of different facial expressions, the relationship of facial expression to speech, the value of facial expressions as signals, and the relationship of facial expression to social intelligence in humans and in nonhuman primates. Human smiling is used as an example of adaptation, and testable hypotheses concerning the human smile, as well as other expressions, are proposed. PMID:11786989

  5. Using hypnosis to disrupt face processing: mirrored-self misidentification delusion and different visual media

    PubMed Central

    Connors, Michael H.; Barnier, Amanda J.; Coltheart, Max; Langdon, Robyn; Cox, Rochelle E.; Rivolta, Davide; Halligan, Peter W.

    2014-01-01

    Mirrored-self misidentification delusion is the belief that one’s reflection in the mirror is not oneself. This experiment used hypnotic suggestion to impair normal face processing in healthy participants and recreate key aspects of the delusion in the laboratory. From a pool of 439 participants, 22 high hypnotisable participants (“highs”) and 20 low hypnotisable participants were selected on the basis of their extreme scores on two separately administered measures of hypnotisability. These participants received a hypnotic induction and a suggestion for either impaired (i) self-face recognition or (ii) impaired recognition of all faces. Participants were tested on their ability to recognize themselves in a mirror and other visual media – including a photograph, live video, and handheld mirror – and their ability to recognize other people, including the experimenter and famous faces. Both suggestions produced impaired self-face recognition and recreated key aspects of the delusion in highs. However, only the suggestion for impaired other-face recognition disrupted recognition of other faces, albeit in a minority of highs. The findings confirm that hypnotic suggestion can disrupt face processing and recreate features of mirrored-self misidentification. The variability seen in participants’ responses also corresponds to the heterogeneity seen in clinical patients. An important direction for future research will be to examine sources of this variability within both clinical patients and the hypnotic model. PMID:24994973

  6. Gender differences in facial emotion recognition in persons with chronic schizophrenia.

    PubMed

    Weiss, Elisabeth M; Kohler, Christian G; Brensinger, Colleen M; Bilker, Warren B; Loughead, James; Delazer, Margarete; Nolan, Karen A

    2007-03-01

    The aim of the present study was to investigate possible sex differences in the recognition of facial expressions of emotion and to investigate the pattern of classification errors in schizophrenic males and females. Such an approach provides an opportunity to inspect the degree to which males and females differ in perceiving and interpreting the different emotions displayed to them and to analyze which emotions are most susceptible to recognition errors. Fifty-six chronically hospitalized schizophrenic patients (38 men and 18 women) completed the Penn Emotion Recognition Test (ER40), a computerized emotion discrimination test presenting 40 color photographs of evoked happy, sad, angry, fearful and neutral expressions balanced for poser gender and ethnicity. We found a significant sex difference in the patterns of error rates in the Penn Emotion Recognition Test. Neutral faces were more commonly mistaken as angry by schizophrenic men, whereas schizophrenic women misinterpreted neutral faces more frequently as sad. Moreover, female faces were better recognized overall, but fear was better recognized in same-gender photographs, whereas anger was better recognized in different-gender photographs. The findings of the present study lend support to the notion that sex differences in aggressive behavior could be related to a cognitive style characterized by hostile attributions to neutral faces in schizophrenic men.

  7. Hemodynamic response of children with attention-deficit and hyperactive disorder (ADHD) to emotional facial expressions.

    PubMed

    Ichikawa, Hiroko; Nakato, Emi; Kanazawa, So; Shimamura, Keiichi; Sakuta, Yuiko; Sakuta, Ryoichi; Yamaguchi, Masami K; Kakigi, Ryusuke

    2014-10-01

    Children with attention-deficit/hyperactivity disorder (ADHD) have difficulty recognizing facial expressions. They identify angry expressions less accurately than typically developing (TD) children, yet little is known about the atypical neural basis of their facial expression recognition. Here, we used near-infrared spectroscopy (NIRS) to examine the distinctive cerebral hemodynamics of ADHD and TD children while they viewed happy and angry expressions. We measured the hemodynamic responses of 13 ADHD boys and 13 TD boys to happy and angry expressions at their bilateral temporal areas, which are sensitive to face processing. The ADHD children showed an increased concentration of oxy-Hb for happy faces but not for angry faces, while TD children showed increased oxy-Hb for both. Moreover, the individual peak latency of the hemodynamic response in the right temporal area showed significantly greater variance in the ADHD group than in the TD group. Such atypical brain activity in ADHD boys may relate to their preserved ability to recognize happy expressions and their difficulty recognizing angry expressions. This is the first demonstration that NIRS can be used to detect atypical hemodynamic responses to facial expressions in children with ADHD. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Crisis Communication during Natural Disasters: Meeting Real and Perceived Needs

    NASA Astrophysics Data System (ADS)

    Jones, L.

    2017-12-01

    When significant natural disasters strike, our modern information-driven society turns to scientists, demanding information about the event. As part of their civic duty scientists respond, recognizing how the scientific information could be used to improve response to the disaster and reduce losses. However, what we often find is that the demand for information is not for improved response but to satisfy psychological, often subconscious, needs. Human beings evolved our larger brains to better survive against larger and stronger predators. Recognizing that a movement of grass and the lack of birdsong mean that a predator is hiding would in turn mean a greater likelihood of having progeny. Our ability to theorize comes from the need to create patterns in the face of danger that will keep us safe. From wondering about someone's exercise habits when we hear they have had a heart attack, to blaming hurricane victims for not heeding evacuation orders even if they had no means to evacuate, we respond to disasters by trying to find a pattern that means that we will not suffer the same fate. Much of the demand for information after a natural disaster is a search for these patterns. Faced with a random distribution, many people still make patterns that can reduce their anxiety. The result is that meanings are ascribed to the information that are not supported by the data and were not part of the communication intended by the scientist. The challenge for science communicators is to recognize this need and present the information in a way that reduces the anxiety arising from a lack of knowledge or uncertainty while making clear which patterns can and cannot be drawn about future risks.

  9. The neural speed of familiar face recognition.

    PubMed

    Barragan-Jason, G; Cauchoix, M; Barbeau, E J

    2015-08-01

    Rapidly recognizing familiar people from their faces appears critical for social interactions (e.g., to differentiate friend from foe). However, the actual speed at which the human brain can distinguish familiar from unknown faces still remains debated. In particular, it is not clear whether familiarity can be extracted from rapid face individualization or whether it requires additional time-consuming processing. We recorded scalp EEG activity in 28 subjects performing a go/no-go, famous/non-famous, unrepeated, face recognition task. Speed constraints were used to encourage subjects to use the earliest familiarity information available. Event related potential (ERP) analyses show that both the N170 and the N250 components were modulated by familiarity. The N170 modulation was related to behaviour: subjects presenting the strongest N170 modulation were also faster but less accurate than those who only showed weak N170 modulation. A complementary Multi-Variate Pattern Analysis (MVPA) confirmed the ERP results and provided further insight into the dynamics of face recognition, as the N170 differential effect appeared to be related to a first transitory phase (a transitory bump of decoding power) starting at around 140 ms, which returned to baseline afterwards. This bump of activity was then followed by an increase of decoding power starting around 200 ms after stimulus onset. Overall, our results suggest that rather than being a single process, familiarity for faces may rely on a cascade of neural processes, including a coarse and fast stage starting at 140 ms and a more refined but slower stage occurring after 200 ms. Copyright © 2015 Elsevier Ltd. All rights reserved.
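
    The time-resolved MVPA this abstract describes (train a decoder at every time point of the EEG epoch and see when familiarity becomes decodable) can be illustrated with a minimal sketch. Everything below is synthetic and hypothetical: the data, the 140 ms onset and the nearest-centroid decoder are stand-ins for the authors' actual recordings and classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 32, 120
times_ms = np.linspace(-100, 500, n_times)        # epoch relative to face onset

# Synthetic epochs: a fixed "familiarity" topography appears after ~140 ms
X = rng.normal(size=(n_trials, n_channels, n_times))
y = rng.integers(0, 2, size=n_trials)             # 0 = non-famous, 1 = famous
pattern = rng.normal(size=n_channels)             # hypothetical scalp pattern
X[y == 1] += 0.8 * pattern[:, None] * (times_ms > 140)[None, :]

# Nearest-centroid decoding at every time point (first half trains, second tests)
half = n_trials // 2
acc = np.empty(n_times)
for t in range(n_times):
    Xtr, ytr = X[:half, :, t], y[:half]
    Xte, yte = X[half:, :, t], y[half:]
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = np.linalg.norm(Xte - c1, axis=1) < np.linalg.norm(Xte - c0, axis=1)
    acc[t] = (pred.astype(int) == yte).mean()

print(f"accuracy before 140 ms: {acc[times_ms < 140].mean():.2f}")
print(f"accuracy after 200 ms:  {acc[times_ms > 200].mean():.2f}")
```

    The decoding curve stays at chance before the injected onset and rises above chance afterwards, which is the logic behind reading onset latencies off an MVPA time course.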

  10. On facial asymmetry and self-perception.

    PubMed

    Lu, Stephen M; Bartlett, Scott P

    2014-06-01

    Self-perception has been an enduring human concern since ancient times and remains a significant component of the preoperative and postoperative consultation. Despite modern technological attempts to reproduce the first-hand experience, there is no perfect substitute for human, stereoscopic, three-dimensional vision in evaluating appearance. Nowadays, however, the primary tools available to a patient for examining his or her own appearance, particularly the face, are photographs and mirrors. Patients are often unaware of how cameras and photographs can distort and degrade image quality, leading to an inaccurate representation of true appearance. Everyone knows that mirrors reverse an image, left and right, and most people recognize their own natural facial asymmetry at some level. However, few realize that emotions are not only expressed unequally by the left and right sides of the face but also perceived unequally by others. The impact and effect of this "facedness" is completely reversed by mirrors, potentially creating a significant discrepancy between what a patient perceives of himself or herself and what the surgeon or other third party sees. This article ties together the diverse threads leading to this problem and suggests several ways of mitigating the issue through technology and patient counseling.

  11. Learning to detect and combine the features of an object

    PubMed Central

    Suchow, Jordan W.; Pelli, Denis G.

    2013-01-01

    To recognize an object, it is widely supposed that we first detect and then combine its features. Familiar objects are recognized effortlessly, but unfamiliar objects—like new faces or foreign-language letters—are hard to distinguish and must be learned through practice. Here, we describe a method that separates detection and combination and reveals how each improves as the observer learns. We dissociate the steps by two independent manipulations: For each step, we do or do not provide a bionic crutch that performs it optimally. Thus, the two steps may be performed solely by the human, solely by the crutches, or cooperatively, when the human takes one step and a crutch takes the other. The crutches reveal a double dissociation between detecting and combining. Relative to the two-step ideal, the human observer’s overall efficiency for unconstrained identification equals the product of the efficiencies with which the human performs the steps separately. The two-step strategy is inefficient: Constraining the ideal to take two steps roughly halves its identification efficiency. In contrast, we find that humans constrained to take two steps perform just as well as when unconstrained, which suggests that they normally take two steps. Measuring threshold contrast (the faintness of a barely identifiable letter) as it improves with practice, we find that detection is inefficient and learned slowly. Combining is learned at a rate that is 4× higher and, after 1,000 trials, 7× more efficient. This difference explains much of the diversity of rates reported in perceptual learning studies, including effects of complexity and familiarity. PMID:23267067

  12. Face gender categorization and hemispheric asymmetries: Contrasting evidence from connected and disconnected brains.

    PubMed

    Prete, Giulia; Fabri, Mara; Foschi, Nicoletta; Tommasi, Luca

    2016-12-17

    We investigated hemispheric asymmetries in categorization of face gender by means of a divided visual field paradigm, in which female and male faces were presented unilaterally for 150 ms each. A group of 60 healthy participants (30 males) and a male split-brain patient (D.D.C.) were asked to categorize the gender of the stimuli. Healthy participants categorized male faces presented in the right visual field (RVF) better and faster than when presented in the left visual field (LVF), and female faces presented in the LVF better than in the RVF, independently of the participants' sex. Surprisingly, the recognition rates of D.D.C. were at chance levels, and significantly lower than those of the healthy participants, for both female and male faces presented in the RVF, as well as for female faces presented in the LVF. His performance was higher than expected by chance, and did not differ from controls, only for male faces presented in the LVF. The residual right-hemispheric ability of the split-brain patient in categorizing male faces reveals an own-gender bias lateralized in the right hemisphere, in line with the rightward own-identity and own-age biases previously shown in split-brain patients. The gender-contingent hemispheric dominance found in healthy participants confirms the previously shown right-hemispheric superiority in recognizing female faces, and also reveals a left-hemispheric superiority in recognizing male faces, adding important evidence of hemispheric imbalance in the field of face and gender perception. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  13. Subject-specific and pose-oriented facial features for face recognition across poses.

    PubMed

    Lee, Ping-Han; Hsu, Gee-Sern; Wang, Yun-Wen; Hung, Yi-Ping

    2012-10-01

    Most face recognition scenarios assume that frontal faces or mug shots are available for enrollment in the database, while faces of other poses are collected in the probe set. Given a face from the probe set, one needs to determine whether a match in the database exists. This reflects the assumption that in forensic applications most suspects have mug shots available in the database, and face recognition aims at recognizing the suspects when their faces of various poses are captured by a surveillance camera. This paper considers a different scenario: given a face with multiple poses available, which may or may not include a mug shot, develop a method to recognize the face with poses different from those captured. That is, given two disjoint sets of poses of a face, one for enrollment and the other for recognition, this paper reports a method well suited to handling such cases. The proposed method includes feature extraction and classification. For feature extraction, we first cluster the poses of each subject's face in the enrollment set into a few pose classes and then decompose the appearance of the face in each pose class using an Embedded Hidden Markov Model, which allows us to define a set of subject-specific and pose-oriented (SSPO) facial components for each subject. For classification, an AdaBoost weighting scheme is used to fuse the component classifiers with SSPO component features. The proposed method is shown to outperform other approaches, including a component-based classifier with local facial features cropped manually, in an extensive performance evaluation study.

  14. Interest and attention in facial recognition.

    PubMed

    Burgess, Melinda C R; Weaver, George E

    2003-04-01

    When applied to facial recognition, the levels of processing paradigm has yielded consistent results: faces processed in deep conditions are recognized better than faces processed under shallow conditions. However, there are multiple explanations for this occurrence. The own-race advantage in facial recognition, the tendency to recognize faces from one's own race better than faces from another race, is also consistently shown but not clearly explained. This study was designed to test the hypothesis that the levels of processing findings in facial recognition are a result of interest and attention, not differences in processing. This hypothesis was tested for both own and other faces with 105 Caucasian general psychology students. Levels of processing was manipulated as a between-subjects variable; students were asked to answer one of four types of study questions, e.g., "deep" or "shallow" processing questions, while viewing the study faces. Students' recognition of a subset of previously presented Caucasian and African-American faces from a test-set with an equal number of distractor faces was tested. They indicated their interest in and attention to the task. The typical levels of processing effect was observed with better recognition performance in the deep conditions than in the shallow conditions for both own- and other-race faces. The typical own-race advantage was also observed regardless of level of processing condition. For both own- and other-race faces, level of processing explained a significant portion of the recognition variance above and beyond what was explained by interest in and attention to the task.

  15. The Categorization-Individuation Model: An Integrative Account of the Other-Race Recognition Deficit

    ERIC Educational Resources Information Center

    Hugenberg, Kurt; Young, Steven G.; Bernstein, Michael J.; Sacco, Donald F.

    2010-01-01

    The "other-race effect" (ORE), or the finding that same-race faces are better recognized than other-race faces, is one of the best replicated phenomena in face recognition. The current article reviews existing evidence and theory and proposes a new theoretical framework for the ORE, which argues that the effect results from a confluence of social…

  16. Using Computerized Games to Teach Face Recognition Skills to Children with Autism Spectrum Disorder: The "Let's Face It!" Program

    ERIC Educational Resources Information Center

    Tanaka, James W.; Wolf, Julie M.; Klaiman, Cheryl; Koenig, Kathleen; Cockburn, Jeffrey; Herlihy, Lauren; Brown, Carla; Stahl, Sherin; Kaiser, Martha D.; Schultz, Robert T.

    2010-01-01

    Background: An emerging body of evidence indicates that relative to typically developing children, children with autism are selectively impaired in their ability to recognize facial identity. A critical question is whether face recognition skills can be enhanced through a direct training intervention. Methods: In a randomized clinical trial,…

  17. Acknowledged Dependence and the Virtues of Perinatal Hospice

    PubMed Central

    Cobb, Aaron D.

    2016-01-01

    Prenatal screening can lead to the detection and diagnosis of significantly life-limiting conditions affecting the unborn child. Recognizing the difficulties facing parents who decide to continue the pregnancy, some have proposed perinatal hospice as a new modality of care. Although the medical literature has begun to devote significant attention to these practices, systematic philosophical reflection on perinatal hospice has been relatively limited. Drawing on Alasdair MacIntyre’s account of the virtues of acknowledged dependence, I contend that perinatal hospice manifests and facilitates virtues essential to living well with human dependency and vulnerability. For this reason, perinatal hospice deserves broad support within society. PMID:26661051

  18. Cells Recognize and Prefer Bone-like Hydroxyapatite: Biochemical Understanding of Ultrathin Mineral Platelets in Bone.

    PubMed

    Liu, Cuilian; Zhai, Halei; Zhang, Zhisen; Li, Yaling; Xu, Xurong; Tang, Ruikang

    2016-11-09

    Hydroxyapatite (HAP) nanocrystallites in all types of bones are distinguished by their ultrathin characteristics, which are uniaxially oriented with fibrillar collagen to uniquely expose the (100) faces. We speculate that living organisms prefer the specific crystal morphology and orientation of HAP because of the interactions between cells and crystals at the mineral-cell interface. Here, bone-like platy HAP (p-HAP) and two different rod-like HAPs were synthesized to investigate the effect of ultrathin minerals on cell bioactivity and bone generation. Cell viability and osteogenic differentiation of mesenchymal stem cells (MSCs) were significantly promoted by the platy HAP with (100) faces compared to rod-like HAPs with (001) faces as the dominant crystal orientation, which indicated that MSCs can recognize the crystal face and prefer the (100) HAP faces. This face-specific preference depends on the selective adsorption of fibronectin (FN), a plasma protein that plays a central role in cell adhesion, on the HAP surface. This selective adsorption is further confirmed by molecular dynamics (MD) simulation. Our results demonstrate that it is an intelligent choice for cells to use ultrathin HAP with a large (100) face as a basic building block in the hierarchical structure of bone, which is crucial to the promotion of MSC osteoinduction during bone formation.

  19. Impaired Integration of Emotional Faces and Affective Body Context in a Rare Case of Developmental Visual Agnosia

    PubMed Central

    Aviezer, Hillel; Hassin, Ran. R.; Bentin, Shlomo

    2011-01-01

    In the current study we examined the recognition of facial expressions embedded in emotionally expressive bodies in case LG, an individual with a rare form of developmental visual agnosia who suffers from severe prosopagnosia. Neuropsychological testing demonstrated that LG's agnosia is characterized by profoundly impaired visual integration. Unlike individuals with typical developmental prosopagnosia, who display specific difficulties with face identity (but typically not expression) recognition, LG was also impaired at recognizing isolated facial expressions. By contrast, he successfully recognized the expressions portrayed by faceless emotional bodies handling affective paraphernalia. When presented with contextualized faces in emotional bodies, his ability to detect the emotion expressed by a face did not improve even if it was embedded in an emotionally congruent body context. Furthermore, in contrast to controls, LG displayed an abnormal pattern of contextual influence from emotionally incongruent bodies. The results are interpreted in the context of a general integration deficit in developmental visual agnosia, suggesting that impaired integration may extend from the level of the face to the level of the full person. PMID:21482423

  20. Altered spontaneous neural activity in the occipital face area reflects behavioral deficits in developmental prosopagnosia.

    PubMed

    Zhao, Yuanfang; Li, Jingguang; Liu, Xiqin; Song, Yiying; Wang, Ruosi; Yang, Zetian; Liu, Jia

    2016-08-01

    Individuals with developmental prosopagnosia (DP) exhibit severe difficulties in recognizing faces and to a lesser extent, also exhibit difficulties in recognizing non-face objects. We used fMRI to investigate whether these behavioral deficits could be accounted for by altered spontaneous neural activity. Two aspects of spontaneous neural activity were measured: the intensity of neural activity in a voxel indexed by the fractional amplitude of spontaneous low-frequency fluctuations (fALFF), and the connectivity of a voxel to neighboring voxels indexed by regional homogeneity (ReHo). Compared with normal adults, both the fALFF and ReHo values within the right occipital face area (rOFA) were significantly reduced in DP subjects. Follow-up studies on the normal adults revealed that these two measures indicated further functional division of labor within the rOFA. The fALFF in the rOFA was positively correlated with behavioral performance in recognition of non-face objects, whereas ReHo in the rOFA was positively correlated with processing of faces. When considered together, the altered fALFF and ReHo within the same region (rOFA) may account for the comorbid deficits in both face and object recognition in DPs, whereas the functional division of labor in these two measures helps to explain the relative independency of deficits in face recognition and object recognition in DP. Copyright © 2016 Elsevier Ltd. All rights reserved.
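
    The two resting-state measures this abstract relies on can be sketched directly from their standard definitions: fALFF is the fraction of a voxel's spectral amplitude that falls in the low-frequency band, and ReHo is Kendall's coefficient of concordance between a voxel's time series and those of its neighbors. The signals, TR, band and cluster size below are synthetic stand-ins, not the study's data or pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
tr, n_vols = 2.0, 240                      # hypothetical TR (s) and scan length
t = np.arange(n_vols) * tr
f_sig = 15 / (n_vols * tr)                 # slow oscillation on an exact FFT bin

def falff(ts, tr, band=(0.01, 0.08)):
    """Fractional ALFF: low-frequency amplitude / total amplitude."""
    freqs = np.fft.rfftfreq(len(ts), d=tr)
    amp = np.abs(np.fft.rfft(ts - ts.mean()))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return amp[in_band].sum() / amp[1:].sum()      # skip the DC bin

def reho(cluster):
    """Kendall's W over a voxel cluster (rows = time points, cols = voxels)."""
    n, k = cluster.shape
    ranks = np.argsort(np.argsort(cluster, axis=0), axis=0) + 1
    row_sums = ranks.sum(axis=1)
    s = ((row_sums - row_sums.mean()) ** 2).sum()
    return 12 * s / (k ** 2 * (n ** 3 - n))

# A voxel dominated by a slow fluctuation vs. one dominated by fast noise
slow = 2 * np.sin(2 * np.pi * f_sig * t) + 0.3 * rng.normal(size=n_vols)
fast = rng.normal(size=n_vols)
# A locally coherent 27-voxel neighborhood vs. independent noise
coherent = slow[:, None] + 0.3 * rng.normal(size=(n_vols, 27))
incoherent = rng.normal(size=(n_vols, 27))

print(f"fALFF: slow {falff(slow, tr):.2f} vs noise {falff(fast, tr):.2f}")
print(f"ReHo:  coherent {reho(coherent):.2f} vs incoherent {reho(incoherent):.2f}")
```

    The slow, locally coherent signal scores higher on both measures, which is the direction of the rOFA reduction reported for the DP group.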

  1. Survey on RGB, 3D, Thermal, and Multimodal Approaches for Facial Expression Recognition: History, Trends, and Affect-Related Applications.

    PubMed

    Corneanu, Ciprian Adrian; Simon, Marc Oliu; Cohn, Jeffrey F; Guerrero, Sergio Escalera

    2016-08-01

    Facial expressions are an important way through which humans interact socially. Building a system capable of automatically recognizing facial expressions from images and video has been an intense field of study in recent years. Interpreting such expressions remains challenging, and much research is needed about the way they relate to human affect. This paper presents a general overview of automatic RGB, 3D, thermal and multimodal facial expression analysis. We define a new taxonomy for the field, encompassing all steps from face detection to facial expression recognition, and describe and classify the state-of-the-art methods accordingly. We also present the important datasets and the benchmarking of the most influential methods. We conclude with a general discussion about trends, important questions and future lines of research.

  2. Human Rights and the Global Fund to Fight AIDS, Tuberculosis and Malaria: How Does a Large Funder of Basic Health Services Meet the Challenge of Rights-Based Programs?

    PubMed

    Jürgens, Ralf; Csete, Joanne; Lim, Hyeyoung; Timberlake, Susan; Smith, Matthew

    2017-12-01

    The Global Fund to Fight AIDS, Tuberculosis and Malaria was created to greatly expand access to basic services to address the three diseases in its name. From its beginnings, its governance embodied some human rights principles: civil society is represented on its board, and the country coordination mechanisms that oversee funding requests to the Global Fund include representatives of people affected by the diseases. The Global Fund's core strategies recognize that the health services it supports would not be effective or cost-effective without efforts to reduce human rights-related barriers to access and utilization of health services, particularly those faced by socially marginalized and criminalized persons. Basic human rights elements were written into Global Fund grant agreements, and various technical support measures encouraged the inclusion in funding requests of programs to reduce human rights-related barriers. A five-year initiative to provide intensive technical and financial support for the scaling up of programs to reduce these barriers in 20 countries is ongoing.

  3. Face recognition for criminal identification: An implementation of principal component analysis for face recognition

    NASA Astrophysics Data System (ADS)

    Abdullah, Nurul Azma; Saidi, Md. Jamri; Rahman, Nurul Hidayah Ab; Wen, Chuah Chai; Hamid, Isredza Rahmi A.

    2017-10-01

    In practice, criminal identification in Malaysia is done through thumbprint identification. However, this approach is limited, as criminals are increasingly careful not to leave thumbprints at the scene. With the advent of security technology, cameras, especially CCTV, have been installed in many public and private areas to provide surveillance. CCTV footage can be used to identify suspects at the scene. However, because little software has been developed to automatically match faces in the footage against recorded photographs of criminals, law enforcement still relies on thumbprint identification. In this paper, an automated facial recognition system for a criminal database is proposed using the well-known Principal Component Analysis (PCA) approach. The system detects and recognizes faces automatically, helping law enforcement identify a suspect when no thumbprint is present at the scene. The results show that about 80% of input photos can be matched with the template data.
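    The PCA (eigenfaces) approach the abstract describes can be sketched as follows. This is a minimal illustration rather than the authors' implementation; the array sizes, the number of components and the nearest-neighbour matching rule are assumptions.

```python
import numpy as np

def train_eigenfaces(faces, n_components=20):
    """faces: (n_samples, n_pixels) array of flattened, aligned face images."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Right singular vectors of the centered data are the eigenfaces.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:n_components]
    weights = centered @ eigenfaces.T      # project gallery into face space
    return mean, eigenfaces, weights

def match(probe, mean, eigenfaces, weights):
    """Return the index of the gallery face closest to the probe image."""
    w = (probe - mean) @ eigenfaces.T
    dists = np.linalg.norm(weights - w, axis=1)
    return int(np.argmin(dists))
```

    A probe photo is then identified by projecting it into the same subspace and returning the nearest gallery identity.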

  4. A survey of the dummy face and human face stimuli used in BCI paradigm.

    PubMed

    Chen, Long; Jin, Jing; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2015-01-15

    It has been shown that human face stimuli are superior to flash-only stimuli in BCI systems. However, human face stimuli may raise copyright infringement problems and are hard to edit to suit the requirements of a BCI study. Recently, it was reported that facial expression changes can be produced by changing a curve in a dummy face, which achieved good performance when applied to visual P300 BCI systems. In this paper, four paradigms were presented, called the dummy face pattern, human face pattern, inverted dummy face pattern and inverted human face pattern, to evaluate the performance of dummy face stimuli compared with human face stimuli. The key question determining the value of dummy faces in BCI systems is whether dummy face stimuli can achieve performance as good as that of human face stimuli. Online and offline results for the four paradigms were obtained and comparatively analyzed. They showed no significant difference between dummy faces and human faces in ERPs, classification accuracy or information transfer rate when applied in BCI systems. Dummy face stimuli evoked large ERPs and achieved classification accuracies and information transfer rates as high as those of human face stimuli. Since dummy faces are easy to edit and raise no copyright infringement problems, they are a good choice for optimizing the stimuli of BCI systems. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Observed touch on a non-human face is not remapped onto the human observer's own face.

    PubMed

    Beck, Brianna; Bertini, Caterina; Scarpazza, Cristina; Làdavas, Elisabetta

    2013-01-01

    Visual remapping of touch (VRT) is a phenomenon in which seeing a human face being touched enhances detection of tactile stimuli on the observer's own face, especially when the observed face expresses fear. This study tested whether VRT would occur when seeing touch on monkey faces and whether it would be similarly modulated by facial expressions. Human participants detected near-threshold tactile stimulation on their own cheeks while watching fearful, happy, and neutral human or monkey faces being concurrently touched or merely approached by fingers. We predicted minimal VRT for neutral and happy monkey faces but greater VRT for fearful monkey faces. The results with human faces replicated previous findings, demonstrating stronger VRT for fearful expressions than for happy or neutral expressions. However, there was no VRT (i.e. no difference between accuracy in touch and no-touch trials) for any of the monkey faces, regardless of facial expression, suggesting that touch on a non-human face is not remapped onto the somatosensory system of the human observer.

  6. Observed Touch on a Non-Human Face Is Not Remapped onto the Human Observer's Own Face

    PubMed Central

    Beck, Brianna; Bertini, Caterina; Scarpazza, Cristina; Làdavas, Elisabetta

    2013-01-01

    Visual remapping of touch (VRT) is a phenomenon in which seeing a human face being touched enhances detection of tactile stimuli on the observer's own face, especially when the observed face expresses fear. This study tested whether VRT would occur when seeing touch on monkey faces and whether it would be similarly modulated by facial expressions. Human participants detected near-threshold tactile stimulation on their own cheeks while watching fearful, happy, and neutral human or monkey faces being concurrently touched or merely approached by fingers. We predicted minimal VRT for neutral and happy monkey faces but greater VRT for fearful monkey faces. The results with human faces replicated previous findings, demonstrating stronger VRT for fearful expressions than for happy or neutral expressions. However, there was no VRT (i.e. no difference between accuracy in touch and no-touch trials) for any of the monkey faces, regardless of facial expression, suggesting that touch on a non-human face is not remapped onto the somatosensory system of the human observer. PMID:24250781

  7. Human versus Non-Human Face Processing: Evidence from Williams Syndrome

    ERIC Educational Resources Information Center

    Santos, Andreia; Rosset, Delphine; Deruelle, Christine

    2009-01-01

    Increased motivation towards social stimuli in Williams syndrome (WS) led us to hypothesize that a face's human status would have greater impact than its orientation on face processing abilities in WS. Twenty-nine individuals with WS were asked to categorize facial emotion expressions in real, human cartoon and non-human cartoon faces presented…

  8. Analysis of differences between Western and East-Asian faces based on facial region segmentation and PCA for facial expression recognition

    NASA Astrophysics Data System (ADS)

    Benitez-Garcia, Gibran; Nakamura, Tomoaki; Kaneko, Masahide

    2017-01-01

    Darwin was the first to assert that facial expressions are innate and universal, recognized across all cultures. However, some recent cross-cultural studies have questioned this assumed universality. Therefore, this paper presents an analysis of the differences between Western and East-Asian faces for the six basic expressions (anger, disgust, fear, happiness, sadness and surprise), focused on three individual facial regions: eyes-eyebrows, nose and mouth. The analysis is conducted by applying PCA to two feature extraction methods: appearance-based, using the pixel intensities of facial parts, and geometric-based, handling 125 feature points from the face. Both methods are evaluated using 4 standard databases for both racial groups, and the results are compared with a cross-cultural human study with 20 participants. Our analysis reveals that differences between Westerners and East-Asians exist mainly in the eyes-eyebrows and mouth regions, for expressions of fear and disgust respectively. This work presents important findings for a better design of automatic facial expression recognition systems that accounts for the differences between the two racial groups.
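    The geometric route described above, region-wise features from 125 facial landmarks followed by PCA, might be sketched like this. The assignment of landmark indices to regions is invented for illustration and is not the paper's layout.

```python
import numpy as np

# Hypothetical landmark layout: indices of the points in each facial region.
REGIONS = {"eyes_brows": range(0, 50), "nose": range(50, 80), "mouth": range(80, 125)}

def region_features(landmarks):
    """landmarks: (125, 2) array of (x, y) points for one face.
    Returns per-region coordinates centred on the region mean, so the
    features describe local shape rather than absolute position."""
    feats = {}
    for name, idx in REGIONS.items():
        pts = landmarks[list(idx)]
        feats[name] = (pts - pts.mean(axis=0)).ravel()
    return feats

def pca_basis(X, n_components):
    """Rows of the returned matrix span the top principal directions of X."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[:n_components]
```

    The appearance-based route would work the same way, but with the raveled pixel intensities of each region patch in place of the landmark coordinates.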

  9. The fractal based analysis of human face and DNA variations during aging.

    PubMed

    Namazi, Hamidreza; Akrami, Amin; Hussaini, Jamal; Silva, Osmar N; Wong, Albert; Kulish, Vladimir V

    2017-01-16

    Human DNA is the main unit that shapes human characteristics and features such as behavior. Thus, it is expected that changes in DNA (DNA mutation) influence human characteristics and features. The face is one of the human features that is unique and also depends on genes. In this paper, for the first time, we analyze variations of human DNA and the face simultaneously. We do this by analyzing the fractal dimension of the DNA walk and of the face during human aging. The results of this study show that human DNA and the face become more complex with aging. These complexities are mapped onto fractal exponents of the DNA walk and the human face. The method discussed in this paper can be further developed to investigate the direct influence of DNA mutation on face variations during aging, and accordingly to build a model relating human face fractality to the complexity of the DNA walk.

  10. Recognition of the DNA sequence by an inorganic crystal surface

    PubMed Central

    Sampaolese, Beatrice; Bergia, Anna; Scipioni, Anita; Zuccheri, Giampaolo; Savino, Maria; Samorì, Bruno; De Santis, Pasquale

    2002-01-01

    The sequence-dependent curvature is generally recognized as an important and biologically relevant property of DNA because it is involved in the formation and stability of association complexes with proteins. When a DNA tract, intrinsically curved for the periodical recurrence on the same strand of A-tracts phased with the B-DNA periodicity, is deposited on a flat surface, it exposes to that surface either a T- or an A-rich face. The surface of a freshly cleaved mica crystal recognizes those two faces and preferentially interacts with the former one. Statistical analysis of scanning force microscopy (SFM) images provides evidence of this recognition between an inorganic crystal surface and nanoscale structures of double-stranded DNA. This finding could open the way toward the use of the sequence-dependent adhesion to specific crystal faces for nanotechnological purposes. PMID:12361979

  11. Challenges Faced by Female-Students in Engineering-Education

    ERIC Educational Resources Information Center

    Madara, Diana Starovoytova; Cherotich, Sharon

    2016-01-01

    Gender-related challenges in learning technical courses are universal phenomenon. These challenges could restrain female students from achieving their fullest potential. The main focus of this study, therefore, is to examine self-recognized challenges faced by undergraduate female students in pursuing engineering at the School of Engineering…

  12. Processing of configural and componential information in face-selective cortical areas.

    PubMed

    Zhao, Mintao; Cheung, Sing-Hang; Wong, Alan C-N; Rhodes, Gillian; Chan, Erich K S; Chan, Winnie W L; Hayward, William G

    2014-01-01

    We investigated how face-selective cortical areas process configural and componential face information and how the race of faces may influence these processes. Participants saw blurred (preserving configural information), scrambled (preserving componential information), and whole faces during fMRI scanning, and performed a post-scan face recognition task using blurred or scrambled faces. The fusiform face area (FFA) showed stronger activation to blurred than to scrambled faces, and equivalent responses to blurred and whole faces. The occipital face area (OFA) showed stronger activation to whole than to blurred faces, which in turn elicited responses similar to scrambled faces. Therefore, the FFA may be more tuned to process configural than componential information, whereas the OFA participates similarly in the perception of both. Differences in recognizing own- and other-race blurred faces were correlated with differences in FFA activation to those faces, suggesting that configural processing within the FFA may underlie the other-race effect in face recognition.

  13. Face-body integration of intense emotional expressions of victory and defeat.

    PubMed

    Wang, Lili; Xia, Lisheng; Zhang, Dandan

    2017-01-01

    Human facial expressions can be recognized rapidly and effortlessly. However, for intense emotions from real life, positive and negative facial expressions are difficult to discriminate, and the judgment of facial expressions is biased towards simultaneously perceived body expressions. This study employed event-related potentials (ERPs) to investigate the neural dynamics involved in the integration of emotional signals from facial and body expressions of victory and defeat. Emotional expressions of professional players were used to create pictures of face-body compounds, with either matched or mismatched emotional expressions in faces and bodies. Behavioral results showed that congruent emotional information of face and body facilitated the recognition of facial expressions. ERP data revealed larger P1 amplitudes for incongruent compared to congruent stimuli. Also, a main effect of body valence on the P1 was observed, with enhanced amplitudes for stimuli with losing compared to winning bodies. The main effect of body expression was also observed on the N170 and N2, with winning bodies producing larger N170/N2 amplitudes. In the later stage, a significant interaction of congruence by body valence was found on the P3 component. Winning bodies elicited larger P3 amplitudes than losing bodies when face and body conveyed congruent emotional signals. Beyond knowledge based on prototypical facial and body expressions, the results of this study help us understand the complexity of emotion evaluation and categorization outside the laboratory.

  14. Face-body integration of intense emotional expressions of victory and defeat

    PubMed Central

    Wang, Lili; Xia, Lisheng; Zhang, Dandan

    2017-01-01

    Human facial expressions can be recognized rapidly and effortlessly. However, for intense emotions from real life, positive and negative facial expressions are difficult to discriminate, and the judgment of facial expressions is biased towards simultaneously perceived body expressions. This study employed event-related potentials (ERPs) to investigate the neural dynamics involved in the integration of emotional signals from facial and body expressions of victory and defeat. Emotional expressions of professional players were used to create pictures of face-body compounds, with either matched or mismatched emotional expressions in faces and bodies. Behavioral results showed that congruent emotional information of face and body facilitated the recognition of facial expressions. ERP data revealed larger P1 amplitudes for incongruent compared to congruent stimuli. Also, a main effect of body valence on the P1 was observed, with enhanced amplitudes for stimuli with losing compared to winning bodies. The main effect of body expression was also observed on the N170 and N2, with winning bodies producing larger N170/N2 amplitudes. In the later stage, a significant interaction of congruence by body valence was found on the P3 component. Winning bodies elicited larger P3 amplitudes than losing bodies when face and body conveyed congruent emotional signals. Beyond knowledge based on prototypical facial and body expressions, the results of this study help us understand the complexity of emotion evaluation and categorization outside the laboratory. PMID:28245245

  15. Face recognition: a convolutional neural-network approach.

    PubMed

    Lawrence, S; Giles, C L; Tsoi, A C; Back, A D

    1997-01-01

    We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.
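    A minimal self-organizing map of the kind used here as the quantization stage can be sketched as follows; the 1-D grid, the learning schedule and all parameter values are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def train_som(data, grid=10, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a 1-D SOM: `grid` units, each a codebook vector in input space.
    Nearby units end up representing nearby inputs (topology preservation)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(grid, data.shape[1]))
    positions = np.arange(grid)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                 # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5     # shrinking neighbourhood
        for x in rng.permutation(data):
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))  # best matching unit
            h = np.exp(-((positions - bmu) ** 2) / (2 * sigma ** 2))
            w += lr * h[:, None] * (x - w)          # pull neighbourhood toward x
    return w

def quantize(x, w):
    """Map an input sample to the index of its nearest SOM unit."""
    return int(np.argmin(np.linalg.norm(w - x, axis=1)))
```

    In the paper's pipeline, local image samples would be quantized this way before being fed to the convolutional network.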

  16. Neural bases of eye and gaze processing: The core of social cognition

    PubMed Central

    Itier, Roxane J.; Batty, Magali

    2014-01-01

    Eyes and gaze are very important stimuli for human social interactions. Recent studies suggest that impairments in recognizing face identity, facial emotions or in inferring attention and intentions of others could be linked to difficulties in extracting the relevant information from the eye region including gaze direction. In this review, we address the central role of eyes and gaze in social cognition. We start with behavioral data demonstrating the importance of the eye region and the impact of gaze on the most significant aspects of face processing. We review neuropsychological cases and data from various imaging techniques such as fMRI/PET and ERP/MEG, in an attempt to best describe the spatio-temporal networks underlying these processes. The existence of a neuronal eye detector mechanism is discussed as well as the links between eye gaze and social cognition impairments in autism. We suggest impairments in processing eyes and gaze may represent a core deficiency in several other brain pathologies and may be central to abnormal social cognition. PMID:19428496

  17. A novel EOG/EEG hybrid human-machine interface adopting eye movements and ERPs: application to robot control.

    PubMed

    Ma, Jiaxin; Zhang, Yu; Cichocki, Andrzej; Matsuno, Fumitoshi

    2015-03-01

    This study presents a novel human-machine interface (HMI) based on both electrooculography (EOG) and electroencephalography (EEG). This hybrid interface works in two modes: an EOG mode recognizes eye movements such as blinks, and an EEG mode detects event-related potentials (ERPs) like the P300. While eye movements and ERPs have separately been used to implement assistive interfaces, which help patients with motor disabilities perform daily tasks, the proposed hybrid interface integrates them so that they complement each other; it can therefore provide better efficiency and a wider scope of application. In this study, we design a threshold algorithm that can recognize four kinds of eye movements: blink, wink, gaze, and frown. In addition, an oddball paradigm with stimuli of inverted faces is used to evoke multiple ERP components, including P300, N170, and VPP. To verify the effectiveness of the proposed system, two different online experiments are carried out: one controlling a multifunctional humanoid robot, and the other controlling four mobile robots. In both experiments, the subjects can complete tasks effectively using the proposed interface, and the best completion times are relatively short, very close to those achieved by hand operation.
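    A threshold rule of the general kind mentioned for the EOG mode, deciding among blink, wink, gaze and frown from per-channel amplitudes, could look like this. The channel convention, the sign interpretation and the 100 µV threshold are invented for illustration and are not the authors' parameters.

```python
def classify_eog(left_amp, right_amp, thresh=100.0):
    """Classify one epoch from peak amplitudes (µV) of two EOG channels.
    Both channels large -> blink or frown (here disambiguated by sign);
    one channel large -> wink on that side; both small -> gaze."""
    left_big, right_big = abs(left_amp) > thresh, abs(right_amp) > thresh
    if left_big and right_big:
        return "blink" if left_amp > 0 and right_amp > 0 else "frown"
    if left_big:
        return "left wink"
    if right_big:
        return "right wink"
    return "gaze"
```

    A real system would threshold filtered waveforms over a time window rather than single peak values, but the decision structure is the same.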

  18. Overcrowded motor vehicle trauma from the smuggling of illegal immigrants in the desert of the Southwest.

    PubMed

    Lumpkin, Mary F; Judkins, Dan; Porter, John M; Latifi, Rifat; Williams, Mark D

    2004-12-01

    Overcrowded motor vehicle crashes caused by the very active criminal enterprise of smuggling illegal immigrants in the desert of the Southwest are a recent and under-recognized trauma etiology. A computerized database search from 1990 through 2003 of local newspaper reports of overcrowded motor vehicle crashes along the 281 miles of Arizona's border with Mexico was conducted. This area was covered by two level I trauma centers, but since July 2003 has been served only by the University Medical Center. Each of these crashes involved a single motor vehicle in poor mechanical shape packed with illegal immigrants. Speeding out of control on bad tires, these vehicles roll over at high speed, ejecting most passengers. Since 1999, there have been 38 crashes involving 663 passengers (an average of 17 per vehicle), with an injury rate of 49 per cent and a mortality rate of 9 per cent. This relatively recent phenomenon (no reports from before 1998) of trauma resulting from human smuggling is lethal and demonstrates the smugglers' wanton disregard for human life, particularly when facing apprehension. Even a few innocent bystanders have been killed. These crashes overwhelm a region's trauma resources and must be recognized when planning the distribution of trauma resources to border states.

  19. Emotion categorization of body expressions in narrative scenarios

    PubMed Central

    Volkova, Ekaterina P.; Mohler, Betty J.; Dodds, Trevor J.; Tesch, Joachim; Bülthoff, Heinrich H.

    2014-01-01

    Humans can recognize emotions expressed through body motion with high accuracy even when the stimuli are impoverished. However, most research on body motion has relied on exaggerated displays of emotions. In this paper we present two experiments investigating whether emotional body expressions can be recognized when recorded during natural narration. Our actors were free to use their entire body, face, and voice to express emotions, but our resulting visual stimuli used only the upper body motion trajectories, in the form of animated stick figures. Observers were asked to perform an emotion recognition task on short motion sequences using a large and balanced set of emotions (amusement, joy, pride, relief, surprise, anger, disgust, fear, sadness, shame, and neutral). Even with only upper body motion available, our results show recognition accuracy significantly above chance level and high consistency rates among observers. In our first experiment, which used a more classic emotion induction setup, all emotions were well recognized. In the second study, which employed narrations, four basic emotion categories (joy, anger, fear, and sadness), three non-basic emotion categories (amusement, pride, and shame) and the “neutral” category were recognized above chance. Interestingly, especially in the second experiment, observers showed a bias toward anger when recognizing the motion sequences. We discovered that similarities between motion sequences across emotions, along properties such as mean motion speed, number of peaks in the motion trajectory and mean motion span, can explain a large percentage of the variation in observers' responses. Overall, our results show that upper body motion is informative for emotion recognition in narrative scenarios. PMID:25071623

  20. [Neural mechanisms of facial recognition].

    PubMed

    Nagai, Chiyoko

    2007-01-01

    We review recent research on the neural mechanisms of facial recognition in light of three aspects: facial discrimination and identification, recognition of facial expressions, and face perception in itself. First, it has been demonstrated that the fusiform gyrus plays a main role in facial discrimination and identification. However, whether the FFA (fusiform face area) is really a special area for facial processing is controversial; some researchers insist that the FFA is related to 'becoming an expert' for certain kinds of visual objects, including faces. The neural mechanisms of prosopagnosia are deeply relevant to this issue. Second, the amygdala seems to be closely involved in the recognition of facial expressions, especially fear. The amygdala, connected with the superior temporal sulcus and the orbitofrontal cortex, appears to modulate cortical function. The amygdala and the superior temporal sulcus are related to gaze recognition, which explains why a patient with bilateral amygdala damage could not recognize only the fear expression; the information from the eyes is necessary for fear recognition. Finally, even a newborn infant can recognize a face as a face, which is congruent with the innateness hypothesis of facial recognition. Some researchers speculate that the neural basis of such face perception is a subcortical network comprising the amygdala, the superior colliculus, and the pulvinar. This network may underlie the covert recognition that prosopagnosic patients retain.

  1. Major Challenges for the Modern Chemistry in Particular and Science in General.

    PubMed

    Uskoković, Vuk

    2010-11-01

    In the past few hundred years, science has exerted an enormous influence on the way the world appears to human observers. Despite phenomenal accomplishments of science, science nowadays faces numerous challenges that threaten its continued success. As scientific inventions become embedded within human societies, the challenges are further multiplied. In this critical review, some of the critical challenges for the field of modern chemistry are discussed, including: (a) interlinking theoretical knowledge and experimental approaches; (b) implementing the principles of sustainability at the roots of the chemical design; (c) defining science from a philosophical perspective that acknowledges both pragmatic and realistic aspects thereof; (d) instigating interdisciplinary research; (e) learning to recognize and appreciate the aesthetic aspects of scientific knowledge and methodology, and promote truly inspiring education in chemistry. In the conclusion, I recapitulate that the evolution of human knowledge inherently depends upon our ability to adopt creative problem-solving attitudes, and that challenges will always be present within the scope of scientific interests.

  2. EOG-sEMG Human Interface for Communication

    PubMed Central

    Tamura, Hiroki; Yan, Mingmin; Sakurai, Keiko; Tanno, Koichi

    2016-01-01

    The aim of this study is to present electrooculogram (EOG) and surface electromyogram (sEMG) signals that can be used as a human-computer interface. Establishing an efficient alternative channel for communication without overt speech and hand movements is important for increasing the quality of life of patients suffering from amyotrophic lateral sclerosis, muscular dystrophy, or other illnesses. In this paper, we propose an EOG-sEMG human-computer interface system for communication using both cross-channels and parallel-line channels on the face with the same electrodes. This system can record EOG and sEMG signals simultaneously as a “dual modality” for pattern recognition. Although as many as four patterns could be recognized, in view of the patients' condition we chose only two classes (left and right motion) of EOG and two classes (left blink and right blink) of sEMG, which are easy to realize for the simulation and monitoring task. From the simulation results, our system achieved four-pattern classification with an accuracy of 95.1%. PMID:27418924

  3. EOG-sEMG Human Interface for Communication.

    PubMed

    Tamura, Hiroki; Yan, Mingmin; Sakurai, Keiko; Tanno, Koichi

    2016-01-01

    The aim of this study is to present electrooculogram (EOG) and surface electromyogram (sEMG) signals that can be used as a human-computer interface. Establishing an efficient alternative channel for communication without overt speech and hand movements is important for increasing the quality of life of patients suffering from amyotrophic lateral sclerosis, muscular dystrophy, or other illnesses. In this paper, we propose an EOG-sEMG human-computer interface system for communication using both cross-channels and parallel-line channels on the face with the same electrodes. This system can record EOG and sEMG signals simultaneously as a "dual modality" for pattern recognition. Although as many as four patterns could be recognized, in view of the patients' condition we chose only two classes (left and right motion) of EOG and two classes (left blink and right blink) of sEMG, which are easy to realize for the simulation and monitoring task. From the simulation results, our system achieved four-pattern classification with an accuracy of 95.1%.

  4. Major Challenges for the Modern Chemistry in Particular and Science in General

    PubMed Central

    Uskoković, Vuk

    2013-01-01

    In the past few hundred years, science has exerted an enormous influence on the way the world appears to human observers. Despite phenomenal accomplishments of science, science nowadays faces numerous challenges that threaten its continued success. As scientific inventions become embedded within human societies, the challenges are further multiplied. In this critical review, some of the critical challenges for the field of modern chemistry are discussed, including: (a) interlinking theoretical knowledge and experimental approaches; (b) implementing the principles of sustainability at the roots of the chemical design; (c) defining science from a philosophical perspective that acknowledges both pragmatic and realistic aspects thereof; (d) instigating interdisciplinary research; (e) learning to recognize and appreciate the aesthetic aspects of scientific knowledge and methodology, and promote truly inspiring education in chemistry. In the conclusion, I recapitulate that the evolution of human knowledge inherently depends upon our ability to adopt creative problem-solving attitudes, and that challenges will always be present within the scope of scientific interests. PMID:24465151

  5. Rapid Categorization of Human and Ape Faces in 9-Month-Old Infants Revealed by Fast Periodic Visual Stimulation.

    PubMed

    Peykarjou, Stefanie; Hoehl, Stefanie; Pauen, Sabina; Rossion, Bruno

    2017-10-02

    This study investigates categorization of human and ape faces in 9-month-olds using a Fast Periodic Visual Stimulation (FPVS) paradigm while measuring EEG. Categorization responses are elicited only if infants discriminate between different categories and generalize across exemplars within each category. In study 1, human or ape faces were presented as standard and deviant stimuli in upright and inverted trials. Upright ape faces presented among humans elicited strong categorization responses, whereas responses for upright human faces and for inverted ape faces were smaller. Deviant inverted human faces did not elicit categorization. Data were best explained by a model with main effects of species and orientation. However, variance of low-level image characteristics was higher for the ape than the human category. Variance was matched to replicate this finding in an independent sample (study 2). Both human and ape faces elicited categorization in upright and inverted conditions, but upright ape faces elicited the strongest responses. Again, data were best explained by a model of two main effects. These experiments demonstrate that 9-month-olds rapidly categorize faces, and unfamiliar faces presented among human faces elicit increased categorization responses. This likely reflects habituation for the familiar standard category, and stronger release for the unfamiliar category deviants.
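    The FPVS logic, measuring whether the EEG spectrum contains a response at the frequency at which deviant-category faces were presented, can be sketched as below. The sampling rate and target frequency in the usage are illustrative, not the study's settings.

```python
import numpy as np

def fpvs_snr(signal, fs, target_hz, n_neighbors=10):
    """Amplitude at target_hz divided by the mean amplitude of the
    neighbouring frequency bins: an SNR well above 1 indicates a
    periodic categorization response at that frequency."""
    amps = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - target_hz)))
    neighbors = np.r_[amps[k - n_neighbors:k], amps[k + 1:k + 1 + n_neighbors]]
    return amps[k] / neighbors.mean()
```

    With faces presented at a fast base rate and category deviants interleaved at a slower rate, a spectral peak at the deviant rate implies both discrimination between categories and generalization within them.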

  6. A Coordinated Decentralized Approach to Online Project Development

    ERIC Educational Resources Information Center

    Mykota, David

    2013-01-01

    With the growth rate of online learning outpacing traditional face-to-face instruction, universities are beginning to recognize the importance of strategic planning in its development. Making the case for online learning requires sound project management practices and an understanding of the business models on which it is predicated. The objective…

  7. The role of experience-based perceptual learning in the face inversion effect.

    PubMed

    Civile, Ciro; Obhi, Sukhvinder S; McLaren, I P L

    2018-04-03

    Perceptual learning of the type we consider here is a consequence of experience with a class of stimuli. It amounts to an enhanced ability to discriminate between stimuli. We argue that it contributes to the ability to distinguish between faces and recognize individuals, and in particular contributes to the face inversion effect (better recognition performance for upright vs. inverted faces). Previously, we have shown that experience with a prototype-defined category of checkerboards leads to perceptual learning, that this produces an inversion effect, and that this effect can be disrupted by anodal tDCS to Fp3 during pre-exposure. If we can demonstrate that the same tDCS manipulation also disrupts the inversion effect for faces, then this will strengthen the claim that perceptual learning contributes to that effect. The important question, then, is whether this tDCS procedure would significantly reduce the inversion effect for faces: stimuli that we have lifelong expertise with and for which perceptual learning has already occurred. Consequently, in the experiment reported here we investigated the effects of anodal tDCS at Fp3 during an old/new recognition task for upright and inverted faces. Our results show that stimulation significantly reduced the face inversion effect compared to controls, the effect being a reduction in recognition performance for upright faces. This result is the first to show that tDCS affects perceptual learning that has already occurred, disrupting individuals' ability to recognize upright faces. It provides further support for our account of perceptual learning and its role as a key factor in face recognition. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Face recognition based on symmetrical virtual image and original training image

    NASA Astrophysics Data System (ADS)

    Ke, Jingcheng; Peng, Yali; Liu, Shigang; Li, Jun; Pei, Zhao

    2018-02-01

    In face representation-based classification methods, a high recognition rate can be obtained if enough training samples are available for a face. In practical applications, however, only limited training samples are available. To obtain enough training samples, many methods use the original training samples together with corresponding virtual samples to strengthen the representation of the test sample. One approach directly uses the original training samples and their mirror samples to recognize the test sample. However, when the test sample is nearly symmetrical while the original training samples are not, the combination of the original training and mirror samples might not represent the test sample well. To tackle this problem, in this paper we propose a novel method that generates virtual samples by averaging the original training samples and their corresponding mirror samples. The original training samples and the virtual samples are then integrated to recognize the test sample. Experimental results on five face databases show that the proposed method partly overcomes the challenges posed by variations in pose, facial expression and illumination in the original face images.
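
The virtual-sample construction described in this abstract is straightforward to sketch. The toy example below (function names are my own, not from the paper) averages each training image with its horizontal mirror, yielding a symmetric virtual sample that is added alongside the originals:

```python
import numpy as np

def make_virtual_sample(image):
    """Average a face image with its horizontal mirror to produce
    a symmetric virtual training sample."""
    mirror = image[:, ::-1]           # flip columns (left-right mirror)
    return (image + mirror) / 2.0     # pixel-wise average

def augmented_training_set(samples):
    """Return the original samples plus their averaged-mirror virtual samples."""
    return samples + [make_virtual_sample(s) for s in samples]

# Toy 2x3 "image": the virtual sample is left-right symmetric
# even though the original is not.
face = np.array([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0]])
virtual = make_virtual_sample(face)
# virtual == [[2, 2, 2], [5, 5, 5]], and virtual equals its own mirror
```

Both the originals and these virtual samples would then feed whatever representation-based classifier is in use; the averaging step itself is the only part sketched here.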

  9. Statement on the Status and Working Conditions of Contingent Faculty

    ERIC Educational Resources Information Center

    Palmquist, Mike; Doe, Sue; McDonald, James; Newman, Beatrice Mendez; Samuels, Robert; Schell, Eileen

    2011-01-01

    In this paper, the authors call for an approach that, in recognizing the economic realities facing most institutions, attempts to put aside objections that funding is simply not available to support an expansion of the current tenure system. In calling for the changes in faculty working conditions, the authors recognize that change will…

  10. Deep Networks Can Resemble Human Feed-forward Vision in Invariant Object Recognition

    PubMed Central

    Kheradpisheh, Saeed Reza; Ghodrati, Masoud; Ganjtabesh, Mohammad; Masquelier, Timothée

    2016-01-01

    Deep convolutional neural networks (DCNNs) have attracted much attention recently, and have been shown to recognize thousands of object categories in natural image databases. Their architecture is somewhat similar to that of the human visual system: both use restricted receptive fields, and a hierarchy of layers which progressively extract more and more abstracted features. Yet it is unknown whether DCNNs match human performance at the task of view-invariant object recognition, whether they make similar errors and use similar representations for this task, and whether the answers depend on the magnitude of the viewpoint variations. To investigate these issues, we benchmarked eight state-of-the-art DCNNs, the HMAX model, and a baseline shallow model and compared their results to those of humans with backward masking. Unlike all previous DCNN studies, we carefully controlled the magnitude of the viewpoint variations to demonstrate that shallow nets can outperform deep nets and humans when variations are weak. When facing larger variations, however, more layers were needed to match human performance and error distributions, and to have representations that are consistent with human behavior. A very deep net with 18 layers even outperformed humans at the highest variation level, using the most human-like representations. PMID:27601096

  11. “A room full of strangers every day”: The psychosocial impact of developmental prosopagnosia on children and their families

    PubMed Central

    Dalrymple, Kirsten A.; Fletcher, Kimberley; Corrow, Sherryse; Nair, Roshan das; Barton, Jason J. S.; Yonas, Albert; Duchaine, Brad

    2014-01-01

    Objective Individuals with developmental prosopagnosia (‘face blindness’) have severe face recognition difficulties due to a failure to develop the necessary visual mechanisms for recognizing faces. These difficulties occur in the absence of brain damage and despite normal low-level vision and intellect. Adults with developmental prosopagnosia report serious personal and emotional consequences from their inability to recognize faces, but little is known about the psychosocial consequences in childhood. Given the importance of face recognition in daily life, and the potential for unique social consequences of impaired face recognition in childhood, we sought to evaluate the impact of developmental prosopagnosia on children and their families. Methods We conducted semi-structured interviews with 8 children with developmental prosopagnosia and their parents. A battery of face recognition tests was used to confirm the face recognition impairment reported by the parents of each child. We used thematic analysis to develop common themes among the psychosocial experiences of the children and their parents. Results Three themes were developed from the child reports: 1) awareness of their difficulties, 2) coping strategies, such as using non-facial cues to identify others, and 3) social implications, such as discomfort in, and avoidance of, social situations. These themes were paralleled by the parent reports and highlight the unique social and practical challenges associated with childhood developmental prosopagnosia. Conclusion Our findings indicate a need for increased awareness and treatment of developmental prosopagnosia to help these children manage their face recognition difficulties and to promote their social and emotional wellbeing. PMID:25077856

  12. Experience moderates overlap between object and face recognition, suggesting a common ability

    PubMed Central

    Gauthier, Isabel; McGugin, Rankin W.; Richler, Jennifer J.; Herzmann, Grit; Speegle, Magen; Van Gulick, Ana E.

    2014-01-01

    Some research finds that face recognition is largely independent from the recognition of other objects; a specialized and innate ability to recognize faces could therefore have little or nothing to do with our ability to recognize objects. We propose a new framework in which recognition performance for any category is the product of domain-general ability and category-specific experience. In Experiment 1, we show that the overlap between face and object recognition depends on experience with objects. In 256 subjects we measured face recognition, object recognition for eight categories, and self-reported experience with these categories. Experience predicted neither face recognition nor object recognition but moderated their relationship: Face recognition performance is increasingly similar to object recognition performance with increasing object experience. If a subject has a lot of experience with objects and is found to perform poorly, they also prove to have a low ability with faces. In a follow-up survey, we explored the dimensions of experience with objects that may have contributed to self-reported experience in Experiment 1. Different dimensions of experience appear to be more salient for different categories, with general self-reports of expertise reflecting judgments of verbal knowledge about a category more than judgments of visual performance. The complexity of experience and current limitations in its measurement support the importance of aggregating across multiple categories. Our findings imply that both face and object recognition are supported by a common, domain-general ability expressed through experience with a category and best measured when accounting for experience. PMID:24993021
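
The moderation claim in this abstract (experience changes the strength of the face-object relationship) can be illustrated with a small simulation. The generative model below is my own toy instantiation of the common-ability framework, not the authors' analysis: object recognition expresses a shared ability only through category experience, so the face-object correlation grows with experience.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # simulated subjects (more than the study's 256, for stability)

# Domain-general ability and category-specific experience.
ability = rng.normal(size=n)
experience = rng.uniform(0.0, 1.0, size=n)

# Object recognition expresses ability only through experience;
# face recognition (near-ceiling experience for everyone) expresses
# it directly. Both get independent measurement noise.
object_rec = experience * ability + rng.normal(scale=0.3, size=n)
face_rec = ability + rng.normal(scale=0.3, size=n)

# Median split on experience: the face-object correlation should be
# stronger in the high-experience half, i.e. experience moderates it.
lo = experience < np.median(experience)
r_lo = np.corrcoef(face_rec[lo], object_rec[lo])[0, 1]
r_hi = np.corrcoef(face_rec[~lo], object_rec[~lo])[0, 1]
```

Under this model `r_hi` clearly exceeds `r_lo`, mirroring the paper's finding that face and object performance converge as object experience increases.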

  13. Experience moderates overlap between object and face recognition, suggesting a common ability.

    PubMed

    Gauthier, Isabel; McGugin, Rankin W; Richler, Jennifer J; Herzmann, Grit; Speegle, Magen; Van Gulick, Ana E

    2014-07-03

    Some research finds that face recognition is largely independent from the recognition of other objects; a specialized and innate ability to recognize faces could therefore have little or nothing to do with our ability to recognize objects. We propose a new framework in which recognition performance for any category is the product of domain-general ability and category-specific experience. In Experiment 1, we show that the overlap between face and object recognition depends on experience with objects. In 256 subjects we measured face recognition, object recognition for eight categories, and self-reported experience with these categories. Experience predicted neither face recognition nor object recognition but moderated their relationship: Face recognition performance is increasingly similar to object recognition performance with increasing object experience. If a subject has a lot of experience with objects and is found to perform poorly, they also prove to have a low ability with faces. In a follow-up survey, we explored the dimensions of experience with objects that may have contributed to self-reported experience in Experiment 1. Different dimensions of experience appear to be more salient for different categories, with general self-reports of expertise reflecting judgments of verbal knowledge about a category more than judgments of visual performance. The complexity of experience and current limitations in its measurement support the importance of aggregating across multiple categories. Our findings imply that both face and object recognition are supported by a common, domain-general ability expressed through experience with a category and best measured when accounting for experience. © 2014 ARVO.

  14. The fusiform face area: a cortical region specialized for the perception of faces

    PubMed Central

    Kanwisher, Nancy; Yovel, Galit

    2006-01-01

    Faces are among the most important visual stimuli we perceive, informing us not only about a person's identity, but also about their mood, sex, age and direction of gaze. The ability to extract this information within a fraction of a second of viewing a face is important for normal social interactions and has probably played a critical role in the survival of our primate ancestors. Considerable evidence from behavioural, neuropsychological and neurophysiological investigations supports the hypothesis that humans have specialized cognitive and neural mechanisms dedicated to the perception of faces (the face-specificity hypothesis). Here, we review the literature on a region of the human brain that appears to play a key role in face perception, known as the fusiform face area (FFA). Section 1 outlines the theoretical background for much of this work. The face-specificity hypothesis falls squarely on one side of a longstanding debate in the fields of cognitive science and cognitive neuroscience concerning the extent to which the mind/brain is composed of: (i) special-purpose (‘domain-specific’) mechanisms, each dedicated to processing a specific kind of information (e.g. faces, according to the face-specificity hypothesis), versus (ii) general-purpose (‘domain-general’) mechanisms, each capable of operating on any kind of information. Face perception has long served both as one of the prime candidates of a domain-specific process and as a key target for attack by proponents of domain-general theories of brain and mind. Section 2 briefly reviews the prior literature on face perception from behaviour and neurophysiology. This work supports the face-specificity hypothesis and argues against its domain-general alternatives (the individuation hypothesis, the expertise hypothesis and others). Section 3 outlines the more recent evidence on this debate from brain imaging, focusing particularly on the FFA. 
We review the evidence that the FFA is selectively engaged in face perception, by addressing (and rebutting) five of the most widely discussed alternatives to this hypothesis. In §4, we consider recent findings that are beginning to provide clues into the computations conducted in the FFA and the nature of the representations the FFA extracts from faces. We argue that the FFA is engaged both in detecting faces and in extracting the necessary perceptual information to recognize them, and that the properties of the FFA mirror previously identified behavioural signatures of face-specific processing (e.g. the face-inversion effect). Section 5 asks how the computations and representations in the FFA differ from those occurring in other nearby regions of cortex that respond strongly to faces and objects. The evidence indicates clear functional dissociations between these regions, demonstrating that the FFA shows not only functional specificity but also area specificity. We end by speculating in §6 on some of the broader questions raised by current research on the FFA, including the developmental origins of this region and the question of whether faces are unique versus whether similarly specialized mechanisms also exist for other domains of high-level perception and cognition. PMID:17118927

  15. A robust human face detection algorithm

    NASA Astrophysics Data System (ADS)

    Raviteja, Thaluru; Karanam, Srikrishna; Yeduguru, Dinesh Reddy V.

    2012-01-01

    Human face detection plays a vital role in many applications, such as video surveillance, face image database management, and human-computer interfaces. This paper proposes a robust algorithm for face detection in still color images that works well even in crowded environments. The algorithm detects human faces using a conjunction of skin-color histogram analysis, morphological processing, and geometrical analysis. To reinforce the accuracy of face detection, we further identify mouth and eye regions to establish the presence or absence of a face in a particular region of interest.
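
The morphological processing stage named in this abstract can be sketched in plain NumPy. The 3x3 cross-shaped erosion/dilation helpers below are my own minimal versions (not the paper's code); opening a binary skin mask with them removes isolated skin-colored speckles while preserving larger candidate regions:

```python
import numpy as np

def binary_dilate(mask):
    """One dilation step with a 3x3 cross: a pixel turns on if any
    4-neighbour is on (outside the image counts as off)."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def binary_erode(mask):
    """One erosion step with a 3x3 cross: a pixel survives only if all
    4-neighbours are on (outside the image counts as on)."""
    out = mask.copy()
    out[1:, :] &= mask[:-1, :]
    out[:-1, :] &= mask[1:, :]
    out[:, 1:] &= mask[:, :-1]
    out[:, :-1] &= mask[:, 1:]
    return out

def binary_open(mask):
    """Opening (erode, then dilate): drops blobs smaller than the
    structuring element, keeps larger ones."""
    return binary_dilate(binary_erode(mask))

# A 3x3 skin blob survives (reduced to a cross); a lone noise pixel vanishes.
mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True   # candidate face region
mask[0, 0] = True       # isolated skin-colored noise pixel
opened = binary_open(mask)
```

In practice one would use `scipy.ndimage.binary_opening`; the hand-rolled version above just makes the cleanup step concrete.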

  16. The Changing Face of Afghanistan, 2001-08

    DTIC Science & Technology

    2011-07-01

    accordingly, using all of its relevant resources. While the administration recognized the enemy facing the United States and the civilized world ...and the rest of the civilized world several times. The President called the attacks “despicable acts of war” on September 13, 2001, and declared the...facing the United States and the civilized world was a global network of Islamic extremist groups, of which al Qaeda is but one, and their state and

  17. The Effects of Anti-Black Attitudes and Fear of Rape on Accuracy for the Recognition of Black and White Faces: Another Step Beyond the Layperson's Knowledge.

    ERIC Educational Resources Information Center

    Mack, David B.; And Others

    It was hypothesized that young white women who held antiblack attitudes and who were most fearful of being raped would be less accurate in recognizing photographs of black faces than of white faces, in comparison with young white women without these attitudes and fears. Subjects completed a racial attitude scale and a question measuring their fear…

  18. Behavioral dissociation between emotional and non-emotional facial expressions in congenital prosopagnosia

    PubMed Central

    Daini, Roberta; Comparetti, Chiara M.; Ricciardelli, Paola

    2014-01-01

    Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus process this facial dimension independently from features (which are impaired in CP) and from basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typically developed individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expression (task 1). Interestingly, and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition. PMID:25520643

  19. Behavioral dissociation between emotional and non-emotional facial expressions in congenital prosopagnosia.

    PubMed

    Daini, Roberta; Comparetti, Chiara M; Ricciardelli, Paola

    2014-01-01

    Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus process this facial dimension independently from features (which are impaired in CP) and from basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typically developed individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expression (task 1). Interestingly, and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition.

  20. Development of preference for conspecific faces in human infants.

    PubMed

    Sanefuji, Wakako; Wada, Kazuko; Yamamoto, Tomoka; Mohri, Ikuko; Taniike, Masako

    2014-04-01

    Previous studies have proposed that humans may be born with mechanisms that attend to conspecifics. However, as previous studies have relied on stimuli featuring human adults, it remains unclear whether infants attend only to adult humans or to the entire human species. We found that 1-month-old infants (n = 23) were able to differentiate between human and monkey infants' faces; however, they exhibited no preference for human infants' faces over monkey infants' faces (n = 24) and discriminated individual differences only within the category of human infants' faces (n = 30). We successfully replicated previous findings that 1-month-old infants (n = 42) preferred adult humans, even adults of other races, to adult monkeys. Further, by 3 months of age, infants (n = 55) preferred human faces to monkey faces with both infant and adult stimuli. Human infants' spontaneous preference for conspecific faces appears to be initially limited to conspecific adults and afterward extended to conspecific infants. Future research should attempt to determine whether preference for human adults results from some innate tendency to attend to conspecific adults or from the impact of early experiences with adults. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  1. Disrupting Myths of Poverty in the Face of Resistance

    ERIC Educational Resources Information Center

    Pollock, Katina; Lopez, Ann; Joshee, Reva

    2013-01-01

    This case disrupts some of the prevalent myths about families from low-income and poor households held by educators. Recognizing the inherent tensions, this case demonstrates the importance of creating equitable and inclusive learning environments. We presented some of the challenges faced by Marcus, a progressive principal, as he attempts to…

  2. Recognizing Faces Based on Inferred Traits in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Ramachandran, Rajani; Mitchell, Peter; Ropar, Danielle

    2010-01-01

    Recent findings indicate that individuals with autism spectrum disorders (ASD) could, surprisingly, infer traits from behavioural descriptions. Now we need to know whether or not individuals with ASD are able to use trait information to identify people by their faces. In this study participants with and without ASD were presented with pairs of…

  3. Environmental Education for a Sustainable Future

    ERIC Educational Resources Information Center

    Schlesinger, William H.

    2004-01-01

    The American public is now faced with a baffling array of new environmental issues--much more complicated than the problems people faced 30 years ago. Scientists recognize new threats to the biosphere, the fabric of natural ecosystems, and the diversity of plants and animals that inhabit them. Unlike the obvious, toxic pollutants that spurred the…

  4. 77 FR 55773 - Airworthiness Directives; The Boeing Company Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-11

    ...) inspections for cracking of the left and right rib hinge bearing lugs of the aft face of the center section of... bearing lugs of the aft face of the center section of the horizontal stabilizer; measuring crack length...). Recognition That Reporting of Findings Is Not Required American Airlines stated it recognizes that reporting...

  5. Online Class Size, Note Reading, Note Writing and Collaborative Discourse

    ERIC Educational Resources Information Center

    Qiu, Mingzhu; Hewitt, Jim; Brett, Clare

    2012-01-01

    Researchers have long recognized class size as affecting students' performance in face-to-face contexts. However, few studies have examined the effects of class size on exact reading and writing loads in online graduate-level courses. This mixed-methods study examined relationships among class size, note reading, note writing, and collaborative…

  6. Superior voice recognition in a patient with acquired prosopagnosia and object agnosia.

    PubMed

    Hoover, Adria E N; Démonet, Jean-François; Steeves, Jennifer K E

    2010-11-01

    Anecdotally, it has been reported that individuals with acquired prosopagnosia compensate for their inability to recognize faces by using other person-identity cues such as hair, gait or the voice. Are they therefore superior at the use of non-face cues, specifically voices, for person identity? Here, we empirically measure person and object identity recognition in a patient with acquired prosopagnosia and object agnosia. We quantify person identity (face and voice) and object identity (car and horn) recognition for visual, auditory, and bimodal (visual and auditory) stimuli. The patient is unable to recognize faces or cars, consistent with his prosopagnosia and object agnosia, respectively. He is perfectly able to recognize people's voices, car horns, and bimodal stimuli. These data show a reverse shift in the typical weighting of visual over auditory information for audiovisual stimuli in a compromised visual recognition system. Moreover, the patient shows selectively superior voice recognition compared to the controls, revealing that two different stimulus domains, persons and objects, may not be equally affected by sensory adaptation effects. This also implies that person and object identity recognition are processed in separate pathways. These data demonstrate that an individual with acquired prosopagnosia and object agnosia can compensate for the visual impairment and become quite skilled at using spared aspects of sensory processing. In the case of acquired prosopagnosia, it is advantageous to develop a superior use of voices for person identity recognition in everyday life. Copyright © 2010 Elsevier Ltd. All rights reserved.

  7. Fast hierarchical knowledge-based approach for human face detection in color images

    NASA Astrophysics Data System (ADS)

    Jiang, Jun; Gong, Jie; Zhang, Guilin; Hu, Ruolan

    2001-09-01

    This paper presents a fast hierarchical knowledge-based approach for automatically detecting multi-scale upright faces in still color images. The approach consists of three levels. At the highest level, skin-like regions are determined by a skin model based on the hue and saturation attributes in HSV color space, together with the red and green attributes in normalized color space. In level 2, a new eye model is devised to select face candidates within the segmented skin-like regions. An important feature of the eye model is that it is independent of face scale, so faces of different scales can be found by scanning the image only once, which greatly reduces the computation time of face detection. In level 3, a mosaic image model of the human face, which matches the physical structure of the face well, is applied to judge whether a face is present in each candidate region. This model includes edge and gray-level rules. Experimental results show that the approach is robust and fast, and it has wide application prospects in human-computer interaction, video telephony and related areas.
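
The level-1 skin model combines hue and saturation in HSV with normalized red/green. A minimal per-pixel sketch is below; the function name and all numeric thresholds are illustrative placeholders of mine, not values taken from the paper:

```python
import colorsys

def is_skin_pixel(r, g, b):
    """Classify one RGB pixel (0-255 channels) as skin-like using an
    HSV hue/saturation window plus a normalized red/green check.
    All thresholds here are illustrative, not the paper's values."""
    h, s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    total = r + g + b
    if total == 0:
        return False
    rn, gn = r / total, g / total        # normalized red and green
    hue_ok = h <= 0.14 or h >= 0.94      # reddish hues (hue wraps at 0)
    sat_ok = 0.15 <= s <= 0.75           # neither grey nor oversaturated
    rg_ok = rn > gn and rn > 0.35        # skin is red-dominant
    return hue_ok and sat_ok and rg_ok

skin_like = is_skin_pixel(224, 172, 138)   # a light skin tone: passes
blue = is_skin_pixel(30, 60, 200)          # saturated blue: rejected
```

Applying such a predicate over the whole image yields the binary skin-like region map that the eye model (level 2) and mosaic model (level 3) then refine.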

  8. Holistic processing is finely tuned for faces of one's own race.

    PubMed

    Michel, Caroline; Rossion, Bruno; Han, Jaehyun; Chung, Chan-Sup; Caldara, Roberto

    2006-07-01

    Recognizing individual faces outside one's race poses difficulty, a phenomenon known as the other-race effect. Most researchers agree that this effect results from differential experience with same-race (SR) and other-race (OR) faces. However, the specific processes that develop with visual experience and underlie the other-race effect remain to be clarified. We tested whether the integration of facial features into a whole representation (holistic processing) was larger for SR than OR faces in Caucasians and Asians without life experience with OR faces. For both classes of participants, recognition of the upper half of a composite-face stimulus was more disrupted by the bottom half (the composite-face effect) for SR than OR faces, demonstrating that SR faces are processed more holistically than OR faces. This differential holistic processing for faces of different races, probably a by-product of visual experience, may be a critical factor in the other-race effect.

  9. Issues and special features of animal health research

    PubMed Central

    2011-01-01

    In the rapidly changing context of research on animal health, INRA launched a collective discussion on the challenges facing the field, its distinguishing features, and synergies with biomedical research. As has been declared forcibly by the heads of WHO, FAO and OIE, the challenges facing animal health, beyond diseases transmissible to humans, are critically important and involve food security, agriculture economics, and the ensemble of economic activities associated with agriculture. There are in addition issues related to public health (zoonoses, xenobiotics, antimicrobial resistance), the environment, and animal welfare. Animal health research is distinguished by particular methodologies and scientific questions that stem from the specific biological features of domestic species and from animal husbandry practices. It generally does not explore the same scientific questions as research on human biology, even when the same pathogens are being studied, and the discipline is rooted in a very specific agricultural and economic context. Generic and methodological synergies nevertheless exist with biomedical research, particularly with regard to tools and biological models. Certain domestic species furthermore present more functional similarities with humans than laboratory rodents. The singularity of animal health research in relation to biomedical research should be taken into account in the organization, evaluation, and funding of the field through a policy that clearly recognizes the specific issues at stake. At the same time, the One Health approach should facilitate closer collaboration between biomedical and animal health research at the level of research teams and programmes. PMID:21864344

  10. Issues and special features of animal health research.

    PubMed

    Ducrot, Christian; Bed'hom, Bertrand; Béringue, Vincent; Coulon, Jean-Baptiste; Fourichon, Christine; Guérin, Jean-Luc; Krebs, Stéphane; Rainard, Pascal; Schwartz-Cornil, Isabelle; Torny, Didier; Vayssier-Taussat, Muriel; Zientara, Stephan; Zundel, Etienne; Pineau, Thierry

    2011-08-24

    In the rapidly changing context of research on animal health, INRA launched a collective discussion on the challenges facing the field, its distinguishing features, and synergies with biomedical research. As has been declared forcibly by the heads of WHO, FAO and OIE, the challenges facing animal health, beyond diseases transmissible to humans, are critically important and involve food security, agriculture economics, and the ensemble of economic activities associated with agriculture. There are in addition issues related to public health (zoonoses, xenobiotics, antimicrobial resistance), the environment, and animal welfare. Animal health research is distinguished by particular methodologies and scientific questions that stem from the specific biological features of domestic species and from animal husbandry practices. It generally does not explore the same scientific questions as research on human biology, even when the same pathogens are being studied, and the discipline is rooted in a very specific agricultural and economic context. Generic and methodological synergies nevertheless exist with biomedical research, particularly with regard to tools and biological models. Certain domestic species furthermore present more functional similarities with humans than laboratory rodents. The singularity of animal health research in relation to biomedical research should be taken into account in the organization, evaluation, and funding of the field through a policy that clearly recognizes the specific issues at stake. At the same time, the One Health approach should facilitate closer collaboration between biomedical and animal health research at the level of research teams and programmes.

  11. Preference for facial averageness: Evidence for a common mechanism in human and macaque infants

    PubMed Central

    Damon, Fabrice; Méary, David; Quinn, Paul C.; Lee, Kang; Simpson, Elizabeth A.; Paukner, Annika; Suomi, Stephen J.; Pascalis, Olivier

    2017-01-01

Human adults and infants show a preference for average faces, which could stem from a general processing mechanism and may be shared among primates. However, little is known about preference for facial averageness in monkeys. We used a comparative developmental approach and eye-tracking methodology to assess visual attention in human and macaque infants to faces naturally varying in their distance from a prototypical face. In Experiment 1, we examined the preference for faces relatively close to or far from the prototype in 12-month-old human infants with human adult female faces. Infants preferred faces closer to the average than faces farther from it. In Experiment 2, we measured the looking time of 3-month-old rhesus macaques (Macaca mulatta) viewing macaque faces varying in their distance from the prototype. Like human infants, macaque infants looked longer at faces closer to the average. In Experiments 3 and 4, both species were presented with unfamiliar categories of faces (i.e., macaque infants tested with adult macaque faces; human infants and adults tested with infant macaque faces) and showed no prototype preferences, suggesting that the prototypicality effect is experience-dependent. Overall, the findings suggest a common processing mechanism across species, leading to averageness preferences in primates. PMID:28406237

  12. Emotion recognition through static faces and moving bodies: a comparison between typically developed adults and individuals with high level of autistic traits

    PubMed Central

    Actis-Grosso, Rossana; Bossi, Francesco; Ricciardelli, Paola

    2015-01-01

We investigated whether the type of stimulus (pictures of static faces vs. body motion) contributes differently to the recognition of emotions. The performance (accuracy and response times) of 25 Low Autistic Traits (LAT group) young adults (21 males) and 20 young adults (16 males) with either High Autistic Traits or with High Functioning Autism Spectrum Disorder (HAT group) was compared in the recognition of four emotions (Happiness, Anger, Fear, and Sadness) either shown in static faces or conveyed by moving body point-light displays (PLDs). Overall, HAT individuals were as accurate as LAT ones in perceiving emotions both with faces and with PLDs. Moreover, they correctly described non-emotional actions depicted by PLDs, indicating that they perceived the motion conveyed by the PLDs per se. For LAT participants, happiness proved to be the easiest emotion to be recognized: in line with previous studies we found a happy face advantage for faces, which for the first time was also found for bodies (happy body advantage). Furthermore, LAT participants recognized sadness better by static faces and fear by PLDs. This advantage for motion kinematics in the recognition of fear was not present in HAT participants, suggesting that (i) emotion recognition is not generally impaired in HAT individuals, (ii) the cues exploited for emotion recognition by LAT and HAT groups are not always the same. These findings are discussed against the background of emotional processing in typically and atypically developed individuals. PMID:26557101

  13. Emotion recognition through static faces and moving bodies: a comparison between typically developed adults and individuals with high level of autistic traits.

    PubMed

    Actis-Grosso, Rossana; Bossi, Francesco; Ricciardelli, Paola

    2015-01-01

We investigated whether the type of stimulus (pictures of static faces vs. body motion) contributes differently to the recognition of emotions. The performance (accuracy and response times) of 25 Low Autistic Traits (LAT group) young adults (21 males) and 20 young adults (16 males) with either High Autistic Traits or with High Functioning Autism Spectrum Disorder (HAT group) was compared in the recognition of four emotions (Happiness, Anger, Fear, and Sadness) either shown in static faces or conveyed by moving body point-light displays (PLDs). Overall, HAT individuals were as accurate as LAT ones in perceiving emotions both with faces and with PLDs. Moreover, they correctly described non-emotional actions depicted by PLDs, indicating that they perceived the motion conveyed by the PLDs per se. For LAT participants, happiness proved to be the easiest emotion to be recognized: in line with previous studies we found a happy face advantage for faces, which for the first time was also found for bodies (happy body advantage). Furthermore, LAT participants recognized sadness better by static faces and fear by PLDs. This advantage for motion kinematics in the recognition of fear was not present in HAT participants, suggesting that (i) emotion recognition is not generally impaired in HAT individuals, (ii) the cues exploited for emotion recognition by LAT and HAT groups are not always the same. These findings are discussed against the background of emotional processing in typically and atypically developed individuals.

  14. Two areas for familiar face recognition in the primate brain.

    PubMed

    Landi, Sofia M; Freiwald, Winrich A

    2017-08-11

    Familiarity alters face recognition: Familiar faces are recognized more accurately than unfamiliar ones and under difficult viewing conditions when unfamiliar face recognition fails. The neural basis for this fundamental difference remains unknown. Using whole-brain functional magnetic resonance imaging, we found that personally familiar faces engage the macaque face-processing network more than unfamiliar faces. Familiar faces also recruited two hitherto unknown face areas at anatomically conserved locations within the perirhinal cortex and the temporal pole. These two areas, but not the core face-processing network, responded to familiar faces emerging from a blur with a characteristic nonlinear surge, akin to the abruptness of familiar face recognition. In contrast, responses to unfamiliar faces and objects remained linear. Thus, two temporal lobe areas extend the core face-processing network into a familiar face-recognition system. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  15. High confidence in falsely recognizing prototypical faces.

    PubMed

    Sampaio, Cristina; Reinke, Victoria; Mathews, Jeffrey; Swart, Alexandra; Wallinger, Stephen

    2018-06-01

We applied a metacognitive approach to investigate confidence in recognition of prototypical faces. Participants were presented with sets of faces constructed digitally as deviations from prototype/base faces. Participants were then tested with a simple recognition task (Experiment 1) or a multiple-choice task (Experiment 2) for old and new items plus new prototypes, and they showed a high rate of confident false alarms to the prototypes. The confidence-accuracy relationship in this face recognition paradigm was positive for standard items but negative for the prototypes; thus, it was contingent on the nature of the items used. The data have implications for lineups that employ match-to-suspect strategies.

  16. Face Pareidolia in the Rhesus Monkey.

    PubMed

    Taubert, Jessica; Wardle, Susan G; Flessert, Molly; Leopold, David A; Ungerleider, Leslie G

    2017-08-21

    Face perception in humans and nonhuman primates is rapid and accurate [1-4]. In the human brain, a network of visual-processing regions is specialized for faces [5-7]. Although face processing is a priority of the primate visual system, face detection is not infallible. Face pareidolia is the compelling illusion of perceiving facial features on inanimate objects, such as the illusory face on the surface of the moon. Although face pareidolia is commonly experienced by humans, its presence in other species is unknown. Here we provide evidence for face pareidolia in a species known to possess a complex face-processing system [8-10]: the rhesus monkey (Macaca mulatta). In a visual preference task [11, 12], monkeys looked longer at photographs of objects that elicited face pareidolia in human observers than at photographs of similar objects that did not elicit illusory faces. Examination of eye movements revealed that monkeys fixated the illusory internal facial features in a pattern consistent with how they view photographs of faces [13]. Although the specialized response to faces observed in humans [1, 3, 5-7, 14] is often argued to be continuous across primates [4, 15], it was previously unclear whether face pareidolia arose from a uniquely human capacity. For example, pareidolia could be a product of the human aptitude for perceptual abstraction or result from frequent exposure to cartoons and illustrations that anthropomorphize inanimate objects. Instead, our results indicate that the perception of illusory facial features on inanimate objects is driven by a broadly tuned face-detection mechanism that we share with other species. Published by Elsevier Ltd.

  17. Multivoxel patterns in face-sensitive temporal regions reveal an encoding schema based on detecting life in a face.

    PubMed

    Looser, Christine E; Guntupalli, Jyothi S; Wheatley, Thalia

    2013-10-01

    More than a decade of research has demonstrated that faces evoke prioritized processing in a 'core face network' of three brain regions. However, whether these regions prioritize the detection of global facial form (shared by humans and mannequins) or the detection of life in a face has remained unclear. Here, we dissociate form-based and animacy-based encoding of faces by using animate and inanimate faces with human form (humans, mannequins) and dog form (real dogs, toy dogs). We used multivariate pattern analysis of BOLD responses to uncover the representational similarity space for each area in the core face network. Here, we show that only responses in the inferior occipital gyrus are organized by global facial form alone (human vs dog) while animacy becomes an additional organizational priority in later face-processing regions: the lateral fusiform gyri (latFG) and right superior temporal sulcus. Additionally, patterns evoked by human faces were maximally distinct from all other face categories in the latFG and parts of the extended face perception system. These results suggest that once a face configuration is perceived, faces are further scrutinized for whether the face is alive and worthy of social cognitive resources.

  18. Decoding facial expressions based on face-selective and motion-sensitive areas.

    PubMed

    Liang, Yin; Liu, Baolin; Xu, Junhai; Zhang, Gaoyan; Li, Xianglin; Wang, Peiyuan; Wang, Bin

    2017-06-01

Humans can easily recognize others' facial expressions. Among the brain substrates that enable this ability, considerable attention has been paid to face-selective areas; in contrast, whether motion-sensitive areas, which clearly exhibit sensitivity to facial movements, are involved in facial expression recognition remained unclear. The present functional magnetic resonance imaging (fMRI) study used multi-voxel pattern analysis (MVPA) to explore facial expression decoding in both face-selective and motion-sensitive areas. In a block design experiment, participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise) in images, videos, and eyes-obscured videos. Due to the use of multiple stimulus types, the impacts of facial motion and eye-related information on facial expression decoding were also examined. It was found that motion-sensitive areas showed significant responses to emotional expressions and that dynamic expressions could be successfully decoded in both face-selective and motion-sensitive areas. Compared with static stimuli, dynamic expressions elicited consistently higher neural responses and decoding performance in all regions. A significant decrease in both activation and decoding accuracy due to the absence of eye-related information was also observed. Overall, the findings showed that emotional expressions are represented in motion-sensitive areas in addition to conventional face-selective areas, suggesting that motion-sensitive regions may also effectively contribute to facial expression recognition. The results also suggested that facial motion and eye-related information played important roles by carrying considerable expression information that could facilitate facial expression recognition. Hum Brain Mapp 38:3113-3125, 2017. © 2017 Wiley Periodicals, Inc.

  19. Do people have insight into their face recognition abilities?

    PubMed

    Palermo, Romina; Rossion, Bruno; Rhodes, Gillian; Laguesse, Renaud; Tez, Tolga; Hall, Bronwyn; Albonico, Andrea; Malaspina, Manuela; Daini, Roberta; Irons, Jessica; Al-Janabi, Shahd; Taylor, Libby C; Rivolta, Davide; McKone, Elinor

    2017-02-01

Diagnosis of developmental or congenital prosopagnosia (CP) involves self-report of everyday face recognition difficulties, which are corroborated with poor performance on behavioural tests. This approach requires accurate self-evaluation. We examine the extent to which typical adults have insight into their face recognition abilities across four experiments involving nearly 300 participants. The experiments used five tests of face recognition ability: two that tap into the ability to learn and recognize previously unfamiliar faces [the Cambridge Face Memory Test, CFMT; Duchaine, B., & Nakayama, K. (2006). The Cambridge Face Memory Test: Results for neurologically intact individuals and an investigation of its validity using inverted face stimuli and prosopagnosic participants. Neuropsychologia, 44(4), 576-585. doi:10.1016/j.neuropsychologia.2005.07.001; and a newly devised test based on the CFMT but where the study phases involve watching short movies rather than viewing static faces-the CFMT-Films] and three that tap face matching [Benton Facial Recognition Test, BFRT; Benton, A., Sivan, A., Hamsher, K., Varney, N., & Spreen, O. (1983). Contribution to neuropsychological assessment. New York: Oxford University Press; and two recently devised sequential face matching tests]. Self-reported ability was measured with the 15-item Kennerknecht et al. questionnaire [Kennerknecht, I., Ho, N. Y., & Wong, V. C. (2008). Prevalence of hereditary prosopagnosia (HPA) in Hong Kong Chinese population. American Journal of Medical Genetics Part A, 146A(22), 2863-2870. doi:10.1002/ajmg.a.32552]; two single-item questions assessing face recognition ability; and a new 77-item meta-cognition questionnaire. Overall, we find that adults with typical face recognition abilities have only modest insight into their ability to recognize faces on behavioural tests. In a fifth experiment, we assess self-reported face recognition ability in people with CP and find that some people who expect to perform poorly on behavioural tests of face recognition do indeed perform poorly. However, it is not yet clear whether individuals within this group of poor performers have greater levels of insight (i.e., into their degree of impairment) than those with more typical levels of performance.

  20. Super-recognizers: People with extraordinary face recognition ability

    PubMed Central

    Russell, Richard; Duchaine, Brad; Nakayama, Ken

    2014-01-01

    We tested four people who claimed to have significantly better than ordinary face recognition ability. Exceptional ability was confirmed in each case. On two very different tests of face recognition, all four experimental subjects performed beyond the range of control subject performance. They also scored significantly better than average on a perceptual discrimination test with faces. This effect was larger with upright than inverted faces, and the four subjects showed a larger ‘inversion effect’ than control subjects, who in turn showed a larger inversion effect than developmental prosopagnosics. This indicates an association between face recognition ability and the magnitude of the inversion effect. Overall, these ‘super-recognizers’ are about as good at face recognition and perception as developmental prosopagnosics are bad. Our findings demonstrate the existence of people with exceptionally good face recognition ability, and show that the range of face recognition and face perception ability is wider than previously acknowledged. PMID:19293090

  1. Tolerance of geometric distortions in infant's face recognition.

    PubMed

    Yamashita, Wakayo; Kanazawa, So; Yamaguchi, Masami K

    2014-02-01

    The aim of the current study is to reveal the effect of global linear transformations (shearing, horizontal stretching, and vertical stretching) on the recognition of familiar faces (e.g., a mother's face) in 6- to 7-month-old infants. In this experiment, we applied the global linear transformations to both the infants' own mother's face and to a stranger's face, and we tested infants' preference between these faces. We found that only 7-month-old infants maintained preference for their own mother's face during the presentation of vertical stretching, while the preference for the mother's face disappeared during the presentation of shearing or horizontal stretching. These findings suggest that 7-month-old infants might not recognize faces based on calculating the absolute distance between facial features, and that the vertical dimension of facial features might be more related to infants' face recognition rather than the horizontal dimension. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Can human eyes prevent perceptual narrowing for monkey faces in human infants?

    PubMed

    Damon, Fabrice; Bayet, Laurie; Quinn, Paul C; Hillairet de Boisferon, Anne; Méary, David; Dupierrix, Eve; Lee, Kang; Pascalis, Olivier

    2015-07-01

Perceptual narrowing has been observed in human infants for monkey faces: 6-month-olds can discriminate between them, whereas older infants from 9 months of age display difficulty discriminating between them. The source of the difficulty infants from 9 months of age have in processing monkey faces has not been clearly identified. It could be due to the structural characteristics of monkey faces, particularly the key facial features that differ from human faces. The current study aimed to investigate whether the information conveyed by the eyes is of importance. We examined whether the presence of Caucasian human eyes in monkey faces allows recognition to be maintained in 6-month-olds and facilitates recognition in 9- and 12-month-olds. Our results revealed that the presence of human eyes in monkey faces maintains recognition for those faces at 6 months of age and partially facilitates recognition of those faces at 9 months of age, but not at 12 months of age. The findings are interpreted in the context of perceptual narrowing and suggest that the attenuation of processing of other-species faces is not reversed by the presence of human eyes. © 2015 Wiley Periodicals, Inc.

  3. Supplemental Instruction Online: As Effective as the Traditional Face-to-Face Model?

    ERIC Educational Resources Information Center

    Hizer, Suzanne E.; Schultz, P. W.; Bray, Richard

    2017-01-01

    Supplemental Instruction (SI) is a well-recognized model of academic assistance with a history of empirical evidence demonstrating increases in student grades and decreases in failure rates across many higher education institutions. However, as college students become more accustomed to learning in online venues, what is not known is whether an SI…

  4. A Rural Education Teacher Preparation Program: Course Design, Student Support and Engagement

    ERIC Educational Resources Information Center

    Eaton, Sarah Elaine; Gereluk, Dianne; Dressler, Roswita; Becker, Sandra

    2017-01-01

    Attracting and retaining teachers for rural and remote areas is a pervasive global problem. Currently, teacher education in Canada is primarily delivered in face-to-face formats located in urban centres or satellite campuses. There is a need for relevant and responsive teacher education programs for rural pre-service teachers. Recognizing this…

  5. A Descriptive Study of Head Start Families: FACES Technical Report I.

    ERIC Educational Resources Information Center

    O'Brien, Robert W.; D'Elio, Mary Ann; Vaden-Kiernan, Michael; Magee, Candice; Younoszai, Tina; Keane, Michael J.; Connell, David C.; Hailey, Linda

    Recognizing that families have played an essential role in the Head Start philosophy since the program's inception, the Head Start Family and Child Experiences Survey (FACES) is an effort to develop a descriptive profile of families participating in the Head Start program and services, as well as to develop, test, and refine Program Performance…

  6. Rhesus macaques recognize unique multi-modal face-voice relations of familiar individuals and not of unfamiliar ones

    PubMed Central

    Habbershon, Holly M.; Ahmed, Sarah Z.; Cohen, Yale E.

    2013-01-01

    Communication signals in non-human primates are inherently multi-modal. However, for laboratory-housed monkeys, there is relatively little evidence in support of the use of multi-modal communication signals in individual recognition. Here, we used a preferential-looking paradigm to test whether laboratory-housed rhesus could “spontaneously” (i.e., in the absence of operant training) use multi-modal communication stimuli to discriminate between known conspecifics. The multi-modal stimulus was a silent movie of two monkeys vocalizing and an audio file of the vocalization from one of the monkeys in the movie. We found that the gaze patterns of those monkeys that knew the individuals in the movie were reliably biased toward the individual that did not produce the vocalization. In contrast, there was not a systematic gaze pattern for those monkeys that did not know the individuals in the movie. These data are consistent with the hypothesis that laboratory-housed rhesus can recognize and distinguish between conspecifics based on auditory and visual communication signals. PMID:23774779

  7. Emotion recognition deficits associated with ventromedial prefrontal cortex lesions are improved by gaze manipulation.

    PubMed

    Wolf, Richard C; Pujara, Maia; Baskaya, Mustafa K; Koenigs, Michael

    2016-09-01

    Facial emotion recognition is a critical aspect of human communication. Since abnormalities in facial emotion recognition are associated with social and affective impairment in a variety of psychiatric and neurological conditions, identifying the neural substrates and psychological processes underlying facial emotion recognition will help advance basic and translational research on social-affective function. Ventromedial prefrontal cortex (vmPFC) has recently been implicated in deploying visual attention to the eyes of emotional faces, although there is mixed evidence regarding the importance of this brain region for recognition accuracy. In the present study of neurological patients with vmPFC damage, we used an emotion recognition task with morphed facial expressions of varying intensities to determine (1) whether vmPFC is essential for emotion recognition accuracy, and (2) whether instructed attention to the eyes of faces would be sufficient to improve any accuracy deficits. We found that vmPFC lesion patients are impaired, relative to neurologically healthy adults, at recognizing moderate intensity expressions of anger and that recognition accuracy can be improved by providing instructions of where to fixate. These results suggest that vmPFC may be important for the recognition of facial emotion through a role in guiding visual attention to emotionally salient regions of faces. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. [A murder case from 900 years ago? Analysis of extensive cranial trauma observed in a historical skeleton recovered in central Poland].

    PubMed

    Lorkiewicz, Wiesław; Teul, Iwona; Marchelak, Ireneusz; Tyszler, Lubomira

    2011-01-01

This work presents the results of a study of a human skeleton from the early Middle Ages recovered in Pecławice (province of Łódź), presenting signs of extensive cranial trauma suffered perimortem. The skeleton belonged to a 20-30 year-old male of sturdy build, with prominent bone processes, marked right-side asymmetry of the bones and joints of the upper extremities, and tallness (stature well above average for early medieval times). Except for the skull, the skeleton lacks any pathologic or traumatic lesions. The right side of the skull bears signs of three extensive injuries involving the frontal and parietal bones and the temporomandibular joint. Two of them penetrated deeply into the cranial cavity. The nature and location of the lesions suggest that an axe was used and that the victim was not confronted face-to-face. None of the lesions show any signs of healing. Fragmentation of the facial bones, which were mostly incomplete except for the well-preserved mandible, suggests additional blows to the face. These massive injuries must have been fatal due to damage to the brain and main blood vessels of the neck and thus they were recognized as the cause of death of the individual.

  9. Face Recognition Is Shaped by the Use of Sign Language

    ERIC Educational Resources Information Center

    Stoll, Chloé; Palluel-Germain, Richard; Caldara, Roberto; Lao, Junpeng; Dye, Matthew W. G.; Aptel, Florent; Pascalis, Olivier

    2018-01-01

    Previous research has suggested that early deaf signers differ in face processing. Which aspects of face processing are changed and the role that sign language may have played in that change are however unclear. Here, we compared face categorization (human/non-human) and human face recognition performance in early profoundly deaf signers, hearing…

  10. Quality of life philosophy II: what is a human being?

    PubMed

    Ventegodt, Søren; Andersen, Niels Jørgen; Kromann, Maximilian; Merrick, Joav

    2003-12-01

    The human being is a complex matter and many believe that just trying to understand life and what it means to be human is a futile undertaking. We believe that we have to try to understand life and get a grip on the many faces of life, because it can be of great value to us to learn to recognize the fundamental principles of how life is lived to the fullest. Learning to recognize the good and evil forces of life helps us to make use of the good ones. To be human is to balance between hundreds of extremes. Sometimes we have to avoid these extremes, but at other times it seems we should pursue them, to better understand life. With our roots in medicine, we believe in the importance of love for better health. The secret of the heart is when reason and feelings meet and we become whole. Where reason is balanced perfectly by feelings and where mind and body come together in perfect unity, a whole new quality emerges, a quality that is neither feeling nor reason, but something deeper and more complete. In this paper, we outline only enough biology to clarify what the fundamental inner conflicts are about. The insight into these conflicts gives us the key to a great deal of the problems of life. To imagine pleasures greater than sensual pleasures seems impossible to most people. What could such a joy possibly be? But somewhere deep in life exists the finest sweetness, the greatest quality in life, the pure joy of being alive that emerges when we are fully present and life is in balance. This deep joy of life is what we call experiencing the meaning of life.

  11. The Processing of Human Emotional Faces by Pet and Lab Dogs: Evidence for Lateralization and Experience Effects

    PubMed Central

    Barber, Anjuli L. A.; Randi, Dania; Müller, Corsin A.; Huber, Ludwig

    2016-01-01

Of all non-human animals, dogs are very likely the best decoders of human behavior. In addition to a high sensitivity to human attentive status and to ostensive cues, they are able to distinguish between individual human faces and even between human facial expressions. However, so far little is known about how they process human faces and to what extent this is influenced by experience. Here we present an eye-tracking study with dogs from two different living environments with varying experience of humans: pet and lab dogs. The dogs were shown pictures of familiar and unfamiliar human faces expressing four different emotions. The results, extracted from several different eye-tracking measurements, revealed pronounced differences in the face processing of pet and lab dogs, thus indicating an influence of the amount of exposure to humans. In addition, there was some evidence for the influence of both the familiarity and the emotional expression of the face, and strong evidence for a left gaze bias. These findings, together with recent evidence for the dog's ability to discriminate human facial expressions, indicate that dogs are sensitive to some emotions expressed in human faces. PMID:27074009

  12. Song Recognition without Identification: When People Cannot "Name that Tune" but Can Recognize It as Familiar

    ERIC Educational Resources Information Center

    Kostic, Bogdan; Cleary, Anne M.

    2009-01-01

    Recognition without identification (RWI) is a common day-to-day experience (as when recognizing a face or a tune as familiar without being able to identify the person or the song). It is also a well-established laboratory-based empirical phenomenon: When identification of recognition test items is prevented, participants can discriminate between…

  13. A Teacher's Journey: A First-Person Account of How a Gay, Cambodian Refugee Navigated Myriad Barriers to Become Educated in the United States

    ERIC Educational Resources Information Center

    Sam, Kosal; Finley, Susan

    2015-01-01

    Educational institutions, like most social service organizations, need to recognize intersectionality and complexity and move away from monolithic conceptions of homelessness--if they recognize homelessness at all. This first person account of a gay, Cambodian refugee illustrates the enormous complexity schools face in forming institutional…

  14. Pubface: Celebrity face identification based on deep learning

    NASA Astrophysics Data System (ADS)

    Ouanan, H.; Ouanan, M.; Aksasse, B.

    2018-05-01

In this paper, we describe a new real-time application called PubFace, which recognizes celebrities in public spaces by employing a new pose-invariant face recognition deep neural network with an extremely low error rate. To build this application, we make the following contributions: first, we build a novel dataset with over five million labelled faces. Second, we fine-tune the deep convolutional neural network (CNN) VGG-16 architecture on this new dataset. Finally, we deploy the model on the Raspberry Pi 3 Model B using the OpenCV dnn module (OpenCV 3.3).
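    The deployment step this abstract describes, a fine-tuned VGG-16 served through OpenCV 3.3's dnn module, can be sketched roughly as follows. This is a minimal illustration, not the PubFace implementation: the file paths, the label list, and the mean-subtraction values are hypothetical placeholders.

    ```python
    # Rough sketch of a PubFace-style inference step: load a fine-tuned VGG-16
    # Caffe model with OpenCV's dnn module (available since OpenCV 3.3) and map
    # the network's class scores to a celebrity name. Paths, labels, and mean
    # values are illustrative assumptions, not details from the paper.
    import math

    def top_identity(scores, labels):
        """Softmax the raw class scores and return (best label, confidence)."""
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]  # shift by max for stability
        best = max(range(len(scores)), key=lambda i: exps[i])
        return labels[best], exps[best] / sum(exps)

    def identify_face(image_path, prototxt, caffemodel, labels):
        """Run one face image through the network; requires OpenCV >= 3.3."""
        import cv2  # deferred import: OpenCV is only needed for real inference
        net = cv2.dnn.readNetFromCaffe(prototxt, caffemodel)
        # VGG-16 expects 224x224 BGR input; the mean values here are the common
        # ImageNet ones, assumed rather than taken from the PubFace training set.
        blob = cv2.dnn.blobFromImage(cv2.imread(image_path), 1.0, (224, 224),
                                     (104.0, 117.0, 123.0))
        net.setInput(blob)
        return top_identity(net.forward().flatten().tolist(), labels)
    ```

    The score-to-label step can be exercised on its own, e.g. `top_identity([0.1, 2.5, 0.3], ["A", "B", "C"])` picks label `"B"` with the softmax probability of that class as its confidence.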

  15. Preference for Attractive Faces in Human Infants Extends beyond Conspecifics

    ERIC Educational Resources Information Center

    Quinn, Paul C.; Kelly, David J.; Lee, Kang; Pascalis, Olivier; Slater, Alan M.

    2008-01-01

    Human infants, just a few days of age, are known to prefer attractive human faces. We examined whether this preference is human-specific. Three- to 4-month-olds preferred attractive over unattractive domestic and wild cat (tiger) faces (Experiments 1 and 3). The preference was not observed when the faces were inverted, suggesting that it did not…

  16. Efficient search for a face by chimpanzees (Pan troglodytes).

    PubMed

    Tomonaga, Masaki; Imura, Tomoko

    2015-07-16

    The face is quite an important stimulus category for human and nonhuman primates in their social lives. Recent advances in comparative-cognitive research clearly indicate that chimpanzees and humans process faces in a special manner; that is, using holistic or configural processing. Both species exhibit the face-inversion effect in which the inverted presentation of a face deteriorates their perception and recognition. Furthermore, recent studies have shown that humans detect human faces among non-facial objects rapidly. We report that chimpanzees detected chimpanzee faces among non-facial objects quite efficiently. This efficient search was not limited to own-species faces. They also found human adult and baby faces--but not monkey faces--efficiently. Additional testing showed that a front-view face was more readily detected than a profile, suggesting the important role of eye-to-eye contact. Chimpanzees also detected a photograph of a banana as efficiently as a face, but a further examination clearly indicated that the banana was detected mainly due to a low-level feature (i.e., color). Efficient face detection was hampered by an inverted presentation, suggesting that configural processing of faces is a critical element of efficient face detection in both species. This conclusion was supported by a simple simulation experiment using the saliency model.

  17. Efficient search for a face by chimpanzees (Pan troglodytes)

    PubMed Central

    Tomonaga, Masaki; Imura, Tomoko

    2015-01-01

    The face is quite an important stimulus category for human and nonhuman primates in their social lives. Recent advances in comparative-cognitive research clearly indicate that chimpanzees and humans process faces in a special manner; that is, using holistic or configural processing. Both species exhibit the face-inversion effect in which the inverted presentation of a face deteriorates their perception and recognition. Furthermore, recent studies have shown that humans detect human faces among non-facial objects rapidly. We report that chimpanzees detected chimpanzee faces among non-facial objects quite efficiently. This efficient search was not limited to own-species faces. They also found human adult and baby faces-but not monkey faces-efficiently. Additional testing showed that a front-view face was more readily detected than a profile, suggesting the important role of eye-to-eye contact. Chimpanzees also detected a photograph of a banana as efficiently as a face, but a further examination clearly indicated that the banana was detected mainly due to a low-level feature (i.e., color). Efficient face detection was hampered by an inverted presentation, suggesting that configural processing of faces is a critical element of efficient face detection in both species. This conclusion was supported by a simple simulation experiment using the saliency model. PMID:26180944

  18. Controlling the vocabulary for anatomy.

    PubMed Central

    Baud, R. H.; Lovis, C.; Rassinoux, A. M.; Ruch, P.; Geissbuhler, A.

    2002-01-01

When confronted with the representation of human anatomy, natural language processing (NLP) system designers face an unsolved and frequent problem: the lack of a suitable global reference. The available sources in electronic format are numerous, but none adequately fits all the constraints and needs of language analysis. These sources are usually incomplete, difficult to use, or tailored to specific needs. The anatomist's or ontologist's view does not necessarily match that of the linguist. The purpose of this paper is to review the most recognized sources of knowledge in anatomy usable for linguistic analysis. Their potential and limits are emphasized from this point of view. Focus is placed on the role of the consensus work of the International Federation of Associations of Anatomists (IFAA), which produced the Terminologia Anatomica. PMID:12463780

  19. Orienting asymmetries and physiological reactivity in dogs' response to human emotional faces.

    PubMed

    Siniscalchi, Marcello; d'Ingeo, Serenella; Quaranta, Angelo

    2018-06-19

Recent scientific literature shows that emotional cues conveyed by human vocalizations and odours are processed in an asymmetrical way by the canine brain. In the present study, during feeding behaviour, dogs were suddenly presented with 2-D stimuli depicting human faces expressing Ekman's six basic emotions (anger, fear, happiness, sadness, surprise, and disgust) plus a neutral expression, presented simultaneously in the left and right visual hemifields. A bias to turn the head towards the left side (right hemisphere) rather than the right was observed with human faces expressing anger, fear, and happiness, but an opposite bias (left hemisphere) was observed with human faces expressing surprise. Furthermore, dogs displayed higher behavioural and cardiac activity to pictures of human faces expressing a clearly aroused emotional state. Overall, the results demonstrate that dogs are sensitive to emotional cues conveyed by human faces, supporting the existence of an asymmetrical emotional modulation of the canine brain in processing basic human emotions.

  20. Individual differences in adaptive coding of face identity are linked to individual differences in face recognition ability.

    PubMed

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Hayward, William G; Ewing, Louise

    2014-06-01

    Despite their similarity as visual patterns, we can discriminate and recognize many thousands of faces. This expertise has been linked to 2 coding mechanisms: holistic integration of information across the face and adaptive coding of face identity using norms tuned by experience. Recently, individual differences in face recognition ability have been discovered and linked to differences in holistic coding. Here we show that they are also linked to individual differences in adaptive coding of face identity, measured using face identity aftereffects. Identity aftereffects correlated significantly with several measures of face-selective recognition ability. They also correlated marginally with own-race face recognition ability, suggesting a role for adaptive coding in the well-known other-race effect. More generally, these results highlight the important functional role of adaptive face-coding mechanisms in face expertise, taking us beyond the traditional focus on holistic coding mechanisms. PsycINFO Database Record (c) 2014 APA, all rights reserved.
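Norm-based adaptive coding of the kind measured here can be caricatured in a few lines: each face is encoded as its offset from a norm (average) face, and adapting to a face pulls the norm toward it, biasing subsequent encodings away from the adaptor, which is the identity aftereffect. A toy sketch (hypothetical numbers and dimensions, not the authors' model):

```python
import numpy as np

# Faces as points in a low-dimensional "face space".
rng = np.random.default_rng(0)
population = rng.normal(size=(100, 5))   # 100 faces, 5 feature dimensions
norm = population.mean(axis=0)           # the norm (average) face

def encode(face, norm):
    """Norm-based code: identity is the offset from the norm."""
    return face - norm

# Adaptation pulls the norm partway toward the adapting face.
adaptor = population[0]
adapted_norm = norm + 0.3 * (adaptor - norm)

test_face = population[1]
before = encode(test_face, norm)
after = encode(test_face, adapted_norm)

# After adaptation, the test face's code has a smaller component in the
# adaptor's direction, i.e. it is perceived as less adaptor-like.
d = adaptor - norm
print(before @ d > after @ d)  # prints True
```

In this scheme a larger norm shift produces a larger aftereffect, which is the individual-difference quantity the study links to recognition ability.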

  1. A multi-view face recognition system based on cascade face detector and improved Dlib

    NASA Astrophysics Data System (ADS)

    Zhou, Hongjun; Chen, Pei; Shen, Wei

    2018-03-01

In this research, we present a framework for a multi-view face detection and recognition system based on a cascade face detector and improved Dlib. This method aims to solve the problems of low efficiency and low accuracy in multi-view face recognition, to build a multi-view face recognition system, and to discover a suitable monitoring scheme. For face detection, the cascade face detector extracts Haar-like features from the training samples, which are used to train a cascade classifier with the AdaBoost algorithm. Next, for face recognition, we propose an improved distance model based on Dlib to improve the accuracy of multi-view face recognition. Furthermore, we apply this method to recognizing face images taken from different viewing directions, including horizontal, overhead, and looking-up views, and investigate a suitable monitoring scheme. This method works well for multi-view face recognition; simulations and tests show satisfactory experimental results.
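The Haar-like features named in this abstract are computed in constant time from an integral image (summed-area table), the trick that makes Viola-Jones-style cascade detectors fast. A minimal sketch in Python with NumPy (an illustration of the standard technique, not the paper's code; the toy image and names are invented):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended,
    so ii[y, x] equals the sum of img[:y, :x]."""
    ii = img.astype(np.int64).cumsum(axis=0).cumsum(axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def rect_sum(ii, x, y, w, h):
    """O(1) sum of the w x h rectangle with top-left corner (x, y)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, x, y, w, h):
    """Two-rectangle Haar-like feature: left half minus right half.
    A large magnitude signals a vertical intensity edge."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

# Toy 4x4 image: bright left half, dark right half -> strong edge response.
img = np.array([[9, 9, 1, 1]] * 4)
ii = integral_image(img)
print(haar_two_rect(ii, 0, 0, 4, 4))  # prints 64 (72 - 8)
```

AdaBoost then selects and weights many such features into a cascade of stage classifiers, each stage cheaply rejecting most non-face windows.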

  2. Differing Roles of the Face and Voice in Early Human Communication: Roots of Language in Multimodal Expression

    PubMed Central

    Jhang, Yuna; Franklin, Beau; Ramsdell-Hudock, Heather L.; Oller, D. Kimbrough

    2017-01-01

Seeking roots of language, we probed infant facial expressions and vocalizations. Both have roles in language, but the voice plays an especially flexible role, expressing a variety of functions and affect conditions with the same vocal categories—a word can be produced with many different affective flavors. This requirement of language is seen in very early infant vocalizations. We examined the extent to which affect is transmitted by early vocal categories termed “protophones” (squeals, vowel-like sounds, and growls) and by their co-occurring facial expressions, and similarly the extent to which vocal type is transmitted by the voice and co-occurring facial expressions. Our coder agreement data suggest infant affect during protophones was most reliably transmitted by the face (judged in video-only), while vocal type was transmitted most reliably by the voice (judged in audio-only). Voice alone transmitted negative affect more reliably than neutral or positive affect, suggesting infant protophones may be used especially to call for attention when the infant is in distress. By contrast, the face alone provided no significant information about protophone categories. Indeed, coders in the video-only condition could scarcely recognize the difference between silence and voice when coding protophones. The results suggest that partial decoupling of communicative roles for face and voice occurs even in the first months of life. Affect in infancy appears to be transmitted in a way that audio and video aspects are flexibly interwoven, as in mature language. PMID:29423398

  3. Differing Roles of the Face and Voice in Early Human Communication: Roots of Language in Multimodal Expression.

    PubMed

    Jhang, Yuna; Franklin, Beau; Ramsdell-Hudock, Heather L; Oller, D Kimbrough

    2017-01-01

Seeking roots of language, we probed infant facial expressions and vocalizations. Both have roles in language, but the voice plays an especially flexible role, expressing a variety of functions and affect conditions with the same vocal categories-a word can be produced with many different affective flavors. This requirement of language is seen in very early infant vocalizations. We examined the extent to which affect is transmitted by early vocal categories termed "protophones" (squeals, vowel-like sounds, and growls) and by their co-occurring facial expressions, and similarly the extent to which vocal type is transmitted by the voice and co-occurring facial expressions. Our coder agreement data suggest infant affect during protophones was most reliably transmitted by the face (judged in video-only), while vocal type was transmitted most reliably by the voice (judged in audio-only). Voice alone transmitted negative affect more reliably than neutral or positive affect, suggesting infant protophones may be used especially to call for attention when the infant is in distress. By contrast, the face alone provided no significant information about protophone categories. Indeed, coders in the video-only condition could scarcely recognize the difference between silence and voice when coding protophones. The results suggest that partial decoupling of communicative roles for face and voice occurs even in the first months of life. Affect in infancy appears to be transmitted in a way that audio and video aspects are flexibly interwoven, as in mature language.

  4. Galactose uncovers face recognition and mental images in congenital prosopagnosia: the first case report.

    PubMed

    Esins, Janina; Schultz, Johannes; Bülthoff, Isabelle; Kennerknecht, Ingo

    2014-09-01

    A woman in her early 40s with congenital prosopagnosia and attention deficit hyperactivity disorder observed for the first time sudden and extensive improvement of her face recognition abilities, mental imagery, and sense of navigation after galactose intake. This effect of galactose on prosopagnosia has never been reported before. Even if this effect is restricted to a subform of congenital prosopagnosia, galactose might improve the condition of other prosopagnosics. Congenital prosopagnosia, the inability to recognize other people by their face, has extensive negative impact on everyday life. It has a high prevalence of about 2.5%. Monosaccharides are known to have a positive impact on cognitive performance. Here, we report the case of a prosopagnosic woman for whom the daily intake of 5 g of galactose resulted in a remarkable improvement of her lifelong face blindness, along with improved sense of orientation and more vivid mental imagery. All these improvements vanished after discontinuing galactose intake. The self-reported effects of galactose were wide-ranging and remarkably strong but could not be reproduced for 16 other prosopagnosics tested. Indications about heterogeneity within prosopagnosia have been reported; this could explain the difficulty to find similar effects in other prosopagnosics. Detailed analyses of the effects of galactose in prosopagnosia might give more insight into the effects of galactose on human cognition in general. Galactose is cheap and easy to obtain, therefore, a systematic test of its positive effects on other cases of congenital prosopagnosia may be warranted.

  5. The Occipital Face Area Is Causally Involved in Facial Viewpoint Perception

    PubMed Central

    Poltoratski, Sonia; König, Peter; Blake, Randolph; Tong, Frank; Ling, Sam

    2015-01-01

    Humans reliably recognize faces across a range of viewpoints, but the neural substrates supporting this ability remain unclear. Recent work suggests that neural selectivity to mirror-symmetric viewpoints of faces, found across a large network of visual areas, may constitute a key computational step in achieving full viewpoint invariance. In this study, we used repetitive transcranial magnetic stimulation (rTMS) to test the hypothesis that the occipital face area (OFA), putatively a key node in the face network, plays a causal role in face viewpoint symmetry perception. Each participant underwent both offline rTMS to the right OFA and sham stimulation, preceding blocks of behavioral trials. After each stimulation period, the participant performed one of two behavioral tasks involving presentation of faces in the peripheral visual field: (1) judging the viewpoint symmetry; or (2) judging the angular rotation. rTMS applied to the right OFA significantly impaired performance in both tasks when stimuli were presented in the contralateral, left visual field. Interestingly, however, rTMS had a differential effect on the two tasks performed ipsilaterally. Although viewpoint symmetry judgments were significantly disrupted, we observed no effect on the angle judgment task. This interaction, caused by ipsilateral rTMS, provides support for models emphasizing the role of interhemispheric crosstalk in the formation of viewpoint-invariant face perception. SIGNIFICANCE STATEMENT Faces are among the most salient objects we encounter during our everyday activities. Moreover, we are remarkably adept at identifying people at a glance, despite the diversity of viewpoints during our social encounters. Here, we investigate the cortical mechanisms underlying this ability by focusing on effects of viewpoint symmetry, i.e., the invariance of neural responses to mirror-symmetric facial viewpoints. 
We did this by temporarily disrupting neural processing in the occipital face area (OFA) using transcranial magnetic stimulation. Our results demonstrate that the OFA causally contributes to judgments of facial viewpoints and suggest that effects of viewpoint symmetry, previously observed using fMRI, arise from an interhemispheric integration of visual information even when only one hemisphere receives direct visual stimulation. PMID:26674865

  6. The Occipital Face Area Is Causally Involved in Facial Viewpoint Perception.

    PubMed

    Kietzmann, Tim C; Poltoratski, Sonia; König, Peter; Blake, Randolph; Tong, Frank; Ling, Sam

    2015-12-16

    Humans reliably recognize faces across a range of viewpoints, but the neural substrates supporting this ability remain unclear. Recent work suggests that neural selectivity to mirror-symmetric viewpoints of faces, found across a large network of visual areas, may constitute a key computational step in achieving full viewpoint invariance. In this study, we used repetitive transcranial magnetic stimulation (rTMS) to test the hypothesis that the occipital face area (OFA), putatively a key node in the face network, plays a causal role in face viewpoint symmetry perception. Each participant underwent both offline rTMS to the right OFA and sham stimulation, preceding blocks of behavioral trials. After each stimulation period, the participant performed one of two behavioral tasks involving presentation of faces in the peripheral visual field: (1) judging the viewpoint symmetry; or (2) judging the angular rotation. rTMS applied to the right OFA significantly impaired performance in both tasks when stimuli were presented in the contralateral, left visual field. Interestingly, however, rTMS had a differential effect on the two tasks performed ipsilaterally. Although viewpoint symmetry judgments were significantly disrupted, we observed no effect on the angle judgment task. This interaction, caused by ipsilateral rTMS, provides support for models emphasizing the role of interhemispheric crosstalk in the formation of viewpoint-invariant face perception. Faces are among the most salient objects we encounter during our everyday activities. Moreover, we are remarkably adept at identifying people at a glance, despite the diversity of viewpoints during our social encounters. Here, we investigate the cortical mechanisms underlying this ability by focusing on effects of viewpoint symmetry, i.e., the invariance of neural responses to mirror-symmetric facial viewpoints. 
We did this by temporarily disrupting neural processing in the occipital face area (OFA) using transcranial magnetic stimulation. Our results demonstrate that the OFA causally contributes to judgments of facial viewpoints and suggest that effects of viewpoint symmetry, previously observed using fMRI, arise from an interhemispheric integration of visual information even when only one hemisphere receives direct visual stimulation. Copyright © 2015 the authors 0270-6474/15/3516398-06$15.00/0.

  7. Differences between perception of human faces and body shapes: evidence from the composite illusion.

    PubMed

    Soria Bauser, Denise A; Suchan, Boris; Daum, Irene

    2011-01-01

    The present study aimed to investigate whether human body forms--like human faces--undergo holistic processing. Evidence for holistic face processing comes from the face composite effect: two identical top halves of a face are perceived as being different if they are presented with different bottom parts. This effect disappears if both bottom halves are shifted laterally (misaligned) or if the stimulus is rotated by 180°. We investigated whether comparable composite effects are observed for human faces and human body forms. Matching of upright faces was more accurate and faster for misaligned compared to aligned presentations. By contrast, there were no processing differences between aligned and misaligned bodies. An inversion effect emerged, with better recognition performance for upright compared to inverted bodies but not faces. The present findings provide evidence for the assumption that holistic processing--investigated with the composite illusion--is not involved in the perception of human body forms. Copyright © 2010 Elsevier Ltd. All rights reserved.

  8. Emotion perception accuracy and bias in face-to-face versus cyberbullying.

    PubMed

    Ciucci, Enrica; Baroncelli, Andrea; Nowicki, Stephen

    2014-01-01

    The authors investigated the association of traditional and cyber forms of bullying and victimization with emotion perception accuracy and emotion perception bias. Four basic emotions were considered (i.e., happiness, sadness, anger, and fear); 526 middle school students (280 females; M age = 12.58 years, SD = 1.16 years) were recruited, and emotionality was controlled. Results indicated no significant findings for girls. Boys with higher levels of traditional bullying did not show any deficit in perception accuracy of emotions, but they were prone to identify happiness and fear in faces when a different emotion was expressed; in addition, male cyberbullying was related to greater accuracy in recognizing fear. In terms of the victims, cyber victims had a global problem in recognizing emotions and a specific problem in processing anger and fear. It was concluded that emotion perception accuracy and bias were associated with bullying and victimization for boys not only in traditional settings but also in the electronic ones. Implications of these findings for possible intervention are discussed.

  9. The increase in medial prefrontal glutamate/glutamine concentration during memory encoding is associated with better memory performance and stronger functional connectivity in the human medial prefrontal-thalamus-hippocampus network.

    PubMed

    Thielen, Jan-Willem; Hong, Donghyun; Rohani Rankouhi, Seyedmorteza; Wiltfang, Jens; Fernández, Guillén; Norris, David G; Tendolkar, Indira

    2018-06-01

    The classical model of the declarative memory system describes the hippocampus and its interactions with representational brain areas in posterior neocortex as being essential for the formation of long-term episodic memories. However, new evidence suggests an extension of this classical model by assigning the medial prefrontal cortex (mPFC) a specific, yet not fully defined role in episodic memory. In this study, we utilized 1H magnetic resonance spectroscopy (MRS) and psychophysiological interaction (PPI) analysis to lend further support for the idea of a mnemonic role of the mPFC in humans. By using MRS, we measured mPFC γ-aminobutyric acid (GABA) and glutamate/glutamine (GLx) concentrations before and after volunteers memorized face-name association. We demonstrate that mPFC GLx but not GABA levels increased during the memory task, which appeared to be related to memory performance. Regarding functional connectivity, we used the subsequent memory paradigm and found that the GLx increase was associated with stronger mPFC connectivity to thalamus and hippocampus for associations subsequently recognized with high confidence as opposed to subsequently recognized with low confidence/forgotten. Taken together, we provide new evidence for an mPFC involvement in episodic memory by showing a memory-related increase in mPFC excitatory neurotransmitter levels that was associated with better memory and stronger memory-related functional connectivity in a medial prefrontal-thalamus-hippocampus network. © 2018 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  10. Dental anatomy and pathology encountered on routine CT of the head and neck.

    PubMed

    Steinklein, Jared; Nguyen, Vinh

    2013-12-01

Although dental CT is not routinely performed at hospital imaging centers, dental and periodontal disease can be recognized on standard high-resolution CT of the neck and face. These findings can have significant implications not only for dental disease, but also for diseases of the sinuses, jaw, and surrounding soft tissues. This article serves to review dental and periodontal anatomy and pathology as well as other regional entities with dental involvement and to discuss the imaging findings. Recognition of dental and periodontal disease has the potential to affect management and preclude further complications, thereby preserving the smile, one of the most recognizable and attractive features of the human face and, unfortunately, often disease-ridden. Although practicing good oral hygiene and visiting the dentist for regular examinations and cleanings are the most effective ways to prevent disease, some patients do not take these preventative measures. Thus, radiologists play a role in diagnosing dental disease and complications such as chronic periodontitis and abscesses, nonhealing fractures and osteomyelitis, oroantral fistulas, tumoral diseases, osteonecrosis of the jaw, and other conditions.

  11. Fishes in a changing world: learning from the past to promote sustainability of fish populations.

    PubMed

    Gordon, T A C; Harding, H R; Clever, F K; Davidson, I K; Davison, W; Montgomery, D W; Weatherhead, R C; Windsor, F M; Armstrong, J D; Bardonnet, A; Bergman, E; Britton, J R; Côté, I M; D'agostino, D; Greenberg, L A; Harborne, A R; Kahilainen, K K; Metcalfe, N B; Mills, S C; Milner, N J; Mittermayer, F H; Montorio, L; Nedelec, S L; Prokkola, J M; Rutterford, L A; Salvanes, A G V; Simpson, S D; Vainikka, A; Pinnegar, J K; Santos, E M

    2018-03-01

    Populations of fishes provide valuable services for billions of people, but face diverse and interacting threats that jeopardize their sustainability. Human population growth and intensifying resource use for food, water, energy and goods are compromising fish populations through a variety of mechanisms, including overfishing, habitat degradation and declines in water quality. The important challenges raised by these issues have been recognized and have led to considerable advances over past decades in managing and mitigating threats to fishes worldwide. In this review, we identify the major threats faced by fish populations alongside recent advances that are helping to address these issues. There are very significant efforts worldwide directed towards ensuring a sustainable future for the world's fishes and fisheries and those who rely on them. Although considerable challenges remain, by drawing attention to successful mitigation of threats to fish and fisheries we hope to provide the encouragement and direction that will allow these challenges to be overcome in the future. © 2018 The Authors. Journal of Fish Biology published by John Wiley & Sons Ltd on behalf of The Fisheries Society of the British Isles.

  12. [Comparative studies of face recognition].

    PubMed

    Kawai, Nobuyuki

    2012-07-01

Every human being is proficient in face recognition. However, the reason for and the manner in which humans have attained such an ability remain unknown. These questions are best answered through comparative studies of face recognition in non-human animals. Studies show that non-primates as well as primates possess the ability to extract information from the faces of conspecifics and of human experimenters. Neural specialization for face recognition is shared with mammals in distant taxa, suggesting that face recognition evolved earlier than the emergence of mammals. A recent study indicated that a social insect, the golden paper wasp, can distinguish the faces of conspecifics, whereas a closely related species, which has a less complex social lifestyle with just one queen ruling a nest of underlings, did not show strong face recognition for its conspecifics. Social complexity and the need to differentiate between one another likely led humans to evolve their face recognition abilities.

  13. How is This Child Feeling? Preschool-Aged Children's Ability to Recognize Emotion in Faces and Body Poses

    ERIC Educational Resources Information Center

    Parker, Alison E.; Mathis, Erin T.; Kupersmidt, Janis B.

    2013-01-01

    Research Findings: The study examined children's recognition of emotion from faces and body poses, as well as gender differences in these recognition abilities. Preschool-aged children ("N" = 55) and their parents and teachers participated in the study. Preschool-aged children completed a web-based measure of emotion recognition skills…

  14. Callous-Unemotional Traits Are Related to Combined Deficits in Recognizing Afraid Faces and Body Poses

    ERIC Educational Resources Information Center

    Munoz, Luna C.

    2009-01-01

Results from a study in which boys aged 8-16 years labeled emotional faces and static body poses show that callous-unemotional traits are related to poor accuracy in these tests. The results imply that a general "fear-blindness" is associated with a lack of empathy and with violence and antisocial behavior.

  15. A Usability Evaluation of a Blended MOOC Environment: An Experimental Case Study

    ERIC Educational Resources Information Center

    Yousef, Ahmed Mohamed Fahmy; Chatti, Mohamed Amine; Schroeder, Ulrik; Wosnitza, Marold

    2015-01-01

    In the past few years, there has been an increasing interest in Massive Open Online Courses (MOOCs) as a new form of Technology-Enhanced Learning (TEL), in higher education and beyond. Recognizing the limitations of standalone MOOCs, blended MOOCs (bMOOCs) that aim at bringing in-class (i.e. face-to-face) interactions and online learning…

  16. Limitations in 4-Year-Old Children's Sensitivity to the Spacing among Facial Features

    ERIC Educational Resources Information Center

    Mondloch, Catherine J.; Thomson, Kendra

    2008-01-01

    Four-year-olds' sensitivity to differences among faces in the spacing of features was tested under 4 task conditions: judging distinctiveness when the external contour was visible and when it was occluded, simultaneous match-to-sample, and recognizing the face of a friend. In each task, the foil differed only in the spacing of features, and…

  17. A novel behavioral paradigm for assessing concept of nests in mice

    PubMed Central

    Kuang, Hui; Mei, Bing; Cui, Zhenzhong; Lin, Longnian; Tsien, Joe Z.

    2013-01-01

Abstract concepts in the brain enable humans to efficiently and correctly recognize and categorize a seemingly infinite number of objects and daily events. Such abstract generalization abilities are traditionally considered to be unique to humans and perhaps non-human primates. However, emerging neurophysiological recordings indicate the existence of neural correlates for the abstract concept of nests in the mouse brain. To facilitate the molecular and genetic analyses of concepts in the mouse model, we have developed a nest generalization test based on mice's natural behavior. We show that inducible and forebrain-specific NMDA receptor knockout results in pronounced impairment in this test. Interestingly, this generalization deficit could be gradually compensated for over time by repeated experiences, even in the face of the continued deficit in object recognition memory. By contrast, the forebrain-specific presenilin-1 knockout mice, which have subtle phenotypes, were normal in performing this test. Therefore, our study not only establishes a quantitative method for assessing the nest concept in mice, but also demonstrates its great potential in combination with powerful mouse genetics for dissecting the molecular basis of concept formation in the brain. PMID:20350568

  18. Understanding gender bias in face recognition: effects of divided attention at encoding.

    PubMed

    Palmer, Matthew A; Brewer, Neil; Horry, Ruth

    2013-03-01

    Prior research has demonstrated a female own-gender bias in face recognition, with females better at recognizing female faces than male faces. We explored the basis for this effect by examining the effect of divided attention during encoding on females' and males' recognition of female and male faces. For female participants, divided attention impaired recognition performance for female faces to a greater extent than male faces in a face recognition paradigm (Study 1; N=113) and an eyewitness identification paradigm (Study 2; N=502). Analysis of remember-know judgments (Study 2) indicated that divided attention at encoding selectively reduced female participants' recollection of female faces at test. For male participants, divided attention selectively reduced recognition performance (and recollection) for male stimuli in Study 2, but had similar effects on recognition of male and female faces in Study 1. Overall, the results suggest that attention at encoding contributes to the female own-gender bias by facilitating the later recollection of female faces. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Face-n-Food: Gender Differences in Tuning to Faces.

    PubMed

    Pavlova, Marina A; Scheffler, Klaus; Sokolov, Alexander N

    2015-01-01

Faces represent valuable signals for social cognition and non-verbal communication. A wealth of research indicates that women tend to excel in recognition of facial expressions. However, it remains unclear whether females are better tuned to faces. We presented healthy adult females and males with a set of newly created food-plate images resembling faces (slightly bordering on the Giuseppe Arcimboldo style). In a spontaneous recognition task, participants were shown the images in a predetermined order from least to most resembling a face. Females not only recognized the images as a face more readily (they reported a face resemblance for images at which males still did not), but also gave more face responses overall. The findings are discussed in the light of gender differences in deficient face perception. As most neuropsychiatric, neurodevelopmental and psychosomatic disorders characterized by social brain abnormalities are sex specific, the task may serve as a valuable tool for uncovering impairments in visual face processing.

  20. Face-n-Food: Gender Differences in Tuning to Faces

    PubMed Central

    Pavlova, Marina A.; Scheffler, Klaus; Sokolov, Alexander N.

    2015-01-01

    Faces represent valuable signals for social cognition and non-verbal communication. A wealth of research indicates that women tend to excel in recognition of facial expressions. However, it remains unclear whether females are better tuned to faces. We presented healthy adult females and males with a set of newly created food-plate images resembling faces (slightly bordering on the Giuseppe Arcimboldo style). In a spontaneous recognition task, participants were shown a set of images in a predetermined order from the least to most resembling a face. Females not only more readily recognized the images as a face (they reported a face resemblance for images on which males still did not), but also gave overall more face responses. The findings are discussed in the light of gender differences in deficient face perception. As most neuropsychiatric, neurodevelopmental and psychosomatic disorders characterized by social brain abnormalities are sex specific, the task may serve as a valuable tool for uncovering impairments in visual face processing. PMID:26154177

  1. Learning, Interactional, and Motivational Outcomes in One-to-One Synchronous Computer-Mediated versus Face-to-Face Tutoring

    ERIC Educational Resources Information Center

    Siler, Stephanie Ann; VanLehn, Kurt

    2009-01-01

    Face-to-face (FTF) human-human tutoring has ranked among the most effective forms of instruction. However, because computer-mediated (CM) tutoring is becoming increasingly common, it is instructive to evaluate its effectiveness relative to face-to-face tutoring. Does the lack of spoken, face-to-face interaction affect learning gains and…

  2. U.S. Department of Energy, Office of Legacy Management Program Update, April-June 2009

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2009-04-01

    Welcome to the April-June 2009 issue of the U.S. Department of Energy (DOE) Office of Legacy Management (LM) Program Update. This publication is designed to provide a status of activities within LM. The Legacy Management goals are: (1) Protect human health and the environment through effective and efficient long-term surveillance and maintenance - This goal highlights DOE's responsibility to ensure long-term protection of people, the environment, and the integrity of engineered remedies and monitoring systems. (2) Preserve, protect, and make accessible legacy records and information - This goal recognizes LM's commitment to successfully manage records, information, and archives of legacy sites under its authority. (3) Support an effective and efficient work force structured to accomplish Departmental missions and assure continuity of contractor worker pension and medical benefits - This goal recognizes DOE's commitment to its contracted work force and the consistent management of pension and health benefits. As sites continue to close, DOE faces the challenges of managing pension plan and health benefits liability. (4) Manage legacy land and assets, emphasizing protective real and personal property reuse and disposition - This goal recognizes a DOE need for local collaborative management of legacy assets, including coordinating land use planning, personal property disposition to community reuse organizations, and protecting heritage resources (natural, cultural, and historical). (5) Improve program effectiveness through sound management - This goal recognizes that LM's goals cannot be attained efficiently unless the federal and contractor work force is motivated to meet requirements and work toward continuous performance improvement.

  3. Sex differences in emotion recognition: Evidence for a small overall female superiority on facial disgust.

    PubMed

    Connolly, Hannah L; Lefevre, Carmen E; Young, Andrew W; Lewis, Gary J

    2018-05-21

    Although it is widely believed that females outperform males in the ability to recognize other people's emotions, this conclusion is not well supported by the extant literature. The current study sought to provide a strong test of the female superiority hypothesis by investigating sex differences in emotion recognition for five basic emotions using stimuli well-calibrated for individual differences assessment, across two expressive domains (face and body), and in a large sample (N = 1,022: Study 1). We also assessed the stability and generalizability of our findings with two independent replication samples (N = 303: Study 2, N = 634: Study 3). In Study 1, we observed that females were superior to males in recognizing facial disgust and sadness. In contrast, males were superior to females in recognizing bodily happiness. The female superiority for recognition of facial disgust was replicated in Studies 2 and 3, and this observation also extended to an independent stimulus set in Study 2. No other sex differences were stable across studies. These findings provide evidence for the presence of sex differences in emotion recognition ability, but show that these differences are modest in magnitude and appear to be limited to facial disgust. We discuss whether this sex difference may reflect human evolutionary imperatives concerning reproductive fitness and child care. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  4. Older and Younger Adults’ Accuracy in Discerning Health and Competence in Older and Younger Faces

    PubMed Central

    Zebrowitz, Leslie A.; Franklin, Robert G.; Boshyan, Jasmine; Luevano, Victor; Agrigoroaei, Stefan; Milosavljevic, Bosiljka; Lachman, Margie E.

    2015-01-01

    We examined older and younger adults’ accuracy judging the health and competence of faces. Accuracy differed significantly from chance and varied with face age but not rater age. Health ratings were more accurate for older than younger faces, with the reverse for competence ratings. Accuracy was greater for low attractive younger faces, but not for low attractive older faces. Greater accuracy judging older faces’ health was paralleled by greater validity of attractiveness and looking older as predictors of their health. Greater accuracy judging younger faces’ competence was paralleled by greater validity of attractiveness and a positive expression as predictors of their competence. Although the ability to recognize variations in health and cognitive ability is preserved in older adulthood, the effects of face age on accuracy and the different effects of attractiveness across face age may alter social interactions across the life span. PMID:25244467

  5. The recognition of emotional expression in prosopagnosia: decoding whole and part faces.

    PubMed

    Stephan, Blossom Christa Maree; Breen, Nora; Caine, Diana

    2006-11-01

    Prosopagnosia is currently viewed within the constraints of two competing theories of face recognition, one highlighting the analysis of features, the other focusing on configural processing of the whole face. This study investigated the role of feature analysis versus whole-face configural processing in the recognition of facial expression. A prosopagnosic patient, SC, made expression decisions from whole and incomplete (eyes-only and mouth-only) faces in which features had been obscured. SC was impaired at recognizing some (e.g., anger, sadness, and fear), but not all (e.g., happiness) emotional expressions from the whole face. Analyses of his performance on incomplete faces indicated that his recognition of some expressions actually improved relative to his performance in the whole-face condition. We argue that in SC, interference from damaged configural processes seems to override an intact ability to utilize part-based or local feature cues.

  6. The evolution of monogamy in response to partner scarcity

    PubMed Central

    Schacht, Ryan; Bell, Adrian V.

    2016-01-01

    The evolution of monogamy and paternal care in humans is often argued to have resulted from the needs of our expensive offspring. Recent research challenges this claim, however, contending that promiscuous male competitors and the risk of cuckoldry limit the scope for the evolution of male investment. So how did monogamy first evolve? Links between mating strategies and partner availability may offer resolution. While studies of sex roles commonly assume that optimal mating rates for males are higher, fitness payoffs to monogamy and the maintenance of a single partner can be greater when partners are rare. Thus, partner availability is increasingly recognized as a key variable structuring mating behavior. To apply these recent insights to human evolution, we model three male strategies – multiple mating, mate guarding and paternal care – in response to partner availability. Under assumed ancestral human conditions, we find that male mate guarding, rather than paternal care, drives the evolution of monogamy, as it secures a partner and ensures paternity certainty in the face of more promiscuous competitors. Accordingly, we argue that while paternal investment may be common across human societies, current patterns should not be confused with the reason pairing first evolved. PMID:27600189

  7. The evolution of monogamy in response to partner scarcity.

    PubMed

    Schacht, Ryan; Bell, Adrian V

    2016-09-07

    The evolution of monogamy and paternal care in humans is often argued to have resulted from the needs of our expensive offspring. Recent research challenges this claim, however, contending that promiscuous male competitors and the risk of cuckoldry limit the scope for the evolution of male investment. So how did monogamy first evolve? Links between mating strategies and partner availability may offer resolution. While studies of sex roles commonly assume that optimal mating rates for males are higher, fitness payoffs to monogamy and the maintenance of a single partner can be greater when partners are rare. Thus, partner availability is increasingly recognized as a key variable structuring mating behavior. To apply these recent insights to human evolution, we model three male strategies - multiple mating, mate guarding and paternal care - in response to partner availability. Under assumed ancestral human conditions, we find that male mate guarding, rather than paternal care, drives the evolution of monogamy, as it secures a partner and ensures paternity certainty in the face of more promiscuous competitors. Accordingly, we argue that while paternal investment may be common across human societies, current patterns should not be confused with the reason pairing first evolved.

  8. Observing Third-Party Attentional Relationships Affects Infants' Gaze Following: An Eye-Tracking Study

    PubMed Central

    Meng, Xianwei; Uto, Yusuke; Hashiya, Kazuhide

    2017-01-01

    Infants not only respond to social actions directed toward themselves; they also pay attention to relevant information from third-party interactions. However, it is unclear whether and how infants recognize the structure of these interactions. The current study aimed to investigate how infants' observation of third-party attentional relationships influences their subsequent gaze following. Nine-month-old, 1-year-old, and 1.5-year-old infants (N = 72, 37 girls) observed video clips in which a female actor gazed at one of two toys after she and her partner either silently faced each other (face-to-face condition) or looked in opposite directions (back-to-back condition). An eye tracker was used to record the infants' looking behavior (e.g., looking time, looking frequency). The analyses revealed that younger infants followed the actor's gaze toward the target object in both conditions, but this was not the case for the 1.5-year-old infants in the back-to-back condition. Furthermore, we found that infants' gaze following could be negatively predicted by their expectation of the partner's response to the actor's head turn (i.e., they shifted their gaze toward the partner immediately after they realized that the actor's head would turn). These findings suggest that the sensitivity to differences in knowledge and attentional states that emerges in the second year of human life extends to third-party interactions, even without any direct involvement in the situation. Additionally, a spontaneous concern with the epistemic gap between self and other, as well as between others, develops by this age. These processes might be considered part of the fundamental basis for human communication. PMID:28149284

  9. Discrimination of human and dog faces and inversion responses in domestic dogs (Canis familiaris).

    PubMed

    Racca, Anaïs; Amadei, Eleonora; Ligout, Séverine; Guo, Kun; Meints, Kerstin; Mills, Daniel

    2010-05-01

    Although domestic dogs can respond to many facial cues displayed by other dogs and humans, it remains unclear whether they can differentiate individual dogs or humans based on facial cues alone and, if so, whether they would demonstrate the face inversion effect, a behavioural hallmark commonly used in primates to differentiate face processing from object processing. In this study, we first established the applicability of the visual paired comparison (VPC or preferential looking) procedure for dogs using a simple object discrimination task with 2D pictures. The animals demonstrated a clear looking preference for novel objects when simultaneously presented with prior-exposed familiar objects. We then adopted this VPC procedure to assess their face discrimination and inversion responses. Dogs showed a deviation from random behaviour, indicating discrimination capability when inspecting upright dog faces, human faces and object images; but the pattern of viewing preference was dependent upon image category. They directed longer viewing time at novel (vs. familiar) human faces and objects, but not at dog faces, instead, a longer viewing time at familiar (vs. novel) dog faces was observed. No significant looking preference was detected for inverted images regardless of image category. Our results indicate that domestic dogs can use facial cues alone to differentiate individual dogs and humans and that they exhibit a non-specific inversion response. In addition, the discrimination response by dogs of human and dog faces appears to differ with the type of face involved.
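    The visual paired comparison (VPC) logic used in this study reduces to a looking-time ratio: a reliable deviation from 0.5 indicates discrimination. A minimal sketch in Python (the numbers below are illustrative, not the study's data):

```python
def novelty_preference(novel_ms, familiar_ms):
    """Proportion of looking time spent on the novel stimulus.

    0.5 = no preference; > 0.5 = novelty preference; < 0.5 =
    familiarity preference (as the dogs showed for dog faces).
    Both arguments are looking times in milliseconds.
    """
    total = novel_ms + familiar_ms
    if total == 0:
        raise ValueError("no looking time recorded")
    return novel_ms / total

# Hypothetical trial: 3 s on the novel face, 1 s on the familiar one.
score = novelty_preference(3000, 1000)
```

Either direction of preference counts as evidence of discrimination; only a score indistinguishable from 0.5 across trials is consistent with a failure to tell the stimuli apart.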

  10. Neural Mechanisms of Recognizing Camouflaged Objects: A Human fMRI Study

    DTIC Science & Technology

    2015-07-30

    Final Report: Neural Mechanisms of Recognizing Camouflaged Objects: A Human fMRI Study. Index terms: visual search, camouflage, functional magnetic resonance imaging (fMRI), perceptual learning.

  11. Is the Face-Perception System Human-Specific at Birth?

    ERIC Educational Resources Information Center

    Di Giorgio, Elisa; Leo, Irene; Pascalis, Olivier; Simion, Francesca

    2012-01-01

    The present study investigates the human-specificity of the orienting system that allows neonates to look preferentially at faces. Three experiments were carried out to determine whether the face-perception system that is present at birth is broad enough to include both human and nonhuman primate faces. The results demonstrate that the newborns…

  12. Face imagery is based on featural representations.

    PubMed

    Lobmaier, Janek S; Mast, Fred W

    2008-01-01

    The effect of imagery on featural and configural face processing was investigated using blurred and scrambled faces. By means of blurring, featural information is reduced; by scrambling a face into its constituent parts configural information is lost. Twenty-four participants learned ten faces together with the sound of a name. In following matching-to-sample tasks participants had to decide whether an auditory presented name belonged to a visually presented scrambled or blurred face in two experimental conditions. In the imagery condition, the name was presented prior to the visual stimulus and participants were required to imagine the corresponding face as clearly and vividly as possible. In the perception condition name and test face were presented simultaneously, thus no facilitation via mental imagery was possible. Analyses of the hit values showed that in the imagery condition scrambled faces were recognized significantly better than blurred faces whereas there was no such effect for the perception condition. The results suggest that mental imagery activates featural representations more than configural representations.

  13. Faces in Context: Does Face Perception Depend on the Orientation of the Visual Scene?

    PubMed

    Taubert, Jessica; van Golde, Celine; Verstraten, Frans A J

    2016-10-01

    The mechanisms held responsible for familiar face recognition are thought to be orientation dependent; inverted faces are more difficult to recognize than their upright counterparts. Although this effect of inversion has been investigated extensively, researchers have typically sliced faces from photographs and presented them in isolation. As such, it is not known whether the perceived orientation of a face is inherited from the visual scene in which it appears. Here, we address this question by measuring performance in a simultaneous same-different task while manipulating both the orientation of the faces and the scene. We found that the face inversion effect survived scene inversion. Nonetheless, an improvement in performance when the scene was upside down suggests that sensitivity to identity increased when the faces were more easily segmented from the scene. Thus, while these data identify congruency with the visual environment as a contributing factor in recognition performance, they imply different mechanisms operate on upright and inverted faces. © The Author(s) 2016.
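    Performance in a simultaneous same-different task of this kind is commonly summarized as signal-detection sensitivity (d′), computed from hit and false-alarm rates. A minimal sketch using only the Python standard library (the rates below are illustrative, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate).

    Rates at exactly 0 or 1 are nudged inward (a common correction)
    so the inverse normal CDF stays finite.
    """
    eps = 1e-3
    hit_rate = min(max(hit_rate, eps), 1 - eps)
    fa_rate = min(max(fa_rate, eps), 1 - eps)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Chance performance (hit rate == false-alarm rate) gives d' = 0;
# an inversion effect appears as lower d' for inverted faces.
upright = d_prime(0.85, 0.20)
inverted = d_prime(0.70, 0.30)
```

Using d′ rather than raw accuracy separates sensitivity to identity from any bias toward responding "same" or "different".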

  14. Face and body recognition show similar improvement during childhood.

    PubMed

    Bank, Samantha; Rhodes, Gillian; Read, Ainsley; Jeffery, Linda

    2015-09-01

    Adults are proficient in extracting identity cues from faces. This proficiency develops slowly during childhood, with performance not reaching adult levels until adolescence. Bodies are similar to faces in that they convey identity cues and rely on specialized perceptual mechanisms. However, it is currently unclear whether body recognition mirrors the slow development of face recognition during childhood. Recent evidence suggests that body recognition develops faster than face recognition. Here we measured body and face recognition in 6- and 10-year-old children and adults to determine whether these two skills show different amounts of improvement during childhood. We found no evidence that they do. Face and body recognition showed similar improvement with age, and children, like adults, were better at recognizing faces than bodies. These results suggest that the mechanisms of face and body memory mature at a similar rate or that improvement of more general cognitive and perceptual skills underlies improvement of both face and body recognition. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Super-recognition in development: A case study of an adolescent with extraordinary face recognition skills.

    PubMed

    Bennetts, Rachel J; Mole, Joseph; Bate, Sarah

    2017-09-01

    Face recognition abilities vary widely. While face recognition deficits have been reported in children, it is unclear whether superior face recognition skills can be encountered during development. This paper presents O.B., a 14-year-old female with extraordinary face recognition skills: a "super-recognizer" (SR). O.B. demonstrated exceptional face-processing skills across multiple tasks, with a level of performance that is comparable to adult SRs. Her superior abilities appear to be specific to face identity: She showed an exaggerated face inversion effect and her superior abilities did not extend to object processing or non-identity aspects of face recognition. Finally, an eye-movement task demonstrated that O.B. spent more time than controls examining the nose - a pattern previously reported in adult SRs. O.B. is therefore particularly skilled at extracting and using identity-specific facial cues, indicating that face and object recognition are dissociable during development, and that super recognition can be detected in adolescence.

  16. A Comparative View of Face Perception

    PubMed Central

    Leopold, David A.; Rhodes, Gillian

    2010-01-01

    Face perception serves as the basis for much of human social exchange. Diverse information can be extracted about an individual from a single glance at their face, including their identity, emotional state, and direction of attention. Neuropsychological and fMRI experiments reveal a complex network of specialized areas in the human brain supporting these face-reading skills. Here we consider the evolutionary roots of human face perception by exploring the manner in which different animal species view and respond to faces. We focus on behavioral experiments collected from both primates and non-primates, assessing the types of information that animals are able to extract from the faces of their conspecifics, human experimenters, and natural predators. These experiments reveal that faces are an important category of visual stimuli for animals in all major vertebrate taxa, possibly reflecting the early emergence of neural specialization for faces in vertebrate evolution. At the same time, some aspects of facial perception are only evident in primates and a few other social mammals, and may therefore have evolved to suit the needs of complex social communication. Since the human brain likely utilizes both primitive and recently evolved neural specializations for the processing of faces, comparative studies may hold the key to understanding how these parallel circuits emerged during human evolution. PMID:20695655

  17. A comparative view of face perception.

    PubMed

    Leopold, David A; Rhodes, Gillian

    2010-08-01

    Face perception serves as the basis for much of human social exchange. Diverse information can be extracted about an individual from a single glance at their face, including their identity, emotional state, and direction of attention. Neuropsychological and functional magnetic resonance imaging (fMRI) experiments reveal a complex network of specialized areas in the human brain supporting these face-reading skills. Here we consider the evolutionary roots of human face perception by exploring the manner in which different animal species view and respond to faces. We focus on behavioral experiments collected from both primates and nonprimates, assessing the types of information that animals are able to extract from the faces of their conspecifics, human experimenters, and natural predators. These experiments reveal that faces are an important category of visual stimuli for animals in all major vertebrate taxa, possibly reflecting the early emergence of neural specialization for faces in vertebrate evolution. At the same time, some aspects of facial perception are only evident in primates and a few other social mammals, and may therefore have evolved to suit the needs of complex social communication. Because the human brain likely utilizes both primitive and recently evolved neural specializations for the processing of faces, comparative studies may hold the key to understanding how these parallel circuits emerged during human evolution. 2010 APA, all rights reserved

  18. Sleep deprivation impairs the accurate recognition of human emotions.

    PubMed

    van der Helm, Els; Gujar, Ninad; Walker, Matthew P

    2010-03-01

    Study objectives: To investigate the impact of sleep deprivation on the ability to recognize the intensity of human facial emotions. Design: Randomized total sleep-deprivation or sleep-rested conditions, involving between-group and within-group repeated-measures analysis. Setting: Experimental laboratory study. Participants: Thirty-seven healthy participants (21 females) aged 18-25 y, randomly assigned to the sleep control (SC: n = 17) or total sleep deprivation group (TSD: n = 20). Measurements: Participants performed an emotional face recognition task, in which they evaluated 3 different affective face categories: Sad, Happy, and Angry, each ranging in a gradient from neutral to increasingly emotional. In the TSD group, the task was performed once under conditions of sleep deprivation, and twice under sleep-rested conditions following different durations of sleep recovery. In the SC group, the task was performed twice under sleep-rested conditions, controlling for repeatability. Results: In the TSD group, when sleep-deprived, there was a marked and significant blunting in the recognition of Angry and Happy affective expressions in the moderate (but not extreme) emotional intensity range; differences that were most reliable and significant in female participants. No change in the recognition of Sad expressions was observed. These recognition deficits were, however, ameliorated following one night of recovery sleep. No changes in task performance were observed in the SC group. Conclusions: Sleep deprivation selectively impairs the accurate judgment of human facial emotions, especially threat-relevant (Anger) and reward-relevant (Happy) categories, an effect observed most significantly in females. Such findings suggest that sleep loss impairs discrete affective neural systems, disrupting the identification of salient affective social cues.

  19. Automatic prediction of facial trait judgments: appearance vs. structural models.

    PubMed

    Rojas, Mario; Masip, David; Todorov, Alexander; Vitria, Jordi

    2011-01-01

    Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations and it is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings and apply them to increase the natural aspect of interaction and to improve the performance of interactive computer systems. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g. dominance) can be made by using the full appearance information of the face and whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information and a structural model constructed from the relations among facial salient points. State of the art machine learning methods are applied to a) derive a facial trait judgment model from training data and b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that a) prediction of perception of facial traits is learnable by both holistic and structural approaches; b) the most reliable prediction of facial trait judgments is obtained by certain type of holistic descriptions of the face appearance; and c) for some traits such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.
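    As a concrete, heavily simplified illustration of the structural-model idea, the relations among facial salient points can be encoded as scale-normalized pairwise distances, which a downstream learner would then map to trait judgments. The landmark names and coordinates below are hypothetical, and the paper's actual features and learning methods are richer than this sketch:

```python
from itertools import combinations
from math import dist  # Euclidean distance, Python 3.8+

def structural_features(landmarks):
    """Pairwise distances between facial salient points, normalized
    by inter-ocular distance so the representation is invariant to
    image scale.  `landmarks` maps point names to (x, y) tuples and
    must include 'left_eye' and 'right_eye'."""
    scale = dist(landmarks["left_eye"], landmarks["right_eye"])
    names = sorted(landmarks)
    return [dist(landmarks[a], landmarks[b]) / scale
            for a, b in combinations(names, 2)]

# Hypothetical landmark set for illustration only.
face = {"left_eye": (30.0, 40.0), "right_eye": (70.0, 40.0),
        "nose_tip": (50.0, 60.0), "mouth": (50.0, 80.0)}
feats = structural_features(face)
```

A holistic appearance model, by contrast, would feed the raw (or filtered) pixel values of the whole face to the learner rather than a handful of geometric relations.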

  20. How Well Can Young People with Asperger's Disorder Recognize Threat and Learn about Affect in Faces?: A Pilot Study

    ERIC Educational Resources Information Center

    Miyahara, Motohide; Ruffman, Ted; Fujita, Chikako; Tsujii, Masatsugu

    2010-01-01

    The abilities to identify threat and learn about affect in facial photographs were compared between a non-autistic university student group (NUS), an Asperger's group (MAS) matched on the Standard Progressive Matrices (SPM), and an unmatched Asperger's group (UAS) who scored lower on the SPM. Participants were given pairs of faces and asked which…

  1. When Seeing Depends on Knowing: Adults with Autism Spectrum Conditions Show Diminished Top-Down Processes in the Visual Perception of Degraded Faces but Not Degraded Objects

    ERIC Educational Resources Information Center

    Loth, Eva; Gomez, Juan Carlos; Happe, Francesca

    2010-01-01

    Behavioural, neuroimaging and neurophysiological approaches emphasise the active and constructive nature of visual perception, determined not solely by the environmental input, but modulated top-down by prior knowledge. For example, degraded images, which at first appear as meaningless "blobs", can easily be recognized as, say, a face, after…

  2. Human rights in patient care: a theoretical and practical framework.

    PubMed

    Cohen, Jonathan; Ezer, Tamar

    2013-12-12

    The concept of "human rights in patient care" refers to the application of human rights principles to the context of patient care. It provides a principled alternative to the growing discourse of "patients' rights" that has evolved in response to widespread and severe human rights violations in health settings. Unlike "patients' rights," which is rooted in a consumer framework, this concept derives from inherent human dignity and neutrally applies universal, legally recognized human rights principles, protecting both patients and providers and admitting of limitations that can be justified by human rights norms. It recognizes the interrelation between patient and provider rights, particularly in contexts where providers face simultaneous obligations to patients and the state ("dual loyalty") and may be pressured to abet human rights violations. The human rights lens provides a means to examine systemic issues and state responsibility. Human rights principles that apply to patient care include both the right to the highest attainable standard of health, which covers both positive and negative guarantees in respect of health, as well as civil and political rights ranging from the patient's right to be free from torture and inhumane treatment to liberty and security of person. They also focus attention on the right of socially excluded groups to be free from discrimination in the delivery of health care. Critical rights relevant to providers include freedom of association and the enjoyment of decent work conditions. Some, but not all, of these human rights correspond to rights that have been articulated in "patients' rights" charters. Complementary to, but distinct from, bioethics, human rights in patient care carry legal force and can be applied through judicial action. They also provide a powerful language to articulate and mobilize around justice concerns, and to engage in advocacy through the media and political negotiation. As "patients' rights" movements and charters grow in popularity, it is important to link patient rights back to human rights standards and processes that are grounded in international law and consensus. Copyright © 2013 Cohen and Ezer. This is an open access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original author and source are credited.

  3. Face Recognition in Humans and Machines

    NASA Astrophysics Data System (ADS)

    O'Toole, Alice; Tistarelli, Massimo

    The study of human face recognition by psychologists and neuroscientists has run parallel to the development of automatic face recognition technologies by computer scientists and engineers. In both cases, there are analogous steps of data acquisition, image processing, and the formation of representations that can support the complex and diverse tasks we accomplish with faces. These processes can be understood and compared in the context of their neural and computational implementations. In this chapter, we present the essential elements of face recognition by humans and machines, taking a perspective that spans psychological, neural, and computational approaches. From the human side, we overview the methods and techniques used in the neurobiology of face recognition, the underlying neural architecture of the system, the role of visual attention, and the nature of the representations that emerge. From the computational side, we discuss face recognition technologies and the strategies they use to overcome challenges to robust operation over viewing parameters. Finally, we conclude the chapter with a look at some recent studies that compare human and machine performance at face recognition.

  4. Classifying Facial Actions

    PubMed Central

    Donato, Gianluca; Bartlett, Marian Stewart; Hager, Joseph C.; Ekman, Paul; Sejnowski, Terrence J.

    2010-01-01

    The Facial Action Coding System (FACS) [23] is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions. PMID:21188284
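The local-filter result can be illustrated with a toy sketch: oriented Gabor responses as features, plus nearest-neighbour matching. Nothing below is the authors' implementation; the filter parameters, the 16x16 stripe "stimuli" (standing in for facial-action patterns), and the classifier are assumptions made for illustration only.

```python
import numpy as np

def gabor_kernel(size=9, theta=0.0, freq=0.25, sigma=2.0):
    """Real part of a Gabor filter: a Gaussian-windowed sinusoid at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the wave
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def correlate2d(img, kern):
    """Plain 'valid' 2-D cross-correlation (slow but dependency-free)."""
    kh, kw = kern.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(H - kh + 1):
        for j in range(W - kw + 1):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def gabor_features(img, n_orientations=4):
    """Concatenated response magnitudes over several filter orientations."""
    feats = []
    for k in range(n_orientations):
        kern = gabor_kernel(theta=k * np.pi / n_orientations)
        feats.append(np.abs(correlate2d(img, kern)).ravel())
    return np.concatenate(feats)

def classify(img, prototypes):
    """Nearest-neighbour label by Euclidean distance in Gabor-feature space."""
    f = gabor_features(img)
    return min(prototypes,
               key=lambda lbl: np.linalg.norm(f - gabor_features(prototypes[lbl])))

# Toy "actions": two oriented stripe patterns with period 4 (matching freq=0.25).
yy, xx = np.mgrid[0:16, 0:16]
vertical = ((xx // 2) % 2).astype(float)     # intensity varies along x
horizontal = ((yy // 2) % 2).astype(float)   # intensity varies along y
prototypes = {"vertical": vertical, "horizontal": horizontal}

rng = np.random.default_rng(0)
noisy = vertical + rng.normal(0.0, 0.1, vertical.shape)
```

Recomputing prototype features on every call is fine at this scale; a real system would cache them and use far richer filter banks.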

  5. Scientific disputes that spill over into Research Ethics: interview with Maria Cecília de Souza Minayo.

    PubMed

    Minayo, Maria Cecília de Souza

    2015-09-01

    This is an interview with Maria Cecília de Souza Minayo, by university lecturers Iara Coelho Zito Guerriero and Maria Lúcia Magalhães Bosi. It reflects the heat of the current debates surrounding implementation of a specific protocol for evaluation of research in the Human and Social Sciences (HSS), vis-à-vis the current rules set by the National Health Council, which have a clearly biomedical bias. The interview covers the difficulties of introducing appropriate and fair rules for judgment of HSS projects, in the face of a hegemonic understanding of the very concept of science by biologists and medical doctors, who tend not to recognize other approaches unless those approaches adopt their frames of reference. In this case, the National Health Council becomes the arena of this polemic, leading researchers in the human and social sciences to ask themselves whether the health sector has the competency to create rules for other areas of knowledge.

  6. Neural representations of faces and body parts in macaque and human cortex: a comparative FMRI study.

    PubMed

    Pinsk, Mark A; Arcaro, Michael; Weiner, Kevin S; Kalkus, Jan F; Inati, Souheil J; Gross, Charles G; Kastner, Sabine

    2009-05-01

    Single-cell studies in the macaque have reported selective neural responses evoked by visual presentations of faces and bodies. Consistent with these findings, functional magnetic resonance imaging studies in humans and monkeys indicate that regions in temporal cortex respond preferentially to faces and bodies. However, it is not clear how these areas correspond across the two species. Here, we directly compared category-selective areas in macaques and humans using virtually identical techniques. In the macaque, several face- and body part-selective areas were found located along the superior temporal sulcus (STS) and middle temporal gyrus (MTG). In the human, similar to previous studies, face-selective areas were found in ventral occipital and temporal cortex and an additional face-selective area was found in the anterior temporal cortex. Face-selective areas were also found in lateral temporal cortex, including the previously reported posterior STS area. Body part-selective areas were identified in the human fusiform gyrus and lateral occipitotemporal cortex. In a first experiment, both monkey and human subjects were presented with pictures of faces, body parts, foods, scenes, and man-made objects, to examine the response profiles of each category-selective area to the five stimulus types. In a second experiment, face processing was examined by presenting upright and inverted faces. By comparing the responses and spatial relationships of the areas, we propose potential correspondences across species. Adjacent and overlapping areas in the macaque anterior STS/MTG responded strongly to both faces and body parts, similar to areas in the human fusiform gyrus and posterior STS. Furthermore, face-selective areas on the ventral bank of the STS/MTG discriminated both upright and inverted faces from objects, similar to areas in the human ventral temporal cortex. 
Overall, our findings demonstrate commonalities and differences in the wide-scale brain organization between the two species and provide an initial step toward establishing functionally homologous category-selective areas.

  7. Neural Representations of Faces and Body Parts in Macaque and Human Cortex: A Comparative fMRI Study

    PubMed Central

    Pinsk, Mark A.; Arcaro, Michael; Weiner, Kevin S.; Kalkus, Jan F.; Inati, Souheil J.; Gross, Charles G.; Kastner, Sabine

    2009-01-01

    Single-cell studies in the macaque have reported selective neural responses evoked by visual presentations of faces and bodies. Consistent with these findings, functional magnetic resonance imaging studies in humans and monkeys indicate that regions in temporal cortex respond preferentially to faces and bodies. However, it is not clear how these areas correspond across the two species. Here, we directly compared category-selective areas in macaques and humans using virtually identical techniques. In the macaque, several face- and body part–selective areas were found located along the superior temporal sulcus (STS) and middle temporal gyrus (MTG). In the human, similar to previous studies, face-selective areas were found in ventral occipital and temporal cortex and an additional face-selective area was found in the anterior temporal cortex. Face-selective areas were also found in lateral temporal cortex, including the previously reported posterior STS area. Body part–selective areas were identified in the human fusiform gyrus and lateral occipitotemporal cortex. In a first experiment, both monkey and human subjects were presented with pictures of faces, body parts, foods, scenes, and man-made objects, to examine the response profiles of each category-selective area to the five stimulus types. In a second experiment, face processing was examined by presenting upright and inverted faces. By comparing the responses and spatial relationships of the areas, we propose potential correspondences across species. Adjacent and overlapping areas in the macaque anterior STS/MTG responded strongly to both faces and body parts, similar to areas in the human fusiform gyrus and posterior STS. Furthermore, face-selective areas on the ventral bank of the STS/MTG discriminated both upright and inverted faces from objects, similar to areas in the human ventral temporal cortex. 
Overall, our findings demonstrate commonalities and differences in the wide-scale brain organization between the two species and provide an initial step toward establishing functionally homologous category-selective areas. PMID:19225169

  8. Fusing face-verification algorithms and humans.

    PubMed

    O'Toole, Alice J; Abdi, Hervé; Jiang, Fang; Phillips, P Jonathon

    2007-10-01

It has been demonstrated recently that state-of-the-art face-recognition algorithms can surpass human accuracy at matching faces over changes in illumination. The ranking of algorithms and humans by accuracy, however, does not provide information about whether algorithms and humans perform the task comparably or whether algorithms and humans can be fused to improve performance. In this paper, we fused humans and algorithms using partial least squares regression (PLSR). In the first experiment, we applied PLSR to face-pair similarity scores generated by seven algorithms participating in the Face Recognition Grand Challenge. The PLSR produced an optimal weighting of the similarity scores, which we tested for generality with a jackknife procedure. Fusing the algorithms' similarity scores using the optimal weights produced a twofold reduction of error rate over the most accurate algorithm. Next, human-subject-generated similarity scores were added to the PLSR analysis. Fusing humans and algorithms increased the performance to near-perfect classification accuracy. These results are discussed in terms of maximizing face-verification accuracy with hybrid systems consisting of multiple algorithms and humans.
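The fusion idea can be sketched numerically. The snippet below uses ordinary least squares as a simplified linear stand-in for the paper's PLSR, and the three "matchers", their score distributions, and the 0.5 decision threshold are invented assumptions, not FRGC data. The point it demonstrates is that a learned weighting of several imperfect similarity scores can beat the best single score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic similarity scores from three hypothetical matchers for 400 face
# pairs; the first half are genuine (match) pairs, the second half impostors.
n = 400
labels = np.repeat([1.0, 0.0], n // 2)
match_means = np.array([0.65, 0.60, 0.55])
nonmatch_means = np.array([0.35, 0.40, 0.45])
means = np.where(labels[:, None] == 1.0, match_means, nonmatch_means)
scores = means + rng.normal(0.0, 0.15, size=(n, 3))

# Learn one weight per matcher (plus a bias) by least squares against the
# match/non-match labels -- a simplified linear stand-in for PLSR.
X = np.column_stack([scores, np.ones(n)])
w, *_ = np.linalg.lstsq(X, labels, rcond=None)
fused = X @ w

def accuracy(s, threshold=0.5):
    """Fraction of pairs correctly called match/non-match at the threshold."""
    return float(np.mean((s > threshold) == labels))

best_single = max(accuracy(scores[:, k]) for k in range(3))
fused_acc = accuracy(fused)
```

Unlike this toy, PLSR handles correlated score columns gracefully and the paper validates the weights out-of-sample with a jackknife rather than scoring on the training pairs.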

  9. Development of Sensitivity to Spacing Versus Feature Changes in Pictures of Houses: Evidence for Slow Development of a General Spacing Detection Mechanism?

    ERIC Educational Resources Information Center

    Robbins, Rachel A.; Shergill, Yaadwinder; Maurer, Daphne; Lewis, Terri L.

    2011-01-01

    Adults are expert at recognizing faces, in part because of exquisite sensitivity to the spacing of facial features. Children are poorer than adults at recognizing facial identity and less sensitive to spacing differences. Here we examined the specificity of the immaturity by comparing the ability of 8-year-olds, 14-year-olds, and adults to…

  10. The face you recognize may not be the one you saw: memory conjunction errors in individuals with or without learning disability.

    PubMed

    Danielsson, Henrik; Rönnberg, Jerker; Leven, Anna; Andersson, Jan; Andersson, Karin; Lyxell, Björn

    2006-06-01

Memory conjunction errors, that is, errors in which a combination of two previously presented stimuli is erroneously recognized as having been seen before, were investigated in a face recognition task with drawings and photographs in 23 individuals with learning disability and 18 chronologically age-matched controls without learning disability. Compared to the controls, individuals with learning disability committed significantly more conjunction errors and more feature errors (one old and one new component), but had lower correct recognition, when the results were adjusted for different guessing levels. A dual-processing approach gained more support than a binding approach; however, neither approach could explain all of the results. The results of the learning disability group were only partly related to non-verbal intelligence.

  11. The Body That Speaks: Recombining Bodies and Speech Sources in Unscripted Face-to-Face Communication.

    PubMed

    Gillespie, Alex; Corti, Kevin

    2016-01-01

    This article examines advances in research methods that enable experimental substitution of the speaking body in unscripted face-to-face communication. A taxonomy of six hybrid social agents is presented by combining three types of bodies (mechanical, virtual, and human) with either an artificial or human speech source. Our contribution is to introduce and explore the significance of two particular hybrids: (1) the cyranoid method that enables humans to converse face-to-face through the medium of another person's body, and (2) the echoborg method that enables artificial intelligence to converse face-to-face through the medium of a human body. These two methods are distinct in being able to parse the unique influence of the human body when combined with various speech sources. We also introduce a new framework for conceptualizing the body's role in communication, distinguishing three levels: self's perspective on the body, other's perspective on the body, and self's perspective of other's perspective on the body. Within each level the cyranoid and echoborg methodologies make important research questions tractable. By conceptualizing and synthesizing these methods, we outline a novel paradigm of research on the role of the body in unscripted face-to-face communication.

  12. The Body That Speaks: Recombining Bodies and Speech Sources in Unscripted Face-to-Face Communication

    PubMed Central

    Gillespie, Alex; Corti, Kevin

    2016-01-01

    This article examines advances in research methods that enable experimental substitution of the speaking body in unscripted face-to-face communication. A taxonomy of six hybrid social agents is presented by combining three types of bodies (mechanical, virtual, and human) with either an artificial or human speech source. Our contribution is to introduce and explore the significance of two particular hybrids: (1) the cyranoid method that enables humans to converse face-to-face through the medium of another person's body, and (2) the echoborg method that enables artificial intelligence to converse face-to-face through the medium of a human body. These two methods are distinct in being able to parse the unique influence of the human body when combined with various speech sources. We also introduce a new framework for conceptualizing the body's role in communication, distinguishing three levels: self's perspective on the body, other's perspective on the body, and self's perspective of other's perspective on the body. Within each level the cyranoid and echoborg methodologies make important research questions tractable. By conceptualizing and synthesizing these methods, we outline a novel paradigm of research on the role of the body in unscripted face-to-face communication. PMID:27660616

  13. Human Trafficking: The Role of Medicine in Interrupting the Cycle of Abuse and Violence.

    PubMed

    Macias-Konstantopoulos, Wendy

    2016-10-18

    Human trafficking, a form of modern slavery, is an egregious violation of human rights with profound personal and public health implications. It includes forced labor and sexual exploitation of both U.S. and non-U.S. citizens and has been reported in all 50 states. Victims of human trafficking are currently among the most abused and disenfranchised persons in society, and they face a wide range of negative health outcomes resulting from their subjugation and exploitation. Medicine has an important role to play in mitigating the devastating effects of human trafficking on individuals and society. Victims are cared for in emergency departments, primary care offices, urgent care centers, community health clinics, and reproductive health clinics. In addition, they are unknowingly being treated in hospital inpatient units. Injuries and illnesses requiring medical attention thus represent unique windows of opportunity for trafficked persons to receive assistance from trusted health care professionals. With education and training, health care providers can recognize signs and symptoms of trafficking, provide trauma-informed care to this vulnerable population, and respond to exploited persons who are interested and ready to receive assistance. Multidisciplinary response protocols, research, and policy advocacy can enhance the impact of antitrafficking health care efforts to interrupt the cycle of abuse and violence for these victims.

  14. Electrophysiological evidence for separation between human face and non-face object processing only in the right hemisphere.

    PubMed

    Niina, Megumi; Okamura, Jun-ya; Wang, Gang

    2015-10-01

Scalp event-related potential (ERP) studies have demonstrated larger N170 amplitudes when subjects view faces than when they view items from object categories. Extensive attempts have been made to clarify face selectivity and hemispheric dominance for face processing. The purpose of this study was to investigate hemispheric differences in N170s activated by human faces and non-face objects, as well as the extent of overlap of their sources. ERPs were recorded from 20 subjects while they viewed human face and non-face images. N170s obtained during the presentation of human faces appeared earlier and with larger amplitude than for other category images. Further source analysis with a two-dipole model revealed that the locations of face and object processing largely overlapped in the left hemisphere. Conversely, the source for face processing in the right hemisphere was located more anteriorly than the source for object processing. The results suggest that the neuronal circuits for face and object processing are largely shared in the left hemisphere, with more distinct circuits in the right hemisphere. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Human Rights and Private Ordering in Virtual Worlds

    NASA Astrophysics Data System (ADS)

    Oosterbaan, Olivier

This paper explores the application of human rights in (persistent) virtual world environments. It begins by describing a number of elements that most virtual environments share and that are relevant for the application of human rights in such a setting, and by describing, in general terms, the application of human rights between private individuals. The paper then discusses the application in virtual environments of two universally recognized human rights, namely freedom of expression and freedom from discrimination. As these specific rights are discussed, a number of more general conclusions on the application of human rights in virtual environments are drawn. The first is that, because virtual worlds are private environments, participants are subject to private ordering. The second is that participants and non-participants alike have at times to accept that in-world expressions are, to an extent, private speech. The third is that, where participants represent themselves in-world, other participants cannot assume that such in-world representations share the characteristics of the human player, and that where virtual environments contain game elements, participants and non-participants alike should not take everything that happens in the virtual environment at face value or literally; this does not, however, amount to having to accept a higher level of infringement of their rights for things that happen in such an environment.

  16. Sensitivity to First-Order Relations of Facial Elements in Infant Rhesus Macaques

    ERIC Educational Resources Information Center

    Paukner, Annika; Bower, Seth; Simpson, Elizabeth A.; Suomi, Stephen J.

    2013-01-01

    Faces are visually attractive to both human and nonhuman primates. Human neonates are thought to have a broad template for faces at birth and prefer face-like to non-face-like stimuli. To better compare developmental trajectories of face processing phylogenetically, here, we investigated preferences for face-like stimuli in infant rhesus macaques…

  17. Seeing Objects as Faces Enhances Object Detection.

    PubMed

    Takahashi, Kohske; Watanabe, Katsumi

    2015-10-01

The face is a special visual stimulus. Both bottom-up processing of low-level facial features and top-down modulation by face expectations contribute to the advantages of face perception. However, it is hard to dissociate the top-down factors from the bottom-up processes, since facial stimuli mandatorily lead to face awareness. In the present study, using the face pareidolia phenomenon, we demonstrated that face awareness, namely seeing an object as a face, enhances object detection performance. In face pareidolia, some people see a visual stimulus, for example three dots arranged in a V shape, as a face, while others do not. This phenomenon allows us to investigate the effect of face awareness while leaving the stimulus itself unchanged. Participants were asked to detect a face target or a triangle target. Although the target itself was identical between the two tasks, detection sensitivity was higher when participants recognized the target as a face. This was the case irrespective of the eccentricity or vertical orientation of the stimulus. These results demonstrate that seeing an object as a face facilitates object detection via top-down modulation. The advantages of face perception are therefore, at least in part, due to face awareness.

  18. Seeing Objects as Faces Enhances Object Detection

    PubMed Central

    Takahashi, Kohske; Watanabe, Katsumi

    2015-01-01

The face is a special visual stimulus. Both bottom-up processing of low-level facial features and top-down modulation by face expectations contribute to the advantages of face perception. However, it is hard to dissociate the top-down factors from the bottom-up processes, since facial stimuli mandatorily lead to face awareness. In the present study, using the face pareidolia phenomenon, we demonstrated that face awareness, namely seeing an object as a face, enhances object detection performance. In face pareidolia, some people see a visual stimulus, for example three dots arranged in a V shape, as a face, while others do not. This phenomenon allows us to investigate the effect of face awareness while leaving the stimulus itself unchanged. Participants were asked to detect a face target or a triangle target. Although the target itself was identical between the two tasks, detection sensitivity was higher when participants recognized the target as a face. This was the case irrespective of the eccentricity or vertical orientation of the stimulus. These results demonstrate that seeing an object as a face facilitates object detection via top-down modulation. The advantages of face perception are therefore, at least in part, due to face awareness. PMID:27648219

  19. Can We Distinguish Emotions from Faces? Investigation of Implicit and Explicit Processes of Peak Facial Expressions.

    PubMed

    Xiao, Ruiqi; Li, Xianchun; Li, Lin; Wang, Yanmei

    2016-01-01

Most previous studies on facial expression recognition have focused on moderate emotions; to date, few studies have investigated the explicit and implicit processing of peak emotions. In the current study, we used images of the transient, peak-intensity expressions of athletes at the winning or losing point of a competition as materials, and investigated the diagnosability of peak facial expressions at both the implicit and the explicit level. In Experiment 1, participants were instructed to evaluate isolated faces, isolated bodies, and face-body compounds while their eye movements were recorded. The results revealed that the isolated-body and face-body congruent images were better recognized than the isolated-face and face-body incongruent images, indicating that the emotional information conveyed by facial cues was ambiguous and that body cues influenced facial emotion recognition. Furthermore, the eye movement records showed that participants displayed distinct gaze patterns for the congruent and incongruent compounds. In Experiment 2A, a subliminal affective priming task was used, with faces as primes and bodies as targets, to investigate unconscious emotion perception of peak facial expressions. The results showed that a winning face prime facilitated reactions to a winning body target, whereas a losing face prime inhibited reactions to a winning body target, suggesting that peak facial expressions could be perceived at the implicit level. In general, the results indicate that peak facial expressions cannot be consciously recognized but can be perceived at the unconscious level. In Experiment 2B, a revised subliminal affective priming task and a strict awareness test were used to examine the validity of the unconscious perception of peak facial expressions found in Experiment 2A. Results of Experiment 2B showed that reaction times to both winning and losing body targets were influenced by the invisible peak facial expression primes, indicating unconscious perception of peak facial expressions.

  20. Can We Distinguish Emotions from Faces? Investigation of Implicit and Explicit Processes of Peak Facial Expressions

    PubMed Central

    Xiao, Ruiqi; Li, Xianchun; Li, Lin; Wang, Yanmei

    2016-01-01

Most previous studies on facial expression recognition have focused on moderate emotions; to date, few studies have investigated the explicit and implicit processing of peak emotions. In the current study, we used images of the transient, peak-intensity expressions of athletes at the winning or losing point of a competition as materials, and investigated the diagnosability of peak facial expressions at both the implicit and the explicit level. In Experiment 1, participants were instructed to evaluate isolated faces, isolated bodies, and face-body compounds while their eye movements were recorded. The results revealed that the isolated-body and face-body congruent images were better recognized than the isolated-face and face-body incongruent images, indicating that the emotional information conveyed by facial cues was ambiguous and that body cues influenced facial emotion recognition. Furthermore, the eye movement records showed that participants displayed distinct gaze patterns for the congruent and incongruent compounds. In Experiment 2A, a subliminal affective priming task was used, with faces as primes and bodies as targets, to investigate unconscious emotion perception of peak facial expressions. The results showed that a winning face prime facilitated reactions to a winning body target, whereas a losing face prime inhibited reactions to a winning body target, suggesting that peak facial expressions could be perceived at the implicit level. In general, the results indicate that peak facial expressions cannot be consciously recognized but can be perceived at the unconscious level. In Experiment 2B, a revised subliminal affective priming task and a strict awareness test were used to examine the validity of the unconscious perception of peak facial expressions found in Experiment 2A. Results of Experiment 2B showed that reaction times to both winning and losing body targets were influenced by the invisible peak facial expression primes, indicating unconscious perception of peak facial expressions. PMID:27630604

  1. Age differences in accuracy and choosing in eyewitness identification and face recognition.

    PubMed

    Searcy, J H; Bartlett, J C; Memon, A

    1999-05-01

Studies of aging and face recognition show age-related increases in false recognitions of new faces. To explore implications of this false alarm effect, we had young and senior adults perform (1) three eyewitness identification tasks, using both target-present and target-absent lineups, and (2) an old/new recognition task in which a study list of faces was followed by a test including old and new faces, along with conjunctions of old faces. Compared with the young, seniors had lower accuracy and higher choosing rates on the lineups, and they also falsely recognized more new faces on the recognition test. However, after screening for perceptual processing deficits, there was no age difference in false recognition of conjunctions, or in discriminating old faces from conjunctions. We conclude that the false alarm effect generalizes to lineup identification, but does not extend to conjunction faces. The findings are consistent with age-related deficits in recollection of context and relative age invariance in the perceptual integrative processes underlying the experience of familiarity.

  2. Face identity recognition in autism spectrum disorders: a review of behavioral studies.

    PubMed

    Weigelt, Sarah; Koldewyn, Kami; Kanwisher, Nancy

    2012-03-01

    Face recognition--the ability to recognize a person from their facial appearance--is essential for normal social interaction. Face recognition deficits have been implicated in the most common disorder of social interaction: autism. Here we ask: is face identity recognition in fact impaired in people with autism? Reviewing behavioral studies we find no strong evidence for a qualitative difference in how facial identity is processed between those with and without autism: markers of typical face identity recognition, such as the face inversion effect, seem to be present in people with autism. However, quantitatively--i.e., how well facial identity is remembered or discriminated--people with autism perform worse than typical individuals. This impairment is particularly clear in face memory and in face perception tasks in which a delay intervenes between sample and test, and less so in tasks with no memory demand. Although some evidence suggests that this deficit may be specific to faces, further evidence on this question is necessary. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. A real time mobile-based face recognition with fisherface methods

    NASA Astrophysics Data System (ADS)

    Arisandi, D.; Syahputra, M. F.; Putri, I. L.; Purnamawati, S.; Rahmat, R. F.; Sari, P. P.

    2018-03-01

Face recognition is a research field in computer vision that studies how to learn faces and determine the identity of a face from an image sent to the system. By utilizing face recognition technology, the process of learning other students' identities at a university becomes simpler: a student no longer needs to browse the student directory on the university's server and search for the person with a particular facial trait. To achieve this goal, the face recognition application uses image processing methods consisting of two phases, a pre-processing phase and a recognition phase. In the pre-processing phase, the system transforms the input image into the best image for the recognition phase; the purpose of this phase is to reduce noise and increase signal in the image. For the recognition phase, we use the Fisherface method, chosen because it performs well even with limited training data. In our experiments, the accuracy of face recognition using Fisherface was 90%.
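The two-phase pipeline described above can be sketched as PCA dimensionality reduction followed by Fisher's linear discriminant, which is the core of the Fisherface method. This is an illustrative NumPy sketch, not the paper's implementation: the synthetic 64-dimensional "face" vectors, the two-subject setup, and the choice of 10 principal components are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "face images": two subjects, 20 samples each, as 64-dimensional
# vectors clustered around a per-subject mean face (a stand-in for real
# cropped, pre-processed images).
d, n_per = 64, 20
mean_a = rng.normal(0.0, 1.0, d)
mean_b = rng.normal(0.0, 1.0, d)
X = np.vstack([mean_a + rng.normal(0.0, 0.5, (n_per, d)),
               mean_b + rng.normal(0.0, 0.5, (n_per, d))])
y = np.array([0] * n_per + [1] * n_per)

# Phase 1: mean-centre and reduce dimensionality with PCA so the within-class
# scatter matrix used below is well conditioned.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
P = Vt[:10].T                        # keep the top 10 principal components
Z = (X - mu) @ P

# Phase 2 (Fisherface proper): for two classes, Fisher's linear discriminant
# is w = Sw^{-1}(m0 - m1), maximizing between-class over within-class scatter.
m0, m1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
Sw = np.cov(Z[y == 0].T) + np.cov(Z[y == 1].T)
w = np.linalg.solve(Sw, m0 - m1)

def identify(face):
    """Project a raw face vector onto the Fisherface axis; nearest class wins."""
    z = (face - mu) @ P @ w
    return 0 if abs(z - m0 @ w) <= abs(z - m1 @ w) else 1

probe_a = mean_a + rng.normal(0.0, 0.5, d)   # unseen image of subject A
probe_b = mean_b + rng.normal(0.0, 0.5, d)   # unseen image of subject B
```

With more than two subjects, the single discriminant vector generalizes to the top eigenvectors of Sw^{-1}Sb, one fewer than the number of classes.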

  4. Global survey of hospital pharmacy practice.

    PubMed

    Doloresco, Fred; Vermeulen, Lee C

    2009-03-01

    The current state of hospital pharmacy practice around the globe and key issues facing international hospital pharmacy practice were studied. This survey assessed multiple aspects of hospital pharmacy practice within each of the Member States recognized by the United Nations. An official respondent from each nation was identified by a structured nomination process. The survey instrument was developed; pilot tested; translated into English, French, and Spanish; and distributed in July 2007. The nature, scope, and breadth of hospital pharmacy practices in medication procurement, prescribing, preparation and distribution, administration, outcomes monitoring, and human resources and training were evaluated. Descriptive statistics were used to characterize the responses. Eighty-five countries (44% of the 192 Member States) responded to the survey. The respondent sample of countries was representative of all nations in terms of population, geographic region, World Health Organization region, and level of economic development. In addition to qualifying the nature of hospital pharmacy practice, the survey highlighted numerous challenges facing the profession of pharmacy in the hospital setting around the globe, including access to medicines and adequately trained pharmacists. While the practice of hospital pharmacy differs from country to country, many nations face similar challenges, regardless of their population, location, or wealth. These survey results provide a basis for identifying opportunities for growth and development, as well as for international collaboration, to advance the profession of pharmacy and ensure that patients worldwide receive the care that they deserve.

  5. Introduction: Science, Sexuality, and Psychotherapy: Shifting Paradigms.

    PubMed

    Cerbone, Armand R

    2017-08-01

    This introduction presents an overview of the current issue (73, 8) of Journal of Clinical Psychology: In Session. This issue features a series of articles, with clinical cases, each presented to illustrate the challenges faced by individuals and couples whose sexual and gender identities and expressions do not comport with traditional and cultural norms. These articles also document the challenges to the therapists who treat them. Considered individually, each article underscores the need to recognize the importance of evidence in guiding psychotherapy in cases involving sexuality. The discussions in each article offer recommendations meant to help and guide psychotherapists. Considered collectively, they raise important questions and considerations about shifting paradigms of human sexuality. Implications for assessment and treatment of cases involving sexuality and gender identity are discussed and recommended. © 2017 Wiley Periodicals, Inc.

  6. Remembering 1500 pictures: the right hemisphere remembers better than the left.

    PubMed

    Laeng, Bruno; Øvervoll, Morten; Ole Steinsvik, Oddmar

    2007-03-01

    We hypothesized that the right hemisphere would be superior to the left hemisphere in remembering having seen a specific picture before, given its superiority in perceptually encoding specific aspects of visual form. A large set of pictures (N=1500) of animals, human faces, artifacts, landscapes, and art paintings was shown for 2s in central vision, or tachistoscopically (for 100ms) in each half visual field, to normal participants who were then tested 1-6 days later for their recognition. Images that were presented initially to the right hemisphere were better recognized than those presented to the left hemisphere. These results, obtained with neurologically intact participants, a large number of stimuli, and long retention delays, are consistent with previously described hemispheric differences in the memory of split-brain patients.

  7. Mental illness, stigma, and the media.

    PubMed

    Benbow, Alastair

    2007-01-01

    Society is ingrained with prejudice toward mental illness, and sufferers are often widely perceived to be dangerous or unpredictable. Reinforcement of these popular myths through the media can perpetuate the stigma surrounding mental illness, precipitating shame, self-blame, and secrecy, all of which discourage affected individuals from seeking treatment. Efforts to counter stigma in mental illness face the challenge of centuries of discrimination and must therefore replace existing stereotypes with coverage of positive outcomes as a first step in the daunting task of overcoming them. Long-term anti-stigma campaigns that encompass human-rights-based, normalization, and educational approaches are needed. The involvement of the media is essential for success, but, in order for the media to be used effectively, its motivations and limitations must first be recognized and understood.

  8. Dogs can discriminate human smiling faces from blank expressions.

    PubMed

    Nagasawa, Miho; Murai, Kensuke; Mogi, Kazutaka; Kikusui, Takefumi

    2011-07-01

    Dogs have a unique ability to understand visual cues from humans. We investigated whether dogs can discriminate between human facial expressions. Photographs of human faces were used to test nine pet dogs in two-choice discrimination tasks. The training phases involved each dog learning to discriminate between a set of photographs of their owner's smiling and blank face. Of the nine dogs, five fulfilled these criteria and were selected for test sessions. In the test phase, 10 sets of photographs of the owner's smiling and blank face, which had previously not been seen by the dog, were presented. The dogs selected the owner's smiling face significantly more often than expected by chance. In subsequent tests, 10 sets of smiling and blank face photographs of 20 persons unfamiliar to the dogs were presented (10 males and 10 females). There was no statistical difference between the accuracy in the case of the owners and that in the case of unfamiliar persons with the same gender as the owner. However, the accuracy was significantly lower in the case of unfamiliar persons of the opposite gender to that of the owner, than with the owners themselves. These results suggest that dogs can learn to discriminate human smiling faces from blank faces by looking at photographs. Although it remains unclear whether dogs have human-like systems for visual processing of human facial expressions, the ability to learn to discriminate human facial expressions may have helped dogs adapt to human society.

  9. Recognizing the same face in different contexts: Testing within-person face recognition in typical development and in autism

    PubMed Central

    Neil, Louise; Cappagli, Giulia; Karaminis, Themelis; Jenkins, Rob; Pellicano, Elizabeth

    2016-01-01

    Unfamiliar face recognition follows a particularly protracted developmental trajectory and is more likely to be atypical in children with autism than those without autism. There is a paucity of research, however, examining the ability to recognize the same face across multiple naturally varying images. Here, we investigated within-person face recognition in children with and without autism. In Experiment 1, typically developing 6- and 7-year-olds, 8- and 9-year-olds, 10- and 11-year-olds, 12- to 14-year-olds, and adults were given 40 grayscale photographs of two distinct male identities (20 of each face taken at different ages, from different angles, and in different lighting conditions) and were asked to sort them by identity. Children mistook images of the same person as images of different people, subdividing each individual into many perceived identities. Younger children divided images into more perceived identities than adults and also made more misidentification errors (placing two different identities together in the same group) than older children and adults. In Experiment 2, we used the same procedure with 32 cognitively able children with autism. Autistic children reported a similar number of identities and made similar numbers of misidentification errors to a group of typical children of similar age and ability. Fine-grained analysis using matrices revealed marginal group differences in overall performance. We suggest that the immature performance in typical and autistic children could arise from problems extracting the perceptual commonalities from different images of the same person and building stable representations of facial identity. PMID:26615971

  10. The organization of conspecific face space in nonhuman primates

    PubMed Central

    Parr, Lisa A.; Taubert, Jessica; Little, Anthony C.; Hancock, Peter J. B.

    2013-01-01

    Humans and chimpanzees demonstrate numerous cognitive specializations for processing faces, but comparative studies with monkeys suggest that these may be the result of recent evolutionary adaptations. The present study utilized the novel approach of face space, a powerful theoretical framework used to understand the representation of face identity in humans, to further explore species differences in face processing. According to the theory, faces are represented by vectors in a multidimensional space, the centre of which is defined by an average face. Each dimension codes features important for describing a face’s identity, and vector length codes the feature’s distinctiveness. Chimpanzees and rhesus monkeys discriminated male and female conspecifics’ faces, rated by humans for their distinctiveness, using a computerized task. Multidimensional scaling analyses showed that the organization of face space was similar between humans and chimpanzees. Distinctive faces had the longest vectors and were the easiest for chimpanzees to discriminate. In contrast, distinctiveness did not correlate with the performance of rhesus monkeys. The feature dimensions for each species’ face space were visualized and described using morphing techniques. These results confirm species differences in the perceptual representation of conspecific faces, which are discussed within an evolutionary framework. PMID:22670823
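
    The face-space account above can be made concrete with a small sketch (purely illustrative: the vectors, their dimensionality, and the example values are invented, not taken from the study). Each face is a feature vector, the space is centred on the average face, and distinctiveness is the length of a face's vector from that centre:

```python
# Minimal "face space" sketch: faces are feature vectors, the centre of the
# space is the average face, and distinctiveness is vector length (Euclidean
# distance from that centre).

def average_face(faces):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(faces)
    return [sum(f[i] for f in faces) / n for i in range(len(faces[0]))]

def distinctiveness(face, centre):
    """Length of the face's vector in face space."""
    return sum((a - b) ** 2 for a, b in zip(face, centre)) ** 0.5

faces = [
    [0.20, 0.40, 0.10],  # typical faces: close to the average
    [0.30, 0.50, 0.20],
    [0.25, 0.45, 0.15],
    [0.90, 0.10, 0.80],  # a distinctive face: far from the average
]
centre = average_face(faces)
scores = [distinctiveness(f, centre) for f in faces]
# The distinctive face has the longest vector, matching the finding that
# distinctive faces were the easiest for chimpanzees to discriminate.
assert max(scores) == scores[3]
```

    Under this model, the chimpanzee result corresponds to discrimination performance increasing with vector length, while for rhesus monkeys no such correlation was found.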

  11. Neural correlates of text-based emoticons: a preliminary fMRI study.

    PubMed

    Kim, Ko Woon; Lee, Sang Won; Choi, Jeewook; Kim, Tae Min; Jeong, Bumseok

    2016-08-01

    Like nonverbal cues in oral interactions, text-based emoticons, which are textual portrayals of a writer's facial expressions, are commonly used in electronic device-mediated communication. Little is known, however, about how text-based emoticons are processed in the human brain. In this study, we used fMRI to investigate whether text-based emoticons are processed as facial expressions. During the fMRI scan, subjects were asked to press a button to indicate whether text-based emoticons represented positive or negative emotions. Voxel-wise analyses compared responses to emotional versus scrambled emoticons and among emoticons conveying different emotions. To explore processing strategies for text-based emoticons, brain activity in the bilateral occipital and fusiform face areas was compared. In the voxel-wise analysis, both emotional and scrambled emoticons were processed mainly in the bilateral fusiform gyri, inferior division of lateral occipital cortex, inferior frontal gyri, dorsolateral prefrontal cortex (DLPFC), dorsal anterior cingulate cortex (dACC), and parietal cortex. In a percent signal change analysis, the right occipital and fusiform face areas showed significantly higher activation than the left ones. In comparisons among emoticons, the sad emoticon showed a significant BOLD signal decrease in the dACC, the left AIC, the bilateral thalamus, and the precuneus as compared with the other conditions. The results of this study imply that people recognize text-based emoticons as pictures representing facial expressions. Even though text-based emoticons carry emotional meaning, they did not engage the amygdala, whereas previous studies using emotional stimuli documented amygdala activation.

  12. A facial expression of pax: Assessing children's "recognition" of emotion from faces.

    PubMed

    Nelson, Nicole L; Russell, James A

    2016-01-01

    In a classic study, children were shown an array of facial expressions and asked to choose the person who expressed a specific emotion. Children were later asked to name the emotion in the face with any label they wanted. Subsequent research often relied on the same two tasks--choice from array and free labeling--to support the conclusion that children recognize basic emotions from facial expressions. Here five studies (N=120, 2- to 10-year-olds) included a novel nonsense facial expression in the array and showed that these two tasks produce illusory recognition. Children "recognized" a nonsense emotion (pax or tolen) and two familiar emotions (fear and jealousy) from the same nonsense face. Children likely used a process of elimination; they paired the unknown facial expression with a label given in the choice-from-array task and, after just two trials, freely labeled the new facial expression with the new label. These data indicate that past studies using this method may have overestimated children's expression knowledge. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Sexual Dimorphism Analysis and Gender Classification in 3D Human Face

    NASA Astrophysics Data System (ADS)

    Hu, Yuan; Lu, Li; Yan, Jingqi; Liu, Zhi; Shi, Pengfei

    In this paper, we present a sexual dimorphism analysis of the 3D human face and perform gender classification based on its results. Four types of features are extracted from a 3D human-face image. Using statistical methods, the existence of sexual dimorphism in the 3D human face is demonstrated based on these features. The contribution of each feature to sexual dimorphism is quantified according to a novel criterion. The best gender classification rate is 94%, obtained using SVMs and the Matcher Weighting fusion method. This research adds to the knowledge of sexual dimorphism in 3D faces and affords a foundation for distinguishing between male and female 3D faces.
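
    The 'Matcher Weighting' fusion named above can be sketched in a few lines: each feature-specific classifier ('matcher') produces a score, and the scores are fused with weights proportional to each matcher's standalone accuracy. The accuracies, scores, and 0.5 threshold below are hypothetical illustrations, not values from the study:

```python
# Sketch of matcher-weighting score fusion for gender classification: weight
# each matcher's score by its (normalized) standalone accuracy, then sum.

def matcher_weights(accuracies):
    """Normalize per-matcher accuracies into fusion weights that sum to 1."""
    total = sum(accuracies)
    return [a / total for a in accuracies]

def fuse(scores, weights):
    """Weighted sum of per-matcher scores (here, higher = more 'male')."""
    return sum(s * w for s, w in zip(scores, weights))

# Hypothetical standalone accuracies of four feature-specific matchers.
weights = matcher_weights([0.88, 0.80, 0.75, 0.70])
assert abs(sum(weights) - 1.0) < 1e-9

# Hypothetical scores for one probe face from the four matchers.
fused = fuse([0.9, 0.6, 0.4, 0.7], weights)
decision = "male" if fused > 0.5 else "female"
```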

  14. Developmental origins of the face inversion effect.

    PubMed

    Cashon, Cara H; Holt, Nicholas A

    2015-01-01

    A hallmark of adults' expertise for faces is that they are better at recognizing, discriminating, and processing upright faces compared to inverted faces. We investigate the developmental origins of "the face inversion effect" by reviewing research on infants' perception of upright and inverted faces during the first year of life. We review the effects of inversion on infants' face preference, recognition, processing (holistic and second-order configural), and scanning as well as face-related neural responses. Particular attention is paid to the developmental patterns that emerge within and across these areas of face perception. We conclude that the developmental origins of the inversion effect begin in the first few months of life and grow stronger over the first year, culminating in effects that are commonly thought to indicate adult-like expertise. We posit that by the end of the first year, infants' face-processing system has become specialized to upright faces and a foundation for adults' upright-face expertise has been established. Developmental mechanisms that may facilitate the emergence of this upright-face specialization are discussed, including the roles that physical and social development may play in upright faces' becoming more meaningful to infants during the first year. © 2015 Elsevier Inc. All rights reserved.

  15. Patterns of Visual Attention to Faces and Objects in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    McPartland, James C.; Webb, Sara Jane; Keehn, Brandon; Dawson, Geraldine

    2011-01-01

    This study used eye-tracking to examine visual attention to faces and objects in adolescents with autism spectrum disorder (ASD) and typical peers. Point of gaze was recorded during passive viewing of images of human faces, inverted human faces, monkey faces, three-dimensional curvilinear objects, and two-dimensional geometric patterns.…

  16. Comparing Biology Grades Based on Instructional Delivery and Instructor at a Community College: Face-to-Face Course Versus Online Course

    NASA Astrophysics Data System (ADS)

    Rosenzweig, Amanda H.

    Through distance learning, the community college system has been able to serve more students by providing educational opportunities to students who would otherwise be unable to attend college. The community college of focus in the study increased its online enrollments and online course offerings due to the growth of overall enrollment. The purpose of the study is to address whether there is a difference in students' grades between face-to-face and online biology-related courses, and whether there are differences in grades between face-to-face and online biology courses taught by different instructors or by the same instructor. The study also addresses whether online course delivery is a viable method to educate students in biology-related fields. The study spanned 14 semesters between spring 2006 and summer 2011. Data were collected for 6,619 students. For each student, demographic information, cumulative grade point average, ACT, and data on course performance were gathered. Student data were gathered from General Biology I, Microbiology of Human Pathogens, Human Anatomy and Physiology I, and Human Anatomy and Physiology II courses. Univariate analysis of variance, linear regression, and descriptive analysis were used to analyze the data and determine which variables significantly impacted grade achievement for face-to-face and online students in biology classes. The findings showed that course type, face-to-face or online, was significant for Microbiology of Human Pathogens and Human Anatomy and Physiology I, both upper-level courses. Instructor was significant for General Biology I, a lower-level course, as well as Human Anatomy and Physiology I and Human Anatomy and Physiology II. However, in every class there were instructors whose face-to-face and online sections differed significantly. This study provides information about the relationship between students' final grades, class type (face-to-face or online), and instructor. Administrators, faculty, and students can use this information to understand what needs to be done to successfully teach and enroll in biology courses, face-to-face or online.
    Keywords: biology courses, online courses, face-to-face courses, class type, teacher influence, grades, CGPA, community college

  17. Mapping attractor fields in face space: the atypicality bias in face recognition.

    PubMed

    Tanaka, J; Giles, M; Kremen, S; Simon, V

    1998-09-01

    A familiar face can be recognized across many changes in the stimulus input. In this research, the many-to-one mapping of face stimuli to a single face memory is referred to as a face memory's 'attractor field'. According to the attractor field approach, a face memory will be activated by any stimuli falling within the boundaries of its attractor field. It was predicted that by virtue of its location in a multi-dimensional face space, the attractor field of an atypical face will be larger than the attractor field of a typical face. To test this prediction, subjects made likeness judgments to morphed faces that contained a 50/50 contribution from an atypical and a typical parent face. The main result of four experiments was that the morph face was judged to bear a stronger resemblance to the atypical parent face than to the typical parent face. The computational basis of the atypicality bias was demonstrated in a neural network simulation where morph inputs of atypical and typical representations elicited stronger activation of atypical output units than of typical output units. Together, the behavioral and simulation evidence supports the view that the attractor fields of atypical faces span a broader region of face space than the attractor fields of typical faces.
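
    The attractor-field idea can be illustrated with a toy model (an invented sketch, not the authors' neural network simulation): place stored faces in a one-dimensional face space and let each memory's attractor width grow with its isolation, so that atypical faces, which sit in sparse regions, have broader attractor fields:

```python
# Toy 1-D attractor-field model: each stored face memory has a Gaussian
# attractor whose width equals its distance to the nearest other stored face,
# so isolated (atypical) faces have broader attractor fields.
import math

stored = [0.0, 0.1, 0.2, 0.9]  # 0.1 = typical parent, 0.9 = atypical parent

def field_width(pos, stored):
    """Attractor width: distance to the nearest other stored face."""
    return min(abs(pos - s) for s in stored if s != pos)

def activation(x, pos, stored):
    """How strongly input x activates the memory stored at pos."""
    width = field_width(pos, stored)
    return math.exp(-((x - pos) / width) ** 2)

morph = 0.5  # 50/50 morph: equidistant from both parent faces
like_typical = activation(morph, 0.1, stored)
like_atypical = activation(morph, 0.9, stored)
# Despite equal distances, the broader attractor of the atypical parent
# captures the morph: the atypicality bias.
assert like_atypical > like_typical
```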

  18. The role of movement in the recognition of famous faces.

    PubMed

    Lander, K; Christie, F; Bruce, V

    1999-11-01

    The effects of movement on the recognition of famous faces shown in difficult conditions were investigated. Images were presented as negatives, upside down (inverted), and thresholded. Results indicate that, under all these conditions, moving faces were recognized significantly better than static ones. One possible explanation of this effect could be that a moving sequence contains more static information about the different views and expressions of the face than does a single static image. However, even when the amount of static information was equated (Experiments 3 and 4), there was still an advantage for moving sequences that contained their original dynamic properties. The results suggest that the dynamics of the motion provide additional information, helping to access an established familiar face representation. Both the theoretical and practical implications of these findings are discussed.

  19. States at Risk: America's Preparedness Report Card

    NASA Astrophysics Data System (ADS)

    Yu, R. M. S.; Strauss, B.; Kulp, S. A.; Bronzan, J.; Rodehorst, B.; Bhat, C.; Dix, B.; Savonis, M.; Wiles, R.

    2015-12-01

    Many states are already experiencing the costly impacts of extreme climate and weather events. The occurrence, frequency, and intensity of these events may change under future climates. Preparing for these changes takes time, and state government agencies and communities need to recognize the risks they could potentially face and the response actions already undertaken. The States at Risk: America's Preparedness Report Card project is the first-ever study that quantifies five climate-change-driven hazards, and the relevant state government response actions, in each of the 50 states. The changing characteristics of extreme heat, drought, wildfires, and inland and coastal flooding were assessed for the baseline period (around year 2000) through the years 2030 and 2050 across all 50 states. Bias-corrected statistically-downscaled (BCSD) climate projections (Reclamation, 2013) and hydrology projections (Reclamation, 2014) from the Coupled Model Intercomparison Project phase 5 (CMIP5) under RCP8.5 were used. The climate change response action analysis covers five critical sectors: Transportation, Energy, Water, Human Health, and Communities. It examined whether there is evidence that the state is taking action to (1) reduce current risks, (2) raise its awareness of future risks, (3) plan for adaptation to the future risks, and (4) implement specific actions to reduce future risks for each applicable hazard. Results from the two analyses were aggregated and translated into a rating system that standardizes assessments across states and can be easily understood by both technical and non-technical audiences. The findings in this study not only serve as a screening tool for states to recognize the hazards they could potentially face as climate changes, but also serve as a roadmap for states to address the gaps in response actions and to improve climate preparedness and resilience.

  20. Elements of person knowledge: Episodic recollection helps us to identify people but not to recognize their faces.

    PubMed

    MacKenzie, Graham; Donaldson, David I

    2016-12-01

    Faces automatically draw attention, allowing rapid assessments of personality and likely behaviour. How we respond to people is, however, highly dependent on whether we know who they are. According to face processing models, person knowledge comes from an extended neural system that includes structures linked to episodic memory. Here we use scalp-recorded brain signals to demonstrate the specific role of episodic memory processes during face processing. In two experiments we recorded Event-Related Potentials (ERPs) while participants made 'identify', 'familiar', or 'unknown' responses to famous faces. ERPs revealed neural signals previously associated with episodic recollection for 'identify' but not 'familiar' faces. These findings provide novel evidence suggesting that recollection is central to face processing, providing one source of person knowledge that can be used to moderate the initial impressions gleaned from the core neural system that supports face recognition. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  1. The Other-Race Effect Develops During Infancy

    PubMed Central

    Quinn, Paul C.; Slater, Alan M.; Lee, Kang; Ge, Liezhong; Pascalis, Olivier

    2008-01-01

    Experience plays a crucial role in the development of face processing. In the study reported here, we investigated how faces observed within the visual environment affect the development of the face-processing system during the 1st year of life. We assessed 3-, 6-, and 9-month-old Caucasian infants' ability to discriminate faces within their own racial group and within three other-race groups (African, Middle Eastern, and Chinese). The 3-month-old infants demonstrated recognition in all conditions, the 6-month-old infants were able to recognize Caucasian and Chinese faces only, and the 9-month-old infants' recognition was restricted to own-race faces. The pattern of preferences indicates that the other-race effect is emerging by 6 months of age and is present at 9 months of age. The findings suggest that facial input from the infant's visual environment is crucial for shaping the face-processing system early in infancy, resulting in differential recognition accuracy for faces of different races in adulthood. PMID:18031416

  2. Dystonia: Emotional and Mental Health

    MedlinePlus

    Although dystonia is a movement disorder that impacts ... emotion as well as muscle movement. For years, mental health professionals have recognized that coping with a chronic ...

  3. Improving student retention in computer engineering technology

    NASA Astrophysics Data System (ADS)

    Pierozinski, Russell Ivan

    The purpose of this research project was to improve student retention in the Computer Engineering Technology program at the Northern Alberta Institute of Technology by reducing the number of dropouts and increasing the graduation rate. This action research project utilized a mixed methods approach of a survey and face-to-face interviews. The participants were male and female, with a large majority ranging from 18 to 21 years of age. The research found that participants recognized their skills and capability, but their capacity to remain in the program was dependent on understanding and meeting the demanding pace and rigour of the program. The participants recognized that curriculum delivery along with instructor-student interaction had an impact on student retention. To be successful in the program, students required support in four domains: academic, learning management, career, and social.

  4. Debate: Limitations on universality: the "right to health" and the necessity of legal nationality

    PubMed Central

    2010-01-01

    Background: The "right to health," including access to basic healthcare, has been recognized as a universal human right through a number of international agreements. Attempts to protect this ideal, however, have relied on states as the guarantor of rights and have subsequently ignored stateless individuals, or those lacking legal nationality in any nation-state. While a legal nationality alone is not sufficient to guarantee that a right to healthcare is accessible, an absence of any legal nationality is almost certainly an obstacle in most cases. There are millions of so-called stateless individuals around the globe who are, in effect, denied medical citizenship in their countries of residence. A central motivating factor for this essay is the fact that statelessness as a concept is largely absent from the medical literature. The goal for this discussion, therefore, is primarily to illustrate the need for further monitoring of health access issues by the medical community, and for a great deal more research into the effects of statelessness upon access to healthcare. This is important both as a theoretical issue, in light of the recognition by many of healthcare as a universal right, as well as an empirical fact that requires further exploration and amelioration.
    Discussion: Most discussions of the human right to health assume that every human being has legal nationality, but in reality there are at least 11 to 12 million stateless individuals worldwide who are often unable to access basic healthcare. The examples of the Roma in Europe, the hill tribes of Thailand, and many Palestinians in Israel highlight the negative health impacts associated with statelessness.
    Summary: Stateless individuals often face an inability to access the most basic healthcare, much less the "highest attainable standard of health" outlined by international agreements. Rather than presuming nationality, statelessness must be recognized by the medical community. Additionally, it is imperative that stateless populations be recognized, the health of these populations be tracked, and more research conducted to further elaborate upon the connection between statelessness and access to healthcare services, and hence a universal right to health. PMID:20525334

  5. Efficient human face detection in infancy.

    PubMed

    Jakobsen, Krisztina V; Umstead, Lindsey; Simpson, Elizabeth A

    2016-01-01

    Adults detect conspecific faces more efficiently than heterospecific faces; however, the development of this own-species bias (OSB) remains unexplored. We tested whether 6- and 11-month-olds exhibit OSB in their attention to human and animal faces in complex visual displays with high perceptual load (25 images competing for attention). Infants (n = 48) and adults (n = 43) passively viewed arrays containing a face among 24 non-face distractors while we measured their gaze with remote eye tracking. While OSB is typically not observed until about 9 months, we found that, already by 6 months, human faces were more likely to be detected, were detected more quickly (attention capture), and received longer looks (attention holding) than animal faces. These data suggest that 6-month-olds already exhibit OSB in face detection efficiency, consistent with perceptual attunement. This specialization may reflect the biological importance of detecting conspecific faces, a foundational ability for early social interactions. © 2015 Wiley Periodicals, Inc.

  6. Self-esteem Modulates the P3 Component in Response to the Self-face Processing after Priming with Emotional Faces

    PubMed Central

    Guan, Lili; Zhao, Yufang; Wang, Yige; Chen, Yujie; Yang, Juan

    2017-01-01

    The self-face processing advantage (SPA) refers to the research finding that individuals generally recognize their own face faster than another’s face; self-face also elicits an enhanced P3 amplitude compared to another’s face. It has been suggested that social evaluation threats could weaken the SPA and that self-esteem could be regarded as a threat buffer. However, little research has directly investigated the neural evidence of how self-esteem modulates the social evaluation threat to the SPA. In the current event-related potential study, 27 healthy Chinese undergraduate students were primed with emotional faces (angry, happy, or neutral) and were asked to judge whether the target face (self, friend, and stranger) was familiar or unfamiliar. Electrophysiological results showed that after priming with emotional faces (angry and happy), self-face elicited similar P3 amplitudes to friend-face in individuals with low self-esteem, but not in individuals with high self-esteem. The results suggest that as low self-esteem raises fears of social rejection and exclusion, priming with emotional faces (angry and happy) can weaken the SPA in low self-esteem individuals but not in high self-esteem individuals. PMID:28868041

  8. Face format at encoding affects the other-race effect in face memory.

    PubMed

    Zhao, Mintao; Hayward, William G; Bülthoff, Isabelle

    2014-08-07

    Memory of own-race faces is generally better than memory of other-race faces. This other-race effect (ORE) in face memory has been attributed to differences in contact, holistic processing, and motivation to individuate faces. Since most studies demonstrate the ORE with participants learning and recognizing static, single-view faces, it remains unclear whether the ORE generalizes to different face learning conditions. Using an old/new recognition task, we tested whether face format at encoding modulates the ORE. The results showed a significant ORE when participants learned static, single-view faces (Experiment 1). In contrast, the ORE disappeared when participants learned rigidly moving faces (Experiment 2). Moreover, learning faces displayed from four discrete views produced the same results as learning rigidly moving faces (Experiment 3). Contact with other-race faces was correlated with the magnitude of the ORE. Nonetheless, the absence of the ORE in Experiments 2 and 3 cannot be readily explained by either more frequent contact with other-race faces or stronger motivation to individuate them. These results demonstrate that the ORE is sensitive to face format at encoding, supporting the hypothesis that the relative involvement of holistic and featural processing at encoding mediates the ORE observed in face memory. © 2014 ARVO.

  9. Undercut feature recognition for core and cavity generation

    NASA Astrophysics Data System (ADS)

    Yusof, Mursyidah Md; Salman Abu Mansor, Mohd

    2018-01-01

    The core and cavity are among the most important components in an injection mould, and the quality of the final product depends largely on them. In industry, mould designers with years of experience and skill commonly use commercial CAD software to design the core and cavity, which is time consuming. This paper proposes an algorithm that detects possible undercut features and generates the core and cavity. Two approaches are presented: edge convexity and face connectivity. The edge convexity approach is used to recognize undercut features, while face connectivity is used to divide the faces into top and bottom regions.
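    The edge convexity test mentioned above can be sketched for the simplified case of planar faces with known outward normals. This is an illustration only, not the authors' implementation; the centroid-based criterion and the function name are assumptions:

```python
def edge_convexity(n1, p1, c2, tol=1e-9):
    """Classify the edge shared by two planar faces.

    n1: unit outward normal of face 1, p1: any point on face 1,
    c2: centroid of face 2.  The edge is convex when face 2's centroid
    lies below face 1's plane (the faces bend away from each other),
    concave when it lies above, and tangent when the faces are coplanar.
    """
    # Signed distance of c2 from face 1's plane: n1 . (c2 - p1)
    d = sum(a * (b - c) for a, b, c in zip(n1, c2, p1))
    if d < -tol:
        return "convex"
    if d > tol:
        return "concave"
    return "tangent"

# Outer box corner: top face (normal +z) meets a side face -> convex
edge_convexity((0, 0, 1), (0, 0, 1), (1, 0, 0.5))   # "convex"
# Inner room corner: floor meets a wall whose centroid is above the floor
edge_convexity((0, 0, 1), (0, 0, 0), (0, 0, 0.5))   # "concave"
```

    In a mould-design context, concave edges on the part surface are the natural candidates for undercut regions, since material there would trap the mould halves during ejection.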

  10. Comparison of the BCI Performance between the Semitransparent Face Pattern and the Traditional Face Pattern.

    PubMed

    Cheng, Jiao; Jin, Jing; Wang, Xingyu

    2017-01-01

    Brain-computer interface (BCI) systems allow users to communicate with the external world by recognizing brain activity without the assistance of the peripheral motor nervous system. P300-based BCI is one of the most commonly used BCI systems and can obtain high classification accuracy and information transfer rate (ITR). Face stimuli can elicit large event-related potentials and improve the performance of P300-based BCI. However, previous studies on face stimuli focused mainly on the effect of various face types (i.e., face expression, face familiarity, and multifaces) on BCI performance. Studies on the influence of face transparency differences are scarce. Therefore, we investigated the effect of a semitransparent face pattern (STF-P) (the subject could see the target character when the stimuli were flashed) and a traditional face pattern (F-P) (the subject could not see the target character when the stimuli were flashed) on BCI performance from the transparency perspective. Results showed that STF-P obtained significantly higher classification accuracy and ITR than F-P (p < 0.05).
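    Classification accuracy and ITR are the two performance measures named above. ITR for an N-class speller is conventionally computed with the Wolpaw formula; the sketch below uses hypothetical speller-size, accuracy, and trial-time figures, not values from the study:

```python
from math import log2

def itr_bits_per_min(n_classes, accuracy, trial_secs):
    """Wolpaw information transfer rate for an N-class BCI selection task.

    Bits per selection: log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)),
    scaled by the number of selections per minute.
    """
    p, n = accuracy, n_classes
    if p >= 1.0:
        bits = log2(n)
    elif p <= 1.0 / n:
        bits = 0.0  # at or below chance, no information is transferred
    else:
        bits = log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_secs

# Hypothetical 36-character speller: 90% accuracy, 10 s per selection
rate = itr_bits_per_min(36, 0.90, 10.0)
```

    Under this formula, any gain in accuracy at a fixed trial length raises the ITR, which is why the two measures typically improve together.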

  11. Evolutionary Relevance and Experience Contribute to Face Discrimination in Infant Macaques ("Macaca mulatta")

    ERIC Educational Resources Information Center

    Simpson, Elizabeth A.; Suomi, Stephen J.; Paukner, Annika

    2016-01-01

    In human children and adults, familiar face types--typically own-age and own-species faces--are discriminated better than other face types; however, human infants do not appear to exhibit an own-age bias but instead better discriminate adult faces, which they see more often. There are two possible explanations for this pattern: Perceptual…

  12. Neurons responsive to face-view in the Primate Ventrolateral Prefrontal Cortex

    PubMed Central

    Romanski, Lizabeth M.; Diehl, Maria M.

    2011-01-01

    Studies have indicated that temporal and prefrontal brain regions process face and vocal information. Face-selective and vocalization-responsive neurons have been demonstrated in the ventrolateral prefrontal cortex (VLPFC), and some prefrontal cells preferentially respond to combinations of faces and corresponding vocalizations. These studies suggest that VLPFC in non-human primates may play a role in communication similar to that of inferior frontal regions in human language processing. If VLPFC is involved in communication, information about a speaker's face, including identity, face-view, gaze and emotional expression, might be encoded by prefrontal neurons. In the following study, we examined the effect of face-view on ventrolateral prefrontal neurons by testing cells with auditory and visual stimuli, including a set of human and monkey faces rotated through 0°, 30°, 60°, 90°, and −30°. Prefrontal neurons responded selectively to the identity of the face presented (human or monkey), to the specific view of the face/head, or to both identity and face-view. Neurons affected by the identity of the face most often showed an increase in firing in the second part of the stimulus period. Neurons selective for face-view typically preferred forward face-view stimuli (0° and 30° rotation). Neurons selective for the forward face-view were also auditory responsive, whereas neurons that preferred other views or were unselective were not. Our analysis showed that the forward human face (0°) was decoded best and contained the most information relative to other face-views. Our findings confirm a role for VLPFC in the processing and integration of face and vocalization information, and add to the growing body of evidence that the primate ventrolateral prefrontal cortex plays a prominent role in social communication and is an important model for understanding the cellular mechanisms of communication.
PMID:21605632

  13. The Motivational Salience of Faces Is Related to Both Their Valence and Dominance.

    PubMed

    Wang, Hongyi; Hahn, Amanda C; DeBruine, Lisa M; Jones, Benedict C

    2016-01-01

    Both behavioral and neural measures of the motivational salience of faces are positively correlated with their physical attractiveness. Whether physical characteristics other than attractiveness contribute to the motivational salience of faces is not known, however. Research with male macaques recently showed that more dominant macaques' faces hold greater motivational salience. Here we investigated whether dominance also contributes to the motivational salience of faces in human participants. Principal component analysis of third-party ratings of faces for multiple traits revealed two orthogonal components. The first component ("valence") was highly correlated with rated trustworthiness and attractiveness. The second component ("dominance") was highly correlated with rated dominance and aggressiveness. Importantly, both components were positively and independently related to the motivational salience of faces, as assessed from responses on a standard key-press task. These results show that at least two dissociable components underpin the motivational salience of faces in humans and present new evidence for similarities in how humans and non-human primates respond to facial cues of dominance.
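    The two orthogonal components described above come from a standard principal component analysis of third-party trait ratings. A minimal PCA sketch via SVD on z-scored ratings follows; the rating matrix here is random placeholder data, not the study's, and the column labels are assumptions:

```python
import numpy as np

def trait_components(ratings, n_components=2):
    """PCA of a faces-by-traits rating matrix.

    Ratings are z-scored per trait, then decomposed with SVD; the rows of
    vt are the principal axes, and projecting onto them gives each face's
    component scores.  Scores on distinct components are uncorrelated.
    """
    z = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0)
    _, _, vt = np.linalg.svd(z, full_matrices=False)
    return z @ vt[:n_components].T

# Placeholder ratings: columns might be trustworthiness, attractiveness,
# dominance, aggressiveness for 50 faces
rng = np.random.default_rng(0)
ratings = rng.normal(size=(50, 4))
scores = trait_components(ratings)  # shape (50, 2)
```

    Each face then gets one score per component ("valence" and "dominance" in the study), and those scores can be regressed against key-press measures of motivational salience.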

  14. Exploring relationship between face-to-face interaction and team performance using wearable sensor badges.

    PubMed

    Watanabe, Jun-ichiro; Ishibashi, Nozomu; Yano, Kazuo

    2014-01-01

    Quantitative analyses of human-generated data collected in various fields have uncovered many patterns of complex human behaviors. However, thus far the quantitative evaluation of the relationship between the physical behaviors of employees and their performance has been inadequate. Here, we present findings demonstrating the significant relationship between the physical behaviors of employees and their performance via experiments we conducted in inbound call centers while the employees wore sensor badges. There were two main findings. First, we found that face-to-face interaction among telecommunicators and the frequency of their bodily movements caused by the face-to-face interaction had a significant correlation with the entire call center performance, which we measured as "Calls per Hour." Second, our trial to activate face-to-face interaction on the basis of data collected by the wearable sensor badges the employees wore significantly increased their performance. These results demonstrate quantitatively that human-human interaction in the physical world plays an important role in team performance.
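    The reported relationship between interaction measures and "Calls per Hour" is an ordinary correlation between two per-unit measurements. A plain Pearson correlation can be sketched as follows (the pairing of per-center interaction scores with performance scores is an assumption about the analysis, not taken from the paper):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical data: face-to-face interaction frequency vs Calls per Hour
interaction = [12, 18, 7, 25, 15]
calls_per_hour = [4.1, 5.0, 3.2, 6.3, 4.6]
r = pearson_r(interaction, calls_per_hour)
```

    A positive r of this kind is what the authors describe as the significant correlation between interaction and call-center performance.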

  15. Exploring Relationship between Face-to-Face Interaction and Team Performance Using Wearable Sensor Badges

    PubMed Central

    Watanabe, Jun-ichiro; Ishibashi, Nozomu; Yano, Kazuo

    2014-01-01

    Quantitative analyses of human-generated data collected in various fields have uncovered many patterns of complex human behaviors. However, thus far the quantitative evaluation of the relationship between the physical behaviors of employees and their performance has been inadequate. Here, we present findings demonstrating the significant relationship between the physical behaviors of employees and their performance via experiments we conducted in inbound call centers while the employees wore sensor badges. There were two main findings. First, we found that face-to-face interaction among telecommunicators and the frequency of their bodily movements caused by the face-to-face interaction had a significant correlation with the entire call center performance, which we measured as “Calls per Hour.” Second, our trial to activate face-to-face interaction on the basis of data collected by the wearable sensor badges the employees wore significantly increased their performance. These results demonstrate quantitatively that human-human interaction in the physical world plays an important role in team performance. PMID:25501748

  16. See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction.

    PubMed

    Xu, Tian Linger; Zhang, Hui; Yu, Chen

    2016-05-01

    We focus on a fundamental looking behavior in human-robot interactions - gazing at each other's face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user's face as a response to the human's gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot's gaze toward the human partner's face in real time and then analyzed the human's gaze behavior as a response to the robot's gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot's face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained.

  17. Genetic specificity of face recognition.

    PubMed

    Shakeshaft, Nicholas G; Plomin, Robert

    2015-10-13

    Specific cognitive abilities in diverse domains are typically found to be highly heritable and substantially correlated with general cognitive ability (g), both phenotypically and genetically. Recent twin studies have found the ability to memorize and recognize faces to be an exception, being similarly heritable but phenotypically substantially uncorrelated both with g and with general object recognition. However, the genetic relationships between face recognition and other abilities (the extent to which they share a common genetic etiology) cannot be determined from phenotypic associations. In this, to our knowledge, first study of the genetic associations between face recognition and other domains, 2,000 18- and 19-year-old United Kingdom twins completed tests assessing their face recognition, object recognition, and general cognitive abilities. Results confirmed the substantial heritability of face recognition (61%), and multivariate genetic analyses found that most of this genetic influence is unique and not shared with other cognitive abilities.
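    Twin-study heritability estimates like the 61% above are often introduced through Falconer's formula, h² = 2(r_MZ − r_DZ), which contrasts identical- and fraternal-twin correlations. The study itself used multivariate genetic modeling, so this is only a simplified illustration; the twin correlations below are invented to land near the reported figure:

```python
def falconer_h2(r_mz, r_dz):
    """Falconer's estimate of heritability from twin correlations.

    MZ twins share ~100% of segregating genes and DZ twins ~50%, so twice
    the difference in their phenotypic correlations approximates h^2.
    """
    return 2.0 * (r_mz - r_dz)

# Hypothetical correlations chosen to yield roughly 61% heritability
h2 = falconer_h2(0.70, 0.395)
```

    The multivariate extension used in the paper additionally partitions genetic variance into portions shared with, and unique from, other abilities, which is how the "unique genetic influence" conclusion is reached.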

  18. Genetic specificity of face recognition

    PubMed Central

    Shakeshaft, Nicholas G.; Plomin, Robert

    2015-01-01

    Specific cognitive abilities in diverse domains are typically found to be highly heritable and substantially correlated with general cognitive ability (g), both phenotypically and genetically. Recent twin studies have found the ability to memorize and recognize faces to be an exception, being similarly heritable but phenotypically substantially uncorrelated both with g and with general object recognition. However, the genetic relationships between face recognition and other abilities (the extent to which they share a common genetic etiology) cannot be determined from phenotypic associations. In this, to our knowledge, first study of the genetic associations between face recognition and other domains, 2,000 18- and 19-year-old United Kingdom twins completed tests assessing their face recognition, object recognition, and general cognitive abilities. Results confirmed the substantial heritability of face recognition (61%), and multivariate genetic analyses found that most of this genetic influence is unique and not shared with other cognitive abilities. PMID:26417086

  19. Social Cognition in Williams Syndrome: Face Tuning

    PubMed Central

    Pavlova, Marina A.; Heiz, Julie; Sokolov, Alexander N.; Barisnikov, Koviljka

    2016-01-01

    Many neurological, neurodevelopmental, neuropsychiatric, and psychosomatic disorders are characterized by impairments in visual social cognition, body language reading, and facial assessment of a social counterpart. Yet a wealth of research indicates that individuals with Williams syndrome exhibit remarkable concern for social stimuli and face fascination. Here individuals with Williams syndrome were presented with a set of Face-n-Food images composed of food ingredients and resembling a face to different degrees (slightly bordering on the Giuseppe Arcimboldo style). The primary advantage of these images is that single components do not explicitly trigger face-specific processing, whereas in face images commonly used for investigating face perception (such as photographs or depictions), the mere occurrence of typical cues already implicates face presence. In a spontaneous recognition task, participants were shown a set of images in a predetermined order from the least to most resembling a face. Strikingly, individuals with Williams syndrome exhibited profound deficits in recognizing the Face-n-Food images as a face: they did not report seeing a face in images that typically developing controls effortlessly recognized as a face, and gave fewer face responses overall. This suggests atypical face tuning in Williams syndrome. The outcome is discussed in light of the general pattern of social cognition in Williams syndrome and the brain mechanisms underpinning face processing. PMID:27531986

  20. Social Cognition in Williams Syndrome: Face Tuning.

    PubMed

    Pavlova, Marina A; Heiz, Julie; Sokolov, Alexander N; Barisnikov, Koviljka

    2016-01-01

    Many neurological, neurodevelopmental, neuropsychiatric, and psychosomatic disorders are characterized by impairments in visual social cognition, body language reading, and facial assessment of a social counterpart. Yet a wealth of research indicates that individuals with Williams syndrome exhibit remarkable concern for social stimuli and face fascination. Here individuals with Williams syndrome were presented with a set of Face-n-Food images composed of food ingredients and resembling a face to different degrees (slightly bordering on the Giuseppe Arcimboldo style). The primary advantage of these images is that single components do not explicitly trigger face-specific processing, whereas in face images commonly used for investigating face perception (such as photographs or depictions), the mere occurrence of typical cues already implicates face presence. In a spontaneous recognition task, participants were shown a set of images in a predetermined order from the least to most resembling a face. Strikingly, individuals with Williams syndrome exhibited profound deficits in recognizing the Face-n-Food images as a face: they did not report seeing a face in images that typically developing controls effortlessly recognized as a face, and gave fewer face responses overall. This suggests atypical face tuning in Williams syndrome. The outcome is discussed in light of the general pattern of social cognition in Williams syndrome and the brain mechanisms underpinning face processing.

  1. What Drives Students to Complete Online Courses? What Drives Faculty to Teach Online? Validating a Measure of Motivation Orientation in University Students and Faculty

    ERIC Educational Resources Information Center

    Johnson, Ruth; Stewart, Cindy; Bachman, Christine

    2015-01-01

    Although online student enrollment has shown double digit growth for almost a decade and academic leaders recognize that online education is necessary for enrollment growth, little is known about what motivates students to enroll in or faculty to teach face-to-face (F2F) versus online courses. The psychometric properties of a motivation scale were…

  2. A specialized face-processing model inspired by the organization of monkey face patches explains several face-specific phenomena observed in humans.

    PubMed

    Farzmahdi, Amirhossein; Rajaei, Karim; Ghodrati, Masoud; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi

    2016-04-26

    Converging reports indicate that face images are processed through specialized neural networks in the brain -i.e. face patches in monkeys and the fusiform face area (FFA) in humans. These studies were designed to find out how faces are processed in the visual system compared to other objects. Yet the underlying mechanism of face processing is not completely revealed. Here, we show that a hierarchical computational model, inspired by electrophysiological evidence on face processing in primates, is able to generate representational properties similar to those observed in monkey face patches (posterior, middle and anterior patches). Since the most important goal of sensory neuroscience is linking neural responses with behavioral outputs, we test whether the proposed model, which is designed to account for neural responses in monkey face patches, is also able to predict well-documented behavioral face phenomena observed in humans. We show that the proposed model reproduces several cognitive face effects, such as the composite face effect and canonical face views. Our model provides insights about the underlying computations that transfer visual information from posterior to anterior face patches.

  3. A specialized face-processing model inspired by the organization of monkey face patches explains several face-specific phenomena observed in humans

    PubMed Central

    Farzmahdi, Amirhossein; Rajaei, Karim; Ghodrati, Masoud; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi

    2016-01-01

    Converging reports indicate that face images are processed through specialized neural networks in the brain –i.e. face patches in monkeys and the fusiform face area (FFA) in humans. These studies were designed to find out how faces are processed in the visual system compared to other objects. Yet the underlying mechanism of face processing is not completely revealed. Here, we show that a hierarchical computational model, inspired by electrophysiological evidence on face processing in primates, is able to generate representational properties similar to those observed in monkey face patches (posterior, middle and anterior patches). Since the most important goal of sensory neuroscience is linking neural responses with behavioral outputs, we test whether the proposed model, which is designed to account for neural responses in monkey face patches, is also able to predict well-documented behavioral face phenomena observed in humans. We show that the proposed model reproduces several cognitive face effects, such as the composite face effect and canonical face views. Our model provides insights about the underlying computations that transfer visual information from posterior to anterior face patches. PMID:27113635

  4. Crystal structure of extracellular domain of human lectin-like transcript 1 (LLT1), the ligand for natural killer receptor-P1A.

    PubMed

    Kita, Shunsuke; Matsubara, Haruki; Kasai, Yoshiyuki; Tamaoki, Takaharu; Okabe, Yuki; Fukuhara, Hideo; Kamishikiryo, Jun; Krayukhina, Elena; Uchiyama, Susumu; Ose, Toyoyuki; Kuroki, Kimiko; Maenaka, Katsumi

    2015-06-01

    Emerging evidence has revealed the pivotal roles of C-type lectin-like receptors (CTLRs) in the regulation of a wide range of immune responses. Human natural killer cell receptor-P1A (NKRP1A) is one of the CTLRs and recognizes another CTLR, lectin-like transcript 1 (LLT1), on target cells to control NK, NKT and Th17 cells. The structural basis for the NKRP1A-LLT1 interaction has been only partially understood. Here, we report the crystal structure of the ectodomain of LLT1. The plausible receptor-binding face of the C-type lectin-like domain is flat and forms an extended β-sheet. The residues of this face are relatively conserved with another CTLR, keratinocyte-associated C-type lectin, which binds to the CTLR member NKp65. A LLT1-NKRP1A complex model, prepared using the crystal structures of LLT1 and the keratinocyte-associated C-type lectin-NKp65 complex, reasonably satisfies the charge consistency and the conformational complementarity needed to explain a previous mutagenesis study. Furthermore, crystal packing and analytical ultracentrifugation revealed dimer formation, which supports the complex model. Our results provide structural insights for understanding the binding modes and signal transduction mechanisms, which are likely to be conserved in the CTLR family, and for further rational drug design towards regulating the LLT1 function. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Anterior temporal face patches: a meta-analysis and empirical study

    PubMed Central

    Von Der Heide, Rebecca J.; Skipper, Laura M.; Olson, Ingrid R.

    2013-01-01

    Evidence suggests the anterior temporal lobe (ATL) plays an important role in person identification and memory. In humans, neuroimaging studies of person memory report consistent activations in the ATL to famous and personally familiar faces and studies of patients report resection or damage of the ATL causes an associative prosopagnosia in which face perception is intact but face memory is compromised. In addition, high-resolution fMRI studies of non-human primates and electrophysiological studies of humans also suggest regions of the ventral ATL are sensitive to novel faces. The current study extends previous findings by investigating whether similar subregions in the dorsal, ventral, lateral, or polar aspects of the ATL are sensitive to personally familiar, famous, and novel faces. We present the results of two studies of person memory: a meta-analysis of existing fMRI studies and an empirical fMRI study using optimized imaging parameters. Both studies showed left-lateralized ATL activations to familiar individuals while novel faces activated the right ATL. Activations to famous faces were quite ventral, similar to what has been reported in previous high-resolution fMRI studies of non-human primates. These findings suggest that face memory-sensitive patches in the human ATL are in the ventral/polar ATL. PMID:23378834

  6. Place recognition and heading retrieval are mediated by dissociable cognitive systems in mice.

    PubMed

    Julian, Joshua B; Keinath, Alexander T; Muzzio, Isabel A; Epstein, Russell A

    2015-05-19

    A lost navigator must identify its current location and recover its facing direction to restore its bearings. We tested the idea that these two tasks--place recognition and heading retrieval--might be mediated by distinct cognitive systems in mice. Previous work has shown that numerous species, including young children and rodents, use the geometric shape of local space to regain their sense of direction after disorientation, often ignoring nongeometric cues even when they are informative. Notably, these experiments have almost always been performed in single-chamber environments in which there is no ambiguity about place identity. We examined the navigational behavior of mice in a two-chamber paradigm in which animals had to both recognize the chamber in which they were located (place recognition) and recover their facing direction within that chamber (heading retrieval). In two experiments, we found that mice used nongeometric features for place recognition, but simultaneously failed to use these same features for heading retrieval, instead relying exclusively on spatial geometry. These results suggest the existence of separate systems for place recognition and heading retrieval in mice that are differentially sensitive to geometric and nongeometric cues. We speculate that a similar cognitive architecture may underlie human navigational behavior.

  7. Reading sadness beyond human faces.

    PubMed

    Chammat, Mariam; Foucher, Aurélie; Nadel, Jacqueline; Dubal, Stéphanie

    2010-08-12

    Human faces are the main emotion displayers. Knowing that emotional compared to neutral stimuli elicit enlarged ERP components at the perceptual level, one may wonder whether this has led to an emotional facilitation bias toward human faces. To contribute to this question, we measured the P1 and N170 components of the ERPs elicited by human facial stimuli compared to artificial ones, namely non-humanoid robots. Fifteen healthy young adults were shown sad and neutral, upright and inverted expressions of human versus robotic displays. An increase in P1 amplitude in response to sad displays compared to neutral ones evidenced an early perceptual amplification for sadness information. P1 and N170 latencies were delayed in response to robotic stimuli compared to human ones, while N170 amplitude was not affected by the display type. Inverted human stimuli elicited a longer P1 latency and a larger N170 amplitude, while inverted robotic stimuli did not. As a whole, our results show that emotion facilitation is not biased to human faces but rather extends to non-human displays, thus suggesting our capacity to read emotion beyond faces. Copyright 2010 Elsevier B.V. All rights reserved.

  8. Plant Immunity

    USDA-ARS?s Scientific Manuscript database

    Plants are faced with defending themselves against a multitude of pathogens, including bacteria, fungi, viruses, nematodes, etc. Immunity is multi-layered and complex. Plants can induce defenses when they recognize small peptides, proteins or double-stranded RNA associated with pathogens. Recognitio...

  9. [Influenza virus receptors in the human airway].

    PubMed

    Shinya, Kyoko; Kawaoka, Yoshihiro

    2006-06-01

    Avian influenza A (H5N1) virus infections have resulted in more than 100 human deaths; yet, human-to-human transmission is rare. We demonstrated that the epithelial cells in the upper respiratory tract of humans mainly possess sialic acid linked to galactose by alpha 2,6 linkages (SA alpha 2,6Gal), a molecule preferentially recognized by human viruses. However, many cells in the respiratory bronchioles and alveoli possess SA alpha 2,3Gal, which is preferentially recognized by avian viruses. These facts are consistent with the observation that H5N1 viruses can be directly transmitted from birds to humans and cause serious lower respiratory tract damage in humans. Furthermore, this anatomical difference in receptor prevalence may explain why the spread of H5N1 viruses among humans is limited. However, since some H5N1 viruses isolated from humans recognize human virus receptors, additional changes must be required for these viruses to acquire the ability for efficient human-to-human transmission.

  10. International Guidelines on Human Rights and Drug Control

    PubMed Central

    Pol, Luciana

    2017-01-01

    Abstract Discrimination and inequality shape women’s experiences of drug use and in the drug trade and the impact of drug control efforts on them, with disproportionate burdens faced by poor and otherwise marginalized women. In recent years, UN member states and UN drug control and human rights entities have recognized this issue and made commitments to integrate a ‘gender perspective’ into drug control policies, with ‘gender’ limited to those conventionally deemed women. But the concept of gender in international law is broader, rooted in socially constructed and culturally determined norms and expectations around gender roles, sex, and sexuality. Also, drug control policies often fail to meaningfully address the specific needs and circumstances of women (inclusively defined), leaving them at risk of recurrent violations of their rights in the context of drugs. This article explores what it means to ‘mainstream’ this narrower version of gender into drug control efforts, using as examples various women’s experiences as people who use drugs, in the drug trade, and in the criminal justice system. It points to international guidelines on human rights and drug control as an important tool to ensure attention to women’s rights in drug control policy design and implementation. PMID:28630557

  11. International Guidelines on Human Rights and Drug Control: A Tool for Securing Women's Rights in Drug Control Policy.

    PubMed

    Schleifer, Rebecca; Pol, Luciana

    2017-06-01

    Discrimination and inequality shape women's experiences of drug use and in the drug trade and the impact of drug control efforts on them, with disproportionate burdens faced by poor and otherwise marginalized women. In recent years, UN member states and UN drug control and human rights entities have recognized this issue and made commitments to integrate a 'gender perspective' into drug control policies, with 'gender' limited to those conventionally deemed women. But the concept of gender in international law is broader, rooted in socially constructed and culturally determined norms and expectations around gender roles, sex, and sexuality. Also, drug control policies often fail to meaningfully address the specific needs and circumstances of women (inclusively defined), leaving them at risk of recurrent violations of their rights in the context of drugs. This article explores what it means to 'mainstream' this narrower version of gender into drug control efforts, using as examples various women's experiences as people who use drugs, in the drug trade, and in the criminal justice system. It points to international guidelines on human rights and drug control as an important tool to ensure attention to women's rights in drug control policy design and implementation.

  12. [Intellectual honesty in abortion problems].

    PubMed

    Werner, M

    1991-04-03

    A pastor comments on the recent ruling by the Swedish Department of Health and Social Affairs that the remains of an abortion should be "treated respectfully"--cremated or buried in a cemetery. This decision results from recognition on the part of the government and the medical establishment that a growing segment of public opinion agrees that the fetus is a human being. The new rules mean, though, that a fetus becomes human only upon its death. Logically, an abortion that is respectfully performed ought not to be performed at all. This is the fundamental problem with abortion, and no amount of arbitrary boundary drawing at various levels of supposed capability for survival at the 12th, the 18th, or the 24th week of pregnancy will alter the fact. It is necessary to face the problem with complete intellectual honesty and say that a fetus is a human being no matter what its age, but that voluntary abortion is also a social necessity. Only then can society find another abortion policy, one that recognizes that late abortions are hard to distinguish from births. The Swedish abortion policy must reflect honest facts, rather than etiological legends, preconceived ideas for which arguments must be found afterward.

  13. Caucasian Infants Scan Own- and Other-Race Faces Differently

    PubMed Central

    Wheeler, Andrea; Anzures, Gizelle; Quinn, Paul C.; Pascalis, Olivier; Omrin, Danielle S.; Lee, Kang

    2011-01-01

    Young infants are known to prefer own-race faces to other-race faces and to recognize own-race faces better than other-race faces. However, it is unclear whether infants also attend differently to different parts of own- and other-race faces, which may provide an important clue as to how and why the own-race face recognition advantage emerges so early. The present study used eye tracking methodology to investigate whether 6- to 10-month-old Caucasian infants (N = 37) have differential scanning patterns for dynamically displayed own- and other-race faces. We found that even though infants spent a similar amount of time looking at own- and other-race faces, with increasing age they looked increasingly longer at the eyes of own-race faces and less at the mouths of own-race faces. These findings suggest experience-based tuning of the infant's face processing system to optimally process own-race faces that differ in physiognomy from other-race faces. In addition, the present results, taken together with recent own- and other-race eye tracking findings with infants and adults, provide strong support for an enculturation hypothesis that East Asians and Westerners may be socialized to scan faces differently due to each culture's conventions regarding mutual gaze during interpersonal communication. PMID:21533235

  14. Perceptual Learning: 12-Month-Olds' Discrimination of Monkey Faces

    ERIC Educational Resources Information Center

    Fair, Joseph; Flom, Ross; Jones, Jacob; Martin, Justin

    2012-01-01

    Six-month-olds reliably discriminate different monkey and human faces whereas 9-month-olds only discriminate different human faces. It is often falsely assumed that perceptual narrowing reflects a permanent change in perceptual abilities. In 3 experiments, ninety-six 12-month-olds' discrimination of unfamiliar monkey faces was examined. Following…

  15. The role of relational binding in item memory: evidence from face recognition in a case of developmental amnesia.

    PubMed

    Olsen, Rosanna K; Lee, Yunjo; Kube, Jana; Rosenbaum, R Shayna; Grady, Cheryl L; Moscovitch, Morris; Ryan, Jennifer D

    2015-04-01

    Current theories state that the hippocampus is responsible for the formation of memory representations regarding relations, whereas extrahippocampal cortical regions support representations for single items. However, findings of impaired item memory in hippocampal amnesics suggest a more nuanced role for the hippocampus in item memory. The hippocampus may be necessary when item elements need to be bound within and across episodes to form a lasting representation that can be used flexibly. The current investigation was designed to test this hypothesis in face recognition. H.C., an individual who developed with a compromised hippocampal system, and control participants incidentally studied individual faces that either varied in presentation viewpoint across study repetitions or remained in a fixed viewpoint across the study repetitions. Eye movements were recorded during encoding, and participants then completed a surprise recognition memory test. H.C. demonstrated altered face viewing during encoding. Although the overall number of fixations made by H.C. was not significantly different from that of controls, her viewing was directed primarily to the eye region. Critically, H.C. was significantly impaired in her ability to subsequently recognize faces studied from variable viewpoints, but demonstrated spared performance in recognizing faces she encoded from a fixed viewpoint, linking eye movement behavior during encoding to a hippocampal binding function. These findings suggest that a compromised hippocampal system disrupts the ability to bind item features within and across study repetitions, ultimately disrupting recognition when it requires access to flexible relational representations. Copyright © 2015 the authors.

  16. Recognizing the same face in different contexts: Testing within-person face recognition in typical development and in autism.

    PubMed

    Neil, Louise; Cappagli, Giulia; Karaminis, Themelis; Jenkins, Rob; Pellicano, Elizabeth

    2016-03-01

    Unfamiliar face recognition follows a particularly protracted developmental trajectory and is more likely to be atypical in children with autism than those without autism. There is a paucity of research, however, examining the ability to recognize the same face across multiple naturally varying images. Here, we investigated within-person face recognition in children with and without autism. In Experiment 1, typically developing 6- and 7-year-olds, 8- and 9-year-olds, 10- and 11-year-olds, 12- to 14-year-olds, and adults were given 40 grayscale photographs of two distinct male identities (20 of each face taken at different ages, from different angles, and in different lighting conditions) and were asked to sort them by identity. Children mistook images of the same person as images of different people, subdividing each individual into many perceived identities. Younger children divided images into more perceived identities than adults and also made more misidentification errors (placing two different identities together in the same group) than older children and adults. In Experiment 2, we used the same procedure with 32 cognitively able children with autism. Autistic children reported a similar number of identities and made similar numbers of misidentification errors to a group of typical children of similar age and ability. Fine-grained analysis using matrices revealed marginal group differences in overall performance. We suggest that the immature performance in typical and autistic children could arise from problems extracting the perceptual commonalities from different images of the same person and building stable representations of facial identity. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  17. Differences in Facial Emotion Recognition between First Episode Psychosis, Borderline Personality Disorder and Healthy Controls.

    PubMed

    Catalan, Ana; Gonzalez de Artaza, Maider; Bustamante, Sonia; Orgaz, Pablo; Osa, Luis; Angosto, Virxinia; Valverde, Cristina; Bilbao, Amaia; Madrazo, Arantza; van Os, Jim; Gonzalez-Torres, Miguel Angel

    2016-01-01

    Facial emotion recognition (FER) is essential to guide social functioning and behaviour in interpersonal communication. FER may be altered in severe mental illness, such as psychosis, and in borderline personality disorder patients. However, it is unclear whether these FER alterations are specifically related to psychosis. Awareness of FER alterations may be useful in clinical settings to improve treatment strategies. The aim of our study was to examine FER in patients with severe mental disorder and its relation to psychotic symptomatology. Socio-demographic and clinical variables were collected. Alterations in emotion recognition were assessed in 3 groups: patients with first episode psychosis (FEP) (n = 64), borderline personality disorder patients (BPD) (n = 37) and healthy controls (n = 137), using the Degraded Facial Affect Recognition Task. The Positive and Negative Syndrome Scale, Structured Interview for Schizotypy Revised and Community Assessment of Psychic Experiences scales were used to assess positive psychotic symptoms. WAIS III subtests were used to assess IQ. Kruskal-Wallis analysis showed a significant difference between FEP patients, BPD patients and controls in the recognition of neutral faces, and between FEP patients and controls in angry face recognition. No significant differences were found between groups in the fear or happy conditions. There was a significant difference between groups in the attribution of negative emotion to happy faces: BPD and FEP groups had a much higher tendency to recognize happy faces as negative. There was no association with the different symptom domains in either group. FEP and BPD patients have more frequent problems than controls in recognizing neutral faces. Moreover, patients tend to over-report negative emotions when recognizing happy faces. Although no relation between psychotic symptoms and FER alterations was found, these deficits could contribute to patients' misinterpretations in daily life.

  18. The neural correlates of affect reading: an fMRI study on faces and gestures.

    PubMed

    Prochnow, D; Höing, B; Kleiser, R; Lindenberg, R; Wittsack, H-J; Schäfer, R; Franz, M; Seitz, R J

    2013-01-15

    As complex social beings, people communicate not only through spoken language but also via nonverbal behavior. In social face-to-face situations, people readily read the affect and intentions of others from their facial expressions and gestures. Importantly, the addressee also has to discriminate the meanings of the observed communicative motor acts in order to react to them appropriately. In this functional magnetic resonance imaging study, 15 healthy non-alexithymic right-handers observed video clips showing emotional facial expressions and gestures evolving dynamically from a neutral to a fully developed expression. We aimed at disentangling the cerebral circuits related to the observation of the incomplete, and the subsequent discrimination of the evolved, bodily expressions of emotion that are typical for everyday social situations. We show that the inferior temporal gyrus and the inferior and dorsal medial frontal cortex in both cerebral hemispheres were activated early in recognizing faces and gestures, while their subsequent discrimination involved the right dorsolateral frontal cortex. Interregional correlations showed that the involved regions constituted a widespread circuit allowing for a formal analysis of the seen expressions, their empathic processing and the subjective interpretation of their contextual meanings. Right-left comparisons revealed greater activation of the right dorsal medial frontal cortex and the inferior temporal gyrus, which supports the notion of a right-hemispheric dominance for processing affective body expressions. These novel data provide a neurobiological basis for the intuitive understanding of other people, which is relevant for socially appropriate decisions and intact social functioning. Copyright © 2012 Elsevier B.V. All rights reserved.

  19. Lateralization for dynamic facial expressions in human superior temporal sulcus.

    PubMed

    De Winter, François-Laurent; Zhu, Qi; Van den Stock, Jan; Nelissen, Koen; Peeters, Ronald; de Gelder, Beatrice; Vanduffel, Wim; Vandenbulcke, Mathieu

    2015-02-01

    Most face processing studies in humans show stronger activation in the right than in the left hemisphere. This evidence is largely based on studies with static stimuli focusing on the fusiform face area (FFA); hence, the pattern of lateralization for dynamic faces is less clear. Furthermore, it is unclear whether this property is common to human and non-human primates because of predisposing processing strategies in the right hemisphere, or whether left-sided specialization for language in humans could be the driving force behind the phenomenon. We aimed to address both issues by studying lateralization for dynamic facial expressions in monkeys and humans. We conducted an event-related fMRI experiment in three macaques and twenty right-handed humans, presenting human and monkey dynamic facial expressions (chewing and fear) as well as scrambled versions to both species. We studied lateralization in independently defined face-responsive and face-selective regions by calculating a weighted lateralization index (LIwm) using a bootstrapping method. To examine whether lateralization in humans is related to language, we performed a separate fMRI experiment in ten human volunteers that included a 'speech' expression (a one-syllable non-word) and its scrambled version. Within both face-responsive and face-selective regions, we found consistent lateralization for dynamic faces (chewing and fear) versus scrambled versions in the right human posterior superior temporal sulcus (pSTS), but not in the FFA or ventral temporal cortex. Conversely, monkeys showed no consistent pattern of lateralization for dynamic facial expressions. Finally, LIwms based on the contrast between different types of dynamic facial expressions (relative to scrambled versions) revealed left-sided lateralization in human pSTS for speech-related expressions compared with chewing and emotional expressions. To conclude, we found consistent laterality effects in human posterior STS but not in the visual cortex of monkeys. Based on these results, it is tempting to speculate that lateralization for dynamic face processing in humans may be driven by left-hemispheric language specialization, which may not yet have been present in the common ancestor of humans and macaque monkeys. Copyright © 2014 Elsevier Inc. All rights reserved.
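    The weighted lateralization index mentioned above can be illustrated with a small sketch. This is not the authors' actual LIwm computation (their weighting and bootstrap scheme are not detailed in the abstract); it assumes the conventional (R - L)/(R + L) form, with positive values indicating right-hemisphere dominance, and a naive bootstrap over per-voxel activation values.

    ```python
    import random

    def lateralization_index(right, left):
        """Standard LI: positive values indicate right-hemisphere dominance."""
        return (right - left) / (right + left)

    def bootstrap_li(right_vals, left_vals, n_boot=1000, seed=0):
        """Resample per-voxel activations with replacement and return
        the mean LI across bootstrap samples (a crude stability estimate)."""
        rng = random.Random(seed)
        lis = []
        for _ in range(n_boot):
            r = [rng.choice(right_vals) for _ in right_vals]
            l = [rng.choice(left_vals) for _ in left_vals]
            lis.append(lateralization_index(sum(r) / len(r), sum(l) / len(l)))
        return sum(lis) / len(lis)

    print(lateralization_index(3.0, 1.0))  # 0.5 -> right-lateralized
    ```

    The index is bounded in [-1, 1], so bootstrapped values can be compared across regions of very different overall activation strength, which is presumably why a weighted, bootstrapped variant was used in the study.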

  20. Deficits in Cross-Race Face Learning: Insights From Eye Movements and Pupillometry

    PubMed Central

    Goldinger, Stephen D.; He, Yi; Papesh, Megan H.

    2010-01-01

    The own-race bias (ORB) is a well-known finding wherein people are better able to recognize and discriminate own-race faces, relative to cross-race faces. In 2 experiments, participants viewed Asian and Caucasian faces, in preparation for recognition memory tests, while their eye movements and pupil diameters were continuously monitored. In Experiment 1 (with Caucasian participants), systematic differences emerged in both measures as a function of depicted race: While encoding cross-race faces, participants made fewer (and longer) fixations, they preferentially attended to different sets of features, and their pupils were more dilated, all relative to own-race faces. Also, in both measures, a pattern emerged wherein some participants reduced their apparent encoding effort to cross-race faces over trials. In Experiment 2 (with Asian participants), the authors observed the same patterns, although the ORB favored the opposite set of faces. Taken together, the results suggest that the ORB appears during initial perceptual encoding. Relative to own-race face encoding, cross-race encoding requires greater effort, which may reduce vigilance in some participants. PMID:19686008

  1. Color constancy in 3D-2D face recognition

    NASA Astrophysics Data System (ADS)

    Meyer, Manuel; Riess, Christian; Angelopoulou, Elli; Evangelopoulos, Georgios; Kakadiaris, Ioannis A.

    2013-05-01

    The face is one of the most popular biometric modalities. Up to now, however, color has rarely been actively used in face recognition. Yet it is well known that when a person recognizes a face, color cues can become as important as shape, especially when combined with people's ability to identify the color of objects independent of variations in illuminant color. In this paper, we examine the feasibility and effect of explicitly embedding illuminant color information in face recognition systems. We empirically examine the theoretical maximum gain of adding known illuminant color to a 3D-2D face recognition system. We also investigate the impact of using computational color constancy methods to estimate the illuminant color, which is then incorporated into the face recognition framework. Our experiments show that under close-to-ideal illumination estimates, face recognition rates improve by 16%. When the illuminant color is estimated algorithmically, the improvement is approximately 5%. These results suggest that color constancy has a positive impact on face recognition, but that the accuracy of the illuminant color estimate has a considerable effect on its benefits.
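    As a concrete illustration of what "computational color constancy" involves, here is a minimal gray-world estimator. The abstract does not name the estimation algorithms the authors evaluated, so this is an assumed, standard method rather than the paper's implementation, and the pixel values are made up.

    ```python
    def gray_world_illuminant(pixels):
        """Gray-world assumption: the scene averages to gray, so the mean
        RGB of all pixels estimates the illuminant. Returned as a
        chromaticity vector (components sum to 1)."""
        n = len(pixels)
        mean = [sum(p[c] for p in pixels) / n for c in range(3)]
        total = sum(mean)
        return [m / total for m in mean]

    def correct_image(pixels, illum):
        """Von Kries-style correction: divide each channel by the estimated
        illuminant, scaled so a neutral illuminant (1/3, 1/3, 1/3) leaves
        the image unchanged."""
        return [[p[c] / (3.0 * illum[c]) for c in range(3)] for p in pixels]

    # A reddish cast: the estimator's red component dominates.
    reddish = [[200, 100, 100], [180, 90, 110]]
    illum = gray_world_illuminant(reddish)
    print(illum)
    ```

    In a pipeline like the one described, the estimated illuminant (or the corrected image) would be fed to the 3D-2D matcher; the reported 16% vs. 5% gap reflects how much estimation error, relative to the known illuminant, erodes the benefit.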

  2. The Influence of Shyness on the Scanning of Own- and Other-Race Faces in Adults

    PubMed Central

    Wang, Qiandong; Hu, Chao; Short, Lindsey A.; Fu, Genyue

    2012-01-01

    The current study explored the relationship between shyness and face scanning patterns for own- and other-race faces in adults. Participants completed a shyness inventory and a face recognition task in which their eye movements were recorded by a Tobii 1750 eye tracker. We found that: (1) Participants’ shyness scores were negatively correlated with the fixation proportion on the eyes, regardless of the race of face they viewed. The shyer the participants were, the less time they spent fixating on the eye region; (2) High shyness participants tended to fixate significantly more than low shyness participants on the regions just below the eyes as if to avoid direct eye contact; (3) When participants were recognizing own-race faces, their shyness scores were positively correlated with the normalized criterion. The shyer they were, the more apt they were to judge the faces as novel, regardless of whether they were target or foil faces. The present results support an avoidance hypothesis of shyness, suggesting that shy individuals tend to avoid directly fixating on others’ eyes, regardless of face race. PMID:23284933

  3. Is attentional prioritisation of infant faces unique in humans?: Comparative demonstrations by modified dot-probe task in monkeys.

    PubMed

    Koda, Hiroki; Sato, Anna; Kato, Akemi

    2013-09-01

    Humans innately perceive infantile features as cute. The ethologist Konrad Lorenz proposed that the infantile features of mammals and birds, known as the baby schema (kindchenschema), motivate caretaking behaviour. As biologically relevant stimuli, newborns are likely to be processed specially in terms of visual attention, perception, and cognition. Recent studies with human participants have demonstrated visual attentional prioritisation of newborn faces (i.e., newborn faces capture visual attention). Although characteristics equivalent to those found in the faces of human infants are present in nonhuman primates, attentional capture by newborn faces has not been tested in nonhuman primates. We examined whether conspecific newborn faces captured the visual attention of two Japanese monkeys using a target-detection task based on the dot-probe tasks commonly used in human visual attention studies. Although visual cues enhanced target detection in the subject monkeys, our results, unlike those for humans, showed no evidence of attentional prioritisation of newborn faces by monkeys. Our demonstrations establish the validity of the dot-probe task for visual attention studies in monkeys and propose a novel approach to bridging the gap between human and nonhuman primate social cognition research. The findings suggest that attentional capture by newborn faces is not shared by macaques, but it is unclear whether nursing experience influences their perception and recognition of infantile appraisal stimuli. Additional comparative studies are needed to reveal the evolutionary origins of baby-schema perception and recognition. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Sex differences in social cognition: The case of face processing.

    PubMed

    Proverbio, Alice Mado

    2017-01-02

    Several studies have demonstrated that women show greater interest in social information and a more empathic attitude than men. This article reviews studies on sex differences in the brain, with particular reference to how males and females process faces and facial expressions, social interactions, the pain of others, infant faces, faces in things (the pareidolia phenomenon), opposite-sex faces, humans vs. landscapes, incongruent behavior, motor actions, biological motion, erotic pictures, and emotional information. Sex differences in oxytocin-based attachment response and emotional memory are also discussed. In addition, we investigated how 400 different human faces were evaluated for arousal and valence by a group of healthy male and female university students. Stimuli were carefully balanced for sensory and perceptual characteristics, age, facial expression, and sex. As a whole, women judged all human faces as more positive and more arousing than men did. Furthermore, they showed a preference for the faces of children and the elderly in the arousal evaluation. Regardless of facial aesthetics, age, or expression, women rated human faces higher than men did. The preference for opposite- vs. same-sex faces strongly interacted with facial age. Overall, both women and men exhibited differences in facial processing that can be interpreted in the light of evolutionary psychobiology. © 2016 Wiley Periodicals, Inc.

  5. Human Empathy, Personality and Experience Affect the Emotion Ratings of Dog and Human Facial Expressions.

    PubMed

    Kujala, Miiamaaria V; Somppi, Sanni; Jokela, Markus; Vainio, Outi; Parkkonen, Lauri

    2017-01-01

    Facial expressions are important for humans in communicating emotions to conspecifics and enhancing interpersonal understanding. Many muscles that produce facial expressions in humans are also found in domestic dogs, but little is known about how humans perceive dog facial expressions and which psychological factors influence people's perceptions. Here, we asked 34 observers to rate the valence, arousal, and the six basic emotions (happiness, sadness, surprise, disgust, fear, and anger/aggressiveness) of images of human and dog faces with Pleasant, Neutral and Threatening expressions. We investigated how the subjects' personality (the Big Five Inventory), empathy (Interpersonal Reactivity Index) and experience of dog behavior affected the ratings of dog and human faces. Ratings of both species followed similar general patterns: human subjects classified dog facial expressions from pleasant to threatening very similarly to human facial expressions. Subjects with higher emotional empathy evaluated Threatening faces of both species as more negative in valence and higher in anger/aggressiveness. More empathetic subjects also rated the happiness of Pleasant humans, but not dogs, higher, and they were quicker in their valence judgments of Pleasant human, Threatening human and Threatening dog faces. Experience with dogs correlated positively with ratings of Pleasant and Neutral dog faces. Personality also had a minor effect on the ratings of Pleasant and Neutral faces in both species. The results imply that humans perceive human and dog facial expressions in a similar manner, and that the perception of both species is influenced by psychological factors of the evaluators. Empathy, especially, affects both the speed and intensity of ratings of dogs' emotional facial expressions.

  6. Human Empathy, Personality and Experience Affect the Emotion Ratings of Dog and Human Facial Expressions

    PubMed Central

    Kujala, Miiamaaria V.; Somppi, Sanni; Jokela, Markus; Vainio, Outi; Parkkonen, Lauri

    2017-01-01

    Facial expressions are important for humans in communicating emotions to conspecifics and enhancing interpersonal understanding. Many muscles that produce facial expressions in humans are also found in domestic dogs, but little is known about how humans perceive dog facial expressions and which psychological factors influence people's perceptions. Here, we asked 34 observers to rate the valence, arousal, and the six basic emotions (happiness, sadness, surprise, disgust, fear, and anger/aggressiveness) of images of human and dog faces with Pleasant, Neutral and Threatening expressions. We investigated how the subjects' personality (the Big Five Inventory), empathy (Interpersonal Reactivity Index) and experience of dog behavior affected the ratings of dog and human faces. Ratings of both species followed similar general patterns: human subjects classified dog facial expressions from pleasant to threatening very similarly to human facial expressions. Subjects with higher emotional empathy evaluated Threatening faces of both species as more negative in valence and higher in anger/aggressiveness. More empathetic subjects also rated the happiness of Pleasant humans, but not dogs, higher, and they were quicker in their valence judgments of Pleasant human, Threatening human and Threatening dog faces. Experience with dogs correlated positively with ratings of Pleasant and Neutral dog faces. Personality also had a minor effect on the ratings of Pleasant and Neutral faces in both species. The results imply that humans perceive human and dog facial expressions in a similar manner, and that the perception of both species is influenced by psychological factors of the evaluators. Empathy, especially, affects both the speed and intensity of ratings of dogs' emotional facial expressions. PMID:28114335

  7. Hemispheric differences in recognizing upper and lower facial displays of emotion.

    PubMed

    Prodan, C I; Orbelo, D M; Testa, J A; Ross, E D

    2001-01-01

    To determine whether there are hemispheric differences in processing upper versus lower facial displays of emotion. Recent evidence suggests that there are two broad classes of emotions with differential hemispheric lateralization. Primary emotions (e.g., anger, fear) and associated displays are innate, are recognized across all cultures, and are thought to be modulated by the right hemisphere. Social emotions (e.g., guilt, jealousy) and associated "display rules" are learned during early child development, vary across cultures, and are thought to be modulated by the left hemisphere. Display rules are used to alter, suppress or enhance primary emotional displays for social purposes. During deceitful behaviors, a subject's true emotional state is often leaked through upper rather than lower facial displays, giving rise to facial blends of emotion. We hypothesized that upper facial displays are processed preferentially by the right hemisphere, as part of the primary emotional system, while lower facial displays are processed preferentially by the left hemisphere, as part of the social emotional system. Thirty strongly right-handed adult volunteers were tested tachistoscopically by randomly flashing facial displays of emotion to the right and left visual fields. The stimuli were line drawings of facial blends with different emotions displayed on the upper versus lower face. The subjects were tested under two conditions: 1) without instructions and 2) with instructions to attend to the upper face. Without instructions, the subjects robustly identified the emotion displayed on the lower face, regardless of visual field presentation. With instructions to attend to the upper face, subjects robustly identified the emotion displayed on the upper face for left visual field presentations; for right visual field presentations, they continued to identify the emotion displayed on the lower face, though to a lesser degree. Our results support the hypothesis that hemispheric differences exist in the ability to process upper versus lower facial displays of emotion. Attention appears to enhance the ability to explore these hemispheric differences under experimental conditions. Our data also support the recent observation that the right hemisphere has a greater ability than the left to recognize deceitful behaviors. This may be attributable to the different roles the hemispheres play in modulating social versus primary emotions and related behaviors.

  8. The other-race effect in children from a multiracial population: A cross-cultural comparison.

    PubMed

    Tham, Diana Su Yun; Bremner, J Gavin; Hay, Dennis

    2017-03-01

    The role of experience with other-race faces in the development of the other-race effect was investigated through a cross-cultural comparison between 5- and 6-year-olds and 13- and 14-year-olds raised in a monoracial (British White, n=83) population and a multiracial (Malaysian Chinese, n=68) population. British White children showed an other-race effect for three other-race face types (Chinese, Malay, and African Black) that was stable across age. Malaysian Chinese children showed a recognition deficit for the faces with which they had least experience (African Black) but a recognition advantage for faces of which they had direct or indirect experience. Interestingly, younger (Malaysian Chinese) children showed no other-race effect for female faces; they could recognize all female faces regardless of race. These findings point to the importance of early race and gender experience in reorganizing the face representation to accommodate changes in experience across development. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Memory for angry faces, impulsivity, and problematic behavior in adolescence.

    PubMed

    d'Acremont, Mathieu; Van der Linden, Martial

    2007-04-01

    Research has shown that cognitive processes like the attribution of hostile intention or angry emotion to others contribute to the development and maintenance of conduct problems. However, the role of memory has been understudied in comparison with attribution biases. The aim of this study was thus to test if a memory bias for angry faces was related to conduct problems in youth. Adolescents from a junior secondary school were presented with angry and happy faces and were later asked to recognize the same faces with a neutral expression. They also completed an impulsivity questionnaire. A teacher assessed their behavior. The results showed that a better recognition of angry faces than happy faces predicted conduct problems and hyperactivity/inattention as reported by the teacher. The memory bias effect was more pronounced for impulsive adolescents. It is suggested that a memory bias for angry faces favors disruptive behavior but that a good ability to control impulses may moderate the negative impact of this bias.

  10. Identifying cognitive preferences for attractive female faces: an event-related potential experiment using a study-test paradigm.

    PubMed

    Zhang, Yan; Kong, Fanchang; Chen, Hong; Jackson, Todd; Han, Li; Meng, Jing; Yang, Zhou; Gao, Jianguo; Najam ul Hasan, Abbasi

    2011-11-01

    In this experiment, sensitivity to female facial attractiveness was examined by comparing event-related potentials (ERPs) in response to attractive and unattractive female faces within a study-test paradigm. Fourteen heterosexual participants (age range 18-24 years, mean age 21.67 years) were required to judge 84 attractive and 84 unattractive face images as either "attractive" or "unattractive." They were then asked whether they had previously viewed each face in a recognition task in which 50% of the images were novel. Analyses indicated that attractive faces elicited larger ERP amplitudes than unattractive faces in the judgment (N300 and P350-550 msec) and recognition (P160, N250-400 msec, and P400-700 msec) tasks at anterior locations. Moreover, longer reaction times and higher accuracy rates were observed in identifying attractive faces than unattractive faces. In sum, this research identified neural and behavioral bases of cognitive preferences for judging and recognizing attractive female faces. Possible explanations are that attractive female faces arouse more intense positive emotions in participants than unattractive faces do, and that they signal reproductive fitness and mating value from an evolutionary perspective. Copyright © 2011 Wiley-Liss, Inc.

  11. I spy with my little eye: typical, daily exposure to faces documented from a first-person infant perspective.

    PubMed

    Sugden, Nicole A; Mohamed-Ali, Marwan I; Moulson, Margaret C

    2014-02-01

    Exposure to faces is known to shape and change the face processing system; however, no study has yet documented infants' natural daily first-hand exposure to faces. One- and three-month-old infants' visual experience was recorded through head-mounted cameras. The video recordings were coded for faces to determine: (1) How often are infants exposed to faces? (2) To what type of faces are they exposed? and (3) Do frequently encountered face types reflect infants' typical pattern of perceptual narrowing? As hypothesized, infants spent a large proportion of their time (25%) exposed to faces; these faces were primarily female (70%), own-race (96%), and adult-age (81%). Infants were exposed to more individual exemplars of female, own-race, and adult-age faces than to male, other-race, and child- or older-adult-age faces. Each exposure to own-race faces was longer than to other-race faces. There were no differences in exposure duration related to the gender or age of the face. Previous research has found that the face types frequently experienced by our participants are preferred over and more successfully recognized than other face types. The patterns of face exposure revealed in the current study coincide with the known trajectory of perceptual narrowing seen later in infancy. © 2013 The Authors. Developmental Psychobiology Published by Wiley Periodicals, Inc.

  12. Dynamic encoding of face information in the human fusiform gyrus.

    PubMed

    Ghuman, Avniel Singh; Brunet, Nicolas M; Li, Yuanning; Konecky, Roma O; Pyles, John A; Walls, Shawn A; Destefino, Vincent; Wang, Wei; Richardson, R Mark

    2014-12-08

    Humans' ability to rapidly and accurately detect, identify and classify faces under variable conditions derives from a network of brain regions highly tuned to face information. The fusiform face area (FFA) is thought to be a computational hub for face processing; however, the temporal dynamics of face information processing in the FFA remain unclear. Here we use multivariate pattern classification to decode the temporal dynamics of expression-invariant face information processing using electrodes placed directly on the FFA in humans. Early FFA activity (50-75 ms) contained information regarding whether participants were viewing a face. Activity between 200 and 500 ms contained expression-invariant information about which of 70 faces participants were viewing, along with individual differences in facial features and their configurations. Long-lasting (500+ ms) broadband gamma frequency activity predicted task performance. These results elucidate the dynamic computational role the FFA plays in multiple face processing stages and indicate what information is used in performing these visual analyses.
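    The time-resolved decoding approach described above can be illustrated with a small, self-contained sketch on synthetic data (not the authors' recordings; the signal onset, effect size, and nearest-centroid classifier are illustrative assumptions): classifying trials at each timepoint shows when condition information first becomes decodable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "electrode" recordings: trials x timepoints, two conditions
# (face vs. non-face). Signal emerges only after timepoint 50, loosely
# mimicking an early face-detection window.
n_trials, n_time = 80, 120
face = rng.normal(0.0, 1.0, (n_trials, n_time))
face[:, 50:] += 1.5                      # condition-specific signal
nonface = rng.normal(0.0, 1.0, (n_trials, n_time))

def decode_accuracy(a, b, t, half):
    """Nearest-centroid classification at timepoint t, split-half validated."""
    train_a, test_a = a[:half, t], a[half:, t]
    train_b, test_b = b[:half, t], b[half:, t]
    ca, cb = train_a.mean(), train_b.mean()
    correct = np.sum(np.abs(test_a - ca) < np.abs(test_a - cb))
    correct += np.sum(np.abs(test_b - cb) < np.abs(test_b - ca))
    return correct / (len(test_a) + len(test_b))

half = n_trials // 2
early = np.mean([decode_accuracy(face, nonface, t, half) for t in range(0, 40)])
late = np.mean([decode_accuracy(face, nonface, t, half) for t in range(60, 100)])
print(f"accuracy before signal onset: {early:.2f}")
print(f"accuracy after signal onset:  {late:.2f}")
```

    Before the signal onset, accuracy hovers at chance; afterwards it rises well above it, which is the basic signature such decoding analyses look for.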

  13. Neuromagnetic evidence that the right fusiform face area is essential for human face awareness: An intermittent binocular rivalry study.

    PubMed

    Kume, Yuko; Maekawa, Toshihiko; Urakawa, Tomokazu; Hironaga, Naruhito; Ogata, Katsuya; Shigyo, Maki; Tobimatsu, Shozo

    2016-08-01

    When and where the awareness of faces is consciously initiated is unclear. We used magnetoencephalography to probe the brain responses associated with face awareness under intermittent pseudo-rivalry (PR) and binocular rivalry (BR) conditions. The stimuli comprised three pictures: a human face, a monkey face and a house. In the PR condition, we detected the M130 component, which has been minimally characterized in previous research. We obtained a clear recording of the M170 component in the fusiform face area (FFA), and found that this component had an earlier response time to faces compared with other objects. The M170 occurred predominantly in the right hemisphere in both conditions. In the BR condition, the amplitude of the M130 significantly increased in the right hemisphere irrespective of the physical characteristics of the visual stimuli. Conversely, we did not detect the M170 when the face image was suppressed in the BR condition, although this component was clearly present when awareness for the face was initiated. We also found a significant difference in the latency of the M170 (human

  14. A voxel-based lesion study on facial emotion recognition after penetrating brain injury

    PubMed Central

    Dal Monte, Olga; Solomon, Jeffrey M.; Schintu, Selene; Knutson, Kristine M.; Strenziok, Maren; Pardini, Matteo; Leopold, Anne; Raymont, Vanessa; Grafman, Jordan

    2013-01-01

    The ability to read emotions in the face of another person is an important social skill that can be impaired in subjects with traumatic brain injury (TBI). To determine the brain regions that modulate facial emotion recognition, we conducted a whole-brain analysis using a well-validated facial emotion recognition task and voxel-based lesion symptom mapping (VLSM) in a large sample of patients with focal penetrating TBIs (pTBIs). Our results revealed that individuals with pTBI performed significantly worse than normal controls in recognizing unpleasant emotions. VLSM results showed that impairment in facial emotion recognition was due to damage in a bilateral fronto-temporo-limbic network, including medial prefrontal cortex (PFC), anterior cingulate cortex, left insula and temporal areas. Besides these common areas, damage to the bilateral and anterior regions of PFC led to impairment in recognizing unpleasant emotions, whereas damage to bilateral posterior PFC and left temporal areas led to impairment in recognizing pleasant emotions. Our findings add empirical evidence that the ability to read pleasant and unpleasant emotions in other people's faces is a complex process involving not only a common network that includes bilateral fronto-temporo-limbic lobes, but also other regions depending on emotional valence. PMID:22496440
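    The core VLSM logic can be sketched with synthetic data (the voxel count, effect size, and "critical" voxel here are illustrative assumptions, not the study's actual lesion maps): at each voxel, patients with damage there are compared to patients without it on the behavioural score, yielding a statistical map.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic VLSM setup: binary lesion maps (patients x voxels) and one
# behavioural score per patient (e.g. emotion-recognition accuracy).
# Voxel 3 is made "critical": damage there lowers the score.
n_patients, n_voxels = 60, 8
lesions = rng.integers(0, 2, (n_patients, n_voxels))
scores = rng.normal(75.0, 5.0, n_patients)
scores -= 12.0 * lesions[:, 3]           # deficit tied to the critical voxel

def voxel_t(lesion_col, scores):
    """Welch two-sample t statistic: lesioned vs. spared patients at one voxel."""
    a, b = scores[lesion_col == 1], scores[lesion_col == 0]
    va = a.var(ddof=1) / len(a)
    vb = b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

t_map = np.array([voxel_t(lesions[:, v], scores) for v in range(n_voxels)])
critical = int(np.argmin(t_map))          # most negative t = largest deficit
print("critical voxel:", critical)
```

    In a real analysis this mass-univariate map is corrected for multiple comparisons across tens of thousands of voxels; the sketch only shows the per-voxel comparison.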

  15. A social-ecological database to advance research on infrastructure development impacts in the Brazilian Amazon.

    PubMed

    Tucker Lima, Joanna M; Valle, Denis; Moretto, Evandro Mateus; Pulice, Sergio Mantovani Paiva; Zuca, Nadia Lucia; Roquetti, Daniel Rondinelli; Beduschi, Liviam Elizabeth Cordeiro; Praia, Amanda Salles; Okamoto, Claudia Parucce Franco; da Silva Carvalhaes, Vinicius Leite; Branco, Evandro Albiach; Barbezani, Bruna; Labandera, Emily; Timpe, Kelsie; Kaplan, David

    2016-08-30

    Recognized as one of the world's most vital natural and cultural resources, the Amazon faces a wide variety of threats from natural resource and infrastructure development. Within this context, rigorous scientific study of the region's complex social-ecological system is critical to inform and direct decision-making toward more sustainable environmental and social outcomes. Given the Amazon's tightly linked social and ecological components and the scope of potential development impacts, effective study of this system requires an easily accessible resource that provides a broad and reliable data baseline. This paper brings together multiple datasets from diverse disciplines (including human health, socio-economics, environment, hydrology, and energy) to provide investigators with a variety of baseline data to explore the multiple long-term effects of infrastructure development in the Brazilian Amazon.

  16. A social-ecological database to advance research on infrastructure development impacts in the Brazilian Amazon

    PubMed Central

    Tucker Lima, Joanna M.; Valle, Denis; Moretto, Evandro Mateus; Pulice, Sergio Mantovani Paiva; Zuca, Nadia Lucia; Roquetti, Daniel Rondinelli; Beduschi, Liviam Elizabeth Cordeiro; Praia, Amanda Salles; Okamoto, Claudia Parucce Franco; da Silva Carvalhaes, Vinicius Leite; Branco, Evandro Albiach; Barbezani, Bruna; Labandera, Emily; Timpe, Kelsie; Kaplan, David

    2016-01-01

    Recognized as one of the world’s most vital natural and cultural resources, the Amazon faces a wide variety of threats from natural resource and infrastructure development. Within this context, rigorous scientific study of the region’s complex social-ecological system is critical to inform and direct decision-making toward more sustainable environmental and social outcomes. Given the Amazon’s tightly linked social and ecological components and the scope of potential development impacts, effective study of this system requires an easily accessible resource that provides a broad and reliable data baseline. This paper brings together multiple datasets from diverse disciplines (including human health, socio-economics, environment, hydrology, and energy) to provide investigators with a variety of baseline data to explore the multiple long-term effects of infrastructure development in the Brazilian Amazon. PMID:27575915

  17. Recognition of rotated images using the multi-valued neuron and rotation-invariant 2D Fourier descriptors

    NASA Astrophysics Data System (ADS)

    Aizenberg, Evgeni; Bigio, Irving J.; Rodriguez-Diaz, Eladio

    2012-03-01

    The Fourier descriptors paradigm is a well-established approach for affine-invariant characterization of shape contours. In the work presented here, we extend this method to images, and obtain a 2D Fourier representation that is invariant to image rotation. The proposed technique retains phase uniqueness, and therefore structural image information is not lost. Rotation-invariant phase coefficients were used to train a single multi-valued neuron (MVN) to recognize satellite and human face images rotated by a wide range of angles. Experiments yielded classification rates of 100% and 96.43% on the two data sets, respectively. Recognition performance was additionally evaluated under effects of lossy JPEG compression and additive Gaussian noise. Preliminary results show that the derived rotation-invariant features combined with the MVN provide a promising scheme for efficient recognition of rotated images.
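    The classical contour-based Fourier descriptors that this work extends to full images can be sketched as follows: a closed contour sampled as complex points has Fourier coefficients whose magnitudes are unchanged by rotation (the ellipse is an illustrative shape, not the paper's data).

```python
import numpy as np

# Classic contour Fourier descriptors: sample a closed contour as complex
# points z = x + iy, take the FFT, and normalize. Magnitudes of the
# coefficients are invariant to rotation (and, after scaling, to size),
# because rotating the contour multiplies every coefficient by e^{i*theta}.
def fourier_descriptors(contour, n_coeffs=8):
    z = contour[:, 0] + 1j * contour[:, 1]
    coeffs = np.fft.fft(z)
    mags = np.abs(coeffs[1:n_coeffs + 1])  # drop DC (translation) term
    return mags / mags[0]                  # scale normalization

# An ellipse and the same ellipse rotated by 40 degrees.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
ellipse = np.stack([3 * np.cos(t), np.sin(t)], axis=1)
theta = np.deg2rad(40)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
rotated = ellipse @ rot.T

d1 = fourier_descriptors(ellipse)
d2 = fourier_descriptors(rotated)
print("max descriptor difference:", float(np.max(np.abs(d1 - d2))))
```

    The two descriptor vectors agree to floating-point precision; the paper's contribution is obtaining an analogous rotation-invariant representation for whole 2D images while retaining phase information.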

  18. Safety Evaluation of Destination Lighting at Stop-Controlled Cross Intersections

    DOT National Transportation Integrated Search

    2018-02-02

    Unlit or inadequately lit intersections reduce the ability of drivers to recognize upcoming intersections during nighttime hours. Drivers also face difficulty in properly negotiating the intersection because lack of adequate lighting increases the li...

  19. Learning, Play, and Your 1- to 3-Month-Old

    MedlinePlus

    What Your Baby Is Learning: After learning to recognize your voice, your face, ...

  20. Looking to the eyes influences the processing of emotion on face-sensitive event-related potentials in 7-month-old infants.

    PubMed

    Vanderwert, Ross E; Westerlund, Alissa; Montoya, Lina; McCormick, Sarah A; Miguel, Helga O; Nelson, Charles A

    2015-10-01

    Previous studies in infants have shown that face-sensitive components of the ongoing electroencephalogram (the event-related potential, or ERP) are larger in amplitude to negative emotions (e.g., fear, anger) versus positive emotions (e.g., happy). However, it is still unclear whether the negative emotions linked with the face or the negative emotions alone contribute to these amplitude differences. We simultaneously recorded infant looking behaviors (via eye-tracking) and face-sensitive ERPs while 7-month-old infants viewed human faces or animals displaying happy, fear, or angry expressions. We observed that the amplitude of the N290 was greater (i.e., more negative) to angry animals compared to happy or fearful animals; no such differences were obtained for human faces. Eye-tracking data highlighted the importance of the eye region in processing emotional human faces. Infants that spent more time looking to the eye region of human faces showing fearful or angry expressions had greater N290 or P400 amplitudes, respectively. © 2014 Wiley Periodicals, Inc.

  1. Discrimination between smiling faces: Human observers vs. automated face analysis.

    PubMed

    Del Líbano, Mario; Calvo, Manuel G; Fernández-Martín, Andrés; Recio, Guillermo

    2018-05-11

    This study investigated (a) how prototypical happy faces (with happy eyes and a smile) can be discriminated from blended expressions with a smile but non-happy eyes, depending on type and intensity of the eye expression; and (b) how smile discrimination differs for human perceivers versus automated face analysis, depending on affective valence and morphological facial features. Human observers categorized faces as happy or non-happy, or rated their valence. Automated analysis (FACET software) computed seven expressions (including joy/happiness) and 20 facial action units (AUs). Physical properties (low-level image statistics and visual saliency) of the face stimuli were controlled. Results revealed, first, that some blended expressions (especially, with angry eyes) had lower discrimination thresholds (i.e., they were identified as "non-happy" at lower non-happy eye intensities) than others (especially, with neutral eyes). Second, discrimination sensitivity was better for human perceivers than for automated FACET analysis. As an additional finding, affective valence predicted human discrimination performance, whereas morphological AUs predicted FACET discrimination. FACET can be a valid tool for categorizing prototypical expressions, but is currently more limited than human observers for discrimination of blended expressions. Configural processing facilitates detection of in/congruence(s) across regions, and thus detection of non-genuine smiling faces (due to non-happy eyes). Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Modeling Human Dynamics of Face-to-Face Interaction Networks

    NASA Astrophysics Data System (ADS)

    Starnini, Michele; Baronchelli, Andrea; Pastor-Satorras, Romualdo

    2013-04-01

    Face-to-face interaction networks describe social interactions in human gatherings, and are the substrate for processes such as epidemic spreading and gossip propagation. The bursty nature of human behavior characterizes many aspects of empirical data, such as the distribution of conversation lengths, of conversations per person, or of interconversation times. Despite several recent attempts, a general theoretical understanding of the global picture emerging from data is still lacking. Here we present a simple model that reproduces quantitatively most of the relevant features of empirical face-to-face interaction networks. The model describes agents that perform a random walk in a two-dimensional space and are characterized by an attractiveness whose effect is to slow down the motion of people around them. The proposed framework sheds light on the dynamics of human interactions and can improve the modeling of dynamical processes taking place on the ensuing dynamical social networks.
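    A minimal sketch of the attractiveness mechanism described above (the parameter values, capture rule, and contact counting are illustrative simplifications, not the authors' exact model): agents random-walk in a periodic box, and an agent near a more attractive neighbour tends to stay put, producing contacts of heterogeneous duration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Agents random-walk in a 2D periodic box; an agent within interaction
# range of a neighbour stays put with probability equal to the most
# attractive neighbour's attractiveness.
N, L, steps, radius, v = 50, 10.0, 500, 0.5, 0.2
pos = rng.uniform(0, L, (N, 2))
attract = rng.uniform(0, 1, N)
contact_steps = 0

for _ in range(steps):
    # pairwise displacements with minimum-image periodic boundaries
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    dist = np.hypot(d[..., 0], d[..., 1])
    np.fill_diagonal(dist, np.inf)
    in_range = dist < radius
    contact_steps += int(in_range.sum()) // 2   # count each pair once
    # each agent may be "captured" by its most attractive neighbour
    neigh_attr = np.where(in_range, attract[None, :], 0.0).max(axis=1)
    moving = rng.uniform(0, 1, N) >= neigh_attr
    angles = rng.uniform(0, 2 * np.pi, N)
    step = v * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    pos = np.mod(pos + moving[:, None] * step, L)

print("total pairwise contact events:", contact_steps)
```

    Logging which pairs are in range at each step would yield a temporal contact network whose conversation-length and inter-contact-time distributions can then be compared against empirical data, as in the study above.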

  3. Modelling temporal networks of human face-to-face contacts with public activity and individual reachability

    NASA Astrophysics Data System (ADS)

    Zhang, Yi-Qing; Cui, Jing; Zhang, Shu-Min; Zhang, Qi; Li, Xiang

    2016-02-01

    Modelling temporal networks of human face-to-face contacts is vital both for understanding the spread of airborne pathogens and word-of-mouth spreading of information. Although many efforts have been devoted to modelling these temporal networks, two important social features, public activity and individual reachability, have so far been ignored in these models. Here we present a simple model that captures these two features along with other typical properties of empirical face-to-face contact networks. The model describes agents characterized by an attractiveness that slows down the motion of nearby people; agents have an event-triggered activation probability and perform an activity-dependent biased random walk in a square box with periodic boundaries. The model quantitatively reproduces two empirical temporal networks of human face-to-face contacts, as verified by their network properties and by the dynamics of epidemic spread on them.

  4. Preventing facial recognition when rendering MR images of the head in three dimensions.

    PubMed

    Budin, François; Zeng, Donglin; Ghosh, Arpita; Bullitt, Elizabeth

    2008-06-01

    In the United States, patient-specific information may not be made public without the patient's consent. This ruling has led to difficulty for those interested in sharing three-dimensional (3D) images of the head and brain since a patient's face might be recognized from a 3D rendering of the skin surface. Approaches employed to date have included brain stripping and total removal of the face anterior to a cut plane, each of which lose potentially important anatomical information about the skull surface, air sinuses, and orbits. This paper describes a new approach that involves (a) definition of a plane anterior to which the face lies, and (b) an adjustable level of deformation of the skin surface anterior to that plane. On the basis of a user performance study using forced choices, we conclude that approximately 30% of individuals are at risk of recognition from 3D renderings of unaltered images and that truncation of the face below the level of the nose does not preclude facial recognition. Removal of the face anterior to a cut plane may interfere with accurate registration and may delete important anatomical information. Our new method alters little of the underlying anatomy and does not prevent effective registration into a common coordinate system. Although the methods presented here were not fully effective (one subject was consistently recognized under the forced choice study design even at the maximum deformation level employed) this paper may point a way toward solution of a difficult problem that has received little attention in the literature.
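    The plane-plus-deformation idea can be sketched as follows (random vertices stand in for a real skin-surface mesh, and the noise-along-the-normal deformation rule is an illustrative assumption, not the authors' exact algorithm): only points anterior to the chosen plane are perturbed, so posterior anatomy is preserved for registration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Plane-based facial deformation sketch: vertices of a head surface that
# lie anterior to a chosen plane are perturbed along the plane normal,
# with an adjustable amplitude; everything posterior is left untouched.
def deform_face(vertices, plane_point, plane_normal, amplitude):
    n = plane_normal / np.linalg.norm(plane_normal)
    signed = (vertices - plane_point) @ n        # > 0 means anterior
    anterior = signed > 0
    noise = amplitude * rng.standard_normal(anterior.sum())
    out = vertices.copy()
    out[anterior] += noise[:, None] * n
    return out, anterior

verts = rng.uniform(-1, 1, (1000, 3))
deformed, mask = deform_face(verts, np.zeros(3),
                             np.array([0.0, 1.0, 0.0]), 0.05)
moved = np.any(deformed != verts, axis=1)
print("posterior vertices untouched:", bool(np.all(~moved[~mask])))
```

    Raising `amplitude` trades off recognizability against fidelity of the anterior surface, which is the adjustable level of deformation the paper describes.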

  5. Preventing Facial Recognition When Rendering MR Images of the Head in Three Dimensions

    PubMed Central

    Budin, François; Zeng, Donglin; Ghosh, Arpita; Bullitt, Elizabeth

    2008-01-01

    In the United States, patient-specific information may not be made public without the patient's consent. This ruling has led to difficulty for those interested in sharing three-dimensional (3D) images of the head and brain since a patient's face might be recognized from a 3D rendering of the skin surface. Approaches employed to date have included brain stripping and total removal of the face anterior to a cut plane, each of which lose potentially important anatomical information about the skull surface, air sinuses, and orbits. This paper describes a new approach that involves (a) definition of a plane anterior to which the face lies, and (b) an adjustable level of deformation of the skin surface anterior to that plane. On the basis of a user performance study using forced choices, we conclude that approximately 30% of individuals are at risk of recognition from 3D renderings of unaltered images and that truncation of the face below the level of the nose does not preclude facial recognition. Removal of the face anterior to a cut plane may interfere with accurate registration and may delete important anatomical information. Our new method alters little of the underlying anatomy and does not prevent effective registration into a common coordinate system. Although the methods presented here were not fully effective (one subject was consistently recognized under the forced choice study design even at the maximum deformation level employed) this paper may point a way toward solution of a difficult problem that has received little attention in the literature. PMID:18069044

  6. Young children perceive less humanness in outgroup faces.

    PubMed

    McLoughlin, Niamh; Tipper, Steven P; Over, Harriet

    2018-03-01

    We investigated when young children first dehumanize outgroups. Across two studies, 5- and 6-year-olds were asked to rate how human they thought a set of ambiguous doll-human face morphs were. We manipulated whether these faces belonged to their gender in- or gender outgroup (Study 1) and to a geographically based in- or outgroup (Study 2). In both studies, the tendency to perceive outgroup faces as less human relative to ingroup faces increased with age. Explicit ingroup preference, in contrast, was present even in the youngest children and remained stable across age. These results demonstrate that children dehumanize outgroup members from relatively early in development and suggest that the tendency to do so may be partially distinguishable from intergroup preference. This research has important implications for our understanding of children's perception of humanness and the origins of intergroup bias. © 2017 John Wiley & Sons Ltd.

  7. SMAS Fusion Zones Determine the Subfascial and Subcutaneous Anatomy of the Human Face: Fascial Spaces, Fat Compartments, and Models of Facial Aging.

    PubMed

    Pessa, Joel E

    2016-05-01

    Fusion zones between superficial fascia and deep fascia have been recognized by surgical anatomists since 1938. Anatomical dissection performed by the author suggested that additional superficial fascia fusion zones exist. A study was performed to evaluate and define fusion zones between the superficial and the deep fascia. Dissection of fresh and minimally preserved cadavers was performed using the accepted technique for defining anatomic spaces: dye injection combined with cross-sectional anatomical dissection. This study identified bilaminar membranes traveling from deep to superficial fascia at consistent locations in all specimens. These membranes exist as fusion zones between superficial and deep fascia, and are referred to as SMAS fusion zones. Nerves, blood vessels and lymphatics transition between the deep and superficial fascia of the face by traveling along and within these membranes, a construct that provides stability and minimizes shear. Bilaminar subfascial membranes continue into the subcutaneous tissues as unilaminar septa on their way to skin. This three-dimensional lattice of interlocking horizontal, vertical, and oblique membranes defines the anatomic boundaries of the fascial spaces as well as the deep and superficial fat compartments of the face. This information facilitates accurate volume augmentation; helps to avoid facial nerve injury; and provides the conceptual basis for understanding jowls as a manifestation of enlargement of the buccal space that occurs with age. © 2016 The American Society for Aesthetic Plastic Surgery, Inc. Reprints and permission: journals.permissions@oup.com.

  8. Emotion recognition following early psychosocial deprivation

    PubMed Central

    Nelson, Charles A.; Westerlund, Alissa; McDermott, Jennifer Martin; Zeanah, Charles H.; Fox, Nathan A.

    2014-01-01

    We examined the ability to discriminate facial expressions among 8-year-old children who had been abandoned and placed in institutions in infancy and children with no institutional rearing (Never Institutionalized Group; NIG). Following a baseline assessment (average age = 22 months), half the institutionalized children were randomly assigned to a foster care intervention (foster care group; FCG) and half to remain in the institution (care as usual group; CAUG). All three groups had a more difficult time recognizing fearful as compared to neutral expressions. However, the NIG and FCG were both better at inhibiting responses to neutral and fearful faces than the CAUG. Regarding ERPs, the P1 was largest to angry faces in the NIG, smallest in the CAUG, and intermediate in the FCG. The N170 and the P300 were largest to fear in all groups. Although the children in foster care showed improvements in their ability to recognize fearful and neutral faces, and their P1 response to angry faces was midway between that of the NIG and the CAUG, we observed no timing-of-placement effects. These findings support the view that institutional rearing leads to deficits in the ability to process facial emotion, and placement in foster care partially, although incompletely, ameliorates these deficits. PMID:23627960

  9. Dog experts' brains distinguish socially relevant body postures similarly in dogs and humans.

    PubMed

    Kujala, Miiamaaria V; Kujala, Jan; Carlson, Synnöve; Hari, Riitta

    2012-01-01

    We read conspecifics' social cues effortlessly, but little is known about our abilities to understand social gestures of other species. To investigate the neural underpinnings of such skills, we used functional magnetic resonance imaging to study the brain activity of experts and non-experts of dog behavior while they observed humans or dogs either interacting with, or facing away from a conspecific. The posterior superior temporal sulcus (pSTS) of both subject groups dissociated humans facing toward each other from humans facing away, and in dog experts, a distinction also occurred for dogs facing toward vs. away in a bilateral area extending from the pSTS to the inferior temporo-occipital cortex: the dissociation of dog behavior was significantly stronger in expert than control group. Furthermore, the control group had stronger pSTS responses to humans than dogs facing toward a conspecific, whereas in dog experts, the responses were of similar magnitude. These findings suggest that dog experts' brains distinguish socially relevant body postures similarly in dogs and humans.

  10. Faces with Light Makeup Are Better Recognized than Faces with Heavy Makeup

    PubMed Central

    Tagai, Keiko; Ohtaka, Hitomi; Nittono, Hiroshi

    2016-01-01

    Many women wear facial makeup to accentuate their appeal and attractiveness. Makeup may vary from natural (light) to glamorous (heavy), depending on the interpersonal context, the emphasis on femininity, and current societal makeup trends. This study examined how light makeup and heavy makeup influenced attractiveness ratings and facial recognition. In a rating task, 38 Japanese women assigned attractiveness ratings to 36 Japanese female faces with no makeup, light makeup, and heavy makeup (12 each). In a subsequent recognition task, the participants were presented with 36 old and 36 new faces. Results indicated that attractiveness was rated highest for the light makeup faces and lowest for the no makeup faces. In contrast, recognition performance was higher for the no makeup and light makeup faces than for the heavy makeup faces. Faces with heavy makeup produced a higher rate of false recognition than did other faces, possibly because heavy makeup creates an impression of the style of makeup itself, rather than of the individual wearing the makeup. The present study suggests that light makeup is preferable to heavy makeup in that light makeup does not interfere with individual recognition and gives beholders positive impressions. PMID:26973553

  11. Faces with Light Makeup Are Better Recognized than Faces with Heavy Makeup.

    PubMed

    Tagai, Keiko; Ohtaka, Hitomi; Nittono, Hiroshi

    2016-01-01

    Many women wear facial makeup to accentuate their appeal and attractiveness. Makeup may vary from natural (light) to glamorous (heavy), depending on the interpersonal context, the emphasis on femininity, and current societal makeup trends. This study examined how light makeup and heavy makeup influenced attractiveness ratings and facial recognition. In a rating task, 38 Japanese women assigned attractiveness ratings to 36 Japanese female faces with no makeup, light makeup, and heavy makeup (12 each). In a subsequent recognition task, the participants were presented with 36 old and 36 new faces. Results indicated that attractiveness was rated highest for the light makeup faces and lowest for the no makeup faces. In contrast, recognition performance was higher for the no makeup and light makeup faces than for the heavy makeup faces. Faces with heavy makeup produced a higher rate of false recognition than did other faces, possibly because heavy makeup creates an impression of the style of makeup itself, rather than of the individual wearing the makeup. The present study suggests that light makeup is preferable to heavy makeup in that light makeup does not interfere with individual recognition and gives beholders positive impressions.

  12. Better the devil you know? Nonconscious processing of identity and affect of famous faces.

    PubMed

    Stone, Anna; Valentine, Tim

    2004-06-01

    The nonconscious recognition of facial identity was investigated in two experiments featuring brief (17-msec) masked stimulus presentation to prevent conscious recognition. Faces were presented in simultaneous pairs of one famous face and one unfamiliar face, and participants attempted to select the famous face. Subsequently, participants rated the famous persons as "good" or "evil" (Experiment 1) or liked or disliked (Experiment 2). In Experiments 1 and 2, responses were less accurate to faces of persons rated evil/disliked than to faces of persons rated good/liked, and faces of persons rated evil/disliked were selected significantly below chance. Experiment 2 showed the effect in a within-items analysis: A famous face was selected less often by participants who disliked the person than by participants who liked the person, and the former were selected below chance accuracy. The within-items analysis rules out possible confounding factors based on variations in physical characteristics of the stimulus faces and confirms that the effects are due to participants' attitudes toward the famous persons. The results suggest that facial identity is recognized preconsciously, and that responses may be based on affect rather than familiarity.

  13. Learning new faces in typical and atypical populations of children.

    PubMed

    Jones, Rebecca R; Blades, Mark; Coleman, Mike; Pascalis, Olivier

    2013-02-01

    Recognizing an individual as familiar is an important aspect of our social cognition, which requires both learning a face and recalling it. It has been suggested that children with autistic spectrum disorder (ASD) have deficits and abnormalities in face processing. We investigated whether the process by which unfamiliar faces become familiar differs in typically developing (TD) children, children with ASD, and children with developmental delay. Children were familiarized with a set of moving novel faces presented over a three-day period. Recognition of the learned faces was assessed at five time points during the three-day period. Both immediate and delayed recall of faces was tested. All groups showed improvements in face recognition at immediate recall, which indicated that learning had occurred. The TD population showed slightly better performance than the two other groups, however no difference was specific to the ASD group. All groups showed similar levels of improvements with time. Our results are discussed in terms of learning in ASD. © 2013 The Authors. Scandinavian Journal of Psychology © 2013 The Scandinavian Psychological Associations.

  14. Detecting Superior Face Recognition Skills in a Large Sample of Young British Adults

    PubMed Central

    Bobak, Anna K.; Pampoulov, Philip; Bate, Sarah

    2016-01-01

    The Cambridge Face Memory Test Long Form (CFMT+) and Cambridge Face Perception Test (CFPT) are typically used to assess the face processing ability of individuals who believe they have superior face recognition skills. Previous large-scale studies have presented norms for the CFPT but not the CFMT+. However, previous research has also highlighted the necessity for establishing country-specific norms for these tests, indicating that norming data is required for both tests using young British adults. The current study addressed this issue in 254 British participants. In addition to providing the first norm for performance on the CFMT+ in any large sample, we also report the first UK specific cut-off for superior face recognition on the CFPT. Further analyses identified a small advantage for females on both tests, and only small associations between objective face recognition skills and self-report measures. A secondary aim of the study was to examine the relationship between trait or social anxiety and face processing ability, and no associations were noted. The implications of these findings for the classification of super-recognizers are discussed. PMID:27713706

  15. [HUMAN RESOURCES MANAGEMENT BASED ON COMPETENCIES].

    PubMed

    Larumbe Andueza, Ma Carmen; De Mendoza Cánton, Juana Hermoso

    2016-05-01

    We are living in a time of rapid change in which health organizations face growing challenges. One of them is to recognize, strengthen, develop and retain the talent they have. Competency-based human resources management is emerging as a tool that helps achieve this aim. Competencies, from the generic or characteristic perspective, comprise personality traits, values and motivations that are deeply rooted in the person. By drawing up a competency map for the organization and identifying the competency profile of each job, above all for key posts, employees know what is expected of them. Once learning needs are detected and addressed, a better fit between worker and job can be achieved. The nursing unit manager is a key post because it links the management team and the nursing team, and the way the role is performed affects both the quality of care and the team's motivation. The most suitable person for this post will therefore combine the knowledge, skills, attitudes and interests that the job requires. Competency-based management helps identify both the potential and the learning needs relevant to performing this role.

  16. Development of Human Face Literature Database Using Text Mining Approach: Phase I.

    PubMed

    Kaur, Paramjit; Krishan, Kewal; Sharma, Suresh K

    2018-06-01

    The face is an important part of the human body through which an individual communicates in society. Its importance can be gauged from the fact that a person deprived of a face cannot sustain a normal social life. The number of experiments being performed and of research papers being published in the domain of the human face has surged in the past few decades. Several scientific disciplines conduct research on the human face, including medical science, anthropology, information technology (biometrics, robotics, artificial intelligence, etc.), psychology, forensic science and neuroscience. This highlights the need to collect and manage data concerning the human face so that free public access to it can be provided to the scientific community. This can be attained by developing databases and tools on the human face using a bioinformatics approach. The current research emphasizes creating a database of the literature on the human face. The database can be queried by specific keywords, journal name, date of publication, author's name, etc. The collected research papers are stored in the database. Hence, the database will be beneficial to the research community, as comprehensive information dedicated to the human face can be found in one place. Information related to facial morphologic features, facial disorders, facial asymmetry, facial abnormalities and many other parameters can be extracted from this database. The front end has been developed using Hypertext Markup Language (HTML) and Cascading Style Sheets (CSS). The back end has been developed using the hypertext preprocessor (PHP), with JavaScript as the scripting language. MySQL is used for database development, as it is the most widely used relational database management system. XAMPP (X (cross-platform), Apache, MySQL, PHP, Perl), an open-source web application stack, has been used as the server. The database is still in the developmental phase; the current paper describes the initial steps of its creation and the work done to date.
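    The record-and-search design described in this abstract (papers stored with title, authors, journal, publication date and keywords, retrievable by any of those fields) can be illustrated with a minimal sketch. This is not the actual schema of the Human Face Literature Database: the table name, column names and sample row are assumptions, and SQLite stands in for the MySQL back end so the example is self-contained.

    ```python
    import sqlite3

    # Hypothetical schema mirroring the access paths named in the abstract:
    # keyword, journal name, date of publication, and author's name.
    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE papers (
            id       INTEGER PRIMARY KEY,
            title    TEXT NOT NULL,
            authors  TEXT,
            journal  TEXT,
            pub_date TEXT,
            keywords TEXT
        )
    """)
    # One made-up sample record for illustration only.
    conn.execute(
        "INSERT INTO papers (title, authors, journal, pub_date, keywords) "
        "VALUES (?, ?, ?, ?, ?)",
        ("Facial asymmetry in identification", "Doe J", "J Forensic Sci",
         "2017-03-01", "facial asymmetry; forensic"),
    )

    def search(conn, term):
        """Return titles of papers whose title or keywords contain the term."""
        like = f"%{term}%"
        rows = conn.execute(
            "SELECT title FROM papers WHERE keywords LIKE ? OR title LIKE ?",
            (like, like),
        ).fetchall()
        return [r[0] for r in rows]

    print(search(conn, "asymmetry"))
    ```

    In the deployed system the same query would be issued from PHP against MySQL; the `LIKE`-based lookup shown here is the simplest form of the keyword access the abstract describes.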

  17. Understanding face perception by means of human electrophysiology.

    PubMed

    Rossion, Bruno

    2014-06-01

    Electrophysiological recordings on the human scalp provide a wealth of information about the temporal dynamics and nature of face perception at a global level of brain organization. The time window between 100 and 200 ms witnesses the transition between low-level and high-level vision, an N170 component correlating with conscious interpretation of a visual stimulus as a face. This face representation is rapidly refined as information accumulates during this time window, allowing the individualization of faces. To improve the sensitivity and objectivity of face perception measures, it is increasingly important to go beyond transient visual stimulation by recording electrophysiological responses at periodic frequency rates. This approach has recently provided face perception thresholds and the first objective signature of integration of facial parts in the human brain. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Parkinson Patients' Initial Trust in Avatars: Theory and Evidence.

    PubMed

    Javor, Andrija; Ransmayr, Gerhard; Struhal, Walter; Riedl, René

    2016-01-01

    Parkinson's disease (PD) is a neurodegenerative disease that affects the motor system and cognitive and behavioral functions. Due to these impairments, PD patients also have problems in using the computer. However, using computers and the Internet could help these patients to overcome social isolation and enhance information search. Specifically, avatars (defined as virtual representations of humans) are increasingly used in online environments to enhance human-computer interaction by simulating face-to-face interaction. Our laboratory experiment investigated how PD patients behave in a trust game played with human and avatar counterparts, and we compared this behavior to the behavior of age, income, education and gender matched healthy controls. The results of our study show that PD patients trust avatar faces significantly more than human faces. Moreover, there was no significant difference between initial trust of PD patients and healthy controls in avatar faces, while PD patients trusted human faces significantly less than healthy controls. Our data suggests that PD patients' interaction with avatars may constitute an effective way of communication in situations in which trust is required (e.g., a physician recommends intake of medication). We discuss the implications of these results for several areas of human-computer interaction and neurological research.

  19. Parkinson Patients’ Initial Trust in Avatars: Theory and Evidence

    PubMed Central

    Javor, Andrija; Ransmayr, Gerhard; Struhal, Walter; Riedl, René

    2016-01-01

    Parkinson’s disease (PD) is a neurodegenerative disease that affects the motor system and cognitive and behavioral functions. Due to these impairments, PD patients also have problems in using the computer. However, using computers and the Internet could help these patients to overcome social isolation and enhance information search. Specifically, avatars (defined as virtual representations of humans) are increasingly used in online environments to enhance human-computer interaction by simulating face-to-face interaction. Our laboratory experiment investigated how PD patients behave in a trust game played with human and avatar counterparts, and we compared this behavior to the behavior of age, income, education and gender matched healthy controls. The results of our study show that PD patients trust avatar faces significantly more than human faces. Moreover, there was no significant difference between initial trust of PD patients and healthy controls in avatar faces, while PD patients trusted human faces significantly less than healthy controls. Our data suggests that PD patients’ interaction with avatars may constitute an effective way of communication in situations in which trust is required (e.g., a physician recommends intake of medication). We discuss the implications of these results for several areas of human-computer interaction and neurological research. PMID:27820864

  20. Anti-GM1 antibodies as a model of the immune response to self-glycans.

    PubMed

    Nores, Gustavo A; Lardone, Ricardo D; Comín, Romina; Alaniz, María E; Moyano, Ana L; Irazoqui, Fernando J

    2008-03-01

    Glycans are a class of molecules with high structural variability, frequently found in the plasma membrane facing the extracellular space. Because of these characteristics, glycans are often considered as recognition molecules involved in cell social functions, and as targets of pathogenic factors. Induction of anti-glycan antibodies is one of the early events in immunological defense against bacteria that colonize the body. Because of this natural infection, antibodies recognizing a variety of bacterial glycans are found in sera of adult humans and animals. The immune response to glycans is restricted by self-tolerance, and no antibodies to self-glycans should exist in normal subjects. However, antibodies recognizing structures closely related to self-glycans do exist, and can lead to production of harmful anti-self antibodies. Normal human sera contain low-affinity anti-GM1 IgM-antibodies. Similar antibodies with higher affinity or different isotype are found in some neuropathy patients. Two hypotheses have been developed to explain the origin of disease-associated anti-GM1 antibodies. According to the "molecular mimicry" hypothesis, similarity between GM1 and Campylobacter jejuni lipopolysaccharide carrying a GM1-like glycan is the cause of Guillain-Barré syndrome associated with anti-GM1 IgG-antibodies. According to the "binding site drift" hypothesis, IgM-antibodies associated with disease originate through changes in the binding site of normally occurring anti-GM1 antibodies. We now present an "integrated" hypothesis, combining the "mimicry" and "drift" concepts, which satisfactorily explains most of the published data on anti-GM1 antibodies.

Top