Human versus Non-Human Face Processing: Evidence from Williams Syndrome
ERIC Educational Resources Information Center
Santos, Andreia; Rosset, Delphine; Deruelle, Christine
2009-01-01
Increased motivation towards social stimuli in Williams syndrome (WS) led us to hypothesize that a face's human status would have a greater impact than its orientation on face-processing abilities in WS. Twenty-nine individuals with WS were asked to categorize facial emotion expressions in real, human cartoon and non-human cartoon faces presented…
Kret, Mariska E; Tomonaga, Masaki
2016-01-01
For social species such as primates, the recognition of conspecifics is crucial for their survival. As demonstrated by the 'face inversion effect', humans are experts in recognizing faces and, unlike objects, recognize their identity by processing it configurally. The human face, with its distinct features such as eye-whites, eyebrows, red lips and cheeks, signals emotions, intentions, health and sexual attraction and, as we will show here, shares important features with the primate behind. Chimpanzee females show a swelling and reddening of the anogenital region around the time of ovulation. This provides an important socio-sexual signal for group members, who can identify individuals by their behinds. We hypothesized that chimpanzees process behinds configurally, in the way humans process faces. In four different delayed matching-to-sample tasks with upright and inverted body parts, we show that humans demonstrate a face, but not a behind, inversion effect and that chimpanzees show a behind, but no clear face, inversion effect. The findings suggest an evolutionary shift in socio-sexual signalling function from behinds to faces, two hairless, symmetrical and attractive body parts, which might have attuned the human brain to process faces, and the human face to become more behind-like.
Face Recognition Is Shaped by the Use of Sign Language
ERIC Educational Resources Information Center
Stoll, Chloé; Palluel-Germain, Richard; Caldara, Roberto; Lao, Junpeng; Dye, Matthew W. G.; Aptel, Florent; Pascalis, Olivier
2018-01-01
Previous research has suggested that early deaf signers differ in face processing. Which aspects of face processing are changed and the role that sign language may have played in that change are however unclear. Here, we compared face categorization (human/non-human) and human face recognition performance in early profoundly deaf signers, hearing…
Niina, Megumi; Okamura, Jun-ya; Wang, Gang
2015-10-01
Scalp event-related potential (ERP) studies have demonstrated larger N170 amplitudes when subjects view faces compared to items from object categories. Extensive attempts have been made to clarify face selectivity and hemispheric dominance for face processing. The purpose of this study was to investigate hemispheric differences in N170s activated by human faces and non-face objects, as well as the extent of overlap of their sources. ERP was recorded from 20 subjects while they viewed human face and non-face images. N170s obtained during the presentation of human faces appeared earlier and with larger amplitude than for other category images. Further source analysis with a two-dipole model revealed that the locations of face and object processing largely overlapped in the left hemisphere. Conversely, the source for face processing in the right hemisphere was located more anteriorly than the source for object processing. The results suggest that the neuronal circuits for face and object processing are largely shared in the left hemisphere, with more distinct circuits in the right hemisphere. Copyright © 2015 Elsevier B.V. All rights reserved.
Differences between perception of human faces and body shapes: evidence from the composite illusion.
Soria Bauser, Denise A; Suchan, Boris; Daum, Irene
2011-01-01
The present study aimed to investigate whether human body forms--like human faces--undergo holistic processing. Evidence for holistic face processing comes from the face composite effect: two identical top halves of a face are perceived as being different if they are presented with different bottom parts. This effect disappears if both bottom halves are shifted laterally (misaligned) or if the stimulus is rotated by 180°. We investigated whether comparable composite effects are observed for human faces and human body forms. Matching of upright faces was more accurate and faster for misaligned compared to aligned presentations. By contrast, there were no processing differences between aligned and misaligned bodies. An inversion effect emerged, with better recognition performance for upright compared to inverted bodies but not faces. The present findings provide evidence for the assumption that holistic processing--investigated with the composite illusion--is not involved in the perception of human body forms. Copyright © 2010 Elsevier Ltd. All rights reserved.
Configural face processing impacts race disparities in humanization and trust
Cassidy, Brittany S.; Krendl, Anne C.; Stanko, Kathleen A.; Rydell, Robert J.; Young, Steven G.; Hugenberg, Kurt
2018-01-01
The dehumanization of Black Americans is an ongoing societal problem. Reducing configural face processing, a well-studied aspect of typical face encoding, decreases the activation of human-related concepts to White faces, suggesting that the extent that faces are configurally processed contributes to dehumanization. Because Black individuals are more dehumanized relative to White individuals, the current work examined how configural processing might contribute to their greater dehumanization. Study 1 showed that inverting faces (which reduces configural processing) reduced the activation of human-related concepts toward Black more than White faces. Studies 2a and 2b showed that reducing configural processing affects dehumanization by decreasing trust and increasing homogeneity among Black versus White faces. Studies 3a–d showed that configural processing effects emerge in racial outgroups for whom untrustworthiness may be a more salient group stereotype (i.e., Black, but not Asian, faces). Study 4 provided evidence that these effects are specific to reduced configural processing versus more general perceptual disfluency. Reduced configural processing may thus contribute to the greater dehumanization of Black relative to White individuals. PMID:29910510
Barber, Anjuli L. A.; Randi, Dania; Müller, Corsin A.; Huber, Ludwig
2016-01-01
Of all non-human animals, dogs are very likely the best decoders of human behavior. In addition to a high sensitivity to human attentive status and to ostensive cues, they are able to distinguish between individual human faces and even between human facial expressions. However, so far little is known about how they process human faces and to what extent this is influenced by experience. Here we present an eye-tracking study with dogs drawn from two different living environments with varying experience with humans: pet and lab dogs. The dogs were shown pictures of familiar and unfamiliar human faces expressing four different emotions. The results, extracted from several different eye-tracking measurements, revealed pronounced differences in the face processing of pet and lab dogs, thus indicating an influence of the amount of exposure to humans. In addition, there was some evidence for the influences of both the familiarity and the emotional expression of the face, and strong evidence for a left gaze bias. These findings, together with recent evidence for the dog's ability to discriminate human facial expressions, indicate that dogs are sensitive to some emotions expressed in human faces. PMID:27074009
Farzmahdi, Amirhossein; Rajaei, Karim; Ghodrati, Masoud; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi
2016-04-26
Converging reports indicate that face images are processed through specialized neural networks in the brain, i.e. face patches in monkeys and the fusiform face area (FFA) in humans. These studies were designed to find out how faces are processed in the visual system compared to other objects. Yet, the underlying mechanism of face processing is not fully understood. Here, we show that a hierarchical computational model, inspired by electrophysiological evidence on face processing in primates, is able to generate representational properties similar to those observed in monkey face patches (posterior, middle and anterior patches). Since the most important goal of sensory neuroscience is linking neural responses with behavioral outputs, we test whether the proposed model, which is designed to account for neural responses in monkey face patches, is also able to predict well-documented behavioral face phenomena observed in humans. We show that the proposed model accounts for several cognitive face effects, such as the composite face effect and the idea of canonical face views. Our model provides insights about the underlying computations that transfer visual information from posterior to anterior face patches.
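The posterior-to-anterior transformation this abstract describes can be illustrated with a toy sketch: view-tuned units (a stand-in for posterior/middle patches) feed a max-pooling stage that yields view-invariant identity units (a stand-in for anterior patches). The Gaussian tuning, template dimensions, and pooling rule below are illustrative assumptions, not the authors' actual model:

```python
import numpy as np

def view_layer(x, templates, sigma=0.5):
    """'Posterior patch' analogue: each unit is tuned to one stored
    (identity, view) template and responds with Gaussian similarity."""
    d2 = ((templates - x) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def identity_layer(view_responses, n_ids, n_views):
    """'Anterior patch' analogue: max-pooling over each identity's
    view-tuned units yields a view-invariant identity code."""
    return view_responses.reshape(n_ids, n_views).max(axis=1)
```

Presenting any stored view of an identity (even with noise) then activates the same identity unit, which is the sense in which the pooled layer is view-invariant.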
Efficient search for a face by chimpanzees (Pan troglodytes).
Tomonaga, Masaki; Imura, Tomoko
2015-07-16
The face is quite an important stimulus category for human and nonhuman primates in their social lives. Recent advances in comparative-cognitive research clearly indicate that chimpanzees and humans process faces in a special manner; that is, using holistic or configural processing. Both species exhibit the face-inversion effect in which the inverted presentation of a face deteriorates their perception and recognition. Furthermore, recent studies have shown that humans detect human faces among non-facial objects rapidly. We report that chimpanzees detected chimpanzee faces among non-facial objects quite efficiently. This efficient search was not limited to own-species faces. They also found human adult and baby faces--but not monkey faces--efficiently. Additional testing showed that a front-view face was more readily detected than a profile, suggesting the important role of eye-to-eye contact. Chimpanzees also detected a photograph of a banana as efficiently as a face, but a further examination clearly indicated that the banana was detected mainly due to a low-level feature (i.e., color). Efficient face detection was hampered by an inverted presentation, suggesting that configural processing of faces is a critical element of efficient face detection in both species. This conclusion was supported by a simple simulation experiment using the saliency model.
Face Pareidolia in the Rhesus Monkey.
Taubert, Jessica; Wardle, Susan G; Flessert, Molly; Leopold, David A; Ungerleider, Leslie G
2017-08-21
Face perception in humans and nonhuman primates is rapid and accurate [1-4]. In the human brain, a network of visual-processing regions is specialized for faces [5-7]. Although face processing is a priority of the primate visual system, face detection is not infallible. Face pareidolia is the compelling illusion of perceiving facial features on inanimate objects, such as the illusory face on the surface of the moon. Although face pareidolia is commonly experienced by humans, its presence in other species is unknown. Here we provide evidence for face pareidolia in a species known to possess a complex face-processing system [8-10]: the rhesus monkey (Macaca mulatta). In a visual preference task [11, 12], monkeys looked longer at photographs of objects that elicited face pareidolia in human observers than at photographs of similar objects that did not elicit illusory faces. Examination of eye movements revealed that monkeys fixated the illusory internal facial features in a pattern consistent with how they view photographs of faces [13]. Although the specialized response to faces observed in humans [1, 3, 5-7, 14] is often argued to be continuous across primates [4, 15], it was previously unclear whether face pareidolia arose from a uniquely human capacity. For example, pareidolia could be a product of the human aptitude for perceptual abstraction or result from frequent exposure to cartoons and illustrations that anthropomorphize inanimate objects. Instead, our results indicate that the perception of illusory facial features on inanimate objects is driven by a broadly tuned face-detection mechanism that we share with other species. Published by Elsevier Ltd.
Dynamic encoding of face information in the human fusiform gyrus.
Ghuman, Avniel Singh; Brunet, Nicolas M; Li, Yuanning; Konecky, Roma O; Pyles, John A; Walls, Shawn A; Destefino, Vincent; Wang, Wei; Richardson, R Mark
2014-12-08
Humans' ability to rapidly and accurately detect, identify and classify faces under variable conditions derives from a network of brain regions highly tuned to face information. The fusiform face area (FFA) is thought to be a computational hub for face processing; however, the temporal dynamics of face information processing in the FFA remain unclear. Here we use multivariate pattern classification to decode the temporal dynamics of expression-invariant face information processing using electrodes placed directly on the FFA in humans. Early FFA activity (50-75 ms) contained information regarding whether participants were viewing a face. Activity between 200 and 500 ms contained expression-invariant information about which of 70 faces participants were viewing, along with individual differences in facial features and their configurations. Long-lasting (500+ ms) broadband gamma frequency activity predicted task performance. These results elucidate the dynamic computational role the FFA plays in multiple face processing stages and indicate what information is used in performing these visual analyses.
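The general technique named in this abstract, multivariate pattern classification applied in successive time windows, can be sketched on simulated multichannel data. The nearest-centroid classifier, the window sizes, and the simulated "signal appears after 50 ms" structure are all illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

def sliding_window_decode(X, y, win=25, step=25, seed=0):
    """Time-resolved MVPA: within each time window, train a
    nearest-centroid classifier on half the trials and test on the
    other half.  X: (trials, channels, timepoints); y: 0/1 labels."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))
    train, test = order[::2], order[1::2]
    accs = []
    for t0 in range(0, X.shape[2] - win + 1, step):
        F = X[:, :, t0:t0 + win].reshape(len(X), -1)  # window -> feature vector
        c0 = F[train][y[train] == 0].mean(axis=0)     # class centroids
        c1 = F[train][y[train] == 1].mean(axis=0)
        pred = (np.linalg.norm(F[test] - c1, axis=1)
                < np.linalg.norm(F[test] - c0, axis=1)).astype(int)
        accs.append(float((pred == y[test]).mean()))
    return np.array(accs)
```

The resulting accuracy timecourse is the kind of evidence used to argue when a given piece of information (face vs. non-face, identity) becomes available in a region.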
Visual Search Efficiency is Greater for Human Faces Compared to Animal Faces
Simpson, Elizabeth A.; Mertins, Haley L.; Yee, Krysten; Fullerton, Alison; Jakobsen, Krisztina V.
2015-01-01
The Animate Monitoring Hypothesis proposes that humans and animals were the most important categories of visual stimuli for ancestral humans to monitor, as they presented important challenges and opportunities for survival and reproduction; however, it remains unknown whether animal faces are located as efficiently as human faces. We tested this hypothesis by examining whether human, primate, and mammal faces elicit similarly efficient searches, or whether human faces are privileged. In the first three experiments, participants located a target (human, primate, or mammal face) among distractors (non-face objects). We found fixations on human faces were faster and more accurate than primate faces, even when controlling for search category specificity. A final experiment revealed that, even when task-irrelevant, human faces slowed searches for non-faces, suggesting some bottom-up processing may be responsible for the human face search efficiency advantage. PMID:24962122
Orienting asymmetries and physiological reactivity in dogs' response to human emotional faces.
Siniscalchi, Marcello; d'Ingeo, Serenella; Quaranta, Angelo
2018-06-19
Recent scientific literature shows that emotional cues conveyed by human vocalizations and odours are processed in an asymmetrical way by the canine brain. In the present study, during feeding behaviour, dogs were suddenly presented with 2-D stimuli depicting human faces expressing Ekman's six basic emotions (anger, fear, happiness, sadness, surprise, and disgust) plus a neutral expression, simultaneously in the left and right visual hemifields. A bias to turn the head towards the left (right hemisphere) rather than the right side was observed with human faces expressing anger, fear, and happiness, but an opposite bias (left hemisphere) was observed with human faces expressing surprise. Furthermore, dogs displayed higher behavioural and cardiac activity to pictures of human faces expressing a clearly arousing emotional state. Overall, the results demonstrate that dogs are sensitive to emotional cues conveyed by human faces, supporting the existence of an asymmetrical emotional modulation of the canine brain in processing basic human emotions.
Looser, Christine E; Guntupalli, Jyothi S; Wheatley, Thalia
2013-10-01
More than a decade of research has demonstrated that faces evoke prioritized processing in a 'core face network' of three brain regions. However, whether these regions prioritize the detection of global facial form (shared by humans and mannequins) or the detection of life in a face has remained unclear. Here, we dissociate form-based and animacy-based encoding of faces by using animate and inanimate faces with human form (humans, mannequins) and dog form (real dogs, toy dogs). We used multivariate pattern analysis of BOLD responses to uncover the representational similarity space for each area in the core face network. Here, we show that only responses in the inferior occipital gyrus are organized by global facial form alone (human vs dog) while animacy becomes an additional organizational priority in later face-processing regions: the lateral fusiform gyri (latFG) and right superior temporal sulcus. Additionally, patterns evoked by human faces were maximally distinct from all other face categories in the latFG and parts of the extended face perception system. These results suggest that once a face configuration is perceived, faces are further scrutinized for whether the face is alive and worthy of social cognitive resources.
Neural network face recognition using wavelets
NASA Astrophysics Data System (ADS)
Karunaratne, Passant V.; Jouny, Ismail I.
1997-04-01
The recognition of human faces is a phenomenon that has been mastered by the human visual system and that has been researched extensively in the domain of computer neural networks and image processing. This research is involved in the study of neural networks and wavelet image processing techniques in the application of human face recognition. The objective of the system is to acquire a digitized still image of a human face, carry out pre-processing on the image as required, and then, given a prior database of images of possible individuals, be able to recognize the individual in the image. The pre-processing segment of the system includes several procedures, namely image compression, denoising, and feature extraction. The image processing is carried out using Daubechies wavelets. Once the images have been passed through the wavelet-based image processor they can be efficiently analyzed by means of a neural network. A back-propagation neural network is used for the recognition segment of the system. The main constraint of the system concerns the characteristics of the images being processed. The system should be able to carry out effective recognition of the human faces irrespective of the individual's facial expression, presence of extraneous objects such as head-gear or spectacles, and face/head orientation. A potential application of this face recognition system would be as a secondary verification method in an automated teller machine.
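The pipeline this abstract outlines (wavelet-based compression and feature extraction followed by a back-propagation network) can be sketched as follows. A one-level Haar average is used here as a simplified stand-in for the Daubechies wavelets the paper names, and the network sizes and training settings are illustrative, not the paper's configuration:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar approximation band: average 2x2 blocks.
    A simplified stand-in for the Daubechies decomposition, used
    here for compression and feature extraction."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    return (a[:, 0::2] + a[:, 1::2]) / 2.0

def features(img, levels=2):
    """Apply several wavelet levels to shrink an image into a feature vector."""
    for _ in range(levels):
        img = haar_dwt2(img)
    return img.ravel()

class BackpropNet:
    """Minimal one-hidden-layer softmax network trained by plain
    back-propagation (batch gradient descent)."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))

    def forward(self, X):
        self.h = np.tanh(X @ self.W1)
        z = self.h @ self.W2
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    def train(self, X, y, lr=1.0, epochs=500):
        Y = np.eye(self.W2.shape[1])[y]          # one-hot targets
        for _ in range(epochs):
            P = self.forward(X)
            dZ = (P - Y) / len(X)                # softmax cross-entropy gradient
            dW2 = self.h.T @ dZ
            dH = (dZ @ self.W2.T) * (1.0 - self.h ** 2)
            self.W1 -= lr * (X.T @ dH)
            self.W2 -= lr * dW2

    def predict(self, X):
        return self.forward(X).argmax(axis=1)
```

On a real face database one would additionally denoise, normalize the feature vectors, and hold out images of each individual for testing rather than scoring the training set.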
A robust human face detection algorithm
NASA Astrophysics Data System (ADS)
Raviteja, Thaluru; Karanam, Srikrishna; Yeduguru, Dinesh Reddy V.
2012-01-01
Human face detection plays a vital role in many applications such as video surveillance, managing a face image database, and human-computer interfaces. This paper proposes a robust algorithm for face detection in still color images that works well even in a crowded environment. The algorithm uses a conjunction of a skin color histogram, morphological processing and geometrical analysis for detecting human faces. To reinforce the accuracy of face detection, we further identify mouth and eye regions to establish the presence or absence of a face in a particular region of interest.
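The cascade the abstract describes (skin-color classification, morphological cleanup, then geometric filtering of candidate regions) can be sketched minimally. The fixed RGB thresholds and aspect-ratio range below are common illustrative values, not the histogram model used in the paper, and the final eye/mouth verification step is omitted:

```python
import numpy as np

def skin_mask(rgb):
    """Crude per-pixel skin classifier with fixed RGB thresholds
    (illustrative; the paper builds a skin color histogram instead)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & (np.abs(r - g) > 15)

def dilate(mask):
    """Binary dilation with a 4-neighbour cross structuring element."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def erode(mask):
    return ~dilate(~mask)

def components(mask):
    """Label 4-connected blobs with a simple flood fill."""
    lbl = np.zeros(mask.shape, int)
    n = 0
    for i, j in zip(*np.nonzero(mask)):
        if lbl[i, j]:
            continue
        n += 1
        stack = [(i, j)]
        while stack:
            y, x = stack.pop()
            if (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]
                    and mask[y, x] and not lbl[y, x]):
                lbl[y, x] = n
                stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return lbl, n

def detect_faces(rgb, min_area=20):
    m = dilate(erode(skin_mask(rgb)))     # morphological opening removes speckle
    lbl, n = components(m)
    boxes = []
    for k in range(1, n + 1):
        ys, xs = np.nonzero(lbl == k)
        if ys.size < min_area:
            continue
        h, w = ys.ptp() + 1, xs.ptp() + 1
        if 0.7 <= h / w <= 2.0:           # geometric check: roughly face-shaped
            boxes.append((int(ys.min()), int(xs.min()),
                          int(ys.max()), int(xs.max())))
    return boxes
```

Each surviving bounding box would then be passed to the eye- and mouth-region check that the paper uses to confirm a face.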
Can human eyes prevent perceptual narrowing for monkey faces in human infants?
Damon, Fabrice; Bayet, Laurie; Quinn, Paul C; Hillairet de Boisferon, Anne; Méary, David; Dupierrix, Eve; Lee, Kang; Pascalis, Olivier
2015-07-01
Perceptual narrowing has been observed in human infants for monkey faces: 6-month-olds can discriminate between them, whereas older infants from 9 months of age display difficulty discriminating between them. The basis of the difficulty that infants from 9 months of age have in processing monkey faces has not been clearly identified. It could be due to the structural characteristics of monkey faces, particularly the key facial features that differ from human faces. The current study aimed to investigate whether the information conveyed by the eyes is of importance. We examined whether the presence of Caucasian human eyes in monkey faces allows recognition to be maintained in 6-month-olds and facilitates recognition in 9- and 12-month-olds. Our results revealed that the presence of human eyes in monkey faces maintains recognition for those faces at 6 months of age and partially facilitates recognition of those faces at 9 months of age, but not at 12 months of age. The findings are interpreted in the context of perceptual narrowing and suggest that the attenuation of processing of other-species faces is not reversed by the presence of human eyes. © 2015 Wiley Periodicals, Inc.
Preference for facial averageness: Evidence for a common mechanism in human and macaque infants
Damon, Fabrice; Méary, David; Quinn, Paul C.; Lee, Kang; Simpson, Elizabeth A.; Paukner, Annika; Suomi, Stephen J.; Pascalis, Olivier
2017-01-01
Human adults and infants show a preference for average faces, which could stem from a general processing mechanism and may be shared among primates. However, little is known about preference for facial averageness in monkeys. We used a comparative developmental approach and eye-tracking methodology to assess visual attention in human and macaque infants to faces naturally varying in their distance from a prototypical face. In Experiment 1, we examined the preference for faces relatively close to or far from the prototype in 12-month-old human infants with human adult female faces. Infants preferred faces closer to the average than faces farther from it. In Experiment 2, we measured the looking time of 3-month-old rhesus macaques (Macaca mulatta) viewing macaque faces varying in their distance from the prototype. Like human infants, macaque infants looked longer to faces closer to the average. In Experiments 3 and 4, both species were presented with unfamiliar categories of faces (i.e., macaque infants tested with adult macaque faces; human infants and adults tested with infant macaque faces) and showed no prototype preferences, suggesting that the prototypicality effect is experience-dependent. Overall, the findings suggest a common processing mechanism across species, leading to averageness preferences in primates. PMID:28406237
Shannon, Robert W; Patrick, Christopher J; Venables, Noah C; He, Sheng
2013-12-01
The ability to recognize a variety of different human faces is undoubtedly one of the most important and impressive functions of the human perceptual system. Neuroimaging studies have revealed multiple brain regions (including the FFA, STS, OFA) and electrophysiological studies have identified differing brain event-related potential (ERP) components (e.g., N170, P200) possibly related to distinct types of face information processing. To evaluate the heritability of ERP components associated with face processing, including N170, P200, and LPP, we examined ERP responses to fearful and neutral face stimuli in monozygotic (MZ) and dizygotic (DZ) twins. Concordance levels for early brain response indices of face processing (N170, P200) were found to be stronger for MZ than DZ twins, providing evidence of a heritable basis to each. These findings support the idea that certain key neural mechanisms for face processing are genetically coded. Implications for understanding individual differences in recognition of facial identity and the emotional content of faces are discussed. Copyright © 2013 Elsevier Inc. All rights reserved.
Face Recognition in Humans and Machines
NASA Astrophysics Data System (ADS)
O'Toole, Alice; Tistarelli, Massimo
The study of human face recognition by psychologists and neuroscientists has run parallel to the development of automatic face recognition technologies by computer scientists and engineers. In both cases, there are analogous steps of data acquisition, image processing, and the formation of representations that can support the complex and diverse tasks we accomplish with faces. These processes can be understood and compared in the context of their neural and computational implementations. In this chapter, we present the essential elements of face recognition by humans and machines, taking a perspective that spans psychological, neural, and computational approaches. From the human side, we overview the methods and techniques used in the neurobiology of face recognition, the underlying neural architecture of the system, the role of visual attention, and the nature of the representations that emerge. From the computational side, we discuss face recognition technologies and the strategies they use to overcome challenges to robust operation over variable viewing parameters. Finally, we conclude the chapter with a look at some recent studies that compare human and machine performance at face recognition.
Face Recognition and Processing in a Mini Brain
2007-09-28
flying honeybees (Apis mellifera) as a model to understand how a non-mammalian brain learns to recognise human faces. Individual bees were trained... understand how a non-mammalian brain processes human faces is the honeybee (J Exp Biol 2005 v208p4709). Individual free-flying honeybees (Apis mellifera) were provided with differential conditioning to achromatic target and distractor face images. Bee acquisition reached >70% correct choices
Sex differences in social cognition: The case of face processing.
Proverbio, Alice Mado
2017-01-02
Several studies have demonstrated that women show a greater interest for social information and empathic attitude than men. This article reviews studies on sex differences in the brain, with particular reference to how males and females process faces and facial expressions, social interactions, pain of others, infant faces, faces in things (pareidolia phenomenon), opposite-sex faces, humans vs. landscapes, incongruent behavior, motor actions, biological motion, erotic pictures, and emotional information. Sex differences in oxytocin-based attachment response and emotional memory are also mentioned. In addition, we investigated how 400 different human faces were evaluated for arousal and valence dimensions by a group of healthy male and female University students. Stimuli were carefully balanced for sensory and perceptual characteristics, age, facial expression, and sex. As a whole, women judged all human faces as more positive and more arousing than men. Furthermore, they showed a preference for the faces of children and the elderly in the arousal evaluation. Regardless of face aesthetics, age, or facial expression, women rated human faces higher than men. The preference for opposite- vs. same-sex faces strongly interacted with facial age. Overall, both women and men exhibited differences in facial processing that could be interpreted in the light of evolutionary psychobiology. © 2016 Wiley Periodicals, Inc.
Racca, Anaïs; Guo, Kun; Meints, Kerstin; Mills, Daniel S.
2012-01-01
Sensitivity to the emotions of others provides clear biological advantages. However, in the case of heterospecific relationships, such as that existing between dogs and humans, there are additional challenges since some elements of the expression of emotions are species-specific. Given that faces provide important visual cues for communicating emotional state in both humans and dogs, and that processing of emotions is subject to brain lateralisation, we investigated lateral gaze bias in adult dogs when presented with pictures of expressive human and dog faces. Our analysis revealed clear differences in laterality of eye movements in dogs towards conspecific faces according to the emotional valence of the expressions. Differences were also found towards human faces, but to a lesser extent. For comparative purpose, a similar experiment was also run with 4-year-old children and it was observed that they showed differential processing of facial expressions compared to dogs, suggesting a species-dependent engagement of the right or left hemisphere in processing emotions. PMID:22558335
Face recognition increases during saccade preparation.
Lin, Hai; Rizak, Joshua D; Ma, Yuan-ye; Yang, Shang-chuan; Chen, Lin; Hu, Xin-tian
2014-01-01
Face perception is integral to the human perceptual system, as it underlies social interactions. Saccadic eye movements are frequently made to bring interesting visual information, such as faces, onto the fovea for detailed processing. Just before eye movement onset, the processing of some basic features of an object, such as orientation, improves at the saccade landing point. Interestingly, there is also evidence that indicates faces are processed in early visual processing stages similar to basic features. However, it is not known whether this early enhancement of processing includes face recognition. In this study, three experiments were performed to map the timing of face presentation to the beginning of the eye movement in order to evaluate pre-saccadic face recognition. Faces were found to be processed similarly to simple objects immediately prior to saccadic movements. Starting ∼120 ms before a saccade to a target face, independent of whether or not the face was surrounded by other faces, face recognition gradually improved and the critical spacing of the crowding decreased as saccade onset approached. These results suggest that an upcoming saccade prepares the visual system for new information about faces at the saccade landing site and may reduce the background in a crowd to target the intended face. This indicates an important role of pre-saccadic eye movement signals in human face recognition.
Sensitivity to First-Order Relations of Facial Elements in Infant Rhesus Macaques
ERIC Educational Resources Information Center
Paukner, Annika; Bower, Seth; Simpson, Elizabeth A.; Suomi, Stephen J.
2013-01-01
Faces are visually attractive to both human and nonhuman primates. Human neonates are thought to have a broad template for faces at birth and prefer face-like to non-face-like stimuli. To better compare developmental trajectories of face processing phylogenetically, here, we investigated preferences for face-like stimuli in infant rhesus macaques…
Dynamic Encoding of Face Information in the Human Fusiform Gyrus
Ghuman, Avniel Singh; Brunet, Nicolas M.; Li, Yuanning; Konecky, Roma O.; Pyles, John A.; Walls, Shawn A.; Destefino, Vincent; Wang, Wei; Richardson, R. Mark
2014-01-01
Humans’ ability to rapidly and accurately detect, identify, and classify faces under variable conditions derives from a network of brain regions highly tuned to face information. The fusiform face area (FFA) is thought to be a computational hub for face processing; however, the temporal dynamics of face information processing in the FFA remain unclear. Here we use multivariate pattern classification to decode the temporal dynamics of expression-invariant face information processing using electrodes placed directly upon FFA in humans. Early FFA activity (50-75 ms) contained information regarding whether participants were viewing a face. Activity between 200-500 ms contained expression-invariant information about which of 70 faces participants were viewing, along with the individual differences in facial features and their configurations. Long-lasting (500+ ms) broadband gamma frequency activity predicted task performance. These results elucidate the dynamic computational role FFA plays in multiple face processing stages and indicate what information is used in performing these visual analyses. PMID:25482825
From face processing to face recognition: Comparing three different processing levels.
Besson, G; Barragan-Jason, G; Thorpe, S J; Fabre-Thorpe, M; Puma, S; Ceccaldi, M; Barbeau, E J
2017-01-01
Verifying that a face is from a target person (e.g. finding someone in the crowd) is a critical ability of the human face processing system. Yet how fast this can be performed is unknown. The 'entry-level shift due to expertise' hypothesis suggests that - since humans are face experts - processing faces should be as fast - or even faster - at the individual than at superordinate levels. In contrast, the 'superordinate advantage' hypothesis suggests that faces are processed from coarse to fine, so that the opposite pattern should be observed. To clarify this debate, three different face processing levels were compared: (1) a superordinate face categorization level (i.e. detecting human faces among animal faces), (2) a face familiarity level (i.e. recognizing famous faces among unfamiliar ones) and (3) verifying that a face is from a target person, our condition of interest. The minimal speed at which faces can be categorized (∼260ms) or recognized as familiar (∼360ms) has largely been documented in previous studies, and thus provides boundaries to compare our condition of interest to. Twenty-seven participants were included. The recent Speed and Accuracy Boosting procedure paradigm (SAB) was used since it constrains participants to use their fastest strategy. Stimuli were presented either upright or inverted. Results revealed that verifying that a face is from a target person (minimal RT at ∼260ms) was remarkably fast but longer than the face categorization level (∼240ms) and was more sensitive to face inversion. In contrast, it was much faster than recognizing a face as familiar (∼380ms), a level severely affected by face inversion. Face recognition corresponding to finding a specific person in a crowd thus appears achievable in only a quarter of a second. 
In favor of the 'superordinate advantage' hypothesis, or coarse-to-fine account of the face visual hierarchy, these results suggest a graded engagement of the face processing system across processing levels, as reflected by the face inversion effects. Furthermore, they underline how verifying that a face is from a target person and detecting a face as familiar - both often referred to as "Face Recognition" - in fact differ. Copyright © 2016 Elsevier B.V. All rights reserved.
Sensitive periods for the functional specialization of the neural system for human face processing.
Röder, Brigitte; Ley, Pia; Shenoy, Bhamy H; Kekunnaya, Ramesh; Bottari, Davide
2013-10-15
The aim of the study was to identify possible sensitive phases in the development of the processing system for human faces. We tested the neural processing of faces in 11 humans who had been blind from birth and had undergone cataract surgery between 2 mo and 14 y of age. Pictures of faces and houses, scrambled versions of these pictures, and pictures of butterflies were presented while event-related potentials were recorded. Participants had to respond to the pictures of butterflies (targets) only. All participants, even those who had been blind from birth for several years, were able to categorize the pictures and to detect the targets. In healthy controls and in a group of visually impaired individuals with a history of developmental or incomplete congenital cataracts, the well-known enhancement of the N170 (negative peak around 170 ms) event-related potential to faces emerged, but a face-sensitive response was not observed in humans with a history of congenital dense cataracts. By contrast, this group showed a similar N170 response to all visual stimuli, which was indistinguishable from the N170 response to faces in the controls. The face-sensitive N170 response has been associated with the structural encoding of faces. Therefore, these data provide evidence for the hypothesis that the functional differentiation of category-specific neural representations in humans, presumably involving the elaboration of inhibitory circuits, is dependent on experience and linked to a sensitive period. Such functional specialization of neural systems seems necessary to achieve high processing proficiency.
The organization of conspecific face space in nonhuman primates
Parr, Lisa A.; Taubert, Jessica; Little, Anthony C.; Hancock, Peter J. B.
2013-01-01
Humans and chimpanzees demonstrate numerous cognitive specializations for processing faces, but comparative studies with monkeys suggest that these may be the result of recent evolutionary adaptations. The present study utilized the novel approach of face space, a powerful theoretical framework used to understand the representation of face identity in humans, to further explore species differences in face processing. According to the theory, faces are represented by vectors in a multidimensional space, the centre of which is defined by an average face. Each dimension codes features important for describing a face’s identity, and vector length codes the feature’s distinctiveness. Chimpanzees and rhesus monkeys discriminated male and female conspecifics’ faces, rated by humans for their distinctiveness, using a computerized task. Multidimensional scaling analyses showed that the organization of face space was similar between humans and chimpanzees. Distinctive faces had the longest vectors and were the easiest for chimpanzees to discriminate. In contrast, distinctiveness did not correlate with the performance of rhesus monkeys. The feature dimensions for each species’ face space were visualized and described using morphing techniques. These results confirm species differences in the perceptual representation of conspecific faces, which are discussed within an evolutionary framework. PMID:22670823
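The multidimensional scaling step behind a face-space analysis can be illustrated with a minimal numpy sketch. This is not the authors' analysis pipeline: the "faces" are synthetic feature vectors and all parameters are illustrative. Classical MDS double-centers the squared dissimilarity matrix and takes the top eigenvectors as coordinates; the vector length from the centre (the average face) then serves as a distinctiveness score.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data: 6 "faces" as 4-dimensional feature vectors; face 5 is
# made deliberately distinctive (far from the average face).
feats = rng.normal(0, 1, (6, 4))
feats[5] += 4.0

# Pairwise dissimilarities, then classical MDS via double centering.
D = np.linalg.norm(feats[:, None] - feats[None, :], axis=2)
n = len(D)
J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
B = -0.5 * J @ (D ** 2) @ J
evals, evecs = np.linalg.eigh(B)             # eigenvalues in ascending order
# Keep the two largest eigenvalues -> 2-D "face space" coordinates.
coords = evecs[:, -2:] * np.sqrt(np.maximum(evals[-2:], 0))

# Vector length = distance from the centre (average face) codes distinctiveness.
lengths = np.linalg.norm(coords, axis=1)
print(int(np.argmax(lengths)))
```

In this toy setup the deliberately shifted face ends up with the longest vector, mirroring the paper's finding that distinctive faces lie farthest from the centre of face space.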
Face Processing: Models For Recognition
NASA Astrophysics Data System (ADS)
Turk, Matthew A.; Pentland, Alexander P.
1990-03-01
The human ability to process faces is remarkable. We can identify perhaps thousands of faces learned throughout our lifetime and read facial expression to understand such subtle qualities as emotion. These skills are quite robust, despite sometimes large changes in the visual stimulus due to expression, aging, and distractions such as glasses or changes in hairstyle or facial hair. Computers which model and recognize faces will be useful in a variety of applications, including criminal identification, human-computer interface, and animation. We discuss models for representing faces and their applicability to the task of recognition, and present techniques for identifying faces and detecting eye blinks.
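The recognition model described above (Turk & Pentland's "eigenfaces") represents each face by its projection onto the principal components of a training set. The following is a minimal numpy sketch of that idea using random vectors as stand-ins for face images; the data, dimensions, and helper name `identify` are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 20 "faces", each a flattened 8x8 grayscale image.
faces = rng.random((20, 64))

# Eigenface-style recognition: PCA on mean-centered training faces.
mean_face = faces.mean(axis=0)
centered = faces - mean_face
# SVD of the centered data; rows of Vt are the principal components
# ("eigenfaces") of the training set.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 10                          # keep the top-k eigenfaces
eigenfaces = Vt[:k]

# Project every known face into the k-dimensional "face space".
weights = centered @ eigenfaces.T

def identify(probe):
    """Return the index of the stored face nearest to `probe` in face space."""
    w = (probe - mean_face) @ eigenfaces.T
    return int(np.argmin(np.linalg.norm(weights - w, axis=1)))

# A slightly noisy version of a known face should still match that face.
noisy = faces[7] + rng.normal(0, 0.01, 64)
print(identify(noisy))
```

Matching in the low-dimensional projection rather than pixel space is what makes the approach tolerant to the "large changes in the visual stimulus" the abstract mentions, at least for moderate perturbations.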
Discrimination of human and dog faces and inversion responses in domestic dogs (Canis familiaris).
Racca, Anaïs; Amadei, Eleonora; Ligout, Séverine; Guo, Kun; Meints, Kerstin; Mills, Daniel
2010-05-01
Although domestic dogs can respond to many facial cues displayed by other dogs and humans, it remains unclear whether they can differentiate individual dogs or humans based on facial cues alone and, if so, whether they would demonstrate the face inversion effect, a behavioural hallmark commonly used in primates to differentiate face processing from object processing. In this study, we first established the applicability of the visual paired comparison (VPC or preferential looking) procedure for dogs using a simple object discrimination task with 2D pictures. The animals demonstrated a clear looking preference for novel objects when simultaneously presented with prior-exposed familiar objects. We then adopted this VPC procedure to assess their face discrimination and inversion responses. Dogs showed a deviation from random behaviour, indicating discrimination capability when inspecting upright dog faces, human faces and object images; but the pattern of viewing preference was dependent upon image category. They directed longer viewing time at novel (vs. familiar) human faces and objects, but not at dog faces; instead, a longer viewing time at familiar (vs. novel) dog faces was observed. No significant looking preference was detected for inverted images regardless of image category. Our results indicate that domestic dogs can use facial cues alone to differentiate individual dogs and humans and that they exhibit a non-specific inversion response. In addition, the discrimination response by dogs of human and dog faces appears to differ with the type of face involved.
Zachariou, Valentinos; Nikas, Christine V; Safiullah, Zaid N; Gotts, Stephen J; Ungerleider, Leslie G
2017-08-01
Human face recognition is often attributed to configural processing; namely, processing the spatial relationships among the features of a face. If configural processing depends on fine-grained spatial information, do visuospatial mechanisms within the dorsal visual pathway contribute to this process? We explored this question in human adults using functional magnetic resonance imaging and transcranial magnetic stimulation (TMS) in a same-different face detection task. Within localized, spatial-processing regions of the posterior parietal cortex, configural face differences led to significantly stronger activation compared to featural face differences, and the magnitude of this activation correlated with behavioral performance. In addition, detection of configural relative to featural face differences led to significantly stronger functional connectivity between the right FFA and the spatial processing regions of the dorsal stream, whereas detection of featural relative to configural face differences led to stronger functional connectivity between the right FFA and left FFA. Critically, TMS centered on these parietal regions impaired performance on configural but not featural face difference detections. We conclude that spatial mechanisms within the dorsal visual pathway contribute to the configural processing of facial features and, more broadly, that the dorsal stream may contribute to the veridical perception of faces. Published by Oxford University Press 2016.
Frässle, Stefan; Paulus, Frieder Michel; Krach, Sören; Schweinberger, Stefan Robert; Stephan, Klaas Enno; Jansen, Andreas
2016-01-01
Perceiving human faces constitutes a fundamental ability of the human mind, integrating a wealth of information essential for social interactions in everyday life. Neuroimaging studies have unveiled a distributed neural network consisting of multiple brain regions in both hemispheres. Whereas the individual regions in the face perception network and the right-hemispheric dominance for face processing have been subject to intensive research, the functional integration among these regions and hemispheres has received considerably less attention. Using dynamic causal modeling (DCM) for fMRI, we analyzed the effective connectivity between the core regions in the face perception network of healthy humans to unveil the mechanisms underlying both intra- and interhemispheric integration. Our results suggest that the right-hemispheric lateralization of the network is due to an asymmetric face-specific interhemispheric recruitment at an early processing stage - that is, at the level of the occipital face area (OFA) but not the fusiform face area (FFA). As a structural correlate, we found that OFA gray matter volume was correlated with this asymmetric interhemispheric recruitment. Furthermore, exploratory analyses revealed that interhemispheric connection asymmetries were correlated with the strength of pupil constriction in response to faces, a measure with potential sensitivity to holistic (as opposed to feature-based) processing of faces. Overall, our findings thus provide a mechanistic description for lateralized processes in the core face perception network, point to a decisive role of interhemispheric integration at an early stage of face processing among bilateral OFA, and tentatively indicate a relation to individual variability in processing strategies for faces. These findings provide a promising avenue for systematic investigations of the potential role of interhemispheric integration in future studies. Copyright © 2015 Elsevier Inc. All rights reserved.
Holistic Processing of Static and Moving Faces
ERIC Educational Resources Information Center
Zhao, Mintao; Bülthoff, Isabelle
2017-01-01
Humans' face ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces that move most of the time. However, how facial movements affect 1 core aspect of face ability--holistic face processing--remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based…
Modeling Human Dynamics of Face-to-Face Interaction Networks
NASA Astrophysics Data System (ADS)
Starnini, Michele; Baronchelli, Andrea; Pastor-Satorras, Romualdo
2013-04-01
Face-to-face interaction networks describe social interactions in human gatherings, and are the substrate for processes such as epidemic spreading and gossip propagation. The bursty nature of human behavior characterizes many aspects of empirical data, such as the distribution of conversation lengths, of conversations per person, or of interconversation times. Despite several recent attempts, a general theoretical understanding of the global picture emerging from data is still lacking. Here we present a simple model that reproduces quantitatively most of the relevant features of empirical face-to-face interaction networks. The model describes agents that perform a random walk in a two-dimensional space and are characterized by an attractiveness whose effect is to slow down the motion of people around them. The proposed framework sheds light on the dynamics of human interactions and can improve the modeling of dynamical processes taking place on the ensuing dynamical social networks.
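The model's core mechanism — random-walking agents whose motion is slowed by attractive neighbours — can be sketched in a few lines of numpy. This is a simplified illustration, not the authors' implementation: all parameters (agent count, box size, interaction radius, speed) are invented, and the model's active/inactive agent states are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters: N agents in an L x L box, interacting within `radius`.
N, L, radius, steps, v = 50, 10.0, 1.0, 200, 0.5

pos = rng.random((N, 2)) * L
attract = rng.random(N)          # attractiveness a_i in [0, 1)
contacts = np.zeros((N, N), dtype=int)

for _ in range(steps):
    # Pairwise distances (periodic boundaries omitted for simplicity).
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
    near = (d < radius) & ~np.eye(N, dtype=bool)
    contacts += near             # record face-to-face contacts
    # An agent moves with probability 1 - max attractiveness of its neighbours,
    # so attractive company slows people down (the core mechanism of the model).
    max_a = np.where(near, attract[None, :], 0.0).max(axis=1)
    moving = rng.random(N) < (1.0 - max_a)
    angle = rng.random(N) * 2 * np.pi
    step_vec = v * np.c_[np.cos(angle), np.sin(angle)]
    pos = np.clip(pos + np.where(moving[:, None], step_vec, 0.0), 0, L)

print(contacts.sum())
```

Aggregating the `contacts` matrix over time yields the dynamical contact network whose conversation-length and inter-contact-time distributions the paper compares against empirical data.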
Neurons responsive to face-view in the Primate Ventrolateral Prefrontal Cortex
Romanski, Lizabeth M.; Diehl, Maria M.
2011-01-01
Studies have indicated that temporal and prefrontal brain regions process face and vocal information. Face-selective and vocalization-responsive neurons have been demonstrated in the ventrolateral prefrontal cortex (VLPFC) and some prefrontal cells preferentially respond to combinations of face and corresponding vocalizations. These studies suggest VLPFC in non-human primates may play a role in communication that is similar to the role of inferior frontal regions in human language processing. If VLPFC is involved in communication, information about a speaker's face including identity, face-view, gaze and emotional expression might be encoded by prefrontal neurons. In the following study, we examined the effect of face-view in ventrolateral prefrontal neurons by testing cells with auditory and visual stimuli, including a set of human and monkey faces rotated through 0°, 30°, 60°, 90°, and −30°. Prefrontal neurons responded selectively to either the identity of the face presented (human or monkey), to the specific view of the face/head, or to both identity and face-view. Neurons that were affected by the identity of the face most often showed an increase in firing in the second part of the stimulus period. Neurons that were selective for face-view typically preferred forward face-view stimuli (0° and 30° rotation). Neurons selective for the forward face-view were also auditory responsive, whereas neurons that responded to other views, or were unselective, were not. Our analysis showed that the human forward face (0°) was decoded better and also contained the most information relative to other face-views. Our findings confirm a role for VLPFC in the processing and integration of face and vocalization information and add to the growing body of evidence that the primate ventrolateral prefrontal cortex plays a prominent role in social communication and is an important model in understanding the cellular mechanisms of communication.
PMID:21605632
Human face processing is tuned to sexual age preferences
Ponseti, J.; Granert, O.; van Eimeren, T.; Jansen, O.; Wolff, S.; Beier, K.; Deuschl, G.; Bosinski, H.; Siebner, H.
2014-01-01
Human faces can motivate nurturing behaviour or sexual behaviour when adults see a child or an adult face, respectively. This suggests that face processing is tuned to detecting age cues of sexual maturity to stimulate the appropriate reproductive behaviour: either caretaking or mating. In paedophilia, sexual attraction is directed to sexually immature children. Therefore, we hypothesized that brain networks that normally are tuned to mature faces of the preferred gender show an abnormal tuning to sexual immature faces in paedophilia. Here, we use functional magnetic resonance imaging (fMRI) to test directly for the existence of a network which is tuned to face cues of sexual maturity. During fMRI, participants sexually attracted to either adults or children were exposed to various face images. In individuals attracted to adults, adult faces activated several brain regions significantly more than child faces. These brain regions comprised areas known to be implicated in face processing, and sexual processing, including occipital areas, the ventrolateral prefrontal cortex and, subcortically, the putamen and nucleus caudatus. The same regions were activated in paedophiles, but with a reversed preferential response pattern. PMID:24850896
The evolution of face processing in primates
Parr, Lisa A.
2011-01-01
The ability to recognize faces is an important socio-cognitive skill that is associated with a number of cognitive specializations in humans. While numerous studies have examined the presence of these specializations in non-human primates, species for which face recognition would confer distinct advantages in social situations, results have been mixed. The majority of studies in chimpanzees support homologous face-processing mechanisms with humans, but results from monkey studies appear largely dependent on the type of testing methods used. Studies that employ passive viewing paradigms, like the visual paired comparison task, report evidence of similarities between monkeys and humans, but tasks that use more stringent, operant response tasks, like the matching-to-sample task, often report species differences. Moreover, the data suggest that monkeys may be less sensitive than chimpanzees and humans to the precise spacing of facial features, in addition to the surface-based cues reflected in those features, information that is critical for the representation of individual identity. The aim of this paper is to provide a comprehensive review of the available data from face-processing tasks in non-human primates with the goal of understanding the evolution of this complex cognitive skill. PMID:21536559
Vanderwert, Ross E; Westerlund, Alissa; Montoya, Lina; McCormick, Sarah A; Miguel, Helga O; Nelson, Charles A
2015-10-01
Previous studies in infants have shown that face-sensitive components of the ongoing electroencephalogram (the event-related potential, or ERP) are larger in amplitude to negative emotions (e.g., fear, anger) versus positive emotions (e.g., happy). However, it is still unclear whether the negative emotions linked with the face or the negative emotions alone contribute to these amplitude differences. We simultaneously recorded infant looking behaviors (via eye-tracking) and face-sensitive ERPs while 7-month-old infants viewed human faces or animals displaying happy, fear, or angry expressions. We observed that the amplitude of the N290 was greater (i.e., more negative) to angry animals compared to happy or fearful animals; no such differences were obtained for human faces. Eye-tracking data highlighted the importance of the eye region in processing emotional human faces. Infants that spent more time looking to the eye region of human faces showing fearful or angry expressions had greater N290 or P400 amplitudes, respectively. © 2014 Wiley Periodicals, Inc.
Lateralization for dynamic facial expressions in human superior temporal sulcus.
De Winter, François-Laurent; Zhu, Qi; Van den Stock, Jan; Nelissen, Koen; Peeters, Ronald; de Gelder, Beatrice; Vanduffel, Wim; Vandenbulcke, Mathieu
2015-02-01
Most face processing studies in humans show stronger activation in the right compared to the left hemisphere. Evidence is largely based on studies with static stimuli focusing on the fusiform face area (FFA). Hence, the pattern of lateralization for dynamic faces is less clear. Furthermore, it is unclear whether this property is common to human and non-human primates due to predisposing processing strategies in the right hemisphere, or whether, alternatively, left-sided specialization for language in humans could be the driving force behind this phenomenon. We aimed to address both issues by studying lateralization for dynamic facial expressions in monkeys and humans. Therefore, we conducted an event-related fMRI experiment in three macaques and twenty right-handed humans. We presented human and monkey dynamic facial expressions (chewing and fear) as well as scrambled versions to both species. We studied lateralization in independently defined face-responsive and face-selective regions by calculating a weighted lateralization index (LIwm) using a bootstrapping method. In order to examine if lateralization in humans is related to language, we performed a separate fMRI experiment in ten human volunteers including a 'speech' expression (one syllable non-word) and its scrambled version. Both within face-responsive and selective regions, we found consistent lateralization for dynamic faces (chewing and fear) versus scrambled versions in the right human posterior superior temporal sulcus (pSTS), but not in FFA nor in ventral temporal cortex. Conversely, in monkeys no consistent pattern of lateralization for dynamic facial expressions was observed. Finally, LIwms based on the contrast between different types of dynamic facial expressions (relative to scrambled versions) revealed left-sided lateralization in human pSTS for speech-related expressions compared to chewing and emotional expressions.
To conclude, we found consistent laterality effects in human posterior STS but not in visual cortex of monkeys. Based on our results, it is tempting to speculate that lateralization for dynamic face processing in humans may be driven by left-hemispheric language specialization which may not have been present yet in the common ancestor of human and macaque monkeys. Copyright © 2014 Elsevier Inc. All rights reserved.
Monocular Advantage for Face Perception Implicates Subcortical Mechanisms in Adult Humans
Gabay, Shai; Nestor, Adrian; Dundas, Eva; Behrmann, Marlene
2014-01-01
The ability to recognize faces accurately and rapidly is an evolutionarily adaptive process. Most studies examining the neural correlates of face perception in adult humans have focused on a distributed cortical network of face-selective regions. There is, however, robust evidence from phylogenetic and ontogenetic studies that implicates subcortical structures, and recently, some investigations in adult humans indicate subcortical correlates of face perception as well. The questions addressed here are whether low-level subcortical mechanisms for face perception (in the absence of changes in expression) are conserved in human adults, and if so, what is the nature of these subcortical representations. In a series of four experiments, we presented pairs of images to the same or different eyes. Participants’ performance demonstrated that subcortical mechanisms, indexed by monocular portions of the visual system, play a functional role in face perception. These mechanisms are sensitive to face-like configurations and afford a coarse representation of a face, comprised of primarily low spatial frequency information, which suffices for matching faces but not for more complex aspects of face perception such as sex differentiation. Importantly, these subcortical mechanisms are not implicated in the perception of other visual stimuli, such as cars or letter strings. These findings suggest a conservation of phylogenetically and ontogenetically lower-order systems in adult human face perception. The involvement of subcortical structures in face recognition provokes a reconsideration of current theories of face perception, which are reliant on cortical level processing, inasmuch as it bolsters the cross-species continuity of the biological system for face recognition. PMID:24236767
Domain specificity versus expertise: factors influencing distinct processing of faces.
Carmel, David; Bentin, Shlomo
2002-02-01
To explore face specificity in visual processing, we compared the role of task-associated strategies and expertise on the N170 event-related potential (ERP) component elicited by human faces with the ERPs elicited by cars, birds, items of furniture, and ape faces. In Experiment 1, participants performed a car monitoring task and an animacy decision task. In Experiment 2, participants monitored human faces while faces of apes were the distracters. Faces elicited an equally conspicuous N170, significantly larger than the ERPs elicited by non-face categories regardless of whether they were ignored or had an equal status with other categories (Experiment 1), or were the targets (Experiment 2). In contrast, the negative component elicited by cars during the same time range was larger if they were targets than if they were not. Furthermore, unlike the posterior-temporal distribution of the N170, the negative component elicited by cars and its modulation by task were more conspicuous at occipital sites. Faces of apes elicited an N170 that was similar in amplitude to that elicited by the human face targets, albeit peaking 10 ms later. As our participants were not ape experts, this pattern indicates that the N170 is face-specific, but not species-specific, i.e. it is elicited by particular face features regardless of expertise. Overall, these results demonstrate the domain specificity of the visual mechanism implicated in processing faces, a mechanism which is not influenced by either task or expertise. The processing of other objects is probably accomplished by a more general visual processor, which is sensitive to strategic manipulations and attention.
A novel BCI based on ERP components sensitive to configural processing of human faces
NASA Astrophysics Data System (ADS)
Zhang, Yu; Zhao, Qibin; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej
2012-04-01
This study introduces a novel brain-computer interface (BCI) based on an oddball paradigm using stimuli of facial images with loss of configural face information (e.g., inversion of face). To the best of our knowledge, the configural processing of human faces, although widely studied in cognitive neuroscience research, has not previously been applied to BCI. Our experiments confirm that the face-sensitive event-related potential (ERP) components N170 and vertex positive potential (VPP) reflect early structural encoding of faces and can be modulated by the configural processing of faces. With the proposed novel paradigm, we investigate the effects of ERP components N170, VPP and P300 on target detection for BCI. An eight-class BCI platform is developed to analyze ERPs and evaluate the target detection performance using linear discriminant analysis without complicated feature extraction processing. The online classification accuracy of 88.7% and information transfer rate of 38.7 bits min-1 using stimuli of inverted faces with only single trial suggest that the proposed paradigm based on the configural processing of faces is very promising for visual stimuli-driven BCI applications.
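The target-detection step the abstract describes — linear discriminant analysis applied directly to ERP amplitudes, without elaborate feature extraction — can be illustrated with a two-class Fisher LDA sketch in numpy. The synthetic "ERP features" below are invented stand-ins (a fixed deflection added on a few dimensions, loosely mimicking N170/VPP/P300 amplitude differences); none of the numbers come from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical synthetic "ERP features": 40 target and 40 non-target epochs,
# 8 features each; targets carry an extra deflection on some features.
n, dims = 40, 8
nontarget = rng.normal(0.0, 1.0, (n, dims))
target = rng.normal(0.0, 1.0, (n, dims)) + np.array([0, 0, 2.0, 2.0, 0, 0, 1.5, 0])

# Two-class Fisher LDA: w = Sw^-1 (mu1 - mu0), threshold at the projected midpoint.
mu0, mu1 = nontarget.mean(0), target.mean(0)
Sw = np.cov(nontarget.T) + np.cov(target.T)   # pooled within-class scatter
w = np.linalg.solve(Sw, mu1 - mu0)
thresh = (mu0 + mu1) / 2 @ w

def is_target(epoch):
    """Classify a single epoch by which side of the threshold it projects to."""
    return epoch @ w > thresh

# Balanced accuracy on the training data (for illustration only).
acc = (np.mean([is_target(x) for x in target])
       + np.mean([not is_target(x) for x in nontarget])) / 2
print(round(acc, 2))
```

In a real single-trial BCI the classifier would of course be evaluated on held-out epochs; the sketch only shows why LDA needs no separate feature-extraction stage when the class difference is an additive amplitude shift.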
Social contact and other-race face processing in the human brain
Silvert, Laetitia; Hewstone, Miles; Nobre, Anna C.
2008-01-01
The present study investigated the influence of social factors upon the neural processing of faces of other races using event-related potentials. A multi-tiered approach was used to identify face-specific stages of processing, to test for effects of race-of-face upon processing at these stages, and to evaluate the impact of social contact and individuating experience upon these effects. The results showed that race-of-face has significant effects upon face processing, starting from early perceptual stages of structural encoding, and that social factors may play an important role in mediating these effects. PMID:19015091
Golarai, Golijeh; Liberman, Alina; Grill-Spector, Kalanit
2017-02-01
In adult humans, the ventral temporal cortex (VTC) represents faces in a reproducible topology. However, it is unknown what role visual experience plays in the development of this topology. Using functional magnetic resonance imaging in children and adults, we found a sequential development, in which the topology of face-selective activations across the VTC was matured by age 7, but the spatial extent and degree of face selectivity continued to develop past age 7 into adulthood. Importantly, own- and other-age faces were differentially represented, both in the distributed multivoxel patterns across the VTC, and also in the magnitude of responses of face-selective regions. These results provide strong evidence that experience shapes cortical representations of faces during development from childhood to adulthood. Our findings have important implications for the role of experience and age in shaping the neural substrates of face processing in the human VTC. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
From Caregivers to Peers: Puberty Shapes Human Face Perception.
Picci, Giorgia; Scherf, K Suzanne
2016-11-01
Puberty prepares mammals to sexually reproduce during adolescence. It is also hypothesized to invoke a social metamorphosis that prepares adolescents to take on adult social roles. We provide the first evidence to support this hypothesis in humans and show that pubertal development retunes the face-processing system from a caregiver bias to a peer bias. Prior to puberty, children exhibit enhanced recognition for adult female faces. With puberty, superior recognition emerges for peer faces that match one's pubertal status. As puberty progresses, so does the peer recognition bias. Adolescents become better at recognizing faces with a pubertal status similar to their own. These findings reconceptualize the adolescent "dip" in face recognition by showing that it is a recalibration of the face-processing system away from caregivers toward peers. Thus, in addition to preparing the physical body for sexual reproduction, puberty shapes the perceptual system for processing the social world in new ways. © The Author(s) 2016.
Crossing the “Uncanny Valley”: adaptation to cartoon faces can influence perception of human faces
Chen, Haiwen; Russell, Richard; Nakayama, Ken; Livingstone, Margaret
2013-01-01
Adaptation can shift what individuals identify to be a prototypical or attractive face. Past work suggests that low-level shape adaptation can affect high-level face processing but is position dependent. Adaptation to distorted images of faces can also affect face processing but only within sub-categories of faces, such as gender, age, and race/ethnicity. This study assesses whether there is a representation of face that is specific to faces (as opposed to all shapes) but general to all kinds of faces (as opposed to subcategories) by testing whether adaptation to one type of face can affect perception of another. Participants were shown cartoon videos containing faces with abnormally large eyes. Using animated videos allowed us to simulate naturalistic exposure and avoid positional shape adaptation. Results suggest that adaptation to cartoon faces with large eyes shifts preferences for human faces toward larger eyes, supporting the existence of general face representations. PMID:20465173
Neurons responsive to face-view in the primate ventrolateral prefrontal cortex.
Romanski, L M; Diehl, M M
2011-08-25
Studies have indicated that temporal and prefrontal brain regions process face and vocal information. Face-selective and vocalization-responsive neurons have been demonstrated in the ventrolateral prefrontal cortex (VLPFC), and some prefrontal cells preferentially respond to combinations of faces and corresponding vocalizations. These studies suggest that VLPFC in nonhuman primates may play a role in communication similar to that of inferior frontal regions in human language processing. If VLPFC is involved in communication, information about a speaker's face, including identity, face-view, gaze, and emotional expression, might be encoded by prefrontal neurons. In the following study, we examined the effect of face-view in ventrolateral prefrontal neurons by testing cells with auditory and visual stimuli, including a set of human and monkey faces rotated through 0°, 30°, 60°, 90°, and -30°. Prefrontal neurons responded selectively to the identity of the face presented (human or monkey), to the specific view of the face/head, or to both identity and face-view. Neurons affected by the identity of the face most often showed an increase in firing in the second part of the stimulus period. Neurons selective for face-view typically preferred forward face-view stimuli (0° and 30° rotation) and were also auditory responsive, whereas neurons that preferred other views or were unselective were not. Our analysis showed that the human forward face (0°) was decoded better and contained the most information relative to other face-views. Our findings confirm a role for VLPFC in the processing and integration of face and vocalization information and add to the growing body of evidence that the primate ventrolateral prefrontal cortex plays a prominent role in social communication and is an important model for understanding the cellular mechanisms of communication.
Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
Holistic processing of static and moving faces.
Zhao, Mintao; Bülthoff, Isabelle
2017-07-01
Humans' face processing ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces, which move most of the time. However, how facial movements affect one core aspect of this ability, holistic face processing, remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based face processing by manipulating the presence of facial motion during study and at test in a composite face task. The results showed that rigidly moving faces were processed as holistically as static faces (Experiment 1). Holistic processing of moving faces persisted whether facial motion was presented during study, at test, or both (Experiment 2). Moreover, when faces were inverted to eliminate the contributions of both an upright face template and observers' expertise with upright faces, rigid facial motion facilitated holistic face processing (Experiment 3). Thus, holistic processing represents a general principle of face perception that applies to both static and dynamic faces, rather than being limited to static faces. These results support an emerging view that both perceiver-based and face-based factors contribute to holistic face processing, and they offer new insights into what underlies holistic face processing, how the different sources of information supporting it interact, and why facial motion may affect face recognition and holistic face processing differently. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Deep--deeper--deepest? Encoding strategies and the recognition of human faces.
Sporer, S L
1991-03-01
Various encoding strategies that supposedly promote deeper processing of human faces (e.g., character judgments) have led to better recognition than more shallow processing tasks (judging the width of the nose). However, does deeper processing actually lead to an improvement in recognition, or, conversely, does shallow processing lead to a deterioration in performance when compared with naturally employed encoding strategies? Three experiments systematically compared a total of 8 different encoding strategies manipulating depth of processing, amount of elaboration, and self-generation of judgmental categories. All strategies that required a scanning of the whole face were basically equivalent but no better than natural strategy controls. The consistently worst groups were the ones that rated faces along preselected physical dimensions. This can be explained by subjects' lesser task involvement as revealed by manipulation checks.
The Face-Processing Network Is Resilient to Focal Resection of Human Visual Cortex
Jonas, Jacques; Gomez, Jesse; Maillard, Louis; Brissart, Hélène; Hossu, Gabriela; Jacques, Corentin; Loftus, David; Colnat-Coulbois, Sophie; Stigliani, Anthony; Barnett, Michael A.; Grill-Spector, Kalanit; Rossion, Bruno
2016-01-01
Human face perception requires a network of brain regions distributed throughout the occipital and temporal lobes with a right hemisphere advantage. Present theories consider this network as either a processing hierarchy beginning with the inferior occipital gyrus (occipital face area; IOG-faces/OFA) or a multiple-route network with nonhierarchical components. The former predicts that removing IOG-faces/OFA will detrimentally affect downstream stages, whereas the latter does not. We tested this prediction in a human patient (Patient S.P.) requiring removal of the right inferior occipital cortex, including IOG-faces/OFA. We acquired multiple fMRI measurements in Patient S.P. before and after a preplanned surgery and multiple measurements in typical controls, enabling both within-subject/across-session comparisons (Patient S.P. before resection vs Patient S.P. after resection) and between-subject/across-session comparisons (Patient S.P. vs controls). We found that the spatial topology and selectivity of downstream ipsilateral face-selective regions were stable 1 and 8 months after surgery. Additionally, the reliability of distributed patterns of face selectivity in Patient S.P. before versus after resection was not different from across-session reliability in controls. Nevertheless, postoperatively, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1 of the resected hemisphere. Diffusion weighted imaging in Patient S.P. and controls identifies white matter tracts connecting retinotopic areas to downstream face-selective regions, which may contribute to the stable and plastic features of the face network in Patient S.P. after surgery. Together, our results support a multiple-route network of face processing with nonhierarchical components and shed light on stable and plastic features of high-level visual cortex following focal brain damage.
SIGNIFICANCE STATEMENT Brain networks consist of interconnected functional regions commonly organized in processing hierarchies. Prevailing theories predict that damage to the input of the hierarchy will detrimentally affect later stages. We tested this prediction with multiple brain measurements in a rare human patient requiring surgical removal of the putative input to a network processing faces. Surprisingly, the spatial topology and selectivity of downstream face-selective regions are stable after surgery. Nevertheless, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1. White matter connections from outside the face network may support these stable and plastic features. As processing hierarchies are ubiquitous in biological and nonbiological systems, our results have pervasive implications for understanding the construction of resilient networks. PMID:27511014
Almeida, Inês; van Asselen, Marieke; Castelo-Branco, Miguel
2013-09-01
In human cognition, most relevant stimuli, such as faces, are processed in central vision. However, it is widely believed that recognition of relevant stimuli (e.g. threatening animal faces) at peripheral locations is also important due to their survival value. Moreover, task instructions have been shown to modulate brain regions involved in threat recognition (e.g. the amygdala). In this respect it is also controversial whether tasks requiring explicit focus on stimulus threat content vs. implicit processing differently engage primitive subcortical structures involved in emotional appraisal. Here we have addressed the role of central vs. peripheral processing in the human amygdala using animal threatening vs. non-threatening face stimuli. First, a simple animal face recognition task with threatening and non-threatening animal faces, as well as non-face control stimuli, was employed in naïve subjects (implicit task). A subsequent task was then performed with the same stimulus categories (but different stimuli) in which subjects were told to explicitly detect threat signals. We found lateralized amygdala responses both to the spatial location of stimuli and to the threatening content of faces depending on the task performed: the right amygdala showed increased responses to central compared to left presented stimuli specifically during the threat detection task, while the left amygdala was better able to discriminate threatening faces from non-facial displays during the animal face recognition task. Additionally, the right amygdala responded to faces during the threat detection task but only when centrally presented. Moreover, we have found no evidence for superior responses of the amygdala to peripheral stimuli. Importantly, we have found that striatal regions activate differentially depending on peripheral vs. central processing of threatening faces.
Accordingly, peripheral processing of these stimuli activated more strongly the putaminal region, while central processing engaged mainly the caudate nucleus. We conclude that the human amygdala has a central bias for face stimuli, and that visual processing recruits different striatal regions, putaminal or caudate based, depending on the task and on whether peripheral or central visual processing is involved. © 2013 Elsevier Ltd. All rights reserved.
Animal, but Not Human, Faces Engage the Distributed Face Network in Adolescents with Autism
ERIC Educational Resources Information Center
Whyte, Elisabeth M.; Behrmann, Marlene; Minshew, Nancy J.; Garcia, Natalie V.; Scherf, K. Suzanne
2016-01-01
Multiple hypotheses have been offered to explain the impaired face-processing behavior and the accompanying underlying disruptions in neural circuitry among individuals with autism. We explored the specificity of atypical face-processing activation and potential alterations to fusiform gyrus (FG) morphology as potential underlying mechanisms.…
Face Patch Resting State Networks Link Face Processing to Social Cognition
Schwiedrzik, Caspar M.; Zarco, Wilbert; Everling, Stefan; Freiwald, Winrich A.
2015-01-01
Faces transmit a wealth of social information. How this information is exchanged between face-processing centers and brain areas supporting social cognition remains largely unclear. Here we identify these routes using resting state functional magnetic resonance imaging in macaque monkeys. We find that face areas functionally connect to specific regions within frontal, temporal, and parietal cortices, as well as subcortical structures supporting emotive, mnemonic, and cognitive functions. This establishes the existence of an extended face-recognition system in the macaque. Furthermore, the face patch resting state networks and the default mode network in monkeys show a pattern of overlap akin to that between the social brain and the default mode network in humans: this overlap specifically includes the posterior superior temporal sulcus, medial parietal, and dorsomedial prefrontal cortex, areas supporting high-level social cognition in humans. Together, these results reveal the embedding of face areas into larger brain networks and suggest that the resting state networks of the face patch system offer a new, easily accessible venue into the functional organization of the social brain and into the evolution of possibly uniquely human social skills. PMID:26348613
Holistic processing of human body postures: evidence from the composite effect.
Willems, Sam; Vrancken, Leia; Germeys, Filip; Verfaillie, Karl
2014-01-01
The perception of socially relevant stimuli (e.g., faces and bodies) has received considerable attention in the vision science community. It is now widely accepted that human faces are processed holistically and not only analytically. One observation that has been taken as evidence for holistic face processing is the face composite effect: two identical top halves of a face tend to be perceived as being different when combined with different bottom halves. This supports the hypothesis that face processing proceeds holistically. Indeed, the interference effect disappears when the two face parts are misaligned (blocking holistic perception). In the present study, we investigated whether there is also a composite effect for the perception of body postures: are two identical body halves perceived as being in different poses when the irrelevant body halves differ from each other? Both a horizontal (i.e., top-bottom body halves; Experiment 1) and a vertical composite effect (i.e., left-right body halves; Experiment 2) were examined by means of a delayed matching-to-sample task. Results of both experiments indicate the existence of a body posture composite effect. This provides evidence for the hypothesis that body postures, as faces, are processed holistically. PMID:24999337
Face Processing in Children with ASD: Literature Review
ERIC Educational Resources Information Center
Campatelli, G.; Federico, R. R.; Apicella, F.; Sicca, F.; Muratori, F.
2013-01-01
Face processing has been studied and discussed in depth during previous decades in several branches of science, and evidence from research supports the view that this process is a highly specialized brain function. Several authors argue that difficulties in the use and comprehension of the information conveyed by human faces could represent a core…
Effect of familiarity and viewpoint on face recognition in chimpanzees
Parr, Lisa A; Siebert, Erin; Taubert, Jessica
2012-01-01
Numerous studies have shown that familiarity strongly influences how well humans recognize faces. This is particularly true when faces are encountered across a change in viewpoint. In this situation, recognition may be accomplished by matching partial or incomplete information about a face to a stored representation of the known individual, whereas such representations are not available for unknown faces. Chimpanzees, our closest living relatives, share many of the same behavioral specializations for face processing as humans, but the influence of familiarity and viewpoint have never been compared in the same study. Here, we examined the ability of chimpanzees to match the faces of familiar and unfamiliar conspecifics in their frontal and 3/4 views using a computerized task. Results showed that, while chimpanzees were able to accurately match both familiar and unfamiliar faces in their frontal orientations, performance was significantly impaired only when unfamiliar faces were presented across a change in viewpoint. Therefore, like in humans, face processing in chimpanzees appears to be sensitive to individual familiarity. We propose that familiarization is a robust mechanism for strengthening the representation of faces and has been conserved in primates to achieve efficient individual recognition over a range of natural viewing conditions. PMID:22128558
Neural networks related to dysfunctional face processing in autism spectrum disorder
Nickl-Jockschat, Thomas; Rottschy, Claudia; Thommes, Johanna; Schneider, Frank; Laird, Angela R.; Fox, Peter T.; Eickhoff, Simon B.
2016-01-01
One of the most consistent neuropsychological findings in autism spectrum disorders (ASD) is a reduced interest in and impaired processing of human faces. We conducted an activation likelihood estimation meta-analysis on 14 functional imaging studies on neural correlates of face processing enrolling a total of 164 ASD patients. Subsequently, normative whole-brain functional connectivity maps for the identified regions of significant convergence were computed for the task-independent (resting-state) and task-dependent (co-activations) state in healthy subjects. Quantitative functional decoding was performed by reference to the BrainMap database. Finally, we examined the overlap of the delineated network with the results of a previous meta-analysis on structural abnormalities in ASD as well as with brain regions involved in human action observation/imitation. We found a single cluster in the left fusiform gyrus showing significantly reduced activation during face processing in ASD across all studies. Both task-dependent and task-independent analyses indicated significant functional connectivity of this region with the temporo-occipital and lateral occipital cortex, the inferior frontal and parietal cortices, the thalamus and the amygdala. Quantitative reverse inference then indicated an association of these regions mainly with face processing, affective processing, and language-related tasks. Moreover, we found that the cortex in the region of right area V5 displaying structural changes in ASD patients showed consistent connectivity with the region showing aberrant responses in the context of face processing. Finally, this network was also implicated in the human action observation/imitation network. 
In summary, our findings thus suggest a functionally and structurally disturbed network of occipital regions related primarily to face (but potentially also language) processing, which interact with inferior frontal as well as limbic regions and may be the core of aberrant face processing and reduced interest in faces in ASD. PMID:24869925
Neural correlates of perceptual narrowing in cross-species face-voice matching.
Grossmann, Tobias; Missana, Manuela; Friederici, Angela D; Ghazanfar, Asif A
2012-11-01
Integrating the multisensory features of talking faces is critical to learning and extracting coherent meaning from social signals. While we know much about the development of these capacities at the behavioral level, we know very little about the underlying neural processes. One prominent behavioral milestone of these capacities is the perceptual narrowing of face-voice matching, whereby young infants match faces and voices across species, but older infants do not. In the present study, we provide neurophysiological evidence for developmental decline in cross-species face-voice matching. We measured event-related brain potentials (ERPs) while 4- and 8-month-old infants watched and listened to congruent and incongruent audio-visual presentations of monkey vocalizations and humans mimicking monkey vocalizations. The ERP results indicated that younger infants distinguished between the congruent and the incongruent faces and voices regardless of species, whereas in older infants, the sensitivity to multisensory congruency was limited to the human face and voice. Furthermore, with development, visual and frontal brain processes and their functional connectivity became more sensitive to the congruence of human faces and voices relative to monkey faces and voices. Our data show the neural correlates of perceptual narrowing in face-voice matching and support the notion that postnatal experience with species identity is associated with neural changes in multisensory processing (Lewkowicz & Ghazanfar, 2009). © 2012 Blackwell Publishing Ltd.
The many faces of research on face perception.
Little, Anthony C; Jones, Benedict C; DeBruine, Lisa M
2011-06-12
Face perception is fundamental to human social interaction. Many different types of important information are visible in faces and the processes and mechanisms involved in extracting this information are complex and can be highly specialized. The importance of faces has long been recognized by a wide range of scientists. Importantly, the range of perspectives and techniques that this breadth has brought to face perception research has, in recent years, led to many important advances in our understanding of face processing. The articles in this issue on face perception each review a particular arena of interest in face perception, variously focusing on (i) the social aspects of face perception (attraction, recognition and emotion), (ii) the neural mechanisms underlying face perception (using brain scanning, patient data, direct stimulation of the brain, visual adaptation and single-cell recording), and (iii) comparative aspects of face perception (comparing adult human abilities with those of chimpanzees and children). Here, we introduce the central themes of the issue and present an overview of the articles.
Tsao, Doris Y.
2009-01-01
Faces are among the most informative stimuli we ever perceive: Even a split-second glimpse of a person's face tells us their identity, sex, mood, age, race, and direction of attention. The specialness of face processing is acknowledged in the artificial vision community, where contests for face recognition algorithms abound. Neurological evidence strongly implicates a dedicated machinery for face processing in the human brain, to explain the double dissociability of face and object recognition deficits. Furthermore, it has recently become clear that macaques too have specialized neural machinery for processing faces. Here we propose a unifying hypothesis, deduced from computational, neurological, fMRI, and single-unit experiments: that what makes face processing special is that it is gated by an obligatory detection process. We will clarify this idea in concrete algorithmic terms, and show how it can explain a variety of phenomena associated with face processing. PMID:18558862
Matheson, H E; Bilsbury, T G; McMullen, P A
2012-03-01
A large body of research suggests that faces are processed by a specialized mechanism within the human visual system. This specialized mechanism is made up of subprocesses (Maurer, LeGrand, & Mondloch, 2002). One subprocess, called second-order relational processing, analyzes the metric distances between face parts. Importantly, it is well established that other-race faces and contrast-reversed faces are associated with impaired performance on numerous face processing tasks. Here, we investigated the specificity of second-order relational processing by testing how this process is applied to faces of different race and photographic contrast. Participants completed a feature displacement discrimination task, directly measuring the sensitivity to second-order relations between face parts. Across three experiments we show that, despite absolute differences in sensitivity in some conditions, inversion impaired performance in all conditions. The presence of robust inversion effects for all faces suggests that second-order relational processing can be applied to faces of different race and photographic contrast.
On the facilitative effects of face motion on face recognition and its development
Xiao, Naiqi G.; Perrotta, Steve; Quinn, Paul C.; Wang, Zhe; Sun, Yu-Hao P.; Lee, Kang
2014-01-01
For the past century, researchers have extensively studied human face processing and its development. These studies have advanced our understanding of not only face processing, but also visual processing in general. However, most of what we know about face processing was investigated using static face images as stimuli. Therefore, an important question arises: to what extent does our understanding of static face processing generalize to face processing in real-life contexts in which faces are mostly moving? The present article addresses this question by examining recent studies on moving face processing to uncover the influence of facial movements on face processing and its development. First, we describe evidence on the facilitative effects of facial movements on face recognition and two related theoretical hypotheses: the supplementary information hypothesis and the representation enhancement hypothesis. We then highlight several recent studies suggesting that facial movements optimize face processing by activating specific face processing strategies that accommodate to task requirements. Lastly, we review the influence of facial movements on the development of face processing in the first year of life. We focus on infants' sensitivity to facial movements and explore the facilitative effects of facial movements on infants' face recognition performance. We conclude by outlining several future directions to investigate moving face processing and emphasize the importance of including dynamic aspects of facial information to further understand face processing in real-life contexts. PMID:25009517
Multimodal processing of emotional information in 9-month-old infants I: emotional faces and voices.
Otte, R A; Donkers, F C L; Braeken, M A K A; Van den Bergh, B R H
2015-04-01
Making sense of emotions manifesting in human voice is an important social skill which is influenced by emotions in other modalities, such as that of the corresponding face. Although processing emotional information from voices and faces simultaneously has been studied in adults, little is known about the neural mechanisms underlying the development of this ability in infancy. Here we investigated multimodal processing of fearful and happy face/voice pairs using event-related potential (ERP) measures in a group of 84 9-month-olds. Infants were presented with emotional vocalisations (fearful/happy) preceded by the same or a different facial expression (fearful/happy). The ERP data revealed that the processing of emotional information appearing in human voice was modulated by the emotional expression appearing on the corresponding face: Infants responded with larger auditory ERPs after fearful compared to happy facial primes. This finding suggests that infants dedicate more processing capacities to potentially threatening than to non-threatening stimuli. Copyright © 2014 Elsevier Inc. All rights reserved.
Simulation of talking faces in the human brain improves auditory speech recognition
von Kriegstein, Katharina; Dogan, Özgür; Grüter, Martina; Giraud, Anne-Lise; Kell, Christian A.; Grüter, Thomas; Kleinschmidt, Andreas; Kiebel, Stefan J.
2008-01-01
Human face-to-face communication is essentially audiovisual. Typically, people talk to us face-to-face, providing concurrent auditory and visual input. Understanding someone is easier when there is visual input, because visual cues like mouth and tongue movements provide complementary information about speech content. Here, we hypothesized that, even in the absence of visual input, the brain optimizes both auditory-only speech and speaker recognition by harvesting speaker-specific predictions and constraints from distinct visual face-processing areas. To test this hypothesis, we performed behavioral and neuroimaging experiments in two groups: subjects with a face recognition deficit (prosopagnosia) and matched controls. The results show that observing a specific person talking for 2 min improves subsequent auditory-only speech and speaker recognition for this person. In both prosopagnosics and controls, behavioral improvement in auditory-only speech recognition was based on an area typically involved in face-movement processing. Improvement in speaker recognition was only present in controls and was based on an area involved in face-identity processing. These findings challenge current unisensory models of speech processing, because they show that, in auditory-only speech, the brain exploits previously encoded audiovisual correlations to optimize communication. We suggest that this optimization is based on speaker-specific audiovisual internal models, which are used to simulate a talking face. PMID:18436648
Zhang, Jiedong; Liu, Jia
2015-01-01
Most human daily social interactions rely on the ability to successfully recognize faces. Yet ∼2% of the human population suffers from face blindness without any acquired brain damage [a condition also known as developmental prosopagnosia (DP) or congenital prosopagnosia]. Despite the presence of severe behavioral face recognition deficits, surprisingly, a majority of DP individuals exhibit normal face selectivity in the right fusiform face area (FFA), a key brain region involved in face configural processing. This finding, together with evidence showing impairments downstream from the right FFA in DP individuals, has led some to argue that perhaps the right FFA is largely intact in DP individuals. Using fMRI multivoxel pattern analysis, here we report the discovery of a neural impairment in the right FFA of DP individuals that may play a critical role in mediating their face-processing deficits. In seven individuals with DP, we discovered that, despite the right FFA's preference for faces and its ability to decode the different face parts, it exhibited impaired face configural decoding and did not contain distinct neural response patterns for the intact and the scrambled face configurations. This abnormality was not present throughout the ventral visual cortex, as normal neural decoding was found in an adjacent object-processing region. To our knowledge, this is the first direct neural evidence showing impaired face configural processing in the right FFA in individuals with DP. The discovery of this neural impairment provides a new clue to our understanding of the neural basis of DP. PMID:25632131
The Face-Processing Network Is Resilient to Focal Resection of Human Visual Cortex.
Weiner, Kevin S; Jonas, Jacques; Gomez, Jesse; Maillard, Louis; Brissart, Hélène; Hossu, Gabriela; Jacques, Corentin; Loftus, David; Colnat-Coulbois, Sophie; Stigliani, Anthony; Barnett, Michael A; Grill-Spector, Kalanit; Rossion, Bruno
2016-08-10
Human face perception requires a network of brain regions distributed throughout the occipital and temporal lobes with a right hemisphere advantage. Present theories consider this network as either a processing hierarchy beginning with the inferior occipital gyrus (occipital face area; IOG-faces/OFA) or a multiple-route network with nonhierarchical components. The former predicts that removing IOG-faces/OFA will detrimentally affect downstream stages, whereas the latter does not. We tested this prediction in a human patient (Patient S.P.) requiring removal of the right inferior occipital cortex, including IOG-faces/OFA. We acquired multiple fMRI measurements in Patient S.P. before and after a preplanned surgery and multiple measurements in typical controls, enabling both within-subject/across-session comparisons (Patient S.P. before resection vs Patient S.P. after resection) and between-subject/across-session comparisons (Patient S.P. vs controls). We found that the spatial topology and selectivity of downstream ipsilateral face-selective regions were stable 1 and 8 months after surgery. Additionally, the reliability of distributed patterns of face selectivity in Patient S.P. before versus after resection was not different from across-session reliability in controls. Nevertheless, postoperatively, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1 of the resected hemisphere. Diffusion weighted imaging in Patient S.P. and controls identified white matter tracts connecting retinotopic areas to downstream face-selective regions, which may contribute to the stable and plastic features of the face network in Patient S.P. after surgery. Together, our results support a multiple-route network of face processing with nonhierarchical components and shed light on stable and plastic features of high-level visual cortex following focal brain damage.
Brain networks consist of interconnected functional regions commonly organized in processing hierarchies. Prevailing theories predict that damage to the input of the hierarchy will detrimentally affect later stages. We tested this prediction with multiple brain measurements in a rare human patient requiring surgical removal of the putative input to a network processing faces. Surprisingly, the spatial topology and selectivity of downstream face-selective regions are stable after surgery. Nevertheless, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1. White matter connections from outside the face network may support these stable and plastic features. As processing hierarchies are ubiquitous in biological and nonbiological systems, our results have pervasive implications for understanding the construction of resilient networks. Copyright © 2016 the authors.
Behavioural and neurophysiological evidence for face identity and face emotion processing in animals
Tate, Andrew J; Fischer, Hanno; Leigh, Andrea E; Kendrick, Keith M
2006-01-01
Visual cues from faces provide important social information relating to individual identity, sexual attraction and emotional state. Behavioural and neurophysiological studies on both monkeys and sheep have shown that specialized skills and neural systems for processing these complex cues to guide behaviour have evolved in a number of mammals and are not present exclusively in humans. Indeed, there are remarkable similarities in the ways that faces are processed by the brain in humans and other mammalian species. While human studies with brain imaging and gross neurophysiological recording approaches have revealed global aspects of the face-processing network, they cannot investigate how information is encoded by specific neural networks. Single neuron electrophysiological recording approaches in both monkeys and sheep have, however, provided some insights into the neural encoding principles involved and, particularly, the presence of a remarkable degree of high-level encoding even at the level of a specific face. Recent developments that allow simultaneous recordings to be made from many hundreds of individual neurons are also beginning to reveal evidence for global aspects of a population-based code. This review will summarize what we have learned so far from these animal-based studies about the way the mammalian brain processes the faces and the emotions they can communicate, as well as associated capacities such as how identity and emotion cues are dissociated and how face imagery might be generated. It will also try to highlight what questions and advances in knowledge still challenge us in order to provide a complete understanding of just how brain networks perform this complex and important social recognition task. PMID:17118930
Schneider, Till R; Hipp, Joerg F; Domnick, Claudia; Carl, Christine; Büchel, Christian; Engel, Andreas K
2018-05-26
Human faces are among the most salient visual stimuli and act as both socially and emotionally relevant signals. Faces, and especially faces with emotional expressions, receive prioritized processing in the human brain and activate a distributed network of brain areas reflected, e.g., in enhanced oscillatory neuronal activity. However, an inconsistent picture has emerged so far regarding neuronal oscillatory activity across different frequency bands modulated by emotionally and socially relevant stimuli. The individual level of anxiety among healthy populations might be one explanation for these inconsistent findings. Therefore, we tested whether oscillatory neuronal activity is associated with individual anxiety levels during perception of faces with neutral and fearful facial expressions. We recorded neuronal activity using magnetoencephalography (MEG) in 27 healthy participants and determined their individual state anxiety levels. Images of human faces with neutral and fearful expressions, and physically matched visual control stimuli were presented while participants performed a simple color detection task. Spectral analyses revealed that face processing, and in particular processing of fearful faces, was characterized by enhanced neuronal activity in the theta- and gamma-band and decreased activity in the beta-band in early visual cortex and the fusiform gyrus (FFG). Moreover, the individuals' state anxiety levels correlated positively with the gamma-band response and negatively with the beta response in the FFG and the amygdala. Our results suggest that oscillatory neuronal activity plays an important role in affective face processing and is dependent on the individual level of state anxiety. Our work provides new insights into the role of oscillatory neuronal activity underlying the processing of faces. Copyright © 2018. Published by Elsevier Inc.
The other-race and other-species effects in face perception – a subordinate-level analysis
Dahl, Christoph D.; Rasch, Malte J.; Chen, Chien-Chung
2014-01-01
The ability to discriminate faces is modulated by the frequency of exposure to a category of faces. In other words, lower discrimination performance is measured for infrequently encountered faces as opposed to frequently encountered ones. Two such phenomena have been described in the literature: the own-race advantage, a benefit in processing own-race as opposed to other-race faces, and the own-species advantage, a benefit in processing conspecific as opposed to heterospecific faces. So far, the exact parameters that drive either of these two effects are not fully understood. In the following we present a full assessment of data from human participants describing discrimination performance across two races (Asian and Caucasian) as well as a range of non-human primate faces (chimpanzee, Rhesus macaque and marmoset). We measured reaction times of Asian participants performing a delayed matching-to-sample task, and correlated the results with similarity estimates of facial configuration and face parts. We found faster discrimination of own-race than other-race/species faces. Further, we found a strong reliance on configural information in upright own-species/-race faces and on individual face parts in all inverted face classes, supporting the assumption of specialized processing for the face class of most frequent exposure. PMID:25285092
Kanai, Ryota; Bahrami, Bahador; Rees, Geraint
2015-01-01
Social cues conveyed by the human face, such as eye gaze direction, are evaluated even before they are consciously perceived. While there is substantial individual variability in such evaluation, its neural basis is unknown. Here we asked whether individual differences in preconscious evaluation of social face traits were associated with local variability in brain structure. Adult human participants (n = 36) monocularly viewed faces varying in dominance and trustworthiness, which were suppressed from awareness by a dynamic noise pattern shown to the other eye. The time taken for faces to emerge from suppression and become visible (t2e) was used as a measure of potency in competing for visual awareness. Both dominant and untrustworthy faces resulted in slower t2e than neutral faces, with substantial individual variability in these effects. Individual differences in t2e were correlated with gray matter volume in right insula for dominant faces, and with gray matter volume in medial prefrontal cortex, right temporoparietal junction and bilateral fusiform face area for untrustworthy faces. Thus, individual differences in preconscious social processing can be predicted from local brain structure, and separable correlates for facial dominance and untrustworthiness suggest distinct mechanisms of preconscious processing. PMID:25193945
Artificial faces are harder to remember
Balas, Benjamin; Pacella, Jonathan
2015-01-01
Observers interact with artificial faces in a range of different settings and in many cases must remember and identify computer-generated faces. In general, however, most adults have heavily biased experience favoring real faces over synthetic faces. It is well known that face recognition abilities are affected by experience such that faces belonging to “out-groups” defined by race or age are more poorly remembered and harder to discriminate from one another than faces belonging to the “in-group.” Here, we examine the extent to which artificial faces form an “out-group” in this sense when other perceptual categories are matched. We rendered synthetic faces using photographs of real human faces and compared performance in a memory task and a discrimination task across real and artificial versions of the same faces. We found that real faces were easier to remember, but only slightly more discriminable than artificial faces. Artificial faces were also equally susceptible to the well-known face inversion effect, suggesting that while these patterns are still processed by the human visual system in a face-like manner, artificial appearance does compromise the efficiency of face processing. PMID:26195852
Sato, Wataru; Kochiyama, Takanori; Uono, Shota; Matsuda, Kazumi; Usui, Keiko; Usui, Naotaka; Inoue, Yushi; Toichi, Motomi
2017-09-01
Faces contain multifaceted information that is important for human communication. Neuroimaging studies have revealed face-specific activation in multiple brain regions, including the inferior occipital gyrus (IOG) and amygdala; it is often assumed that these regions constitute the neural network responsible for the processing of faces. However, it remains unknown whether and how these brain regions transmit information during face processing. This study investigated these questions by applying dynamic causal modeling of induced responses to human intracranial electroencephalography data recorded from the IOG and amygdala during the observation of faces, mosaics, and houses in upright and inverted orientations. Model comparisons assessing the experimental effects of upright faces versus upright houses and upright faces versus upright mosaics consistently indicated that the model having face-specific bidirectional modulatory effects between the IOG and amygdala was the most probable. The experimental effect between upright versus inverted faces also favored the model with bidirectional modulatory effects between the IOG and amygdala. The spectral profiles of modulatory effects revealed both same-frequency (e.g., gamma-gamma) and cross-frequency (e.g., theta-gamma) couplings. These results suggest that the IOG and amygdala communicate rapidly with each other using various types of oscillations for the efficient processing of faces. Hum Brain Mapp 38:4511-4524, 2017. © 2017 Wiley Periodicals, Inc.
A Comparative Survey of Methods for Remote Heart Rate Detection From Frontal Face Videos
Wang, Chen; Pun, Thierry; Chanel, Guillaume
2018-01-01
Remotely measuring physiological activity can provide substantial benefits for both medical and affective computing applications. Recent research has proposed different methodologies for the unobtrusive detection of heart rate (HR) using human face recordings. These methods are based on subtle color changes or motions of the face due to cardiovascular activity, which are invisible to human eyes but can be captured by digital cameras. Several approaches have been proposed, based on signal processing or machine learning. However, these methods have been evaluated on different datasets, and there is consequently no consensus on method performance. In this article, we describe and evaluate several methods from the literature, from 2008 to the present day, for the remote detection of HR using human face recordings. The general HR processing pipeline is divided into three stages: face video processing, face blood volume pulse (BVP) signal extraction, and HR computation. Approaches presented in the paper are classified and grouped according to each stage. At each stage, algorithms are analyzed and compared based on their performance using the public database MAHNOB-HCI. The results reported in this article are limited to the MAHNOB-HCI dataset. Results show that the extracted face skin area contains more BVP information. Blind source separation and peak detection methods are more robust to head motions when estimating HR. PMID:29765940
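The three-stage pipeline described in this abstract can be illustrated with a minimal sketch of stages two and three (BVP signal extraction and HR computation). This sketch assumes stage one has already produced a per-frame mean green-channel value of the detected face skin region; the function name, the frequency-domain peak-picking approach, and the physiological band limits are illustrative choices, not details taken from the survey itself.

```python
import numpy as np

def estimate_hr(green_trace, fps, lo=0.7, hi=4.0):
    """Estimate heart rate (BPM) from a per-frame mean green-channel
    trace of the face skin region. The band lo..hi restricts the search
    to plausible heart rates: 0.7-4.0 Hz corresponds to 42-240 BPM."""
    x = np.asarray(green_trace, dtype=float)
    x = x - x.mean()                        # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))       # magnitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)    # keep the physiological band
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0                      # Hz -> beats per minute

# Synthetic 30 s trace at 30 fps carrying a 1.2 Hz (72 BPM) pulse plus noise.
fps = 30.0
t = np.arange(0, 30, 1 / fps)
rng = np.random.default_rng(0)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + rng.standard_normal(t.size) * 0.1
print(round(estimate_hr(trace, fps)))  # 72
```

Real systems replace the raw green trace with a blind-source-separation output (e.g., ICA over RGB channels) and track the peak over sliding windows, which is what makes them robust to the head motions the abstract mentions.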
Rutishauser, Ueli; Mamelak, Adam N.; Adolphs, Ralph
2015-01-01
The amygdala’s role in emotion and social perception has been intensively investigated primarily through studies using fMRI. Recently, this topic has been examined using single-unit recordings in both humans and monkeys, with a focus on face processing. The findings provide novel insights, including several surprises: amygdala neurons have very long response latencies, show highly nonlinear responses to whole faces, and can be exquisitely selective for very specific parts of faces such as the eyes. In humans, the responses of amygdala neurons correlate with internal states evoked by faces, rather than with their objective features. Current and future studies extend the investigations to psychiatric illnesses such as autism, in which atypical face processing is a hallmark of social dysfunction. PMID:25847686
Kim, Jinyoung; Kang, Min-Suk; Cho, Yang Seok; Lee, Sang-Hun
2017-01-01
As documented by Darwin 150 years ago, emotion expressed in human faces readily draws our attention and promotes sympathetic emotional reactions. How do such reactions to the expression of emotion affect our goal-directed actions? Despite the substantial advances made in understanding the neural mechanisms of both cognitive control and emotional processing, it is not yet well known how these two systems interact. Here, we studied how emotion expressed in human faces influences cognitive control of conflict processing, spatial selective attention and inhibitory control in particular, using the Eriksen flanker paradigm. In this task, participants viewed displays of a central target face flanked by peripheral faces and were asked to judge the gender of the target face; task-irrelevant emotion expressions were embedded in the target face, the flanking faces, or both. We also monitored how emotion expression affects gender judgment performance while varying the relative timing between the target and flanker faces. As previously reported, we found robust gender congruency effects, namely slower responses to the target faces whose gender was incongruent with that of the flanker faces, when the flankers preceded the target by 0.1 s. When the flankers preceded the target by 0.3 s, however, the congruency effect vanished in most of the viewing conditions, except for when emotion was expressed only in the flanking faces or when congruent emotion was expressed in the target and flanking faces. These results suggest that emotional saliency can prolong a substantial degree of conflict by diverting bottom-up attention away from the target, and that inhibitory control on task-irrelevant information from flanking stimuli is deterred by the emotional congruency between target and flanking stimuli. PMID:28676780
Wang, Zhe; Quinn, Paul C; Jin, Haiyang; Sun, Yu-Hao P; Tanaka, James W; Pascalis, Olivier; Lee, Kang
2018-04-25
Using a composite-face paradigm, we examined the holistic processing induced by Asian faces, Caucasian faces, and monkey faces with human Asian participants in two experiments. In Experiment 1, participants were asked to judge whether the upper halves of two faces successively presented were the same or different. A composite-face effect was found for Asian faces and Caucasian faces, but not for monkey faces. In Experiment 2, participants were asked to judge whether the lower halves of the two faces successively presented were the same or different. A composite-face effect was found for monkey faces as well as for Asian faces and Caucasian faces. Collectively, these results reveal that own-species (i.e., own-race and other-race) faces engage holistic processing in both upper and lower halves of the face, but other-species (i.e., monkey) faces engage holistic processing only when participants are asked to match the lower halves of the face. The findings are discussed in the context of a region-based holistic processing account for the species-specific effect in face recognition. Copyright © 2018 Elsevier Ltd. All rights reserved.
Trujillo, Logan T.; Jankowitsch, Jessica M.; Langlois, Judith H.
2014-01-01
Multiple studies show that people prefer attractive over unattractive faces. But what is an attractive face and why is it preferred? Averageness theory claims that faces are perceived as attractive when their facial configuration approximates the mathematical average facial configuration of the population. Conversely, faces that deviate from this average configuration are perceived as unattractive. The theory predicts that both attractive and mathematically averaged faces should be processed more fluently than unattractive faces, whereas the averaged faces should be processed marginally more fluently than the attractive faces. We compared neurocognitive and behavioral responses to attractive, unattractive, and averaged human faces to test these predictions. We recorded event-related potentials (ERPs) and reaction times (RTs) from 48 adults while they discriminated between human and chimpanzee faces. Participants categorized averaged and high attractive faces as “human” faster than low attractive faces. The posterior N170 (150 – 225 ms) face-evoked ERP component was smaller in response to high attractive and averaged faces versus low attractive faces. Single-trial EEG analysis indicated that this reduced ERP response arose from the engagement of fewer neural resources and not from a change in the temporal consistency of how those resources were engaged. These findings provide novel evidence that faces are perceived as attractive when they approximate a facial configuration close to the population average and suggest that processing fluency underlies preferences for attractive faces. PMID:24326966
A Comparative View of Face Perception
Leopold, David A.; Rhodes, Gillian
2010-01-01
Face perception serves as the basis for much of human social exchange. Diverse information can be extracted about an individual from a single glance at their face, including their identity, emotional state, and direction of attention. Neuropsychological and fMRI experiments reveal a complex network of specialized areas in the human brain supporting these face-reading skills. Here we consider the evolutionary roots of human face perception by exploring the manner in which different animal species view and respond to faces. We focus on behavioral experiments collected from both primates and non-primates, assessing the types of information that animals are able to extract from the faces of their conspecifics, human experimenters, and natural predators. These experiments reveal that faces are an important category of visual stimuli for animals in all major vertebrate taxa, possibly reflecting the early emergence of neural specialization for faces in vertebrate evolution. At the same time, some aspects of facial perception are only evident in primates and a few other social mammals, and may therefore have evolved to suit the needs of complex social communication. Since the human brain likely utilizes both primitive and recently evolved neural specializations for the processing of faces, comparative studies may hold the key to understanding how these parallel circuits emerged during human evolution. PMID:20695655
Impaired threat prioritisation after selective bilateral amygdala lesions
Bach, Dominik R.; Hurlemann, Rene; Dolan, Raymond J.
2015-01-01
The amygdala is proposed to process threat-related information in non-human animals. In humans, empirical evidence from lesion studies has provided the strongest evidence for a role in emotional face recognition and social judgement. Here we use a face-in-the-crowd (FITC) task which in healthy control individuals reveals prioritised threat processing, evident in faster serial search for angry compared to happy target faces. We investigate AM and BG, two individuals with bilateral amygdala lesions due to Urbach–Wiethe syndrome, and 16 control individuals. In lesion patients we show a reversal of a threat detection advantage indicating a profound impairment in prioritising threat information. This is the first direct demonstration that human amygdala lesions impair prioritisation of threatening faces, providing evidence that this structure has a causal role in responding to imminent danger. PMID:25282058
Riesenhuber, Maximilian; Wolff, Brian S.
2009-01-01
A recent article in Acta Psychologica (“Picture-plane inversion leads to qualitative changes of face perception” by B. Rossion, 2008) criticized several aspects of an earlier paper of ours (Riesenhuber et al., “Face processing in humans is compatible with a simple shape-based model of vision”, Proc Biol Sci, 2004). We here address Rossion’s criticisms and correct some misunderstandings. To frame the discussion, we first review our previously presented computational model of face recognition in cortex (Jiang et al., “Evaluation of a shape-based model of human face discrimination using fMRI and behavioral techniques”, Neuron, 2006) that provides a concrete biologically plausible computational substrate for holistic coding, namely a neural representation learned for upright faces, in the spirit of the original simple-to-complex hierarchical model of vision by Hubel and Wiesel. We show that Rossion’s and others’ data support the model, and that there is actually a convergence of views on the mechanisms underlying face recognition, in particular regarding holistic processing. PMID:19665104
Face-to-face: Perceived personal relevance amplifies face processing
Pittig, Andre; Schupp, Harald T.; Alpers, Georg W.
2017-01-01
The human face conveys emotional and social information, but it is not well understood how these two aspects influence face perception. In order to model a group situation, two faces displaying happy, neutral or angry expressions were presented. Importantly, faces were either facing the observer, or they were presented in profile view directed towards, or looking away from each other. In Experiment 1 (n = 64), face pairs were rated regarding perceived relevance, wish-to-interact, and displayed interactivity, as well as valence and arousal. All variables revealed main effects of facial expression (emotional > neutral), face orientation (facing observer > towards > away) and interactions showed that evaluation of emotional faces strongly varies with their orientation. Experiment 2 (n = 33) examined the temporal dynamics of perceptual-attentional processing of these face constellations with event-related potentials. Processing of emotional and neutral faces differed significantly in N170 amplitudes, early posterior negativity (EPN), and sustained positive potentials. Importantly, selective emotional face processing varied as a function of face orientation, indicating early emotion-specific (N170, EPN) and late threat-specific effects (LPP, sustained positivity). Taken together, perceived personal relevance to the observer—conveyed by facial expression and face direction—amplifies emotional face processing within triadic group situations. PMID:28158672
The lasting effects of process-specific versus stimulus-specific learning during infancy.
Hadley, Hillary; Pickron, Charisse B; Scott, Lisa S
2015-09-01
The capacity to tell the difference between two faces within an infrequently experienced face group (e.g. other species, other race) declines from 6 to 9 months of age unless infants learn to match these faces with individual-level names. Similarly, the use of individual-level labels can also facilitate differentiation of a group of non-face objects (strollers). This early learning leads to increased neural specialization for previously unfamiliar face or object groups. The current investigation aimed to determine whether early conceptual learning between 6 and 9 months leads to sustained behavioral advantages and neural changes in these same children at 4-6 years of age. Results suggest that relative to a control group of children with no previous training and to children with infant category-level naming experience, children with early individual-level training exhibited faster response times to human faces. Further, individual-level training with a face group - but not an object group - led to more adult-like neural responses for human faces. These results suggest that early individual-level learning results in long-lasting process-specific effects, which benefit categories that continue to be perceived and recognized at the individual level (e.g. human faces).
NASA Astrophysics Data System (ADS)
Rose, Jake; Martin, Michael; Bourlai, Thirimachos
2014-06-01
In law enforcement and security applications, the acquisition of face images is critical in producing key trace evidence for the successful identification of potential threats. The goal of the study is to demonstrate that steroid usage significantly affects human facial appearance and hence, the performance of commercial and academic face recognition (FR) algorithms. In this work, we evaluate the performance of state-of-the-art FR algorithms on two unique face image datasets of subjects before (gallery set) and after (probe set) steroid (or human growth hormone) usage. For the purpose of this study, datasets of 73 subjects were created from multiple sources found on the Internet, containing images of men and women before and after steroid usage. Next, we geometrically pre-processed all images of both face datasets. Then, we applied image restoration techniques on the same face datasets, and finally, we applied FR algorithms in order to match the pre-processed face images of our probe datasets against the face images of the gallery set. Experimental results demonstrate that only a specific set of FR algorithms obtains the most accurate results (in terms of the rank-1 identification rate). This is because several factors influence the efficiency of face matchers, including (i) the time lapse between the before and after face photos, following image pre-processing and restoration, (ii) the usage of different drugs (e.g. Dianabol, Winstrol, and Decabolan), (iii) the usage of different cameras to capture face images, and finally, (iv) the variability of standoff distance, illumination and other noise factors (e.g. motion noise). All of these complicating factors make clear that cross-scenario matching is a very challenging problem and, thus, further investigation is required.
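The rank-1 identification rate reported above is a standard closed-set metric: each probe image is matched against every gallery image, and a trial counts as correct when the top-ranked gallery identity is the probe's true identity. A minimal sketch, assuming feature vectors have already been extracted; the cosine-distance matcher and the embeddings below are illustrative stand-ins, not the commercial and academic matchers the study evaluated:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def rank1_rate(gallery, probes):
    """gallery/probes: dicts mapping subject id -> feature vector.
    Fraction of probes whose nearest gallery entry has the same id."""
    hits = 0
    for pid, pvec in probes.items():
        best = min(gallery, key=lambda gid: cosine_distance(pvec, gallery[gid]))
        hits += (best == pid)
    return hits / len(probes)
```

In the study's protocol the gallery would hold the "before" images and the probes the "after" images, so any appearance change induced by steroid usage directly depresses this rate.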
Neuro-fuzzy model for estimating race and gender from geometric distances of human face across pose
NASA Astrophysics Data System (ADS)
Nanaa, K.; Rahman, M. N. A.; Rizon, M.; Mohamad, F. S.; Mamat, M.
2018-03-01
Classifying human faces by race and gender is a vital process in face recognition. It contributes to an index database and eases 3D synthesis of the human face. Identifying race and gender from intrinsic factors is problematic, which makes a nonlinear model better suited to the estimation process. In this paper, we aim to estimate race and gender across varied head poses. For this purpose, we collected a dataset from the PICS and CAS-PEAL databases, detected the landmarks, and rotated them to the frontal pose. After the geometric distances were calculated, all distance values were normalized. Implementation was carried out using a Neural Network Model and a Fuzzy Logic Model, combined in an Adaptive Neuro-Fuzzy Model. The experimental results showed that optimizing the fuzzy membership functions gives a better assessment rate, and that estimating race contributes to a more accurate gender assessment.
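The feature pipeline described above (pairwise distances between frontalized landmarks, then normalization, then a classifier) can be sketched as follows. The landmark names and the min-max scaling are assumptions for illustration; the abstract does not list the exact landmarks or normalization scheme used:

```python
import math
from itertools import combinations

def landmark_distances(landmarks):
    """Pairwise Euclidean distances between named 2-D facial landmarks."""
    return {
        (a, b): math.dist(landmarks[a], landmarks[b])
        for a, b in combinations(sorted(landmarks), 2)
    }

def min_max_normalize(values):
    """Scale a list of distances into [0, 1] before classification."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical frontal-pose landmark coordinates (x, y):
face = {"eye_l": (30.0, 40.0), "eye_r": (70.0, 40.0),
        "nose": (50.0, 60.0), "mouth": (50.0, 80.0)}
features = min_max_normalize(list(landmark_distances(face).values()))
```

The resulting normalized feature vector would then be fed to the adaptive neuro-fuzzy classifier for race and gender estimation.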
Haist, Frank; Adamo, Maha; Han, Jarnet; Lee, Kang; Stiles, Joan
2013-01-01
Expertise in processing faces is a cornerstone of human social interaction. However, the developmental course of many key brain regions supporting face preferential processing in the human brain remains undefined. Here, we present findings from an fMRI study using a simple viewing paradigm of faces and objects in a continuous age sample covering the age range from 6 years through adulthood. These findings are the first to use such a sample paired with whole-brain fMRI analyses to investigate development within the core and extended face networks across the developmental spectrum from middle childhood to adulthood. We found evidence, albeit modest, for a developmental trend in the volume of the right fusiform face area (rFFA) but no developmental change in the intensity of activation. From a spatial perspective, the middle portion of the right fusiform gyrus most commonly found in adult studies of face processing was increasingly likely to be included in the FFA as age increased to adulthood. Outside of the FFA, the most striking finding was that children hyperactivated nearly every aspect of the extended face system relative to adults, including the amygdala, anterior temporal pole, insula, inferior frontal gyrus, anterior cingulate gyrus, and parietal cortex. Overall, the findings suggest that development is best characterized by increasing modulation of face-sensitive regions throughout the brain to engage only those systems necessary for task requirements. PMID:23948645
Distinct spatial frequency sensitivities for processing faces and emotional expressions.
Vuilleumier, Patrik; Armony, Jorge L; Driver, Jon; Dolan, Raymond J
2003-06-01
High and low spatial frequency information in visual images is processed by distinct neural channels. Using event-related functional magnetic resonance imaging (fMRI) in humans, we show dissociable roles of such visual channels for processing faces and emotional fearful expressions. Neural responses in fusiform cortex, and effects of repeating the same face identity upon fusiform activity, were greater with intact or high-spatial-frequency face stimuli than with low-frequency faces, regardless of emotional expression. In contrast, amygdala responses to fearful expressions were greater for intact or low-frequency faces than for high-frequency faces. An activation of pulvinar and superior colliculus by fearful expressions occurred specifically with low-frequency faces, suggesting that these subcortical pathways may provide coarse fear-related inputs to the amygdala.
The Other-Race Effect in Infancy: Evidence Using a Morphing Technique
ERIC Educational Resources Information Center
Hayden, Angela; Bhatt, Ramesh S.; Joseph, Jane E.; Tanaka, James W.
2007-01-01
Human adults are more accurate at discriminating faces from their own race than faces from another race. This "other-race effect" (ORE) has been characterized as a reflection of face processing specialization arising from differential experience with own-race faces. We examined whether 3.5-month-old infants exhibit ORE using morphed faces on which…
The N170 component is sensitive to face-like stimuli: a study of Chinese Peking opera makeup.
Liu, Tiantian; Mu, Shoukuan; He, Huamin; Zhang, Lingcong; Fan, Cong; Ren, Jie; Zhang, Mingming; He, Weiqi; Luo, Wenbo
2016-12-01
The N170 component is considered a neural marker of face-sensitive processing. In the present study, the face-sensitive N170 component of event-related potentials (ERPs) was investigated with a modified oddball paradigm using a natural face (the standard stimulus), human- and animal-like makeup stimuli, scrambled control images that mixed human- and animal-like makeup pieces, and a grey control image. Nineteen participants were instructed to respond within 1000 ms by pressing the 'F' or 'J' key in response to the standard or deviant stimuli, respectively. We simultaneously recorded ERPs, response accuracy, and reaction times. The behavioral results showed that the main effect of stimulus type was significant for reaction time, whereas there were no significant differences in response accuracy among stimulus types. In relation to the ERPs, N170 amplitudes elicited by human-like makeup stimuli, animal-like makeup stimuli, scrambled control images, and the grey control image progressively decreased. A right hemisphere advantage was observed in the N170 amplitudes for human-like makeup stimuli, animal-like makeup stimuli, and scrambled control images, but not for the grey control image. These results indicate that the N170 component is sensitive to face-like stimuli and reflects configural processing in face recognition.
Waller, Bridget M; Bard, Kim A; Vick, Sarah-Jane; Smith Pasqualini, Marcia C
2007-11-01
Human face perception is a finely tuned, specialized process. When comparing faces between species, therefore, it is essential to consider how people make these observational judgments. Comparing facial expressions may be particularly problematic, given that people tend to consider them categorically as emotional signals, which may affect how accurately specific details are processed. The bared-teeth display (BT), observed in most primates, has been proposed as a homologue of the human smile (J. A. R. A. M. van Hooff, 1972). In this study, judgments of similarity between BT displays of chimpanzees (Pan troglodytes) and human smiles varied in relation to perceived emotional valence. When a chimpanzee BT was interpreted as fearful, observers tended to underestimate the magnitude of the relationship between certain features (the extent of lip corner raise) and human smiles. These judgments may reflect the combined effects of categorical emotional perception, configural face processing, and perceptual organization in mental imagery and may demonstrate the advantages of using standardized observational methods in comparative facial expression research.
Dogs can discriminate human smiling faces from blank expressions.
Nagasawa, Miho; Murai, Kensuke; Mogi, Kazutaka; Kikusui, Takefumi
2011-07-01
Dogs have a unique ability to understand visual cues from humans. We investigated whether dogs can discriminate between human facial expressions. Photographs of human faces were used to test nine pet dogs in two-choice discrimination tasks. The training phases involved each dog learning to discriminate between a set of photographs of their owner's smiling and blank face. Of the nine dogs, five fulfilled these criteria and were selected for test sessions. In the test phase, 10 sets of photographs of the owner's smiling and blank face, which had previously not been seen by the dog, were presented. The dogs selected the owner's smiling face significantly more often than expected by chance. In subsequent tests, 10 sets of smiling and blank face photographs of 20 persons unfamiliar to the dogs were presented (10 males and 10 females). There was no statistical difference between the accuracy in the case of the owners and that in the case of unfamiliar persons with the same gender as the owner. However, the accuracy was significantly lower in the case of unfamiliar persons of the opposite gender to that of the owner, than with the owners themselves. These results suggest that dogs can learn to discriminate human smiling faces from blank faces by looking at photographs. Although it remains unclear whether dogs have human-like systems for visual processing of human facial expressions, the ability to learn to discriminate human facial expressions may have helped dogs adapt to human society.
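The phrase "significantly more often than expected by chance" in a two-choice task implies a comparison against p = 0.5; an exact one-sided binomial test is one standard way to make it, sketched below. The trial counts are invented for illustration and are not taken from the study:

```python
from math import comb

def binom_p_at_least(k, n, p=0.5):
    """One-sided exact binomial test: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# e.g. a dog choosing the smiling face on 9 of 10 trials:
p_value = binom_p_at_least(9, 10)   # ≈ 0.0107, below the usual 0.05 threshold
```

Accuracies across conditions (owner vs. unfamiliar, same vs. opposite gender) could then be compared against this chance baseline and against each other.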
Visual adaptation and face perception
Webster, Michael A.; MacLeod, Donald I. A.
2011-01-01
The appearance of faces can be strongly affected by the characteristics of faces viewed previously. These perceptual after-effects reflect processes of sensory adaptation that are found throughout the visual system, but which have been considered only relatively recently in the context of higher level perceptual judgements. In this review, we explore the consequences of adaptation for human face perception, and the implications of adaptation for understanding the neural-coding schemes underlying the visual representation of faces. The properties of face after-effects suggest that they, in part, reflect response changes at high and possibly face-specific levels of visual processing. Yet, the form of the after-effects and the norm-based codes that they point to show many parallels with the adaptations and functional organization that are thought to underlie the encoding of perceptual attributes like colour. The nature and basis for human colour vision have been studied extensively, and we draw on ideas and principles that have been developed to account for norms and normalization in colour vision to consider potential similarities and differences in the representation and adaptation of faces. PMID:21536555
Initial eye movements during face identification are optimal and similar across cultures
Or, Charles C.-F.; Peterson, Matthew F.; Eckstein, Miguel P.
2015-01-01
Culture influences not only human high-level cognitive processes but also low-level perceptual operations. Some perceptual operations, such as initial eye movements to faces, are critical for extraction of information supporting evolutionarily important tasks such as face identification. The extent of cultural effects on these crucial perceptual processes is unknown. Here, we report that the first gaze location for face identification was similar across East Asian and Western Caucasian cultural groups: Both fixated a featureless point between the eyes and the nose, with smaller between-group than within-group differences and with a small horizontal difference across cultures (8% of the interocular distance). We also show that individuals of both cultural groups initially fixated at a slightly higher point on Asian faces than on Caucasian faces. The initial fixations were found to be both fundamental in acquiring the majority of information for face identification and optimal, as accuracy deteriorated when observers held their gaze away from their preferred fixations. An ideal observer that integrated facial information with the human visual system's varying spatial resolution across the visual field showed a similar information distribution across faces of both races and predicted initial human fixations. The model consistently replicated the small vertical difference between human fixations to Asian and Caucasian faces but did not predict the small horizontal leftward bias of Caucasian observers. Together, the results suggest that initial eye movements during face identification may be driven by brain mechanisms aimed at maximizing accuracy, and less influenced by culture. The findings increase our understanding of the interplay between the brain's aims to optimally accomplish basic perceptual functions and to respond to sociocultural influences. PMID:26382003
Encoding deficit during face processing within the right fusiform face area in schizophrenia.
Walther, Sebastian; Federspiel, Andrea; Horn, Helge; Bianchi, Piero; Wiest, Roland; Wirth, Miranka; Strik, Werner; Müller, Thomas Jörg
2009-06-30
Face processing is crucial to social interaction, but is impaired in schizophrenia patients, who experience delays in face recognition, difficulties identifying others, and misperceptions of affective content. The right fusiform face area plays an important role in the early stages of human face processing and thus may be affected in schizophrenia. The aim of the study was therefore to investigate whether face processing deficits are related to dysfunctions of the right fusiform face area in schizophrenia patients compared with controls. In a rapid, event-related functional magnetic resonance imaging (fMRI) design, we investigated the encoding of new faces, as well as the recognition of newly learned, famous, and unfamiliar faces, in 13 schizophrenia patients and 21 healthy controls. We applied region of interest analysis to each individual's right fusiform face area and tested for group differences. Controls displayed higher blood oxygenation level dependent (BOLD) activation during the memorization of faces that were later successfully recognized. In schizophrenia patients, this effect was not observed. During the recognition task, schizophrenia patients exhibited lower BOLD responses, less accuracy, and longer reaction times to famous and unfamiliar faces. Our results support the hypothesis that impaired face processing in schizophrenia is related to early-stage deficits during the encoding and recognition of faces.
The hows and whys of face memory: level of construal influences the recognition of human faces
Wyer, Natalie A.; Hollins, Timothy J.; Pahl, Sabine; Roper, Jean
2015-01-01
Three experiments investigated the influence of level of construal (i.e., the interpretation of actions in terms of their meaning or their details) on different stages of face memory. We employed a standard multiple-face recognition paradigm, with half of the faces inverted at test. Construal level was manipulated prior to recognition (Experiment 1), during study (Experiment 2) or both (Experiment 3). The results support a general advantage for high-level construal over low-level construal at both study and at test, and suggest that matching processing style between study and recognition has no advantage. These experiments provide additional evidence in support of a link between semantic processing (i.e., construal) and visual (i.e., face) processing. We conclude with a discussion of implications for current theories relating to both construal and face processing. PMID:26500586
Closed-loop dialog model of face-to-face communication with a photo-real virtual human
NASA Astrophysics Data System (ADS)
Kiss, Bernadette; Benedek, Balázs; Szijárto, Gábor; Takács, Barnabás
2004-01-01
We describe an advanced Human Computer Interaction (HCI) model that employs photo-realistic virtual humans to provide digital media users with information, learning services and entertainment in a highly personalized and adaptive manner. The system can be used as a computer interface or as a tool to deliver content to end-users. We model the interaction process between the user and the system as part of a closed-loop dialog taking place between the participants. This dialog exploits the most important characteristics of a face-to-face communication process, including the use of non-verbal gestures and meta-communication signals to control the flow of information. Our solution is based on a Virtual Human Interface (VHI) technology that was specifically designed to create emotional engagement between the virtual agent and the user, thus increasing the efficiency of learning and/or absorbing any information broadcast through this device. The paper reviews the basic building blocks and technologies needed to create such a system and discusses its advantages over other existing methods.
Devue, Christel; Barsics, Catherine
2016-10-01
Most humans seem to demonstrate astonishingly high levels of skill in face processing if one considers the sophisticated level of fine-tuned discrimination that face recognition requires. However, numerous studies now indicate that the ability to process faces is not as fundamental as once thought and that performance can range from despairingly poor to extraordinarily high across people. Here we studied people who are super specialists of faces, namely portrait artists, to examine how their specific visual experience with faces relates to a range of face processing skills (perceptual discrimination, short- and longer-term recognition). Artists show better perceptual discrimination and, to some extent, recognition of newly learned faces than controls. They are also more accurate on other perceptual tasks (i.e., involving non-face stimuli or mental rotation). By contrast, artists do not display an advantage compared to controls on longer-term face recognition (i.e., famous faces) nor on person recognition from other sensory modalities (i.e., voices). Finally, the face inversion effect exists in artists and controls and is not modulated by artistic practice. Advantages in face processing for artists thus seem to closely mirror the perceptual and visual short-term memory skills involved in portraiture.
Neural evidence for the subliminal processing of facial trustworthiness in infancy.
Jessen, Sarah; Grossmann, Tobias
2017-04-22
Face evaluation is thought to play a vital role in human social interactions. One prominent aspect is the evaluation of facial signs of trustworthiness, which has been shown to occur reliably, rapidly, and without conscious awareness in adults. Recent developmental work indicates that the sensitivity to facial trustworthiness has early ontogenetic origins, as it can already be observed in infancy. However, it is unclear whether infants' sensitivity to facial signs of trustworthiness relies upon conscious processing of a face or, similar to adults, occurs also in response to subliminal faces. To investigate this question, we conducted an event-related brain potential (ERP) study, in which we presented 7-month-old infants with faces varying in trustworthiness. Facial stimuli were presented subliminally (below infants' face visibility threshold) for only 50 ms and then masked by presenting a scrambled face image. Our data revealed that infants' ERP responses to subliminally presented faces differed as a function of trustworthiness. Specifically, untrustworthy faces elicited an enhanced negative slow wave (800-1000 ms) at frontal and central electrodes. The current findings critically extend prior work by showing that, similar to adults, infants' neural detection of facial signs of trustworthiness occurs also in response to subliminal faces. This supports the view that detecting facial trustworthiness is an early developing and automatic process in humans.
Sex differences in face gender recognition: an event-related potential study.
Sun, Yueting; Gao, Xiaochao; Han, Shihui
2010-04-23
Multiple level neurocognitive processes are involved in face processing in humans. The present study examined whether the early face processing such as structural encoding is modulated by task demands that manipulate attention to perceptual or social features of faces and such an effect, if any, is different between men and women. Event-related brain potentials were recorded from male and female adults while they identified a low-level perceptual feature of faces (i.e., face orientation) and a high-level social feature of faces (i.e., gender). We found that task demands that required the processing of face orientations or face gender resulted in modulations of both the early occipital/temporal negativity (N170) and the late central/parietal positivity (P3). The N170 amplitude was smaller in the gender relative to the orientation identification task whereas the P3 amplitude was larger in the gender identification task relative to the orientation identification task. In addition, these effects were much stronger in women than in men. Our findings suggest that attention to social information in faces such as gender modulates both the early encoding of facial structures and late evaluative process of faces to a greater degree in women than in men.
Face processing in Williams syndrome is already atypical in infancy.
D'Souza, Dean; Cole, Victoria; Farran, Emily K; Brown, Janice H; Humphreys, Kate; Howard, John; Rodic, Maja; Dekker, Tessa M; D'Souza, Hana; Karmiloff-Smith, Annette
2015-01-01
Face processing is a crucial socio-cognitive ability. Is it acquired progressively or does it constitute an innately-specified, face-processing module? The latter would be supported if some individuals with seriously impaired intelligence nonetheless showed intact face-processing abilities. Some theorists claim that Williams syndrome (WS) provides such evidence since, despite IQs in the 50s, adolescents/adults with WS score in the normal range on standardized face-processing tests. Others argue that atypical neural and cognitive processes underlie WS face-processing proficiencies. But what about infants with WS? Do they start with typical face-processing abilities, with atypicality developing later, or are atypicalities already evident in infancy? We used an infant familiarization/novelty design and compared infants with WS to typically developing controls as well as to a group of infants with Down syndrome matched on both mental and chronological age. Participants were familiarized with a schematic face, after which they saw a novel face in which either the features (eye shape) were changed or just the configuration of the original features. Configural changes were processed successfully by controls, but not by infants with WS who were only sensitive to featural changes and who showed syndrome-specific profiles different from infants with the other neurodevelopmental disorder. Our findings indicate that theorists can no longer use the case of WS to support claims that evolution has endowed the human brain with an independent face-processing module.
Neural Representations of Faces and Body Parts in Macaque and Human Cortex: A Comparative fMRI Study
Pinsk, Mark A; Arcaro, Michael; Weiner, Kevin S; Kalkus, Jan F; Inati, Souheil J; Gross, Charles G; Kastner, Sabine
2009-05-01
Single-cell studies in the macaque have reported selective neural responses evoked by visual presentations of faces and bodies. Consistent with these findings, functional magnetic resonance imaging studies in humans and monkeys indicate that regions in temporal cortex respond preferentially to faces and bodies. However, it is not clear how these areas correspond across the two species. Here, we directly compared category-selective areas in macaques and humans using virtually identical techniques. In the macaque, several face- and body part-selective areas were found located along the superior temporal sulcus (STS) and middle temporal gyrus (MTG). In the human, similar to previous studies, face-selective areas were found in ventral occipital and temporal cortex and an additional face-selective area was found in the anterior temporal cortex. Face-selective areas were also found in lateral temporal cortex, including the previously reported posterior STS area. Body part-selective areas were identified in the human fusiform gyrus and lateral occipitotemporal cortex. In a first experiment, both monkey and human subjects were presented with pictures of faces, body parts, foods, scenes, and man-made objects, to examine the response profiles of each category-selective area to the five stimulus types. In a second experiment, face processing was examined by presenting upright and inverted faces. By comparing the responses and spatial relationships of the areas, we propose potential correspondences across species. Adjacent and overlapping areas in the macaque anterior STS/MTG responded strongly to both faces and body parts, similar to areas in the human fusiform gyrus and posterior STS. Furthermore, face-selective areas on the ventral bank of the STS/MTG discriminated both upright and inverted faces from objects, similar to areas in the human ventral temporal cortex. 
Overall, our findings demonstrate commonalities and differences in the wide-scale brain organization between the two species and provide an initial step toward establishing functionally homologous category-selective areas. PMID:19225169
Kuo, Po-Chih; Chen, Yong-Sheng; Chen, Li-Fen
2018-05-01
The main challenge in decoding neural representations lies in linking neural activity to representational content or abstract concepts. The transformation from a neural-based to a low-dimensional representation may hold the key to encoding perceptual processes in the human brain. In this study, we developed a novel model by which to represent two changeable features of faces: face viewpoint and gaze direction. These features are embedded in spatiotemporal brain activity derived from magnetoencephalographic data. Our decoding results demonstrate that face viewpoint and gaze direction can be represented by manifold structures constructed from brain responses in the bilateral occipital face area and right superior temporal sulcus, respectively. Our results also show that the superposition of brain activity in the manifold space reveals the viewpoints of faces as well as directions of gazes as perceived by the subject. The proposed manifold representation model provides a novel opportunity to gain further insight into the processing of information in the human brain. © 2018 Wiley Periodicals, Inc.
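The abstract above does not specify the authors' pipeline, but the core idea (smoothly varying stimulus features tracing out a low-dimensional manifold in brain-response space) can be illustrated with a minimal sketch. Everything here is assumed for illustration: synthetic response vectors stand in for MEG data, and a plain PCA projection stands in for the paper's manifold construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for trial-averaged sensor responses: each stimulus is a
# face viewpoint angle, and responses vary smoothly with angle plus noise.
angles = np.linspace(-90, 90, 37)                    # viewpoint in degrees
basis = rng.normal(size=(2, 100))                    # 100 "sensors"
signal = np.stack([np.cos(np.radians(angles)),
                   np.sin(np.radians(angles))], axis=1) @ basis
responses = signal + 0.05 * rng.normal(size=signal.shape)

# Linear manifold recovery via PCA: project onto the top two components.
centered = responses - responses.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
embedding = centered @ vt[:2].T                      # (37, 2) manifold coords

# On the recovered manifold, neighbouring viewpoints land near each other:
# each step between adjacent angles is far smaller than the end-to-end span.
step = np.linalg.norm(np.diff(embedding, axis=0), axis=1)
span = np.linalg.norm(embedding[0] - embedding[-1])
```

A nonlinear embedding (e.g. Isomap) would be closer to a true "manifold structure", but the linear version already shows the ordering of viewpoints being preserved in the low-dimensional space.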
Minami, T; Goto, K; Kitazaki, M; Nakauchi, S
2011-03-10
In humans, face configuration, contour and color may affect face perception, which is important for social interactions. This study aimed to determine the effect of color information on face perception by measuring event-related potentials (ERPs) during the presentation of natural- and bluish-colored faces. Our results demonstrated that the amplitude of the N170 event-related potential, which correlates strongly with face processing, was higher in response to a bluish-colored face than to a natural-colored face. However, gamma-band activity was insensitive to the deviation from a natural face color. These results indicated that color information affects the N170 associated with a face detection mechanism, which suggests that face color is important for face detection. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
Spatiotemporal dynamics in human visual cortex rapidly encode the emotional content of faces.
Dima, Diana C; Perry, Gavin; Messaritaki, Eirini; Zhang, Jiaxiang; Singh, Krish D
2018-06-08
Recognizing emotion in faces is important in human interaction and survival, yet existing studies do not paint a consistent picture of the neural representation supporting this task. To address this, we collected magnetoencephalography (MEG) data while participants passively viewed happy, angry and neutral faces. Using time-resolved decoding of sensor-level data, we show that responses to angry faces can be discriminated from happy and neutral faces as early as 90 ms after stimulus onset and only 10 ms later than faces can be discriminated from scrambled stimuli, even in the absence of differences in evoked responses. Time-resolved relevance patterns in source space track expression-related information from the visual cortex (100 ms) to higher-level temporal and frontal areas (200-500 ms). Together, our results point to a system optimised for rapid processing of emotional faces and preferentially tuned to threat, consistent with the important evolutionary role that such a system must have played in the development of human social interactions. © 2018 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
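Time-resolved decoding of the kind described above can be sketched in a few lines. This is not the authors' analysis: the data are synthetic (a class difference injected from one timepoint onward), and a simple nearest-centroid classifier stands in for whatever decoder they used; all array sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "MEG" data: trials x sensors x timepoints, with a class difference
# (e.g. angry vs. neutral faces) appearing from timepoint 30 onward.
n_trials, n_sensors, n_times = 200, 32, 100
labels = np.repeat([0, 1], n_trials // 2)
data = rng.normal(size=(n_trials, n_sensors, n_times))
effect = rng.normal(size=n_sensors)
data[labels == 1, :, 30:] += 0.5 * effect[:, None]

def decode_timecourse(data, labels):
    """Nearest-centroid decoding accuracy at each timepoint (held-out trials)."""
    train = np.arange(n_trials) % 2 == 0        # even trials train, odd test
    acc = np.empty(n_times)
    for t in range(n_times):
        x = data[:, :, t]
        c0 = x[train & (labels == 0)].mean(axis=0)
        c1 = x[train & (labels == 1)].mean(axis=0)
        pred = (np.linalg.norm(x[~train] - c1, axis=1)
                < np.linalg.norm(x[~train] - c0, axis=1)).astype(int)
        acc[t] = (pred == labels[~train]).mean()
    return acc

acc = decode_timecourse(data, labels)
# acc hovers at chance before the effect onset and rises well above it after.
```

Plotting `acc` against time gives exactly the kind of decoding timecourse the abstract reports, with onset latency read off as the first above-chance timepoint.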
Neurons in the human amygdala encode face identity, but not gaze direction.
Mormann, Florian; Niediek, Johannes; Tudusciuc, Oana; Quesada, Carlos M; Coenen, Volker A; Elger, Christian E; Adolphs, Ralph
2015-11-01
The amygdala is important for face processing, and direction of eye gaze is one of the most socially salient facial signals. Recording from over 200 neurons in the amygdala of neurosurgical patients, we found robust encoding of the identity of neutral-expression faces, but not of their direction of gaze. Processing of gaze direction may rely on a predominantly cortical network rather than the amygdala.
Discrimination between smiling faces: Human observers vs. automated face analysis.
Del Líbano, Mario; Calvo, Manuel G; Fernández-Martín, Andrés; Recio, Guillermo
2018-05-11
This study investigated (a) how prototypical happy faces (with happy eyes and a smile) can be discriminated from blended expressions with a smile but non-happy eyes, depending on the type and intensity of the eye expression; and (b) how smile discrimination differs for human perceivers versus automated face analysis, depending on affective valence and morphological facial features. Human observers categorized faces as happy or non-happy, or rated their valence. Automated analysis (FACET software) computed seven expressions (including joy/happiness) and 20 facial action units (AUs). Physical properties (low-level image statistics and visual saliency) of the face stimuli were controlled. Results revealed, first, that some blended expressions (especially those with angry eyes) had lower discrimination thresholds (i.e., they were identified as "non-happy" at lower non-happy eye intensities) than others (especially those with neutral eyes). Second, discrimination sensitivity was better for human perceivers than for automated FACET analysis. As an additional finding, affective valence predicted human discrimination performance, whereas morphological AUs predicted FACET discrimination. FACET can be a valid tool for categorizing prototypical expressions, but is currently more limited than human observers for discriminating blended expressions. Configural processing facilitates the detection of incongruences across facial regions, and thus the detection of non-genuine smiling faces (due to non-happy eyes). Copyright © 2018 Elsevier B.V. All rights reserved.
Ontogeny of the maxilla in Neanderthals and their ancestors
Lacruz, Rodrigo S.; Bromage, Timothy G.; O'Higgins, Paul; Arsuaga, Juan-Luis; Stringer, Chris; Godinho, Ricardo Miguel; Warshaw, Johanna; Martínez, Ignacio; Gracia-Tellez, Ana; de Castro, José María Bermúdez; Carbonell, Eudald
2015-01-01
Neanderthals had large and projecting (prognathic) faces similar to those of their putative ancestors from Sima de los Huesos (SH) and different from the retracted modern human face. When during development such differences arose, and which morphogenetic modifications were involved, remain unknown. We show that maxillary growth remodelling (bone formation and resorption) of the Devil's Tower (Gibraltar 2) and La Quina 18 Neanderthals and four SH hominins, all sub-adults, shows extensive bone deposition, whereas in modern humans extensive osteoclastic bone resorption is found in the same regions. This morphogenetic difference is evident by ∼5 years of age. Modern human faces are distinct from those of the Neanderthal and SH fossils in part because their postnatal growth processes differ markedly. The growth remodelling identified in these fossil hominins is shared with Australopithecus and early Homo but not with modern humans, suggesting that the modern human face is developmentally derived. PMID:26639346
Functional organization of the face-sensitive areas in human occipital-temporal cortex.
Shao, Hanyu; Weng, Xuchu; He, Sheng
2017-08-15
Human occipital-temporal cortex features several areas sensitive to faces, presumably forming the biological substrate for face perception. To date, there are piecemeal insights regarding the functional organization of these regions. They have come, however, from studies that are far from homogeneous with regard to the regions involved, the experimental design, and the data analysis approach. In order to provide an overall view of the functional organization of the face-sensitive areas, it is necessary to conduct a comprehensive study that taps into the pivotal functional properties of all the face-sensitive areas, within the context of the same experimental design, and uses multiple data analysis approaches. In this study, we identified the most robustly activated face-sensitive areas in bilateral occipital-temporal cortices (i.e., AFP, aFFA, pFFA, OFA, pcSTS, pSTS) and systematically compared their regionally averaged activation and multivoxel activation patterns to 96 images from 16 object categories, including faces and non-faces. This condition-rich and single-image analysis approach critically samples the functional properties of a brain region, allowing us to test how two basic functional properties, namely face-category selectivity and face-exemplar sensitivity, are distributed among these regions. Moreover, by examining the correlational structure of neural responses to the 96 images, we characterize their interactions in the greater face-processing network. We found that (1) r-pFFA showed the highest face-category selectivity, followed by l-pFFA, bilateral aFFA and OFA, and then bilateral pcSTS.
In contrast, bilateral AFP and pSTS showed low face-category selectivity; (2) l-aFFA, l-pcSTS and bilateral AFP showed evidence of face-exemplar sensitivity; (3) r-OFA showed high overall response similarities with bilateral LOC and r-pFFA, suggesting it might be a transitional stage between general and face-selective information processing; (4) r-aFFA showed high face-selective response similarity with r-pFFA and r-OFA, indicating it was specifically involved in processing face information. Results also reveal two properties of these face-sensitive regions across the two hemispheres: (1) the averaged left intra-hemispheric response similarity for the images was lower than the averaged right intra-hemispheric and the inter-hemispheric response similarity, implying convergence of face processing towards the right hemisphere, and (2) the response similarities between homologous regions in the two hemispheres decreased as information processing proceeded onward from the early, more posterior, processing stage (OFA), indicating an increasing degree of hemispheric specialization and right hemisphere bias for face information processing. This study contributes to an emerging picture of how faces are processed within the occipital and temporal cortex. Copyright © 2017 Elsevier Inc. All rights reserved.
Koda, Hiroki; Sato, Anna; Kato, Akemi
2013-09-01
Humans innately perceive infantile features as cute. The ethologist Konrad Lorenz proposed that the infantile features of mammals and birds, known as the baby schema (kindchenschema), motivate caretaking behaviour. As biologically relevant stimuli, newborns are likely to be processed specially in terms of visual attention, perception, and cognition. Recent demonstrations on human participants have shown visual attentional prioritisation of newborn faces (i.e., newborn faces capture visual attention). Although characteristics equivalent to those found in the faces of human infants are found in nonhuman primates, attentional capture by newborn faces has not been tested in nonhuman primates. We examined whether conspecific newborn faces captured the visual attention of two Japanese monkeys using a target-detection task based on the dot-probe tasks commonly used in human visual attention studies. Although visual cues enhanced target detection in subject monkeys, our results, unlike those for humans, showed no evidence of attentional prioritisation of newborn faces by monkeys. Our results demonstrate the validity of the dot-probe task for visual attention studies in monkeys and suggest a novel approach to bridge the gap between human and nonhuman primate social cognition research. This suggests that attentional capture by newborn faces is not common to macaques, but it is unclear whether nursing experience influences their perception and recognition of infantile stimuli. Additional comparative studies are needed to reveal the evolutionary origins of baby-schema perception and recognition. Copyright © 2013 Elsevier B.V. All rights reserved.
Unconscious processing of facial attractiveness: invisible attractive faces orient visual attention.
Hung, Shao-Min; Nieh, Chih-Hsuan; Hsieh, Po-Jang
2016-11-16
Past research has demonstrated humans' extraordinary ability to extract information from a face in the blink of an eye, including its emotion, gaze direction, and attractiveness. However, it remains unclear whether facial attractiveness can be processed, and influence our behavior, in the complete absence of conscious awareness. Here we demonstrate unconscious processing of facial attractiveness with three distinct approaches. In Experiment 1, the time taken for faces to break interocular suppression was measured. The results showed that attractive faces enjoyed the privilege of breaking suppression and reaching consciousness earlier. In Experiment 2, we further showed that attractive faces had lower visibility thresholds, again suggesting that facial attractiveness could be processed more easily to reach consciousness. Crucially, in Experiment 3, a significant decrease in accuracy on an orientation discrimination task subsequent to an invisible attractive face showed that attractive faces, albeit suppressed and invisible, still exerted an effect by orienting attention. Taken together, for the first time, we show that facial attractiveness can be processed in the complete absence of consciousness, and that an unconscious attractive face is still capable of directing our attention.
Energy conservation using face detection
NASA Astrophysics Data System (ADS)
Deotale, Nilesh T.; Kalbande, Dhananjay R.; Mishra, Akassh A.
2011-10-01
Computerized face detection is concerned with the difficult task of locating human faces in images or video. It has several applications, including face recognition, simultaneous multiple-face processing, biometrics, security, video surveillance, human-computer interfaces, image database management, autofocus in digital cameras, and selecting regions of interest in photo slideshows that pan and scale. The present paper deals with energy conservation using face detection. Automating the process requires various image processing techniques. Methods that can be used for face detection include contour tracking, template matching, controlled-background, model-based, motion-based and color-based approaches. The video of the subject is converted into images, which are then selected manually for processing. However, several factors make face detection difficult: poor illumination, movement of the face, viewpoint-dependent physical appearance, acquisition geometry, imaging conditions, and compression artifacts. This paper reports an algorithm for conserving energy using face detection on various devices: the face is detected, the brightness of the complete image is reduced, and the brightness of the region of the image where the face is located is then adjusted using histogram equalization.
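The energy-saving step the abstract describes (dim the whole frame, then equalize the face region) can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the face bounding box is assumed to come from any external detector, and the dimming factor is a made-up parameter.

```python
import numpy as np

def dim_except_face(frame, face_box, dim_factor=0.4):
    """Reduce brightness everywhere, then histogram-equalize the face region.

    frame     : 2-D uint8 grayscale image
    face_box  : (top, left, height, width), e.g. from a Haar-cascade detector
    dim_factor: brightness multiplier applied to the whole frame
    """
    top, left, h, w = face_box
    out = (frame.astype(np.float64) * dim_factor).astype(np.uint8)

    # Histogram equalization of the face patch: build a lookup table from the
    # cumulative histogram so face intensities spread over the full 0..255 range.
    face = frame[top:top + h, left:left + w]
    hist = np.bincount(face.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.clip(np.round((cdf - cdf_min) / (face.size - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    out[top:top + h, left:left + w] = lut[face]
    return out

# Toy frame: mid-grey background with a brighter "face" patch.
frame = np.full((120, 160), 120, dtype=np.uint8)
frame[40:80, 60:100] = np.linspace(100, 200, 40, dtype=np.uint8)[None, :]
result = dim_except_face(frame, (40, 60, 40, 40))
```

In practice the detection step would use an off-the-shelf detector (e.g. OpenCV's cascade classifiers) and the equalized face region would be left at full backlight while the rest of the display is dimmed.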
Romanski, Lizabeth M.
2012-01-01
The integration of facial gestures and vocal signals is an essential process in human communication and relies on an interconnected circuit of brain regions, including language regions in the inferior frontal gyrus (IFG). Studies have determined that ventral prefrontal cortical regions in macaques [e.g., the ventrolateral prefrontal cortex (VLPFC)] share similar cytoarchitectonic features as cortical areas in the human IFG, suggesting structural homology. Anterograde and retrograde tracing studies show that macaque VLPFC receives afferents from the superior and inferior temporal gyrus, which provide complex auditory and visual information, respectively. Moreover, physiological studies have shown that single neurons in VLPFC integrate species-specific face and vocal stimuli. Although bimodal responses may be found across a wide region of prefrontal cortex, vocalization responsive cells, which also respond to faces, are mainly found in anterior VLPFC. This suggests that VLPFC may be specialized to process and integrate social communication information, just as the IFG is specialized to process and integrate speech and gestures in the human brain. PMID:22723356
Dissimilar processing of emotional facial expressions in human and monkey temporal cortex
Zhu, Qi; Nelissen, Koen; Van den Stock, Jan; De Winter, François-Laurent; Pauwels, Karl; de Gelder, Beatrice; Vanduffel, Wim; Vandenbulcke, Mathieu
2013-01-01
Emotional facial expressions play an important role in social communication across primates. Despite major progress made in our understanding of categorical information processing such as for objects and faces, little is known, however, about how the primate brain evolved to process emotional cues. In this study, we used functional magnetic resonance imaging (fMRI) to compare the processing of emotional facial expressions between monkeys and humans. We used a 2 × 2 × 2 factorial design with species (human and monkey), expression (fear and chewing) and configuration (intact versus scrambled) as factors. At the whole brain level, selective neural responses to conspecific emotional expressions were anatomically confined to the superior temporal sulcus (STS) in humans. Within the human STS, we found functional subdivisions with a face-selective right posterior STS area that also responded selectively to emotional expressions of other species and a more anterior area in the right middle STS that responded specifically to human emotions. Hence, we argue that the latter region does not show a mere emotion-dependent modulation of activity but is primarily driven by human emotional facial expressions. Conversely, in monkeys, emotional responses appeared in earlier visual cortex and outside face-selective regions in inferior temporal cortex that responded also to multiple visual categories. Within monkey IT, we also found areas that were more responsive to conspecific than to non-conspecific emotional expressions but these responses were not as specific as in human middle STS. Overall, our results indicate that human STS may have developed unique properties to deal with social cues such as emotional expressions. PMID:23142071
Being BOLD: The neural dynamics of face perception.
Gentile, Francesco; Ales, Justin; Rossion, Bruno
2017-01-01
According to a non-hierarchical view of human cortical face processing, selective responses to faces may emerge in a higher-order area of the hierarchy, in the lateral part of the middle fusiform gyrus (fusiform face area [FFA]), independently from face-selective responses in the lateral inferior occipital gyrus (occipital face area [OFA]), a lower-order area. Here we provide a stringent test of this hypothesis by gradually revealing segmented face stimuli through strict linear descrambling of phase information [Ales et al., 2012]. Using a short fMRI sampling interval (500 ms) and single-subject statistical analysis, we show a face-selective response emerging earlier, that is, at a lower level of structural (i.e., phase) information, in the FFA than in the OFA. In both regions, the face detection response emerged at a lower level of structural information for upright than for inverted faces, in line with behavioral responses and with previous direct recordings of neural activity showing delayed responses to inverted faces. Overall, these results support the non-hierarchical view of human cortical face processing and open new perspectives for time-resolved analysis at the single-subject level of fMRI data obtained during continuously evolving visual stimulation. Hum Brain Mapp 38:120-139, 2017. © 2016 Wiley Periodicals, Inc.
Hornung, Jonas; Kogler, Lydia; Erb, Michael; Freiherr, Jessica; Derntl, Birgit
2018-05-01
The androgen derivative androstadienone (AND) is a substance found in human sweat and thus may act as a human chemosignal. With the current experiment, we aimed to explore how AND affects interference processing during an emotional Stroop task that used human faces as target and emotional words as distractor stimuli. This was complemented by functional magnetic resonance imaging (fMRI) to unravel the neural mechanisms of AND action. Based on previous accounts, we expected AND to increase neural activation in areas commonly implicated in the evaluation of emotional faces and to change neural activation in brain regions linked to interference processing. To this end, a total of 80 healthy individuals (oral contraceptive users, luteal women, men) were tested twice on two consecutive days with an emotional Stroop task using fMRI. Our results suggest that AND increases interference processing in brain areas that are heavily recruited during emotional conflict. At the same time, correlation analyses revealed that this neural interference processing was paralleled by higher behavioral costs (response times) with higher interference-related brain activation under AND. Furthermore, AND elicited higher activation in regions implicated in emotional face processing, including right fusiform gyrus, inferior frontal gyrus and dorsomedial cortex; here, neural activation was not coupled to behavioral outcome. Finally, despite previous accounts of increased hypothalamic activation under AND, we were not able to replicate this finding and discuss possible reasons for this discrepancy. To conclude, AND increased interference processing in regions heavily recruited during emotional conflict, which was coupled to higher costs in resolving emotional conflicts with stronger interference-related brain activation under AND. At the moment it remains unclear whether these effects are due to changes in conflict detection or resolution.
However, evidence most consistently suggests that AND does not draw attention to the most potent socio-emotional information (human faces) but rather highlights representations of emotional words. Copyright © 2018 Elsevier Inc. All rights reserved.
Neural Correlates of Human and Monkey Face Processing in 9-Month-Old Infants
ERIC Educational Resources Information Center
Scott, Lisa S.; Shannon, Robert W.; Nelson, Charles A.
2006-01-01
Behavioral and electrophysiological evidence suggests a gradual, experience-dependent specialization of cortical face processing systems that takes place largely in the 1st year of life. To further investigate these findings, event-related potentials (ERPs) were collected from typically developing 9-month-old infants presented with pictures of…
Leube, Dirk T; Yoon, Hyo Woon; Rapp, Alexander; Erb, Michael; Grodd, Wolfgang; Bartels, Mathias; Kircher, Tilo T J
2003-05-22
Perception of upright faces relies on configural processing. Therefore, recognition of inverted faces is impaired compared to upright faces. In a functional magnetic resonance imaging experiment we investigated the neural correlate of a face inversion task. Thirteen healthy subjects were presented with an equal number of upright and inverted faces, alternating with a low-level baseline consisting of an upright and an inverted picture of an abstract symbol. Brain activation was calculated for upright minus inverted faces. For this differential contrast, we found a signal change in the right superior temporal sulcus and right insula. Configural properties are processed in a network comprising right superior temporal and insular cortex.
Event-related potential and eye tracking evidence of the developmental dynamics of face processing.
Meaux, Emilie; Hernandez, Nadia; Carteau-Martin, Isabelle; Martineau, Joëlle; Barthélémy, Catherine; Bonnet-Brilhault, Frédérique; Batty, Magali
2014-04-01
Although the wide neural network and specific processes related to faces have been revealed, the process by which face-processing ability develops remains unclear. An interest in faces appears early in infancy, and developmental findings to date have suggested a long maturation process of the mechanisms involved in face processing. These developmental changes may be supported by the acquisition of more efficient strategies to process faces (theory of expertise) and by the maturation of the face neural network identified in adults. This study aimed to clarify the link between event-related potential (ERP) development in response to faces and the behavioral changes in the way faces are scanned throughout childhood. Twenty-six young children (4-10 years of age) were included in two experimental paradigms, the first exploring ERPs during face processing, the second investigating the visual exploration of faces using an eye-tracking system. The results confirmed significant age-related changes in visual ERPs (P1, N170 and P2). Moreover, an increased interest in the eye region and an attentional shift from the mouth to the eyes were also revealed. The proportion of early fixations on the eye region was correlated with N170 and P2 characteristics, highlighting a link between the development of ERPs and gaze behavior. We suggest that these overall developmental dynamics may be sustained by a gradual, experience-dependent specialization in face processing (i.e. acquisition of face expertise), which produces a more automatic and efficient network associated with effortless identification of faces, and allows the emergence of human-specific social and communication skills. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Piepers, Daniel W.; Robbins, Rachel A.
2012-01-01
It is widely agreed that the human face is processed differently from other objects. However there is a lack of consensus on what is meant by a wide array of terms used to describe this “special” face processing (e.g., holistic and configural) and the perceptually relevant information within a face (e.g., relational properties and configuration). This paper will review existing models of holistic/configural processing, discuss how they differ from one another conceptually, and review the wide variety of measures used to tap into these concepts. In general we favor a model where holistic processing of a face includes some or all of the interrelations between features and has separate coding for features. However, some aspects of the model remain unclear. We propose the use of moving faces as a way of clarifying what types of information are included in the holistic representation of a face. PMID:23413184
Emotion unfolded by motion: a role for parietal lobe in decoding dynamic facial expressions.
Sarkheil, Pegah; Goebel, Rainer; Schneider, Frank; Mathiak, Klaus
2013-12-01
Facial expressions convey important emotional and social information and are frequently applied in investigations of human affective processing. Dynamic faces may provide higher ecological validity for examining perceptual and cognitive processing of facial expressions. Higher-order processing of emotional faces was addressed by varying the task and virtual face models systematically. Blood oxygenation level-dependent activation was assessed using functional magnetic resonance imaging in 20 healthy volunteers while viewing and evaluating either the emotion or gender intensity of dynamic face stimuli. A general linear model analysis revealed that high valence activated a network of motion-responsive areas, indicating that visual motion areas support perceptual coding of the motion-based intensity of facial expressions. The comparison of the emotion with the gender discrimination task revealed increased activation of the inferior parietal lobule, which highlights the involvement of parietal areas in processing high-level features of faces. Dynamic emotional stimuli may help to emphasize functions of the hypothesized 'extended' over the 'core' system for face processing.
Studies in the Human Use of Controlled English
2015-12-01
Controlled English (CE) is intended to aid human problem-solving processes when analysing data and generating high-value conclusions in collaboration...state of affairs. The second approach is to guide a user face-to-face in formulating free English sentences into CE to solve a logic problem. The paper describes both approaches and provides an informal analysis of the results to date.
Letting Our Hearts Break: On Facing the "Hidden Wound" of Human Supremacy
ERIC Educational Resources Information Center
Martusewicz, Rebecca
2014-01-01
In this paper I argue that education must be defined by our willingness to experience compassion in the face of others' suffering and thus by an ethical imperative, and seek to expose psycho-social processes of shame as dark matters that inferiorize and subjugate those expressing such compassion for the more-than-human world. Beginning with…
Rangarajan, Vinitha; Parvizi, Josef
2016-03-01
The ventral temporal cortex (VTC) contains several areas with selective responses to words, numbers, faces, and objects as demonstrated by numerous human and primate imaging and electrophysiological studies. Our recent work using electrocorticography (ECoG) confirmed the presence of face-selective neuronal populations in the human fusiform gyrus (FG) in patients implanted with intracranial electrodes in either the left or right hemisphere. Electrical brain stimulation (EBS) disrupted the conscious perception of faces only when it was delivered in the right, but not left, FG. In contrast to our previous findings, here we report both negative and positive EBS effects in right and left FG, respectively. The presence of right hemisphere language dominance in the first, and strong left-handedness and poor language processing performance in the second case, provide indirect clues about the functional architecture of the human VTC in relation to hemispheric asymmetries in language processing and handedness. Copyright © 2015 Elsevier Ltd. All rights reserved.
Nestor, Adrian; Vettel, Jean M; Tarr, Michael J
2013-11-01
What basic visual structures underlie human face detection and how can we extract such structures directly from the amplitude of neural responses elicited by face processing? Here, we address these issues by investigating an extension of noise-based image classification to BOLD responses recorded in high-level visual areas. First, we assess the applicability of this classification method to such data and, second, we explore its results in connection with the neural processing of faces. To this end, we construct luminance templates from white noise fields based on the response of face-selective areas in the human ventral cortex. Using behaviorally and neurally-derived classification images, our results reveal a family of simple but robust image structures subserving face representation and detection. Thus, we confirm the role played by classical face selective regions in face detection and we help clarify the representational basis of this perceptual function. From a theory standpoint, our findings support the idea of simple but highly diagnostic neurally-coded features for face detection. At the same time, from a methodological perspective, our work demonstrates the ability of noise-based image classification in conjunction with fMRI to help uncover the structure of high-level perceptual representations. Copyright © 2012 Wiley Periodicals, Inc.
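The core of the noise-based (reverse-correlation) classification-image method described above can be shown in a toy simulation. This is a sketch under stated assumptions, not the authors' pipeline: a known linear "template" stands in for the observer or brain area, and white-noise fields stand in for the stimuli; the classification image is simply the response-weighted average of the noise.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hidden template the simulated "observer" (or brain area) is assumed to use.
size = 16
yy, xx = np.mgrid[0:size, 0:size]
template = np.exp(-((xx - 8) ** 2 + (yy - 8) ** 2) / 20.0)
template -= template.mean()

# Simulate trials: white-noise fields; response = template match + noise.
n_trials = 5000
noise = rng.normal(size=(n_trials, size, size))
responses = (noise * template).sum(axis=(1, 2)) + rng.normal(size=n_trials)

# Classification image: response-weighted average of the noise fields.
cimg = (responses[:, None, None] * noise).mean(axis=0)
cimg -= cimg.mean()

# The recovered image correlates strongly with the hidden template.
r = np.corrcoef(cimg.ravel(), template.ravel())[0, 1]
```

In the study, the per-trial "response" is the BOLD amplitude of a face-selective region (or a behavioral judgment), so the same weighted average reveals which image structures drive face detection.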
Isomura, Tomoko; Ogawa, Shino; Yamada, Satoko; Shibasaki, Masahiro; Masataka, Nobuo
2014-01-01
Previous studies have demonstrated that angry faces capture humans' attention more rapidly than emotionally positive faces. This phenomenon is referred to as the anger superiority effect (ASE). Despite atypical emotional processing, adults and children with Autism Spectrum Disorders (ASD) have been reported to show the ASE, as typically developing (TD) individuals do. So far, however, few studies have clarified whether the mechanisms underlying the ASE are the same for TD and ASD individuals. Here, we tested how TD and ASD children process schematic emotional faces during detection by employing a recognition task in combination with a face-in-the-crowd task. Results of the face-in-the-crowd task revealed the prevalence of the ASE in both TD and ASD children. However, the results of the recognition task revealed group differences: in TD children, detection of angry faces required more configural face processing and disrupted the processing of local features; in ASD children, on the other hand, it required more feature-based processing rather than configural processing. Despite the small sample sizes, these findings provide preliminary evidence that children with ASD, in contrast to TD children, show quick detection of angry faces by extracting local features in faces. PMID:24904477
NASA Astrophysics Data System (ADS)
Tsagkrasoulis, Dimosthenis; Hysi, Pirro; Spector, Tim; Montana, Giovanni
2017-04-01
The human face is a complex trait under strong genetic control, as evidenced by the striking visual similarity between twins. Nevertheless, heritability estimates of facial traits have often been surprisingly low or difficult to replicate. Furthermore, the construction of facial phenotypes that correspond to naturally perceived facial features remains largely a mystery. We present here a large-scale heritability study of face geometry that aims to address these issues. High-resolution, three-dimensional facial models have been acquired on a cohort of 952 twins recruited from the TwinsUK registry, and processed through a novel landmarking workflow, GESSA (Geodesic Ensemble Surface Sampling Algorithm). The algorithm places thousands of landmarks throughout the facial surface and automatically establishes point-wise correspondence across faces. These landmarks enabled us to intuitively characterize facial geometry at a fine level of detail through curvature measurements, yielding accurate heritability maps of the human face (www.heritabilitymaps.info).
Effects of configural processing on the perceptual spatial resolution for face features.
Namdar, Gal; Avidan, Galia; Ganel, Tzvi
2015-11-01
Configural processing governs human perception across various domains, including face perception. An established marker of configural face perception is the face inversion effect, in which performance is typically better for upright compared to inverted faces. In two experiments, we tested whether configural processing could influence basic visual abilities such as perceptual spatial resolution (i.e., the ability to detect spatial visual changes). Face-related perceptual spatial resolution was assessed by measuring the just noticeable difference (JND) to subtle positional changes between specific features in upright and inverted faces. The results revealed a robust inversion effect for spatial sensitivity to configural-based changes, such as the distance between the mouth and the nose, or the distance between the eyes and the nose. Critically, spatial resolution for face features within the region of the eyes (e.g., the interocular distance between the eyes) was not affected by inversion, suggesting that the eye region operates as a separate 'gestalt' unit which is relatively immune to manipulations that would normally hamper configural processing. Together these findings suggest that face orientation modulates fundamental psychophysical abilities including spatial resolution. Furthermore, they indicate that classic psychophysical methods can be used as a valid measure of configural face processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
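A JND of this kind is typically read off a fitted psychometric function. The sketch below is a minimal illustration with made-up trial counts and a logistic psychometric function; none of the numbers are the study's data or fitting procedure. It simulates an upright and an inverted condition and recovers a larger JND (poorer spatial resolution) for the inverted one.

```python
import numpy as np

rng = np.random.default_rng(1)

offsets = np.array([0.5, 1.0, 2.0, 4.0, 8.0])  # positional change (pixels)
n = 200                                        # trials per offset (assumed)

def psychometric(x, jnd):
    # Logistic psychometric function rising from chance (0.5);
    # at x == jnd the detection probability is ~0.73.
    return 1.0 / (1.0 + np.exp(-x / jnd))

def simulate(true_jnd):
    # Binomial counts of "different" responses at each offset.
    return rng.binomial(n, psychometric(offsets, true_jnd))

def fit_jnd(k):
    # Maximum-likelihood fit of the JND parameter by grid search.
    grid = np.linspace(0.2, 10.0, 500)
    best, best_ll = grid[0], -np.inf
    for j in grid:
        p = np.clip(psychometric(offsets, j), 1e-9, 1 - 1e-9)
        ll = float(np.sum(k * np.log(p) + (n - k) * np.log(1 - p)))
        if ll > best_ll:
            best, best_ll = j, ll
    return best

jnd_upright = fit_jnd(simulate(2.0))   # assumed "true" upright JND
jnd_inverted = fit_jnd(simulate(4.0))  # assumed larger inverted JND
print(jnd_upright < jnd_inverted)      # inversion should raise the JND
```

An inversion effect on spatial resolution then simply appears as jnd_inverted exceeding jnd_upright for configural-distance changes, while the two would be comparable for within-eye-region changes.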
NASA Astrophysics Data System (ADS)
Sumriddetchkajorn, Sarun; Chaitavon, Kosom
2009-07-01
This paper introduces a parallel measurement approach for fast infrared-based human temperature screening suitable for use in a large public area. Our key idea is based on the combination of simple image processing algorithms, infrared technology, and human flow management. With this multidisciplinary concept, we arrange as many people as possible in a two-dimensional space in front of a thermal imaging camera and then highlight all human facial areas through simple image filtering, image morphological, and particle analysis processes. In this way, each individual's face in the live thermal image can be located and the maximum facial skin temperature can be monitored and displayed. Our experiment shows a measured 1 ms processing time for highlighting all human face areas. With a thermal imaging camera having an FOV lens of 24° × 18° and 320 × 240 active pixels, the maximum facial skin temperatures of three people's faces located 1.3 m from the camera can also be simultaneously monitored and displayed at a measured rate of 31 fps, limited by the looping process for determining the coordinates of all faces. In our 3-day test at an ambient temperature of 24-30 °C, 57-72% relative humidity, and weak wind outside the hospital building, hyperthermic patients could be identified with 100% sensitivity and 36.4% specificity when the temperature threshold level and the offset temperature value were appropriately chosen. Locating our system away from the building doors, air conditioners, and electric fans, so as to eliminate wind blowing toward the camera lens, can significantly improve our system's specificity.
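The threshold-morphology-particle-analysis pipeline described above can be sketched on a synthetic frame. The temperature threshold, blob sizes, and frame contents below are assumptions, not the authors' calibrated parameters, and a plain flood-fill labeling stands in for a full morphological toolchain.

```python
import numpy as np

def label_regions(mask):
    """4-connected component labeling via iterative flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        current += 1
        stack = [start]
        while stack:
            r, c = stack.pop()
            if (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]
                    and mask[r, c] and not labels[r, c]):
                labels[r, c] = current
                stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return labels, current

def screen_frame(frame_c, skin_min=30.0, min_area=20):
    """Highlight warm face-like blobs in a thermal frame (deg C) and
    return the maximum skin temperature within each blob."""
    mask = frame_c >= skin_min          # simple temperature threshold
    labels, n = label_regions(mask)     # "particle analysis"
    temps = []
    for i in range(1, n + 1):
        region = labels == i
        if region.sum() >= min_area:    # reject small hot particles
            temps.append(float(frame_c[region].max()))
    return temps

# Synthetic 120x160 frame: 24 deg C background plus two warm "faces".
frame = np.full((120, 160), 24.0)
frame[20:50, 30:60] = 34.0     # normothermic face
frame[60:90, 100:130] = 38.2   # hyperthermic face
temps = screen_frame(frame)
print(sorted(temps))  # -> [34.0, 38.2]
```

Flagging the second blob as hyperthermic is then a single comparison of its maximum temperature against the chosen threshold plus offset.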
The hierarchical brain network for face recognition.
Zhen, Zonglei; Fang, Huizhen; Liu, Jia
2013-01-01
Numerous functional magnetic resonance imaging (fMRI) studies have identified multiple cortical regions that are involved in face processing in the human brain. However, few studies have characterized the face-processing network as a functioning whole. In this study, we used fMRI to identify face-selective regions in the entire brain and then explore the hierarchical structure of the face-processing network by analyzing functional connectivity among these regions. We identified twenty-five regions mainly in the occipital, temporal and frontal cortex that showed a reliable response selective to faces (versus objects) across participants and across scan sessions. Furthermore, these regions were clustered into three relatively independent sub-networks in a face-recognition task on the basis of the strength of functional connectivity among them. The functionality of the sub-networks likely corresponds to the recognition of individual identity, retrieval of semantic knowledge and representation of emotional information. Interestingly, when the task was switched to object recognition from face recognition, the functional connectivity between the inferior occipital gyrus and the rest of the face-selective regions was significantly reduced, suggesting that this region may serve as an entry node in the face-processing network. In sum, our study provides empirical evidence for cognitive and neural models of face recognition and helps elucidate the neural mechanisms underlying face recognition at the network level.
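The logic of clustering regions by the strength of their functional connectivity can be sketched with a toy simulation. The region count, noise level, and time-series length below are illustrative assumptions, not the study's fMRI data: regions in the same sub-network share a latent time course, so within-network correlations dominate and make the clustering recoverable.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: nine "face-selective regions" in three sub-networks, each
# sub-network driven by its own latent time course plus regional noise.
t = 300
latent = rng.standard_normal((3, t))
series = np.vstack([latent[k] + 0.8 * rng.standard_normal((3, t))
                    for k in range(3)])          # shape: (9 regions, t)

fc = np.corrcoef(series)                         # functional connectivity

membership = np.repeat([0, 1, 2], 3)             # true sub-network labels
same = membership[:, None] == membership[None, :]
off_diag = ~np.eye(len(membership), dtype=bool)
within = fc[same & off_diag]                     # within-network edges
between = fc[~same]                              # between-network edges

# Within-network connectivity should clearly exceed between-network
# connectivity; this gap is what clustering the matrix exploits.
print(round(float(within.mean()), 2), round(float(between.mean()), 2))
```

In the real analysis the same matrix would be fed to a clustering algorithm; here the within/between gap already exposes the three-block structure.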
Neuronal integration in visual cortex elevates face category tuning to conscious face perception
Fahrenfort, Johannes J.; Snijders, Tineke M.; Heinen, Klaartje; van Gaal, Simon; Scholte, H. Steven; Lamme, Victor A. F.
2012-01-01
The human brain has the extraordinary capability to transform cluttered sensory input into distinct object representations. For example, it is able to rapidly and seemingly without effort detect object categories in complex natural scenes. Surprisingly, category tuning is not sufficient to achieve conscious recognition of objects. What neural process beyond category extraction might elevate neural representations to the level where objects are consciously perceived? Here we show that visible and invisible faces produce similar category-selective responses in the ventral visual cortex. The pattern of neural activity evoked by visible faces could be used to decode the presence of invisible faces and vice versa. However, only visible faces caused extensive response enhancements and changes in neural oscillatory synchronization, as well as increased functional connectivity between higher and lower visual areas. We conclude that conscious face perception is more tightly linked to neural processes of sustained information integration and binding than to processes accommodating face category tuning. PMID:23236162
Functional selectivity for face processing in the temporal voice area of early deaf individuals
van Ackeren, Markus J.; Rabini, Giuseppe; Zonca, Joshua; Foa, Valentina; Baruffaldi, Francesca; Rezk, Mohamed; Pavani, Francesco; Rossion, Bruno; Collignon, Olivier
2017-01-01
Brain systems supporting face and voice processing both contribute to the extraction of important information for social interaction (e.g., person identity). How does the brain reorganize when one of these channels is absent? Here, we explore this question by combining behavioral and multimodal neuroimaging measures (magneto-encephalography and functional imaging) in a group of early deaf humans. We show enhanced selective neural response for faces and for individual face coding in a specific region of the auditory cortex that is typically specialized for voice perception in hearing individuals. In this region, selectivity to face signals emerges early in the visual processing hierarchy, shortly after typical face-selective responses in the ventral visual pathway. Functional and effective connectivity analyses suggest reorganization in long-range connections from early visual areas to the face-selective temporal area in individuals with early and profound deafness. Altogether, these observations demonstrate that regions that typically specialize for voice processing in the hearing brain preferentially reorganize for face processing in born-deaf people. Our results support the idea that cross-modal plasticity in the case of early sensory deprivation relates to the original functional specialization of the reorganized brain regions. PMID:28652333
Ramírez, Fernando M
2018-05-01
Viewpoint-invariant face recognition is thought to be subserved by a distributed network of occipitotemporal face-selective areas that, except for the human anterior temporal lobe, have been shown to also contain face-orientation information. This review begins by highlighting the importance of bilateral symmetry for viewpoint-invariant recognition and face-orientation perception. Then, monkey electrophysiological evidence is surveyed describing key tuning properties of face-selective neurons, including neurons bimodally tuned to mirror-symmetric face views, followed by studies combining functional magnetic resonance imaging (fMRI) and multivariate pattern analyses to probe the representation of face-orientation and identity information in humans. Altogether, neuroimaging studies suggest that face identity is gradually disentangled from face-orientation information along the ventral visual processing stream. The evidence seems to diverge, however, regarding the prevalent form of tuning of neural populations in human face-selective areas. In this context, caveats possibly leading to erroneous inferences regarding mirror-symmetric coding are exposed, including the need to distinguish angular from Euclidean distances when interpreting multivariate pattern analyses. On this basis, this review argues that evidence from the fusiform face area is best explained by a view-sensitive code reflecting head angular disparity, consistent with a role of this area in face-orientation perception. Finally, the review stresses the importance of explicit models relating neural properties to large-scale signals.
The Superior Temporal Sulcus Is Causally Connected to the Amygdala: A Combined TBS-fMRI Study.
Pitcher, David; Japee, Shruti; Rauth, Lionel; Ungerleider, Leslie G
2017-02-01
Nonhuman primate neuroanatomical studies have identified a cortical pathway from the superior temporal sulcus (STS) projecting into dorsal subregions of the amygdala, but whether this same pathway exists in humans is unknown. Here, we addressed this question by combining theta burst transcranial magnetic stimulation (TBS) with fMRI to test the prediction that the STS and amygdala are functionally connected during face perception. Human participants (N = 17) were scanned, over two sessions, while viewing 3 s video clips of moving faces, bodies, and objects. During these sessions, TBS was delivered over the face-selective right posterior STS (rpSTS) or over the vertex control site. A region-of-interest analysis revealed results consistent with our hypothesis. Namely, TBS delivered over the rpSTS reduced the neural response to faces (but not to bodies or objects) in the rpSTS, right anterior STS (raSTS), and right amygdala, compared with TBS delivered over the vertex. By contrast, TBS delivered over the rpSTS did not significantly reduce the neural response to faces in the right fusiform face area or right occipital face area. This pattern of results is consistent with the existence of a cortico-amygdala pathway in humans for processing face information projecting from the rpSTS, via the raSTS, into the amygdala. This conclusion is consistent with nonhuman primate neuroanatomy and with existing face perception models. Neuroimaging studies have identified multiple face-selective regions in the brain, but the functional connections between these regions are unknown. In the present study, participants were scanned with fMRI while viewing movie clips of faces, bodies, and objects before and after transient disruption of the face-selective right posterior superior temporal sulcus (rpSTS). Results showed that TBS disruption reduced the neural response to faces, but not to bodies or objects, in the rpSTS, right anterior STS (raSTS), and right amygdala. 
These results are consistent with the existence of a cortico-amygdala pathway in humans for processing face information projecting from the rpSTS, via the raSTS, into the amygdala. This conclusion is consistent with nonhuman primate neuroanatomy and with existing face perception models. Copyright © 2017 the authors.
Perception and Processing of Faces in the Human Brain Is Tuned to Typical Feature Locations
Schwarzkopf, D. Samuel; Alvarez, Ivan; Lawson, Rebecca P.; Henriksson, Linda; Kriegeskorte, Nikolaus; Rees, Geraint
2016-01-01
Faces are salient social stimuli whose features attract a stereotypical pattern of fixations. The implications of this gaze behavior for perception and brain activity are largely unknown. Here, we characterize and quantify a retinotopic bias implied by typical gaze behavior toward faces, which leads to eyes and mouth appearing most often in the upper and lower visual field, respectively. We found that the adult human visual system is tuned to these contingencies. In two recognition experiments, recognition performance for isolated face parts was better when they were presented at typical, rather than reversed, visual field locations. The recognition cost of reversed locations was equal to ∼60% of that for whole face inversion in the same sample. Similarly, an fMRI experiment showed that patterns of activity evoked by eye and mouth stimuli in the right inferior occipital gyrus could be separated with significantly higher accuracy when these features were presented at typical, rather than reversed, visual field locations. Our findings demonstrate that human face perception is determined not only by the local position of features within a face context, but by whether features appear at the typical retinotopic location given normal gaze behavior. Such location sensitivity may reflect fine-tuning of category-specific visual processing to retinal input statistics. Our findings further suggest that retinotopic heterogeneity might play a role for face inversion effects and for the understanding of conditions affecting gaze behavior toward faces, such as autism spectrum disorders and congenital prosopagnosia. SIGNIFICANCE STATEMENT Faces attract our attention and trigger stereotypical patterns of visual fixations, concentrating on inner features, like eyes and mouth. Here we show that the visual system represents face features better when they are shown at retinal positions where they typically fall during natural vision. 
When facial features were shown at typical (rather than reversed) visual field locations, they were discriminated better by humans and could be decoded with higher accuracy from brain activity patterns in the right occipital face area. This suggests that brain representations of face features do not cover the visual field uniformly. It may help us understand the well-known face-inversion effect and conditions affecting gaze behavior toward faces, such as prosopagnosia and autism spectrum disorders. PMID:27605606
Eye coding mechanisms in early human face event-related potentials.
Rousselet, Guillaume A; Ince, Robin A A; van Rijsbergen, Nicola J; Schyns, Philippe G
2014-11-10
In humans, the N170 event-related potential (ERP) is an integrated measure of cortical activity that varies in amplitude and latency across trials. Researchers often conjecture that N170 variations reflect cortical mechanisms of stimulus coding for recognition. Here, to settle the conjecture and understand cortical information processing mechanisms, we unraveled the coding function of N170 latency and amplitude variations in possibly the simplest socially important natural visual task: face detection. On each experimental trial, 16 observers saw face and noise pictures sparsely sampled with small Gaussian apertures. Reverse-correlation methods coupled with information theory revealed that the presence of the eye specifically covaries with behavioral and neural measurements: the left eye strongly modulates reaction times and lateral electrodes represent mainly the presence of the contralateral eye during the rising part of the N170, with maximum sensitivity before the N170 peak. Furthermore, single-trial N170 latencies code more about the presence of the contralateral eye than N170 amplitudes and early latencies are associated with faster reaction times. The absence of these effects in control images that did not contain a face refutes alternative accounts based on retinal biases or allocation of attention to the eye location on the face. We conclude that the rising part of the N170, roughly 120-170 ms post-stimulus, is a critical time-window in human face processing mechanisms, reflecting predominantly, in a face detection task, the encoding of a single feature: the contralateral eye. © 2014 ARVO.
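The information-theoretic analysis described above rests on estimating mutual information between a stimulus property (the contralateral eye being visible) and single-trial neural measures. Below is a minimal histogram-based sketch; the simulated trial counts and effect sizes are assumptions for illustration, not the reported data, arranged so latency carries more eye information than amplitude, as in the abstract's conclusion.

```python
import numpy as np

rng = np.random.default_rng(3)

def mutual_info(x, y, bins=8):
    """Plug-in histogram estimate of mutual information (bits) between a
    binary variable x and a continuous variable y (equal-count bins)."""
    edges = np.quantile(y, np.linspace(0, 1, bins + 1))
    y_d = np.clip(np.searchsorted(edges, y, side="right") - 1, 0, bins - 1)
    joint = np.zeros((2, bins))
    np.add.at(joint, (x, y_d), 1)          # joint histogram of (x, y)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)  # marginal of x
    py = joint.sum(axis=0, keepdims=True)  # marginal of y
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

# Simulated single trials: eye visibility shifts N170 latency strongly
# (10 ms vs. 5 ms trial noise) but amplitude only weakly.
n = 2000
eye_visible = rng.integers(0, 2, n)
latency = 160.0 - 10.0 * eye_visible + 5.0 * rng.standard_normal(n)
amplitude = -5.0 - 0.5 * eye_visible + 2.0 * rng.standard_normal(n)

mi_latency = mutual_info(eye_visible, latency)
mi_amplitude = mutual_info(eye_visible, amplitude)
print(mi_latency > mi_amplitude)  # latency codes more about the eye
```

The study pairs this kind of estimator with reverse correlation over Gaussian apertures; the sketch shows only the core "which measure carries more information" comparison.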
Bublatzky, Florian; Gerdes, Antje B. M.; White, Andrew J.; Riemer, Martin; Alpers, Georg W.
2014-01-01
Human face perception is modulated by both emotional valence and social relevance, but their interaction has rarely been examined. Event-related brain potentials (ERP) to happy, neutral, and angry facial expressions with different degrees of social relevance were recorded. To implement a social anticipation task, relevance was manipulated by presenting faces of two specific actors as future interaction partners (socially relevant), whereas two other face actors remained non-relevant. In a further control task all stimuli were presented without specific relevance instructions (passive viewing). Face stimuli of four actors (2 women, from the KDEF) were randomly presented for 1 s to 26 participants (16 female). Results showed an augmented N170, early posterior negativity (EPN), and late positive potential (LPP) for emotional in contrast to neutral facial expressions. Of particular interest, face processing varied as a function of experimental tasks. Whereas task effects were observed for P1 and EPN regardless of instructed relevance, LPP amplitudes were modulated by emotional facial expression and relevance manipulation. The LPP was specifically enhanced for happy facial expressions of the anticipated future interaction partners. This underscores that social relevance can impact face processing already at an early stage of visual processing. These findings are discussed within the framework of motivated attention and face processing theories. PMID:25076881
Johnson, Mark H; Senju, Atsushi; Tomalski, Przemyslaw
2015-03-01
Johnson and Morton (1991. Biology and Cognitive Development: The Case of Face Recognition. Blackwell, Oxford) used Gabriel Horn's work on the filial imprinting model to inspire a two-process theory of the development of face processing in humans. In this paper we review evidence accrued over the past two decades from infants and adults, and from other primates, that informs this two-process model. While work with newborns and infants has been broadly consistent with predictions from the model, further refinements and questions have been raised. With regard to adults, we discuss more recent evidence on the extension of the model to eye contact detection, and to subcortical face processing, reviewing functional imaging and patient studies. We conclude with discussion of outstanding caveats and future directions of research in this field. Copyright © 2014 The Authors. Published by Elsevier Ltd.. All rights reserved.
ERIC Educational Resources Information Center
Hayden, Angela; Bhatt, Ramesh S.; Reed, Andrea; Corbly, Christine R.; Joseph, Jane E.
2007-01-01
Sensitivity to second-order relational information (i.e., spatial relations among features such as the distance between eyes) is a vital part of achieving expertise with face processing. Prior research is unclear on whether infants are sensitive to second-order differences seen in typical human populations. In the current experiments, we examined…
Processing language in face-to-face conversation: Questions with gestures get faster responses.
Holler, Judith; Kendrick, Kobin H; Levinson, Stephen C
2017-09-08
The home of human language use is face-to-face interaction, a context in which communicative exchanges are characterised not only by bodily signals accompanying what is being said but also by a pattern of alternating turns at talk. This transition between turns is astonishingly fast: typically a mere 200 ms elapses between a current and a next speaker's contribution, meaning that comprehending, producing, and coordinating conversational contributions in time is a significant challenge. This raises the question of whether the additional information carried by bodily signals facilitates or hinders language processing in this time-pressured environment. We present analyses of multimodal conversations revealing that bodily signals appear to profoundly influence language processing in interaction: questions accompanied by gestures lead to shorter turn transition times, that is, to faster responses, than questions without gestures, and responses come earlier when gestures end before, rather than after, the question turn has ended. These findings hold even after taking into account prosodic patterns and other visual signals, such as gaze. The empirical findings presented here provide a first glimpse of the role of the body in the psycholinguistic processes underpinning human communication.
The Naked Truth: The Face and Body Sensitive N170 Response Is Enhanced for Nude Bodies
Hietanen, Jari K.; Nummenmaa, Lauri
2011-01-01
Recent event-related potential studies have shown that the occipitotemporal N170 component - best known for its sensitivity to faces - is also sensitive to perception of human bodies. Considering that in the timescale of evolution clothing is a relatively new invention that hides the bodily features relevant for sexual selection and arousal, we investigated whether the early N170 brain response would be enhanced to nude over clothed bodies. In two experiments, we measured N170 responses to nude bodies, bodies wearing swimsuits, clothed bodies, faces, and control stimuli (cars). We found that the N170 amplitude was larger to opposite and same-sex nude vs. clothed bodies. Moreover, the N170 amplitude increased linearly as the amount of clothing decreased from full clothing via swimsuits to nude bodies. Strikingly, the N170 response to nude bodies was even greater than that to faces, and the N170 amplitude to bodies was independent of whether the face of the bodies was visible or not. All human stimuli evoked greater N170 responses than did the control stimulus. Autonomic measurements and self-evaluations showed that nude bodies were affectively more arousing compared to the other stimulus categories. We conclude that the early visual processing of human bodies is sensitive to the visibility of the sex-related features of human bodies and that the visual processing of other people's nude bodies is enhanced in the brain. This enhancement is likely to reflect affective arousal elicited by nude bodies. Such facilitated visual processing of other people's nude bodies is possibly beneficial in identifying potential mating partners and competitors, and for triggering sexual behavior. PMID:22110574
Cheetham, Marcus; Suter, Pascal; Jancke, Lutz
2014-01-01
The Uncanny Valley Hypothesis (UVH) predicts that greater difficulty perceptually discriminating between categorically ambiguous human and humanlike characters (e.g., a highly realistic robot) evokes negatively valenced (i.e., uncanny) affect. An ABX perceptual discrimination task and signal detection analysis were used to examine the profile of perceptual discrimination (PD) difficulty along the UVH's dimension of human likeness (DHL), represented using avatar-to-human morph continua. Rejecting the implicitly assumed profile of PD difficulty underlying the UVH's prediction, Experiment 1 showed that PD difficulty was reduced for categorically ambiguous faces but, notably, enhanced for human faces. Rejecting the UVH's predicted relationship between PD difficulty and negative affect (assessed in terms of the UVH's familiarity dimension), Experiment 2 demonstrated that greater PD difficulty correlates with more positively valenced affect. Critically, this effect was strongest for the ambiguous faces, suggesting a correlative relationship between PD difficulty and feelings of familiarity more consistent with the metaphor of a happy valley. This relationship is also consistent with a fluency amplification account, instead of the hitherto proposed hedonic fluency account, of affect along the DHL. Experiment 3 found no evidence that the asymmetry in the profile of PD along the DHL is attributable to a differential processing bias (cf. the other-race effect), i.e., processing avatars at a category level but human faces at an individual level. In conclusion, the present data for static faces show clear effects that, however, strongly challenge the UVH's implicitly assumed profile of PD difficulty along the DHL and the predicted relationship between this and feelings of familiarity.
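The signal detection analysis used here typically reduces to computing d' from hit and false-alarm counts. A minimal sketch with hypothetical counts (assumed for illustration, not the study's data), where discrimination at the human endpoint of the morph continuum is easier than at the ambiguous midpoint:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' from response counts, with a log-linear (+0.5) correction so
    perfect hit or false-alarm rates cannot yield infinite z-scores."""
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(h) - z(f)

# Hypothetical ABX counts at two positions on the morph continuum.
d_human = d_prime(90, 10, 15, 85)       # human endpoint: easy
d_ambiguous = d_prime(65, 35, 30, 70)   # ambiguous midpoint: hard
print(d_human > d_ambiguous)
```

Plotting d' at each morph step then yields the PD-difficulty profile along the DHL that the experiments evaluate.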
Greater sensitivity of the cortical face processing system to perceptually-equated face detection
Maher, S.; Ekstrom, T.; Tong, Y.; Nickerson, L.D.; Frederick, B.; Chen, Y.
2015-01-01
Face detection, the perceptual capacity to identify a visual stimulus as a face before probing deeper into specific attributes (such as its identity or emotion), is essential for social functioning. Despite the importance of this functional capacity, face detection and its underlying brain mechanisms are not well understood. This study evaluated the role that the cortical face processing system, which has been identified largely through studying other aspects of face perception, plays in face detection. Specifically, we used functional magnetic resonance imaging (fMRI) to examine the activations of the fusiform face area (FFA), occipital face area (OFA) and superior temporal sulcus (STS) when face detection was isolated from other aspects of face perception and when face detection was perceptually equated across individual human participants (n=20). During face detection, FFA and OFA were significantly activated, even for stimuli presented at perceptual-threshold levels, whereas STS was not. During tree detection, however, FFA and OFA were responsive only for highly salient (i.e., high contrast) stimuli. Moreover, activation of FFA during face detection predicted a significant portion of the perceptual performance levels that were determined psychophysically for each participant. This pattern of results indicates that FFA and OFA have a greater sensitivity to face detection signals and selectively support the initial process of face vs. non-face object perception. PMID:26592952
Photogrammetric Network for Evaluation of Human Faces for Face Reconstruction Purpose
NASA Astrophysics Data System (ADS)
Schrott, P.; Detrekői, Á.; Fekete, K.
2012-08-01
Facial reconstruction is the process of reconstructing the geometry of the faces of persons from skeletal remains. A research group (BME Cooperation Research Center for Biomechanics) was formed from several organisations to combine the knowledge bases of different disciplines, such as anthropology, medicine, mechanical engineering and archaeology, in order to computerize the face reconstruction process based on a large dataset of 3D face and skull models gathered from living persons: cranial data from CT scans and face models from photogrammetric evaluations. The BUTE Dept. of Photogrammetry and Geoinformatics works on the method and technology of the 3D data acquisition for the face models. In this paper we present the design of the photogrammetric network, the modelling used to handle visibility constraints, and an investigation of the resulting basic photogrammetric configuration, specifying the characteristics to be expected of results obtained with the device built for photogrammetric face measurement.
The neural organization of perception in chess experts.
Krawczyk, Daniel C; Boggan, Amy L; McClelland, M Michelle; Bartlett, James C
2011-07-20
The human visual system responds to expertise, and it has been suggested that regions that process faces also process other objects of expertise, including chess boards for experts. We tested whether chess and face processing overlap in brain activity using fMRI. Chess experts and novices exhibited face-selective areas, but these regions showed no selectivity to chess configurations relative to other stimuli. We next compared neural responses to chess and to scrambled chess displays to isolate areas relevant to expertise. Areas within the posterior cingulate, orbitofrontal cortex, and right temporal cortex were active in this comparison in experts over novices. We also compared chess and face responses within the posterior cingulate and found this area responsive to chess only in experts. These findings indicate that chess configurations are not strongly processed by face-selective regions, even in individuals who have expertise in both domains. Further, the area most consistently involved in chess did not show overlap with faces. Overall, these results suggest that expert visual processing may be similar at the level of recognition, but need not show the same neural correlates. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
The Hierarchical Brain Network for Face Recognition
Zhen, Zonglei; Fang, Huizhen; Liu, Jia
2013-01-01
Numerous functional magnetic resonance imaging (fMRI) studies have identified multiple cortical regions that are involved in face processing in the human brain. However, few studies have characterized the face-processing network as a functioning whole. In this study, we used fMRI to identify face-selective regions in the entire brain and then explore the hierarchical structure of the face-processing network by analyzing functional connectivity among these regions. We identified twenty-five regions mainly in the occipital, temporal and frontal cortex that showed a reliable response selective to faces (versus objects) across participants and across scan sessions. Furthermore, these regions were clustered into three relatively independent sub-networks in a face-recognition task on the basis of the strength of functional connectivity among them. The functionality of the sub-networks likely corresponds to the recognition of individual identity, retrieval of semantic knowledge and representation of emotional information. Interestingly, when the task was switched from face recognition to object recognition, the functional connectivity between the inferior occipital gyrus and the rest of the face-selective regions was significantly reduced, suggesting that this region may serve as an entry node in the face-processing network. In sum, our study provides empirical evidence for cognitive and neural models of face recognition and helps elucidate the neural mechanisms underlying face recognition at the network level. PMID:23527282
The Effects of Prediction on the Perception for Own-Race and Other-Race Faces
Ran, Guangming; Zhang, Qi; Chen, Xu; Pan, Yangu
2014-01-01
Human beings do not passively perceive important social features about others such as race and age in social interactions. Instead, it is proposed that humans might continuously generate predictions about these social features based on prior similar experiences. Pre-awareness of racial information conveyed by others' faces enables individuals to act in “culturally appropriate” ways, which is useful for interpersonal relations in different ethnicity groups. However, little is known about the effects of prediction on the perception of own-race and other-race faces. Here, we addressed this issue using high temporal resolution event-related potential techniques. In total, data from 24 participants (13 women and 11 men) were analyzed. It was found that the N170 amplitudes elicited by other-race faces, but not own-race faces, were significantly smaller in the predictable condition compared to the unpredictable condition, reflecting a switch to holistic processing of other-race faces when those faces were predictable. In this respect, top-down prediction about face race might contribute to the elimination of the other-race effect (a face recognition impairment). Furthermore, smaller P300 amplitudes were observed for the predictable than for the unpredictable condition, which suggested that the prediction of race reduced the neural responses of human brains. PMID:25422892
Super-Memorizers Are Not Super-Recognizers
Ramon, Meike; Miellet, Sebastien; Dzieciol, Anna M.; Konrad, Boris Nikolai; Dresler, Martin; Caldara, Roberto
2016-01-01
Humans have a natural expertise in recognizing faces. However, the nature of the interaction between this critical visual biological skill and memory is yet unclear. Here, we had the unique opportunity to test two individuals who have had exceptional success in the World Memory Championships, including several world records in face-name association memory. We designed a range of face processing tasks to determine whether superior/expert face memory skills are associated with distinctive perceptual strategies for processing faces. Superior memorizers excelled at tasks involving associative face-name learning. Nevertheless, they were as impaired as controls in tasks probing the efficiency of the face system: face inversion and the other-race effect. Super memorizers did not show increased hippocampal volumes, and exhibited optimal generic eye movement strategies when they performed complex multi-item face-name associations. Our data show that the visual computations of the face system are not malleable and are robust to acquired expertise involving extensive training of associative memory. PMID:27008627
Category search speeds up face-selective fMRI responses in a non-hierarchical cortical face network.
Jiang, Fang; Badler, Jeremy B; Righi, Giulia; Rossion, Bruno
2015-05-01
The human brain is extremely efficient at detecting faces in complex visual scenes, but the spatio-temporal dynamics of this remarkable ability, and how it is influenced by category-search, remain largely unknown. In the present study, human subjects were shown gradually-emerging images of faces or cars in visual scenes, while neural activity was recorded using functional magnetic resonance imaging (fMRI). Category search was manipulated by the instruction to indicate the presence of either a face or a car, in different blocks, as soon as an exemplar of the target category was detected in the visual scene. The category selectivity of most face-selective areas was enhanced when participants were instructed to report the presence of faces in gradually decreasing noise stimuli. Conversely, the same regions showed much less selectivity when participants were instructed instead to detect cars. When "face" was the target category, the fusiform face area (FFA) showed consistently earlier differentiation of face versus car stimuli than did the "occipital face area" (OFA). When "car" was the target category, only the FFA showed differentiation of face versus car stimuli. These observations provide further challenges for hierarchical models of cortical face processing and show that during gradual revealing of information, selective category-search may decrease the required amount of information, enhancing and speeding up category-selective responses in the human brain. Copyright © 2015 Elsevier Ltd. All rights reserved.
Jiang, Xiong; Bollich, Angela; Cox, Patrick; Hyder, Eric; James, Joette; Gowani, Saqib Ali; Hadjikhani, Nouchine; Blanz, Volker; Manoach, Dara S.; Barton, Jason J.S.; Gaillard, William D.; Riesenhuber, Maximilian
2013-01-01
Individuals with Autism Spectrum Disorder (ASD) appear to show a general face discrimination deficit across a range of tasks including social–emotional judgments as well as identification and discrimination. However, functional magnetic resonance imaging (fMRI) studies probing the neural bases of these behavioral differences have produced conflicting results: while some studies have reported reduced or no activity to faces in ASD in the Fusiform Face Area (FFA), a key region in human face processing, others have suggested more typical activation levels, possibly reflecting limitations of conventional fMRI techniques to characterize neuron-level processing. Here, we test the hypotheses that face discrimination abilities are highly heterogeneous in ASD and are mediated by FFA neurons, with differences in face discrimination abilities being quantitatively linked to variations in the estimated selectivity of face neurons in the FFA. Behavioral results revealed a wide distribution of face discrimination performance in ASD, ranging from typical performance to chance level performance. Despite this heterogeneity in perceptual abilities, individual face discrimination performance was well predicted by neural selectivity to faces in the FFA, estimated via both a novel analysis of local voxel-wise correlations, and the more commonly used fMRI rapid adaptation technique. Thus, face processing in ASD appears to rely on the FFA as in typical individuals, differing quantitatively but not qualitatively. These results for the first time mechanistically link variations in the ASD phenotype to specific differences in the typical face processing circuit, identifying promising targets for interventions. PMID:24179786
Fukushima, Hirokata; Hirata, Satoshi; Ueno, Ari; Matsuda, Goh; Fuwa, Kohki; Sugama, Keiko; Kusunoki, Kiyo; Hirai, Masahiro; Hiraki, Kazuo; Tomonaga, Masaki; Hasegawa, Toshikazu
2010-01-01
Background The neural system of our closest living relative, the chimpanzee, is a topic of increasing research interest. However, electrophysiological examinations of neural activity during visual processing in awake chimpanzees are currently lacking. Methodology/Principal Findings In the present report, skin-surface event-related brain potentials (ERPs) were measured while a fully awake chimpanzee observed photographs of faces and objects in two experiments. In Experiment 1, human faces and stimuli composed of scrambled face images were displayed. In Experiment 2, three types of pictures (faces, flowers, and cars) were presented. The waveforms evoked by face stimuli were distinguished from other stimulus types, as reflected by an enhanced early positivity appearing before 200 ms post stimulus, and an enhanced late negativity after 200 ms, around posterior and occipito-temporal sites. Face-sensitive activity was clearly observed in both experiments. However, in contrast to the robustly observed face-evoked N170 component in humans, we found that faces did not elicit a peak in the latency range of 150–200 ms in either experiment. Conclusions/Significance Although this pilot study examined a single subject and requires further examination, the observed scalp voltage patterns suggest that selective processing of faces in the chimpanzee brain can be detected by recording surface ERPs. In addition, this non-invasive method for examining an awake chimpanzee can be used to extend our knowledge of the characteristics of visual cognition in other primate species. PMID:20967284
Looking away from faces: influence of high-level visual processes on saccade programming.
Morand, Stéphanie M; Grosbras, Marie-Hélène; Caldara, Roberto; Harvey, Monika
2010-03-30
Human faces capture attention more than other visual stimuli. Here we investigated whether such face-specific biases rely on automatic (involuntary) or voluntary orienting responses. To this end, we used an anti-saccade paradigm, which requires the ability to inhibit a reflexive automatic response and to generate a voluntary saccade in the opposite direction of the stimulus. To control for potential low-level confounds in the eye-movement data, we manipulated the high-level visual properties of the stimuli while normalizing their global low-level visual properties. Eye movements were recorded in 21 participants who performed either pro- or anti-saccades to a face, car, or noise pattern, randomly presented to the left or right of a fixation point. For each trial, a symbolic cue instructed the observer to generate either a pro-saccade or an anti-saccade. We report a significant increase in anti-saccade error rates for faces compared to cars and noise patterns, as well as faster pro-saccades to faces and cars in comparison to noise patterns. These results indicate that human faces induce stronger involuntary orienting responses than other visual objects, i.e., responses that are beyond the control of the observer. Importantly, this involuntary processing cannot be accounted for by global low-level visual factors.
What's in a crowd? Analysis of face-to-face behavioral networks.
Isella, Lorenzo; Stehlé, Juliette; Barrat, Alain; Cattuto, Ciro; Pinton, Jean-François; Van den Broeck, Wouter
2011-02-21
The availability of new data sources on human mobility is opening new avenues for investigating the interplay of social networks, human mobility and dynamical processes such as epidemic spreading. Here we analyze data on the time-resolved face-to-face proximity of individuals in large-scale real-world scenarios. We compare two settings with very different properties, a scientific conference and a long-running museum exhibition. We track the behavioral networks of face-to-face proximity, and characterize them from both a static and a dynamic point of view, exposing differences and similarities. We use our data to investigate the dynamics of a susceptible-infected model for epidemic spreading that unfolds on the dynamical networks of human proximity. The spreading patterns are markedly different for the conference and the museum case, and they are strongly impacted by the causal structure of the network data. A deeper study of the spreading paths shows that the mere knowledge of static aggregated networks would lead to erroneous conclusions about the transmission paths on the dynamical networks. Copyright © 2010 Elsevier Ltd. All rights reserved.
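The susceptible-infected (SI) dynamics on a time-resolved contact network, as analyzed in the abstract above, can be sketched in a few lines. The contact list, node labels, and transmission probability below are invented for illustration; this is not the authors' code or data. The toy example also shows why the causal structure of the data matters: a contact that occurred before its endpoint became infected cannot transmit.

```python
import random

def simulate_si(contacts, seed_node, beta=1.0):
    """Run a susceptible-infected (SI) process over time-ordered
    face-to-face contacts. Each contact is a (time, u, v) tuple; when
    exactly one endpoint is infected, the other becomes infected with
    probability `beta` (beta=1.0 makes the run deterministic)."""
    infected = {seed_node}
    for t, u, v in sorted(contacts):          # respect temporal order
        if (u in infected) != (v in infected) and random.random() < beta:
            infected.update((u, v))
    return infected

# Toy temporal network: node 0 meets 1 at t=1, then 1 meets 2 at t=2,
# so infection can travel 0 -> 1 -> 2. But 2 met 3 back at t=0, before
# anyone was infected, so 3 is unreachable along time-respecting paths.
contacts = [(0, 2, 3), (1, 0, 1), (2, 1, 2)]
print(simulate_si(contacts, seed_node=0))  # → {0, 1, 2}
```

Note that the static aggregated network (edges 0-1, 1-2, 2-3) would wrongly suggest a transmission path from 0 to 3, which is precisely the kind of erroneous conclusion the abstract warns about.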
Learning to recognize face shapes through serial exploration.
Wallraven, Christian; Whittingstall, Lisa; Bülthoff, Heinrich H
2013-05-01
Human observers are experts at visual face recognition due to specialized visual mechanisms for face processing that evolve with perceptual expertise. Such expertise has long been attributed to the use of configural processing, enabled by fast, parallel encoding of the visual information in the face. Here we tested whether participants can learn to efficiently recognize faces that are serially encoded, that is, when only partial visual information about the face is available at any given time. For this, ten participants were trained in gaze-restricted face recognition, in which face masks were viewed through a small aperture controlled by the participant. Tests comparing trained with untrained performance revealed (1) a marked improvement in speed and accuracy, (2) a gradual development of configural processing strategies, and (3) participants' ability to rapidly learn and accurately recognize novel exemplars. This performance pattern demonstrates that participants were able to learn new strategies to compensate for the serial nature of information encoding. The results are discussed in terms of expertise acquisition and relevance for other sensory modalities relying on serial encoding.
Peelen, Marius V; Wiggett, Alison J; Downing, Paul E
2006-03-16
Accurate perception of the actions and intentions of other people is essential for successful interactions in a social environment. Several cortical areas that support this process respond selectively in fMRI to static and dynamic displays of human bodies and faces. Here we apply pattern-analysis techniques to arrive at a new understanding of the neural response to biological motion. Functionally defined body-, face-, and motion-selective visual areas all responded significantly to "point-light" human motion. Strikingly, however, only body selectivity was correlated, on a voxel-by-voxel basis, with biological motion selectivity. We conclude that (1) biological motion, through the process of structure-from-motion, engages areas involved in the analysis of the static human form; (2) body-selective regions in posterior fusiform gyrus and posterior inferior temporal sulcus overlap with, but are distinct from, face- and motion-selective regions; (3) the interpretation of region-of-interest findings may be substantially altered when multiple patterns of selectivity are considered.
Individual Differences in Face Identity Processing with Fast Periodic Visual Stimulation.
Xu, Buyun; Liu-Shuang, Joan; Rossion, Bruno; Tanaka, James
2017-08-01
A growing body of literature suggests that human individuals differ in their ability to process face identity. These findings mainly stem from explicit behavioral tasks, such as the Cambridge Face Memory Test (CFMT). However, it remains an open question whether such individual differences can be found in the absence of an explicit face identity task and when faces have to be individualized at a single glance. In the current study, we tested 49 participants with a recently developed fast periodic visual stimulation (FPVS) paradigm [Liu-Shuang, J., Norcia, A. M., & Rossion, B. An objective index of individual face discrimination in the right occipitotemporal cortex by means of fast periodic oddball stimulation. Neuropsychologia, 52, 57-72, 2014] in EEG to rapidly, objectively, and implicitly quantify face identity processing. In the FPVS paradigm, one face identity (A) was presented at the frequency of 6 Hz, allowing only one gaze fixation, with different face identities (B, C, D) presented every fifth face (1.2 Hz; i.e., AAAABAAAACAAAAD…). Results showed a face individuation response at 1.2 Hz and its harmonics, peaking over occipitotemporal locations. The magnitude of this response showed high reliability across different recording sequences and was significant in all but two participants, with the magnitude and lateralization differing widely across participants. There was a modest but significant correlation between the individuation response amplitude and the performance of the behavioral CFMT task, despite the fact that CFMT and FPVS measured different aspects of face identity processing. Taken together, the current study highlights the FPVS approach as a promising means for studying individual differences in face identity processing.
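The frequency-tagging logic behind the FPVS paradigm described above, in which a response at the 1.2 Hz oddball frequency indexes face individuation, can be illustrated on simulated data. The sampling rate, recording length, and signal amplitudes below are arbitrary assumptions, not parameters from the study.

```python
import numpy as np

fs = 512             # sampling rate in Hz (assumed)
dur = 50             # seconds of simulated recording
t = np.arange(0, dur, 1 / fs)

# Simulated EEG: a base response at the 6 Hz face-presentation rate, an
# identity-discrimination response at the 1.2 Hz oddball rate, plus noise.
rng = np.random.default_rng(0)
eeg = (1.0 * np.sin(2 * np.pi * 6.0 * t)
       + 0.4 * np.sin(2 * np.pi * 1.2 * t)
       + 0.5 * rng.standard_normal(t.size))

# Amplitude spectrum; both tagged frequencies fall on exact FFT bins
# because the frequency resolution is 1/dur = 0.02 Hz.
spectrum = np.abs(np.fft.rfft(eeg)) / t.size * 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp_at(f):
    """Amplitude at the FFT bin nearest to frequency f (in Hz)."""
    return spectrum[np.argmin(np.abs(freqs - f))]

# Base response ≈ 1.0 at 6 Hz, oddball response ≈ 0.4 at 1.2 Hz,
# while untagged frequencies stay near the noise floor.
print(amp_at(6.0), amp_at(1.2), amp_at(3.3))
```

Long recordings concentrate each periodic response into a single narrow frequency bin, which is what makes the tagged responses easy to separate from broadband noise even in single participants.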
Saito, Atsuko; Hamada, Hiroki; Kikusui, Takefumi; Mogi, Kazutaka; Nagasawa, Miho; Mitsui, Shohei; Higuchi, Takashi; Hasegawa, Toshikazu; Hiraki, Kazuo
2014-01-01
The neuropeptide oxytocin plays a central role in prosocial and parental behavior in non-human mammals as well as humans. It has been suggested that oxytocin may affect visual processing of infant faces and emotional reaction to infants. Healthy male volunteers (N = 13) were tested for their ability to detect infant or adult faces among adult or infant faces (facial visual search task). Urine samples were collected from all participants before the study to measure the concentration of oxytocin. Urinary oxytocin positively correlated with performance in the facial visual search task. However, task performance and its correlation with oxytocin concentration did not differ between infant faces and adult faces. Our data suggests that endogenous oxytocin is related to facial visual cognition, but does not promote infant-specific responses in unmarried men who are not fathers.
Seeing Jesus in toast: Neural and behavioral correlates of face pareidolia
Liu, Jiangang; Li, Jun; Feng, Lu; Li, Ling; Tian, Jie; Lee, Kang
2014-01-01
Face pareidolia is the illusory perception of non-existent faces. The present study, for the first time, contrasted behavioral and neural responses of face pareidolia with those of letter pareidolia to explore face-specific behavioral and neural responses during illusory face processing. Participants were shown pure-noise images but were led to believe that 50% of them contained either faces or letters; they reported seeing faces or letters illusorily 34% and 38% of the time, respectively. The right fusiform face area (rFFA) showed a specific response when participants “saw” faces as opposed to letters in the pure-noise images. Behavioral responses during face pareidolia produced a classification image that resembled a face, whereas those during letter pareidolia produced a classification image that was letter-like. Further, the extent to which such behavioral classification images resembled faces was directly related to the level of face-specific activations in the right FFA. This finding suggests that the right FFA plays a specific role not only in processing of real faces but also in illusory face perception, perhaps serving to facilitate the interaction between bottom-up information from the primary visual cortex and top-down signals from the prefrontal cortex (PFC). Whole brain analyses revealed a network specialized in face pareidolia, including both the frontal and occipito-temporal regions. Our findings suggest that human face processing has a strong top-down component whereby sensory input with even the slightest suggestion of a face can result in the interpretation of a face. PMID:24583223
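The behavioral classification-image technique used above (comparing the noise patterns that did versus did not trigger a "face" report) can be sketched with a simulated observer. The 8x8 template and the decision rule below are hypothetical stand-ins for the unknown internal representation that the method tries to recover; they are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical internal "face template" the simulated observer matches
# against; in the real experiment this is unknown and must be recovered.
template = np.zeros((8, 8))
template[2, 2] = template[2, 5] = 1.0    # "eyes"
template[5, 2:6] = 1.0                   # "mouth"

n_trials = 5000
noise = rng.standard_normal((n_trials, 8, 8))   # pure-noise stimuli

# Simulated observer: reports "face" when the noise image correlates
# with the template strongly enough (plus internal decision noise).
scores = (noise * template).sum(axis=(1, 2))
saw_face = scores + rng.standard_normal(n_trials) > 1.0

# Classification image: mean of "face" trials minus "no-face" trials.
ci = noise[saw_face].mean(axis=0) - noise[~saw_face].mean(axis=0)

# The recovered image should correlate strongly with the template.
r = np.corrcoef(ci.ravel(), template.ravel())[0, 1]
print(round(r, 2))
```

The same averaging logic, applied to real trial-by-trial responses, is what yields the face-like classification images reported in the abstract.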
Li, Tianbi; Wang, Xueqin; Pan, Junhao; Feng, Shuyuan; Gong, Mengyuan; Wu, Yaxue; Li, Guoxiang; Li, Sheng; Yi, Li
2017-11-01
The processing of social stimuli, such as human faces, is impaired in individuals with autism spectrum disorder (ASD), which could be accounted for by their lack of social motivation. The current study examined how the attentional processing of faces in children with ASD could be modulated by the learning of face-reward associations. Sixteen high-functioning children with ASD and 20 age- and ability-matched typically developing peers participated in the experiments. All children started with a reward learning task, in which the children were presented with three female faces that were attributed with positive, negative, and neutral values, and were required to remember the faces and their associated values. After this, they were tested on the recognition of the learned faces and a visual search task in which the learned faces served as the distractor. We found a modulatory effect of the face-reward associations on the visual search but not the recognition performance in both groups despite the lower efficacy among children with ASD in learning the face-reward associations. Specifically, both groups responded faster when one of the distractor faces was associated with positive or negative values than when the distractor face was neutral, suggesting an efficient attentional processing of these reward-associated faces. Our findings provide direct evidence for the perceptual-level modulatory effect of reward learning on the attentional processing of faces in individuals with ASD. Autism Res 2017, 10: 1797-1807. © 2017 International Society for Autism Research, Wiley Periodicals, Inc. In our study, we tested whether the face processing of individuals with ASD could be changed when the faces were associated with different social meanings. We found no effect of social meanings on face recognition, but both groups responded faster in the visual search task when one of the distractor faces was associated with positive or negative values than when it was neutral.
The findings suggest that children with ASD could efficiently process faces associated with different values, as typical children do. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.
Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Ohyama, Kaoru; Kawano, Kenji
2014-09-10
To investigate the effect of face inversion and thatcherization (eye inversion) on temporal processing stages of facial information, single neuron activities in the temporal cortex (area TE) of two rhesus monkeys were recorded. Test stimuli were colored pictures of monkey faces (four with four different expressions), human faces (three with four different expressions), and geometric shapes. Modifications were made in each face-picture, and its four variations were used as stimuli: upright original, inverted original, upright thatcherized, and inverted thatcherized faces. A total of 119 neurons responded to at least one of the upright original facial stimuli. A majority of the neurons (71%) showed activity modulations depending on upright and inverted presentations, and a lesser number of neurons (13%) showed activity modulations depending on original and thatcherized face conditions. In the case of face inversion, information about the fine category (facial identity and expression) decreased, whereas information about the global category (monkey vs human vs shape) was retained for both the original and thatcherized faces. Principal component analysis on the neuronal population responses revealed that the global categorization occurred regardless of the face inversion and that the inverted faces were represented near the upright faces in the principal component analysis space. By contrast, the face inversion decreased the ability to represent human facial identity and monkey facial expression. Thus, the neuronal population represented inverted faces as faces but failed to represent the identity and expression of the inverted faces, indicating that the neuronal representation in area TE causes the perceptual effect of face inversion. Copyright © 2014 the authors 0270-6474/14/3412457-13$15.00/0.
Gomez, Jesse; Pestilli, Franco; Witthoft, Nathan; Golarai, Golijeh; Liberman, Alina; Poltoratski, Sonia; Yoon, Jennifer; Grill-Spector, Kalanit
2014-01-01
It is unknown if the white matter properties associated with specific visual networks selectively affect category-specific processing. In a novel protocol we combined measurements of white matter structure, functional selectivity, and behavior in the same subjects. We find two parallel white matter pathways along the ventral temporal lobe connecting to either face-selective or place-selective regions. Diffusion properties of portions of these tracts adjacent to face- and place-selective regions of ventral temporal cortex correlate with behavioral performance for face or place processing, respectively. Strikingly, adults with developmental prosopagnosia (face blindness) express an atypical structure-behavior relationship near face-selective cortex, suggesting that white matter atypicalities in this region may have behavioral consequences. These data suggest that examining the interplay between cortical function, anatomical connectivity, and visual behavior is integral to understanding functional networks and their role in producing visual abilities and deficits. PMID:25569351
The µ-opioid system promotes visual attention to faces and eyes.
Chelnokova, Olga; Laeng, Bruno; Løseth, Guro; Eikemo, Marie; Willoch, Frode; Leknes, Siri
2016-12-01
Paying attention to others' faces and eyes is a cornerstone of human social behavior. The µ-opioid receptor (MOR) system, central to social reward-processing in rodents and primates, has been proposed to mediate the capacity for affiliative reward in humans. We assessed the role of the human MOR system in visual exploration of faces and eyes of conspecifics. Thirty healthy males received a novel, bidirectional battery of psychopharmacological treatment (an MOR agonist, a non-selective opioid antagonist, or placebo, on three separate days). Eye-movements were recorded while participants viewed facial photographs. We predicted that the MOR system would promote visual exploration of faces, and hypothesized that MOR agonism would increase, whereas antagonism decrease overt attention to the information-rich eye region. The expected linear effect of MOR manipulation on visual attention to the stimuli was observed, such that MOR agonism increased while antagonism decreased visual exploration of faces and overt attention to the eyes. The observed effects suggest that the human MOR system promotes overt visual attention to socially significant cues, in line with theories linking reward value to gaze control and target selection. Enhanced attention to others' faces and eyes represents a putative behavioral mechanism through which the human MOR system promotes social interest. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
The neuroscience of face processing and identification in eyewitnesses and offenders.
Werner, Nicole-Simone; Kühnel, Sina; Markowitsch, Hans J.
2013-12-06
Humans are experts in face perception. We are better able to distinguish between the differences of faces and their components than between any other kind of objects. Several studies investigating the underlying neural networks provided evidence for deviated face processing in criminal individuals, although results are often confounded by accompanying mental or addiction disorders. On the other hand, face processing in non-criminal healthy persons can be of high juridical interest in cases of witnessing a felony and afterward identifying a culprit. Memory and therefore recognition of a person can be affected by many parameters and thus become distorted. But also face processing itself is modulated by different factors like facial characteristics, degree of familiarity, and emotional relation. These factors make the comparison of different cases, as well as the transfer of laboratory results to real live settings very challenging. Several neuroimaging studies have been published in recent years and some progress was made connecting certain brain activation patterns with the correct recognition of an individual. However, there is still a long way to go before brain imaging can make a reliable contribution to court procedures. PMID:24367306
Human face recognition using eigenface in cloud computing environment
NASA Astrophysics Data System (ADS)
Siregar, S. T. M.; Syahputra, M. F.; Rahmat, R. F.
2018-02-01
Recognizing a single face does not take long to process, but attendance or security systems in companies with many faces to recognize can take considerable time. Cloud computing is a computing service performed not on a local device but over the internet on data-center infrastructure. Cloud computing also provides scalability: resources can be increased as needed for larger data-processing loads. In this research, the eigenface method is applied, and training data are collected through a REST interface that exposes the resources, after which the server processes the data through the established stages. After developing this application, we conclude that face recognition can be implemented with eigenfaces, using REST endpoints to exchange the information needed to build the recognition model and perform face recognition.
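The eigenface method named above (leaving aside the cloud/REST layer) can be sketched as PCA over vectorized training images followed by nearest-neighbour matching in the reduced space. This is a minimal illustration under assumed data shapes and names, not the paper's implementation:

```python
import numpy as np

def train_eigenfaces(faces, n_components):
    """faces: (n_samples, n_pixels) array of vectorized, aligned face images."""
    mean_face = faces.mean(axis=0)
    centered = faces - mean_face
    # The right singular vectors of the centered data are the eigenfaces
    # (principal components of the pixel covariance).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:n_components]          # (n_components, n_pixels)
    weights = centered @ eigenfaces.T       # training-set projections
    return mean_face, eigenfaces, weights

def recognize(probe, mean_face, eigenfaces, weights, labels):
    """Nearest-neighbour match of a probe image in eigenface space."""
    w = (probe - mean_face) @ eigenfaces.T
    dists = np.linalg.norm(weights - w, axis=1)
    return labels[int(np.argmin(dists))]
```

A deployment like the one described would wrap `recognize` behind a REST endpoint and keep `eigenfaces`/`weights` on the server, which is where the scalability argument applies.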
Neural signatures of conscious and unconscious emotional face processing in human infants.
Jessen, Sarah; Grossmann, Tobias
2015-03-01
Human adults can process emotional information both with and without conscious awareness, and it has been suggested that the two processes rely on partly distinct brain mechanisms. However, the developmental origins of these brain processes are unknown. In the present event-related brain potential (ERP) study, we examined the brain responses of 7-month-old infants in response to subliminally (50 and 100 msec) and supraliminally (500 msec) presented happy and fearful facial expressions. Our results revealed that infants' brain responses (Pb and Nc) over central electrodes distinguished between emotions irrespective of stimulus duration, whereas the discrimination between emotions at occipital electrodes (N290 and P400) only occurred when faces were presented supraliminally (above threshold). This suggests that early in development the human brain not only discriminates between happy and fearful facial expressions irrespective of conscious perception, but also that, similar to adults, supraliminal and subliminal emotion processing relies on distinct neural processes. Our data further suggest that the processing of emotional facial expressions differs across infants depending on their behaviorally shown perceptual sensitivity. The current ERP findings suggest that distinct brain processes underpinning conscious and unconscious emotion perception emerge early in ontogeny and can therefore be seen as a key feature of human social functioning. Copyright © 2014 Elsevier Ltd. All rights reserved.
Individual differences in perceiving and recognizing faces: one element of social cognition.
Wilhelm, Oliver; Herzmann, Grit; Kunina, Olga; Danthiir, Vanessa; Schacht, Annekathrin; Sommer, Werner
2010-09-01
Recognizing faces swiftly and accurately is of paramount importance to humans as a social species. Individual differences in the ability to perform these tasks may therefore reflect important aspects of social or emotional intelligence. Although functional models of face cognition based on group and single-case studies postulate multiple component processes, little is known about the ability structure underlying individual differences in face cognition. In 2 large individual-differences experiments (N = 151 and N = 209), a broad variety of face-cognition tasks were tested, and the component abilities of face cognition (face perception, face memory, and the speed of face cognition) were identified and then replicated. Experiment 2 also showed that the 3 face-cognition abilities are clearly distinct from immediate and delayed memory, mental speed, general cognitive ability, and object cognition. These results converge with functional and neuroanatomical models of face cognition by demonstrating the difference between face perception and face memory. The results also underline the importance of distinguishing between speed and accuracy of face cognition. Together our results provide a first step toward establishing face-processing abilities as an independent ability reflecting elements of social intelligence.
Hornung, Jonas; Kogler, Lydia; Wolpert, Stephan; Freiherr, Jessica; Derntl, Birgit
2017-01-01
The androgen derivative androstadienone is a substance found in human sweat and thus is a putative human chemosignal. Androstadienone has been studied with respect to effects on mood states, attractiveness ratings, physiological and neural activation. With the current experiment, we aimed to explore in which way androstadienone affects attention to social cues (human faces). Moreover, we wanted to test whether effects depend on specific emotions, the participants' sex and individual sensitivity to smell androstadienone. To do so, we investigated 56 healthy individuals (thereof 29 females taking oral contraceptives) with two attention tasks on two consecutive days (once under androstadienone, once under placebo exposure in pseudorandomized order). With an emotional dot-probe task we measured visuo-spatial cueing while an emotional Stroop task allowed us to investigate interference control. Our results suggest that androstadienone acts in a sex, task and emotion-specific manner as a reduction in interference processes in the emotional Stroop task was only apparent for angry faces in men under androstadienone exposure. More specifically, men showed a smaller difference in reaction times for congruent compared to incongruent trials. At the same time also women were slightly affected by smelling androstadienone as they classified angry faces more often correctly under androstadienone. For the emotional dot-probe task no modulation by androstadienone was observed. Furthermore, in both attention paradigms individual sensitivity to androstadienone was neither correlated with reaction times nor error rates in men and women. To conclude, exposure to androstadienone seems to potentiate the relevance of angry faces in both men and women in connection with interference control, while processes of visuo-spatial cueing remain unaffected.
Robust Selectivity for Faces in the Human Amygdala in the Absence of Expressions
Mende-Siedlecki, Peter; Verosky, Sara C.; Turk-Browne, Nicholas B.; Todorov, Alexander
2014-01-01
There is a well-established posterior network of cortical regions that plays a central role in face processing and that has been investigated extensively. In contrast, although responsive to faces, the amygdala is not considered a core face-selective region, and its face selectivity has never been a topic of systematic research in human neuroimaging studies. Here, we conducted a large-scale group analysis of fMRI data from 215 participants. We replicated the posterior network observed in prior studies but found equally robust and reliable responses to faces in the amygdala. These responses were detectable in most individual participants, but they were also highly sensitive to the initial statistical threshold and habituated more rapidly than the responses in posterior face-selective regions. A multivariate analysis showed that the pattern of responses to faces across voxels in the amygdala had high reliability over time. Finally, functional connectivity analyses showed stronger coupling between the amygdala and posterior face-selective regions during the perception of faces than during the perception of control visual categories. These findings suggest that the amygdala should be considered a core face-selective region. PMID:23984945
Conway, Bevil R.; Kanwisher, Nancy G.
2016-01-01
The existence of color-processing regions in the human ventral visual pathway (VVP) has long been known from patient and imaging studies, but their location in the cortex relative to other regions, their selectivity for color compared with other properties (shape and object category), and their relationship to color-processing regions found in nonhuman primates remain unclear. We addressed these questions by scanning 13 subjects with fMRI while they viewed two versions of movie clips (colored, achromatic) of five different object classes (faces, scenes, bodies, objects, scrambled objects). We identified regions in each subject that were selective for color, faces, places, and object shape, and measured responses within these regions to the 10 conditions in independently acquired data. We report two key findings. First, the three previously reported color-biased regions (located within a band running posterior–anterior along the VVP, present in most of our subjects) were sandwiched between face-selective cortex and place-selective cortex, forming parallel bands of face, color, and place selectivity that tracked the fusiform gyrus/collateral sulcus. Second, the posterior color-biased regions showed little or no selectivity for object shape or for particular stimulus categories and showed no interaction of color preference with stimulus category, suggesting that they code color independently of shape or stimulus category; moreover, the shape-biased lateral occipital region showed no significant color bias. These observations mirror results in macaque inferior temporal cortex (Lafer-Sousa and Conway, 2013), and taken together, these results suggest a homology in which the entire tripartite face/color/place system of primates migrated onto the ventral surface in humans over the course of evolution. SIGNIFICANCE STATEMENT Here we report that color-biased cortex is sandwiched between face-selective and place-selective cortex on the bottom surface of the brain in humans. 
This face/color/place organization mirrors that seen on the lateral surface of the temporal lobe in macaques, suggesting that the entire tripartite system is homologous between species. This result validates the use of macaques as a model for human vision, making possible more powerful investigations into the connectivity, precise neural codes, and development of this part of the brain. In addition, we find substantial segregation of color from shape selectivity in posterior regions, as observed in macaques, indicating a considerable dissociation of the processing of shape and color in both species. PMID:26843649
The development of emotion perception in face and voice during infancy.
Grossmann, Tobias
2010-01-01
Interacting with others by reading their emotional expressions is an essential social skill in humans. How this ability develops during infancy and what brain processes underpin infants' perception of emotion in different modalities are the questions dealt with in this paper. Literature review. The first part provides a systematic review of behavioral findings on infants' developing emotion-reading abilities. The second part presents a set of new electrophysiological studies that provide insights into the brain processes underlying infants' developing abilities. Throughout, evidence from unimodal (face or voice) and multimodal (face and voice) processing of emotion is considered. The implications of the reviewed findings for our understanding of developmental models of emotion processing are discussed. The reviewed infant data suggest that (a) early in development, emotion enhances the sensory processing of faces and voices, (b) infants' ability to allocate increased attentional resources to negative emotional information develops earlier in the vocal domain than in the facial domain, and (c) at least by the age of 7 months, infants reliably match and recognize emotional information across face and voice.
Aging effects on selective attention-related electroencephalographic patterns during face encoding.
Deiber, M-P; Rodriguez, C; Jaques, D; Missonnier, P; Emch, J; Millet, P; Gold, G; Giannakopoulos, P; Ibañez, V
2010-11-24
Previous electrophysiological studies revealed that human faces elicit an early visual event-related potential (ERP) within the occipito-temporal cortex, the N170 component. Although face perception has been proposed to rely on automatic processing, the impact of selective attention on N170 remains controversial both in young and elderly individuals. Using early visual ERP and alpha power analysis, we assessed the influence of aging on selective attention to faces during delayed-recognition tasks for face and letter stimuli, examining 36 elderly and 20 young adults with preserved cognition. Face recognition performance worsened with age. Aging induced a latency delay of the N1 component for faces and letters, as well as of the face N170 component. Contrasting with letters, ignored faces elicited larger N1 and N170 components than attended faces in both age groups. This counterintuitive attention effect on face processing persisted when scenes replaced letters. In contrast with young, elderly subjects failed to suppress irrelevant letters when attending faces. Whereas attended stimuli induced a parietal alpha band desynchronization within 300-1000 ms post-stimulus with bilateral-to-right distribution for faces and left lateralization for letters, ignored and passively viewed stimuli elicited a central alpha synchronization larger on the right hemisphere. Aging delayed the latency of this alpha synchronization for both face and letter stimuli, and reduced its amplitude for ignored letters. These results suggest that due to their social relevance, human faces may cause paradoxical attention effects on early visual ERP components, but they still undergo classical top-down control as a function of endogenous selective attention. Aging does not affect the face bottom-up alerting mechanism but reduces the top-down suppression of distracting letters, possibly impinging upon face recognition, and more generally delays the top-down suppression of task-irrelevant information. 
Seeing Jesus in toast: neural and behavioral correlates of face pareidolia.
Liu, Jiangang; Li, Jun; Feng, Lu; Li, Ling; Tian, Jie; Lee, Kang
2014-04-01
Face pareidolia is the illusory perception of non-existent faces. The present study, for the first time, contrasted behavioral and neural responses of face pareidolia with those of letter pareidolia to explore face-specific behavioral and neural responses during illusory face processing. Participants were shown pure-noise images but were led to believe that 50% of them contained either faces or letters; they reported seeing faces or letters illusorily 34% and 38% of the time, respectively. The right fusiform face area (rFFA) showed a specific response when participants "saw" faces as opposed to letters in the pure-noise images. Behavioral responses during face pareidolia produced a classification image (CI) that resembled a face, whereas those during letter pareidolia produced a CI that was letter-like. Further, the extent to which such behavioral CIs resembled faces was directly related to the level of face-specific activations in the rFFA. This finding suggests that the rFFA plays a specific role not only in processing of real faces but also in illusory face perception, perhaps serving to facilitate the interaction between bottom-up information from the primary visual cortex and top-down signals from the prefrontal cortex (PFC). Whole brain analyses revealed a network specialized in face pareidolia, including both the frontal and occipitotemporal regions. Our findings suggest that human face processing has a strong top-down component whereby sensory input with even the slightest suggestion of a face can result in the interpretation of a face. Copyright © 2014 Elsevier Ltd. All rights reserved.
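The classification-image (CI) technique used above can be sketched as reverse correlation: average the pure-noise stimuli by the observer's response and take the difference, so pixels that drive illusory "face" reports stand out. A minimal sketch with synthetic data; the function name and the normalization step are illustrative assumptions:

```python
import numpy as np

def classification_image(noise_images, responses):
    """Reverse-correlation estimate of the observer's internal template.

    noise_images: (n_trials, h, w) array of pure-noise stimuli.
    responses:    (n_trials,) boolean, True where a face was reported.
    The CI is the mean of 'face' trials minus the mean of 'no-face'
    trials; positive pixels are those that promote illusory detection.
    """
    responses = np.asarray(responses, dtype=bool)
    ci = (noise_images[responses].mean(axis=0)
          - noise_images[~responses].mean(axis=0))
    return ci / np.abs(ci).max()  # normalize for display
```

With enough trials, the CI of a face-pareidolia observer resembles a face, which is the behavioural result the study relates to rFFA activation.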
Human sex differences in emotional processing of own-race and other-race faces.
Ran, Guangming; Chen, Xu; Pan, Yangu
2014-06-18
There is evidence that women and men show differences in the perception of affective facial expressions. However, none of the previous studies directly investigated sex differences in emotional processing of own-race and other-race faces. The current study addressed this issue using high time resolution event-related potential techniques. In total, data from 25 participants (13 women and 12 men) were analyzed. It was found that women showed increased N170 amplitudes to negative White faces compared with negative Chinese faces over the right hemisphere electrodes. This result suggests that women show enhanced sensitivity to other-race faces showing negative emotions (fear or disgust), which may contribute toward evolution. However, the current data showed that men had increased N170 amplitudes to happy Chinese versus happy White faces over the left hemisphere electrodes, indicating that men show enhanced sensitivity to own-race faces showing positive emotions (happiness). In this respect, men might use past pleasant emotional experiences to boost recognition of own-race faces.
Gender in facial representations: a contrast-based study of adaptation within and between the sexes.
Oruç, Ipek; Guo, Xiaoyue M; Barton, Jason J S
2011-01-18
Face aftereffects are proving to be an effective means of examining the properties of face-specific processes in the human visual system. We examined the role of gender in the neural representation of faces using a contrast-based adaptation method. If faces of different genders share the same representational face space, then adaptation to a face of one gender should affect both same- and different-gender faces. Further, if these aftereffects differ in magnitude, this may indicate distinct gender-related factors in the organization of this face space. To control for a potential confound between physical similarity and gender, we used a Bayesian ideal observer together with human discrimination data to construct a stimulus set in which pairs of different-gender faces were just as dissimilar as same-gender pairs. We found that the recognition of both same-gender and different-gender faces was suppressed following a brief exposure of 100 ms. Moreover, recognition was more suppressed for test faces of a different gender than those of the same gender as the adaptor, despite the equivalence in physical and psychophysical similarity. Our results suggest that male and female faces likely occupy the same face space, allowing transfer of aftereffects between the genders, but that there are special properties that emerge along gender-defining dimensions of this space.
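The similarity-matching step can be sketched as follows, abstracting the Bayesian ideal-observer front end into generic feature vectors. The feature representation, tolerance, and median-matching criterion here are illustrative assumptions, not the authors' procedure:

```python
import numpy as np

def equate_dissimilarity(male_feats, female_feats, tol=0.1):
    """Pick cross-gender pairs matched in dissimilarity to same-gender pairs.

    male_feats, female_feats: (n, d) arrays of face feature vectors.
    Returns index pairs (i, j) whose male-female distance lies within
    `tol` of the median same-gender distance, so that gender effects
    cannot be explained by raw physical similarity.
    """
    def pair_dists(x):
        # Upper-triangle pairwise distances within one gender.
        d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
        return d[np.triu_indices(len(x), k=1)]

    target = np.median(np.concatenate([pair_dists(male_feats),
                                       pair_dists(female_feats)]))
    cross = np.linalg.norm(male_feats[:, None] - female_feats[None, :],
                           axis=-1)
    i, j = np.where(np.abs(cross - target) <= tol)
    return list(zip(i.tolist(), j.tolist()))
```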
Neural microgenesis of personally familiar face recognition
Ramon, Meike; Vizioli, Luca; Liu-Shuang, Joan; Rossion, Bruno
2015-01-01
Despite a wealth of information provided by neuroimaging research, the neural basis of familiar face recognition in humans remains largely unknown. Here, we isolated the discriminative neural responses to unfamiliar and familiar faces by slowly increasing visual information (i.e., high-spatial frequencies) to progressively reveal faces of unfamiliar or personally familiar individuals. Activation in ventral occipitotemporal face-preferential regions increased with visual information, independently of long-term face familiarity. In contrast, medial temporal lobe structures (perirhinal cortex, amygdala, hippocampus) and anterior inferior temporal cortex responded abruptly when sufficient information for familiar face recognition was accumulated. These observations suggest that following detailed analysis of individual faces in core posterior areas of the face-processing network, familiar face recognition emerges categorically in medial temporal and anterior regions of the extended cortical face network. PMID:26283361
Reflectance from images: a model-based approach for human faces.
Fuchs, Martin; Blanz, Volker; Lensch, Hendrik; Seidel, Hans-Peter
2005-01-01
In this paper, we present an image-based framework that acquires the reflectance properties of a human face. A range scan of the face is not required. Based on a morphable face model, the system estimates the 3D shape and establishes point-to-point correspondence across images taken from different viewpoints and across different individuals' faces. This provides a common parameterization of all reconstructed surfaces that can be used to compare and transfer BRDF data between different faces. Shape estimation from images compensates for deformations of the face during the measurement process, such as facial expressions. In the common parameterization, regions of homogeneous materials on the face surface can be defined a priori. We apply analytical BRDF models to express the reflectance properties of each region, and we estimate their parameters in a least-squares fit from the image data. For each of the surface points, the diffuse component of the BRDF is locally refined, which provides high detail. We present results for multiple analytical BRDF models, rendered at novel orientations and lighting conditions.
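A least-squares parameter fit of the kind described can be sketched for a strongly simplified reflectance model: a Lambertian term plus a Phong-style lobe with a fixed exponent, which makes the model linear in its coefficients. This is an assumption-laden illustration per material region, not the paper's full BRDF pipeline:

```python
import numpy as np

def fit_brdf(intensities, n_dot_l, r_dot_v, shininess=20.0):
    """Least-squares fit of (k_d, k_s) in  I = k_d*(n.l) + k_s*(r.v)**s.

    intensities: observed pixel values for one material region (n_obs,)
    n_dot_l, r_dot_v: per-observation geometry terms (n_obs,)
    With the exponent s fixed, the model is linear in its parameters,
    so ordinary least squares recovers the coefficients directly.
    """
    A = np.column_stack([n_dot_l, r_dot_v ** shininess])
    coef, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return coef  # (k_d, k_s)
```

Fitting a nonlinear exponent as well, as full analytical BRDF models require, would call for an iterative solver instead of a single linear solve.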
Jenkins, Rob; Burton, A. Mike
2011-01-01
Photographs are often used to establish the identity of an individual or to verify that they are who they claim to be. Yet, recent research shows that it is surprisingly difficult to match a photo to a face. Neither humans nor machines can perform this task reliably. Although human perceivers are good at matching familiar faces, performance with unfamiliar faces is strikingly poor. The situation is no better for automatic face recognition systems. In practical settings, automatic systems have been consistently disappointing. In this review, we suggest that failure to distinguish between familiar and unfamiliar face processing has led to unrealistic expectations about face identification in applied settings. We also argue that a photograph is not necessarily a reliable indicator of facial appearance, and develop our proposal that summary statistics can provide more stable face representations. In particular, we show that image averaging stabilizes facial appearance by diluting aspects of the image that vary between snapshots of the same person. We review evidence that the resulting images can outperform photographs in both behavioural experiments and computer simulations, and outline promising directions for future research. PMID:21536553
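The image-averaging proposal above can be sketched directly: given aligned snapshots of the same person, the pixelwise mean dilutes image-specific variation (lighting, expression, camera) while preserving the stable facial structure. A minimal sketch under the assumption that the images are already aligned:

```python
import numpy as np

def average_face(aligned_images):
    """Pixelwise mean of a stack of aligned snapshots of one person.

    aligned_images: (n, h, w) array. Snapshot-specific variation is
    zero-mean across images, so averaging suppresses it while the
    shared facial structure survives.
    """
    stack = np.asarray(aligned_images, dtype=float)
    return stack.mean(axis=0)
```

This is why the review reports that averages outperform individual photographs: the average is a summary statistic closer to the person's stable appearance than any single snapshot.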
Jung, Wookyoung; Kang, Joong-Gu; Jeon, Hyeonjin; Shim, Miseon; Sun Kim, Ji; Leem, Hyun-Sung; Lee, Seung-Hwan
2017-08-01
Faces are processed best when they are presented in the left visual field (LVF), a phenomenon known as LVF superiority. Although one eye contributes more when perceiving faces, it is unclear how the dominant eye (DE), the eye we unconsciously use when performing a monocular task, affects face processing. Here, we examined the influence of the DE on the LVF superiority for faces using event-related potentials. Twenty left-eye-dominant (LDE group) and 23 right-eye-dominant (RDE group) participants performed the experiments. Face stimuli were randomly presented in the LVF or right visual field (RVF). The RDE group exhibited significantly larger N170 amplitudes compared with the LDE group. Faces presented in the LVF elicited N170 amplitudes that were significantly more negative in the RDE group than they were in the LDE group, whereas the amplitudes elicited by stimuli presented in the RVF were equivalent between the groups. The LVF superiority was maintained in the RDE group but not in the LDE group. Our results provide the first neural evidence of the DE's effects on the LVF superiority for faces. We propose that the RDE may be more biologically specialized for face processing. © The Author (2017). Published by Oxford University Press.
Junghöfer, Markus; Rehbein, Maimu Alissa; Maitzen, Julius; Schindler, Sebastian; Kissler, Johanna
2017-01-01
Humans have a remarkable capacity for rapid affective learning. For instance, using first-order US such as odors or electric shocks, magnetoencephalography (MEG) studies of multi-CS conditioning demonstrate enhanced early (<150 ms) and mid-latency (150–300 ms) visual evoked responses to affectively conditioned faces, together with changes in stimulus evaluation. However, particularly in social contexts, human affective learning is often mediated by language, a class of complex higher-order US. To elucidate mechanisms of this type of learning, we investigate how face processing changes following verbal evaluative multi-CS conditioning. Sixty neutral expression male faces were paired with phrases about aversive crimes (30) or neutral occupations (30). Post conditioning, aversively associated faces evoked stronger magnetic fields in a mid-latency interval between 220 and 320 ms, localized primarily in left visual cortex. Aversively paired faces were also rated as more arousing and more unpleasant, evaluative changes occurring both with and without contingency awareness. However, no early MEG effects were found, implying that verbal evaluative conditioning may require conceptual processing and does not engage rapid, possibly sub-cortical, pathways. Results demonstrate the efficacy of verbal evaluative multi-CS conditioning and indicate both common and distinct neural mechanisms of first- and higher-order multi-CS conditioning, thereby informing theories of associative learning. PMID:28008078
Identification and Classification of Facial Familiarity in Directed Lying: An ERP Study
Sun, Delin; Chan, Chetwyn C. H.; Lee, Tatia M. C.
2012-01-01
Recognizing familiar faces is essential to social functioning, but little is known about how people identify human faces and classify them in terms of familiarity. Face identification involves discriminating familiar faces from unfamiliar faces, whereas face classification involves making an intentional decision to classify faces as “familiar” or “unfamiliar.” This study used a directed-lying task to explore the differentiation between identification and classification processes involved in the recognition of familiar faces. To explore this issue, the participants in this study were shown familiar and unfamiliar faces. They responded to these faces (i.e., as familiar or unfamiliar) in accordance with the instructions they were given (i.e., to lie or to tell the truth) while their EEG activity was recorded. Familiar faces (regardless of lying vs. truth) elicited significantly less negative-going N400f in the middle and right parietal and temporal regions than unfamiliar faces. Regardless of their actual familiarity, the faces that the participants classified as “familiar” elicited more negative-going N400f in the central and right temporal regions than those classified as “unfamiliar.” The P600 was related primarily with the facial identification process. Familiar faces (regardless of lying vs. truth) elicited more positive-going P600f in the middle parietal and middle occipital regions. The results suggest that N400f and P600f play different roles in the processes involved in facial recognition. The N400f appears to be associated with both the identification (judgment of familiarity) and classification of faces, while it is likely that the P600f is only associated with the identification process (recollection of facial information). Future studies should use different experimental paradigms to validate the generalizability of the results of this study. PMID:22363597
Face processing in different brain areas, and critical band masking.
Rolls, Edmund T
2008-09-01
Neurophysiological evidence is described showing that some neurons in the macaque inferior temporal visual cortex have responses that are invariant with respect to the position, size, view, and spatial frequency of faces and objects, and that these neurons show rapid processing and rapid learning. Critical band spatial frequency masking is shown to be a property of these face-selective neurons and of the human visual perception of faces. Which face or object is present is encoded using a distributed representation in which each neuron conveys independent information in its firing rate, with little information evident in the relative time of firing of different neurons. This ensemble encoding has the advantages of maximizing the information in the representation useful for discrimination between stimuli using a simple weighted sum of the neuronal firing by the receiving neurons, generalization, and graceful degradation. These invariant representations are ideally suited to provide the inputs to brain regions such as the orbitofrontal cortex and amygdala that learn the reinforcement associations of an individual's face, for then the learning, and the appropriate social and emotional responses generalize to other views of the same face. A theory is described of how such invariant representations may be produced by self-organizing learning in a hierarchically organized set of visual cortical areas with convergent connectivity. The theory utilizes either temporal or spatial continuity with an associative synaptic modification rule. Another population of neurons in the cortex in the superior temporal sulcus encodes other aspects of faces such as face expression, eye-gaze, face view, and whether the head is moving. These neurons thus provide important additional inputs to parts of the brain such as the orbitofrontal cortex and amygdala that are involved in social communication and emotional behaviour. 
Outputs of these systems reach the amygdala, in which face-selective neurons are found, and also the orbitofrontal cortex, in which some neurons are tuned to face identity and others to face expression. In humans, activation of the orbitofrontal cortex is found when a change of face expression acts as a social signal that behaviour should change; and damage to the human orbitofrontal and pregenual cingulate cortex can impair face and voice expression identification, and also the reversal of emotional behaviour that normally occurs when reinforcers are reversed.
Lin, Jo-Fu Lotus; Silva-Pereyra, Juan; Chou, Chih-Che; Lin, Fa-Hsuan
2018-04-11
Variability in neuronal response latency has typically been attributed to random noise. Previous studies of single cells and large neuronal populations have shown that temporal variability tends to increase along the visual pathway. Inspired by these studies, we hypothesized that functional areas at later stages of the visual pathway for face processing would show larger variability in response latency. To test this hypothesis, we used magnetoencephalographic data collected while subjects were presented with images of human faces. Faces are known to elicit a sequence of activity from the primary visual cortex to the fusiform gyrus. Our results revealed that the fusiform gyrus showed larger variability in response latency than the calcarine fissure. Dynamic and spectral analyses of the latency variability indicated that the response latency in the fusiform gyrus was more variable than in the calcarine fissure between 70 ms and 200 ms after stimulus onset and between 4 Hz and 40 Hz, respectively. The sequential processing of face information from the calcarine fissure to the fusiform gyrus was detected more reliably from the size of the response variability than from the timing of the maximal response peaks. With two areas in the ventral visual pathway, we show that variability in response latency across brain areas can be used to infer the sequence of cortical activity.
Electrophysiological brain dynamics during the esthetic judgment of human bodies and faces.
Muñoz, Francisco; Martín-Loeches, Manuel
2015-01-12
This experiment investigated how the esthetic judgment of human bodies and faces modulates cognitive and affective processes. We hypothesized that judgments of ugliness and beauty would elicit separable event-related brain potential (ERP) patterns, depending on the esthetic value of bodies and faces in both genders. In a pretest session, participants evaluated images on a range from very ugly to very beautiful, which generated three sets of beautiful, ugly and neutral faces and bodies. In the recording session, they performed a task consisting of a beautiful-neutral-ugly judgment. Cognitive and affective effects were observed in a differential pattern of ERP components (P200, P300 and LPC). The main findings revealed a P200 amplitude increase to ugly images, probably the result of a negativity bias in attentional processes. A P300 increase was found mostly to beautiful images, particularly to female bodies, consistent with the salience of these stimuli for stimulus categorization. The LPC was significantly larger to both ugly and beautiful images, probably reflecting later decision processes linked to keeping information in working memory. This finding was especially marked for ugly male faces. Our findings are discussed in light of the evolutionary and adaptive value of esthetics in person evaluation. This article is part of a Special Issue entitled Hold Item. Copyright © 2014 Elsevier B.V. All rights reserved.
Caharel, Stéphanie; Jiang, Fang; Blanz, Volker; Rossion, Bruno
2009-10-01
The human brain recognizes faces by means of two main diagnostic sources of information: three-dimensional (3D) shape and two-dimensional (2D) surface reflectance. Here we used event-related potentials (ERPs) in a face adaptation paradigm to examine the time-course of processing for these two types of information. With a 3D morphable model, we generated pairs of faces that were either identical, varied in 3D shape only, in 2D surface reflectance only, or in both. Sixteen human observers discriminated individual faces in these 4 types of pairs, in which a first (adapting) face was followed shortly by a second (test) face. Behaviorally, observers were as accurate and as fast at discriminating individual faces based on either 3D shape or 2D surface reflectance alone, but were faster when both sources of information were present. As early as the face-sensitive N170 component (approximately 160 ms following the test face), there was larger amplitude for changes in 3D shape relative to the repetition of the same face, especially over the right occipito-temporal electrodes. However, changes in 2D reflectance between the adapter and target face did not increase the N170 amplitude. At about 250 ms, both 3D shape and 2D reflectance contributed equally, and the largest difference in amplitude compared to the repetition of the same face was found when both 3D shape and 2D reflectance were combined, in line with observers' behavior. These observations indicate that evidence to recognize individual faces accumulates faster in the right hemisphere human visual cortex from diagnostic 3D shape information than from 2D surface reflectance information.
Kottlow, Mara; Jann, Kay; Dierks, Thomas; Koenig, Thomas
2012-08-01
Gamma zero-lag phase synchronization has been measured in the animal brain during visual binding. Human scalp EEG studies have used a phase-locking factor (trial-to-trial phase-shift consistency) or gamma amplitude to measure binding, but have so far not analyzed common-phase signals. This study introduces a method to identify networks oscillating with near zero-lag phase synchronization in human subjects. We presented unpredictably moving face parts (NOFACE) which, during some periods, produced a complete schematic face (FACE). The amount of zero-lag phase synchronization was measured using global field synchronization (GFS). GFS provides global information on the amount of instantaneous coincidences in specific frequencies throughout the brain. Gamma GFS was increased during the FACE condition. To localize the underlying areas, we correlated gamma GFS with simultaneously recorded BOLD responses. Positive correlates comprised the bilateral middle fusiform gyrus and the left precuneus. These areas may form a network transiently synchronized during face integration, including face-specific as well as binding-specific regions and regions for visual processing in general. Thus, the amount of zero-lag phase synchronization between remote regions of the human visual system can be measured with simultaneously acquired EEG/fMRI. Copyright © 2012 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
The non-linear development of the right hemispheric specialization for human face perception.
Lochy, Aliette; de Heering, Adélaïde; Rossion, Bruno
2017-06-24
The developmental origins of human adults' right hemispheric specialization for face perception remain unclear. On the one hand, infant studies have shown a right hemispheric advantage for face perception. On the other hand, it has been proposed that the adult right hemispheric lateralization for face perception slowly emerges during childhood due to reading acquisition, which increases left lateralized posterior responses to competing written material (e.g., visual letters and words). Since the methodological approaches used with infants and children typically differ when their face-processing capabilities are explored, resolving this issue has been difficult. Here we tested 5-year-old preschoolers varying in their level of visual letter knowledge with the same fast periodic visual stimulation (FPVS) paradigm leading to strongly right lateralized electrophysiological occipito-temporal face-selective responses in 4- to 6-month-old infants (de Heering and Rossion, 2015). Children's face-selective response was quantitatively larger and differed in scalp topography from infants', but did not differ across hemispheres. There was a small positive correlation between preschoolers' letter knowledge and a non-normalized index of right hemispheric specialization for faces. These observations show that previous discrepant results in the literature reflect a genuine nonlinear development of the neural processes underlying face perception and are not merely due to methodological differences across age groups. We discuss several factors that could contribute to the adult right hemispheric lateralization for faces, such as myelination of the corpus callosum and reading acquisition. Our findings point to the value of FPVS coupled with electroencephalography to assess specialized face perception processes throughout development with the same methodology. Copyright © 2017 Elsevier Ltd. All rights reserved.
Understanding the symptoms of schizophrenia using visual scan paths.
Phillips, M L; David, A S
1994-11-01
This paper highlights the role of the visual scan path as a physiological marker of information processing in the investigation of positive symptomatology in schizophrenia. The current literature is reviewed using computer search facilities (Medline). Patients with schizophrenia either scan extensively or stare, the latter being related to negative symptoms, and they scan particularly when viewing human faces. Scan paths in schizophrenia are especially informative with meaningful stimuli such as human faces, because of the relationship between abnormal perception of such stimuli and symptomatology in these patients.
Neural responses to facial expression and face identity in the monkey amygdala.
Gothard, K M; Battaglia, F P; Erickson, C A; Spitler, K M; Amaral, D G
2007-02-01
The amygdala is purported to play an important role in face processing, yet the specificity of its activation to face stimuli and the relative contribution of identity and expression to its activation are unknown. In the current study, neural activity in the amygdala was recorded as monkeys passively viewed images of monkey faces, human faces, and objects on a computer monitor. Comparable proportions of neurons responded selectively to images from each category. Neural responses to monkey faces were further examined to determine whether face identity or facial expression drove the face-selective responses. The majority of these neurons (64%) responded both to identity and facial expression, suggesting that these parameters are processed jointly in the amygdala. Large fractions of neurons, however, showed pure identity-selective or expression-selective responses. Neurons were selective for a particular facial expression by either increasing or decreasing their firing rate compared with the firing rates elicited by the other expressions. Responses to appeasing faces were often marked by significant decreases of firing rates, whereas responses to threatening faces were strongly associated with increased firing rate. Thus global activation in the amygdala might be larger to threatening faces than to neutral or appeasing faces.
Multiview face detection based on position estimation over multicamera surveillance system
NASA Astrophysics Data System (ADS)
Huang, Ching-chun; Chou, Jay; Shiu, Jia-Hou; Wang, Sheng-Jyh
2012-02-01
In this paper, we propose a multi-view face detection system that locates head positions and indicates the direction of each face in 3-D space over a multi-camera surveillance system. To locate 3-D head positions, conventional methods rely on face detection in 2-D images and project the detected face regions back into 3-D space for correspondence. However, inevitable false detections and rejections usually degrade system performance. Instead, our system searches for heads and face directions over the 3-D space using a sliding cube. Each searched 3-D cube is projected onto the 2-D camera views to determine the existence and direction of human faces. Moreover, a pre-processing step that estimates the locations of candidate targets is introduced to speed up the search over the 3-D space. In summary, our proposed method can efficiently fuse multi-camera information and suppress the ambiguity caused by detection errors. Our evaluation shows that the proposed approach can efficiently indicate head position and face direction in real video sequences even under serious occlusion.
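The sliding-cube idea in this abstract can be sketched in a few lines: enumerate candidate head positions on a 3-D grid, project each candidate into every camera view, score the projections with a 2-D face classifier, and fuse the per-view scores. This is a hypothetical toy, not the authors' implementation; the simplified pinhole `project` function and the stand-in `face_score` classifier are illustrative assumptions.

```python
def project(camera, point):
    """Project a 3-D point with a simplified pinhole camera (no rotation)."""
    x, y, z = (point[i] - camera["position"][i] for i in range(3))
    if z <= 0:          # behind the camera: not visible
        return None
    f = camera["focal"]
    return (f * x / z + camera["cx"], f * y / z + camera["cy"])

def face_score(camera, pixel):
    """Stand-in for a 2-D face classifier applied at the projected location."""
    if pixel is None:
        return 0.0
    # Toy rule: respond strongly near the image centre, fading with distance.
    dx = pixel[0] - camera["cx"]
    dy = pixel[1] - camera["cy"]
    return max(0.0, 1.0 - (dx * dx + dy * dy) / 1.0e4)

def search_heads(cameras, grid, threshold=0.9):
    """Slide a cube over the 3-D grid; fuse per-view scores by summation."""
    detections = []
    for cube_center in grid:
        total = sum(face_score(c, project(c, cube_center)) for c in cameras)
        if total >= threshold:
            detections.append((cube_center, total))
    return detections

cameras = [
    {"position": (0.0, 0.0, 0.0), "focal": 500.0, "cx": 320.0, "cy": 240.0},
    {"position": (1.0, 0.0, 0.0), "focal": 500.0, "cx": 320.0, "cy": 240.0},
]
# Coarse grid of candidate head positions (metres), two metres from the cameras.
grid = [(x * 0.5, 0.0, 2.0) for x in range(5)]
hits = search_heads(cameras, grid)
```

Searching in 3-D and fusing scores across views, rather than back-projecting per-view detections, is what lets a single false detection in one camera be outvoted by the other views.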
How Fast is Famous Face Recognition?
Barragan-Jason, Gladys; Lachat, Fanny; Barbeau, Emmanuel J.
2012-01-01
The rapid recognition of familiar faces is crucial for social interactions. However, the actual speed with which recognition can be achieved remains largely unknown, as most studies have been carried out without any speed constraints. Different paradigms have been used, leading to conflicting results, and although many authors suggest that face recognition is fast, the speed of face recognition has not been directly compared to “fast” visual tasks. In this study, we sought to overcome these limitations. Subjects performed three tasks, a familiarity categorization task (famous faces among unknown faces), a superordinate categorization task (human faces among animal ones), and a gender categorization task. All tasks were performed under speed constraints. The results show that, despite the use of speed constraints, subjects were slow when they had to categorize famous faces: the minimum reaction time was 467 ms, which is 180 ms more than during superordinate categorization and 160 ms more than in the gender condition. Our results are compatible with a hierarchy of face processing from the superordinate level to the familiarity level. The processes taking place between detection and recognition need to be investigated in detail. PMID:23162503
Face race processing and racial bias in early development: A perceptual-social linkage.
Lee, Kang; Quinn, Paul C; Pascalis, Olivier
2017-06-01
Infants have asymmetrical exposure to different types of faces (e.g., more human than other-species, more female than male, and more own-race than other-race). What are the developmental consequences of such experiential asymmetry? Here we review recent advances in research on the development of cross-race face processing. The evidence suggests that greater exposure to own- than other-race faces in infancy leads to developmentally early perceptual differences in visual preference, recognition, category formation, and scanning of own- and other-race faces. Further, such perceptual differences in infancy may be associated with the emergence of implicit racial bias, consistent with a Perceptual-Social Linkage Hypothesis. Current and future work derived from this hypothesis may lay an important empirical foundation for the development of intervention programs to combat the early occurrence of implicit racial bias.
Young, Steven G; Hugenberg, Kurt; Bernstein, Michael J; Sacco, Donald F
2012-05-01
Although humans possess well-developed face processing expertise, face processing is nevertheless subject to a variety of biases. Perhaps the best known of these biases is the Cross-Race Effect--the tendency to have more accurate recognition for same-race than cross-race faces. The current work reviews the evidence for and provides a critical review of theories of the Cross-Race Effect, including perceptual expertise and social cognitive accounts of the bias. The authors conclude that recent hybrid models of the Cross-Race Effect, which combine elements of both perceptual expertise and social cognitive frameworks, provide an opportunity for theoretical synthesis and advancement not afforded by independent expertise or social cognitive models. Finally, the authors suggest future research directions intended to further develop a comprehensive and integrative understanding of biases in face recognition.
Facial detection using deep learning
NASA Astrophysics Data System (ADS)
Sharma, Manik; Anuradha, J.; Manne, H. K.; Kashyap, G. S. C.
2017-11-01
In the recent past, we have observed that Facebook has developed an uncanny ability to recognize people in photographs. Previously, we had to tag people in photos by clicking on them and typing their name. Now, as soon as we upload a photo, Facebook tags everyone on its own. Facebook can recognize faces with 98% accuracy, which is nearly as good as humans can do. This technology is called face detection. Face detection is a popular topic in biometrics, and surveillance cameras in public places capture video for security purposes. The main advantages of face biometrics over other modalities are uniqueness and acceptability, and identification demands both speed and accuracy. But face detection is really a series of several related problems. First, look at a picture and find all the faces in it. Second, focus on each face and recognize that even if a face is turned in an unusual direction or seen in bad lighting, it is still the same person. Third, select features that can be used to identify each face uniquely, such as the size of the eyes or the shape of the face. Finally, compare these features against stored data to find the person's name. The human brain is wired to do all of this automatically and instantly; in fact, humans are exceptionally good at recognizing faces. Computers are not capable of this kind of high-level generalization, so we must teach them how to do each step in the process separately. The growth of face detection is largely driven by applications such as credit card verification, surveillance video analysis, and authentication for banking and security system access.
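The four-step pipeline this abstract describes (detect faces, normalize them, extract features, compare against known identities) can be sketched as a toy program. Every function here is a placeholder for what would in practice be a trained model such as a CNN; the names and data are illustrative assumptions, not part of any surveyed system.

```python
import math

def detect_faces(image):
    """Step 1 (stub): return regions assumed to contain faces."""
    return image["face_regions"]

def normalize(region):
    """Step 2 (stub): pose/lighting normalization, here just rescaling."""
    m = max(region) or 1.0
    return [v / m for v in region]

def embed(region):
    """Step 3 (stub): map a normalized region to a feature vector."""
    return normalize(region)

def identify(vector, database):
    """Step 4: nearest neighbour over stored identity embeddings."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(database, key=lambda name: dist(vector, database[name]))

# Hypothetical identity database and input image.
database = {
    "alice": [1.0, 0.2, 0.1],
    "bob":   [0.1, 0.9, 1.0],
}
image = {"face_regions": [[5.0, 1.0, 0.5]]}   # one detected face, alice-like
names = [identify(embed(r), database) for r in detect_faces(image)]
```

Separating the steps this way mirrors the abstract's point: each sub-problem (detection, invariance to pose and lighting, feature extraction, matching) is taught to the machine individually.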
The face-selective N170 component is modulated by facial color.
Nakajima, Kae; Minami, Tetsuto; Nakauchi, Shigeki
2012-08-01
Faces play an important role in social interaction by conveying information and emotion. Of the various components of the face, color particularly provides important clues with regard to perception of age, sex, health status, and attractiveness. In event-related potential (ERP) studies, the N170 component has been identified as face-selective. To determine the effect of color on face processing, we investigated the modulation of N170 by facial color. We recorded ERPs while subjects viewed facial color stimuli at 8 hue angles, generated by rotating the original facial color distribution of each human face around the white point in steps of 45°. Responses to facial color were localized to the left, but not to the right hemisphere. N170 amplitudes gradually increased in proportion to the increase in hue angle from the natural-colored face. This suggests that N170 amplitude in the left hemisphere reflects processing of facial color information. Copyright © 2012 Elsevier Ltd. All rights reserved.
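The stimulus manipulation described above, rotating a face's color distribution around the white point, amounts to a plain 2-D rotation of each pixel's chroma coordinates when working in an opponent color plane whose origin is the white point. A minimal sketch, assuming CIELAB-like a*/b* coordinates (the study's exact color space is not restated here) and toy pixel values:

```python
import math

def rotate_hue(pixels_ab, angle_deg):
    """Rotate (a, b) chroma coordinates around the white point (0, 0)."""
    t = math.radians(angle_deg)
    cos_t, sin_t = math.cos(t), math.sin(t)
    return [(a * cos_t - b * sin_t, a * sin_t + b * cos_t)
            for a, b in pixels_ab]

# Eight hue angles, 45 degrees apart, as in the study.
angles = [k * 45 for k in range(8)]
face_ab = [(20.0, 10.0), (15.0, 5.0)]          # illustrative skin-tone chroma
stimuli = {ang: rotate_hue(face_ab, ang) for ang in angles}
```

Because the rotation leaves lightness and chroma magnitude untouched, the eight stimuli differ only in hue, which is what lets the study attribute N170 modulation to color deviation from the natural face.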
Mere social categorization modulates identification of facial expressions of emotion.
Young, Steven G; Hugenberg, Kurt
2010-12-01
The ability of the human face to communicate emotional states via facial expressions is well known, and past research has established the importance and universality of emotional facial expressions. However, recent evidence has revealed that facial expressions of emotion are most accurately recognized when the perceiver and expresser are from the same cultural ingroup. The current research builds on this literature and extends this work. Specifically, we find that mere social categorization, using a minimal-group paradigm, can create an ingroup emotion-identification advantage even when the culture of the target and perceiver is held constant. Follow-up experiments show that this effect is supported by differential motivation to process ingroup versus outgroup faces and that this motivational disparity leads to more configural processing of ingroup faces than of outgroup faces. Overall, the results point to distinct processing modes for ingroup and outgroup faces, resulting in differential identification accuracy for facial expressions of emotion. PsycINFO Database Record (c) 2010 APA, all rights reserved.
Arguments Against a Configural Processing Account of Familiar Face Recognition.
Burton, A Mike; Schweinberger, Stefan R; Jenkins, Rob; Kaufmann, Jürgen M
2015-07-01
Face recognition is a remarkable human ability, which underlies a great deal of people's social behavior. Individuals can recognize family members, friends, and acquaintances over a very large range of conditions, and yet the processes by which they do this remain poorly understood, despite decades of research. Although a detailed understanding remains elusive, face recognition is widely thought to rely on configural processing, specifically an analysis of spatial relations between facial features (so-called second-order configurations). In this article, we challenge this traditional view, raising four problems: (1) configural theories are underspecified; (2) large configural changes leave recognition unharmed; (3) recognition is harmed by nonconfigural changes; and (4) in separate analyses of face shape and face texture, identification tends to be dominated by texture. We review evidence from a variety of sources and suggest that failure to acknowledge the impact of familiarity on facial representations may have led to an overgeneralization of the configural account. We argue instead that second-order configural information is remarkably unimportant for familiar face recognition. © The Author(s) 2015.
Right hemispheric dominance in gaze-triggered reflexive shift of attention in humans.
Okada, Takashi; Sato, Wataru; Toichi, Motomi
2006-11-01
Recent findings suggest a right hemispheric dominance in gaze-triggered shifts of attention. The aim of this study was to clarify the dominant hemisphere in the gaze processing that mediates attentional shifts. Forty-four healthy subjects performed a target localization task in which non-predictive gaze cues preceded targets presented to each visual field, and reaction times (RTs) were measured. A face identification task was also administered to determine each subject's hemispheric dominance in face processing. RT differences between valid and invalid cues were larger when cues were presented in the left rather than the right visual field. This held true regardless of individual hemispheric dominance in face processing. Together, these results indicate right hemispheric dominance in gaze-triggered reflexive shifts of attention in normal healthy subjects.
A Brain Network Processing the Age of Faces
Homola, György A.; Jbabdi, Saad; Beckmann, Christian F.; Bartsch, Andreas J.
2012-01-01
Age is one of the most salient aspects in faces and of fundamental cognitive and social relevance. Although face processing has been studied extensively, brain regions responsive to age have yet to be localized. Using evocative face morphs and fMRI, we segregate two areas extending beyond the previously established face-sensitive core network, centered on the inferior temporal sulci and angular gyri bilaterally, both of which process changes of facial age. By means of probabilistic tractography, we compare their patterns of functional activation and structural connectivity. The ventral portion of Wernicke's understudied perpendicular association fasciculus is shown to interconnect the two areas, and activation within these clusters is related to the probability of fiber connectivity between them. In addition, post-hoc age-rating competence is found to be associated with high response magnitudes in the left angular gyrus. Our results provide the first evidence that facial age has a distinct representation pattern in the posterior human brain. We propose that particular face-sensitive nodes interact with additional object-unselective quantification modules to obtain individual estimates of facial age. This brain network processing the age of faces differs from the cortical areas that have previously been linked to less developmental but instantly changeable face aspects. Our probabilistic method of associating activations with connectivity patterns reveals an exemplary link that can be used to further study, assess and quantify structure-function relationships. PMID:23185334
Gender differences in BOLD activation to face photographs and video vignettes.
Fine, Jodene Goldenring; Semrud-Clikeman, Margaret; Zhu, David C
2009-07-19
Few neuroimaging studies have reported gender differences in response to human emotions, and those that have examined such differences have utilized face photographs. This study presented not only human face photographs of positive and negative emotions, but also video vignettes of positive and negative social human interactions, in an attempt to provide a more ecologically appropriate stimulus paradigm. Ten male and 10 female healthy right-handed young adults were shown positive and negative affective social human faces and video vignettes to elicit gender differences in social/emotional perception. Conservative ROI (region of interest) analysis indicated greater male than female activation to positive affective photos in the anterior cingulate, medial frontal gyrus, superior frontal gyrus and superior temporal gyrus, all in the right hemisphere. No significant ROI gender differences were observed to negative affective photos. Male greater than female activation was seen in ROIs of the left posterior cingulate and the right inferior temporal gyrus to positive social videos. Male greater than female activation occurred in only the left middle temporal ROI for negative social videos. Consistent with previous findings, males were more lateralized than females. Although more activation was observed overall to video than to photo conditions, males and females appear to process social video stimuli more similarly to one another than they do photos. This study is a step forward in understanding the social brain with more ecologically valid stimuli that more closely approximate the demands of real-time social and affective processing.
Emotional facial expressions reduce neural adaptation to face identity.
Gerlicher, Anna M V; van Loon, Anouk M; Scholte, H Steven; Lamme, Victor A F; van der Leij, Andries R
2014-05-01
In human social interactions, facial emotional expressions are a crucial source of information. Repeatedly presented information typically leads to an adaptation of neural responses. However, processing seems to be sustained for emotional facial expressions. We therefore tested whether sustained processing of emotional expressions, especially threat-related expressions, would attenuate neural adaptation. Neutral and emotional expressions (happy, mixed and fearful) of same and different identity were presented at 3 Hz. We used electroencephalography to record the evoked steady-state visual potentials (ssVEP) and tested to what extent the ssVEP amplitude adapts to the same as compared with different face identities. We found adaptation to the identity of a neutral face. For emotional faces, however, adaptation was reduced, decreasing linearly with negative valence, with the least adaptation to fearful expressions. This short and straightforward method may prove to be a valuable new tool in the study of emotional processing.
Emotional Cues during Simultaneous Face and Voice Processing: Electrophysiological Insights
Liu, Taosheng; Pinheiro, Ana; Zhao, Zhongxin; Nestor, Paul G.; McCarley, Robert W.; Niznikiewicz, Margaret A.
2012-01-01
Both facial expression and tone of voice represent key signals of emotional communication, but their brain processing correlates remain unclear. Accordingly, we constructed a novel implicit emotion recognition task consisting of simultaneously presented human faces and voices with neutral, happy, and angry valence, within the context of a monkey face and voice recognition task. To investigate the temporal unfolding of the processing of affective information from human face-voice pairings, we recorded event-related potentials (ERPs) to these audiovisual test stimuli in 18 normal healthy subjects; N100, P200, N250, and P300 components were observed at electrodes in the frontal-central region, while P100, N170, and P270 were observed at electrodes in the parietal-occipital region. Results indicated a significant audiovisual stimulus effect on the amplitudes and latencies of components in the frontal-central (P200, P300, and N250) but not the parietal-occipital region (P100, N170, and P270). Specifically, P200 and P300 amplitudes were more positive for emotional relative to neutral audiovisual stimuli, irrespective of valence, whereas N250 amplitude was more negative for neutral relative to emotional stimuli. No differentiation was observed between angry and happy conditions. The results suggest that the general effect of emotion on audiovisual processing can emerge as early as 200 msec (P200 peak latency) post stimulus onset, in spite of implicit affective processing task demands, and that this effect is mainly distributed in the frontal-central region. PMID:22383987
The processing of social stimuli in early infancy: from faces to biological motion perception.
Simion, Francesca; Di Giorgio, Elisa; Leo, Irene; Bardi, Lara
2011-01-01
There are several lines of evidence suggesting that, from birth, the human system detects social agents on the basis of at least two properties: the presence of a face and the way they move. This chapter reviews the infant research on the origin of brain specialization for social stimuli and on the role of innate mechanisms and perceptual experience in shaping the development of the social brain. Two lines of convergent evidence, on face detection and on biological motion detection, will be presented to demonstrate the innate predispositions of the human system to detect social stimuli at birth. As for face detection, experiments will be presented to demonstrate that, by virtue of nonspecific attentional biases, a very coarse template of faces becomes active at birth. As for biological motion detection, studies will be presented to demonstrate that, from birth, the human system is able to detect social stimuli on the basis of properties such as the presence of a semi-rigid motion termed biological motion. Overall, the empirical evidence converges in supporting the notion that the human system begins life broadly tuned to detect social stimuli and that progressive specialization narrows the system for social stimuli as a function of experience. Copyright © 2011 Elsevier B.V. All rights reserved.
A unified coding strategy for processing faces and voices
Yovel, Galit; Belin, Pascal
2013-01-01
Both faces and voices are rich in socially-relevant information, which humans are remarkably adept at extracting, including a person's identity, age, gender, affective state, personality, etc. Here, we review accumulating evidence from behavioral, neuropsychological, electrophysiological, and neuroimaging studies which suggest that the cognitive and neural processing mechanisms engaged by perceiving faces or voices are highly similar, despite the very different nature of their sensory input. The similarity between the two mechanisms likely facilitates the multi-modal integration of facial and vocal information during everyday social interactions. These findings emphasize a parsimonious principle of cerebral organization, where similar computational problems in different modalities are solved using similar solutions. PMID:23664703
Gender in Facial Representations: A Contrast-Based Study of Adaptation within and between the Sexes
Oruç, Ipek; Guo, Xiaoyue M.; Barton, Jason J. S.
2011-01-01
Face aftereffects are proving to be an effective means of examining the properties of face-specific processes in the human visual system. We examined the role of gender in the neural representation of faces using a contrast-based adaptation method. If faces of different genders share the same representational face space, then adaptation to a face of one gender should affect both same- and different-gender faces. Further, if these aftereffects differ in magnitude, this may indicate distinct gender-related factors in the organization of this face space. To control for a potential confound between physical similarity and gender, we used a Bayesian ideal observer and human discrimination data to construct a stimulus set in which pairs of different-gender faces were as dissimilar as same-gender pairs. We found that the recognition of both same-gender and different-gender faces was suppressed following a brief exposure of 100 ms. Moreover, recognition was more suppressed for test faces of a different gender than for those of the same gender as the adaptor, despite the equivalence in physical and psychophysical similarity. Our results suggest that male and female faces likely occupy the same face space, allowing transfer of aftereffects between the genders, but that there are special properties that emerge along gender-defining dimensions of this space. PMID:21267414
Facial attractiveness judgements reflect learning of parental age characteristics.
Perrett, David I; Penton-Voak, Ian S; Little, Anthony C; Tiddeman, Bernard P; Burt, D Michael; Schmidt, Natalie; Oxley, Roz; Kinloch, Nicholas; Barrett, Louise
2002-05-07
Mate preferences are shaped by infant experience of parental characteristics in a wide variety of species. Similar processes in humans may lead to physical similarity between parents and mates, yet this possibility has received little attention. The age of parents is one salient physical characteristic that offspring may attend to. The current study used computer-graphic faces to examine how preferences for age in faces were influenced by parental age. We found that women born to 'old' parents (over 30) were less impressed by youth, and more attracted to age cues in male faces than women with 'young' parents (under 30). For men, preferences for female faces were influenced by their mother's age and not their father's age, but only for long-term relationships. These data indicate that judgements of facial attractiveness in humans reflect the learning of parental characteristics.
Modeling human faces with multi-image photogrammetry
NASA Astrophysics Data System (ADS)
D'Apuzzo, Nicola
2002-03-01
Modeling and measurement of the human face have been increasing in importance for various purposes. Laser scanning, coded-light range digitizers, image-based approaches, and digital stereo photogrammetry are the methods currently employed in medical applications, computer animation, video surveillance, teleconferencing, and virtual reality to produce three-dimensional computer models of the human face. Requirements differ depending on the application; ours are primarily high measurement accuracy and automation of the process. The method presented in this paper is based on multi-image photogrammetry; the equipment, the method, and the results achieved with this technique are described here. The process is composed of five steps: acquisition of multi-images, calibration of the system, establishment of corresponding points in the images, computation of their 3-D coordinates, and generation of a surface model. The images captured by five CCD cameras arranged in front of the subject are digitized by a frame grabber. The complete system is calibrated using a reference object with coded target points, which can be measured fully automatically. To facilitate the establishment of correspondences in the images, texture in the form of random patterns can be projected from two directions onto the face. The multi-image matching process, based on a geometrically constrained least squares matching algorithm, produces a dense set of corresponding points in the five images. Neighborhood filters are then applied to the matching results to remove errors. After filtering the data, the three-dimensional coordinates of the matched points are computed by forward intersection using the results of the calibration process; the achieved mean accuracy is about 0.2 mm in the sagittal direction and about 0.1 mm in the lateral direction. The last step of data processing is the generation of a surface model from the point cloud and the application of smoothing filters.
Moreover, a color texture image can be draped over the model to achieve a photorealistic visualization. The advantage of the presented method over laser scanning and coded light range digitizers is the acquisition of the source data in a fraction of a second, allowing the measurement of human faces with higher accuracy and the possibility to measure dynamic events like the speech of a person.
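The forward-intersection step described in the abstract above can be sketched as a linear triangulation from calibrated views. The following is a generic two-view DLT sketch, not the authors' implementation; all matrix and variable names are illustrative:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Forward intersection of one matched point from two calibrated views.

    P1, P2 : 3x4 camera projection matrices (from the calibration step)
    x1, x2 : (u, v) image coordinates of the corresponding point in each view
    Returns the 3D point as a length-3 array (linear DLT solution).
    """
    # Each view contributes two linear constraints on the homogeneous 3D point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous least squares: the solution is the right singular vector
    # of A associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

In the five-camera setup described, each matched point would contribute two such constraints per camera, and the same least-squares solution applies to the stacked 10x4 system.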
Effects of spatial frequency and location of fearful faces on human amygdala activity.
Morawetz, Carmen; Baudewig, Juergen; Treue, Stefan; Dechent, Peter
2011-01-31
Facial emotion perception plays a fundamental role in interpersonal social interactions. Images of faces contain visual information at various spatial frequencies. The amygdala has previously been reported to be preferentially responsive to low-spatial frequency (LSF) rather than to high-spatial frequency (HSF) filtered images of faces presented at the center of the visual field. Furthermore, it has been proposed that the amygdala might be especially sensitive to affective stimuli in the periphery. In the present study we investigated the impact of spatial frequency and stimulus eccentricity on face processing in the human amygdala and fusiform gyrus using functional magnetic resonance imaging (fMRI). The spatial frequencies of pictures of fearful faces were filtered to produce images that retained only LSF or HSF information. Facial images were presented either in the left or right visual field at two different eccentricities. In contrast to previous findings, we found that the amygdala responds to LSF and HSF stimuli in a similar manner regardless of the location of the affective stimuli in the visual field. Furthermore, the fusiform gyrus did not show differential responses to spatial frequency filtered images of faces. Our findings argue against the view that LSF information plays a crucial role in the processing of facial expressions in the amygdala and of a higher sensitivity to affective stimuli in the periphery. Copyright © 2010 Elsevier B.V. All rights reserved.
Face and emotion expression processing and the serotonin transporter polymorphism 5-HTTLPR/rs25531.
Hildebrandt, A; Kiy, A; Reuter, M; Sommer, W; Wilhelm, O
2016-06-01
Face cognition, including face identity and facial expression processing, is a crucial component of the socio-emotional abilities that characterize humans as highly developed social beings. However, for these trait domains, molecular genetic studies investigating gene-behavior associations based on well-founded phenotype definitions are still rare. We examined the relationship between the 5-HTTLPR/rs25531 polymorphisms - related to serotonin reuptake - and the ability to perceive and recognize faces and emotional expressions in human faces. To this end, we conducted structural equation modeling on data from 230 young adults, obtained using a comprehensive, multivariate task battery with maximal-effort tasks. By additionally modeling fluid intelligence and immediate and delayed memory factors, we aimed to address the discriminant relationships of the 5-HTTLPR/rs25531 polymorphisms with socio-emotional abilities. We found a robust association between the 5-HTTLPR/rs25531 polymorphism and facial emotion perception: carriers of two long (L) alleles outperformed carriers of one or two short (S) alleles. Weaker associations were present for face identity perception and memory for emotional facial expressions. There was no association between the 5-HTTLPR/rs25531 polymorphism and non-social abilities, demonstrating the discriminant validity of the relationships. We discuss the implications and possible neural mechanisms underlying these novel findings. © 2016 John Wiley & Sons Ltd and International Behavioural and Neural Genetics Society.
Adjudicating between face-coding models with individual-face fMRI responses
Kriegeskorte, Nikolaus
2017-01-01
The perceptual representation of individual faces is often explained with reference to a norm-based face space. In such spaces, individuals are encoded as vectors where identity is primarily conveyed by direction and distinctiveness by eccentricity. Here we measured human fMRI responses and psychophysical similarity judgments of individual face exemplars, which were generated as realistic 3D animations using a computer-graphics model. We developed and evaluated multiple neurobiologically plausible computational models, each of which predicts a representational distance matrix and a regional-mean activation profile for 24 face stimuli. In the fusiform face area, a face-space coding model with sigmoidal ramp tuning provided a better account of the data than one based on exemplar tuning. However, an image-processing model with weighted banks of Gabor filters performed similarly. Accounting for the data required the inclusion of a measurement-level population averaging mechanism that approximates how fMRI voxels locally average distinct neuronal tunings. Our study demonstrates the importance of comparing multiple models and of modeling the measurement process in computational neuroimaging. PMID:28746335
Face Coding Is Bilateral in the Female Brain
Proverbio, Alice Mado; Riva, Federica; Martin, Eleonora; Zani, Alberto
2010-01-01
Background: It is currently believed that face processing predominantly activates the right hemisphere in humans, but the available literature is very inconsistent. Methodology/Principal Findings: In this study, ERPs were recorded in 50 right-handed women and men in response to 390 faces (of different age and sex) and 130 technological objects. Results showed no sex difference in the amplitude of N170 to objects; a much larger face-specific response over the right hemisphere in men, and a bilateral response in women; a lack of face-age coding effect over the left hemisphere in men, with no differences in N170 to faces as a function of age; and a significant bilateral face-age coding effect in women. Conclusions/Significance: LORETA reconstruction showed a significant left and right asymmetry in the activation of the fusiform gyrus (BA19) in women and men, respectively. The present data reveal a lesser degree of lateralization of brain functions related to face coding in women than in men. In this light, they may provide an explanation of the inconsistencies in the available literature concerning the asymmetric activity of left and right occipito-temporal cortices devoted to face perception during processing of face identity, structure, familiarity or affective content. PMID:20574528
Garrido, Lucia; Driver, Jon; Dolan, Raymond J.; Duchaine, Bradley C.; Furl, Nicholas
2016-01-01
Face processing is mediated by interactions between functional areas in the occipital and temporal lobe, and the fusiform face area (FFA) and anterior temporal lobe play key roles in the recognition of facial identity. Individuals with developmental prosopagnosia (DP), a lifelong face recognition impairment, have been shown to have structural and functional neuronal alterations in these areas. The present study investigated how face selectivity is generated in participants with normal face processing, and how functional abnormalities associated with DP arise as a function of network connectivity. Using functional magnetic resonance imaging and dynamic causal modeling, we examined effective connectivity in normal participants by assessing network models that include early visual cortex (EVC) and face-selective areas and then investigated the integrity of this connectivity in participants with DP. Results showed that a feedforward architecture from EVC to the occipital face area, EVC to FFA, and EVC to posterior superior temporal sulcus (pSTS) best explained how face selectivity arises in both controls and participants with DP. In this architecture, the DP group showed reduced connection strengths on feedforward connections carrying face information from EVC to FFA and EVC to pSTS. These altered network dynamics in DP contribute to the diminished face selectivity in the posterior occipitotemporal areas affected in DP. These findings suggest a novel view on the relevance of feedforward projection from EVC to posterior occipitotemporal face areas in generating cortical face selectivity and differences in face recognition ability. SIGNIFICANCE STATEMENT Areas of the human brain showing enhanced activation to faces compared to other objects or places have been extensively studied. However, the factors leading to this face selectivity have remained mostly unknown.
We show that effective connectivity from early visual cortex to posterior occipitotemporal face areas gives rise to face selectivity. Furthermore, people with developmental prosopagnosia, a lifelong face recognition impairment, have reduced face selectivity in the posterior occipitotemporal face areas and left anterior temporal lobe. We show that this reduced face selectivity can be predicted by effective connectivity from early visual cortex to posterior occipitotemporal face areas. This study presents the first network-based account of how face selectivity arises in the human brain. PMID:27030766
Herzmann, Grit; Bird, Christopher W.; Freeman, Megan; Curran, Tim
2013-01-01
Oxytocin has been shown to affect human social information processing including recognition memory for faces. Here we investigated the neural processes underlying the effect of oxytocin on memorizing own-race and other-race faces in men and women. In a placebo-controlled, double-blind, between-subject study, participants received either oxytocin or placebo before studying own-race and other-race faces. We recorded event-related potentials (ERPs) during both the study and recognition phase to investigate neural correlates of oxytocin's effect on memory encoding, memory retrieval, and perception. Oxytocin increased the accuracy of familiarity judgments in the recognition test. Neural correlates for this effect were found in ERPs related to memory encoding and retrieval but not perception. In contrast to its facilitating effects on familiarity, oxytocin impaired recollection judgments, but in men only. Oxytocin did not differentially affect own-race and other-race faces. This study shows that oxytocin influences memory, but not perceptual processes, in a face recognition task and is the first to reveal sex differences in the effect of oxytocin on face memory. Contrary to recent findings in oxytocin and moral decision making, oxytocin did not preferentially improve memory for own-race faces. PMID:23648370
ERIC Educational Resources Information Center
Petzold, Knut
2017-01-01
As the experience of studying abroad can signal general and transnational human capital, it is considered to be increasingly important for professional careers, particularly in the context of economies' internationalization. However, studies using graduate surveys face problems of self-selection and studies on employers' opinions face problems of…
Network Configurations in the Human Brain Reflect Choice Bias during Rapid Face Processing.
Tu, Tao; Schneck, Noam; Muraskin, Jordan; Sajda, Paul
2017-12-13
Network interactions are likely to be instrumental in processes underlying rapid perception and cognition. Specifically, high-level and perceptual regions must interact to balance pre-existing models of the environment with new incoming stimuli. Simultaneous electroencephalography (EEG) and fMRI (EEG/fMRI) enables temporal characterization of brain-network interactions combined with improved anatomical localization of regional activity. In this paper, we use simultaneous EEG/fMRI and multivariate dynamical systems (MDS) analysis to characterize network relationships between constituent brain areas that reflect a subject's choice in a face versus nonface categorization task. Our simultaneous EEG and fMRI analysis of 21 human subjects (12 males, 9 females) identifies early perceptual and late frontal subsystems that are selective to the categorical choice of faces versus nonfaces. We analyze the interactions between these subsystems using an MDS in the space of the BOLD signal. Our main findings show that differences between face-choice and house-choice networks are seen in the network interactions between the early and late subsystems, and that the magnitude of the difference in network interaction positively correlates with the behavioral false-positive rate of face choices. We interpret this to reflect the role of saliency and expectations, likely encoded in frontal "late" regions, on perceptual processes occurring in "early" perceptual regions. SIGNIFICANCE STATEMENT Our choices are affected by our biases. In visual perception and cognition, such biases can be commonplace and quite curious - e.g., we see a human face when staring up at a cloud formation or down at a piece of toast at the breakfast table. Here we use multimodal neuroimaging and dynamical systems analysis to measure whole-brain spatiotemporal dynamics while subjects make decisions regarding the type of object they see in rapidly flashed images.
We find that the degree of interaction in these networks accounts for a substantial fraction of our bias to see faces. In general, our findings illustrate how the properties of spatiotemporal networks yield insight into the mechanisms of how we form decisions. Copyright © 2017 the authors.
Education as an intergenerational process of human learning, teaching, and development.
Cole, Michael
2010-11-01
In this article I argue that the future of psychological research on educational processes would benefit from an interdisciplinary approach that enables psychologists to locate their objects of study within the cultural, social, and historical contexts of their research. To make this argument, I begin by examining anthropological accounts of the characteristics of education in small, face-to-face, preindustrial societies. I then turn to a sample of contemporary psychoeducational research that seeks to implement major, qualitative changes in modern educational practices by transforming them to have the properties of education in those self-same face-to-face societies. Next I examine the challenges faced by these modern approaches and briefly describe a multi-institutional, multidisciplinary system of education that responds to these challenges while offering a model for educating psychology students in a multigenerational system of activities with potential widespread benefits. PsycINFO Database Record (c) 2010 APA, all rights reserved.
Van Belle, Goedele; Busigny, Thomas; Lefèvre, Philippe; Joubert, Sven; Felician, Olivier; Gentile, Francesco; Rossion, Bruno
2011-09-01
Gaze-contingency is a method traditionally used to investigate the perceptual span in reading by selectively revealing/masking a portion of the visual field in real time. Introducing this approach in face perception research showed that the performance pattern of a brain-damaged patient with acquired prosopagnosia (PS) in a face matching task was reversed compared to normal observers: the patient showed almost no further decrease of performance when only one facial part (eye, mouth, nose, etc.) was available at a time (foveal window condition, forcing part-based analysis), but a very large impairment when the fixated part was selectively masked (mask condition, promoting holistic perception) (Van Belle, De Graef, Verfaillie, Busigny, & Rossion, 2010a; Van Belle, De Graef, Verfaillie, Rossion, & Lefèvre, 2010b). Here we tested the same manipulation in a recently reported case of pure prosopagnosia (GG) with unilateral right hemisphere damage (Busigny, Joubert, Felician, Ceccaldi, & Rossion, 2010). Contrary to normal observers, GG was also significantly more impaired with a mask than with a window, demonstrating impaired holistic face perception. Together with our previous study, these observations support a generalized account of acquired prosopagnosia as a critical impairment of holistic (individual) face perception, implying that this function is a key element of normal human face recognition. Furthermore, the similar behavioral pattern of the two patients despite different lesion localizations supports a distributed-network view of the neural face processing structures, suggesting that the key function of human face processing, namely holistic perception of individual faces, requires the activity of several brain areas of the right hemisphere and their mutual connectivity. Copyright © 2011 Elsevier Ltd. All rights reserved.
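The window/mask manipulation described above can be sketched in a few lines. This is a schematic reconstruction, not the authors' stimulus code; the circular aperture and mean-gray background are illustrative assumptions:

```python
import numpy as np

def gaze_contingent(image, gaze_xy, radius, mode="window"):
    """Apply a gaze-contingent window or mask to a grayscale image.

    mode="window": only a disc around the fixated point is visible
    (forcing part-based analysis); mode="mask": the fixated disc is
    hidden (promoting reliance on holistic perception).
    """
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    inside = (xx - gaze_xy[0]) ** 2 + (yy - gaze_xy[1]) ** 2 <= radius ** 2
    out = np.full_like(image, image.mean())  # uniform gray background
    keep = inside if mode == "window" else ~inside
    out[keep] = image[keep]
    return out
```

In the actual paradigm this transform would be re-applied on every display frame at the current eye-tracker gaze sample, so the window or mask follows fixation in real time.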
A face-selective ventral occipito-temporal map of the human brain with intracerebral potentials
Jonas, Jacques; Jacques, Corentin; Liu-Shuang, Joan; Brissart, Hélène; Colnat-Coulbois, Sophie; Maillard, Louis; Rossion, Bruno
2016-01-01
Human neuroimaging studies have identified a network of distinct face-selective regions in the ventral occipito-temporal cortex (VOTC), with a right hemispheric dominance. To date, there is no evidence for this hemispheric and regional specialization with direct measures of brain activity. To address this gap in knowledge, we recorded local neurophysiological activity from 1,678 contact electrodes implanted in the VOTC of a large group of epileptic patients (n = 28). They were presented with natural images of objects at a rapid fixed rate (six images per second: 6 Hz), with faces interleaved as every fifth stimulus (i.e., 1.2 Hz). High signal-to-noise ratio face-selective responses were objectively (i.e., exactly at the face stimulation frequency) identified and quantified throughout the whole VOTC. Face-selective responses were widely distributed across the whole VOTC, but also spatially clustered in specific regions. Among these regions, the lateral section of the right middle fusiform gyrus showed the largest face-selective response by far, offering, to our knowledge, the first supporting evidence of two decades of neuroimaging observations with direct neural measures. In addition, three distinct regions with a high proportion of face-selective responses were disclosed in the right ventral anterior temporal lobe, a region that is undersampled in neuroimaging because of magnetic susceptibility artifacts. A high proportion of contacts responding only to faces (i.e., “face-exclusive” responses) were found in these regions, suggesting that they contain populations of neurons involved in dedicated face-processing functions. Overall, these observations provide a comprehensive mapping of visual category selectivity in the whole human VOTC with direct neural measures. PMID:27354526
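The frequency-tagging logic above (a face-selective response identified exactly at the face-stimulation frequency) can be illustrated on a simulated recording. The sampling rate, duration, and amplitudes below are invented for illustration, not taken from the study:

```python
import numpy as np

fs = 512          # sampling rate in Hz (illustrative)
T = 20            # seconds of recording (illustrative)
t = np.arange(0, T, 1 / fs)

# Simulated response: general visual response at the 6 Hz base rate plus a
# smaller face-selective response at 1.2 Hz (every 5th stimulus), plus noise.
rng = np.random.default_rng(0)
signal = (1.0 * np.sin(2 * np.pi * 6.0 * t)
          + 0.5 * np.sin(2 * np.pi * 1.2 * t)
          + 0.2 * rng.standard_normal(t.size))

# Amplitude spectrum; with T = 20 s, both 6 Hz and 1.2 Hz fall on exact bins.
amp = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp_at(f_target):
    """Amplitude at the spectral bin closest to f_target (in Hz)."""
    return amp[np.argmin(np.abs(freqs - f_target))]

# Face-selective response read out at 1.2 Hz, base response at 6 Hz.
face_amp, base_amp = amp_at(1.2), amp_at(6.0)
```

Because the response of interest is confined to known frequency bins, it can be quantified objectively against the noise level estimated from neighboring bins, which is the appeal of the paradigm.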
Sajjacholapunt, Pitch; Ball, Linden J.
2014-01-01
Research suggests that banner advertisements used in online marketing are often overlooked, especially when positioned horizontally on webpages. Such inattention invariably gives rise to an inability to remember advertising brands and messages, undermining the effectiveness of this marketing method. Recent interest has focused on whether human faces within banner advertisements can increase attention to the information they contain, since the gaze cues conveyed by faces can influence where observers look. We report an experiment that investigated the efficacy of faces located in banner advertisements to enhance the attentional processing and memorability of banner contents. We tracked participants' eye movements when they examined webpages containing either bottom-right vertical banners or bottom-center horizontal banners. We also manipulated facial information such that banners either contained no face, a face with mutual gaze or a face with averted gaze. We additionally assessed people's memories for brands and advertising messages. Results indicated that relative to other conditions, the condition involving faces with averted gaze increased attention to the banner overall, as well as to the advertising text and product. Memorability of the brand and advertising message was also enhanced. Conversely, in the condition involving faces with mutual gaze, the focus of attention was localized more on the face region rather than on the text or product, weakening any memory benefits for the brand and advertising message. This detrimental impact of mutual gaze on attention to advertised products was especially marked for vertical banners. These results demonstrate that the inclusion of human faces with averted gaze in banner advertisements provides a promising means for marketers to increase the attention paid to such adverts, thereby enhancing memory for advertising information. PMID:24624104
Sajjacholapunt, Pitch; Ball, Linden J
2014-01-01
Research suggests that banner advertisements used in online marketing are often overlooked, especially when positioned horizontally on webpages. Such inattention invariably gives rise to an inability to remember advertising brands and messages, undermining the effectiveness of this marketing method. Recent interest has focused on whether human faces within banner advertisements can increase attention to the information they contain, since the gaze cues conveyed by faces can influence where observers look. We report an experiment that investigated the efficacy of faces located in banner advertisements to enhance the attentional processing and memorability of banner contents. We tracked participants' eye movements when they examined webpages containing either bottom-right vertical banners or bottom-center horizontal banners. We also manipulated facial information such that banners either contained no face, a face with mutual gaze or a face with averted gaze. We additionally assessed people's memories for brands and advertising messages. Results indicated that relative to other conditions, the condition involving faces with averted gaze increased attention to the banner overall, as well as to the advertising text and product. Memorability of the brand and advertising message was also enhanced. Conversely, in the condition involving faces with mutual gaze, the focus of attention was localized more on the face region rather than on the text or product, weakening any memory benefits for the brand and advertising message. This detrimental impact of mutual gaze on attention to advertised products was especially marked for vertical banners. These results demonstrate that the inclusion of human faces with averted gaze in banner advertisements provides a promising means for marketers to increase the attention paid to such adverts, thereby enhancing memory for advertising information.
Inverting faces elicits sensitivity to race on the N170 component: a cross-cultural study.
Vizioli, Luca; Foreman, Kay; Rousselet, Guillaume A; Caldara, Roberto
2010-01-29
Human beings are natural experts at processing faces, with some notable exceptions. Same-race faces are better recognized than other-race faces: the so-called other-race effect (ORE). Inverting faces impairs their recognition more than inversion impairs recognition of any other visual object: the so-called face inversion effect (FIE). Interestingly, the FIE is stronger for same- compared to other-race faces. At the electrophysiological level, inverted faces elicit a consistently delayed and often larger N170 compared to upright faces. However, whether the N170 component is sensitive to race is still a matter of ongoing debate. Here we investigated the N170 sensitivity to race in the framework of the FIE. We recorded EEG from Western Caucasian and East Asian observers while they viewed Western Caucasian, East Asian and African American faces in upright and inverted orientations. To control for potential confounds in the EEG signal that might be evoked by the intrinsic and salient differences in the low-level properties of faces from different races, we normalized their amplitude spectra, luminance and contrast. No differences on the N170 were observed for upright faces. Critically, inverted same-race faces led to greater recognition impairment and elicited larger N170 amplitudes compared to inverted other-race faces. Our results indicate a finer-grained neural tuning for same-race faces at early stages of processing in both groups of observers.
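The low-level stimulus equating described in this abstract (matching amplitude spectra, luminance and contrast across face images) is a standard preprocessing step in EEG face studies. A minimal NumPy sketch of how it is typically done, not the authors' actual code, might look like this:

```python
import numpy as np

def normalize_luminance_contrast(img, target_mean=0.5, target_std=0.2):
    """Equate mean luminance and RMS contrast across stimuli
    by z-scoring each image and rescaling to shared targets."""
    z = (img - img.mean()) / img.std()
    return z * target_std + target_mean

def equate_amplitude_spectra(images):
    """Give every image the group-average Fourier amplitude spectrum
    while preserving its own phase spectrum (identity information)."""
    spectra = [np.fft.fft2(im) for im in images]
    mean_amp = np.mean([np.abs(s) for s in spectra], axis=0)
    return [np.real(np.fft.ifft2(mean_amp * np.exp(1j * np.angle(s))))
            for s in spectra]
```

After this step, any remaining differences between face categories lie in the phase structure rather than in low-level image statistics.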
Improved memory for reward cues following acute buprenorphine administration in humans.
Syal, Supriya; Ipser, Jonathan; Terburg, David; Solms, Mark; Panksepp, Jaak; Malcolm-Smith, Susan; Bos, Peter A; Montoya, Estrella R; Stein, Dan J; van Honk, Jack
2015-03-01
In rodents, there is abundant evidence for the involvement of the opioid system in the processing of reward cues, but this system has remained understudied in humans. In humans, the happy facial expression is a pivotal reward cue. Happy facial expressions activate the brain's reward system and are disregarded by subjects who score high on depressive mood and low on reward drive. We investigated whether a single 0.2 mg administration of the mixed mu-opioid agonist/kappa-antagonist buprenorphine would influence short-term memory for happy, angry or fearful expressions relative to neutral faces. Healthy human subjects (n = 38) participated in a randomized placebo-controlled within-subject design, and performed an emotional face relocation task after administration of buprenorphine and placebo. We show that, compared to placebo, buprenorphine administration results in a significant improvement of memory for happy faces. Our data demonstrate that acute manipulation of the opioid system by buprenorphine increases short-term memory for social reward cues. Copyright © 2015. Published by Elsevier Ltd.
Monkeys preferentially process body information while viewing affective displays.
Bliss-Moreau, Eliza; Moadab, Gilda; Machado, Christopher J
2017-08-01
Despite evolutionary claims about the function of facial behaviors across phylogeny, those hypotheses are rarely tested in a comparative context, that is, by evaluating how nonhuman animals process such behaviors. Further, while increasing evidence indicates that humans make meaning of faces by integrating contextual information, including that from the body, the extent to which nonhuman animals process contextual information during affective displays is unknown. In the present study, we evaluated the extent to which rhesus macaques (Macaca mulatta) process dynamic affective displays of conspecifics that included both facial and body behaviors. Contrary to hypotheses that they would preferentially attend to faces during affective displays, monkeys looked longest, most frequently, and first at conspecifics' bodies rather than their heads. These findings indicate that macaques, like humans, attend to available contextual information during the processing of affective displays, and that the body may also provide unique information about affective states. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Brain Activity Associated with Emoticons: An fMRI Study
NASA Astrophysics Data System (ADS)
Yuasa, Masahide; Saito, Keiichi; Mukawa, Naoki
In this paper, we describe brain activity associated with emoticons, as measured using fMRI. In communication over a computer network, we use abstract faces such as computer graphics (CG) avatars and emoticons. These faces convey users' emotions and enrich their communications. However, the manner in which these faces influence mental processes is as yet unknown. The human brain may perceive an abstract face in an entirely different manner, depending on its level of reality. We conducted an fMRI experiment in order to investigate the effects of emoticons. The results show that the right inferior frontal gyrus, which is associated with nonverbal communication, is activated by emoticons. Since the emoticons were created to reflect real human facial expressions as accurately as possible, we expected that they would activate the right fusiform gyrus. However, this region was not found to be activated during the experiment. This finding is useful in understanding how abstract faces affect our behaviors and decision-making in communication over a computer network.
Inactivation of Primate Prefrontal Cortex Impairs Auditory and Audiovisual Working Memory.
Plakke, Bethany; Hwang, Jaewon; Romanski, Lizabeth M
2015-07-01
The prefrontal cortex is associated with cognitive functions that include planning, reasoning, decision-making, working memory, and communication. Neurophysiology and neuropsychology studies have established that dorsolateral prefrontal cortex is essential in spatial working memory while the ventral frontal lobe processes language and communication signals. Single-unit recordings in nonhuman primates have shown that ventral prefrontal (VLPFC) neurons integrate face and vocal information and are active during audiovisual working memory. However, whether VLPFC is essential in remembering face and voice information is unknown. We therefore trained nonhuman primates in an audiovisual working memory paradigm using naturalistic face-vocalization movies as memoranda. We inactivated VLPFC with reversible cortical cooling and examined performance when faces, vocalizations, or both faces and vocalizations had to be remembered. We found that VLPFC inactivation impaired subjects' performance in audiovisual and auditory-alone versions of the task. In contrast, VLPFC inactivation did not disrupt visual working memory. Our studies demonstrate the importance of VLPFC in auditory and audiovisual working memory for social stimuli but suggest a different role for VLPFC in unimodal visual processing. The ventral frontal lobe, or inferior frontal gyrus, plays an important role in audiovisual communication in the human brain. Studies with nonhuman primates have found that neurons within ventral prefrontal cortex (VLPFC) encode both faces and vocalizations and that VLPFC is active when animals need to remember these social stimuli. In the present study, we temporarily inactivated VLPFC by cooling the cortex while nonhuman primates performed a working memory task. This impaired the ability of subjects to remember a face and vocalization pair or just the vocalization alone.
Our work highlights the importance of the primate VLPFC in the processing of faces and vocalizations in a manner that is similar to the inferior frontal gyrus in the human brain. Copyright © 2015 the authors.
Huang, Lijie; Song, Yiying; Li, Jingguang; Zhen, Zonglei; Yang, Zetian; Liu, Jia
2014-01-01
In functional magnetic resonance imaging studies, object selectivity is defined as a higher neural response to an object category than other object categories. Importantly, object selectivity is widely considered as a neural signature of a functionally-specialized area in processing its preferred object category in the human brain. However, the behavioral significance of the object selectivity remains unclear. In the present study, we used the individual differences approach to correlate participants' face selectivity in the face-selective regions with their behavioral performance in face recognition measured outside the scanner in a large sample of healthy adults. Face selectivity was defined as the z score of activation with the contrast of faces vs. non-face objects, and the face recognition ability was indexed as the normalized residual of the accuracy in recognizing previously-learned faces after regressing out that for non-face objects in an old/new memory task. We found that the participants with higher face selectivity in the fusiform face area (FFA) and the occipital face area (OFA), but not in the posterior part of the superior temporal sulcus (pSTS), possessed higher face recognition ability. Importantly, the association of face selectivity in the FFA and face recognition ability cannot be accounted for by FFA response to objects or behavioral performance in object recognition, suggesting that the association is domain-specific. Finally, the association is reliable, confirmed by the replication from another independent participant group. In sum, our finding provides empirical evidence on the validity of using object selectivity as a neural signature in defining object-selective regions in the human brain. PMID:25071513
Fusar-Poli, Paolo; Placentino, Anna; Carletti, Francesco; Landi, Paola; Allen, Paul; Surguladze, Simon; Benedetti, Francesco; Abbamonte, Marta; Gasparotti, Roberto; Barale, Francesco; Perez, Jorge; McGuire, Philip; Politi, Pierluigi
2009-01-01
Background Most of our social interactions involve perception of emotional information from the faces of other people. Furthermore, such emotional processes are thought to be aberrant in a range of clinical disorders, including psychosis and depression. However, the exact neurofunctional maps underlying emotional facial processing are not well defined. Methods Two independent researchers conducted separate comprehensive PubMed (1990 to May 2008) searches to find all functional magnetic resonance imaging (fMRI) studies using a variant of the emotional faces paradigm in healthy participants. The search terms were: “fMRI AND happy faces,” “fMRI AND sad faces,” “fMRI AND fearful faces,” “fMRI AND angry faces,” “fMRI AND disgusted faces” and “fMRI AND neutral faces.” We extracted spatial coordinates and inserted them in an electronic database. We performed activation likelihood estimation analysis for voxel-based meta-analyses. Results Of the originally identified studies, 105 met our inclusion criteria. The overall database consisted of 1785 brain coordinates that yielded an overall sample of 1600 healthy participants. Quantitative voxel-based meta-analysis of brain activation provided neurofunctional maps for 1) main effect of human faces; 2) main effect of emotional valence; and 3) modulatory effect of age, sex, explicit versus implicit processing and magnetic field strength. Processing of emotional faces was associated with increased activation in a number of visual, limbic, temporoparietal and prefrontal areas; the putamen; and the cerebellum. Happy, fearful and sad faces specifically activated the amygdala, whereas angry or disgusted faces had no effect on this brain region. Furthermore, amygdala sensitivity was greater for fearful than for happy or sad faces. Insular activation was selectively reported during processing of disgusted and angry faces. However, insular sensitivity was greater for disgusted than for angry faces. 
Conversely, neural response in the visual cortex and cerebellum was observable across all emotional conditions. Limitations Although the activation likelihood estimation approach is currently one of the most powerful and reliable meta-analytical methods in neuroimaging research, it is insensitive to effect sizes. Conclusion Our study has detailed neurofunctional maps to use as normative references in future fMRI studies of emotional facial processing in psychiatric populations. We found selective differences between neural networks underlying the basic emotions in limbic and insular brain regions. PMID:19949718
NASA Astrophysics Data System (ADS)
Uzbaş, Betül; Arslan, Ahmet
2018-04-01
Gender classification is an important step for human-computer interaction and identification, and the human face image is one of the most important sources for determining gender. In the present study, gender is classified automatically from facial images. To classify gender, we propose a combination of features extracted from face, eye and lip regions by using a hybrid method of Local Binary Pattern and Gray-Level Co-Occurrence Matrix. The features are extracted from automatically obtained face, eye and lip regions. All of the extracted features are combined and given as input parameters to classification methods (Support Vector Machine, Artificial Neural Networks, Naive Bayes and k-Nearest Neighbor) for gender classification. The Nottingham Scan face database, which consists of frontal face images of 100 people (50 male and 50 female), is used for this purpose. In the experimental studies, the highest success rate, 98%, was achieved using the Support Vector Machine. The experimental results illustrate the efficacy of our proposed method.
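As an illustration of the two texture descriptors named in this abstract, the following is a minimal NumPy sketch of a basic 8-neighbour LBP histogram and two classic GLCM statistics. The study's actual implementation is not reproduced here, so all function names and parameter choices are illustrative:

```python
import numpy as np

def lbp_histogram(gray):
    """Basic 8-neighbour Local Binary Pattern: each pixel is encoded
    by which neighbours are >= it; returns a normalized 256-bin histogram."""
    h, w = gray.shape
    center = gray[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(center.shape, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code += (neighbour >= center).astype(int) * (1 << bit)
    hist = np.bincount(code.ravel(), minlength=256)
    return hist / hist.sum()

def glcm_features(gray, levels=8):
    """Grey-Level Co-occurrence Matrix for a horizontal offset of one
    pixel; returns two classic Haralick statistics (contrast, energy)."""
    q = np.minimum((gray.astype(float) / (gray.max() + 1e-12)
                    * levels).astype(int), levels - 1)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1
    glcm /= glcm.sum()
    idx = np.arange(levels)
    contrast = ((idx[:, None] - idx[None, :]) ** 2 * glcm).sum()
    energy = (glcm ** 2).sum()
    return np.array([contrast, energy])
```

In a pipeline like the one described, such feature vectors from the face, eye and lip regions would be concatenated and fed to a classifier such as an SVM.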
Horizontal tuning for faces originates in high-level Fusiform Face Area.
Goffaux, Valerie; Duecker, Felix; Hausfeld, Lars; Schiltz, Christine; Goebel, Rainer
2016-01-29
Recent work indicates that the specialization of face visual perception relies on the privileged processing of horizontal angles of facial information. This suggests that stimulus properties assumed to be fully resolved in primary visual cortex (V1; e.g., orientation) in fact determine human vision until high-level stages of processing. To address this hypothesis, the present fMRI study explored the orientation sensitivity of V1 and high-level face-specialized ventral regions such as the Occipital Face Area (OFA) and Fusiform Face Area (FFA) to different angles of face information. Participants viewed face images filtered to retain information at horizontal, vertical or oblique angles. Filtered images were viewed upright, inverted and (phase-)scrambled. FFA responded most strongly to the horizontal range of upright face information; its activation pattern reliably separated horizontal from oblique ranges, but only when faces were upright. Moreover, activation patterns induced in the right FFA and the OFA by upright and inverted faces could only be separated based on horizontal information. This indicates that the specialized processing of upright face information in the OFA and FFA essentially relies on the encoding of horizontal facial cues. This pattern was not passively inherited from V1, which was found to respond less strongly to horizontal than other orientations likely due to adaptive whitening. Moreover, we found that orientation decoding accuracy in V1 was impaired for stimuli containing no meaningful shape. By showing that primary coding in V1 is influenced by high-order stimulus structure and that high-level processing is tuned to selective ranges of primary information, the present work suggests that primary and high-level levels of the visual system interact in order to modulate the processing of certain ranges of primary information depending on their relevance with respect to the stimulus and task at hand. Copyright © 2015 Elsevier Ltd. 
All rights reserved.
Jessen, Sarah; Grossmann, Tobias
2017-01-01
Enhanced attention to fear expressions in adults is primarily driven by information from low as opposed to high spatial frequencies contained in faces. However, little is known about the role of spatial frequency information in emotion processing during infancy. In the present study, we examined the role of low compared to high spatial frequencies in the processing of happy and fearful facial expressions by using filtered face stimuli and measuring event-related brain potentials (ERPs) in 7-month-old infants (N = 26). Our results revealed that infants' brains discriminated between emotional facial expressions containing high but not between expressions containing low spatial frequencies. Specifically, happy faces containing high spatial frequencies elicited a smaller Nc amplitude than fearful faces containing high spatial frequencies and happy and fearful faces containing low spatial frequencies. Our results demonstrate that already in infancy spatial frequency content influences the processing of facial emotions. Furthermore, we observed that fearful facial expressions elicited a comparable Nc response for high and low spatial frequencies, suggesting a robust detection of fearful faces irrespective of spatial frequency content, whereas the detection of happy facial expressions was contingent upon frequency content. In summary, these data provide new insights into the neural processing of facial emotions in early development by highlighting the differential role played by spatial frequencies in the detection of fear and happiness.
Fishman, Inna; Ng, Rowena; Bellugi, Ursula
2012-01-01
Williams syndrome (WS) is a genetic condition with a distinctive social phenotype characterized by excessive sociability, accompanied by a relative proficiency in face recognition, despite severe deficits in the visuospatial domain of cognition. This consistent phenotypic characteristic and the relative homogeneity of the WS genotype make WS a compelling human model for examining genotype-phenotype relations, especially with respect to social behavior. Following up on a recent report suggesting that individuals with WS do not show race bias and racial stereotyping, this study was designed to investigate the neural correlates of the perception of faces from different races, in individuals with WS as compared to typically developing (TD) controls. Caucasian WS and TD participants performed a gender identification task with own-race (White) and other-race (Black) faces while event-related potentials (ERPs) were recorded. In line with previous studies with TD participants, other-race faces elicited larger-amplitude ERPs within the first 200 ms following the face onset, in WS and TD participants alike. These results suggest that, just like their TD counterparts, individuals with WS differentially processed faces of own- vs. other-race, at relatively early stages of processing, starting as early as 115 ms after the face onset. Overall, these results indicate that neural processing of faces in individuals with WS is moderated by race at early perceptual stages, calling for a reconsideration of the previous claim that they are uniquely insensitive to race. PMID:22022973
Social anhedonia is associated with neural abnormalities during face emotion processing.
Germine, Laura T; Garrido, Lucia; Bruce, Lori; Hooker, Christine
2011-10-01
Human beings are social organisms with an intrinsic desire to seek and participate in social interactions. Social anhedonia is a personality trait characterized by a reduced desire for social affiliation and reduced pleasure derived from interpersonal interactions. Abnormally high levels of social anhedonia prospectively predict the development of schizophrenia and contribute to poorer outcomes for schizophrenia patients. Despite the strong association between social anhedonia and schizophrenia, the neural mechanisms that underlie individual differences in social anhedonia have not been studied and are thus poorly understood. Deficits in face emotion recognition are related to poorer social outcomes in schizophrenia, and it has been suggested that face emotion recognition deficits may be a behavioral marker for schizophrenia liability. In the current study, we used functional magnetic resonance imaging (fMRI) to see whether there are differences in the brain networks underlying basic face emotion processing in a community sample of individuals low vs. high in social anhedonia. We isolated the neural mechanisms related to face emotion processing by comparing face emotion discrimination with four other baseline conditions (identity discrimination of emotional faces, identity discrimination of neutral faces, object discrimination, and pattern discrimination). Results showed a group (high/low social anhedonia) × condition (emotion discrimination/control condition) interaction in the anterior portion of the rostral medial prefrontal cortex, right superior temporal gyrus, and left somatosensory cortex. As predicted, high (relative to low) social anhedonia participants showed less neural activity in face emotion processing regions during emotion discrimination as compared to each control condition. 
The findings suggest that social anhedonia is associated with abnormalities in networks responsible for basic processes associated with social cognition, and provide a starting point for understanding the neural basis of social motivation and our drive to seek social affiliation. Copyright © 2011 Elsevier Inc. All rights reserved.
Why are mixed-race people perceived as more attractive?
Lewis, Michael B
2010-01-01
Previous small-scale studies have suggested that people of mixed race are perceived as being more attractive than non-mixed-race people. Here, it is suggested that the reason for this is the genetic process of heterosis or hybrid vigour (i.e., cross-bred offspring have greater genetic fitness than pure-bred offspring). A random sample of 1205 black, white, and mixed-race faces was collected. These faces were then rated for their perceived attractiveness. There was a small but highly significant effect, with mixed-race faces, on average, being perceived as more attractive. This result is seen as a perceptual demonstration of heterosis in humans, a biological process that may have implications far beyond attractiveness.
A survey of the dummy face and human face stimuli used in BCI paradigm.
Chen, Long; Jin, Jing; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej
2015-01-15
It has been shown that human face stimuli are superior to flash-only stimuli in BCI systems. However, human face stimuli may raise copyright infringement problems and are hard to edit to the requirements of a BCI study. Recently, it was reported that facial expression changes could be produced by changing a single curve in a dummy face, which achieved good performance when applied to visual P300 BCI systems. In this paper, four different paradigms were compared: dummy face pattern, human face pattern, inverted dummy face pattern and inverted human face pattern, to evaluate the performance of dummy face stimuli relative to human face stimuli. The key question determining the value of dummy faces in BCI systems was whether dummy face stimuli could achieve performance as good as human face stimuli. Online and offline results for the four paradigms were obtained and comparatively analyzed. They showed no significant difference between dummy faces and human faces in ERPs, classification accuracy or information transfer rate when applied in BCI systems. Dummy face stimuli evoked large ERPs and achieved classification accuracy and information transfer rates as high as human face stimuli. Since dummy faces are easy to edit and free of copyright infringement problems, they are a good choice for optimizing the stimuli of BCI systems. Copyright © 2014 Elsevier B.V. All rights reserved.
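The information transfer rate compared in this abstract is conventionally computed with Wolpaw's formula for BCI systems, which converts classification accuracy over N equiprobable targets into bits per trial. The helper below is a generic sketch, not the authors' code:

```python
import math

def wolpaw_itr(n_classes, accuracy, trial_seconds):
    """Wolpaw information transfer rate in bits per minute for a BCI
    with `n_classes` equiprobable targets and the given accuracy."""
    n, p = n_classes, accuracy
    if p >= 1.0:
        bits = math.log2(n)  # perfect classification: full log2(N) bits/trial
    elif p <= 0.0:
        bits = 0.0
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return max(bits, 0.0) * 60.0 / trial_seconds
```

For example, a 36-target P300 speller at 100% accuracy with one selection every 6 seconds yields 10 * log2(36), roughly 51.7 bits/min, while chance-level accuracy yields zero.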
Observed touch on a non-human face is not remapped onto the human observer's own face.
Beck, Brianna; Bertini, Caterina; Scarpazza, Cristina; Làdavas, Elisabetta
2013-01-01
Visual remapping of touch (VRT) is a phenomenon in which seeing a human face being touched enhances detection of tactile stimuli on the observer's own face, especially when the observed face expresses fear. This study tested whether VRT would occur when seeing touch on monkey faces and whether it would be similarly modulated by facial expressions. Human participants detected near-threshold tactile stimulation on their own cheeks while watching fearful, happy, and neutral human or monkey faces being concurrently touched or merely approached by fingers. We predicted minimal VRT for neutral and happy monkey faces but greater VRT for fearful monkey faces. The results with human faces replicated previous findings, demonstrating stronger VRT for fearful expressions than for happy or neutral expressions. However, there was no VRT (i.e. no difference between accuracy in touch and no-touch trials) for any of the monkey faces, regardless of facial expression, suggesting that touch on a non-human face is not remapped onto the somatosensory system of the human observer. PMID:24250781
Khadem, Ali; Hossein-Zadeh, Gholam-Ali; Khorrami, Anahita
2016-03-01
The majority of previous functional/effective connectivity studies of autistic patients converge on the underconnectivity theory of ASD: long-range underconnectivity and sometimes short-range overconnectivity. However, to the best of our knowledge, the total (linear and nonlinear) predictive information transfers (PITs) of autistic patients have not yet been investigated. Also, EEG data have rarely been used for exploring information processing deficits in autistic subjects. This study is aimed at comparing the total (linear and nonlinear) PITs of autistic and typically developing healthy youths during human face processing by using EEG data. The ERPs of 12 autistic youths and 19 age-matched healthy control (HC) subjects were recorded while they were watching upright and inverted human face images. The PITs among EEG channels were quantified using two measures separately: transfer entropy with self-prediction optimality (TESPO), and modified transfer entropy with self-prediction optimality (MTESPO). Afterwards, directed differential connectivity graphs (dDCGs) were constructed to characterize the significant changes in the estimated PITs of autistic subjects compared with HC ones. Using both TESPO and MTESPO, a long-range reduction of PITs in the ASD group during face processing was revealed (particularly from frontal channels to right temporal channels). The orientation of the face images (upright or upside down) did not appear to significantly modulate the binary pattern of the PIT-based dDCGs. Moreover, compared with TESPO, the results of MTESPO were more compatible with the underconnectivity theory of ASD, in the sense that MTESPO showed no long-range increase in PIT. To the best of our knowledge, this is also the first time that a version of MTE has been applied to patient data (here, ASD), and its first use for EEG data analysis.
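The transfer-entropy idea underlying TESPO and MTESPO can be illustrated with a simple histogram (plug-in) estimator: how much the past of one signal helps predict the next sample of another beyond that signal's own past. The paper's estimators with self-prediction optimality are more elaborate, so the following sketch is only conceptual and all names are my own:

```python
import numpy as np

def transfer_entropy(x, y, bins=4):
    """Histogram-based plug-in estimate of transfer entropy TE(X -> Y)
    at lag 1, in bits: information x(t) carries about y(t+1) beyond y(t)."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    y_next, y_now, x_now = yd[1:], yd[:-1], xd[:-1]
    te = 0.0
    for a in range(bins):
        for b in range(bins):
            for c in range(bins):
                # joint probability p(y_next=a, y_now=b, x_now=c)
                p_abc = np.mean((y_next == a) & (y_now == b) & (x_now == c))
                if p_abc == 0.0:
                    continue
                p_bc = np.mean((y_now == b) & (x_now == c))
                p_ab = np.mean((y_next == a) & (y_now == b))
                p_b = np.mean(y_now == b)
                te += p_abc * np.log2(p_abc * p_b / (p_bc * p_ab))
    return te
```

In a directed connectivity analysis like the one described, such pairwise estimates between channels would be thresholded against surrogate data to build the directed graphs.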
The fractal based analysis of human face and DNA variations during aging.
Namazi, Hamidreza; Akrami, Amin; Hussaini, Jamal; Silva, Osmar N; Wong, Albert; Kulish, Vladimir V
2017-01-16
Human DNA is the main unit that shapes human characteristics and features such as behavior. Thus, it is expected that changes in DNA (mutations) influence human characteristics and features. The face is one such feature: it is unique to each individual and also genetically determined. In this paper, for the first time, we analyze variations of the human DNA and face simultaneously. We do this by analyzing the fractal dimension of the DNA walk and of the face during human aging. The results of this study show that human DNA and the face become more complex with aging. These complexities are mapped onto the fractal exponents of the DNA walk and the human face. The method discussed in this paper can be further developed to investigate the direct influence of DNA mutation on face variations during aging, and accordingly to build a model relating the fractality of the human face to the complexity of the DNA walk.
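The two ingredients of this analysis, a DNA walk and a fractal dimension estimate, can be sketched as follows. A DNA walk is conventionally built by stepping +1 for purines and -1 for pyrimidines; Higuchi's method is one common fractal dimension estimator for such 1-D curves and may differ from the authors' exact choice:

```python
import numpy as np

def dna_walk(seq):
    """Cumulative walk over a DNA sequence: purine (A/G) -> +1,
    pyrimidine (C/T) -> -1."""
    steps = [1 if base in "AG" else -1 for base in seq.upper()]
    return np.cumsum(steps)

def higuchi_fd(walk, kmax=8):
    """Higuchi's estimate of the fractal dimension of a 1-D curve:
    slope of log(curve length) against log(1/k) over scales k."""
    walk = np.asarray(walk, dtype=float)
    n = len(walk)
    lengths = []
    for k in range(1, kmax + 1):
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(walk[idx])).sum()
            # normalized curve length at scale k, starting offset m
            lk.append(dist * (n - 1) / ((len(idx) - 1) * k * k))
        lengths.append(np.mean(lk))
    ks = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)
    return slope
```

A straight line gives a dimension near 1, while an uncorrelated random walk gives a value near 1.5; increasing complexity with age would show up as a rising exponent.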
Burt, Adelaide; Hugrass, Laila; Frith-Belvedere, Tash; Crewther, David
2017-01-01
Low spatial frequency (LSF) visual information is extracted rapidly from fearful faces, suggesting magnocellular involvement. Autistic phenotypes demonstrate altered magnocellular processing, which we propose contributes to a decreased P100 evoked response to LSF fearful faces. Here, we investigated whether rapid processing of fearful facial expressions differs for groups of neurotypical adults with low and high scores on the Autistic Spectrum Quotient (AQ). We created hybrid face stimuli with low and high spatial frequency filtered, fearful, and neutral expressions. Fearful faces produced higher amplitude P100 responses than neutral faces in the low AQ group, particularly when the hybrid face contained a LSF fearful expression. By contrast, there was no effect of fearful expression on P100 amplitude in the high AQ group. Consistent with evidence linking magnocellular differences with autistic personality traits, our non-linear VEP results showed that the high AQ group had higher amplitude K2.1 responses than the low AQ group, which is indicative of less efficient magnocellular recovery. Our results suggest that magnocellular LSF processing of a human face may be the initial visual cue used to rapidly and automatically detect fear, but that this cue functions atypically in those with high autistic tendency.
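Hybrid stimuli of the kind described, combining one expression's low-spatial-frequency content with another's high-spatial-frequency content, are typically built with complementary Fourier-domain filters. A minimal sketch under that assumption (the cutoff value and function names are illustrative):

```python
import numpy as np

def sf_filter(img, cutoff, band="low"):
    """Keep only spatial frequencies below (or above) `cutoff`,
    expressed in cycles per image, via a hard Fourier-domain mask."""
    h, w = img.shape
    fy = np.fft.fftfreq(h, d=1.0 / h)[:, None]  # cycles per image
    fx = np.fft.fftfreq(w, d=1.0 / w)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    mask = radius <= cutoff if band == "low" else radius > cutoff
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

def hybrid_face(lsf_face, hsf_face, cutoff=8):
    """Combine the LSF content of one image with the HSF content of
    another (e.g., fearful LSF + neutral HSF)."""
    return (sf_filter(lsf_face, cutoff, "low")
            + sf_filter(hsf_face, cutoff, "high"))
```

Because the two masks are complementary, blending an image with itself reconstructs the original, which is a handy sanity check when preparing such stimuli.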
Emotional content modulates response inhibition and perceptual processing.
Yang, Suyong; Luo, Wenbo; Zhu, Xiangru; Broster, Lucas S; Chen, Taolin; Li, Jinzhen; Luo, Yuejia
2014-11-01
In this study, event-related potentials were used to investigate the effect of emotion on response inhibition. Participants performed an emotional go/no-go task that required responses to human faces associated with a "go" valence (i.e., emotional, neutral) and response inhibition to human faces associated with a "no-go" valence. Emotional content impaired response inhibition, as evidenced by decreased response accuracy and N2 amplitudes in no-go trials. More importantly, emotional expressions elicited larger N170 amplitudes than neutral expressions, and this effect was larger in no-go than in go trials, indicating that the perceptual processing of emotional expression had priority in inhibitory trials. In no-go trials, correlation analysis showed that increased N170 amplitudes were associated with decreased N2 amplitudes. Taken together, our findings suggest that emotional content impairs response inhibition due to the prioritization of emotional content processing. Copyright © 2014 Society for Psychophysiological Research.
Unaware person recognition from the body when face identification fails.
Rice, Allyson; Phillips, P Jonathon; Natu, Vaidehi; An, Xiaobo; O'Toole, Alice J
2013-11-01
How does one recognize a person when face identification fails? Here, we show that people rely on the body but are unaware of doing so. State-of-the-art face-recognition algorithms were used to select images of people with almost no useful identity information in the face. Recognition of the face alone in these cases was near chance level, but recognition of the person was accurate. Accuracy in identifying the person without the face was identical to that in identifying the whole person. Paradoxically, people reported relying heavily on facial features over noninternal face and body features in making their identity decisions. Eye movements indicated otherwise, with gaze duration and fixations shifting adaptively toward the body and away from the face when the body was a better indicator of identity than the face. This shift occurred with no cost to accuracy or response time. Human identity processing may be partially inaccessible to conscious awareness.
Somppi, Sanni; Törnqvist, Heini; Topál, József; Koskela, Aija; Hänninen, Laura; Krause, Christina M.; Vainio, Outi
2017-01-01
The neuropeptide oxytocin plays a critical role in social behavior and emotion regulation in mammals. The aim of this study was to explore how nasal oxytocin administration affects gazing behavior during emotional perception in domestic dogs. Looking patterns of dogs, as a measure of voluntary attention, were recorded during the viewing of human facial expression photographs. The pupil diameters of dogs were also measured as a physiological index of emotional arousal. In a placebo-controlled within-subjects experimental design, 43 dogs, after having received either oxytocin or placebo (saline) nasal spray treatment, were presented with pictures of unfamiliar male human faces displaying either a happy or an angry expression. We found that, depending on the facial expression, the dogs’ gaze patterns were affected selectively by oxytocin treatment. After receiving oxytocin, dogs fixated less often on the eye regions of angry faces and revisited (glanced back at) the eye regions of smiling (happy) faces more often than after the placebo treatment. Furthermore, following the oxytocin treatment, dogs fixated on and revisited the eyes of happy faces significantly more often than the eyes of angry faces. The analysis of dogs’ pupil diameters during viewing of human facial expressions indicated that oxytocin may also have a modulatory effect on dogs’ emotional arousal. While subjects’ pupil sizes were significantly larger when viewing angry faces than happy faces in the control (placebo treatment) condition, oxytocin treatment not only eliminated this effect but caused an opposite pupil response. Overall, these findings suggest that nasal oxytocin administration selectively changes the allocation of attention and emotional arousal in domestic dogs. Oxytocin has the potential to decrease vigilance toward threatening social stimuli and to increase the salience of positive social stimuli, thus making the eye gaze of friendly human faces more salient for dogs.
Our study provides further support for the role of the oxytocinergic system in the social perception abilities of domestic dogs. We propose that oxytocin modulates fundamental emotional processing in dogs through a mechanism that may facilitate communication between humans and dogs. PMID:29089919
Face processing in autism spectrum disorders: from brain regions to brain networks
Nomi, Jason S.; Uddin, Lucina Q.
2015-01-01
Autism spectrum disorder (ASD) is characterized by reduced attention to social stimuli including the human face. This hypo-responsiveness to stimuli that are engaging to typically developing individuals may result from dysfunctioning motivation, reward, and attention systems in the brain. Here we review an emerging neuroimaging literature that emphasizes a shift from focusing on hypo-activation of isolated brain regions such as the fusiform gyrus, amygdala, and superior temporal sulcus in ASD to a more holistic approach to understanding face perception as a process supported by distributed cortical and subcortical brain networks. We summarize evidence for atypical activation patterns within brain networks that may contribute to social deficits characteristic of the disorder. We conclude by pointing to gaps in the literature and future directions that will continue to shed light on aspects of face processing in autism that are still under-examined. In particular, we highlight the need for more developmental studies and studies examining ecologically valid and naturalistic social stimuli. PMID:25829246
Alpha-band rhythm modulation under the condition of subliminal face presentation: MEG study.
Sakuraba, Satoshi; Kobayashi, Hana; Sakai, Shinya; Yokosawa, Koichi
2013-01-01
The human brain has two streams for processing visual information: a dorsal stream and a ventral stream. The negative potential N170, or its magnetic counterpart M170, is known as the face-specific signal originating from the ventral stream. It is possible to present a visual image unconsciously by using continuous flash suppression (CFS), a visual masking technique based on binocular rivalry. In this work, magnetoencephalograms were recorded during presentation of three types of invisible images: face images, which are processed by the ventral stream; tool images, which could be processed by the dorsal stream; and a blank image. Alpha-band activities detected by sensors that are sensitive to M170 were compared. The alpha-band rhythm was suppressed more during presentation of face images than during presentation of the blank image (p=.028). The suppression remained for about 1 s after the presentations ended. However, no significant difference was observed between tool and other images. These results suggest that the alpha-band rhythm can also be modulated by unconscious visual images.
Liew, Sook-Lei; Ma, Yina; Han, Shihui; Aziz-Zadeh, Lisa
2011-02-16
Human adults typically respond faster to their own face than to the faces of others. However, in Chinese participants, this self-face advantage is lost in the presence of one's supervisor, and they respond faster to their supervisor's face than to their own. While this "boss effect" suggests a strong modulation of self-processing in the presence of influential social superiors, the current study examined whether this effect was true across cultures. Given the wealth of literature on cultural differences between collectivist, interdependent versus individualistic, independent self-construals, we hypothesized that the boss effect might be weaker in independent than interdependent cultures. Twenty European American college students were asked to identify orientations of their own face or their supervisors' face. We found that European Americans, unlike Chinese participants, did not show a "boss effect" and maintained the self-face advantage even in the presence of their supervisor's face. Interestingly, however, their self-face advantage decreased as their ratings of their boss's perceived social status increased, suggesting that self-processing in Americans is influenced more by one's social status than by one's hierarchical position as a social superior. In addition, when their boss's face was presented with a labmate's face, American participants responded faster to the boss's face, indicating that the boss may represent general social dominance rather than a direct negative threat to oneself, in more independent cultures. Altogether, these results demonstrate a strong cultural modulation of self-processing in social contexts and suggest that the very concept of social positions, such as a boss, may hold markedly different meanings to the self across Western and East Asian cultures.
Putting the face in context: Body expressions impact facial emotion processing in human infants.
Rajhans, Purva; Jessen, Sarah; Missana, Manuela; Grossmann, Tobias
2016-06-01
Body expressions exert strong contextual effects on facial emotion perception in adults. Specifically, conflicting body cues hamper the recognition of emotion from faces, as evident on both the behavioral and neural level. We examined the developmental origins of the neural processes involved in emotion perception across body and face in 8-month-old infants by measuring event-related brain potentials (ERPs). We primed infants with body postures (fearful, happy) that were followed by either congruent or incongruent facial expressions. Our results revealed that body expressions impact facial emotion processing and that incongruent body cues impair the neural discrimination of emotional facial expressions. Priming effects were associated with attentional and recognition memory processes, as reflected in a modulation of the Nc and Pc evoked at anterior electrodes. These findings demonstrate that 8-month-old infants possess neural mechanisms that allow for the integration of emotion across body and face, providing evidence for the early developmental emergence of context-sensitive facial emotion perception.
Motor Simulation during Action Word Processing in Neurosurgical Patients
ERIC Educational Resources Information Center
Tomasino, Barbara; Ceschia, Martina; Fabbro, Franco; Skrap, Miran
2012-01-01
The role that human motor areas play in linguistic processing is the subject of a stimulating debate. Data from nine neurosurgical patients with selective lesions of the precentral and postcentral sulcus could provide a direct answer as to whether motor area activation is necessary for action word processing. Action-related verbs (face-, hand-,…
Three Studies on Configural Face Processing by Chimpanzees
ERIC Educational Resources Information Center
Parr, Lisa A.; Heintz, Matthew; Akamagwuna, Unoma
2006-01-01
Previous studies have demonstrated the sensitivity of chimpanzees to facial configurations. Three studies further these findings by showing this sensitivity to be specific to second-order relational properties. In humans, this type of configural processing requires prolonged experience and enables subordinate-level discriminations of many…
Peykarjou, Stefanie; Hoehl, Stefanie; Pauen, Sabina; Rossion, Bruno
2017-10-02
This study investigates categorization of human and ape faces in 9-month-olds using a Fast Periodic Visual Stimulation (FPVS) paradigm while measuring EEG. Categorization responses are elicited only if infants discriminate between different categories and generalize across exemplars within each category. In study 1, human or ape faces were presented as standard and deviant stimuli in upright and inverted trials. Upright ape faces presented among humans elicited strong categorization responses, whereas responses for upright human faces and for inverted ape faces were smaller. Deviant inverted human faces did not elicit categorization. Data were best explained by a model with main effects of species and orientation. However, variance of low-level image characteristics was higher for the ape than the human category. Variance was matched to replicate this finding in an independent sample (study 2). Both human and ape faces elicited categorization in upright and inverted conditions, but upright ape faces elicited the strongest responses. Again, data were best explained by a model of two main effects. These experiments demonstrate that 9-month-olds rapidly categorize faces, and unfamiliar faces presented among human faces elicit increased categorization responses. This likely reflects habituation for the familiar standard category, and stronger release for the unfamiliar category deviants.
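In FPVS, a categorization response appears as a peak in the EEG amplitude spectrum at the frequency of the deviant (oddball) stimuli. The sketch below illustrates the analysis idea on a synthetic signal; the 6 Hz base rate and 1.2 Hz oddball rate are typical FPVS values assumed for illustration and are not stated in the abstract.

```python
import numpy as np

fs = 256                         # sampling rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)     # 20 s of recording -> 0.05 Hz resolution
base_hz, oddball_hz = 6.0, 1.2   # typical FPVS rates (assumed)

# Synthetic EEG: a response at the base stimulation rate plus a smaller
# categorization response at the oddball (deviant-category) rate.
eeg = np.sin(2 * np.pi * base_hz * t) + 0.4 * np.sin(2 * np.pi * oddball_hz * t)

spectrum = np.abs(np.fft.rfft(eeg))
freqs = np.fft.rfftfreq(eeg.size, 1 / fs)

# Quantify the oddball response as signal-to-noise ratio: amplitude at the
# oddball bin divided by the mean amplitude of surrounding bins.
oddball_bin = int(np.argmin(np.abs(freqs - oddball_hz)))
neighbours = np.r_[spectrum[oddball_bin - 5:oddball_bin - 1],
                   spectrum[oddball_bin + 2:oddball_bin + 6]]
snr = spectrum[oddball_bin] / (neighbours.mean() + 1e-12)
```

The recording length is chosen so that both rates fall on exact frequency bins, which keeps spectral leakage from smearing the oddball peak into its neighbours.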
Topographic brain mapping of emotion-related hemisphere asymmetries.
Roschmann, R; Wittling, W
1992-03-01
The study used topographic brain mapping of visual evoked potentials to investigate emotion-related hemisphere asymmetries. The stimulus material consisted of color photographs of human faces, grouped into two emotion-related categories: normal faces (neutral stimuli) and faces deformed by dermatological diseases (emotional stimuli). The pictures were presented tachistoscopically to 20 adult right-handed subjects. Brain activity was recorded by 30 EEG electrodes with linked ears as reference. The waveforms were averaged separately with respect to each of the two stimulus conditions. Statistical analysis by means of significance probability mapping revealed significant differences between stimulus conditions for two periods of time, indicating right hemisphere superiority in emotion-related processing. The results are discussed in terms of a 2-stage-model of emotional processing in the cerebral hemispheres.
Implicit Processing of the Eyes and Mouth: Evidence from Human Electrophysiology.
Pesciarelli, Francesca; Leo, Irene; Sarlo, Michela
2016-01-01
The current study examined the time course of implicit processing of distinct facial features and the associated event-related potential (ERP) components. To this end, we used a masked priming paradigm to investigate implicit processing of the eyes and mouth in upright and inverted faces, using a prime duration of 33 ms. Two types of prime-target pairs were used: 1. congruent (e.g., open eyes only in both prime and target or open mouth only in both prime and target); 2. incongruent (e.g., open mouth only in prime and open eyes only in target or open eyes only in prime and open mouth only in target). The identity of the faces changed between prime and target. Participants pressed a button when the target face had the eyes open and another button when the target face had the mouth open. The behavioral results showed faster RTs for the eyes in upright faces than for the eyes in inverted faces and for the mouth in upright and inverted faces. Moreover, they revealed a congruent priming effect for the mouth in upright faces. The ERP findings showed a face orientation effect across all ERP components studied (P1, N1, N170, P2, N2, P3) starting at about 80 ms, and a congruency/priming effect on late components (P2, N2, P3), starting at about 150 ms. Crucially, the results showed that the orientation effect was driven by the eye region (N170, P2) and that the congruency effect started earlier (P2) for the eyes than for the mouth (N2). These findings mark the time course of the processing of internal facial features and provide further evidence that the eyes are automatically processed and that they are very salient facial features that strongly affect the amplitude, latency, and distribution of neural responses to faces.
Bogale, Bezawork Afework; Aoyama, Masato; Sugita, Shoei
2011-01-01
We trained jungle crows to discriminate among photographs of human faces according to their sex in a simultaneous two-alternative task to study their categorical learning ability. Once the crows reached a discrimination criterion (greater than or equal to 80% correct choices in two consecutive sessions; binomial probability test, p<.05), they next received generalization and transfer tests (i.e., greyscale, contour, and 'full' occlusion) in Experiment 1, followed by a 'partial' occlusion test in Experiment 2 and a random stimuli pair test in Experiment 3. Jungle crows learned the discrimination task in a few trials and successfully generalized to novel stimulus sets. However, all crows failed the greyscale test and half of them failed the contour test. Neither occlusion of internal features of the face nor random pairing of exemplars affected the discrimination performance of most, if not all, crows. We suggest that jungle crows categorize human face photographs based on perceptual similarities, as other non-human animals do, and colour appears to be the most salient feature controlling discriminative behaviour. However, the variability in the use of facial contours among individuals suggests the exploitation of multiple features and individual differences in visual information processing among jungle crows.
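The criterion above (at least 80% correct against a two-alternative chance level of 50%, binomial probability test, p<.05) can be checked with an exact one-tailed binomial calculation. The sketch below uses only the standard library; the 40-trial session length is a hypothetical value for illustration, as the abstract does not report trials per session.

```python
from math import comb

def binomial_p(n_correct, n_trials, p_chance=0.5):
    """Exact one-tailed probability of observing at least n_correct
    successes in n_trials by chance alone."""
    return sum(comb(n_trials, k) * p_chance ** k * (1 - p_chance) ** (n_trials - k)
               for k in range(n_correct, n_trials + 1))

# 80% correct in a hypothetical 40-trial session against 50% chance:
p = binomial_p(32, 40)
```

At 32/40 correct the chance probability is far below .05, whereas performance at exactly chance level (20/40) yields a probability near one half, which is why the 80% criterion is a safe threshold for genuine discrimination.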
Attention to emotion modulates fMRI activity in human right superior temporal sulcus.
Narumoto, J; Okada, T; Sadato, N; Fukui, K; Yonekura, Y
2001-10-01
A parallel neural network has been proposed for processing the various types of information conveyed by faces, including emotion. Using functional magnetic resonance imaging (fMRI), we tested the effect of explicit attention to the emotional expression of faces on the neuronal activity of face-responsive regions. A delayed match-to-sample procedure was adopted. Subjects were required to match visually presented pictures with regard to the contour of the face pictures, facial identity, and emotional expressions by valence (happy and fearful expressions) and arousal (fearful and sad expressions). Contour matching of non-face scrambled pictures was used as a control condition. The face-responsive regions that responded more to faces than to non-face stimuli were the bilateral lateral fusiform gyrus (LFG), the right superior temporal sulcus (STS), and the bilateral intraparietal sulcus (IPS). In these regions, general attention to the face enhanced the activities of the bilateral LFG, the right STS, and the left IPS compared with attention to the contour of the facial image. Selective attention to facial emotion specifically enhanced the activity of the right STS compared with attention to the face per se. The results suggest that the right STS region plays a special role in facial emotion recognition within distributed face-processing systems. This finding may support the notion that the STS is involved in social perception.
Role of temporal processing stages by inferior temporal neurons in facial recognition.
Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Kawano, Kenji
2011-01-01
In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of face recognition.
Xu, Jing; Wong, Kevin; Jian, Yifan; Sarunic, Marinko V
2014-02-01
In this report, we describe a graphics processing unit (GPU)-accelerated processing platform for real-time acquisition and display of flow contrast images with Fourier domain optical coherence tomography (FDOCT) in mouse and human eyes in vivo. Motion contrast from blood flow is processed using the speckle variance OCT (svOCT) technique, which relies on the acquisition of multiple B-scan frames at the same location and tracking the change of the speckle pattern. Real-time mouse and human retinal imaging using two different custom-built OCT systems with processing and display performed on GPU are presented with an in-depth analysis of performance metrics. The display output included structural OCT data, en face projections of the intensity data, and the svOCT en face projections of retinal microvasculature; these results compare projections with and without speckle variance in the different retinal layers to reveal significant contrast improvements. As a demonstration, videos of real-time svOCT for in vivo human and mouse retinal imaging are included in our results. The capability of performing real-time svOCT imaging of the retinal vasculature may be a useful tool in a clinical environment for monitoring disease-related pathological changes in the microcirculation such as diabetic retinopathy.
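The speckle variance (svOCT) contrast described above reduces to a per-pixel variance across the repeated B-scans acquired at the same location: static tissue keeps a stable speckle pattern (low variance), while flowing blood decorrelates the speckle frame to frame (high variance). A minimal CPU sketch of that core computation follows; it is an illustration of the technique, not the authors' GPU implementation.

```python
import numpy as np

def speckle_variance(bscans):
    """Per-pixel intensity variance across N repeated B-scans acquired
    at the same location. Expects an array of shape (N, depth, width)
    and returns a (depth, width) flow-contrast map."""
    bscans = np.asarray(bscans, dtype=float)
    return bscans.var(axis=0)
```

An en face projection like those shown in the report can then be obtained by summing (or maximum-projecting) the variance map over the depth range of a retinal layer.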
The role of the hippocampus in recognition memory.
Bird, Chris M
2017-08-01
Many theories of declarative memory propose that it is supported by partially separable processes underpinned by different brain structures. The hippocampus plays a critical role in binding item and contextual information together and in processing the relationships between individual items. By contrast, the processing of individual items and their later recognition can be supported by extrahippocampal regions of the medial temporal lobes (MTL), particularly when recognition is based on feelings of familiarity without the retrieval of any associated information. These theories are domain-general in that "items" might be words, faces, objects, scenes, etc. However, there is mixed evidence that item recognition does not require the hippocampus, or that familiarity-based recognition can be supported by extrahippocampal regions. By contrast, there is compelling evidence that in humans, hippocampal damage does not affect recognition memory for unfamiliar faces, whilst recognition memory for several other stimulus classes is impaired. I propose that regions outside of the hippocampus can support recognition of unfamiliar faces because they are perceived as discrete items and have no prior conceptual associations. Conversely, extrahippocampal processes are inadequate for recognition of items which (a) have been previously experienced, (b) are conceptually meaningful, or (c) are perceived as being comprised of individual elements. This account reconciles findings from primate and human studies of recognition memory. Furthermore, it suggests that while the hippocampus is critical for binding and relational processing, these processes are required for item recognition memory in most situations.
Near-optimal integration of facial form and motion.
Dobs, Katharina; Ma, Wei Ji; Reddy, Leila
2017-09-08
Human perception consists of the continuous integration of sensory cues pertaining to the same object. While it has been fairly well shown that humans use an optimal strategy when integrating low-level cues proportional to their relative reliability, the integration processes underlying high-level perception are much less understood. Here we investigate cue integration in a complex high-level perceptual system, the human face processing system. We tested cue integration of facial form and motion in an identity categorization task and found that an optimal model could successfully predict subjects' identity choices. Our results suggest that optimal cue integration may be implemented across different levels of the visual processing hierarchy.
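The "optimal strategy" referred to above is maximum-likelihood cue combination: each cue is weighted in proportion to its reliability (the inverse of its variance), and the combined estimate is more reliable than any single cue. A minimal sketch of this standard model, not the authors' specific implementation for form and motion cues, follows.

```python
def integrate_cues(estimates, variances):
    """Reliability-weighted (maximum-likelihood) cue combination.
    Each cue's weight is proportional to its inverse variance; the
    combined variance is lower than that of any individual cue."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    combined_estimate = sum(w * x for w, x in zip(weights, estimates)) / total
    combined_variance = 1.0 / total
    return combined_estimate, combined_variance
```

For example, two equally reliable cues are simply averaged, while an unreliable cue pulls the combined estimate only weakly; testing whether observers' choices follow these predicted weights is the usual way such "near-optimal" claims are evaluated.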
Development of preference for conspecific faces in human infants.
Sanefuji, Wakako; Wada, Kazuko; Yamamoto, Tomoka; Mohri, Ikuko; Taniike, Masako
2014-04-01
Previous studies have proposed that humans may be born with mechanisms that attend to conspecifics. However, as previous studies have relied on stimuli featuring human adults, it remains unclear whether infants attend only to adult humans or to the entire human species. We found that 1-month-old infants (n = 23) were able to differentiate between human and monkey infants' faces; however, they exhibited no preference for human infants' faces over monkey infants' faces (n = 24) and discriminated individual differences only within the category of human infants' faces (n = 30). We successfully replicated previous findings that 1-month-old infants (n = 42) preferred adult humans, even adults of other races, to adult monkeys. Further, by 3 months of age, infants (n = 55) preferred human faces to monkey faces with both infant and adult stimuli. Human infants' spontaneous preference for conspecific faces appears to be initially limited to conspecific adults and afterward extended to conspecific infants. Future research should attempt to determine whether preference for human adults results from some innate tendency to attend to conspecific adults or from the impact of early experiences with adults.
Many faces of expertise: fusiform face area in chess experts and novices.
Bilalić, Merim; Langner, Robert; Ulrich, Rolf; Grodd, Wolfgang
2011-07-13
The fusiform face area (FFA) is involved in face perception to such an extent that some claim it is a brain module exclusively for faces. The other possibility is that FFA is modulated by experience in individuation in any visual domain, not only faces. Here we test this latter FFA expertise hypothesis using the game of chess as a domain of investigation. We exploited a characteristic of chess, which features multiple objects forming meaningful spatial relations. In three experiments, we show that FFA activity is related to stimulus properties and not to chess skill directly. In all chess and non-chess tasks, experts' FFA was more activated than that of novices only when they dealt with naturalistic full-board chess positions. When common spatial relationships formed by chess objects in chess positions were randomly disturbed, FFA was again differentially active only in experts, regardless of the actual task. Our experiments show that FFA contributes to the holistic processing of domain-specific multipart stimuli in chess experts. This suggests that FFA may not only mediate human expertise in face recognition but, supporting the expertise hypothesis, may mediate the automatic holistic processing of any highly familiar multipart visual input.
Inagaki, Mikio; Fujita, Ichiro
2011-07-13
Social communication in nonhuman primates and humans is strongly affected by facial information from other individuals. Many cortical and subcortical brain areas are known to be involved in processing facial information. However, how the neural representation of faces differs across different brain areas remains unclear. Here, we demonstrate that the reference frame for spatial frequency (SF) tuning of face-responsive neurons differs in the temporal visual cortex and amygdala in monkeys. Consistent with psychophysical properties for face recognition, temporal cortex neurons were tuned to image-based SFs (cycles/image) and showed viewing distance-invariant representation of face patterns. On the other hand, many amygdala neurons were influenced by retina-based SFs (cycles/degree), a characteristic that is useful for social distance computation. The two brain areas also differed in the luminance contrast sensitivity of face-responsive neurons; amygdala neurons sharply reduced their responses to low luminance contrast images, while temporal cortex neurons maintained the level of their responses. From these results, we conclude that different types of visual processing in the temporal visual cortex and the amygdala contribute to the construction of the neural representations of faces.
What the Human Brain Likes About Facial Motion
Schultz, Johannes; Brockhaus, Matthias; Bülthoff, Heinrich H.; Pilz, Karin S.
2013-01-01
Facial motion carries essential information about other people's emotions and intentions. Most previous studies have suggested that facial motion is mainly processed in the superior temporal sulcus (STS), but several recent studies have also shown involvement of ventral temporal face-sensitive regions. To date, it is not known whether the increased response to facial motion is due to an increased amount of static information in the stimulus, to the deformation of the face over time, or to increased attentional demands. We presented nonrigidly moving faces and control stimuli to participants performing a demanding task unrelated to the face stimuli. We manipulated the amount of static information by using movies with different frame rates. The fluidity of the motion was manipulated by presenting movies with frames either in the order in which they were recorded or in scrambled order. Results confirm higher activation for moving compared with static faces in STS and under certain conditions in ventral temporal face-sensitive regions. Activation was maximal at a frame rate of 12.5 Hz and smaller for scrambled movies. These results indicate that both the amount of static information and the fluid facial motion per se are important factors for the processing of dynamic faces. PMID:22535907
Klein, Fabian; Iffland, Benjamin; Schindler, Sebastian; Wabnitz, Pascal; Neuner, Frank
2015-12-01
Recent studies have shown that the perceptual processing of human faces is affected by context information, such as previous experiences and information about the person represented by the face. The present study investigated the impact of verbally presented information about the person that varied with respect to affect (neutral, physically threatening, socially threatening) and reference (self-referred, other-referred) on the processing of faces with an inherently neutral expression. Stimuli were presented in a randomized presentation paradigm. Event-related potential (ERP) analysis demonstrated a modulation of the evoked potentials by reference at the EPN (early posterior negativity) and LPP (late positive potential) stage and an enhancing effect of affective valence on the LPP (700-1000 ms), with socially threatening context information leading to the most pronounced LPP amplitudes. We also found an interaction between reference and valence, with self-referred neutral context information leading to a more pronounced LPP than other-referred neutral context information. Our results indicate an impact of self-reference on early, presumably automatic processing stages and also a strong impact of valence on later stages. Using a randomized presentation paradigm, this study confirms that context information affects the visual processing of faces, ruling out possible confounding factors such as facial configuration or conditional learning effects.
Fast hierarchical knowledge-based approach for human face detection in color images
NASA Astrophysics Data System (ADS)
Jiang, Jun; Gong, Jie; Zhang, Guilin; Hu, Ruolan
2001-09-01
This paper presents a fast hierarchical knowledge-based approach for automatically detecting multi-scale upright faces in still color images. The approach consists of three levels. At the highest level, skin-like regions are determined by a skin model based on the hue and saturation attributes in HSV color space, as well as the red and green attributes in normalized color space. In level 2, a new eye model is devised to select human face candidates within the segmented skin-like regions. An important feature of the eye model is that it is independent of face scale, so faces of different scales can be found by scanning the image only once, which greatly reduces the computation time of face detection. In level 3, a human face mosaic image model, consistent with the physical structure of the human face, is applied to judge whether faces are present in the candidate regions. This model includes edge and gray rules. Experimental results show that the approach is robust and fast, with broad application prospects in human-computer interaction, video telephony, etc.
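The first-level skin test described above combines thresholds in two color spaces. A minimal sketch of that idea follows; the `is_skin_pixel`/`skin_mask` helpers and all threshold values are illustrative assumptions, not the authors' published parameters:

```python
import colorsys

def is_skin_pixel(r, g, b):
    """Rough skin-color test combining hue/saturation in HSV with
    normalized red/green, loosely following the two-color-space idea.
    All thresholds are illustrative guesses."""
    total = r + g + b
    if total == 0:
        return False
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    hue_deg = h * 360.0
    rn, gn = r / total, g / total  # normalized color space
    hsv_ok = (hue_deg < 50.0 or hue_deg > 340.0) and 0.1 < s < 0.7
    norm_ok = 0.35 < rn < 0.7 and 0.2 < gn < 0.4
    return hsv_ok and norm_ok

def skin_mask(pixels):
    """Map an image, given as rows of (r, g, b) tuples, to a binary
    mask of skin-like pixels (the level-1 segmentation output)."""
    return [[1 if is_skin_pixel(*px) else 0 for px in row]
            for row in pixels]
```

Candidate regions for levels 2 and 3 would then be connected components of this mask.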
Nguyen, Nga; Lee, Laura M; Fashing, Peter J; Nurmi, Niina O; Stewart, Kathrine M; Turner, Taylor J; Barry, Tyler S; Callingham, Kadie R; Goodale, C Barret; Kellogg, Bryce S; Burke, Ryan J; Bechtold, Emily K; Claase, Megan J; Eriksen, G Anita; Jones, Sorrel C Z; Kerby, Jeffrey T; Kraus, Jacob B; Miller, Carrie M; Trew, Thomas H; Zhao, Yi; Beierschmitt, Evan C; Ramsay, Malcolm S; Reynolds, Jason D; Venkataraman, Vivek V
2017-05-01
The birth process has been studied extensively in many human societies, yet little is known about this essential life history event in other primates. Here, we provide the most detailed account of behaviors surrounding birth for any wild nonhuman primate to date. Over a recent ∼10-year period, we directly observed 15 diurnal births (13 live births and 2 stillbirths) among geladas (Theropithecus gelada) at Guassa, Ethiopia. During each birth, we recorded the occurrence (or absence) of 16 periparturitional events, chosen for their potential to provide comparative evolutionary insights into the factors that shaped birth behaviors in humans and other primates. We found that several events (e.g., adopting standing crouched positions, delivering infants headfirst) occurred during all births, while other events (e.g., aiding the infant from the birth canal, licking infants following delivery, placentophagy) occurred during, or immediately after, most births. Moreover, multiparas (n = 9) were more likely than primiparas (n = 6) to (a) give birth later in the day, (b) isolate themselves from nearby conspecifics while giving birth, (c) aid the infant from the birth canal, and (d) consume the placenta. Our results suggest that prior maternal experience may contribute to greater competence or efficiency during the birth process. Moreover, face presentations (in which infants are born with their neck extended and their face appearing first, facing the mother) appear to be the norm for geladas. Lastly, malpresentations (in which infants are born in the occiput anterior position more typical of human infants) may be associated with increased mortality in this species. We compare the birth process in geladas to those in other primates (including humans) and discuss several key implications of our study for advancing understanding of obstetrics and the mechanism of labor in humans and nonhuman primates. © 2017 Wiley Periodicals, Inc.
A special purpose knowledge-based face localization method
NASA Astrophysics Data System (ADS)
Hassanat, Ahmad; Jassim, Sabah
2008-04-01
This paper is concerned with face localization for a visual speech recognition (VSR) system. Face detection and localization have received a great deal of attention in the last few years, because they are an essential pre-processing step in many techniques that deal with faces (e.g. age, face, gender, race and visual speech recognition). We present an efficient method for localizing human faces in video images captured on constrained mobile devices, under wide variation in lighting conditions. We use a multiphase method that may include all or some of the following steps, starting with image pre-processing, followed by special-purpose edge detection, then an image refinement step. The output image is passed through a discrete wavelet decomposition procedure, and the computed LL sub-band at a certain level is transformed into a binary image that is scanned with a special template to select a number of candidate locations. Finally, we fuse the scores from the wavelet step with scores determined by color information for each candidate location and employ a form of fuzzy logic to distinguish face from non-face locations. We present the results of a large number of experiments to demonstrate that the proposed face localization method is efficient and achieves a high level of accuracy, outperforming existing general-purpose face detection methods.
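The wavelet step can be sketched compactly: one level of 2-D Haar decomposition keeps only the LL (low-low) sub-band, i.e. a 2x2 block average, which is then binarized before template scanning. The function names and the threshold below are illustrative assumptions, not the paper's implementation:

```python
def haar_ll(image):
    """One level of 2-D Haar decomposition, keeping only the LL
    sub-band: average each 2x2 block.  `image` is a list of rows of
    grayscale values; both dimensions are assumed even."""
    h, w = len(image), len(image[0])
    return [[(image[y][x] + image[y][x + 1]
              + image[y + 1][x] + image[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def binarize(subband, threshold):
    """Turn the LL sub-band into the binary image that the template
    scan operates on; the threshold is an illustrative parameter."""
    return [[1 if v >= threshold else 0 for v in row] for row in subband]
```

Applying `haar_ll` repeatedly yields the "certain level" of decomposition mentioned above, each level halving both image dimensions.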
Consecutive TMS-fMRI reveals remote effects of neural noise to the "occipital face area".
Solomon-Harris, Lily M; Rafique, Sara A; Steeves, Jennifer K E
2016-11-01
The human cortical system for face perception comprises a network of connected regions including the middle fusiform gyrus ("fusiform face area" or FFA), the inferior occipital gyrus ("occipital face area" or OFA), and the posterior superior temporal sulcus (pSTS). Here, we sought to investigate how transcranial magnetic stimulation (TMS) to the OFA affects activity within the face processing network. We used offline repetitive TMS to temporarily introduce neural noise in the right OFA in healthy subjects. We then immediately performed functional magnetic resonance imaging (fMRI) to measure changes in blood oxygenation level dependent (BOLD) signal across the face network using an fMR-adaptation (fMR-A) paradigm. We hypothesized that TMS to the right OFA would induce abnormal face identity coding throughout the face processing network in regions to which it has direct or indirect connections. Indeed, BOLD signal for face identity, but not non-face (butterfly) identity, decreased in the right OFA and FFA following TMS to the right OFA compared to both sham TMS and TMS to a control site, the nearby object-related lateral occipital area (LO). Further, TMS to the right OFA decreased face-related activation in the left FFA, without any effect in the left OFA. Our findings indicate that TMS to the right OFA selectively disrupts face coding at both the stimulation site and bilateral FFA. TMS to the right OFA also decreased BOLD signal for different identity stimuli in the right pSTS. Together with mounting evidence from patient studies, we demonstrate connectivity of the OFA within the face network and that its activity modulates face processing in bilateral FFA as well as the right pSTS. Moreover, this study shows that deep regions within the face network can be remotely probed by stimulating structures closer to the cortical surface. Copyright © 2016 Elsevier B.V. All rights reserved.
Spatiotemporal dynamics of similarity-based neural representations of facial identity.
Vida, Mark D; Nestor, Adrian; Plaut, David C; Behrmann, Marlene
2017-01-10
Humans' remarkable ability to quickly and accurately discriminate among thousands of highly similar complex objects demands rapid and precise neural computations. To elucidate the process by which this is achieved, we used magnetoencephalography to measure spatiotemporal patterns of neural activity with high temporal resolution during visual discrimination among a large and carefully controlled set of faces. We also compared these neural data to lower level "image-based" and higher level "identity-based" model-based representations of our stimuli and to behavioral similarity judgments of our stimuli. Between ∼50 and 400 ms after stimulus onset, face-selective sources in right lateral occipital cortex and right fusiform gyrus and sources in a control region (left V1) yielded successful classification of facial identity. In all regions, early responses were more similar to the image-based representation than to the identity-based representation. In the face-selective regions only, responses were more similar to the identity-based representation at several time points after 200 ms. Behavioral responses were more similar to the identity-based representation than to the image-based representation, and their structure was predicted by responses in the face-selective regions. These results provide a temporally precise description of the transformation from low- to high-level representations of facial identity in human face-selective cortex and demonstrate that face-selective cortical regions represent multiple distinct types of information about face identity at different times over the first 500 ms after stimulus onset. These results have important implications for understanding the rapid emergence of fine-grained, high-level representations of object identity, a computation essential to human visual expertise.
Meet The Simpsons: top-down effects in face learning.
Bonner, Lesley; Burton, A Mike; Jenkins, Rob; McNeill, Allan; Bruce, Vicki
2003-01-01
We examined whether prior knowledge of a person affects the visual processes involved in learning a face. In two experiments, subjects were taught to associate human faces with characters they knew (from the TV show The Simpsons) or characters they did not (novel names). In each experiment, knowledge of the character predicted performance in a recognition memory test, relying only on old/new confidence ratings. In experiment 1, we established the technique and showed that there is a face-learning advantage for known people, even when face items are counterbalanced for familiarity across the experiment. In experiment 2 we replicated the effect in a setting which discouraged subjects from attending more to known than unknown people, and eliminated any visual association between face stimuli and a character from The Simpsons. We conclude that prior knowledge about a person can enhance learning of a new face.
Establishing operational stability--developing human infrastructure.
Gomez, Max A; Byers, Ernest J; Stingley, Preston; Sheridan, Robert M; Hirsch, Joshua A
2010-12-01
Over the past year, Toyota has come under harsh scrutiny as a result of several recalls. These well-publicized mishaps have not only done damage to Toyota's otherwise sterling reputation for quality but have also called into question the assertions from a phalanx of followers that Toyota's production system (generically referred to as TPS or Lean) is the best method by which to structure one's systems of operation. In this article, we discuss how Toyota, faced with the pressure to grow its business, did not appropriately cadence this growth with the continued development and maintenance of the process capabilities (vis-à-vis the development of human infrastructure) needed to adequately support that growth. We draw parallels between the pressure Toyota faced to grow its business and the pressure neurointerventional practices face to grow theirs, and offer a methodology to support that growth without sacrificing quality.
Tsukiura, Takashi; Cabeza, Roberto
2008-01-01
Memory processes can be enhanced by reward, and social signals such as a smiling face can be rewarding to humans. Using event-related functional MRI (fMRI), we investigated the rewarding effect of a simple smile during the encoding and retrieval of face-name associations. During encoding, participants viewed smiling or neutral faces, each paired with a name, and during retrieval, only names were presented, and participants retrieved the associated facial expressions. Successful memory activity of face-name associations was identified by comparing remembered vs. forgotten trials during both encoding and retrieval, and the effect of a smile was identified by comparing successful memory trials for smiling vs. neutral faces. The study yielded three main findings. First, behavioral results showed that the retrieval of face-name associations was more accurate and faster for smiling than neutral faces. Second, the orbitofrontal cortex and the hippocampus showed successful encoding and retrieval activations, which were greater for smiling than neutral faces. Third, functional connectivity between the orbitofrontal cortex and the hippocampus during successful encoding and retrieval was stronger for smiling than neutral faces. As a part of the reward system, the orbitofrontal cortex may modulate memory processes of face-name associations mediated by the hippocampus. Interestingly, the effect of a smile during retrieval was found even though only names were presented as retrieval cues, suggesting that the effect was mediated by face imagery. Taken together, the results demonstrate how rewarding social signals from a smiling face can enhance relational memory for face-name associations.
Liu-Shuang, Joan; Torfs, Katrien; Rossion, Bruno
2016-03-01
One of the most striking pieces of evidence for a specialised face processing system in humans is acquired prosopagnosia, i.e. the inability to individualise faces following brain damage. However, a sensitive and objective non-behavioural marker for this deficit is difficult to provide with standard event-related potentials (ERPs), such as the well-known face-related N170 component reported and investigated in depth by our late distinguished colleague Shlomo Bentin. Here we demonstrate that fast periodic visual stimulation (FPVS) in electrophysiology can quantify face individualisation impairment in acquired prosopagnosia. In Experiment 1 (Liu-Shuang et al., 2014), identical faces were presented at a rate of 5.88 Hz (i.e., ≈ 6 images/s, SOA=170 ms, 1 fixation per image), with different faces appearing every 5th face (5.88 Hz/5=1.18 Hz). Responses of interest were identified at these predetermined frequencies (i.e., objectively) in the EEG frequency-domain data. A well-studied case of acquired prosopagnosia (PS) and a group of age- and gender-matched controls completed only 4 × 1-min stimulation sequences, with an orthogonal fixation cross task. Contrary to controls, PS did not show face individualisation responses at 1.18 Hz, in line with her prosopagnosia. However, her response at 5.88 Hz, reflecting general visual processing, was within the normal range. In Experiment 2 (Rossion et al., 2015), we presented natural (i.e., unsegmented) images of objects at 5.88 Hz, with face images shown every 5th image (1.18 Hz). In accordance with her preserved ability to categorise a face as a face, and despite extensive brain lesions potentially affecting the overall EEG signal-to-noise ratio, PS showed 1.18 Hz face-selective responses within the normal range. 
Collectively, these findings show that fast periodic visual stimulation provides objective and sensitive electrophysiological markers of preserved and impaired face processing abilities in the neuropsychological population. Copyright © 2015 Elsevier Ltd. All rights reserved.
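The frequency-domain readout at the heart of FPVS can be illustrated with a single-bin discrete Fourier amplitude. The sampling rate, the synthetic trace, and the `amplitude_at` helper below are toy assumptions, not the study's actual analysis pipeline:

```python
import math

def amplitude_at(signal, sample_rate, freq):
    """Amplitude of `signal` at one frequency, via a single-frequency
    discrete Fourier projection.  In an FPVS analysis this would be
    read out at the base (5.88 Hz) and face-individualisation
    (1.18 Hz) frequencies."""
    n = len(signal)
    re = sum(signal[t] * math.cos(2 * math.pi * freq * t / sample_rate)
             for t in range(n))
    im = sum(signal[t] * math.sin(2 * math.pi * freq * t / sample_rate)
             for t in range(n))
    return 2.0 * math.hypot(re, im) / n

# Synthetic 50 s "EEG" trace at an assumed 250 Hz sampling rate, with a
# strong response at the base frequency and a weaker oddball response:
rate = 250.0
sig = [math.sin(2 * math.pi * 5.88 * t / rate)          # base, 5.88 Hz
       + 0.5 * math.sin(2 * math.pi * 1.18 * t / rate)  # oddball, 1.18 Hz
       for t in range(12500)]
```

Reading `amplitude_at(sig, rate, 1.18)` recovers the oddball amplitude (0.5 in this toy trace); in the patient data, the analogous face-individualisation response was absent while the 5.88 Hz base response remained normal.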
Fear processing and social networking in the absence of a functional amygdala.
Becker, Benjamin; Mihov, Yoan; Scheele, Dirk; Kendrick, Keith M; Feinstein, Justin S; Matusch, Andreas; Aydin, Merve; Reich, Harald; Urbach, Horst; Oros-Peusquens, Ana-Maria; Shah, Nadim J; Kunz, Wolfram S; Schlaepfer, Thomas E; Zilles, Karl; Maier, Wolfgang; Hurlemann, René
2012-07-01
The human amygdala plays a crucial role in processing social signals, such as face expressions, particularly fearful ones, and facilitates responses to them in face-sensitive cortical regions. This contributes to social competence and individual amygdala size correlates with that of social networks. While rare patients with focal bilateral amygdala lesion typically show impaired recognition of fearful faces, this deficit is variable, and an intriguing possibility is that other brain regions can compensate to support fear and social signal processing. To investigate the brain's functional compensation of selective bilateral amygdala damage, we performed a series of behavioral, psychophysiological, and functional magnetic resonance imaging experiments in two adult female monozygotic twins (patient 1 and patient 2) with equivalent, extensive bilateral amygdala pathology as a sequela of lipoid proteinosis due to Urbach-Wiethe disease. Patient 1, but not patient 2, showed preserved recognition of fearful faces, intact modulation of acoustic startle responses by fear-eliciting scenes, and a normal-sized social network. Functional magnetic resonance imaging revealed that patient 1 showed potentiated responses to fearful faces in her left premotor cortex face area and bilaterally in the inferior parietal lobule. The premotor cortex face area and inferior parietal lobule are both implicated in the cortical mirror-neuron system, which mediates learning of observed actions and may thereby promote both imitation and empathy. Taken together, our findings suggest that despite the pre-eminent role of the amygdala in processing social information, the cortical mirror-neuron system may sometimes adaptively compensate for its pathology. Copyright © 2012 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
Gilaie-Dotan, Sharon; Silvanto, Juha; Schwarzkopf, Dietrich S.; Rees, Geraint
2010-01-01
The occipital face area (OFA) is face-selective. This enhanced activation to faces could reflect either generic face and shape-related processing or high-level conceptual processing of identity. Here we examined these two possibilities using a state-dependent transcranial magnetic stimulation (TMS) paradigm. The lateral occipital (LO) cortex which is activated non-selectively by various types of objects served as a control site. We localized OFA and LO on a per-participant basis using functional MRI. We then examined whether TMS applied to either of these regions affected the ability of participants to decide whether two successively presented and physically different face images were of the same famous person or different famous people. TMS was applied during the delay between first and second face presentations to investigate whether neuronal populations in these regions played a causal role in mediating the behavioral effects of identity repetition. Behaviorally we found a robust identity repetition effect, with shorter reaction times (RTs) when identity was repeated, regardless of the fact that the pictures were physically different. Surprisingly, TMS applied over LO (but not OFA) modulated overall RTs, compared to the No-TMS condition. But critically, we found no effects of TMS to either area that were modulated by identity repetition. Thus, we found no evidence to suggest that OFA or LO contain neuronal representations selective for the identity of famous faces which play a causal role in identity processing. Instead, these brain regions may be involved in the processing of more generic features of their preferred stimulus categories. PMID:20631842
Active glass-type human augmented cognition system considering attention and intention
NASA Astrophysics Data System (ADS)
Kim, Bumhwi; Ojha, Amitash; Lee, Minho
2015-10-01
Human cognition is the result of an interaction of several complex cognitive processes with limited capabilities. Therefore, the primary objective of human cognitive augmentation is to assist and expand these limited human cognitive capabilities, independently or together. In this study, we propose a glass-type human augmented cognition system, which attempts to actively assist human memory functions by providing relevant, necessary and intended information while constantly assessing the intention of the user. To achieve this, we exploit selective attention and intention processes. Although the system can be used in various real-life scenarios, we test its performance in a person-identification scenario. To detect the intended face, the system analyses gaze points and changes in pupil size to determine the intention of the user. An assessment of the gaze points and change in pupil size together indicates that the user intends to know the identity of, and information about, the person in question. The system then obtains additional cues through a speech recognition system and retrieves relevant information about the face, which is finally displayed on a head-mounted display. We present the performance of several components of the system. Our results show that active, relevant assistance based on the user's intention significantly helps to enhance memory functions.
Jessen, Sarah; Altvater-Mackensen, Nicole; Grossmann, Tobias
2016-05-01
Sensitive responding to others' emotions is essential during social interactions among humans. There is evidence for the existence of subcortically mediated emotion discrimination processes that occur independent of conscious perception in adults. However, only recently work has begun to examine the development of automatic emotion processing systems during infancy. In particular, it is unclear whether emotional expressions impact infants' autonomic nervous system regardless of conscious perception. We examined this question by measuring pupillary responses while subliminally and supraliminally presenting 7-month-old infants with happy and fearful faces. Our results show greater pupil dilation, indexing enhanced autonomic arousal, in response to happy compared to fearful faces regardless of conscious perception. Our findings suggest that, early in ontogeny, emotion discrimination occurs independent of conscious perception and is associated with differential autonomic responses. This provides evidence for the view that automatic emotion processing systems are an early-developing building block of human social functioning. Copyright © 2016 Elsevier B.V. All rights reserved.
Cross spectral, active and passive approach to face recognition for improved performance
NASA Astrophysics Data System (ADS)
Grudzien, A.; Kowalski, M.; Szustakowski, M.
2017-08-01
Biometrics is a technique for the automatic recognition of a person based on physiological or behavioral characteristics. Because the characteristics used are unique, biometrics can create a direct link between a person and an identity, based on a variety of characteristics. The human face is one of the most important biometric modalities for automatic authentication. The most popular face recognition methods, which rely on processing visible-spectrum information, remain imperfect. Thermal infrared imagery may be a promising alternative or complement to visible-range imaging for several reasons. This paper presents an approach that combines both methods.
A defense of the subordinate-level expertise account for the N170 component.
Rossion, Bruno; Curran, Tim; Gauthier, Isabel
2002-09-01
A recent paper in this journal reports two event-related potential (ERP) experiments interpreted as supporting the domain specificity of the visual mechanisms implicated in processing faces (Cognition 83 (2002) 1). The authors argue that because a large neurophysiological response to faces (N170) is less influenced by the task than the response to objects, and because the response for human faces extends to ape faces (for which we are not expert), we should reject the hypothesis that the face-sensitivity reflected by the N170 can be accounted for by the subordinate-level expertise model of object recognition (Nature Neuroscience 3 (2000) 764). In this commentary, we question this conclusion based on some of our own ERP work on expert object recognition as well as the work of others.
Hirata, Satoshi; Fuwa, Koki; Sugama, Keiko; Kusunoki, Kiyo; Fujita, Shin
2010-09-01
This paper reports on the use of an eye-tracking technique to examine how chimpanzees look at facial photographs of conspecifics. Six chimpanzees viewed a sequence of pictures presented on a monitor while their eye movements were measured by an eye tracker. The pictures presented conspecific faces with open or closed eyes in an upright or inverted orientation in a frame. The results demonstrated that chimpanzees looked at the eyes, nose, and mouth more frequently than would be expected on the basis of random scanning of faces. More specifically, they looked at the eyes longer than they looked at the nose and mouth when photographs of upright faces with open eyes were presented, suggesting that particular attention to the eyes represents a spontaneous face-scanning strategy shared among monkeys, apes, and humans. In contrast to the results obtained for upright faces with open eyes, the viewing times for the eyes, nose, and mouth of inverted faces with open eyes did not differ from one another. The viewing times for the eyes, nose, and mouth of faces with closed eyes did not differ when faces with closed eyes were presented in either an upright or inverted orientation. These results suggest the possibility that open eyes play an important role in the configural processing of faces and that chimpanzees perceive and process open and closed eyes differently.
The fusiform face area: a cortical region specialized for the perception of faces
Kanwisher, Nancy; Yovel, Galit
2006-01-01
Faces are among the most important visual stimuli we perceive, informing us not only about a person's identity, but also about their mood, sex, age and direction of gaze. The ability to extract this information within a fraction of a second of viewing a face is important for normal social interactions and has probably played a critical role in the survival of our primate ancestors. Considerable evidence from behavioural, neuropsychological and neurophysiological investigations supports the hypothesis that humans have specialized cognitive and neural mechanisms dedicated to the perception of faces (the face-specificity hypothesis). Here, we review the literature on a region of the human brain that appears to play a key role in face perception, known as the fusiform face area (FFA). Section 1 outlines the theoretical background for much of this work. The face-specificity hypothesis falls squarely on one side of a longstanding debate in the fields of cognitive science and cognitive neuroscience concerning the extent to which the mind/brain is composed of: (i) special-purpose (‘domain-specific’) mechanisms, each dedicated to processing a specific kind of information (e.g. faces, according to the face-specificity hypothesis), versus (ii) general-purpose (‘domain-general’) mechanisms, each capable of operating on any kind of information. Face perception has long served both as one of the prime candidates of a domain-specific process and as a key target for attack by proponents of domain-general theories of brain and mind. Section 2 briefly reviews the prior literature on face perception from behaviour and neurophysiology. This work supports the face-specificity hypothesis and argues against its domain-general alternatives (the individuation hypothesis, the expertise hypothesis and others). Section 3 outlines the more recent evidence on this debate from brain imaging, focusing particularly on the FFA. 
We review the evidence that the FFA is selectively engaged in face perception, by addressing (and rebutting) five of the most widely discussed alternatives to this hypothesis. In §4, we consider recent findings that are beginning to provide clues into the computations conducted in the FFA and the nature of the representations the FFA extracts from faces. We argue that the FFA is engaged both in detecting faces and in extracting the necessary perceptual information to recognize them, and that the properties of the FFA mirror previously identified behavioural signatures of face-specific processing (e.g. the face-inversion effect). Section 5 asks how the computations and representations in the FFA differ from those occurring in other nearby regions of cortex that respond strongly to faces and objects. The evidence indicates clear functional dissociations between these regions, demonstrating that the FFA shows not only functional specificity but also area specificity. We end by speculating in §6 on some of the broader questions raised by current research on the FFA, including the developmental origins of this region and the question of whether faces are unique versus whether similarly specialized mechanisms also exist for other domains of high-level perception and cognition. PMID:17118927
Preference for Attractive Faces in Human Infants Extends beyond Conspecifics
ERIC Educational Resources Information Center
Quinn, Paul C.; Kelly, David J.; Lee, Kang; Pascalis, Olivier; Slater, Alan M.
2008-01-01
Human infants, just a few days of age, are known to prefer attractive human faces. We examined whether this preference is human-specific. Three- to 4-month-olds preferred attractive over unattractive domestic and wild cat (tiger) faces (Experiments 1 and 3). The preference was not observed when the faces were inverted, suggesting that it did not…
Perceived Animacy Influences the Processing of Human-Like Surface Features in the Fusiform Gyrus
Shultz, Sarah; McCarthy, Gregory
2014-01-01
While decades of research have demonstrated that a region of the right fusiform gyrus (FG) responds selectively to faces, a second line of research suggests that the FG responds to a range of animacy cues, including biological motion and goal-directed actions, even in the absence of faces or other human-like surface features. These findings raise the question of whether the FG is indeed sensitive to faces or to the more abstract category of animate agents. The current study uses fMRI to examine whether the FG responds to all faces in a category-specific way or whether the FG is especially sensitive to the faces of animate agents. Animate agents are defined here as intentional agents with the capacity for rational goal-directed actions. Specifically, we examine how the FG responds to an entity that looks like an animate agent but that lacks the capacity for goal-directed, rational action. Region-of-interest analyses reveal that the FG activates more strongly to the animate compared with the inanimate entity, even though the surface features of both animate and inanimate entities were identical. These results suggest that the FG does not respond to all faces in a category-specific way, and is instead especially sensitive to whether an entity is animate. PMID:24905285
Halliday, Drew W. R.; MacDonald, Stuart W. S.; Sherf, Suzanne K.; Tanaka, James W.
2014-01-01
Although not a core symptom of the disorder, individuals with autism often exhibit selective impairments in their face processing abilities. Importantly, the reciprocal connection between autistic traits and face perception has rarely been examined within the typically developing population. In this study, university participants from the social sciences, physical sciences, and humanities completed a battery of measures that assessed face, object and emotion recognition abilities, general perceptual-cognitive style, and sub-clinical autistic traits (the Autism Quotient (AQ)). We employed separate hierarchical multiple regression analyses to evaluate which factors could predict face recognition scores and AQ scores. Gender, object recognition performance, and AQ scores predicted face recognition behaviour. Specifically, males, individuals with more autistic traits, and those with lower object recognition scores performed more poorly on the face recognition test. Conversely, university major, gender and face recognition performance reliably predicted AQ scores. Science majors, males, and individuals with poor face recognition skills showed more autistic-like traits. These results suggest that the broader autism phenotype is associated with lower face recognition abilities, even among typically developing individuals. PMID:24853862
An efficient method for facial component detection in thermal images
NASA Astrophysics Data System (ADS)
Paul, Michael; Blanik, Nikolai; Blazek, Vladimir; Leonhardt, Steffen
2015-04-01
A method to detect certain regions in thermal images of human faces is presented. In this approach, the following steps are necessary to locate the periorbital and the nose regions: First, the face is segmented from the background by thresholding and morphological filtering. Subsequently, a search region within the face, around its center of mass, is evaluated. Automatically computed temperature thresholds are used per subject and image or image sequence to generate binary images, in which the periorbital regions are located by integral projections. Then, the located positions are used to approximate the nose position. It is possible to track features in the located regions. Therefore, these regions are interesting for different applications like human-machine interaction, biometrics and biomedical imaging. The method is easy to implement and does not rely on any training images or templates. Furthermore, the approach saves processing resources due to simple computations and restricted search regions.
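The localization pipeline described above (temperature thresholding, restriction to a search region, then integral projections) can be illustrated with a toy sketch. The function name, synthetic image, and temperature values below are illustrative assumptions, not from the paper; only the row-projection step is shown, using NumPy:

```python
import numpy as np

def locate_periorbital_row(thermal, face_mask, t_low, t_high):
    """Toy sketch: binarize a thermal face image by a per-subject temperature
    band, then use an integral (row) projection to find the horizontal band
    with the most above-threshold pixels -- a stand-in for the periorbital row."""
    binary = ((thermal >= t_low) & (thermal <= t_high)) & face_mask
    row_projection = binary.sum(axis=1)  # integral projection onto the vertical axis
    return int(np.argmax(row_projection)), row_projection

# Synthetic 100x100 "thermal" face: a warm periorbital stripe around rows 33-37
img = np.full((100, 100), 30.0)
img[33:38, 20:80] = 36.0
mask = np.ones((100, 100), dtype=bool)
row, proj = locate_periorbital_row(img, mask, 34.0, 37.0)
```

In a real system, the binary image would first be cleaned with morphological filtering and the search region limited around the face's center of mass, as the abstract describes.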
Face processing regions are sensitive to distinct aspects of temporal sequence in facial dynamics.
Reinl, Maren; Bartels, Andreas
2014-11-15
Facial movement conveys important information for social interactions, yet its neural processing is poorly understood. Computational models propose that shape- and temporal sequence sensitive mechanisms interact in processing dynamic faces. While face processing regions are known to respond to facial movement, their sensitivity to particular temporal sequences has barely been studied. Here we used fMRI to examine the sensitivity of human face-processing regions to two aspects of directionality in facial movement trajectories. We presented genuine movie recordings of increasing and decreasing fear expressions, each of which were played in natural or reversed frame order. This two-by-two factorial design matched low-level visual properties, static content and motion energy within each factor, emotion-direction (increasing or decreasing emotion) and timeline (natural versus artificial). The results showed sensitivity for emotion-direction in FFA, which was timeline-dependent as it only occurred within the natural frame order, and sensitivity to timeline in the STS, which was emotion-direction-dependent as it only occurred for decreased fear. The occipital face area (OFA) was sensitive to the factor timeline. These findings reveal interacting temporal sequence sensitive mechanisms that are responsive to both ecological meaning and to prototypical unfolding of facial dynamics. These mechanisms are temporally directional, provide socially relevant information regarding emotional state or naturalness of behavior, and agree with predictions from modeling and predictive coding theory. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Activation of the right fronto-temporal cortex during maternal facial recognition in young infants.
Carlsson, Jakob; Lagercrantz, Hugo; Olson, Linus; Printz, Gordana; Bartocci, Marco
2008-09-01
Within the first days of life infants can already recognize their mother. This ability is based on several sensory mechanisms and increases during the first year of life, having its most crucial phase between 6 and 9 months when cortical circuits develop. The underlying cortical structures that are involved in this process are still unknown. Herein we report how the prefrontal cortices of healthy 6- to 9-month-old infants react to the sight of their mother's faces compared to that of an unknown female face. Concentrations of oxygenated haemoglobin [HbO2] and deoxygenated haemoglobin [HHb] were measured using near infrared spectroscopy (NIRS) in both fronto-temporal and occipital areas on the right side during the exposure to maternal and unfamiliar faces. The infants exhibited a distinct and significantly higher activation-related haemodynamic response in the right fronto-temporal cortex following exposure to the image of their mother's face, [HbO2] (0.75 micromol/L, p < 0.001), as compared to that of an unknown face (0.25 micromol/L, p < 0.001). Event-related haemodynamic changes, suggesting cortical activation, in response to the sight of human faces were detected in 6- to 9-month old children. The right fronto-temporal cortex appears to be involved in face recognition processes at this age.
Emerging Structure–Function Relations in the Developing Face Processing System
Suzanne Scherf, K.; Thomas, Cibu; Doyle, Jaime; Behrmann, Marlene
2014-01-01
To evaluate emerging structure–function relations in a neural circuit that mediates complex behavior, we investigated age-related differences among cortical regions that support face recognition behavior and the fiber tracts through which they transmit and receive signals using functional neuroimaging and diffusion tensor imaging. In a large sample of human participants (aged 6–23 years), we derived the microstructural and volumetric properties of the inferior longitudinal fasciculus (ILF), the inferior fronto-occipital fasciculus, and control tracts, using independently defined anatomical markers. We also determined the functional characteristics of core face- and place-selective regions that are distributed along the trajectory of the pathways of interest. We observed disproportionately large age-related differences in the volume, fractional anisotropy, and mean and radial, but not axial, diffusivities of the ILF. Critically, these differences in the structural properties of the ILF were tightly and specifically linked with an age-related increase in the size of a key face-selective functional region, the fusiform face area. This dynamic association between emerging structural and functional architecture in the developing brain may provide important clues about the mechanisms by which neural circuits become organized and optimized in the human cortex. PMID:23765156
[Comparative studies of face recognition].
Kawai, Nobuyuki
2012-07-01
Every human being is proficient in face recognition. However, the reason for and the manner in which humans have attained such an ability remain unknown. These questions can be best answered through comparative studies of face recognition in non-human animals. Studies in both primates and non-primates show that not only primates, but also non-primates possess the ability to extract information from their conspecifics and from human experimenters. Neural specialization for face recognition is shared with mammals in distant taxa, suggesting that face recognition evolved earlier than the emergence of mammals. A recent study indicated that a social insect, the golden paper wasp, can distinguish conspecific faces, whereas a closely related species, which has a less complex social lifestyle with just one queen ruling a nest of underlings, did not show strong face recognition for its conspecifics. Social complexity and the need to differentiate between one another likely led humans to evolve their face recognition abilities.
More than words (and faces): evidence for a Stroop effect of prosody in emotion word processing.
Filippi, Piera; Ocklenburg, Sebastian; Bowling, Daniel L; Heege, Larissa; Güntürkün, Onur; Newen, Albert; de Boer, Bart
2017-08-01
Humans typically combine linguistic and nonlinguistic information to comprehend emotions. We adopted an emotion identification Stroop task to investigate how different channels interact in emotion communication. In experiment 1, synonyms of "happy" and "sad" were spoken with happy and sad prosody. Participants had more difficulty ignoring prosody than ignoring verbal content. In experiment 2, synonyms of "happy" and "sad" were spoken with happy and sad prosody, while happy or sad faces were displayed. Accuracy was lower when two channels expressed an emotion that was incongruent with the channel participants had to focus on, compared with the cross-channel congruence condition. When participants were required to focus on verbal content, accuracy was significantly lower also when prosody was incongruent with verbal content and face. This suggests that prosody biases emotional verbal content processing, even when conflicting with verbal content and face simultaneously. Implications for multimodal communication and language evolution studies are discussed.
Relating brain signal variability to knowledge representation.
Heisz, Jennifer J; Shedden, Judith M; McIntosh, Anthony R
2012-11-15
We assessed the hypothesis that brain signal variability is a reflection of functional network reconfiguration during memory processing. In the present experiments, we use multiscale entropy to capture the variability of human electroencephalogram (EEG) while manipulating the knowledge representation associated with faces stored in memory. Across two experiments, we observed increased variability as a function of greater knowledge representation. In Experiment 1, individuals with greater familiarity for a group of famous faces displayed more brain signal variability. In Experiment 2, brain signal variability increased with learning after multiple experimental exposures to previously unfamiliar faces. The results demonstrate that variability increases with face familiarity; cognitive processes during the perception of familiar stimuli may engage a broader network of regions, which manifests as higher complexity/variability in spatial and temporal domains. In addition, effects of repetition suppression on brain signal variability were observed, and the pattern of results is consistent with a selectivity model of neural adaptation. Crown Copyright © 2012. Published by Elsevier Inc. All rights reserved.
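The multiscale entropy measure used in this line of work is, in outline, coarse-graining of the signal followed by sample entropy at each scale. The following is a minimal NumPy sketch of that computation (Costa-style, with the tolerance fixed from the original series); the parameter values and the white-noise input are illustrative choices, not taken from the study:

```python
import numpy as np

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale` (coarse-graining step)."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m, tol):
    """SampEn = -ln(A/B): B = pairs of m-length templates matching within `tol`
    (Chebyshev distance), A = the same for length m+1."""
    def match_pairs(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        return (np.sum(d <= tol) - len(t)) / 2  # exclude self-matches, count pairs once
    b, a = match_pairs(m), match_pairs(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 6), m=2, r=0.15):
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)  # tolerance fixed from the original (scale-1) series
    return [sample_entropy(coarse_grain(x, s), m, tol) for s in scales]

rng = np.random.default_rng(0)
mse = multiscale_entropy(rng.standard_normal(1000))
```

For white noise, entropy falls off at coarser scales, whereas signals with long-range temporal structure maintain or increase their entropy across scales, which is what makes the measure useful as an index of "complexity" in EEG.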
Distributed Neural Activity Patterns during Human-to-Human Competition
Piva, Matthew; Zhang, Xian; Noah, J. Adam; Chang, Steve W. C.; Hirsch, Joy
2017-01-01
Interpersonal interaction is the essence of human social behavior. However, conventional neuroimaging techniques have tended to focus on social cognition in single individuals rather than on dyads or groups. As a result, relatively little is understood about the neural events that underlie face-to-face interaction. We resolved some of the technical obstacles inherent in studying interaction using a novel imaging modality and aimed to identify neural mechanisms engaged both within and across brains in an ecologically valid instance of interpersonal competition. Functional near-infrared spectroscopy was utilized to simultaneously measure hemodynamic signals representing neural activity in pairs of subjects playing poker against each other (human–human condition) or against computer opponents (human–computer condition). Previous fMRI findings concerning single subjects confirm that neural areas recruited during social cognition paradigms are individually sensitive to human–human and human–computer conditions. However, it is not known whether face-to-face interactions between opponents can extend these findings. We hypothesize distributed effects due to live processing and specific variations in across-brain coherence not observable in single-subject paradigms. Angular gyrus (AG), a component of the temporal-parietal junction (TPJ) previously found to be sensitive to socially relevant cues, was selected as a seed to measure within-brain functional connectivity. Increased connectivity was confirmed between AG and bilateral dorsolateral prefrontal cortex (dlPFC) as well as a complex including the left subcentral area (SCA) and somatosensory cortex (SS) during interaction with a human opponent. These distributed findings were supported by contrast measures that indicated increased activity at the left dlPFC and frontopolar area that partially overlapped with the region showing increased functional connectivity with AG. 
Across-brain analyses of neural coherence between the players revealed synchrony between dlPFC and supramarginal gyrus (SMG) and SS in addition to synchrony between AG and the fusiform gyrus (FG) and SMG. These findings present the first evidence of a frontal-parietal neural complex including the TPJ, dlPFC, SCA, SS, and FG that is more active during human-to-human social cognition both within brains (functional connectivity) and across brains (across-brain coherence), supporting a model of functional integration of socially and strategically relevant information during live face-to-face competitive behaviors. PMID:29218005
Puglia, Meghan H.; Lillard, Travis S.; Morris, James P.; Connelly, Jessica J.
2015-01-01
In humans, the neuropeptide oxytocin plays a critical role in social and emotional behavior. The actions of this molecule are dependent on a protein that acts as its receptor, which is encoded by the oxytocin receptor gene (OXTR). DNA methylation of OXTR, an epigenetic modification, directly influences gene transcription and is variable in humans. However, the impact of this variability on specific social behaviors is unknown. We hypothesized that variability in OXTR methylation impacts social perceptual processes often linked with oxytocin, such as perception of facial emotions. Using an imaging epigenetic approach, we established a relationship between OXTR methylation and neural activity in response to emotional face processing. Specifically, high levels of OXTR methylation were associated with greater amounts of activity in regions associated with face and emotion processing including amygdala, fusiform, and insula. Importantly, we found that these higher levels of OXTR methylation were also associated with decreased functional coupling of amygdala with regions involved in affect appraisal and emotion regulation. These data indicate that the human endogenous oxytocin system is involved in attenuation of the fear response, corroborating research implicating intranasal oxytocin in the same processes. Our findings highlight the importance of including epigenetic mechanisms in the description of the endogenous oxytocin system and further support a central role for oxytocin in social cognition. This approach linking epigenetic variability with neural endophenotypes may broadly explain individual differences in phenotype including susceptibility or resilience to disease. PMID:25675509
Problems of Face Recognition in Patients with Behavioral Variant Frontotemporal Dementia.
Chandra, Sadanandavalli Retnaswami; Patwardhan, Ketaki; Pai, Anupama Ramakanth
2017-01-01
Faces are very special as they are most essential for social cognition in humans. It is partly understood that face processing in its abstractness involves several extrastriate areas. One of the most important causes of caregiver suffering in patients with anterior dementia is lack of empathy, which, apart from being a behavioral disorder, could also be due to a failure to categorize the emotions of the people around them. Patients meeting DSM-IV criteria for behavioral-variant frontotemporal dementia (bvFTD) were tested for prosopagnosia using familiar faces, famous faces, smiling faces, crying faces and reflected faces presented on a simple picture card (figure 1); patients with advanced illness or mixed causes were excluded. Of 46 patients (15 females, 31 males; mean age 51.5 years), 24 had defective face recognition: 10/15 females (70%) and 14/31 males (45%). Familiar face recognition defects were found in 6/10 females and 6/14 males, i.e. 40% (6/15) of females and 19.35% (6/31) of males with FTD. Famous face recognition defects were found in 9/10 females and 7/14 males, i.e. 60% (9/15) of females with FTD as against 22.6% (7/31) of males with FTD. Smiling face recognition defects occurred in 8/10 females (53.33%, 8/15) and in no males. Crying face recognition defects occurred in 3/10 females and 2/14 males, i.e. 20% (3/15) of females and 6.5% (2/31) of males. Reflected face recognition defects occurred in 4 females. Famous face recognition and positive emotion recognition were defective in 80% of affected females, with only 20% comprehending positive emotions. Overall, face recognition defects were found in only 45% of males and were more common in females. Differential involvement of different aspects of face recognition could be one of the important factors underlying the decline in the emotional and social behavior of these patients. Understanding these pathological processes will give more insight into patient behavior.
Meaux, Emilie; Vuilleumier, Patrik
2016-11-01
The ability to decode facial emotions is of primary importance for human social interactions; yet, it is still debated how we analyze faces to determine their expression. Here we compared the processing of emotional face expressions through holistic integration and/or local analysis of visual features, and determined which brain systems mediate these distinct processes. Behavioral, physiological, and brain responses to happy and angry faces were assessed by presenting congruent global configurations of expressions (e.g., happy top+happy bottom), incongruent composite configurations (e.g., angry top+happy bottom), and isolated features (e.g., happy top only). Top and bottom parts were always from the same individual. Twenty-six healthy volunteers were scanned using fMRI while they classified the expression in either the top or the bottom face part but ignored information in the other non-target part. Results indicate that the recognition of happy and angry expressions is neither strictly holistic nor analytic. Both routes were involved, but with a different role for analytic and holistic information depending on the emotion type, and different weights of local features between happy and angry expressions. Dissociable neural pathways were engaged depending on emotional face configurations. In particular, regions within the face processing network differed in their sensitivity to holistic expression information, which predominantly activated fusiform and inferior occipital areas and the amygdala when internal features were congruent (i.e., template matching), whereas more local analysis of independent features preferentially engaged STS and prefrontal areas (IFG/OFC) in the context of full face configurations, but early visual areas and pulvinar when seen in isolated parts. 
Collectively, these findings suggest that facial emotion recognition recruits separate, but interactive dorsal and ventral routes within the face processing networks, whose engagement may be shaped by reciprocal interactions and modulated by task demands. Copyright © 2016 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Siler, Stephanie Ann; VanLehn, Kurt
2009-01-01
Face-to-face (FTF) human-human tutoring has ranked among the most effective forms of instruction. However, because computer-mediated (CM) tutoring is becoming increasingly common, it is instructive to evaluate its effectiveness relative to face-to-face tutoring. Does the lack of spoken, face-to-face interaction affect learning gains and…
Face inversion and acquired prosopagnosia reduce the size of the perceptual field of view.
Van Belle, Goedele; Lefèvre, Philippe; Rossion, Bruno
2015-03-01
Using a gaze-contingent morphing approach, we asked human observers to choose one of two faces that best matched the identity of a target face: one face corresponded to the reference face's fixated part only (e.g., one eye), the other corresponded to the unfixated area of the reference face. The face corresponding to the fixated part was selected significantly more frequently in the inverted than in the upright orientation. This observation provides evidence that face inversion reduces an observer's perceptual field of view, even when both upright and inverted faces are displayed at full view and there is no performance difference between these conditions. It rules out an account of the drop of performance for inverted faces--one of the most robust effects in experimental psychology--in terms of a mere difference in local processing efficiency. A brain-damaged patient with pure prosopagnosia, viewing only upright faces, systematically selected the face corresponding to the fixated part, as if her perceptual field was reduced relative to normal observers. Altogether, these observations indicate that the absence of visual knowledge reduces the perceptual field of view, supporting an indirect view of visual perception. Copyright © 2014 Elsevier B.V. All rights reserved.
Automated facial acne assessment from smartphone images
NASA Astrophysics Data System (ADS)
Amini, Mohammad; Vasefi, Fartash; Valdebran, Manuel; Huang, Kevin; Zhang, Haomiao; Kemp, William; MacKinnon, Nicholas
2018-02-01
A smartphone mobile medical application is presented that analyzes the health of facial skin from a smartphone image using cloud-based image processing techniques. The application uses the camera to capture a front face image of a subject, after which the captured image is spatially calibrated based on fiducial points such as the position of the iris of the eye. A facial recognition algorithm is used to identify features of the human face image, to normalize the image, and to define facial regions of interest (ROI) for acne assessment. We identify acne lesions and classify them into two categories: papules and pustules. Automated facial acne assessment was validated by performing tests on images of 60 digital human models and 10 real human face images. The application was able to identify 92% of acne lesions within five facial ROIs. The classification accuracy for separating papules from pustules was 98%. Combined with in-app documentation of treatment, lifestyle factors, and automated facial acne assessment, the app can be used in both cosmetic and clinical dermatology. It allows users to quantitatively self-measure acne severity and treatment efficacy on an ongoing basis to help them manage their chronic facial acne.
Dogs recognize dog and human emotions.
Albuquerque, Natalia; Guo, Kun; Wilkinson, Anna; Savalli, Carine; Otta, Emma; Mills, Daniel
2016-01-01
The perception of emotional expressions allows animals to evaluate the social intentions and motivations of each other. This usually takes place within species; however, in the case of domestic dogs, it might be advantageous to recognize the emotions of humans as well as other dogs. In this sense, the combination of visual and auditory cues to categorize others' emotions facilitates the information processing and indicates high-level cognitive representations. Using a cross-modal preferential looking paradigm, we presented dogs with either human or dog faces with different emotional valences (happy/playful versus angry/aggressive) paired with a single vocalization from the same individual with either a positive or negative valence or Brownian noise. Dogs looked significantly longer at the face whose expression was congruent to the valence of vocalization, for both conspecifics and heterospecifics, an ability previously known only in humans. These results demonstrate that dogs can extract and integrate bimodal sensory emotional information, and discriminate between positive and negative emotions from both humans and dogs. © 2016 The Author(s).
ERIC Educational Resources Information Center
Sui, Jie; Chechlacz, Magdalena; Humphreys, Glyn W.
2012-01-01
Facial self-awareness is a basic human ability dependent on a distributed bilateral neural network and revealed through prioritized processing of our own over other faces. Using non-prosopagnosic patients we show, for the first time, that facial self-awareness can be fractionated into different component processes. Patients performed two face…
Three-dimensional face model reproduction method using multiview images
NASA Astrophysics Data System (ADS)
Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio
1991-11-01
This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence for teleconferencing. The goal of this research, as an integral component of a virtual space teleconferencing system, is to generate a three-dimensional face model from facial images, synthesize images of the model virtually viewed from different angles, and with natural shadow to suit the lighting conditions of the virtual space. The proposed method is as follows: first, front and side view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from front- and side-views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, the prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image. The personal face model, representing the individual character, is then reproduced. Next, an oblique view image is taken by TV camera. The feature points of the oblique view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image. The modified boundary of the personal face model is determined by using face direction, namely rotation angle, which is detected based on the extracted feature points. After the 3D model is established, the new images are synthesized by mapping facial texture onto the model.
Yu, Lira; Tomonaga, Masaki
2016-04-01
Many studies have reported that synchronized movement arises spontaneously in humans and in non-human primates. However, it is not yet clear whether individuals mutually adapt their movement to each other or whether one individual significantly changes to synchronize with the other. In the current study, we examined the directionality of tempo adaptation to understand the introductory process of interactional synchrony in pairs of chimpanzees. Four pairs, drawn from five female chimpanzees, produced a finger-tapping movement under a face-to-face experimental setup where both auditory and visual cues of the partner's movement were available. Two test conditions were prepared: alone and paired. An analysis of tapping tempo by condition showed that only one chimpanzee in each pair significantly shifted its tapping tempo toward the partner's tempo in the paired condition compared with the alone condition. The current study demonstrated that unidirectional adaptation in tempo occurs in pairs of chimpanzees when they simultaneously produce a tapping movement under auditory and visual interaction.
Computer vision and soft computing for automatic skull-face overlay in craniofacial superimposition.
Campomanes-Álvarez, B Rosario; Ibáñez, O; Navarro, F; Alemán, I; Botella, M; Damas, S; Cordón, O
2014-12-01
Craniofacial superimposition can provide evidence on whether some human skeletal remains belong to a missing person. It involves overlaying a skull with a number of ante mortem images of an individual and analyzing their morphological correspondence. Within the craniofacial superimposition process, the skull-face overlay stage focuses on achieving the best possible overlay of the skull and a single ante mortem image of the suspect. Although craniofacial superimposition has been in use for over a century, skull-face overlay is still applied by means of a trial-and-error approach without an automatic method. Practitioners finish the process once they consider that a good enough overlay has been attained. Hence, skull-face overlay is a very challenging, subjective, error-prone, and time-consuming part of the whole process. Although a numerical assessment of overlay quality has not yet been achieved, computer vision and soft computing arise as powerful tools to automate it, dramatically reducing the time taken by the expert and obtaining an unbiased overlay result. In this manuscript, we justify and analyze the use of these techniques to properly model the skull-face overlay problem. We also present the automatic technical procedure we have developed using these computational methods and show the four overlays obtained in two craniofacial superimposition cases. This automatic procedure can thus be considered a tool to aid forensic anthropologists in skull-face overlay, automating and avoiding the subjectivity of the most tedious task within craniofacial superimposition. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
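At its core, skull-face overlay is a registration problem: find the transform that best aligns skull landmarks with corresponding facial landmarks in the photograph. The systems discussed above work in 3D with perspective projection and evolutionary/fuzzy optimizers; as a simplified illustration only, the sketch below solves the 2D analogue in closed form with a least-squares similarity transform (Umeyama/Procrustes). The landmark sets and transform parameters are hypothetical:

```python
import numpy as np

def similarity_procrustes(src, dst):
    """Closed-form least-squares similarity transform (Umeyama):
    returns scale s, rotation R, translation t with dst ~= s * R @ src + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    a, b = src - mu_s, dst - mu_d
    cov = b.T @ a / len(src)               # cross-covariance of centered sets
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))     # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    s = np.trace(D @ np.diag(S)) / a.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Hypothetical landmarks; recover a known similarity transform exactly
rng = np.random.default_rng(1)
skull = rng.uniform(0, 10, size=(8, 2))    # e.g. orbit corners, nasion, prosthion...
theta, scale, shift = 0.3, 1.7, np.array([2.0, -1.0])
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
photo = scale * skull @ R_true.T + shift
s, R, t = similarity_procrustes(skull, photo)
```

Real skull-face overlay has no such closed form (the camera's perspective projection and landmark uncertainty make the objective non-convex), which is why the cited work turns to soft-computing optimizers.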
Seeing Emotion with Your Ears: Emotional Prosody Implicitly Guides Visual Attention to Faces
Rigoulot, Simon; Pell, Marc D.
2012-01-01
Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality) which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally-inflected pseudo-utterance (Someone migged the pazing) uttered in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0–1250 ms], [1250–2500 ms], [2500–5000 ms]) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions. PMID:22303454
Investigating the Influence of Biological Sex on the Behavioral and Neural Basis of Face Recognition
2017-01-01
Abstract There is interest in understanding the influence of biological factors, like sex, on the organization of brain function. We investigated the influence of biological sex on the behavioral and neural basis of face recognition in healthy, young adults. In behavior, there were no sex differences on the male Cambridge Face Memory Test (CFMT)+ or the female CFMT+ (that we created) and no own-gender bias (OGB) in either group. We evaluated the functional topography of ventral stream organization by measuring the magnitude and functional neural size of 16 individually defined face-, two object-, and two place-related regions bilaterally. There were no sex differences in any of these measures of neural function in any of the regions of interest (ROIs) or in group level comparisons. These findings reveal that men and women have similar category-selective topographic organization in the ventral visual pathway. Next, in a separate task, we measured activation within the 16 face-processing ROIs specifically during recognition of target male and female faces. There were no sex differences in the magnitude of the neural responses in any face-processing region. Furthermore, there was no OGB in the neural responses of either the male or female participants. Our findings suggest that face recognition behavior, including the OGB, is not inherently sexually dimorphic. Face recognition is an essential skill for navigating human social interactions, which is reflected equally in the behavior and neural architecture of men and women. PMID:28497111
Scherf, K Suzanne; Elbich, Daniel B; Motta-Mena, Natalie V
Is the Face-Perception System Human-Specific at Birth?
ERIC Educational Resources Information Center
Di Giorgio, Elisa; Leo, Irene; Pascalis, Olivier; Simion, Francesca
2012-01-01
The present study investigates the human-specificity of the orienting system that allows neonates to look preferentially at faces. Three experiments were carried out to determine whether the face-perception system that is present at birth is broad enough to include both human and nonhuman primate faces. The results demonstrate that the newborns…
Where Public Health Meets Human Rights
Kiragu, Karusa; Sawicki, Olga; Smith, Sally; Brion, Sophie; Sharma, Aditi; Mworeko, Lilian; Iovita, Alexandrina
2017-01-01
Abstract In 2014, the World Health Organization (WHO) initiated a process for validation of the elimination of mother-to-child transmission (EMTCT) of HIV and syphilis by countries. For the first time in such a process for the validation of disease elimination, WHO introduced norms and approaches that are grounded in human rights, gender equality, and community engagement. This human rights-based validation process can serve as a key opportunity to enhance accountability for human rights protection by evaluating EMTCT programs against human rights norms and standards, including in relation to gender equality and by ensuring the provision of discrimination-free quality services. The rights-based validation process also involves the assessment of participation of affected communities in EMTCT program development, implementation, and monitoring and evaluation. It brings awareness to the types of human rights abuses and inequalities faced by women living with, at risk of, or affected by HIV and syphilis, and commits governments to eliminate those barriers. This process demonstrates the importance and feasibility of integrating human rights, gender, and community into key public health interventions in a manner that improves health outcomes, legitimizes the participation of affected communities, and advances the human rights of women living with HIV. PMID:29302179
Method for Face-Emotion Retrieval Using A Cartoon Emotional Expression Approach
NASA Astrophysics Data System (ADS)
Kostov, Vlaho; Yanagisawa, Hideyoshi; Johansson, Martin; Fukuda, Shuichi
A simple method for extracting emotion from a human face, as a form of non-verbal communication, was developed to cope with and optimize mobile communication in a globalized and diversified society. A cartoon-face-based model was developed and used to evaluate the emotional content of real faces. After a pilot survey, basic rules were defined and student subjects were asked to express emotion using the cartoon face. Their face samples were then analyzed using principal component analysis and the Mahalanobis distance method. Feature parameters considered to be related to emotions were extracted, and new cartoon faces based on these parameters were generated. The subjects then evaluated the emotion of these cartoon faces, confirming that the parameters were suitable. To test how these parameters apply to real faces, we asked subjects to express the same emotions, which were captured electronically. Simple image-processing techniques were also developed to extract these features from real faces, and we compared them with the cartoon face parameters. The cartoon face demonstrates that emotions can be expressed from very small amounts of information, and that real and cartoon faces correspond to each other. It is also shown that emotion can be extracted from still and dynamic real-face images using these cartoon-based features.
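The analysis pipeline described above, principal component analysis of face feature parameters followed by Mahalanobis-distance comparison against emotion classes, can be sketched in a few lines. Everything below is illustrative: the feature names, class means, and sample sizes are invented, not taken from the study.

```python
import numpy as np

# Hypothetical feature parameters for cartoon faces expressing two emotions.
# Each row: [mouth_curvature, eyebrow_slope, eye_openness] (invented names).
rng = np.random.default_rng(0)
happy = rng.normal([0.8, 0.2, 0.6], 0.1, size=(50, 3))
sad = rng.normal([-0.7, -0.3, 0.4], 0.1, size=(50, 3))
X = np.vstack([happy, sad])

# Principal component analysis via SVD of the centered data.
mean = X.mean(axis=0)
Xc = X - mean
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T  # project onto the first two components

def mahalanobis(x, group):
    """Mahalanobis distance from point x to a class cluster in PC space."""
    mu = group.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(group, rowvar=False))
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Classify a new face by its distance to each emotion class.
new_face = (np.array([0.75, 0.15, 0.55]) - mean) @ Vt[:2].T
d_happy = mahalanobis(new_face, scores[:50])
d_sad = mahalanobis(new_face, scores[50:])
label = "happy" if d_happy < d_sad else "sad"
```

The same projection and distance computation would apply to features extracted from real face images, which is how the study compares real and cartoon faces.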
Myneni, Sahiti; Patel, Vimla L.; Bova, G. Steven; Wang, Jian; Ackerman, Christopher F.; Berlinicke, Cynthia A.; Chen, Steve H.; Lindvall, Mikael; Zack, Donald J.
2016-01-01
This paper describes a distributed collaborative effort between industry and academia to systematize data management in an academic biomedical laboratory. The heterogeneous and voluminous nature of research data created in biomedical laboratories makes information management difficult and research unproductive. This collaborative effort was evaluated over a period of four years using data collection methods including ethnographic observations, semi-structured interviews, web-based surveys, progress reports, conference call summaries, and face-to-face group discussions. Data were analyzed using qualitative methods to 1) characterize specific problems faced by biomedical researchers with traditional information management practices, 2) identify intervention areas for introducing a new research information management system called Labmatrix, and 3) evaluate and delineate general collaboration (intervention) characteristics that can optimize outcomes of an implementation process in biomedical laboratories. Results emphasize the importance of end-user perseverance, human-centric interoperability evaluation, and demonstration of return on the investment of effort and time by laboratory members and industry personnel for the success of the implementation process. In addition, there is an intrinsic learning component associated with the implementation of an information management system. Technology transfer in a complex environment such as the biomedical laboratory can be eased by information systems that support human and cognitive interoperability; such informatics features can also contribute to successful collaboration and, it is hoped, to scientific productivity. PMID:26652980
Explaining neural signals in human visual cortex with an associative learning model.
Jiang, Jiefeng; Schmajuk, Nestor; Egner, Tobias
2012-08-01
"Predictive coding" models posit a key role for associative learning in visual cognition, viewing perceptual inference as a process of matching (learned) top-down predictions (or expectations) against bottom-up sensory evidence. At the neural level, these models propose that each region along the visual processing hierarchy entails one set of processing units encoding predictions of bottom-up input, and another set computing mismatches (prediction error or surprise) between predictions and evidence. This contrasts with traditional views of visual neurons operating purely as bottom-up feature detectors. In support of the predictive coding hypothesis, a recent human neuroimaging study (Egner, Monti, & Summerfield, 2010) showed that neural population responses to expected and unexpected face and house stimuli in the "fusiform face area" (FFA) could be well-described as a summation of hypothetical face-expectation and -surprise signals, but not by feature detector responses. Here, we used computer simulations to test whether these imaging data could be formally explained within the broader framework of a mathematical neural network model of associative learning (Schmajuk, Gray, & Lam, 1996). Results show that FFA responses could be fit very closely by model variables coding for conditional predictions (and their violations) of stimuli that unconditionally activate the FFA. These data document that neural population signals in the ventral visual stream that deviate from classic feature detection responses can formally be explained by associative prediction and surprise signals.
Toward a unified model of face and object recognition in the human visual system
Wallis, Guy
2013-01-01
Our understanding of the mechanisms and neural substrates underlying visual recognition has made considerable progress over the past 30 years. During this period, accumulating evidence has led many scientists to conclude that objects and faces are recognised in fundamentally distinct ways, and in fundamentally distinct cortical areas. In the psychological literature, in particular, this dissociation has led to a palpable disconnect between theories of how we process and represent the two classes of object. This paper follows a trend in part of the recognition literature to try to reconcile what we know about these two forms of recognition by considering the effects of learning. Taking a widely accepted, self-organizing model of object recognition, this paper explains how such a system is affected by repeated exposure to specific stimulus classes. In so doing, it explains how many aspects of recognition generally regarded as unusual to faces (holistic processing, configural processing, sensitivity to inversion, the other-race effect, the prototype effect, etc.) are emergent properties of category-specific learning within such a system. Overall, the paper describes how a single model of recognition learning can and does produce the seemingly very different types of representation associated with faces and objects. PMID:23966963
Translating human biology (introduction to special issue).
Brewis, Alexandra A; Mckenna, James J
2015-01-01
Introducing a special issue on "Translating Human Biology," we pose two basic questions: Is human biology addressing the most critical challenges facing our species? How can the processes of translating our science be improved and innovated? We analyze articles published in American Journal of Human Biology from 2004-2013, and find there is very little human biological consideration of issues related to most of the core human challenges such as water, energy, environmental degradation, or conflict. There is some focus on disease, and considerable focus on food/nutrition. We then introduce this special volume with reference to the following articles that provide exemplars for the process of how translation and concern for broader context and impacts can be integrated into research. Human biology has significant unmet potential to engage more fully in translation for the public good, through consideration of the topics we focus on, the processes of doing our science, and the way we present our domain expertise. © 2014 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Depper, Gina L.
2017-01-01
The world faces significant environmental challenges due largely to unsustainable human behavior. Values have been found to be a direct and indirect predictor of human behavior and understanding how they are formed/influenced is critical to any strategy of behavioral change. Our understanding of how environmental values are transmitted and…
Fusing face-verification algorithms and humans.
O'Toole, Alice J; Abdi, Hervé; Jiang, Fang; Phillips, P Jonathon
2007-10-01
It has been demonstrated recently that state-of-the-art face-recognition algorithms can surpass human accuracy at matching faces over changes in illumination. The ranking of algorithms and humans by accuracy, however, does not provide information about whether algorithms and humans perform the task comparably or whether algorithms and humans can be fused to improve performance. In this paper, we fused humans and algorithms using partial least squares regression (PLSR). In the first experiment, we applied PLSR to face-pair similarity scores generated by seven algorithms participating in the Face Recognition Grand Challenge. The PLSR produced an optimal weighting of the similarity scores, which we tested for generality with a jackknife procedure. Fusing the algorithms' similarity scores using the optimal weights produced a twofold reduction in error rate over the most accurate algorithm. Next, human-generated similarity scores were added to the PLSR analysis. Fusing humans and algorithms increased performance to near-perfect classification accuracy. These results are discussed in terms of maximizing face-verification accuracy with hybrid systems consisting of multiple algorithms and humans.
Li, Shijia; Weerda, Riklef; Milde, Christopher; Wolf, Oliver T; Thiel, Christiane M
2014-12-01
Previous studies have shown that acute psychosocial stress impairs declarative recognition memory and that emotional material is especially sensitive to this effect. Animal studies suggest a central role of the amygdala, which modulates memory processes in the hippocampus, prefrontal cortex, and other brain areas. We used functional magnetic resonance imaging (fMRI) to investigate neural correlates of stress-induced modulation of emotional recognition memory in humans. Twenty-seven healthy, right-handed, non-smoking male volunteers performed an emotional face recognition task. During encoding, participants were presented with 50 fearful and 50 neutral faces. One hour later, they underwent either a stress (Trier Social Stress Test) or a control procedure outside the scanner, followed immediately by the recognition session inside the scanner, where participants had to discriminate between 100 old and 50 new faces. Stress increased salivary cortisol, blood pressure and pulse, and decreased the mood of participants, but did not affect recognition memory. BOLD data during recognition revealed a stress-condition-by-emotion interaction in the left inferior frontal gyrus and right hippocampus, due to a stress-induced increase of neural activity to fearful faces and a decrease to neutral faces. Functional connectivity analyses revealed a stress-induced increase in coupling between the right amygdala and the right fusiform gyrus when processing fearful as compared to neutral faces. Our results provide evidence that acute psychosocial stress affects medial temporal and frontal brain areas differentially for neutral and emotional items, with stress-induced privileged processing of emotional stimuli.
The neural speed of familiar face recognition.
Barragan-Jason, G; Cauchoix, M; Barbeau, E J
2015-08-01
Rapidly recognizing familiar people from their faces appears critical for social interactions (e.g., to differentiate friend from foe). However, the actual speed at which the human brain can distinguish familiar from unknown faces still remains debated. In particular, it is not clear whether familiarity can be extracted from rapid face individualization or if it requires additional time consuming processing. We recorded scalp EEG activity in 28 subjects performing a go/no-go, famous/non-famous, unrepeated, face recognition task. Speed constraints were used to encourage subjects to use the earliest familiarity information available. Event related potential (ERP) analyses show that both the N170 and the N250 components were modulated by familiarity. The N170 modulation was related to behaviour: subjects presenting the strongest N170 modulation were also faster but less accurate than those who only showed weak N170 modulation. A complementary Multi-Variate Pattern Analysis (MVPA) confirmed ERP results and provided some more insights into the dynamics of face recognition as the N170 differential effect appeared to be related to a first transitory phase (transitory bump of decoding power) starting at around 140 ms, which returned to baseline afterwards. This bump of activity was henceforth followed by an increase of decoding power starting around 200 ms after stimulus onset. Overall, our results suggest that rather than a simple single-process, familiarity for faces may rely on a cascade of neural processes, including a coarse and fast stage starting at 140 ms and a more refined but slower stage occurring after 200 ms. Copyright © 2015 Elsevier Ltd. All rights reserved.
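A time-resolved MVPA of the kind described, training a classifier at each time point to decode famous versus non-famous trials and looking for when decoding power rises above baseline, can be sketched as follows. The simulated EEG, channel count, and effect window are hypothetical; a real pipeline (e.g., built on MNE-Python) would add filtering, epoching, and artifact rejection.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_channels, n_times = 120, 32, 50
labels = rng.integers(0, 2, n_trials)  # 0 = non-famous, 1 = famous (invented)

# Simulated EEG: a class difference appears only in a mid-latency window
# (time points 20-29), mimicking a transient familiarity signal.
eeg = rng.normal(size=(n_trials, n_channels, n_times))
effect = rng.normal(size=n_channels)  # spatial pattern of the effect
eeg[:, :, 20:30] += 0.8 * labels[:, None, None] * effect[None, :, None]

# Time-resolved MVPA: cross-validated decoding accuracy at each time point.
accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LogisticRegression(max_iter=1000)
    accuracy[t] = cross_val_score(clf, eeg[:, :, t], labels, cv=5).mean()

peak = int(np.argmax(accuracy))  # latency of maximal decoding
```

Plotting `accuracy` against time would show chance-level decoding outside the effect window and a transient bump inside it, the kind of profile the abstract interprets as a first, transitory stage of familiarity processing.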
Parkinson, Jim; Garfinkel, Sarah; Critchley, Hugo; Dienes, Zoltan; Seth, Anil K
2017-04-01
Volitional action and self-control, the feelings of acting according to one's own intentions and of being in control of one's own actions, are fundamental aspects of human conscious experience. However, it is unknown whether high-level cognitive control mechanisms are affected by socially salient but nonconscious emotional cues. In this study, we manipulated free-choice decisions to act or withhold an action by subliminally presenting emotional faces: in a novel version of the Go/NoGo paradigm, participants made speeded button-press responses to Go targets, withheld responses to NoGo targets, and made spontaneous, free choices to execute or withhold the response for Choice targets. Before each target, we presented emotional faces, backward masked to render them nonconscious. In Intentional trials, subliminal angry faces made participants more likely to voluntarily withhold the action, whereas fearful and happy faces had no effect. In a second experiment, the faces were made supraliminal, which eliminated the effects of angry faces on volitional choices. A third experiment measured neural correlates of the effects of subliminal angry faces on intentional choice using EEG. After replicating the behavioural results found in Experiment 1, we identified a frontal-midline theta component, associated with cognitive control processes, which is present for volitional decisions and is modulated by subliminal angry faces. This suggests a mechanism whereby subliminally presented "threat" stimuli affect conscious control processes. In summary, nonconscious perception of angry faces increases choices to inhibit, and subliminal influences on volitional action are deep-seated and ecologically embedded.
The Body That Speaks: Recombining Bodies and Speech Sources in Unscripted Face-to-Face Communication
Gillespie, Alex; Corti, Kevin
2016-01-01
This article examines advances in research methods that enable experimental substitution of the speaking body in unscripted face-to-face communication. A taxonomy of six hybrid social agents is presented by combining three types of bodies (mechanical, virtual, and human) with either an artificial or human speech source. Our contribution is to introduce and explore the significance of two particular hybrids: (1) the cyranoid method that enables humans to converse face-to-face through the medium of another person's body, and (2) the echoborg method that enables artificial intelligence to converse face-to-face through the medium of a human body. These two methods are distinct in being able to parse the unique influence of the human body when combined with various speech sources. We also introduce a new framework for conceptualizing the body's role in communication, distinguishing three levels: self's perspective on the body, other's perspective on the body, and self's perspective of other's perspective on the body. Within each level the cyranoid and echoborg methodologies make important research questions tractable. By conceptualizing and synthesizing these methods, we outline a novel paradigm of research on the role of the body in unscripted face-to-face communication. PMID:27660616
Vrancken, Leia; Germeys, Filip; Verfaillie, Karl
2017-01-01
A considerable amount of research on identity recognition and emotion identification with the composite design points to the holistic processing of these aspects in faces and bodies. In this paradigm, the interference from a nonattended face half on the perception of the attended half is taken as evidence for holistic processing (i.e., a composite effect). Far less research, however, has been dedicated to the concept of gaze. Nonetheless, gaze perception is a substantial component of face and body perception, and holds critical information for everyday communicative interactions. Furthermore, the ability of human observers to detect direct versus averted eye gaze is effortless, perhaps similar to identity perception and emotion recognition. However, the hypothesis of holistic perception of eye gaze has never been tested directly. Research on gaze perception with the composite design could facilitate further systematic comparison with other aspects of face and body perception that have been investigated using the composite design (i.e., identity and emotion). In the present research, a composite design was administered to assess holistic processing of gaze cues in faces (Experiment 1) and bodies (Experiment 2). Results confirmed that eye and head orientation (Experiment 1A) and head and body orientation (Experiment 2A) are integrated in a holistic manner. However, the composite effect was not completely disrupted by inversion (Experiments 1B and 2B), a finding that will be discussed together with implications for future research.
Is the Self Always Better than a Friend? Self-Face Recognition in Christians and Atheists
Ma, Yina; Han, Shihui
2012-01-01
Early behavioral studies found that human adults responded faster to their own faces than faces of familiar others or strangers, a finding referred to as self-face advantage. Recent research suggests that the self-face advantage is mediated by implicit positive association with the self and is influenced by sociocultural experience. The current study investigated whether and how Christian belief and practice affect the processing of self-face in a Chinese population. Christian and Atheist participants were recruited for an implicit association test (IAT) in Experiment 1 and a face-owner identification task in Experiment 2. Experiment 1 found that atheists responded faster to self-face when it shared the same response key with positive compared to negative trait adjectives. This IAT effect, however, was significantly reduced in Christians. Experiment 2 found that atheists responded faster to self-face compared to a friend’s face, but this self-face advantage was significantly reduced in Christians. Hierarchical regression analyses further showed that the IAT effect positively predicted self-face advantage in atheists but not in Christians. Our findings suggest that Christian belief and practice may weaken implicit positive association with the self and thus decrease the advantage of the self over a friend during face recognition in the believers. PMID:22662231
Application of robust face recognition in video surveillance systems
NASA Astrophysics Data System (ADS)
Zhang, De-xin; An, Peng; Zhang, Hao-xiang
2018-03-01
In this paper, we propose a video searching system that uses face recognition as its search-indexing feature. As the use of video cameras has increased greatly in recent years, face recognition is a natural fit for searching for targeted individuals within vast amounts of video data. However, the performance of such searching depends on the quality of the face images recorded in the video signals. Since surveillance cameras record video without fixed poses of the subject, face occlusion is very common in everyday footage. The proposed system builds a model of occluded faces using fuzzy principal component analysis (FPCA) and reconstructs the human faces from the available information. Experimental results show that the system is highly efficient in processing real-life videos and very robust to various kinds of face occlusion. Hence it can relieve human reviewers of continuous monitor-watching and greatly enhance efficiency. The proposed system has been installed and applied in various environments and has already demonstrated its value by helping to solve real cases.
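Occlusion-robust reconstruction of the kind described can be illustrated with an eigenface model: fit the model's coefficients using only the visible pixels, then fill the occluded region from the model. This sketch uses plain PCA on synthetic 64-pixel "faces"; the paper's method uses fuzzy PCA on real imagery, and all names and dimensions here are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy "face" vectors: 100 training faces of 64 pixels lying near a
# low-dimensional subspace (a stand-in for a real face dataset).
basis = rng.normal(size=(5, 64))
train = rng.normal(size=(100, 5)) @ basis

# Eigenface model from the training set.
mean = train.mean(axis=0)
U, S, Vt = np.linalg.svd(train - mean, full_matrices=False)
components = Vt[:5]  # top eigenfaces

# A new face with an occluded region (pixels 20-39 unknown).
true_face = rng.normal(size=5) @ basis
visible = np.ones(64, dtype=bool)
visible[20:40] = False  # pixels 20-39 are occluded

# Reconstruct: least-squares fit of eigenface coefficients using only the
# visible pixels, then regenerate the full face from the model.
A = components[:, visible].T
b = (true_face - mean)[visible]
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
reconstruction = mean + coeffs @ components

err = float(np.abs(reconstruction - true_face).max())
```

Because the synthetic faces lie exactly in the learned subspace, the occluded pixels are recovered almost perfectly; real faces only approximately fit the model, which is where fuzzy weighting and larger training sets earn their keep.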
Gender Differences in Sexual Attraction and Moral Judgment: Research With Artificial Face Models.
González-Álvarez, Julio; Cervera-Crespo, Teresa
2018-01-01
Sexual attraction in humans is influenced by cultural or moral factors, and some gender differences can emerge in this complex interaction. A previous study found that men dissociate sexual attraction from moral judgment more than women do. Two experiments consisting of giving attractiveness ratings to photos of real opposite-sex individuals showed that men, compared to women, were significantly less influenced by the moral valence of a description about the person shown in each photo. There is evidence of some processing differences between real and artificial computer-generated faces. The present study tests the robustness of González-Álvarez's findings and extends the research to an experimental design using artificial face models as stimuli. A sample of 88 young adults (61 females and 27 males, average age 19.32, SD = 2.38) rated the attractiveness of 80 3D artificial face models generated with the FaceGen Modeller 3.5 software. Each face model was paired with a "good" and a "bad" (from a moral point of view) sentence depicting a quality or activity of the person represented in the model (e.g., she/he is an altruistic nurse in Africa vs. she/he is a prominent drug dealer). Results were in line with the previous findings and showed that, with artificial faces as well, sexual attraction is less influenced by morality in men than in women. This gender difference is consistent with an evolutionary perspective on human sexuality.
Lazar, Steven M; Evans, David W; Myers, Scott M; Moreno-De Luca, Andres; Moore, Gregory J
2014-04-15
Social cognition is an important aspect of social behavior in humans, and social cognitive deficits are associated with neurodevelopmental and neuropsychiatric disorders. In this study we examined the neural substrates of social cognition and face processing in a group of healthy young adults. Fifty-seven undergraduates completed a battery of social cognition tasks and were assessed with electroencephalography (EEG) during a face-perception task. A subset (N=22) were administered a face-perception task during functional magnetic resonance imaging. Variance in the N170 EEG was predicted by social attribution performance and by a quantitative measure of empathy. Neurally, face processing was more bilateral in females than in males. Variance in fMRI voxel count in the face-sensitive fusiform gyrus was predicted by quantitative measures of social behavior, including the Social Responsiveness Scale (SRS) and the Empathizing Quotient. When measured as a quantitative trait, social behaviors in typical and pathological populations share common neural pathways. The results highlight the importance of viewing neurodevelopmental and neuropsychiatric disorders as spectrum phenomena that may be informed by studies of the normal distribution of relevant traits in the general population. Copyright © 2014 Elsevier B.V. All rights reserved.
Aging disrupts the neural transformations that link facial identity across views.
Habak, Claudine; Wilkinson, Frances; Wilson, Hugh R
2008-01-01
Healthy human aging can have adverse effects on cortical function and on the brain's ability to integrate visual information to form complex representations. Facial identification is crucial to successful social discourse, and yet, it remains unclear whether the neuronal mechanisms underlying face perception per se, and the speed with which they process information, change with age. We present face images whose discrimination relies strictly on the shape and geometry of a face at various stimulus durations. Interestingly, we demonstrate that facial identity matching is maintained with age when faces are shown in the same view (e.g., front-front or side-side), regardless of exposure duration, but degrades when faces are shown in different views (e.g., front and turned 20 degrees to the side) and does not improve at longer durations. Our results indicate that perceptual processing speed for complex representations and the mechanisms underlying same-view facial identity discrimination are maintained with age. In contrast, information is degraded in the neural transformations that represent facial identity across views. We suggest that the accumulation of useful information over time to refine a representation within a population of neurons saturates earlier in the aging visual system than it does in the younger system and contributes to the age-related deterioration of face discrimination across views.
The nature of face representations in subcortical regions.
Gabay, Shai; Burlingham, Charles; Behrmann, Marlene
2014-07-01
Studies examining the neural correlates of face perception in humans have focused almost exclusively on the distributed cortical network of face-selective regions. Recently, however, investigations have also identified subcortical correlates of face perception, and the question addressed here concerns the nature of these subcortical face representations. To explore this issue, we presented pairs of images to participants sequentially, to the same or to different eyes. Superior performance in the former over the latter condition implicates monocular, prestriate portions of the visual system. Over a series of five experiments, we manipulated both lower-level (size, location) as well as higher-level (identity) similarity across the pair of faces. A monocular advantage was observed even when the faces in a pair differed in location and in size, implicating some subcortical invariance across lower-level image properties. A monocular advantage was also observed when the faces in a pair were two different images of the same individual, indicating the engagement of subcortical representations in more abstract, higher-level aspects of face processing. We conclude that subcortical structures of the visual system are involved, perhaps interactively, in multiple aspects of face perception, and not simply in deriving initial coarse representations. Copyright © 2014 Elsevier Ltd. All rights reserved.
It's All in Your Head: Why Is the Body Inversion Effect Abolished for Headless Bodies?
ERIC Educational Resources Information Center
Yovel, Galit; Pelc, Tatiana; Lubetzky, Ida
2010-01-01
It has been recently argued that human bodies are processed by a specialized processing mechanism. Central evidence was that body inversion reduces recognition abilities (body inversion effect; BIE) as much as it does for faces, but more than for other objects. Here we showed that the BIE is markedly reduced for headless bodies and examined the…
Who Expressed What Emotion? Men Grab Anger, Women Grab Happiness
Neel, Rebecca; Becker, D. Vaughn; Neuberg, Steven L.; Kenrick, Douglas T.
2011-01-01
When anger or happiness flashes on a face in the crowd, do we misperceive that emotion as belonging to someone else? Two studies found that misperception of apparent emotional expressions – “illusory conjunctions” – depended on the gender of the target: male faces tended to “grab” anger from neighboring faces, and female faces tended to grab happiness. Importantly, the evidence did not suggest that this effect was due to the general tendency to misperceive male or female faces as angry or happy, but instead indicated a more subtle interaction of expectations and early visual processes. This suggests a novel aspect of affordance-management in human perception, whereby cues to threat, when they appear, are attributed to those with the greatest capability of doing harm, whereas cues to friendship are attributed to those with the greatest likelihood of providing affiliation opportunities. PMID:22368303
Different Cortical Dynamics in Face and Body Perception: An MEG study
Meeren, Hanneke K. M.; de Gelder, Beatrice; Ahlfors, Seppo P.; Hämäläinen, Matti S.; Hadjikhani, Nouchine
2013-01-01
Evidence from functional neuroimaging indicates that visual perception of human faces and bodies is carried out by distributed networks of face and body-sensitive areas in the occipito-temporal cortex. However, the dynamics of activity in these areas, needed to understand their respective functional roles, are still largely unknown. We monitored brain activity with millisecond time resolution by recording magnetoencephalographic (MEG) responses while participants viewed photographs of faces, bodies, and control stimuli. The cortical activity underlying the evoked responses was estimated with anatomically-constrained noise-normalised minimum-norm estimate and statistically analysed with spatiotemporal cluster analysis. Our findings point to distinct spatiotemporal organization of the neural systems for face and body perception. Face-selective cortical currents were found at early latencies (120–200 ms) in a widespread occipito-temporal network including the ventral temporal cortex (VTC). In contrast, early body-related responses were confined to the lateral occipito-temporal cortex (LOTC). These were followed by strong sustained body-selective responses in the orbitofrontal cortex from 200–700 ms, and in the lateral temporal cortex and VTC after 500 ms latency. Our data suggest that the VTC region has a key role in the early processing of faces, but not of bodies. Instead, the LOTC, which includes the extra-striate body area (EBA), appears the dominant area for early body perception, whereas the VTC contributes to late and post-perceptual processing. PMID:24039712
Yang, Ping; Wang, Min; Jin, Zhenlan; Li, Ling
2015-01-01
The ability to focus on task-relevant information, while suppressing distraction, is critical for human cognition and behavior. Using a delayed-match-to-sample (DMS) task, we investigated the effects of emotional face distractors (positive, negative, and neutral faces) on early and late phases of visual short-term memory (VSTM) maintenance intervals, using low and high VSTM loads. Behavioral results showed decreased accuracy and delayed reaction times (RTs) for high vs. low VSTM load. Event-related potentials (ERPs) showed enhanced frontal N1 and occipital P1 amplitudes for negative faces vs. neutral or positive faces, implying rapid attentional alerting effects and early perceptual processing of negative distractors. However, high VSTM load appeared to inhibit face processing in general, showing decreased N1 amplitudes and delayed P1 latencies. An inverse correlation between the N1 activation difference (high-load minus low-load) and RT costs (high-load minus low-load) was found at left frontal areas when viewing negative distractors, suggesting that the greater the inhibition the lower the RT cost for negative faces. Emotional interference effect was not found in the late VSTM-related parietal P300, frontal positive slow wave (PSW) and occipital negative slow wave (NSW) components. In general, our findings suggest that the VSTM load modulates the early attention and perception of emotional distractors. PMID:26388763
Mars, Rogier B.; Sallet, Jérôme; Neubert, Franz-Xaver; Rushworth, Matthew F. S.
2013-01-01
The human ability to infer the thoughts and beliefs of others, often referred to as “theory of mind,” as well as the predisposition to even consider others, are associated with activity in the temporoparietal junction (TPJ) area. Unlike the case of most human brain areas, we have little sense of whether or how TPJ is related to brain areas in other nonhuman primates. It is not possible to address this question by looking for similar task-related activations in nonhuman primates because there is no evidence that nonhuman primates engage in theory-of-mind tasks in the same manner as humans. Here, instead, we explore the relationship by searching for areas in the macaque brain that interact with other macaque brain regions in the same manner as human TPJ interacts with other human brain regions. In other words, we look for brain regions with similar positions within a distributed neural circuit in the two species. We exploited the fact that human TPJ has a unique functional connectivity profile with cortical areas with known homologs in the macaque. For each voxel in the macaque temporal and parietal cortex we evaluated the similarity of its functional connectivity profile to that of human TPJ. We found that areas in the middle part of the superior temporal cortex, often associated with the processing of faces and other social stimuli, have the most similar connectivity profile. These results suggest that macaque face processing areas and human mentalizing areas might have a similar precursor. PMID:23754406
Examining the "Whole Child" to Generate Usable Knowledge
ERIC Educational Resources Information Center
Rappolt-Schlichtmann, Gabrielle; Ayoub, Catherine C.; Gravel, Jenna W.
2009-01-01
Despite the promise of scientific knowledge contributing to issues facing vulnerable children, families, and communities, typical approaches to research have made applications challenging. While contemporary theories of human development offer appropriate complexity, research has mostly failed to address dynamic developmental processes. Research…
Gaze Dynamics in the Recognition of Facial Expressions of Emotion.
Barabanschikov, Vladimir A
2015-01-01
We studied the preferentially fixated parts and features of the human face during recognition of facial expressions of emotion. Photographs of facial expressions were used. Participants categorized these as basic emotions while their eye movements were recorded. It was found that variation in the intensity of an expression is mirrored in the accuracy of emotion recognition; it was also reflected in several indices of oculomotor function: duration of inspection of certain areas of the face (its upper and bottom parts, right and left sides), and the location, number, and duration of fixations, as well as the viewing trajectory. In particular, for low-intensity expressions, the right side of the face was attended predominantly (right-side dominance); this right-side dominance effect was, however, absent for expressions of high intensity. For both low- and high-intensity expressions, the upper part of the face was predominantly fixated, with greater fixation for high-intensity expressions. The majority of trials (70%), in line with findings in previous studies, revealed a V-shaped inspection trajectory. However, no relationship was found between accuracy of recognition of emotional expressions and either the location and duration of fixations or the pattern of gaze directedness in the face. © The Author(s) 2015.
Quarto, Tiziana; Blasi, Giuseppe; Maddalena, Chiara; Viscanti, Giovanna; Lanciano, Tiziana; Soleti, Emanuela; Mangiulli, Ivan; Taurisano, Paolo; Fazio, Leonardo; Bertolino, Alessandro; Curci, Antonietta
2016-01-01
The human ability to identify, process, and regulate emotions from social stimuli is generally referred to as Emotional Intelligence (EI). Within EI, Ability EI identifies a performance measure assessing individual skills at perceiving, using, understanding, and managing emotions. Previous models suggest that a brain "somatic marker circuitry" (SMC) sustains the emotional sub-processes included in EI. Three primary brain regions are included: the amygdala, the insula, and the ventromedial prefrontal cortex (vmPFC). Here, our aim was to investigate the relationship between Ability EI scores and SMC activity during social judgment of emotional faces. Sixty-three healthy subjects completed a test measuring Ability EI and underwent fMRI during a social decision task (i.e. approach or avoid) about emotional faces with different facial expressions. Imaging data revealed that EI scores are associated with left insula activity during social judgment of emotional faces as a function of facial expression. Specifically, higher EI scores are associated with greater left insula activity during social judgment of fearful faces but also with lower activity of this region during social judgment of angry faces. These findings indicate that the association between Ability EI and SMC activity during social behavior is region- and emotion-specific.
Low-cost compact thermal imaging sensors for body temperature measurement
NASA Astrophysics Data System (ADS)
Han, Myung-Soo; Han, Seok Man; Kim, Hyo Jin; Shin, Jae Chul; Ahn, Mi Sook; Kim, Hyung Won; Han, Yong Hee
2013-06-01
This paper presents a 32x32 microbolometer thermal imaging sensor for human body temperature measurement. Wafer-level vacuum packaging technology yields a low-cost, compact imaging sensor chip. The microbolometer uses a V-W-O film as the sensing material, and the ROIC was designed in a 0.35-um CMOS process at UMC. Thermal images of a human face and a hand acquired with an f/1 lens demonstrate the sensor's potential for commercial human body temperature measurement.
Interactive searching of facial image databases
NASA Astrophysics Data System (ADS)
Nicholls, Robert A.; Shepherd, John W.; Shepherd, Jean
1995-09-01
A set of psychological facial descriptors has been devised to enable computerized searching of criminal photograph albums. The descriptors have been used to encode image databases of up to twelve thousand images. Using a system called FACES, the databases are searched by translating a witness's verbal description into corresponding facial descriptors. Trials of FACES have shown that this coding scheme is more productive and efficient than searching traditional photograph albums. An alternative method of searching the encoded database using a genetic algorithm is currently being tested. The genetic search method does not require the witness to verbalize a description of the target but merely to indicate a degree of similarity between the target and a limited selection of images from the database. The major drawback of FACES is that it requires manual encoding of images. Research is being undertaken to automate the process; however, it will require an algorithm that can predict human descriptive values. Alternatives to human-derived coding schemes exist using statistical classifications of images. Since databases encoded using statistical classifiers do not map directly onto human-derived descriptors, a search method that does not require the entry of human descriptors is needed. A genetic search algorithm is being tested for this purpose.
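A toy sketch of the genetic search idea described above, not the FACES system itself. The witness's similarity judgement is simulated here by distance to a hidden target descriptor; in the real system, a human would rate candidate images instead, and the descriptor length, value range, and GA parameters are all hypothetical.

```python
import random

random.seed(1)
N_FEATURES, POP, GENERATIONS = 8, 20, 40
# Hidden target descriptor standing in for the witness's remembered face.
target = [random.randint(0, 9) for _ in range(N_FEATURES)]

def fitness(ind):
    # Higher is better: negative city-block distance to the target.
    # In practice this would be the witness's similarity rating.
    return -sum(abs(a - b) for a, b in zip(ind, target))

pop = [[random.randint(0, 9) for _ in range(N_FEATURES)] for _ in range(POP)]
initial_best = max(map(fitness, pop))

for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]              # elitism: keep the top half
    children = []
    while len(children) < POP - len(parents):
        p1, p2 = random.sample(parents, 2)
        cut = random.randrange(1, N_FEATURES)
        child = p1[:cut] + p2[cut:]       # one-point crossover
        if random.random() < 0.2:         # occasional mutation
            child[random.randrange(N_FEATURES)] = random.randint(0, 9)
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
```

Because the top half of the population always survives, the best descriptor never gets worse from one generation to the next, which mirrors how repeated witness feedback steers the search toward the target face.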
Decoding the representation of learned social roles in the human brain.
Eger, Evelyn; Moretti, Laura; Dehaene, Stanislas; Sirigu, Angela
2013-10-01
Humans as social beings are profoundly affected by exclusion. Short experiences with people differing in their degree of prosocial behaviour can induce reliable preferences for including partners, but the neural mechanisms of this learning remain unclear. Here, we asked participants to play a short social interaction game based on "cyber-ball" where one fictive partner included and another excluded the subject, thus defining social roles (includer - "good", excluder - "bad"). We then used multivariate pattern recognition on high-resolution functional magnetic resonance imaging (fMRI) data acquired before and after this game to test whether neural responses to the partners' and neutral control faces during a perceptual task reflect their learned social valence. Support vector classification scores revealed a learning-related increase in neural discrimination of social status in anterior insula and anterior cingulate regions, which was mainly driven by includer faces becoming distinguishable from excluder and control faces. Thus, face-evoked responses in anterior insula and anterior cingulate cortex contain fine-grained information shaped by prior social interactions that allow for categorisation of faces according to their learned social status. These lasting traces of social experience in cortical areas important for emotional and social processing could provide a substrate of how social inclusion shapes future behaviour and promotes cooperative interactions between individuals. Copyright © 2013 Elsevier Ltd. All rights reserved.
Lahnakoski, Juha M; Glerean, Enrico; Salmi, Juha; Jääskeläinen, Iiro P; Sams, Mikko; Hari, Riitta; Nummenmaa, Lauri
2012-01-01
Despite the abundant data on brain networks processing static social signals, such as pictures of faces, the neural systems supporting social perception in naturalistic conditions are still poorly understood. Here we delineated brain networks subserving social perception under naturalistic conditions in 19 healthy humans who watched, during 3-T functional magnetic resonance imaging (fMRI), a set of 137 short (approximately 16 s each, total 27 min) audiovisual movie clips depicting pre-selected social signals. Two independent raters estimated how well each clip represented eight social features (faces, human bodies, biological motion, goal-oriented actions, emotion, social interaction, pain, and speech) and six filler features (places, objects, rigid motion, people not in social interaction, non-goal-oriented action, and non-human sounds) lacking social content. These ratings were used as predictors in the fMRI analysis. The posterior superior temporal sulcus (STS) responded to all social features but not to any non-social features, and the anterior STS responded to all social features except bodies and biological motion. We also found four partially segregated, extended networks for processing of specific social signals: (1) a fronto-temporal network responding to multiple social categories, (2) a fronto-parietal network preferentially activated to bodies, motion, and pain, (3) a temporo-amygdalar network responding to faces, social interaction, and speech, and (4) a fronto-insular network responding to pain, emotions, social interactions, and speech. Our results highlight the role of the pSTS in processing multiple aspects of social information, as well as the feasibility and efficiency of fMRI mapping under conditions that resemble the complexity of real life.
The impact of orientation filtering on face-selective neurons in monkey inferior temporal cortex.
Taubert, Jessica; Goffaux, Valerie; Van Belle, Goedele; Vanduffel, Wim; Vogels, Rufin
2016-02-16
Faces convey complex social signals to primates. These signals are tolerant of some image transformations (e.g. changes in size) but not others (e.g. picture-plane rotation). By filtering face stimuli for orientation content, studies of human behavior and brain responses have shown that face processing is tuned to selective orientation ranges. In the present study, for the first time, we recorded the responses of face-selective neurons in monkey inferior temporal (IT) cortex to intact and scrambled faces that were filtered to selectively preserve horizontal or vertical information. Guided by functional maps, we recorded neurons in the lateral middle patch (ML), the lateral anterior patch (AL), and an additional region located outside of the functionally defined face-patches (CONTROL). We found that neurons in ML preferred horizontal-passed faces over their vertical-passed counterparts. Neurons in AL, however, had a preference for vertical-passed faces, while neurons in CONTROL had no systematic preference. Importantly, orientation filtering did not modulate the firing rate of neurons to phase-scrambled face stimuli in any recording region. Together these results suggest that face-selective neurons found in the face-selective patches are differentially tuned to orientation content, with horizontal tuning in area ML and vertical tuning in area AL.
Kismödi, Eszter; Kiragu, Karusa; Sawicki, Olga; Smith, Sally; Brion, Sophie; Sharma, Aditi; Mworeko, Lilian; Iovita, Alexandrina
2017-12-01
In 2014, the World Health Organization (WHO) initiated a process for validation of the elimination of mother-to-child transmission (EMTCT) of HIV and syphilis by countries. For the first time in such a process for the validation of disease elimination, WHO introduced norms and approaches that are grounded in human rights, gender equality, and community engagement. This human rights-based validation process can serve as a key opportunity to enhance accountability for human rights protection by evaluating EMTCT programs against human rights norms and standards, including in relation to gender equality and by ensuring the provision of discrimination-free quality services. The rights-based validation process also involves the assessment of participation of affected communities in EMTCT program development, implementation, and monitoring and evaluation. It brings awareness to the types of human rights abuses and inequalities faced by women living with, at risk of, or affected by HIV and syphilis, and commits governments to eliminate those barriers. This process demonstrates the importance and feasibility of integrating human rights, gender, and community into key public health interventions in a manner that improves health outcomes, legitimizes the participation of affected communities, and advances the human rights of women living with HIV.
Reverse engineering the face space: Discovering the critical features for face identification.
Abudarham, Naphtali; Yovel, Galit
2016-01-01
How do we identify people? What are the critical facial features that define an identity and determine whether two faces belong to the same person or different people? To answer these questions, we applied the face space framework, according to which faces are represented as points in a multidimensional feature space, such that face space distances are correlated with perceptual similarities between faces. In particular, we developed a novel method that allowed us to reveal the critical dimensions (i.e., critical features) of the face space. To that end, we constructed a concrete face space, which included 20 facial features of natural face images, and asked human observers to evaluate feature values (e.g., how thick are the lips). Next, we systematically and quantitatively changed facial features, and measured the perceptual effects of these manipulations. We found that critical features were those for which participants have high perceptual sensitivity (PS) for detecting differences across identities (e.g., which of two faces has thicker lips). Furthermore, these high PS features vary minimally across different views of the same identity, suggesting high PS features support face recognition across different images of the same face. The methods described here set an infrastructure for discovering the critical features of other face categories not studied here (e.g., Asians, familiar) as well as other aspects of face processing, such as attractiveness or trait inferences.
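The face-space framework above can be sketched numerically: faces are points in a feature space, and inter-face distances are expected to correlate with perceptual dissimilarity. The feature values and "perceptual" ratings below are simulated, and the counts are arbitrary; only the structure of the computation follows the framework.

```python
import numpy as np

rng = np.random.default_rng(42)
n_faces, n_features = 15, 20          # e.g. 20 rated features per face
faces = rng.normal(size=(n_faces, n_features))

# Pairwise Euclidean distances in face space (upper triangle, no diagonal).
i, j = np.triu_indices(n_faces, k=1)
distances = np.linalg.norm(faces[i] - faces[j], axis=1)

# Simulated perceptual dissimilarity ratings: a noisy function of distance,
# standing in for human judgements of how different two faces look.
ratings = distances + rng.normal(scale=0.5, size=distances.shape)

# The framework predicts a positive correlation between face-space
# distance and rated dissimilarity.
r = np.corrcoef(distances, ratings)[0, 1]
```

The study's "critical features" are then the dimensions along which small changes produce large rated dissimilarities, i.e. the features with high perceptual sensitivity.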
Sexual Dimorphism Analysis and Gender Classification in 3D Human Face
NASA Astrophysics Data System (ADS)
Hu, Yuan; Lu, Li; Yan, Jingqi; Liu, Zhi; Shi, Pengfei
In this paper, we present a sexual dimorphism analysis of the 3D human face and perform gender classification based on the result of that analysis. Four types of features are extracted from a 3D human-face image. Using statistical methods, the existence of sexual dimorphism in the 3D human face is demonstrated on the basis of these features. The contribution of each feature to sexual dimorphism is quantified according to a novel criterion. The best gender classification rate is 94%, obtained using SVMs and the Matcher Weighting fusion method. This research adds to the knowledge of sexual dimorphism in 3D faces and affords a foundation that could be used to distinguish between male and female in 3D faces.
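A hedged sketch of one common reading of matcher-weighting fusion as named above: each feature-specific classifier ("matcher") receives a weight inversely proportional to its error rate, and the fused score is the weighted sum of the individual match scores. The error rates and scores below are hypothetical, not taken from the study.

```python
import numpy as np

def matcher_weights(error_rates):
    # w_i proportional to 1/e_i, normalised so the weights sum to 1:
    # more accurate matchers contribute more to the fused score.
    inv = 1.0 / np.asarray(error_rates, dtype=float)
    return inv / inv.sum()

def fuse(scores, weights):
    # scores: (n_samples, n_matchers) array of per-matcher gender scores
    # (e.g. signed distances from each SVM's decision boundary).
    return np.asarray(scores) @ weights

# Four matchers (one per 3D feature type), with hypothetical error rates.
w = matcher_weights([0.10, 0.20, 0.25, 0.40])
fused = fuse([[0.9, 0.6, 0.7, 0.4]], w)   # fused score for one sample
```

The fused score would then be thresholded to yield the final male/female decision; the point of the weighting is that a weak feature type cannot outvote a strong one.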
Patterns of Visual Attention to Faces and Objects in Autism Spectrum Disorder
ERIC Educational Resources Information Center
McPartland, James C.; Webb, Sara Jane; Keehn, Brandon; Dawson, Geraldine
2011-01-01
This study used eye-tracking to examine visual attention to faces and objects in adolescents with autism spectrum disorder (ASD) and typical peers. Point of gaze was recorded during passive viewing of images of human faces, inverted human faces, monkey faces, three-dimensional curvilinear objects, and two-dimensional geometric patterns.…
NASA Astrophysics Data System (ADS)
Rosenzweig, Amanda H.
Through distance learning, the community college system has been able to serve more students by providing educational opportunities to students who would otherwise be unable to attend college. The community college of focus in the study increased its online enrollments and online course offerings due to the growth of overall enrollment. The purpose of the study is to determine whether students' grades differ between face-to-face and online biology-related courses, and whether grades differ between face-to-face and online biology courses taught by different instructors or by the same instructor. The study also addresses whether online course delivery is a viable method of educating students in biology-related fields. The study spanned 14 semesters between spring 2006 and summer 2011. Data were collected for 6,619 students. For each student, demographic information, cumulative grade point average, ACT, and data on course performance were gathered. Student data were gathered from General Biology I, Microbiology of Human Pathogens, Human Anatomy and Physiology I, and Human Anatomy and Physiology II courses. Univariate analysis of variance, linear regression, and descriptive analysis were used to analyze the data and determine which variables significantly impacted grade achievement for face-to-face and online students in biology classes. The findings from the study showed that course type, face-to-face or online, was significant for Microbiology of Human Pathogens and Human Anatomy and Physiology I, both upper-level courses. Teachers were significant for General Biology I, a lower-level course, Human Anatomy and Physiology I, and Human Anatomy and Physiology II. However, in every class, there were teachers who showed significant differences between their face-to-face and online courses.
This study provides information about the relationship between students' final grades, class type (face-to-face or online), and instructor. Administrators, faculty, and students can use this information to understand what is needed to successfully teach and enroll in biology courses, whether face-to-face or online. Keywords: biology courses, online courses, face-to-face courses, class type, teacher influence, grades, CGPA, community college
A face to remember: emotional expression modulates prefrontal activity during memory formation.
Sergerie, Karine; Lepage, Martin; Armony, Jorge L
2005-01-15
Emotion can exert a modulatory role on episodic memory. Several studies have shown that negative stimuli (e.g., words, pictures) are better remembered than neutral ones. Although facial expressions are powerful emotional stimuli and have been shown to influence perception and attention processes, little is known about their effect on memory. We used functional magnetic resonance imaging (fMRI) in humans to investigate the effects of expression (happy, neutral, and fearful) on prefrontal cortex (PFC) activity during the encoding of faces, using a subsequent memory effect paradigm. Our results show that activity in right PFC predicted memory for faces, regardless of expression, while a homotopic region in the left hemisphere was associated with successful encoding only for faces with an emotional expression. These findings are consistent with the proposed role of right dorsolateral PFC in successful encoding of nonverbal material, but also suggest that left DLPFC may be a site where integration of memory and emotional processes occurs. This study sheds new light on the current controversy regarding the hemispheric lateralization of PFC in memory encoding.
Vuilleumier, Patrik; Richardson, Mark P; Armony, Jorge L; Driver, Jon; Dolan, Raymond J
2004-11-01
Emotional visual stimuli evoke enhanced responses in the visual cortex. To test whether this reflects modulatory influences from the amygdala on sensory processing, we used event-related functional magnetic resonance imaging (fMRI) in human patients with medial temporal lobe sclerosis. Twenty-six patients with lesions in the amygdala, the hippocampus or both, plus 13 matched healthy controls, were shown pictures of fearful or neutral faces in task-relevant or task-irrelevant positions on the display. All subjects showed increased fusiform cortex activation when the faces were in task-relevant positions. Both healthy individuals and those with hippocampal damage showed increased activation in the fusiform and occipital cortex when they were shown fearful faces, but this was not the case for individuals with damage to the amygdala, even though visual areas were structurally intact. The distant influence of the amygdala was also evidenced by the parametric relationship between amygdala damage and the level of emotional activation in the fusiform cortex. Our data show that combining the fMRI and lesion approaches can help reveal the source of functional modulatory influences between distant but interconnected brain regions.
Culture modulates the brain response to human expressions of emotion: electrophysiological evidence.
Liu, Pan; Rigoulot, Simon; Pell, Marc D
2015-01-01
To understand how culture modulates on-line neural responses to social information, this study compared how individuals from two distinct cultural groups, English-speaking North Americans and Chinese, process emotional meanings of multi-sensory stimuli as indexed by both behaviour (accuracy) and event-related potential (N400) measures. In an emotional Stroop-like task, participants were presented face-voice pairs expressing congruent or incongruent emotions in conditions where they judged the emotion of one modality while ignoring the other (face or voice focus task). Results indicated that while both groups were sensitive to emotional differences between channels (with lower accuracy and higher N400 amplitudes for incongruent face-voice pairs), there were marked group differences in how intruding facial or vocal cues affected accuracy and N400 amplitudes, with English participants showing greater interference from irrelevant faces than Chinese. Our data illuminate distinct biases in how adults from East Asian versus Western cultures process socio-emotional cues, supplying new evidence that cultural learning modulates not only behaviour, but the neurocognitive response to different features of multi-channel emotion expressions. Copyright © 2014 Elsevier Ltd. All rights reserved.
Visual scan paths are abnormal in deluded schizophrenics.
Phillips, M L; David, A S
1997-01-01
One explanation of delusion formation is that delusions result from a distorted appreciation of complex stimuli. The study investigated delusions in schizophrenia using a physiological marker of visual attention and information processing, the visual scan path: a map tracing the direction and duration of gaze when an individual views a stimulus. The aim was to demonstrate the presence of a specific deficit in processing meaningful stimuli (e.g. human faces) in deluded schizophrenics (DS) by relating this to abnormal viewing strategies. Visual scan paths were measured in acutely deluded (n = 7) and non-deluded (n = 7) schizophrenics matched for medication, illness duration and negative symptoms, plus 10 age-matched normal controls. DS employed abnormal strategies for viewing single faces and face pairs in a recognition task, staring at fewer points and fixating non-feature areas to a significantly greater extent than both control groups (P < 0.05). The results indicate that DS direct their attention to less salient visual information when viewing faces. Future paradigms employing more complex stimuli and testing DS when less deluded will allow further clarification of the relationship between viewing strategies and delusions.
Moore, Michelle W; Durisko, Corrine; Perfetti, Charles A; Fiez, Julie A
2014-04-01
Numerous functional neuroimaging studies have shown that most orthographic stimuli, such as printed English words, produce a left-lateralized response within the fusiform gyrus (FG) at a characteristic location termed the visual word form area (VWFA). We developed an experimental alphabet (FaceFont) comprising 35 face-phoneme pairs to disentangle phonological and perceptual influences on the lateralization of orthographic processing within the FG. Using functional imaging, we found that a region in the vicinity of the VWFA responded to FaceFont words more strongly in trained versus untrained participants, whereas no differences were observed in the right FG. The trained response magnitudes in the left FG region correlated with behavioral reading performance, providing strong evidence that the neural tissue recruited by training supported the newly acquired reading skill. These results indicate that the left lateralization of orthographic processing is not restricted to stimuli with particular visual-perceptual features. Instead, lateralization may occur because the anatomical projections in the vicinity of the VWFA provide a unique interconnection between the visual system and left-lateralized language areas involved in the representation of speech.
Gender differences in human single neuron responses to male emotional faces.
Newhoff, Morgan; Treiman, David M; Smith, Kris A; Steinmetz, Peter N
2015-01-01
Well-documented differences in the psychology and behavior of men and women have spurred extensive exploration of gender's role within the brain, particularly regarding emotional processing. While neuroanatomical studies clearly show differences between the sexes, the functional effects of these differences are less understood. Neuroimaging studies have shown inconsistent locations and magnitudes of gender differences in brain hemodynamic responses to emotion. To better understand the neurophysiology of these gender differences, we analyzed recordings of single neuron activity in the human brain as subjects of both genders viewed emotional expressions. This study included recordings of single-neuron activity of 14 (6 male) epileptic patients in four brain areas: amygdala (236 neurons), hippocampus (n = 270), anterior cingulate cortex (n = 256), and ventromedial prefrontal cortex (n = 174). Neural activity was recorded while participants viewed a series of avatar male faces portraying positive, negative or neutral expressions. Significant gender differences were found in the left amygdala, where 23% (n = 15/66) of neurons in men were significantly affected by facial emotion, vs. 8% (n = 6/76) of neurons in women. A Fisher's exact test comparing the two ratios found a highly significant difference between the two (p < 0.01). These results show specific differences between genders at the single-neuron level in the human amygdala. These differences may reflect gender-based distinctions in evolved capacities for emotional processing and also demonstrate the importance of including subject gender as an independent factor in future studies of emotional processing by single neurons in the human amygdala.
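The Fisher's exact test reported above can be reproduced from the counts alone (15 of 66 emotion-responsive neurons in men vs. 6 of 76 in women). A minimal stdlib-only Python sketch follows; the exact two-sided p-value depends on the convention used (here, summing all tables no more likely than the observed one), so treat it as an illustration of the computation rather than a replication of the paper's software.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins whose probability does not exceed that of the observed table.
    """
    r1, r2 = a + b, c + d          # row totals
    c1 = a + c                     # first column total
    n = r1 + r2                    # grand total
    denom = comb(n, c1)

    def p_table(x):                # P(first cell = x) under fixed margins
        return comb(r1, x) * comb(r2, c1 - x) / denom

    p_obs = p_table(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    # Include every table at most as likely as the observed one.
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Men: 15 responsive / 51 not; women: 6 responsive / 70 not.
p = fisher_exact_two_sided(15, 51, 6, 70)
print(f"two-sided p = {p:.4f}")
```

The same function applied to a perfectly balanced table (e.g. 5/5 vs. 5/5) returns 1.0, a quick sanity check on the implementation.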
Sex-dependent neural effect of oxytocin during subliminal processing of negative emotion faces.
Luo, Lizhu; Becker, Benjamin; Geng, Yayuan; Zhao, Zhiying; Gao, Shan; Zhao, Weihua; Yao, Shuxia; Zheng, Xiaoxiao; Ma, Xiaole; Gao, Zhao; Hu, Jiehui; Kendrick, Keith M
2017-11-15
In line with animal models indicating sexually dimorphic effects of oxytocin (OXT) on social-emotional processing, a growing number of OXT-administration studies in humans have also reported sex-dependent effects during social information processing. To explore whether sex-dependent effects already occur during early, subliminal, processing stages, the present pharmacological fMRI study combined the intranasal application of either OXT or placebo (n = 86; 43 males) with a backward-masking emotional face paradigm. Results showed that while OXT suppressed inferior frontal gyrus, dorsal anterior cingulate and anterior insula responses to threatening face stimuli in men, it increased them in women. In women, increased anterior cingulate reactivity during subliminal threat processing was also positively associated with trait anxiety. On the network level, sex-dependent effects were observed on amygdala, anterior cingulate and inferior frontal gyrus functional connectivity that were mainly driven by reduced coupling in women following OXT. Our findings demonstrate that OXT produces sex-dependent effects even at the early stages of social-emotional processing, and suggest that while it attenuates neural responses to threatening social stimuli in men, it increases them in women. Thus, in a therapeutic context, OXT may potentially produce different effects on anxiety disorders in men and women. Copyright © 2017 Elsevier Inc. All rights reserved.
Efficient human face detection in infancy.
Jakobsen, Krisztina V; Umstead, Lindsey; Simpson, Elizabeth A
2016-01-01
Adults detect conspecific faces more efficiently than heterospecific faces; however, the development of this own-species bias (OSB) remains unexplored. We tested whether 6- and 11-month-olds exhibit OSB in their attention to human and animal faces in complex visual displays with high perceptual load (25 images competing for attention). Infants (n = 48) and adults (n = 43) passively viewed arrays containing a face among 24 non-face distractors while we measured their gaze with remote eye tracking. While OSB is typically not observed until about 9 months, we found that, already by 6 months, human faces were more likely to be detected, were detected more quickly (attention capture), and received longer looks (attention holding) than animal faces. These data suggest that 6-month-olds already exhibit OSB in face detection efficiency, consistent with perceptual attunement. This specialization may reflect the biological importance of detecting conspecific faces, a foundational ability for early social interactions. © 2015 Wiley Periodicals, Inc.
Perry, Anat; Aviezer, Hillel; Goldstein, Pavel; Palgi, Sharon; Klein, Ehud; Shamay-Tsoory, Simone G
2013-11-01
The neuropeptide oxytocin (OT) has been repeatedly reported to play an essential role in the regulation of social cognition in humans in general, and specifically in enhancing the recognition of emotions from facial expressions. The latter was assessed in different paradigms that rely primarily on isolated and decontextualized emotional faces. However, recent evidence has indicated that the perception of basic facial expressions is not context invariant and can be categorically altered by context, especially body context, at early perceptual levels. Body context has a strong effect on our perception of emotional expressions, especially when the actual target face and the contextually expected face are perceptually similar. To examine whether and how OT affects emotion recognition, we investigated the role of OT in categorizing facial expressions in incongruent body contexts. Our results show that in the combined process of deciphering emotions from facial expressions and from context, OT gives an advantage to the face. This advantage is most evident when the target face and the contextually expected face are perceptually similar. Copyright © 2013 Elsevier Ltd. All rights reserved.
Neural correlates of face gender discrimination learning.
Su, Junzhu; Tan, Qingleng; Fang, Fang
2013-04-01
Using combined psychophysics and event-related potentials (ERPs), we investigated the effect of perceptual learning on face gender discrimination and probed the neural correlates of the learning effect. Human subjects were trained to perform a gender discrimination task with male or female faces. Before and after training, they were tested with the trained faces and other faces with the same and opposite genders. ERPs responding to these faces were recorded. Psychophysical results showed that training significantly improved subjects' discrimination performance and the improvement was specific to the trained gender, as well as to the trained identities. The training effect indicates that learning occurs at two levels: the category level (gender) and the exemplar level (identity). ERP analyses showed that the gender and identity learning was associated with the N170 latency reduction at the left occipital-temporal area and the N170 amplitude reduction at the right occipital-temporal area, respectively. These findings provide evidence for the facilitation model and the sharpening model of neuronal plasticity from visual experience, suggesting a faster processing speed and a sparser representation of faces induced by perceptual learning.
ERIC Educational Resources Information Center
Simpson, Elizabeth A.; Suomi, Stephen J.; Paukner, Annika
2016-01-01
In human children and adults, familiar face types--typically own-age and own-species faces--are discriminated better than other face types; however, human infants do not appear to exhibit an own-age bias but instead better discriminate adult faces, which they see more often. There are two possible explanations for this pattern: Perceptual…
Modulation of Alpha Oscillations in the Human EEG with Facial Preference
Kang, Jae-Hwan; Kim, Su Jin; Cho, Yang Seok; Kim, Sung-Phil
2015-01-01
Facial preference that results from the processing of facial information plays an important role in social interactions as well as the selection of a mate, friend, candidate, or favorite actor. However, it still remains elusive which brain regions are implicated in the neural mechanisms underlying facial preference, and how neural activities in these regions are modulated during the formation of facial preference. In the present study, we investigated the modulation of electroencephalography (EEG) oscillatory power with facial preference. For the reliable assessment of facial preference, we designed a series of passive viewing and active choice tasks. In the former task, twenty-four face stimuli were passively viewed by participants multiple times in random order. In the latter task, the same stimuli were then evaluated by participants for their facial preference judgments. In both tasks, significant differences between the preferred and non-preferred face groups were found in alpha band power (8–13 Hz) but not in other frequency bands. The preferred faces generated greater decreases in alpha power. During the passive viewing task, significant differences in alpha power between the preferred and non-preferred face groups were observed at the left frontal regions in the early (0.15–0.4 s) period of the 1-s presentation. By contrast, during the active choice task, when participants consecutively watched the first and second face for 1 s and then selected the preferred one, an alpha power difference was found in the late (0.65–0.8 s) period over the whole brain during the first face presentation and over the posterior regions during the second face presentation. These results demonstrate that the modulation of alpha activity by facial preference is a top-down process, which requires additional cognitive resources to facilitate information processing of the preferred faces that capture more visual attention than the non-preferred faces. PMID:26394328
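Band-limited power of the kind compared above (alpha, 8–13 Hz) is obtained by spectral decomposition of the EEG epoch. A minimal stdlib-only Python sketch on a synthetic one-second signal (a 10 Hz "alpha" oscillation plus a weaker 25 Hz component); real pipelines use FFTs over multichannel recordings, so this only illustrates the band-power idea:

```python
import cmath
import math

FS = 128                 # sampling rate (Hz); one 1-second epoch
N = FS
# Synthetic "EEG": a 10 Hz alpha oscillation plus a weaker 25 Hz component.
signal = [math.sin(2 * math.pi * 10 * t / FS)
          + 0.3 * math.sin(2 * math.pi * 25 * t / FS)
          for t in range(N)]

def band_power(x, f_lo, f_hi, fs):
    """Sum of DFT power over the frequency bins falling in [f_lo, f_hi]."""
    n = len(x)
    power = 0.0
    for k in range(1, n // 2):          # positive frequencies only
        f = k * fs / n
        if f_lo <= f <= f_hi:
            coef = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                       for t in range(n))
            power += abs(coef) ** 2 / n ** 2
    return power

alpha = band_power(signal, 8, 13, FS)    # alpha band (8-13 Hz)
beta = band_power(signal, 14, 30, FS)    # beta band, for comparison
print(f"alpha = {alpha:.4f}, beta = {beta:.4f}")
```

With a unit-amplitude 10 Hz sine and 1 Hz bin resolution, the alpha-band power lands entirely in the 10 Hz bin, so the dominance of alpha over beta here is exact rather than approximate.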
Primate pelvic anatomy and implications for birth.
Trevathan, Wenda
2015-03-05
The pelvis performs two major functions for terrestrial mammals. It provides somewhat rigid support for muscles engaged in locomotion and, for females, it serves as the birth canal. The result for many species, and especially for encephalized primates, is an 'obstetric dilemma' whereby the neonate often has to negotiate a tight squeeze in order to be born. On top of what was probably a baseline of challenging birth, locomotor changes in the evolution of bipedalism in the human lineage resulted in an even more complex birth process. Negotiation of the bipedal pelvis requires a series of rotations, the end of which has the infant emerging from the birth canal facing the opposite direction from the mother. This pattern, strikingly different from what is typically seen in monkeys and apes, places a premium on having assistance at delivery. Recently reported observations of births in monkeys and apes are used to compare the process in human and non-human primates, highlighting similarities and differences. These include presentation (face, occiput anterior or posterior), internal and external rotation, use of the hands by mothers and infants, reliance on assistance, and the developmental state of the neonate. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
The Motivational Salience of Faces Is Related to Both Their Valence and Dominance.
Wang, Hongyi; Hahn, Amanda C; DeBruine, Lisa M; Jones, Benedict C
2016-01-01
Both behavioral and neural measures of the motivational salience of faces are positively correlated with their physical attractiveness. Whether physical characteristics other than attractiveness contribute to the motivational salience of faces is not known, however. Research with male macaques recently showed that more dominant macaques' faces hold greater motivational salience. Here we investigated whether dominance also contributes to the motivational salience of faces in human participants. Principal component analysis of third-party ratings of faces for multiple traits revealed two orthogonal components. The first component ("valence") was highly correlated with rated trustworthiness and attractiveness. The second component ("dominance") was highly correlated with rated dominance and aggressiveness. Importantly, both components were positively and independently related to the motivational salience of faces, as assessed from responses on a standard key-press task. These results show that at least two dissociable components underpin the motivational salience of faces in humans and present new evidence for similarities in how humans and non-human primates respond to facial cues of dominance.
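The two orthogonal components described above come from a principal component analysis of multi-trait face ratings. A self-contained sketch of that analysis, with entirely made-up ratings (the latent factors, trait names, and all numbers are hypothetical, chosen so the valence-like pair carries more variance), extracting the top two components by power iteration with deflation:

```python
import random

random.seed(1)
N_FACES = 500
TRAITS = ["trustworthy", "attractive", "dominant", "aggressive"]

# Hypothetical ratings: the first two traits track a latent "valence"
# factor, the last two a latent "dominance" factor (plus small noise).
ratings = []
for _ in range(N_FACES):
    valence = random.gauss(0, 1.5)     # larger variance -> first component
    dominance = random.gauss(0, 1.0)
    ratings.append([valence + random.gauss(0, 0.1),
                    valence + random.gauss(0, 0.1),
                    dominance + random.gauss(0, 0.1),
                    dominance + random.gauss(0, 0.1)])

def covariance(rows):
    """Sample covariance matrix of the rating columns."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    return [[sum((r[i] - means[i]) * (r[j] - means[j]) for r in rows) / (n - 1)
             for j in range(d)] for i in range(d)]

def top_eigenvector(mat, iters=200):
    """Power iteration: dominant eigenvector and its eigenvalue."""
    d = len(mat)
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(mat[i][j] * v[j] for j in range(d)) for i in range(d))
    return v, lam

cov = covariance(ratings)
pc1, lam1 = top_eigenvector(cov)
# Deflate out the first component, then extract the second.
deflated = [[cov[i][j] - lam1 * pc1[i] * pc1[j] for j in range(4)]
            for i in range(4)]
pc2, lam2 = top_eigenvector(deflated)
print("PC1 loadings:", [round(x, 2) for x in pc1])
print("PC2 loadings:", [round(x, 2) for x in pc2])
```

On this toy data, the first component loads almost entirely on the trustworthy/attractive pair and the second on the dominant/aggressive pair, mirroring the valence/dominance structure the study reports; in practice one would use a linear-algebra library rather than hand-rolled power iteration.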
Watanabe, Jun-ichiro; Ishibashi, Nozomu; Yano, Kazuo
2014-01-01
Quantitative analyses of human-generated data collected in various fields have uncovered many patterns of complex human behaviors. However, thus far the quantitative evaluation of the relationship between the physical behaviors of employees and their performance has been inadequate. Here, we present findings demonstrating the significant relationship between the physical behaviors of employees and their performance via experiments we conducted in inbound call centers while the employees wore sensor badges. There were two main findings. First, we found that face-to-face interaction among telecommunicators and the frequency of their bodily movements caused by the face-to-face interaction had a significant correlation with the entire call center performance, which we measured as "Calls per Hour." Second, our trial to activate face-to-face interaction on the basis of data collected by the wearable sensor badges the employees wore significantly increased their performance. These results demonstrate quantitatively that human-human interaction in the physical world plays an important role in team performance.
See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction.
Xu, Tian Linger; Zhang, Hui; Yu, Chen
2016-05-01
We focus on a fundamental looking behavior in human-robot interactions - gazing at each other's face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user's face as a response to the human's gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot's gaze toward the human partner's face in real time and then analyzed the human's gaze behavior as a response to the robot's gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot's face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained.
NASA Astrophysics Data System (ADS)
Jamal, Wasifa; Das, Saptarshi; Maharatna, Koushik; Pan, Indranil; Kuyucu, Doga
2015-09-01
Degree of phase synchronization between different electroencephalogram (EEG) channels is known to be a manifestation of the underlying mechanism of information coupling between different brain regions. In this paper, we apply a continuous wavelet transform (CWT) based analysis technique to EEG data, captured during face perception tasks, to explore the temporal evolution of phase synchronization from the onset of a stimulus. Our explorations show that there exists a small set (typically 3-5) of unique synchronized patterns or synchrostates, each of which is stable on the order of milliseconds. Particularly, in the beta (β) band, which has been reported to be associated with visual processing tasks, the number of such stable states has consistently been found to be three. During processing of the stimulus, the switching between these states occurs abruptly, but the switching characteristic follows a well-behaved and repeatable sequence. This is observed in a single-subject analysis as well as a multiple-subject group analysis in adults during face perception. We also show that although these patterns remain topographically similar for the general category of face perception tasks, the sequence of their occurrence and their temporal stability vary markedly between different face perception scenarios (stimuli), indicating different dynamical characteristics of information processing, which is stimulus-specific in nature. Subsequently, we translated these stable states into brain complex networks and derived informative network measures for characterizing the degree of segregated processing and information integration in those synchrostates, leading to a new methodology for characterizing information processing in the human brain. The proposed methodology of modeling functional brain connectivity through synchrostates may be viewed as a new way of quantitatively characterizing the cognitive ability of the subject, the stimuli, and information integration/segregation capability.
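The channel-pair phase synchronization referred to above is commonly quantified by the phase-locking value (PLV): the magnitude of the average unit phase-difference vector across samples. A minimal sketch on synthetic phase series (not the paper's CWT pipeline, which first extracts instantaneous phases from wavelet coefficients):

```python
import cmath
import math
import random

def plv(phases_a, phases_b):
    """Phase-locking value: |mean of exp(i * phase difference)|, in [0, 1].

    Near 1 when the phase difference between the two channels is stable
    across samples; near 0 when the channels drift independently.
    """
    n = len(phases_a)
    s = sum(cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b))
    return abs(s) / n

random.seed(0)
n = 2000
# Channel pair with a constant phase lag -> strongly synchronized.
base = [random.uniform(-math.pi, math.pi) for _ in range(n)]
locked = [p + 0.8 for p in base]
# Independent random phases -> unsynchronized.
indep = [random.uniform(-math.pi, math.pi) for _ in range(n)]

plv_locked = plv(base, locked)
plv_indep = plv(base, indep)
print(f"locked PLV = {plv_locked:.3f}, independent PLV = {plv_indep:.3f}")
```

A constant lag gives a PLV of exactly 1 regardless of the lag's size, which is why PLV measures phase consistency rather than zero-lag alignment; for independent channels it decays toward 0 as the number of samples grows.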
ERIC Educational Resources Information Center
Byrum, David L.
1982-01-01
Suggests uses for and possible adaptations of a set of semiflexible molecular models. Includes price and supplier information. Also suggests rubbing oil from human face/hands along the pouring lip of a beaker, allowing one to pour uniformly from the beaker spout and making the process as dripless as possible. (Author/JN)
Myneni, Sahiti; Patel, Vimla L; Bova, G Steven; Wang, Jian; Ackerman, Christopher F; Berlinicke, Cynthia A; Chen, Steve H; Lindvall, Mikael; Zack, Donald J
2016-04-01
This paper describes a distributed collaborative effort between industry and academia to systematize data management in an academic biomedical laboratory. The heterogeneous and voluminous nature of research data created in biomedical laboratories makes information management difficult and research unproductive. One such collaborative effort was evaluated over a period of four years using data collection methods including ethnographic observations, semi-structured interviews, web-based surveys, progress reports, conference call summaries, and face-to-face group discussions. Data were analyzed using qualitative methods of data analysis to (1) characterize specific problems faced by biomedical researchers with traditional information management practices, (2) identify intervention areas to introduce a new research information management system called Labmatrix, and finally to (3) evaluate and delineate important general collaboration (intervention) characteristics that can optimize outcomes of an implementation process in biomedical laboratories. Results emphasize the importance of end user perseverance, human-centric interoperability evaluation, and demonstration of return on investment of effort and time of laboratory members and industry personnel for success of the implementation process. In addition, there is an intrinsic learning component associated with the implementation process of an information management system. Technology transfer experience in a complex environment such as the biomedical laboratory can be eased with use of information systems that support human and cognitive interoperability. Such informatics features can also contribute to successful collaboration and hopefully to scientific productivity. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Combining fMRI and behavioral measures to examine the process of human learning.
Karuza, Elisabeth A; Emberson, Lauren L; Aslin, Richard N
2014-03-01
Prior to the advent of fMRI, the primary means of examining the mechanisms underlying learning were restricted to studying human behavior and non-human neural systems. However, recent advances in neuroimaging technology have enabled the concurrent study of human behavior and neural activity. We propose that the integration of behavioral response with brain activity provides a powerful method of investigating the process through which internal representations are formed or changed. Nevertheless, a review of the literature reveals that many fMRI studies of learning either (1) focus on outcome rather than process or (2) are built on the untested assumption that learning unfolds uniformly over time. We discuss here various challenges faced by the field and highlight studies that have begun to address them. In doing so, we aim to encourage more research that examines the process of learning by considering the interrelation of behavioral measures and fMRI recording during learning. Copyright © 2013 Elsevier Inc. All rights reserved.
Combining fMRI and Behavioral Measures to Examine the Process of Human Learning
Karuza, Elisabeth A.; Emberson, Lauren L.; Aslin, Richard N.
2013-01-01
Prior to the advent of fMRI, the primary means of examining the mechanisms underlying learning were restricted to studying human behavior and non-human neural systems. However, recent advances in neuroimaging technology have enabled the concurrent study of human behavior and neural activity. We propose that the integration of behavioral response with brain activity provides a powerful method of investigating the process through which internal representations are formed or changed. Nevertheless, a review of the literature reveals that many fMRI studies of learning either (1) focus on outcome rather than process or (2) are built on the untested assumption that learning unfolds uniformly over time. We discuss here various challenges faced by the field and highlight studies that have begun to address them. In doing so, we aim to encourage more research that examines the process of learning by considering the interrelation of behavioral measures and fMRI recording during learning. PMID:24076012
Selective attention modulates early human evoked potentials during emotional face-voice processing.
Ho, Hao Tam; Schröger, Erich; Kotz, Sonja A
2015-04-01
Recent findings on multisensory integration suggest that selective attention influences cross-sensory interactions from an early processing stage. Yet, in the field of emotional face-voice integration, the hypothesis prevails that facial and vocal emotional information interacts preattentively. Using ERPs, we investigated the influence of selective attention on the perception of congruent versus incongruent combinations of neutral and angry facial and vocal expressions. Attention was manipulated via four tasks that directed participants to (i) the facial expression, (ii) the vocal expression, (iii) the emotional congruence between the face and the voice, and (iv) the synchrony between lip movement and speech onset. Our results revealed early interactions between facial and vocal emotional expressions, manifested as modulations of the auditory N1 and P2 amplitude by incongruent emotional face-voice combinations. Although audiovisual emotional interactions within the N1 time window were affected by the attentional manipulations, interactions within the P2 modulation showed no such attentional influence. Thus, we propose that the N1 and P2 are functionally dissociated in terms of emotional face-voice processing and discuss evidence in support of the notion that the N1 is associated with cross-sensory prediction, whereas the P2 relates to the derivation of an emotional percept. Essentially, our findings put the integration of facial and vocal emotional expressions into a new perspective: one that regards the integration process as a composite of multiple, possibly independent subprocesses, some of which are susceptible to attentional modulation, whereas others may be influenced by additional factors.
Schizophrenia as a human process.
Corradi, Richard B
2011-01-01
The patient with schizophrenia often appears to be living in an alien world, one of strange voices, bizarre beliefs, and disorganized speech and behavior. It is difficult to empathize with someone suffering from symptoms so remote from one's ordinary experience. However, examination of the disorder reveals not only symptoms of the psychosis itself but also an intensely human struggle against the disintegration of personality it can produce. Furthermore, examination of the individual's attempts to cope with a devastating psychotic process reveals familiar psychodynamic processes and defense mechanisms, however unsuccessful they may be. Knowing that behind the seemingly alien diagnostic features of schizophrenia is a person attempting to preserve his or her self-identity puts a human face on the illness. This article utilizes clinical material to describe some of the psychodynamic processes of schizophrenia. Its purpose is to facilitate understanding of an illness that requires comprehensive biopsychosocial treatment in which a therapeutic doctor-patient relationship is as necessary as antipsychotic medication.
Anterior temporal face patches: a meta-analysis and empirical study
Von Der Heide, Rebecca J.; Skipper, Laura M.; Olson, Ingrid R.
2013-01-01
Evidence suggests the anterior temporal lobe (ATL) plays an important role in person identification and memory. In humans, neuroimaging studies of person memory report consistent activations in the ATL to famous and personally familiar faces and studies of patients report resection or damage of the ATL causes an associative prosopagnosia in which face perception is intact but face memory is compromised. In addition, high-resolution fMRI studies of non-human primates and electrophysiological studies of humans also suggest regions of the ventral ATL are sensitive to novel faces. The current study extends previous findings by investigating whether similar subregions in the dorsal, ventral, lateral, or polar aspects of the ATL are sensitive to personally familiar, famous, and novel faces. We present the results of two studies of person memory: a meta-analysis of existing fMRI studies and an empirical fMRI study using optimized imaging parameters. Both studies showed left-lateralized ATL activations to familiar individuals while novel faces activated the right ATL. Activations to famous faces were quite ventral, similar to what has been reported in previous high-resolution fMRI studies of non-human primates. These findings suggest that face memory-sensitive patches in the human ATL are in the ventral/polar ATL. PMID:23378834
Mapping the emotional face. How individual face parts contribute to successful emotion recognition.
Wegrzyn, Martin; Vogt, Maria; Kireclioglu, Berna; Schneider, Julia; Kissler, Johanna
2017-01-01
Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have frequently been shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, on a fine-grained level, which physical features observers rely on most when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and assign it the correct label. For each part of the face, its contribution to successful recognition was computed, allowing us to visualize the importance of different face areas for each expression. Overall, observers relied mostly on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed us to group the expressions in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (reliance on the mouth). The face parts with the highest diagnostic value for expression identification were typically located in areas corresponding to action units from the Facial Action Coding System. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eyes or the mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation. PMID:28493921
Representational Similarity of Body Parts in Human Occipitotemporal Cortex.
Bracci, Stefania; Caramazza, Alfonso; Peelen, Marius V
2015-09-23
Regions in human lateral and ventral occipitotemporal cortices (OTC) respond selectively to pictures of the human body and its parts. What are the organizational principles underlying body part responses in these regions? Here we used representational similarity analysis (RSA) of fMRI data to test multiple possible organizational principles: shape similarity, physical proximity, cortical homunculus proximity, and semantic similarity. Participants viewed pictures of whole persons, chairs, and eight body parts (hands, arms, legs, feet, chests, waists, upper faces, and lower faces). The similarity of multivoxel activity patterns for all body part pairs was established in whole person-selective OTC regions. The resulting neural similarity matrices were then compared with similarity matrices capturing the hypothesized organizational principles. Results showed that the semantic similarity model best captured the neural similarity of body parts in lateral and ventral OTC, which followed an organization in three clusters: (1) body parts used as action effectors (hands, feet, arms, and legs), (2) noneffector body parts (chests and waists), and (3) face parts (upper and lower faces). Whole-brain RSA revealed, in addition to OTC, regions in parietal and frontal cortex in which neural similarity was related to semantic similarity. In contrast, neural similarity in occipital cortex was best predicted by shape similarity models. We suggest that the semantic organization of body parts in high-level visual cortex relates to the different functions associated with the three body part clusters, reflecting the unique processing and connectivity demands associated with the different types of information (e.g., action, social) different body parts (e.g., limbs, faces) convey. 
Significance statement: While the organization of body part representations in motor and somatosensory cortices has been well characterized, the principles underlying body part representations in visual cortex have not yet been explored. In the present fMRI study we used multivoxel pattern analysis and representational similarity analysis to characterize the organization of body maps in human occipitotemporal cortex (OTC). Results indicate that visual and shape dimensions do not fully account for the organization of body part representations in OTC. Instead, the representational structure of body maps in OTC appears strongly related to functional-semantic properties of body parts. We suggest that this organization reflects the unique processing and connectivity demands associated with the different types of information different body parts convey. Copyright © 2015 the authors 0270-6474/15/3512977-09$15.00/0.
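The core RSA step described above, building a neural representational dissimilarity matrix (RDM) from condition-by-voxel activity patterns and correlating it with a model RDM, can be sketched in a few lines. This is a generic illustration with random stand-in data, not the study's stimuli or models, and `rsa_model_fit` is a hypothetical helper name.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_model_fit(patterns, model_rdm):
    """Compare a neural RDM (correlation distance between condition
    patterns) with a model RDM using Spearman's rank correlation."""
    # condensed neural RDM; pdist's pair order matches the upper triangle
    neural_rdm = pdist(patterns, metric="correlation")
    model_vec = model_rdm[np.triu_indices_from(model_rdm, k=1)]
    return spearmanr(neural_rdm, model_vec)

# toy data: 8 "body part" conditions x 50 voxels, plus a random
# symmetric model dissimilarity matrix with a zero diagonal
rng = np.random.default_rng(0)
patterns = rng.normal(size=(8, 50))
model = rng.random((8, 8))
model = (model + model.T) / 2
np.fill_diagonal(model, 0)
rho, p = rsa_model_fit(patterns, model)
```

In the study's design, separate model RDMs (shape similarity, physical proximity, homunculus proximity, semantic similarity) would each be compared against the neural RDM in this way, and the best-fitting model identified.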
Reading sadness beyond human faces.
Chammat, Mariam; Foucher, Aurélie; Nadel, Jacqueline; Dubal, Stéphanie
2010-08-12
Human faces are the main displayers of emotion. Knowing that emotional stimuli, compared to neutral ones, elicit enlarged ERP components at the perceptual level, one may wonder whether this has led to an emotional facilitation bias toward human faces. To contribute to this question, we measured the P1 and N170 components of the ERPs elicited by human facial stimuli compared to artificial stimuli, namely non-humanoid robots. Fifteen healthy young adults were shown sad and neutral, upright and inverted expressions of human versus robotic displays. An increase in P1 amplitude in response to sad displays compared to neutral ones evidenced an early perceptual amplification for sadness information. P1 and N170 latencies were delayed in response to robotic stimuli compared to human ones, while N170 amplitude was not affected by the medium. Inverted human stimuli elicited a longer P1 latency and a larger N170 amplitude, while inverted robotic stimuli did not. As a whole, our results show that emotion facilitation is not biased toward human faces but rather extends to non-human displays, suggesting a capacity to read emotion beyond faces. Copyright 2010 Elsevier B.V. All rights reserved.
Human amygdala response to dynamic facial expressions of positive and negative surprise.
Vrticka, Pascal; Lordier, Lara; Bediou, Benoît; Sander, David
2014-02-01
Although brain imaging evidence increasingly suggests that the amygdala plays a key role in the processing of novel stimuli, little is known about its role in processing expressed novelty conveyed by surprised faces, and even less about possible interactive encoding of novelty and valence. The investigations that have already probed human amygdala involvement in the processing of surprised facial expressions either used static pictures displaying negative surprise (as contained in fear) or "neutral" surprise, and manipulated valence by contextually priming or subjectively associating static surprise with either negative or positive information. Therefore, it remains unresolved how the human amygdala differentially processes dynamic surprised facial expressions displaying either positive or negative surprise. Here, we created new artificial dynamic 3-dimensional facial expressions conveying surprise with an intrinsic positive (wonderment) or negative (fear) connotation, as well as intrinsic positive (joy) or negative (anxiety) emotions not containing any surprise, in addition to neutral facial displays either containing ("typical surprise" expression) or not containing ("neutral") surprise. Results showed heightened amygdala activity to faces containing positive (vs. negative) surprise, which may correspond either to a specific wonderment effect as such, or to the computation of a negative expected value prediction error. Findings are discussed in the light of data obtained from a closely matched nonsocial lottery task, which revealed overlapping activity within the left amygdala to unexpected positive outcomes. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Automatic recognition of emotions from facial expressions
NASA Astrophysics Data System (ADS)
Xue, Henry; Gertner, Izidor
2014-06-01
In the human-computer interaction (HCI) process it is desirable to have an artificially intelligent (AI) system that can identify and categorize human emotions from facial expressions. Such systems can be used in the security and entertainment industries, and also to study visual perception, social interactions and disorders (e.g. schizophrenia and autism). In this work we survey and compare the performance of different feature extraction algorithms and classification schemes. We introduce a faster feature extraction method that resizes the data images and applies a set of filters to them without sacrificing accuracy. In addition, we have enhanced SVM to multiple dimensions while retaining its high accuracy rate. The algorithms were tested using the Japanese Female Facial Expression (JAFFE) Database and the Database of Faces (AT&T Faces).
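A baseline version of the pipeline this abstract describes, resized face images flattened into feature vectors and classified by an SVM, can be sketched with scikit-learn. The data here are random stand-ins for the JAFFE images, and the pipeline is a generic one, not the authors' enhanced multi-dimensional SVM or their filter bank.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# synthetic stand-ins: each row is a 32x32 "image" flattened to a
# feature vector; labels cover seven expression categories as in JAFFE
rng = np.random.default_rng(42)
X = rng.normal(size=(140, 32 * 32))
y = rng.integers(0, 7, size=140)

# standardize features, then classify with an RBF-kernel SVM;
# 5-fold cross-validation gives one accuracy score per fold
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
```

With random labels, accuracy hovers near chance (about 1/7); with real expression data, the filter-bank features the paper surveys would replace the raw pixels before classification.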
Cheng, Xue Jun; McCarthy, Callum J; Wang, Tony S L; Palmeri, Thomas J; Little, Daniel R
2018-06-01
Upright faces are thought to be processed more holistically than inverted faces. In the widely used composite face paradigm, holistic processing is inferred from interference in recognition performance from a to-be-ignored face half for upright and aligned faces compared with inverted or misaligned faces. We sought to characterize the nature of holistic processing in composite faces in computational terms. We use logical-rule models (Fifić, Little, & Nosofsky, 2010) and Systems Factorial Technology (Townsend & Nozawa, 1995) to examine whether composite faces are processed by pooling top and bottom face halves into a single processing channel (coactive processing), which is one common mechanistic definition of holistic processing. By specifically operationalizing holistic processing as the pooling of features into a single decision process in our task, we are able to distinguish it from other processing models that may underlie composite face processing. For instance, a failure of selective attention might result even when top and bottom components of composite faces are processed in serial or in parallel, without processing the entire face coactively. Our results show that performance is best explained by a mixture of serial and parallel processing architectures across all 4 upright and inverted, aligned and misaligned face conditions. The results indicate multichannel, featural processing of composite faces in a manner inconsistent with the notion of coactivity. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
A systematic review of the human body burden of e-waste exposure in China.
Song, Qingbin; Li, Jinhui
2014-07-01
As China is one of the countries facing the most serious pollution and human exposure effects of e-waste in the world, much of the population there is exposed to potentially hazardous substances through informal e-waste recycling processes. This report reviews recent studies on human exposure to e-waste in China, with particular focus on exposure routes (e.g. dietary intake, inhalation, and soil/dust ingestion) and human body burden markers (e.g. placenta, umbilical cord blood, breast milk, blood, hair, and urine), and assesses the evidence for an association between such e-waste exposure and the human body burden in China. The results suggest that residents in the e-waste exposure areas, located mainly in the three traditional e-waste recycling sites (Taizhou, Guiyu, and Qingyuan), face a potentially higher daily intake of these pollutants than residents in the control areas, especially via food ingestion. Moreover, pollutants (PBBs, PBDEs, PCBs, PCDD/Fs, and heavy metals) from the e-waste recycling processes were all detectable in tissue samples at high levels, showing that they had entered residents' bodies through environmental and dietary exposure. Children and neonates are the groups most sensitive to the bodily effects of e-waste exposure. We also recorded plausible outcomes associated with exposure to e-waste, including 7 types of human body burden. Although the data suggest that exposure to e-waste is harmful to health, better designed epidemiological investigations in vulnerable populations, especially neonates and children, are needed to confirm these associations. Copyright © 2014 Elsevier Ltd. All rights reserved.
Perceptual expertise in forensic facial image comparison
White, David; Phillips, P. Jonathon; Hahn, Carina A.; Hill, Matthew; O'Toole, Alice J.
2015-01-01
Forensic facial identification examiners are required to match the identity of faces in images that vary substantially, owing to changes in viewing conditions and in a person's appearance. These identifications affect the course and outcome of criminal investigations and convictions. Despite calls for research on sources of human error in forensic examination, existing scientific knowledge of face matching accuracy is based, almost exclusively, on people without formal training. Here, we administered three challenging face matching tests to a group of forensic examiners with many years' experience of comparing face images for law enforcement and government agencies. Examiners outperformed untrained participants and computer algorithms, thereby providing the first evidence that these examiners are experts at this task. Notably, computationally fusing responses of multiple experts produced near-perfect performance. Results also revealed qualitative differences between expert and non-expert performance. First, examiners' superiority was greatest at longer exposure durations, suggestive of more entailed comparison in forensic examiners. Second, experts were less impaired by image inversion than non-expert students, contrasting with face memory studies that show larger face inversion effects in high performers. We conclude that expertise in matching identity across unfamiliar face images is supported by processes that differ qualitatively from those supporting memory for individual faces. PMID:26336174
Coffman, Marika C.; Trubanova, Andrea; Richey, J. Anthony; White, Susan W.; Kim-Spoon, Jungmeen; Ollendick, Thomas H.; Pine, Daniel S.
2016-01-01
Attention to faces is a fundamental psychological process in humans, with atypical attention to faces noted across several clinical disorders. Although many clinical disorders onset in adolescence, there is a lack of well-validated stimulus sets containing adolescent faces available for experimental use. Further, the images comprising most available sets are not controlled for high- and low-level visual properties. Here, we present a cross-site validation of the National Institute of Mental Health Child Emotional Faces Picture Set (NIMH-ChEFS), comprised of 257 photographs of adolescent faces displaying angry, fearful, happy, sad, and neutral expressions. All of the direct facial images from the NIMH-ChEFS set were adjusted in terms of location of facial features and standardized for luminance, size, and smoothness. Although overall agreement between raters in this study and the original development-site raters was high (89.52%), this differed by group such that agreement was lower for adolescents relative to mental health professionals in the current study. These results suggest that future research using this face set or others of adolescent/child faces should base comparisons on similarly-aged validation data. PMID:26359940
Perceptual Learning: 12-Month-Olds' Discrimination of Monkey Faces
ERIC Educational Resources Information Center
Fair, Joseph; Flom, Ross; Jones, Jacob; Martin, Justin
2012-01-01
Six-month-olds reliably discriminate different monkey and human faces whereas 9-month-olds only discriminate different human faces. It is often falsely assumed that perceptual narrowing reflects a permanent change in perceptual abilities. In 3 experiments, ninety-six 12-month-olds' discrimination of unfamiliar monkey faces was examined. Following…
Faces are special but not too special: Spared face recognition in amnesia is based on familiarity
Aly, Mariam; Knight, Robert T.; Yonelinas, Andrew P.
2014-01-01
Most current theories of human memory are material-general in the sense that they assume that the medial temporal lobe (MTL) is important for retrieving the details of prior events, regardless of the specific type of materials. Recent studies of amnesia have challenged the material-general assumption by suggesting that the MTL may be necessary for remembering words, but is not involved in remembering faces. We examined recognition memory for faces and words in a group of amnesic patients, which included hypoxic patients and patients with extensive left or right MTL lesions. Recognition confidence judgments were used to plot receiver operating characteristics (ROCs) in order to more fully quantify recognition performance and to estimate the contributions of recollection and familiarity. Consistent with the extant literature, an analysis of overall recognition accuracy showed that the patients were impaired at word memory but had spared face memory. However, the ROC analysis indicated that the patients were generally impaired at high confidence recognition responses for faces and words, and they exhibited significant recollection impairments for both types of materials. Familiarity for faces was preserved in all patients, but extensive left MTL damage impaired familiarity for words. These results suggest that face recognition may appear to be spared because performance tends to rely heavily on familiarity, a process that is relatively well preserved in amnesia. The findings challenge material-general theories of memory, and suggest that both material and process are important determinants of memory performance in amnesia, and different types of materials may depend more or less on recollection and familiarity. PMID:20833190
Face detection and eyeglasses detection for thermal face recognition
NASA Astrophysics Data System (ADS)
Zheng, Yufeng
2012-01-01
Thermal face recognition has become an active research direction in human identification because it does not rely on illumination conditions. Face detection and eyeglasses detection are necessary steps prior to face recognition using thermal images. Infrared light cannot pass through glasses, so glasses appear as dark areas in a thermal image. One possible solution is to detect the eyeglasses and exclude the eyeglasses areas before face matching. For thermal face detection, a projection profile analysis algorithm is proposed: region growing and morphology operations are used to segment the body of a subject; then the derivatives of two projections (horizontal and vertical) are calculated and analyzed to locate a minimal rectangle containing the face area. The search region for a pair of eyeglasses is within the detected face area. The eyeglasses detection algorithm should produce either a binary mask if eyeglasses are present, or an empty set if there are no eyeglasses. The proposed eyeglasses detection algorithm employs block processing, region growing, and a priori knowledge (i.e., low mean and variance within glasses areas, and the typical shapes and locations of eyeglasses). The results of face detection and eyeglasses detection are quantitatively measured and analyzed against manually defined ground truths (for both face and eyeglasses). Our experimental results show that the proposed face detection and eyeglasses detection algorithms perform very well against the predefined ground truths.
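The projection profile idea can be illustrated in a few lines. This simplified sketch thresholds the image and takes the extent of the row and column projections; it omits the region growing, morphology operations, and derivative analysis the paper actually uses, and `face_bounding_box` is a hypothetical helper.

```python
import numpy as np

def face_bounding_box(thermal, threshold=0.5):
    """Locate a rectangle around the warm region of a normalized thermal
    image via projection profiles: segment warm pixels, project the
    binary mask onto both axes, and take each profile's nonzero extent."""
    mask = thermal > threshold              # crude warm-body segmentation
    rows = np.flatnonzero(mask.sum(axis=1)) # rows with any warm pixels
    cols = np.flatnonzero(mask.sum(axis=0)) # columns with any warm pixels
    if rows.size == 0 or cols.size == 0:
        return None                         # no warm region found
    return rows[0], rows[-1], cols[0], cols[-1]

# toy thermal image: a warm 20x15 "face" block on a cold background
img = np.zeros((100, 80))
img[10:30, 20:35] = 1.0
box = face_bounding_box(img)
```

Eyeglasses detection would then search only within this rectangle, flagging cold (dark) blocks with low mean and variance as candidate lens regions.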
Motion Planning in a Society of Intelligent Mobile Agents
NASA Technical Reports Server (NTRS)
Esterline, Albert C.; Shafto, Michael (Technical Monitor)
2002-01-01
The majority of the work on this grant involved formal modeling of human-computer integration. We conceptualize computer resources as a multiagent system so that these resources and human collaborators may be modeled uniformly. In previous work we had used modal logic for this uniform modeling, and we had developed a process-algebraic agent abstraction. In this work, we applied this abstraction (using CSP) to uniformly model agents and users, which allowed us to use tools for investigating CSP models. This work revealed the power of process-algebraic handshakes in modeling face-to-face conversation. We also investigated specifications of human-computer systems in the style of algebraic specification. This involved specifying the common knowledge required for coordination and the process-algebraic patterns of communication actions intended to establish that common knowledge. We investigated the conditions under which agents endowed with perception gain common knowledge and implemented a prototype neural-network system that allows agents to detect when such conditions hold. The literature on multiagent systems conceptualizes communication actions as speech acts. We implemented a prototype system that infers the deontic effects (obligations, permissions, prohibitions) of speech acts and detects violations of these effects. A prototype distributed system was developed that allows users to collaborate in moving proxy agents; it was designed to exploit handshakes and common knowledge. Finally, in work carried over from a previous NASA ARC grant, about fifteen undergraduates developed and presented projects on multiagent motion planning.
Seligman, Martin E.P.; Kahana, Michael
2009-01-01
Can intuition be taught? The way in which faces are recognized, the structure of natural classes, and the architecture of intuition may all be instances of the same process. The conjecture that intuition is a species of recognition memory implies that human intuitive decision making can be enormously enhanced by virtual simulation. PMID:20300491
Incremental Inductive Learning in a Constructivist Agent
NASA Astrophysics Data System (ADS)
Perotto, Filipo Studzinski; Älvares, Luís Otávio
The constructivist paradigm in Artificial Intelligence was definitively inaugurated in the early 1990s by Drescher's pioneering work [10]. He faced the challenge of designing an alternative model for machine learning, founded on the human cognitive developmental process described by Piaget [x]. His effort has inspired many other researchers.
Continuing Education in Architecture: The Process, the Issues, the Challenge.
ERIC Educational Resources Information Center
Frandson, Phillip E.
1980-01-01
The author sees three critical issues facing the architecture field: (1) the communications gap between client and practitioner; (2) humanization of the environment; and (3) financial, spatial, material, and societal constraints. He examines the role of continuing education and professional associations in responding to those challenges, which are…
Kujala, Miiamaaria V; Somppi, Sanni; Jokela, Markus; Vainio, Outi; Parkkonen, Lauri
2017-01-01
Facial expressions are important for humans in communicating emotions to conspecifics and enhancing interpersonal understanding. Many muscles that produce facial expressions in humans are also found in domestic dogs, but little is known about how humans perceive dog facial expressions, and which psychological factors influence people's perceptions. Here, we asked 34 observers to rate the valence, arousal, and the six basic emotions (happiness, sadness, surprise, disgust, fear, and anger/aggressiveness) from images of human and dog faces with Pleasant, Neutral and Threatening expressions. We investigated how the subjects' personality (the Big Five Inventory), empathy (Interpersonal Reactivity Index) and experience of dog behavior affect the ratings of dog and human faces. Ratings of both species followed similar general patterns: human subjects classified dog facial expressions from pleasant to threatening very similarly to human facial expressions. Subjects with higher emotional empathy evaluated Threatening faces of both species as more negative in valence and higher in anger/aggressiveness. More empathetic subjects also rated the happiness of Pleasant humans, but not dogs, higher, and they were quicker in their valence judgments of Pleasant human, Threatening human and Threatening dog faces. Experience with dogs correlated positively with ratings of Pleasant and Neutral dog faces. Personality also had a minor effect on the ratings of Pleasant and Neutral faces in both species. The results imply that humans perceive human and dog facial expressions in a similar manner, and that the perception of both species is influenced by psychological factors of the evaluators. Empathy, in particular, affects both the speed and intensity of rating dogs' emotional facial expressions. PMID:28114335
Behaviorally Relevant Abstract Object Identity Representation in the Human Parietal Cortex
Jeong, Su Keun
2016-01-01
The representation of object identity is fundamental to human vision. Using fMRI and multivoxel pattern analysis, here we report the representation of highly abstract object identity information in human parietal cortex. Specifically, in superior intraparietal sulcus (IPS), a region previously shown to track visual short-term memory capacity, we found object identity representations for famous faces varying freely in viewpoint, hairstyle, facial expression, and age; and for well known cars embedded in different scenes, and shown from different viewpoints and sizes. Critically, these parietal identity representations were behaviorally relevant as they closely tracked the perceived face-identity similarity obtained in a behavioral task. Meanwhile, the task-activated regions in prefrontal and parietal cortices (excluding superior IPS) did not exhibit such abstract object identity representations. Unlike previous studies, we also failed to observe identity representations in posterior ventral and lateral visual object-processing regions, likely due to the greater amount of identity abstraction demanded by our stimulus manipulation here. Our MRI slice coverage precluded us from examining identity representation in anterior temporal lobe, a likely region for the computing of identity information in the ventral region. Overall, we show that human parietal cortex, part of the dorsal visual processing pathway, is capable of holding abstract and complex visual representations that are behaviorally relevant. These results argue against a “content-poor” view of the role of parietal cortex in attention. Instead, the human parietal cortex seems to be “content rich” and capable of directly participating in goal-driven visual information representation in the brain. SIGNIFICANCE STATEMENT The representation of object identity (including faces) is fundamental to human vision and shapes how we interact with the world. 
Although object representation has traditionally been associated with human occipital and temporal cortices, here we show, by measuring fMRI response patterns, that a region in the human parietal cortex can robustly represent task-relevant object identities. These representations are invariant to changes in a host of visual features, such as viewpoint, and reflect an abstract level of representation that has not previously been reported in the human parietal cortex. Critically, these neural representations are behaviorally relevant as they closely track the perceived object identities. Human parietal cortex thus participates in the moment-to-moment goal-directed visual information representation in the brain. PMID:26843642
NASA Astrophysics Data System (ADS)
Liu, Zexi; Cohen, Fernand
2017-11-01
We describe an approach for synthesizing a three-dimensional (3-D) face structure from one or more images of a human face taken at a priori unknown poses, using gender- and ethnicity-specific 3-D generic models. The synthesis process starts with a generic model, which is personalized as images of the person become available, using preselected landmark points that are tessellated to form a high-resolution triangular mesh. From a single image, two of the three coordinates of the model are reconstructed in accordance with the given image of the person, while the third coordinate is sampled from the generic model, and the appearance is made in accordance with the image. With multiple images, all coordinates and the appearance are reconstructed in accordance with the observed images. This method allows for accurate pose estimation as well as face identification in 3-D, turning a difficult two-dimensional (2-D) face recognition problem into a much simpler 3-D surface matching problem. The estimation of the unknown pose is achieved using the Levenberg-Marquardt optimization process. Encouraging experimental results are obtained in a controlled environment with high-resolution images under good illumination, as well as for images taken in an uncontrolled environment under arbitrary illumination with low-resolution cameras.
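The pose estimation step, fitting unknown pose parameters by minimizing landmark reprojection error with Levenberg-Marquardt, can be sketched as follows. The single-angle pose model, orthographic projection, and random landmarks are deliberate simplifications of the paper's full 3-D setup, used only to show the optimization pattern.

```python
import numpy as np
from scipy.optimize import least_squares

def project(params, model_pts):
    """Project 3-D landmarks under a reduced pose model: yaw rotation
    about the y-axis plus 2-D translation, orthographic projection."""
    yaw, tx, ty = params
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    rotated = model_pts @ R.T
    return rotated[:, :2] + np.array([tx, ty])

def residuals(params, model_pts, image_pts):
    # reprojection error, flattened for the least-squares solver
    return (project(params, model_pts) - image_pts).ravel()

# synthetic ground truth: 12 hypothetical landmarks and a known pose
rng = np.random.default_rng(1)
model_pts = rng.normal(size=(12, 3))
true_params = np.array([0.3, 0.5, -0.2])
image_pts = project(true_params, model_pts)

# Levenberg-Marquardt recovers the pose from a zero initial guess
fit = least_squares(residuals, x0=np.zeros(3), method="lm",
                    args=(model_pts, image_pts))
```

A full implementation would estimate three rotation angles, scale, and translation against the tessellated landmark mesh, but the solver call pattern is the same.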
Quarto, Tiziana; Blasi, Giuseppe; Maddalena, Chiara; Viscanti, Giovanna; Lanciano, Tiziana; Soleti, Emanuela; Mangiulli, Ivan; Taurisano, Paolo; Fazio, Leonardo; Bertolino, Alessandro; Curci, Antonietta
2016-01-01
The human ability to identify, process and regulate emotions from social stimuli is generally referred to as Emotional Intelligence (EI). Within EI, Ability EI identifies a performance measure assessing individual skills at perceiving, using, understanding and managing emotions. Previous models suggest that a brain “somatic marker circuitry” (SMC) sustains emotional sub-processes included in EI. Three primary brain regions are included: the amygdala, the insula and the ventromedial prefrontal cortex (vmPFC). Here, our aim was to investigate the relationship between Ability EI scores and SMC activity during social judgment of emotional faces. Sixty-three healthy subjects completed a test measuring Ability EI and underwent fMRI during a social decision task (i.e. approach or avoid) about emotional faces with different facial expressions. Imaging data revealed that EI scores are associated with left insula activity during social judgment of emotional faces as a function of facial expression. Specifically, higher EI scores are associated with greater left insula activity during social judgment of fearful faces but also with lower activity of this region during social judgment of angry faces. These findings indicate that the association between Ability EI and SMC activity during social behavior is region- and emotion-specific. PMID:26859495
Chaotic time series analysis of vision evoked EEG
NASA Astrophysics Data System (ADS)
Zhang, Ningning; Wang, Hong
2010-01-01
To investigate human brain activity during aesthetic processing, a beautiful woman's face picture and an ugly buffoon's face picture were used as stimuli. Twelve subjects performed the aesthetic processing task while the electroencephalogram (EEG) was recorded. Event-related brain potentials (ERPs) were acquired from 32 scalp electrodes; the ugly buffoon picture produced larger amplitudes for the N1, P2, N2, and late slow wave components. Average ERPs from the ugly buffoon picture were larger than those from the beautiful woman picture. The ERP signals show that the ugly buffoon face elicited stronger emotion-related waves than the beautiful woman's face, presumably because of the expression on the buffoon's face. Chaotic time series analysis was then carried out to calculate the largest Lyapunov exponent using the small-data-set method and the correlation dimension using the Grassberger-Procaccia (G-P) algorithm. The results show that the largest Lyapunov exponents of the ERP signals are greater than zero, indicating that the ERP signals may be chaotic. The correlation dimensions obtained from the beautiful woman picture are larger than those from the ugly buffoon picture, suggesting that the beautiful face excites more brain nerve cells. These results support the view that cerebral activity is chaotic under such picture stimuli.
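The two quantities named above can be sketched in a few lines. The code below is illustrative only: a noisy sine stands in for the EEG signal, and the embedding and radius parameters are assumptions, not the paper's. It computes the Grassberger-Procaccia correlation sum C(r) after time-delay embedding and estimates the correlation dimension as the slope of log C(r) against log r.

```python
import math
import random

def embed(series, dim, delay):
    """Time-delay embedding of a scalar series into dim-dimensional vectors."""
    n = len(series) - (dim - 1) * delay
    return [tuple(series[i + j * delay] for j in range(dim)) for i in range(n)]

def correlation_sum(vectors, r):
    """G-P correlation sum: fraction of vector pairs closer than r."""
    n, count = len(vectors), 0
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(vectors[i], vectors[j]) < r:
                count += 1
    return 2.0 * count / (n * (n - 1))

def correlation_dimension(series, dim=3, delay=2, r1=0.1, r2=0.3):
    """Two-point slope estimate of log C(r) versus log r."""
    v = embed(series, dim, delay)
    c1, c2 = correlation_sum(v, r1), correlation_sum(v, r2)
    return (math.log(c2) - math.log(c1)) / (math.log(r2) - math.log(r1))

# Example: a noisy sine as a stand-in signal (a 1-D limit cycle,
# so the estimate should come out near 1)
random.seed(0)
signal = [math.sin(0.3 * t) + 0.01 * random.gauss(0, 1) for t in range(400)]
d2 = correlation_dimension(signal)
```

A real analysis would fit the slope over a whole scaling range of r values rather than two points, and estimate the Lyapunov exponent separately (e.g. via Rosenstein's small-data-set method of tracking nearest-neighbour divergence).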
Modulation of α power and functional connectivity during facial affect recognition.
Popov, Tzvetan; Miller, Gregory A; Rockstroh, Brigitte; Weisz, Nathan
2013-04-03
Research has linked oscillatory activity in the α frequency range, particularly in sensorimotor cortex, to processing of social actions. Results further suggest involvement of sensorimotor α in the processing of facial expressions, including affect. The sensorimotor face area may be critical for perception of emotional face expression, but the role it plays is unclear. The present study sought to clarify how oscillatory brain activity contributes to or reflects processing of facial affect during changes in facial expression. Neuromagnetic oscillatory brain activity was monitored while 30 volunteers viewed videos of human faces that changed their expression from neutral to fearful, neutral, or happy expressions. Induced changes in α power during the different morphs, source analysis, and graph-theoretic metrics served to identify the role of α power modulation and cross-regional coupling by means of phase synchrony during facial affect recognition. Changes from neutral to emotional faces were associated with a 10-15 Hz power increase localized in bilateral sensorimotor areas, together with occipital power decrease, preceding reported emotional expression recognition. Graph-theoretic analysis revealed that, in the course of a trial, the balance between sensorimotor power increase and decrease was associated with decreased and increased transregional connectedness as measured by node degree. Results suggest that modulations in α power facilitate early registration, with the sensorimotor cortex (including the sensorimotor face area) largely functionally decoupled and thereby protected from additional, disruptive input, and that the subsequent α power decrease, together with increased connectedness of sensorimotor areas, facilitates successful facial affect recognition.
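Node degree, the graph metric used above, simply counts a region's suprathreshold couplings. A minimal sketch on a made-up phase-locking matrix (all values and the threshold are invented for illustration, not the study's data):

```python
# Hypothetical phase-locking values between 4 regions (symmetric, made up)
plv = [
    [1.0, 0.8, 0.2, 0.6],
    [0.8, 1.0, 0.3, 0.1],
    [0.2, 0.3, 1.0, 0.7],
    [0.6, 0.1, 0.7, 1.0],
]
THRESHOLD = 0.5  # illustrative cutoff for calling two regions "connected"

def node_degree(matrix, thr):
    """Count suprathreshold connections per node, ignoring self-coupling."""
    n = len(matrix)
    return [sum(1 for j in range(n) if j != i and matrix[i][j] > thr)
            for i in range(n)]

degrees = node_degree(plv, THRESHOLD)  # → [2, 1, 1, 2]
```

Higher degree for a region at a given moment corresponds to the "increased transregional connectedness" described in the abstract.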
Kume, Yuko; Maekawa, Toshihiko; Urakawa, Tomokazu; Hironaga, Naruhito; Ogata, Katsuya; Shigyo, Maki; Tobimatsu, Shozo
2016-08-01
When and where the awareness of faces is consciously initiated is unclear. We used magnetoencephalography to probe the brain responses associated with face awareness under intermittent pseudo-rivalry (PR) and binocular rivalry (BR) conditions. The stimuli comprised three pictures: a human face, a monkey face and a house. In the PR condition, we detected the M130 component, which has been minimally characterized in previous research. We obtained a clear recording of the M170 component in the fusiform face area (FFA), and found that this component had an earlier response time to faces compared with other objects. The M170 occurred predominantly in the right hemisphere in both conditions. In the BR condition, the amplitude of the M130 significantly increased in the right hemisphere irrespective of the physical characteristics of the visual stimuli. Conversely, we did not detect the M170 when the face image was suppressed in the BR condition, although this component was clearly present when awareness for the face was initiated. We also found a significant difference in the latency of the M170 (human
The bridge of iconicity: from a world of experience to the experience of language.
Perniss, Pamela; Vigliocco, Gabriella
2014-09-19
Iconicity, a resemblance between properties of linguistic form (both in spoken and signed languages) and meaning, has traditionally been considered to be a marginal, irrelevant phenomenon for our understanding of language processing, development and evolution. Rather, the arbitrary and symbolic nature of language has long been taken as a design feature of the human linguistic system. In this paper, we propose an alternative framework in which iconicity in face-to-face communication (spoken and signed) is a powerful vehicle for bridging between language and human sensori-motor experience, and, as such, iconicity provides a key to understanding language evolution, development and processing. In language evolution, iconicity might have played a key role in establishing displacement (the ability of language to refer beyond what is immediately present), which is core to what language does; in ontogenesis, iconicity might play a critical role in supporting referentiality (learning to map linguistic labels to objects, events, etc., in the world), which is core to vocabulary development. Finally, in language processing, iconicity could provide a mechanism to account for how language comes to be embodied (grounded in our sensory and motor systems), which is core to meaningful communication.
Becoming a Lunari or Taiyo expert: learned attention to parts drives holistic processing of faces.
Chua, Kao-Wei; Richler, Jennifer J; Gauthier, Isabel
2014-06-01
Faces are processed holistically, but the locus of holistic processing remains unclear. We created two novel races of faces (Lunaris and Taiyos) to study how experience with face parts influences holistic processing. In Experiment 1, subjects individuated Lunaris wherein the top, bottom, or both face halves contained diagnostic information. Subjects who learned to attend to face parts exhibited no holistic processing. This suggests that individuation only leads to holistic processing when the whole face is attended. In Experiment 2, subjects individuated both Lunaris and Taiyos, with diagnostic information in complementary face halves of the two races. Holistic processing was measured with composites made of either diagnostic or nondiagnostic face parts. Holistic processing was only observed for composites made from diagnostic face parts, demonstrating that holistic processing can occur for diagnostic face parts that were never seen together. These results suggest that holistic processing is an expression of learned attention to diagnostic face parts. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Human perceptual decision making: disentangling task onset and stimulus onset.
Cardoso-Leite, Pedro; Waszak, Florian; Lepsien, Jöran
2014-07-01
The left dorsolateral prefrontal cortex (ldlPFC) has been highlighted as a key actor in human perceptual decision-making (PDM): It is theorized to support decision-formation independently of stimulus type or motor response. PDM studies however generally confound stimulus onset and task onset: when the to-be-recognized stimulus is presented, subjects know that a stimulus is shown and can set up processing resources, even when they do not know which stimulus is shown. We hypothesized that the ldlPFC might be involved in task preparation rather than decision-formation. To test this, we asked participants to report whether sequences of noisy images contained a face or a house within an experimental design that decorrelates stimulus and task onset. Decision-related processes should yield a sustained response during the task, whereas preparation-related areas should yield transient responses at its beginning. The results show that the brain activation pattern at task onset is strikingly similar to that observed in previous PDM studies. In particular, they contradict the idea that ldlPFC forms an abstract decision and suggest instead that its activation reflects preparation for the upcoming task. We further investigated the role of the fusiform face areas and parahippocampal place areas which are thought to be face and house detectors, respectively, that feed their signals to higher level decision areas. The response patterns within these areas suggest that this interpretation is unlikely and that the decisions about the presence of a face or a house in a noisy image might instead already be computed within these areas without requiring higher-order areas. Copyright © 2013 Wiley Periodicals, Inc.
Elastic facial movement influences part-based but not holistic processing
Xiao, Naiqi G.; Quinn, Paul C.; Ge, Liezhong; Lee, Kang
2013-01-01
Face processing has been studied for decades. However, most of the empirical investigations have been conducted using static face images as stimuli. Little is known about whether static face processing findings can be generalized to real world contexts, in which faces are constantly moving. The present study investigates the nature of face processing (holistic vs. part-based) in elastic moving faces. Specifically, we focus on whether elastic moving faces, as compared to static ones, can facilitate holistic or part-based face processing. Using the composite paradigm, participants were asked to remember either an elastic moving face (i.e., a face that blinks and chews) or a static face, and then tested with a static composite face. The composite effect was (1) significantly smaller in the dynamic condition than in the static condition, (2) consistently found with different face encoding times (Experiments 1–3), and (3) present for the recognition of both upper and lower face parts (Experiment 4). These results suggest that elastic facial motion facilitates part-based processing, rather than holistic processing. Thus, while previous work with static faces has emphasized an important role for holistic processing, the current work highlights an important role for featural processing with moving faces. PMID:23398253
Kitayama, Shinobu
2014-01-01
The fundamentally social nature of humans is revealed in their exquisitely high sensitivity to potentially negative evaluations held by others. At present, however, little is known about neurocortical correlates of the response to such social-evaluative threat. Here, we addressed this issue by showing that mere exposure to an image of a watching face is sufficient to automatically evoke a social-evaluative threat for those who are relatively high in interdependent self-construal. Both European American and Asian participants performed a flanker task while primed with a face (vs control) image. The relative increase of the error-related negativity (ERN) in the face (vs control) priming condition became more pronounced as a function of interdependent (vs independent) self-construal. Relative to European Americans, Asians were more interdependent and, as predicted, they showed a reliably stronger ERN in the face (vs control) priming condition. Our findings suggest that the ERN can serve as a robust empirical marker of self-threat that is closely modulated by socio-cultural variables. PMID:23160814
Repetition suppression of faces is modulated by emotion
NASA Astrophysics Data System (ADS)
Ishai, Alumit; Pessoa, Luiz; Bikle, Philip C.; Ungerleider, Leslie G.
2004-06-01
Single-unit recordings and functional brain imaging studies have shown reduced neural responses to repeated stimuli in the visual cortex. By using event-related functional MRI, we compared the activation evoked by repetitions of neutral and fearful faces, which were either task relevant (targets) or irrelevant (distracters). We found that within the inferior occipital gyri, lateral fusiform gyri, superior temporal sulci, amygdala, and the inferior frontal gyri/insula, targets evoked stronger responses than distracters and their repetition was associated with significantly reduced responses. Repetition suppression, as manifested by the difference in response amplitude between the first and third repetitions of a target, was stronger for fearful than neutral faces. Distracter faces, regardless of their repetition or valence, evoked negligible activation, indicating top-down attenuation of behaviorally irrelevant stimuli. Our findings demonstrate a three-way interaction between emotional valence, repetition, and task relevance and suggest that repetition suppression is influenced by high-level cognitive processes in the human brain.
Shape, color, and the other-race effect in the infant brain
Balas, Benjamin; Westerlund, Alissa; Hung, Katherine; Nelson, Charles A.
2015-01-01
The “other-race” effect describes the phenomenon in which faces are difficult to distinguish from one another if they belong to an ethnic or racial group to which the observer has had little exposure. Adult observers typically display multiple forms of recognition error for other-race faces, and infants exhibit behavioral evidence of a developing other-race effect at about 9 months of age. The neural correlates of the adult other-race effect have been identified using ERPs and fMRI, but the effects of racial category on infants’ neural response to face stimuli have to date not been described. We examine two distinct components of the infant ERP response to human faces and demonstrate through the use of computer-generated “hybrid” faces that the observed other-race effect is not the result of low-level sensitivity to 3D shape and color differences between the stimuli. Rather, differential processing depends critically on the joint encoding of race-specific features. PMID:21676108
Exploring the spatio-temporal neural basis of face learning
Yang, Ying; Xu, Yang; Jew, Carol A.; Pyles, John A.; Kass, Robert E.; Tarr, Michael J.
2017-01-01
Humans are experts at face individuation. Although previous work has identified a network of face-sensitive regions and some of the temporal signatures of face processing, as yet, we do not have a clear understanding of how such face-sensitive regions support learning at different time points. To study the joint spatio-temporal neural basis of face learning, we trained subjects to categorize two groups of novel faces and recorded their neural responses using magnetoencephalography (MEG) throughout learning. A regression analysis of neural responses in face-sensitive regions against behavioral learning curves revealed significant correlations with learning in the majority of the face-sensitive regions in the face network, mostly between 150–250 ms, but also after 300 ms. However, the effect was smaller in nonventral regions (within the superior temporal areas and prefrontal cortex) than that in the ventral regions (within the inferior occipital gyri (IOG), midfusiform gyri (mFUS) and anterior temporal lobes). A multivariate discriminant analysis also revealed that IOG and mFUS, which showed strong correlation effects with learning, exhibited significant discriminability between the two face categories at different time points both between 150–250 ms and after 300 ms. In contrast, the nonventral face-sensitive regions, where correlation effects with learning were smaller, did exhibit some significant discriminability, but mainly after 300 ms. In sum, our findings indicate that early and recurring temporal components arising from ventral face-sensitive regions are critically involved in learning new faces. PMID:28570739
Lahaie, A; Mottron, L; Arguin, M; Berthiaume, C; Jemel, B; Saumier, D
2006-01-01
Configural processing in autism was studied in Experiment 1 by using the face inversion effect. A normal inversion effect was observed in the participants with autism, suggesting intact configural face processing. A priming paradigm using partial or complete faces served in Experiment 2 to assess both local and configural face processing. Overall, normal priming effects were found in participants with autism, irrespective of whether the partial face primes were intuitive face parts (i.e., eyes, nose, etc.) or arbitrary segments. An exception, however, was that participants with autism showed magnified priming with single face parts relative to typically developing control participants. The present findings argue for intact configural processing in autism along with an enhanced processing for individual face parts. The face-processing peculiarities known to characterize autism are discussed on the basis of these results and past congruent results with nonsocial stimuli.
Exploring movement and energy in human P-glycoprotein conformational rearrangement.
Zhang, Yue; Gong, Weikang; Wang, Yan; Liu, Yang; Li, Chunhua
2018-04-24
Human P-glycoprotein (P-gp), an ATP-binding cassette transporter, can export a diverse variety of anti-cancer drugs out of the tumor cell. Its overexpression is one of the main causes of the multidrug resistance (MDR) of tumor cells. It has been confirmed that during the substrate transport process, P-gp undergoes a large-scale structural rearrangement from the inward- to the outward-facing state. However, the mechanism by which the nucleotide-binding domains (NBDs) drive the transmembrane domains (TMDs) to open towards the periplasm in the outward-facing state has not yet been fully characterized. Herein, targeted molecular dynamics simulations were performed to explore the conformational rearrangement of human P-gp. The results show that the allosteric process proceeds in a coupled way: the transition is first driven by the NBDs, then transmitted to the cytoplasmic parts of the TMDs, and finally to the periplasmic parts. The trajectories show that besides translational motions, the NBDs undergo a rotational movement, which occurs mainly in the xy plane and ensures the formation of the correct ATP-binding pockets. Analyses of the interaction energies between the six structural segments (cICLs) from the TMDs and the NBDs reveal that their subtle energy differences play an important role in causing the periplasmic parts of the transmembrane helices to separate from each other in the established directions and with appropriate amplitudes. This conclusion can explain two experimental observations about human P-gp to some extent. These studies provide a detailed exploration of the human P-gp rearrangement process and an energetic insight into the TMD reorientation during the P-gp transition.
Ventromedial prefrontal cortex mediates visual attention during facial emotion recognition.
Wolf, Richard C; Philippi, Carissa L; Motzkin, Julian C; Baskaya, Mustafa K; Koenigs, Michael
2014-06-01
The ventromedial prefrontal cortex is known to play a crucial role in regulating human social and emotional behaviour, yet the precise mechanisms by which it subserves this broad function remain unclear. Whereas previous neuropsychological studies have largely focused on the role of the ventromedial prefrontal cortex in higher-order deliberative processes related to valuation and decision-making, here we test whether ventromedial prefrontal cortex may also be critical for more basic aspects of orienting attention to socially and emotionally meaningful stimuli. Using eye tracking during a test of facial emotion recognition in a sample of lesion patients, we show that bilateral ventromedial prefrontal cortex damage impairs visual attention to the eye regions of faces, particularly for fearful faces. This finding demonstrates a heretofore unrecognized function of the ventromedial prefrontal cortex-the basic attentional process of controlling eye movements to faces expressing emotion. © The Author (2014). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
What can individual differences reveal about face processing?
Yovel, Galit; Wilmer, Jeremy B.; Duchaine, Brad
2014-01-01
Faces are probably the most widely studied visual stimulus. Most research on face processing has used a group-mean approach that averages behavioral or neural responses to faces across individuals and treats variance between individuals as noise. However, individual differences in face processing can provide valuable information that complements and extends findings from group-mean studies. Here we demonstrate that studies employing an individual differences approach—examining associations and dissociations across individuals—can answer fundamental questions about the way face processing operates. In particular these studies allow us to associate and dissociate the mechanisms involved in face processing, tie behavioral face processing mechanisms to neural mechanisms, link face processing to broader capacities and quantify developmental influences on face processing. The individual differences approach we illustrate here is a powerful method that should be further explored within the domain of face processing as well as fruitfully applied across the cognitive sciences. PMID:25191241
The structural and functional correlates of the efficiency in fearful face detection.
Wang, Yongchao; Guo, Nana; Zhao, Li; Huang, Hui; Yao, Xiaonan; Sang, Na; Hou, Xin; Mao, Yu; Bi, Taiyong; Qiu, Jiang
2017-06-01
The human visual system is highly efficient at searching for a fearful face, and some individuals are more sensitive than others to this threat-related stimulus. However, little is known about the neural correlates of such variability. In the current study, we used a visual search paradigm and asked subjects to search for either a fearful face or a target gender. Every subject showed a shallower search function for fearful face search than for face gender search, indicating a stable fearful face advantage. We then used voxel-based morphometry (VBM) analysis and correlated this advantage with the gray matter volume (GMV) of several presumably face-related cortical areas. Only the left fusiform gyrus showed a significant positive correlation. Next, we defined the left fusiform gyrus as the seed region and calculated its resting-state functional connectivity to the whole brain. Correlations were also calculated between the fearful face advantage and these connectivities. In this analysis, we found positive correlations in the inferior parietal lobe and the ventral medial prefrontal cortex. These results suggest that the anatomical structure of the left fusiform gyrus may determine the search efficiency for fearful faces, and that the frontoparietal attention network is involved in this process through top-down attentional modulation. Copyright © 2017. Published by Elsevier Ltd.
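A "shallower search function" refers to the slope of response time against the number of items in the display. A sketch of estimating that slope by ordinary least squares, on invented response-time numbers (the values below are assumptions for illustration, not the study's data):

```python
# Hypothetical mean RTs (ms) per display set size; values are invented
set_sizes = [4, 8, 12, 16]
rt_fearful = [620, 648, 672, 701]   # shallow slope: efficient search
rt_gender  = [650, 760, 865, 975]   # steep slope: inefficient search

def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs (here: ms per item)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

fear_slope = slope(set_sizes, rt_fearful)
gend_slope = slope(set_sizes, rt_gender)
# A fearful-face advantage shows up as fear_slope < gend_slope
```

Each subject's advantage score (the quantity correlated with gray matter volume in the study) would be some contrast of these two per-subject slopes.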
Wang, Xu; Zhu, Qi; Song, Yiying; Liu, Jia
2017-08-28
Prior studies on the development of functional specialization in the human brain have mainly focused on age-related increases in regional activation and in connectivity among regions. However, a few recent studies on the face network demonstrate an age-related decrease in face-specialized activation in the extended face network (EFN), in addition to an increase in activation in the core face network (CFN). Here we used a voxel-based global brain connectivity approach to investigate whether development of the face network exhibits both increases and decreases in network connectivity. We found that the voxel-wise resting-state functional connectivity (FC) within the CFN increased with age in bilateral posterior superior temporal sulcus, suggesting integration of the CFN during development. Interestingly, the FC of the voxels in the EFN to the right fusiform face area and occipital face area decreased with age, suggesting that the CFN segregated from the EFN during development. Moreover, the age-related connectivity in the CFN was related to behavioral performance in face processing. Overall, our study demonstrated developmental reorganization of the face network, achieved by both integration within the CFN and segregation of the CFN from the EFN, which may account for the simultaneous increases and decreases in neural activation during the development of the face network. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Zhang, Yi-Qing; Cui, Jing; Zhang, Shu-Min; Zhang, Qi; Li, Xiang
2016-02-01
Modelling temporal networks of human face-to-face contacts is vital both for understanding the spread of airborne pathogens and for word-of-mouth spreading of information. Although many efforts have been devoted to modelling these temporal networks, two important social features, public activity and individual reachability, have been ignored in these models. Here we present a simple model that captures these two features along with other typical properties of empirical face-to-face contact networks. The model describes agents characterized by an attractiveness that slows down the motion of nearby people; agents have an event-triggered activation probability and perform an activity-dependent biased random walk in a square box with periodic boundaries. The model quantitatively reproduces two empirical temporal networks of human face-to-face contacts, as verified by their network properties and the epidemic spreading dynamics on them.
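The model's moving parts can be sketched as a toy simulation. Everything below is an illustrative assumption (agent count, box size, rates, contact radius), not the paper's calibrated parameters: agents walk in a unit square with periodic boundaries, each has an attractiveness that slows down nearby agents, inactive agents reactivate with a fixed probability, and agents within the contact radius log a face-to-face contact each step.

```python
import math
import random

L_BOX, N, STEPS, RADIUS, V = 1.0, 30, 200, 0.05, 0.02  # illustrative values

random.seed(1)
pos = [[random.random(), random.random()] for _ in range(N)]
attract = [random.random() for _ in range(N)]       # attractiveness a_i
active = [random.random() < 0.5 for _ in range(N)]
contacts = 0                                        # face-to-face contact events

def neighbors(i):
    """Agents within RADIUS of agent i, under periodic boundaries."""
    out = []
    for j in range(N):
        if j == i:
            continue
        dx = min(abs(pos[j][0] - pos[i][0]), L_BOX - abs(pos[j][0] - pos[i][0]))
        dy = min(abs(pos[j][1] - pos[i][1]), L_BOX - abs(pos[j][1] - pos[i][1]))
        if dx * dx + dy * dy < RADIUS * RADIUS:
            out.append(j)
    return out

for _ in range(STEPS):
    for i in range(N):
        if not active[i]:
            active[i] = random.random() < 0.1       # event-triggered reactivation
            continue
        nbrs = neighbors(i)
        contacts += len(nbrs)
        # step length shrinks near the most attractive neighbour (biased walk)
        slow = max((attract[j] for j in nbrs), default=0.0)
        ang = random.uniform(0.0, 2.0 * math.pi)
        pos[i][0] = (pos[i][0] + V * (1.0 - slow) * math.cos(ang)) % L_BOX
        pos[i][1] = (pos[i][1] + V * (1.0 - slow) * math.sin(ang)) % L_BOX
```

Logging which pairs are in contact at each step, rather than just a count, would yield the temporal contact network whose statistics the paper compares against empirical data.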
2018-01-01
Reports an error in "Facing Humanness: Facial Width-to-Height Ratio Predicts Ascriptions of Humanity" by Jason C. Deska, E. Paige Lloyd and Kurt Hugenberg (Journal of Personality and Social Psychology, Advanced Online Publication, Aug 28, 2017, np). In the article, there is a data error in the Results section of Study 1c. The fourth sentence of the fourth paragraph should read as follows: High fWHR targets (M = 74.39, SD = 18.25) were rated as equivalently evolved as their low fWHR counterparts (M = 79.39, SD = 15.91). (The following abstract of the original article appeared in record 2017-36694-001.) The ascription of mind to others is central to social cognition. Most research on the ascription of mind has focused on motivated, top-down processes. The current work provides novel evidence that facial width-to-height ratio (fWHR) serves as a bottom-up perceptual signal of humanness. Using a range of well-validated operational definitions of humanness, we provide evidence across 5 studies that target faces with relatively greater fWHR are seen as less than fully human compared with their relatively lower fWHR counterparts. We then present 2 ancillary studies exploring whether the fWHR-to-humanness link is mediated by previously established fWHR-trait links in the literature. Finally, 3 additional studies extend this fWHR-humanness link beyond measurements of humanness, demonstrating that the fWHR-humanness link has consequences for downstream social judgments including the sorts of crimes people are perceived to be guilty of and the social tasks for which they seem helpful. In short, we provide evidence for the hypothesis that individuals with relatively greater facial width-to-height ratio are routinely denied sophisticated, humanlike minds. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
24/7 security system: 60-FPS color EMCCD camera with integral human recognition
NASA Astrophysics Data System (ADS)
Vogelsong, T. L.; Boult, T. E.; Gardner, D. W.; Woodworth, R.; Johnson, R. C.; Heflin, B.
2007-04-01
An advanced surveillance/security system is being developed for unattended 24/7 image acquisition and automated detection, discrimination, and tracking of humans and vehicles. The low-light video camera incorporates an electron-multiplying CCD sensor with a programmable on-chip gain of up to 1000:1, providing effective noise levels of less than 1 electron. The EMCCD camera operates in full color mode under sunlit and moonlit conditions, and in monochrome under quarter-moonlight to overcast-starlight illumination. Sixty-frame-per-second operation and progressive scanning minimize motion artifacts. The acquired image sequences are processed with FPGA-compatible real-time algorithms to detect, localize, and track targets and to reject clutter-induced non-targets under a broad range of illumination conditions and viewing angles. The object detectors are trained from actual image data. Detectors have been developed and demonstrated for faces, upright humans, crawling humans, large animals, cars, and trucks. Detection and tracking of targets too small for template-based detection is also achieved. For face and vehicle targets, the results of the detection are passed to secondary processing to extract recognition templates, which are then compared with a database for identification. When combined with pan-tilt-zoom (PTZ) optics, the resulting system provides a reliable wide-area 24/7 surveillance system that avoids the high life-cycle cost of infrared cameras and image intensifiers.
The "highs and lows" of the human brain on dopaminergics: Evidence from neuropharmacology.
Martins, Daniel; Mehta, Mitul A; Prata, Diana
2017-09-01
Rewards are appetitive events that elicit approach. Ground-breaking findings from neurophysiological experiments in animals, alongside neuropharmacology and neuroimaging research in human samples, have identified dopamine as the main neurochemical messenger of global reward processing in the brain. However, dopamine's contribution to the different components of reward processing remains to be precisely defined. To facilitate the informed design and interpretation of reward studies in humans, we have systematically reviewed all existing human pharmacological studies investigating how drug manipulation of the dopamine system affects reward-related behaviour and its neural correlates. Pharmacological experiments in humans face methodological challenges in terms of: (1) the specificity and safety of the available drugs for administration in humans; (2) uncertainties about pre- or post-synaptic modes of action; and (3) possible interactions with inter-individual neuropsychological or genotypic variables. In order to circumvent some of these limitations, future research should rely on the combination of different levels of observation, in integrative pharmaco-genetic-neurobehavioral approaches, to more completely characterize dopamine's role in both general and modality-specific processing of reward. Copyright © 2017 Elsevier Ltd. All rights reserved.
Damaskinou, Nikoleta; Watling, Dawn
2018-05-01
This study was designed to investigate the patterns of electrophysiological responses of early emotional processing at frontocentral sites in adults and to explore whether adults' activation patterns show hemispheric lateralization for facial emotion processing. Thirty-five adults viewed full face and chimeric face stimuli. After viewing two faces, sequentially, participants were asked to decide which of the two faces was more emotive. The findings from the standard faces and the chimeric faces suggest that emotion processing is present during the early phases of face processing in the frontocentral sites. In particular, sad emotional faces are processed differently than neutral and happy (including happy chimeras) faces in these early phases of processing. Further, there were differences in the electrode amplitudes over the left and right hemisphere, particularly in the early temporal window. This research provides supporting evidence that the chimeric face test is a test of emotion processing that elicits right hemispheric processing.
Young children perceive less humanness in outgroup faces.
McLoughlin, Niamh; Tipper, Steven P; Over, Harriet
2018-03-01
We investigated when young children first dehumanize outgroups. Across two studies, 5- and 6-year-olds were asked to rate how human they thought a set of ambiguous doll-human face morphs were. We manipulated whether these faces belonged to their gender in- or gender outgroup (Study 1) and to a geographically based in- or outgroup (Study 2). In both studies, the tendency to perceive outgroup faces as less human relative to ingroup faces increased with age. Explicit ingroup preference, in contrast, was present even in the youngest children and remained stable across age. These results demonstrate that children dehumanize outgroup members from relatively early in development and suggest that the tendency to do so may be partially distinguishable from intergroup preference. This research has important implications for our understanding of children's perception of humanness and the origins of intergroup bias. © 2017 John Wiley & Sons Ltd.
Dog experts' brains distinguish socially relevant body postures similarly in dogs and humans.
Kujala, Miiamaaria V; Kujala, Jan; Carlson, Synnöve; Hari, Riitta
2012-01-01
We read conspecifics' social cues effortlessly, but little is known about our abilities to understand social gestures of other species. To investigate the neural underpinnings of such skills, we used functional magnetic resonance imaging to study the brain activity of experts and non-experts of dog behavior while they observed humans or dogs either interacting with, or facing away from, a conspecific. The posterior superior temporal sulcus (pSTS) of both subject groups dissociated humans facing toward each other from humans facing away, and in dog experts, a distinction also occurred for dogs facing toward vs. away in a bilateral area extending from the pSTS to the inferior temporo-occipital cortex: the dissociation of dog behavior was significantly stronger in the expert group than in the control group. Furthermore, the control group had stronger pSTS responses to humans than to dogs facing toward a conspecific, whereas in dog experts, the responses were of similar magnitude. These findings suggest that dog experts' brains distinguish socially relevant body postures similarly in dogs and humans.
Body Topography Parcellates Human Sensory and Motor Cortex.
Kuehn, Esther; Dinse, Juliane; Jakobsen, Estrid; Long, Xiangyu; Schäfer, Andreas; Bazin, Pierre-Louis; Villringer, Arno; Sereno, Martin I; Margulies, Daniel S
2017-07-01
The cytoarchitectonic map as proposed by Brodmann currently dominates models of human sensorimotor cortical structure, function, and plasticity. According to this model, primary motor cortex, area 4, and primary somatosensory cortex, area 3b, are homogeneous areas, with the major division lying between the two. Accumulating empirical and theoretical evidence, however, has begun to question the validity of the Brodmann map for various cortical areas. Here, we combined in vivo cortical myelin mapping with functional connectivity analyses and topographic mapping techniques to reassess the validity of the Brodmann map in human primary sensorimotor cortex. We provide empirical evidence that area 4 and area 3b are not homogeneous, but are subdivided into distinct cortical fields, each representing a major body part (the hand and the face). Myelin reductions at the hand-face borders are cortical layer-specific, and coincide with intrinsic functional connectivity borders as defined using large-scale resting-state analyses. Our data extend the Brodmann model in human sensorimotor cortex and suggest that body parts are an important organizing principle, similar to the distinction between sensory and motor processing. © The Author 2017. Published by Oxford University Press.
Gender differences in human single neuron responses to male emotional faces
Newhoff, Morgan; Treiman, David M.; Smith, Kris A.; Steinmetz, Peter N.
2015-01-01
Well-documented differences in the psychology and behavior of men and women have spurred extensive exploration of gender's role within the brain, particularly regarding emotional processing. While neuroanatomical studies clearly show differences between the sexes, the functional effects of these differences are less understood. Neuroimaging studies have shown inconsistent locations and magnitudes of gender differences in brain hemodynamic responses to emotion. To better understand the neurophysiology of these gender differences, we analyzed recordings of single-neuron activity in the human brain as subjects of both genders viewed emotional expressions. This study included recordings of single-neuron activity from 14 (6 male) epileptic patients in four brain areas: amygdala (236 neurons), hippocampus (n = 270), anterior cingulate cortex (n = 256), and ventromedial prefrontal cortex (n = 174). Neural activity was recorded while participants viewed a series of avatar male faces portraying positive, negative or neutral expressions. Significant gender differences were found in the left amygdala, where 23% (n = 15/66) of neurons in men were significantly affected by facial emotion, vs. 8% (n = 6/76) of neurons in women. A Fisher's exact test comparing the two ratios found a highly significant difference between the two (p < 0.01). These results show specific differences between genders at the single-neuron level in the human amygdala. These differences may reflect gender-based distinctions in evolved capacities for emotional processing and also demonstrate the importance of including subject gender as an independent factor in future studies of emotional processing by single neurons in the human amygdala. PMID:26441597
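The group comparison above is a 2×2 contingency test. As a sketch, Fisher's exact test can be recomputed from the reported counts with the Python standard library alone; note that the two-sided p-value depends on the tail convention used, so this version need not reproduce the reported threshold exactly:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins that is no more probable than the observed one.
    """
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(k):
        # Probability of k in the top-left cell with all margins fixed.
        return comb(row1, k) * comb(row2, col1 - k) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # The small tolerance guards against floating-point ties.
    return sum(p_table(k) for k in range(lo, hi + 1)
               if p_table(k) <= p_obs * (1 + 1e-9))

# Left-amygdala counts from the abstract: 15/66 emotion-responsive
# neurons in men vs. 6/76 in women.
p = fisher_exact_two_sided(15, 66 - 15, 6, 76 - 6)
```

For production analyses, `scipy.stats.fisher_exact` implements the same test with the same two-sided convention.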
A computer-generated animated face stimulus set for psychophysiological research
Naples, Adam; Nguyen-Phuc, Alyssa; Coffman, Marika; Kresse, Anna; Faja, Susan; Bernier, Raphael; McPartland, James
2014-01-01
Human faces are fundamentally dynamic, but experimental investigations of face perception traditionally rely on static images of faces. While naturalistic videos of actors have been used with success in some contexts, much research in neuroscience and psychophysics demands carefully controlled stimuli. In this paper, we describe a novel set of computer-generated, dynamic face stimuli. These grayscale faces are tightly controlled for low- and high-level visual properties. All faces are standardized in terms of size, luminance, and the location and size of facial features. Each face begins with a neutral pose and transitions to an expression over the course of 30 frames. Altogether there are 222 stimuli spanning 3 different categories of movement: (1) an affective movement (fearful face); (2) a neutral movement (close-lipped, puffed cheeks with open eyes); and (3) a biologically impossible movement (upward dislocation of eyes and mouth). To determine whether early brain responses sensitive to low-level visual features differed between expressions, we measured the occipital P100 event-related potential (ERP), which is known to reflect differences in early stages of visual processing, and the N170, which reflects structural encoding of faces. We found no differences between faces at the P100, indicating that the different face categories were well matched on low-level image properties. This database provides researchers with a well-controlled set of dynamic faces, matched on low-level image characteristics, that is applicable to a range of research questions in social perception. PMID:25028164
Development of Human Face Literature Database Using Text Mining Approach: Phase I.
Kaur, Paramjit; Krishan, Kewal; Sharma, Suresh K
2018-06-01
The face is an important part of the human body through which an individual communicates in society; a person deprived of a face cannot function in the social world. The number of experiments being performed and the number of research papers being published under the domain of the human face have surged in the past few decades. Several scientific disciplines conduct research on the human face, including medical science, anthropology, information technology (biometrics, robotics, artificial intelligence, etc.), psychology, forensic science, and neuroscience. This signals the need to collect and manage the data concerning the human face so that free public access can be provided to the scientific community. This can be attained by developing databases and tools on the human face using a bioinformatics approach. The current research focuses on creating a database of the literature on the human face. The database can be accessed on the basis of specific keywords, journal name, date of publication, author's name, etc. The collected research papers are stored in the database, so the database will benefit the research community by bringing comprehensive information dedicated to the human face together in one place. Information related to facial morphological features, facial disorders, facial asymmetry, facial abnormalities, and many other parameters can be extracted from this database. The front end has been developed using Hypertext Markup Language (HTML) and Cascading Style Sheets (CSS). The back end has been developed using the hypertext preprocessor (PHP), with JavaScript as the scripting language. MySQL is used for database development, as it is the most widely used relational database management system.
XAMPP (X (cross-platform), Apache, MySQL, PHP, Perl), an open-source web application stack, has been used as the server. The database is still under development; the current paper discusses the initial steps of its creation and the work done to date.
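The access paths the abstract describes (lookup by keyword, journal, date, or author) boil down to a small relational schema plus a LIKE query. The sketch below uses Python's stdlib sqlite3 in place of the actual PHP/MySQL stack; the table name, columns, and sample rows are illustrative assumptions, not taken from the system itself:

```python
import sqlite3

# In-memory stand-in for the MySQL backend.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE papers (
        id       INTEGER PRIMARY KEY,
        title    TEXT NOT NULL,
        authors  TEXT NOT NULL,
        journal  TEXT,
        pub_date TEXT,
        keywords TEXT
    )
""")
con.executemany(
    "INSERT INTO papers (title, authors, journal, pub_date, keywords) "
    "VALUES (?, ?, ?, ?, ?)",
    [("Sample paper on facial asymmetry", "Author A; Author B",
      "Journal X", "2017-03-01", "facial asymmetry; morphology"),
     ("Sample paper on face perception", "Author C",
      "Journal Y", "2013-01-01", "holistic processing; faces")])

def search(term):
    """Keyword search across title, author, journal and keyword fields."""
    like = f"%{term}%"
    cur = con.execute(
        "SELECT title FROM papers "
        "WHERE title LIKE ? OR authors LIKE ? OR journal LIKE ? "
        "OR keywords LIKE ?",
        (like, like, like, like))
    return [row[0] for row in cur.fetchall()]
```

Parameterized queries (the `?` placeholders) are the standard guard against SQL injection in any such web-facing search form.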
One-year-old fear memories rapidly activate human fusiform gyrus
Pizzagalli, Diego A.
2016-01-01
Fast threat detection is crucial for survival. In line with such evolutionary pressure, threat-signaling fear-conditioned faces have been found to rapidly (<80 ms) activate visual brain regions, including the fusiform gyrus, on the conditioning day. Whether remotely fear-conditioned stimuli (CS) evoke similar early processing enhancements is unknown. Here, 16 participants who underwent a differential face fear-conditioning and extinction procedure on day 1 were presented with the initial CS 24 h after conditioning (Recent Recall Test) as well as 9-17 months later (Remote Recall Test) while EEG was recorded. Using a data-driven segmentation procedure of CS-evoked event-related potentials, five distinct microstates were identified for both the recent and the remote memory test. To probe intracranial activity, EEG activity within each microstate was localized using low-resolution electromagnetic tomography analysis (LORETA). In both the recent (41–55 and 150–191 ms) and remote (45–90 ms) recall tests, fear-conditioned faces potentiated rapid activation in the proximity of the fusiform gyrus, even in participants unaware of the contingencies. These findings suggest that rapid processing enhancements of conditioned faces persist over time. PMID:26416784
Rajaei, Karim; Khaligh-Razavi, Seyed-Mahdi; Ghodrati, Masoud; Ebrahimpour, Reza; Shiri Ahmad Abadi, Mohammad Ebrahim
2012-01-01
The brain mechanism of extracting visual features for recognizing various objects has consistently been a controversial issue in computational models of object recognition. To extract visual features, we introduce a new, biologically motivated model for facial categorization, which is an extension of the Hubel and Wiesel simple-to-complex cell hierarchy. To address the synaptic stability versus plasticity dilemma, we apply the Adaptive Resonance Theory (ART) for extracting informative intermediate level visual features during the learning process, which also makes this model stable against the destruction of previously learned information while learning new information. Such a mechanism has been suggested to be embedded within known laminar microcircuits of the cerebral cortex. To reveal the strength of the proposed visual feature learning mechanism, we show that when we use this mechanism in the training process of a well-known biologically motivated object recognition model (the HMAX model), it performs better than the HMAX model in face/non-face classification tasks. Furthermore, we demonstrate that our proposed mechanism is capable of following similar trends in performance as humans in a psychophysical experiment using a face versus non-face rapid categorization task.
Bossard, B.; Renard, J. M.; Capelle, P.; Paradis, P.; Beuscart, M. C.
2000-01-01
Investing in information technology has become a crucial process in hospital management today. Medical and administrative managers are faced with difficulties in measuring medical information technology costs and benefits due to the complexity of the domain. This paper proposes a pre-implementation methodology for evaluating and appraising material, process and human costs and benefits. Based on the users' needs and an organizational process analysis, the methodology provides an evaluative set of financial and non-financial indicators which can be integrated into a decision-making and investment-evaluation process. We describe the first results obtained after a few months of operation for the Computer-Based Patient Record (CPR) project. Its full acceptance, in spite of some difficulties, encourages us to diffuse the method across the entire project. PMID:11079851
Understanding face perception by means of human electrophysiology.
Rossion, Bruno
2014-06-01
Electrophysiological recordings on the human scalp provide a wealth of information about the temporal dynamics and nature of face perception at a global level of brain organization. The time window between 100 and 200 ms witnesses the transition between low-level and high-level vision, with the N170 component correlating with the conscious interpretation of a visual stimulus as a face. This face representation is rapidly refined as information accumulates during this time window, allowing the individualization of faces. To improve the sensitivity and objectivity of face perception measures, it is increasingly important to go beyond transient visual stimulation by recording electrophysiological responses at periodic frequency rates. This approach has recently provided face perception thresholds and the first objective signature of the integration of facial parts in the human brain. Copyright © 2014 Elsevier Ltd. All rights reserved.
The time course of individual face recognition: A pattern analysis of ERP signals.
Nemrodov, Dan; Niemeier, Matthias; Mok, Jenkin Ngo Yin; Nestor, Adrian
2016-05-15
An extensive body of work documents the time course of neural face processing in the human visual cortex. However, the majority of this work has focused on specific temporal landmarks, such as the N170 and N250 components, derived through univariate analyses of EEG data. Here, we take on a broader evaluation of ERP signals related to individual face recognition as we attempt to move beyond the leading theoretical and methodological framework through the application of pattern analysis to ERP data. Specifically, we investigate the spatiotemporal profile of identity recognition across variation in emotional expression. To this end, we apply pattern classification to ERP signals both in time, for any single electrode, and in space, across multiple electrodes. Our results confirm the significance of traditional ERP components in face processing. At the same time, though, they support the idea that the temporal profile of face recognition is incompletely described by such components. First, we show that signals associated with different facial identities can be discriminated from each other outside the scope of these components, as early as 70 ms following stimulus presentation. Next, electrodes associated with traditional ERP components as well as, critically, those not associated with such components are shown to contribute information to stimulus discriminability. And last, the levels of ERP-based pattern discrimination are found to correlate with recognition accuracy across subjects, confirming the relevance of these methods for bridging brain and behavior data. Altogether, the current results shed new light on the fine-grained time course of neural face processing and showcase the value of novel methods for pattern analysis in investigating fundamental aspects of visual recognition. Copyright © 2016 Elsevier Inc. All rights reserved.
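The time-resolved decoding logic described here (classify identity independently at each time point and see where in the epoch discrimination emerges) can be sketched on synthetic data. The nearest-class-mean decoder and every number below are illustrative stand-ins for the actual EEG pipeline, not the authors' method:

```python
import random
from statistics import mean

random.seed(0)

T = 10       # time points per trial; identity information appears at t >= 6
NOISE = 0.5  # per-sample Gaussian noise

def trial(identity):
    """Synthetic single-electrode trace: identities 0 and 1 differ only late."""
    signal = [0.0] * 6 + [2.0 * identity] * (T - 6)
    return [s + random.gauss(0.0, NOISE) for s in signal]

train = {ident: [trial(ident) for _ in range(30)] for ident in (0, 1)}
test_set = [(ident, trial(ident)) for ident in (0, 1) for _ in range(50)]

# Nearest-class-mean decoding, run independently at every time point.
means = {ident: [mean(tr[t] for tr in trials) for t in range(T)]
         for ident, trials in train.items()}

def decode(t):
    """Fraction of test trials whose identity is recovered at time point t."""
    correct = sum(
        ident == min((0, 1), key=lambda c: abs(tr[t] - means[c][t]))
        for ident, tr in test_set)
    return correct / len(test_set)

early_acc = mean(decode(t) for t in range(0, 6))  # should hover near chance
late_acc = mean(decode(t) for t in range(6, T))   # should be well above chance
```

The accuracy-versus-time curve that this produces is the basic object such studies interpret: above-chance decoding at a time point indicates identity information in the signal at that latency.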
Seymour, Ben; Yoshida, Wako; Dolan, Ray
2009-01-01
The origin of altruism remains one of the most enduring puzzles of human behaviour. Indeed, true altruism is often thought either not to exist, or to arise merely as a miscalculation of otherwise selfish behaviour. In this paper, we argue that altruism emerges directly from the way in which distinct human decision-making systems learn about rewards. Using insights provided by neurobiological accounts of human decision-making, we suggest that reinforcement learning in game-theoretic social interactions (habitisation over either individuals or games) and observational learning (either imitative or inference-based) lead to altruistic behaviour. This arises not only as a result of computational efficiency in the face of processing complexity, but also as a direct consequence of optimal inference in the face of uncertainty. Critically, we argue that the fact that evolutionary pressure acts not over the object of learning ('what' is learned), but over the learning systems themselves ('how' things are learned), enables the evolution of altruism despite the direct threat posed by free-riders.
Applying face identification to detecting hijacking of airplane
NASA Astrophysics Data System (ADS)
Luo, Xuanwen; Cheng, Qiang
2004-09-01
That terrorists hijacked airplanes and crashed them into the World Trade Center was a disaster for civilization, and preventing hijackings is critical to homeland security. Reporting a hijacking in time, preventing the hijacker from operating the plane, and landing the plane at the nearest airport could be an efficient way to avert such a disaster. Image-processing techniques for human face recognition or identification could be used for this task. Before the plane takes off, the face images of the pilots are loaded into a face identification system installed in the airplane. A camera in front of the pilot's seat continuously captures the pilot's face during the flight and compares it with the pre-loaded pilot face images. If a different face is detected, a warning signal is sent to the ground automatically; at the same time, the automatic cruise system is engaged or the plane is controlled from the ground, so the hijackers have no control over the plane. The plane is then landed at the nearest or another appropriate airport under ground or cruise-system control. This technique could also be used in the automobile industry as an image key to prevent car theft.
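The core of such a system is a continuous verification loop: compare a feature vector extracted from each camera frame against the enrolled pilots and alarm on mismatch. The sketch below assumes embeddings from some face-recognition model; the threshold, vectors, and function names are all invented for illustration:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

THRESHOLD = 0.9  # illustrative; a real system would calibrate this value

def verify_frame(frame_embedding, enrolled):
    """True if the face in the current frame matches any enrolled pilot."""
    return any(cosine(frame_embedding, e) >= THRESHOLD for e in enrolled)

# Embeddings would come from a face-recognition model; these are toy vectors.
enrolled_pilots = [[0.9, 0.1, 0.2], [0.2, 0.8, 0.3]]
intruder = [-0.5, 0.1, 0.9]

# In-flight check for one frame: an unrecognized face raises the alarm,
# which would trigger the warning signal and hand control to the ground.
alarm = not verify_frame(intruder, enrolled_pilots)
```

In practice the threshold trades false alarms (pilot momentarily unrecognized) against misses, a calibration that matters far more here than the similarity metric itself.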
Parkinson Patients’ Initial Trust in Avatars: Theory and Evidence
Javor, Andrija; Ransmayr, Gerhard; Struhal, Walter; Riedl, René
2016-01-01
Parkinson’s disease (PD) is a neurodegenerative disease that affects the motor system and cognitive and behavioral functions. Due to these impairments, PD patients also have problems in using the computer. However, using computers and the Internet could help these patients to overcome social isolation and enhance information search. Specifically, avatars (defined as virtual representations of humans) are increasingly used in online environments to enhance human-computer interaction by simulating face-to-face interaction. Our laboratory experiment investigated how PD patients behave in a trust game played with human and avatar counterparts, and we compared this behavior to the behavior of age, income, education and gender matched healthy controls. The results of our study show that PD patients trust avatar faces significantly more than human faces. Moreover, there was no significant difference between initial trust of PD patients and healthy controls in avatar faces, while PD patients trusted human faces significantly less than healthy controls. Our data suggests that PD patients’ interaction with avatars may constitute an effective way of communication in situations in which trust is required (e.g., a physician recommends intake of medication). We discuss the implications of these results for several areas of human-computer interaction and neurological research. PMID:27820864
Tree physiology research in a changing world.
Kaufmann, Merrill R.; Linder, Sune
1996-01-01
Changes in issues and advances in methodology have contributed to substantial progress in tree physiology research during the last several decades. Current research focuses on process interactions in complex systems and the integration of processes across multiple spatial and temporal scales. An increasingly important challenge for future research is assuring sustainability of production systems and forested ecosystems in the face of increased demands for natural resources and human disturbance of forests. Meeting this challenge requires significant shifts in research approach, including the study of limitations of productivity that may accompany achievement of system sustainability, and a focus on the biological capabilities of complex land bases altered by human activity.
Crookes, Kate; Favelle, Simone; Hayward, William G.
2013-01-01
Recent evidence suggests stronger holistic processing for own-race faces may underlie the own-race advantage in face memory. In previous studies Caucasian participants have demonstrated larger holistic processing effects for Caucasian over Asian faces. However, Asian participants have consistently shown similar sized effects for both Asian and Caucasian faces. We investigated two proposed explanations for the holistic processing of other-race faces by Asian participants: (1) greater other-race exposure, (2) a general global processing bias. Holistic processing was tested using the part-whole task. Participants were living in predominantly own-race environments and other-race contact was evaluated. Despite reporting significantly greater contact with own-race than other-race people, Chinese participants displayed strong holistic processing for both Asian and Caucasian upright faces. In addition, Chinese participants showed no evidence of holistic processing for inverted faces arguing against a general global processing bias explanation. Caucasian participants, in line with previous studies, displayed stronger holistic processing for Caucasian than Asian upright faces. For inverted faces there were no race-of-face differences. These results are used to suggest that Asians may make more general use of face-specific mechanisms than Caucasians. PMID:23386840
Spatiotemporal dynamics of similarity-based neural representations of facial identity
Vida, Mark D.; Nestor, Adrian; Plaut, David C.; Behrmann, Marlene
2017-01-01
Humans’ remarkable ability to quickly and accurately discriminate among thousands of highly similar complex objects demands rapid and precise neural computations. To elucidate the process by which this is achieved, we used magnetoencephalography to measure spatiotemporal patterns of neural activity with high temporal resolution during visual discrimination among a large and carefully controlled set of faces. We also compared these neural data to lower level “image-based” and higher level “identity-based” model-based representations of our stimuli and to behavioral similarity judgments of our stimuli. Between ∼50 and 400 ms after stimulus onset, face-selective sources in right lateral occipital cortex and right fusiform gyrus and sources in a control region (left V1) yielded successful classification of facial identity. In all regions, early responses were more similar to the image-based representation than to the identity-based representation. In the face-selective regions only, responses were more similar to the identity-based representation at several time points after 200 ms. Behavioral responses were more similar to the identity-based representation than to the image-based representation, and their structure was predicted by responses in the face-selective regions. These results provide a temporally precise description of the transformation from low- to high-level representations of facial identity in human face-selective cortex and demonstrate that face-selective cortical regions represent multiple distinct types of information about face identity at different times over the first 500 ms after stimulus onset. These results have important implications for understanding the rapid emergence of fine-grained, high-level representations of object identity, a computation essential to human visual expertise. PMID:28028220
Face to face with emotion: holistic face processing is modulated by emotional state.
Curby, Kim M; Johnson, Kareem J; Tyson, Alyssa
2012-01-01
Negative emotions are linked with a local, rather than global, visual processing style, which may preferentially facilitate feature-based, relative to holistic, processing mechanisms. Because faces are typically processed holistically, and because social contexts are prime elicitors of emotions, we examined whether negative emotions decrease holistic processing of faces. We induced positive, negative, or neutral emotions via film clips and measured holistic processing before and after the induction: participants made judgements about cued parts of chimeric faces, and holistic processing was indexed by the interference caused by task-irrelevant face parts. Emotional state significantly modulated face-processing style, with the negative emotion induction leading to decreased holistic processing. Furthermore, self-reported change in emotional state correlated with changes in holistic processing. These results contrast with general assumptions that holistic processing of faces is automatic and immune to outside influences, and they illustrate emotion's power to modulate socially relevant aspects of visual perception.
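The holistic-processing index used here, interference from the task-irrelevant face half, can be sketched as a simple congruency effect computed before and after the emotion induction. The accuracy values below are invented for illustration, not the study's data.

```python
def holistic_index(congruent_acc, incongruent_acc):
    # Interference from the task-irrelevant half: a larger accuracy drop
    # on incongruent trials indicates more holistic processing.
    return congruent_acc - incongruent_acc

pre = holistic_index(0.90, 0.70)    # before emotion induction (hypothetical)
post = holistic_index(0.88, 0.80)   # after negative induction (hypothetical)
change = post - pre                 # negative => reduced holistic processing
```

A negative `change`, as in this toy example, is the pattern the authors report after negative emotion induction.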
See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction
XU, TIAN (LINGER); ZHANG, HUI; YU, CHEN
2016-01-01
We focus on a fundamental looking behavior in human-robot interactions – gazing at each other’s face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user’s face as a response to the human’s gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot’s gaze toward the human partner’s face in real time and then analyzed the human’s gaze behavior as a response to the robot’s gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot’s face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained. PMID:28966875
A shape-based account for holistic face processing.
Zhao, Mintao; Bülthoff, Heinrich H; Bülthoff, Isabelle
2016-04-01
Faces are processed holistically, so selective attention to 1 face part without any influence of the others often fails. In this study, 3 experiments investigated what type of facial information (shape or surface) underlies holistic face processing and whether generalization of holistic processing to nonexperienced faces requires extensive discrimination experience. Results show that facial shape information alone is sufficient to elicit the composite face effect (CFE), 1 of the most convincing demonstrations of holistic processing, whereas facial surface information is unnecessary (Experiment 1). The CFE is eliminated when faces differ only in surface but not shape information, suggesting that variation of facial shape information is necessary to observe holistic face processing (Experiment 2). Removing 3-dimensional (3D) facial shape information also eliminates the CFE, indicating the necessity of 3D shape information for holistic face processing (Experiment 3). Moreover, participants show similar holistic processing for faces with and without extensive discrimination experience (i.e., own- and other-race faces), suggesting that generalization of holistic processing to nonexperienced faces requires facial shape information, but does not necessarily require further individuation experience. These results provide compelling evidence that facial shape information underlies holistic face processing. This shape-based account not only offers a consistent explanation for previous studies of holistic face processing, but also suggests a new ground, in addition to expertise, for the generalization of holistic processing to different types of faces and to nonface objects. (c) 2016 APA, all rights reserved.
Prakash, Akanksha; Rogers, Wendy A
2015-04-01
Ample research in social psychology has highlighted the importance of the human face in human-human interactions. However, there is a less clear understanding of how a humanoid robot's face is perceived by humans. One of the primary goals of this study was to investigate how initial perceptions of robots are influenced by the extent of human-likeness of the robot's face, particularly when the robot is intended to provide assistance with tasks in the home that are traditionally carried out by humans. Moreover, although robots have the potential to help both younger and older adults, there is limited knowledge of whether the two age groups' perceptions differ. In this study, younger (N = 32) and older adults (N = 32) imagined interacting with a robot in four different task contexts and rated robot faces of varying levels of human-likeness. Participants were also interviewed to assess their reasons for particular preferences. This multi-method approach identified patterns of perceptions across different appearances as well as reasons that influence the formation of such perceptions. Overall, the results indicated that people's perceptions of robot faces vary as a function of robot human-likeness. People tended to over-generalize their understanding of humans to build expectations about a human-looking robot's behavior and capabilities. Additionally, preferences for humanoid robots depended on the task, although younger and older adults differed in their preferences for certain humanoid appearances. The results of this study have implications both for advancing theoretical understanding of robot perceptions and for creating and applying guidelines for the design of robots.
Gácsi, Márta; Miklósi, Adám; Varga, Orsolya; Topál, József; Csányi, Vilmos
2004-07-01
The ability of animals to use behavioral/facial cues in detection of human attention has been widely investigated. In this test series we studied the ability of dogs to recognize human attention in different experimental situations (ball-fetching game, fetching objects on command, begging from humans). The attentional state of the humans was varied along two variables: (1) facing versus not facing the dog; (2) visible versus non-visible eyes. In the first set of experiments (fetching) the owners were told to take up different body positions (facing or not facing the dog) and to either cover or not cover their eyes with a blindfold. In the second set of experiments (begging) dogs had to choose between two eating humans based on either the visibility of the eyes or direction of the face. Our results show that the efficiency of dogs to discriminate between "attentive" and "inattentive" humans depended on the context of the test, but they could rely on the orientation of the body, the orientation of the head and the visibility of the eyes. With the exception of the fetching-game situation, they brought the object to the front of the human (even if he/she turned his/her back towards the dog), and preferentially begged from the facing (or seeing) human. There were also indications that dogs were sensitive to the visibility of the eyes because they showed more hesitant behavior when approaching a blindfolded owner, and they also preferred to beg from the person with visible eyes. We conclude that dogs are able to rely on the same set of human facial cues for detection of attention, which form the behavioral basis of understanding attention in humans. By recognizing human attention across a range of situations, dogs proved to be more flexible than chimpanzees investigated in similar circumstances.
Processing of configural and componential information in face-selective cortical areas.
Zhao, Mintao; Cheung, Sing-Hang; Wong, Alan C-N; Rhodes, Gillian; Chan, Erich K S; Chan, Winnie W L; Hayward, William G
2014-01-01
We investigated how face-selective cortical areas process configural and componential face information and how race of faces may influence these processes. Participants saw blurred (preserving configural information), scrambled (preserving componential information), and whole faces during an fMRI scan, and performed a post-scan face recognition task using blurred or scrambled faces. The fusiform face area (FFA) showed stronger activation to blurred than to scrambled faces, and equivalent responses to blurred and whole faces. The occipital face area (OFA) showed stronger activation to whole than to blurred faces, which elicited similar responses to scrambled faces. Therefore, the FFA may be more tuned to process configural than componential information, whereas the OFA similarly participates in perception of both. Differences in recognizing own- and other-race blurred faces were correlated with differences in FFA activation to those faces, suggesting that configural processing within the FFA may underlie the other-race effect in face recognition.
Lakshmanan, Usha; Graham, Robert E
2016-01-01
Christiansen & Chater (C&C) offer the Chunk-and-Pass strategy as a language processing approach allowing humans to make sense of incoming language in the face of cognitive and perceptual constraints. We propose that the Chunk-and-Pass strategy is not adequate to extend universally across languages (accounting for typologically diverse languages), nor is it sufficient to generalize to other auditory modalities such as music.
Avoiding Communication Barriers in the Classroom: The APEINTA Project
ERIC Educational Resources Information Center
Iglesias, Ana; Jiménez, Javier; Revuelta, Pablo; Moreno, Lourdes
2016-01-01
Education is a fundamental human right, however unfortunately not everybody has the same learning opportunities. For instance, if a student has hearing impairments, s/he could face communications barriers in the classroom, which could affect his/her learning process. APEINTA is a Spanish educational project that aims for inclusive education for…
SMEs and their E-Commerce: Implications for Training in Wellington, New Zealand
ERIC Educational Resources Information Center
Beal, Tim; Abdullah, Moha Asri
2005-01-01
One of the greatest challenges facing traditional small and medium-sized enterprises (SMEs) throughout the world is that posed by the Internet. While the Internet offers great potential to SMEs, from improving and cheapening production processes through to reaching global customers, it also poses great problems. SMEs' resources, human and…
Self-transcendence: a concept analysis for nursing praxis.
Teixeira, M Elizabeth
2008-01-01
Self-transcendence is a quality inherent in every human being. This process toward personal transformation is instrumental in finding true meaning and purpose in life. When faced with adversity, self-transcendence can be a powerful coping strategy. Clarity of this concept will assist nurses in providing holistic interventions that promote and facilitate self-transcendence.
The State of the Environment 1983. Selected Topics.
ERIC Educational Resources Information Center
United Nations Environment Programme, Nairobi (Kenya).
Two of the most urgent tasks facing the world community are controlling dangerous pollution and finding plentiful supplies of energy, particularly in developing countries. This report examines: (1) what to do about hazardous wastes that endanger human life and health (restricted to wastes from chemical processes and those generated by cleaning or…
Essays on New Careers; Social Implications for Adult Educators.
ERIC Educational Resources Information Center
Riessman, Frank; And Others
These essays concentrate on the challenge that adult education faces in helping the urban poor develop meaningful paraprofessional careers in the human services. In one essay, the reformist approach to improving access to credentials is compared with the radical approach, which questions the validity of the credentials process as well as its…
Visual adaptation of the perception of "life": animacy is a basic perceptual dimension of faces.
Koldewyn, Kami; Hanus, Patricia; Balas, Benjamin
2014-08-01
One critical component of understanding another's mind is the perception of "life" in a face. However, little is known about the cognitive and neural mechanisms underlying this perception of animacy. Here, using a visual adaptation paradigm, we ask whether face animacy is (1) a basic dimension of face perception and (2) supported by a common neural mechanism across distinct face categories defined by age and species. Observers rated the perceived animacy of adult human faces before and after adaptation to (1) adult faces, (2) child faces, and (3) dog faces. When testing the perception of animacy in human faces, we found significant adaptation to both adult and child faces, but not dog faces. We did, however, find significant adaptation when morphed dog images and dog adaptors were used. Thus, animacy perception in faces appears to be a basic dimension of face perception that is species specific but not constrained by age categories.
Bishop, Sonia J.; Aguirre, Geoffrey K.; Nunez-Elizalde, Anwar O.; Toker, Daniel
2015-01-01
Anxious individuals have a greater tendency to categorize faces with ambiguous emotional expressions as fearful (Richards et al., 2002). These behavioral findings might reflect anxiety-related biases in stimulus representation within the human amygdala. Here, we used functional magnetic resonance imaging (fMRI) together with a continuous adaptation design to investigate the representation of faces from three expression continua (surprise-fear, sadness-fear, and surprise-sadness) within the amygdala and other brain regions implicated in face processing. Fifty-four healthy adult participants completed a face expression categorization task. Nineteen of these participants also viewed the same expressions presented using type 1 index 1 sequences while fMRI data were acquired. Behavioral analyses revealed an anxiety-related categorization bias in the surprise-fear continuum alone. Here, elevated anxiety was associated with a more rapid transition from surprise to fear responses as a function of percentage fear in the face presented, leading to increased fear categorizations for faces with a mid-way blend of surprise and fear. fMRI analyses revealed that high trait anxious participants also showed greater representational similarity, as indexed by greater adaptation of the Blood Oxygenation Level Dependent (BOLD) signal, between 50/50 surprise/fear expression blends and faces from the fear end of the surprise-fear continuum in both the right amygdala and right fusiform face area (FFA). No equivalent biases were observed for the other expression continua. These findings suggest that anxiety-related biases in the processing of expressions intermediate between surprise and fear may be linked to differential representation of these stimuli in the amygdala and FFA. The absence of anxiety-related biases for the sad-fear continuum might reflect intermediate expressions from the surprise-fear continuum being most ambiguous in threat-relevance. PMID:25870551
ERIC Educational Resources Information Center
Chawarska, Katarzyna; Volkmar, Fred
2007-01-01
Face recognition impairments are well documented in older children with Autism Spectrum Disorders (ASD); however, the developmental course of the deficit is not clear. This study investigates the progressive specialization of face recognition skills in children with and without ASD. Experiment 1 examines human and monkey face recognition in…
The surprisingly high human efficiency at learning to recognize faces
Peterson, Matthew F.; Abbey, Craig K.; Eckstein, Miguel P.
2009-01-01
We investigated the ability of humans to optimize face recognition performance through rapid learning of individual relevant features. We created artificial faces with discriminating visual information heavily concentrated in single features (nose, eyes, chin or mouth). In each of 2500 learning blocks a feature was randomly selected and retained over the course of four trials, during which observers identified randomly sampled, noisy face images. Observers learned the discriminating feature through indirect feedback, leading to large performance gains. Performance was compared to a learning Bayesian ideal observer, resulting in unexpectedly high learning compared to previous studies with simpler stimuli. We explore various explanations and conclude that the higher learning measured with faces cannot be driven by adaptive eye movement strategies but can be mostly accounted for by suboptimalities in human face discrimination when observers are uncertain about the discriminating feature. We show that an initial bias of humans to use specific features to perform the task even though they are informed that each of four features is equally likely to be the discriminatory feature would lead to seemingly supra-optimal learning. We also examine the possibility of inefficient human integration of visual information across the spatially distributed facial features. Together, the results suggest that humans can show large performance improvement effects in discriminating faces as they learn to identify the feature containing the discriminatory information. PMID:19000918
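The learning Bayesian ideal observer referenced above maintains, in effect, a posterior over which of the four features carries the discriminating information and updates it from trial-to-trial feedback. The sketch below shows only that updating logic; the likelihood values and four-trial block structure stand in for the study's actual observer model and are assumptions for illustration.

```python
import numpy as np

features = ["nose", "eyes", "chin", "mouth"]
posterior = np.full(4, 0.25)            # uniform prior over which feature is diagnostic

def update(posterior, likelihoods):
    # Bayes' rule: P(feature | evidence) is proportional to
    # P(evidence | feature) * prior, renormalized to sum to 1.
    unnorm = posterior * likelihoods
    return unnorm / unnorm.sum()

# Suppose each trial's feedback is three times more probable under the
# "eyes" hypothesis (index 1) than under the alternatives:
trial_likelihoods = np.array([0.2, 0.6, 0.2, 0.2])
for _ in range(4):                      # four trials per learning block
    posterior = update(posterior, trial_likelihoods)

best = features[int(np.argmax(posterior))]
```

After four consistent trials the posterior concentrates heavily on the correct feature, which is the sense in which the ideal observer "learns the discriminating feature" within a block.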
Neonatal face-to-face interactions promote later social behaviour in infant rhesus monkeys
Dettmer, Amanda M.; Kaburu, Stefano S. K.; Simpson, Elizabeth A.; Paukner, Annika; Sclafani, Valentina; Byers, Kristen L.; Murphy, Ashley M.; Miller, Michelle; Marquez, Neal; Miller, Grace M.; Suomi, Stephen J.; Ferrari, Pier F.
2016-01-01
In primates, including humans, mothers engage in face-to-face interactions with their infants, with frequencies varying both within and across species. However, the impact of this variation in face-to-face interactions on infant social development is unclear. Here we report that infant monkeys (Macaca mulatta) who engaged in more neonatal face-to-face interactions with mothers have increased social interactions at 2 and 5 months. In a controlled experiment, we show that this effect is not due to physical contact alone: monkeys randomly assigned to receive additional neonatal face-to-face interactions (mutual gaze and intermittent lip-smacking) with human caregivers display increased social interest at 2 months, compared with monkeys who received only additional handling. These studies suggest that face-to-face interactions from birth promote young primate social interest and competency. PMID:27300086
Gonzalez Bernaldo de Quiros, Fernan; Dawidowski, Adriana R; Figar, Silvana
2017-02-01
In this study, we aimed: 1) to conceptualize the theoretical challenges facing health information systems (HIS) to represent patients' decisions about health and medical treatments in everyday life; 2) to suggest approaches for modeling these processes. The conceptualization of the theoretical and methodological challenges was discussed in 2015 during a series of interdisciplinary meetings attended by health informatics staff, epidemiologists and health professionals working in quality management and primary and secondary prevention of chronic diseases of the Hospital Italiano de Buenos Aires, together with sociologists, anthropologists and e-health stakeholders. HIS are facing the need and challenge to represent social human processes based on constructivist and complexity theories, which are the current frameworks of human sciences for understanding human learning and socio-cultural changes. Computer systems based on these theories can model processes of social construction of concrete and subjective entities and the interrelationships between them. These theories could be implemented, among other ways, through the mapping of health assets, analysis of social impact through community trials and modeling of complexity with system simulation tools. This analysis suggested the need to complement the traditional linear causal explanations of disease onset (and treatments) that are the bases for models of analysis of HIS with constructivist and complexity frameworks. Both may enlighten the complex interrelationships among patients, health services and the health system. The aim of this strategy is to clarify people's decision making processes to improve the efficiency, quality and equity of the health services and the health system.
On the domain-specificity of the visual and non-visual face-selective regions.
Axelrod, Vadim
2016-08-01
What happens in our brains when we see a face? The neural mechanisms of face processing - namely, the face-selective regions - have been extensively explored. Research has traditionally focused on visual cortex face-regions; more recently, the role of face-regions outside the visual cortex (i.e., non-visual-cortex face-regions) has been acknowledged as well. The major quest today is to reveal the functional role of each of these regions in face processing. To make progress in this direction, it is essential to understand the extent to which the face-regions, and particularly the non-visual-cortex face-regions, process only faces (i.e., face-specific, domain-specific processing) or rather are involved in more domain-general cognitive processing. In the current functional MRI study, we systematically examined the activity of the whole face-network during a face-unrelated reading task (i.e., written meaningful sentences with content unrelated to faces/people and non-words). We found that the non-visual-cortex face-regions (i.e., right lateral prefrontal cortex and posterior superior temporal sulcus), but not the visual cortex face-regions, responded significantly more strongly to sentences than to non-words. In general, some degree of sentence selectivity was found in all non-visual-cortex face-regions. The present results highlight the possibility that the processing in the non-visual-cortex face-selective regions might not be exclusively face-specific, but rather more or even fully domain-general. In this paper, we illustrate how knowledge about domain-general processing in face-regions can help to advance our general understanding of face processing mechanisms. Our results therefore suggest that the problem of face processing should be approached in the broader scope of cognition in general. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Wu, Jinglong; Chen, Kewei; Imajyo, Satoshi; Ohno, Seiichiro; Kanazawa, Susumu
2013-01-01
In human visual cortex, the primary visual cortex (V1) is considered to be essential for visual information processing; the fusiform face area (FFA) and parahippocampal place area (PPA) are considered face-selective and place-selective regions, respectively. Recently, a functional magnetic resonance imaging (fMRI) study showed that the neural activity ratios between V1 and FFA were constant as eccentricity increased in the central visual field. However, in the wide visual field, the neural activity relationships between V1 and FFA or V1 and PPA are still unclear. In this work, using fMRI and a wide-view presentation system, we addressed this issue by measuring neural activities in V1, FFA and PPA for images of faces and houses aligned at 4 eccentricities and 4 meridians. We then calculated the ratio relative to V1 (RRV1) by comparing the neural response amplitudes in FFA or PPA with those in V1. We found that V1, FFA, and PPA showed significantly different neural activities to faces and houses in 3 dimensions: eccentricity, meridian, and region. Most importantly, the RRV1s in FFA and PPA also exhibited significant differences in these 3 dimensions. In the dimension of eccentricity, both FFA and PPA showed smaller RRV1s at the central position than at peripheral positions. In the meridian dimension, both FFA and PPA showed larger RRV1s at upper vertical positions than at lower vertical positions. In the dimension of region, FFA had larger RRV1s than PPA. We propose that these differential RRV1s indicate that FFA and PPA might have different processing strategies for encoding wide-field visual information from V1. These different processing strategies might depend on the retinal position at which faces or houses are typically observed in daily life. We posit a role of experience in shaping the information processing strategies in the ventral visual cortex. PMID:23991147
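The RRV1 measure defined above is a simple ratio of response amplitudes, so it can be sketched directly. The amplitude values below are made up for illustration and merely reproduce the qualitative pattern the abstract reports (smaller RRV1 centrally than peripherally); they are not the study's data.

```python
import numpy as np

def rrv1(region_amplitude, v1_amplitude):
    # Ratio relative to V1: response amplitude in a higher-order region
    # (FFA or PPA) divided by the V1 amplitude at the same position.
    return region_amplitude / v1_amplitude

# Hypothetical amplitudes at 4 eccentricities, central -> peripheral:
v1 = np.array([1.2, 1.0, 0.8, 0.6])
ffa = np.array([0.6, 0.7, 0.7, 0.6])

ratios = rrv1(ffa, v1)
# Smaller RRV1 at the central position than anywhere in the periphery:
central_smaller = ratios[0] < ratios[1:].min()
```

The same computation applies to PPA; comparing `ratios` across eccentricities, meridians, and regions gives the three-dimensional contrasts the study analyzes.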
Familiarity facilitates feature-based face processing.
Visconti di Oleggio Castello, Matteo; Wheeler, Kelsey G; Cipolli, Carlo; Gobbini, M Ida
2017-01-01
Recognition of personally familiar faces is remarkably efficient, effortless and robust. We asked if feature-based face processing facilitates detection of familiar faces by testing the effect of face inversion on a visual search task for familiar and unfamiliar faces. Because face inversion disrupts configural and holistic face processing, we hypothesized that inversion would diminish the familiarity advantage to the extent that it is mediated by such processing. Subjects detected personally familiar and stranger target faces in arrays of two, four, or six face images. Subjects showed significant facilitation of personally familiar face detection for both upright and inverted faces. The effect of familiarity on target absent trials, which involved only rejection of unfamiliar face distractors, suggests that familiarity facilitates rejection of unfamiliar distractors as well as detection of familiar targets. The preserved familiarity effect for inverted faces suggests that facilitation of face detection afforded by familiarity reflects mostly feature-based processes.
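A familiarity advantage in a visual search task of this kind is typically quantified as a search slope: reaction time regressed on set size (here 2, 4, or 6 faces), with a shallower slope indicating more efficient detection. The reaction times below are fabricated for illustration, not the study's data.

```python
import numpy as np

set_sizes = np.array([2, 4, 6])
rt_familiar = np.array([620.0, 700.0, 780.0])   # ms, hypothetical means
rt_stranger = np.array([650.0, 810.0, 970.0])   # ms, hypothetical means

def search_slope(set_sizes, rts):
    # Fit RT = slope * set_size + intercept; np.polyfit(deg=1)
    # returns [slope, intercept].
    slope, _ = np.polyfit(set_sizes, rts, 1)
    return slope

familiar_slope = search_slope(set_sizes, rt_familiar)   # ms per added face
stranger_slope = search_slope(set_sizes, rt_stranger)
```

With these toy numbers the familiar-target slope (40 ms/item) is half the stranger-target slope (80 ms/item); running the same fit separately for upright and inverted arrays is how one would test whether inversion abolishes the advantage.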
Taubert, Jessica; Parr, Lisa A
2011-01-01
All primates can recognize faces and do so by analyzing the subtle variation that exists between faces. Through a series of three experiments, we attempted to clarify the nature of second-order information processing in nonhuman primates. Experiment one showed that both chimpanzees (Pan troglodytes) and rhesus monkeys (Macaca mulatta) tolerate geometric distortions along the vertical axis, suggesting that information about absolute position of features does not contribute to accurate face recognition. Chimpanzees differed from monkeys, however, in that they were more sensitive to distortions along the horizontal axis, suggesting that when building a global representation of facial identity, horizontal relations between features are more diagnostic of identity than vertical relations. Two further experiments were performed to determine whether the monkeys were simply less sensitive to horizontal relations compared to chimpanzees or were instead relying on local features. The results of these experiments confirm that monkeys can utilize a holistic strategy when discriminating between faces regardless of familiarity. In contrast, our data show that chimpanzees, like humans, use a combination of holistic and local features when the faces are unfamiliar, but primarily holistic information when the faces become familiar. We argue that our comparative approach to the study of face recognition reveals the impact that individual experience and social organization has on visual cognition.
Perceptual expertise in forensic facial image comparison.
White, David; Phillips, P Jonathon; Hahn, Carina A; Hill, Matthew; O'Toole, Alice J
2015-09-07
Forensic facial identification examiners are required to match the identity of faces in images that vary substantially, owing to changes in viewing conditions and in a person's appearance. These identifications affect the course and outcome of criminal investigations and convictions. Despite calls for research on sources of human error in forensic examination, existing scientific knowledge of face matching accuracy is based, almost exclusively, on people without formal training. Here, we administered three challenging face matching tests to a group of forensic examiners with many years' experience of comparing face images for law enforcement and government agencies. Examiners outperformed untrained participants and computer algorithms, thereby providing the first evidence that these examiners are experts at this task. Notably, computationally fusing responses of multiple experts produced near-perfect performance. Results also revealed qualitative differences between expert and non-expert performance. First, examiners' superiority was greatest at longer exposure durations, suggestive of more entailed comparison in forensic examiners. Second, experts were less impaired by image inversion than non-expert students, contrasting with face memory studies that show larger face inversion effects in high performers. We conclude that expertise in matching identity across unfamiliar face images is supported by processes that differ qualitatively from those supporting memory for individual faces. © 2015 The Author(s).
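The "computationally fusing responses of multiple experts" result can be illustrated with the simplest possible fusion rule: average each examiner's rating for a face pair and threshold the mean. The ratings, scale, and threshold below are illustrative assumptions; the paper's actual fusion scheme is not reproduced here.

```python
import numpy as np

def fuse(ratings):
    # ratings: experts x trials, e.g. on a 1-7 scale
    # (higher = more confident the pair shows the same person).
    return np.asarray(ratings, dtype=float).mean(axis=0)

experts = [
    [6, 2, 5, 1],   # expert A's ratings on four hypothetical face pairs
    [7, 3, 6, 2],   # expert B
    [5, 1, 7, 1],   # expert C
]
fused = fuse(experts)
decisions = fused > 4.0   # call "same person" if mean rating exceeds midpoint
```

Averaging cancels each individual's idiosyncratic errors, which is why fused expert responses can approach ceiling even when no single examiner is perfect.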
Coffman, Marika C; Trubanova, Andrea; Richey, J Anthony; White, Susan W; Kim-Spoon, Jungmeen; Ollendick, Thomas H; Pine, Daniel S
2015-12-01
Attention to faces is a fundamental psychological process in humans, with atypical attention to faces noted across several clinical disorders. Although many clinical disorders onset in adolescence, there is a lack of well-validated stimulus sets containing adolescent faces available for experimental use. Further, the images comprising most available sets are not controlled for high- and low-level visual properties. Here, we present a cross-site validation of the National Institute of Mental Health Child Emotional Faces Picture Set (NIMH-ChEFS), comprising 257 photographs of adolescent faces displaying angry, fearful, happy, sad, and neutral expressions. All of the direct facial images from the NIMH-ChEFS set were adjusted in terms of location of facial features and standardized for luminance, size, and smoothness. Although overall agreement between raters in this study and the original development-site raters was high (89.52%), this differed by group such that agreement was lower for adolescents relative to mental health professionals in the current study. These results suggest that future research using this face set or others of adolescent/child faces should base comparisons on similarly-aged validation data. Copyright © 2015 John Wiley & Sons, Ltd.
Liu, Pan; Rigoulot, Simon; Pell, Marc D
2017-12-01
To explore how cultural immersion modulates emotion processing, this study examined how Chinese immigrants to Canada process multisensory emotional expressions, which were compared to existing data from two groups, Chinese and North Americans. Stroop and Oddball paradigms were employed to examine different stages of emotion processing. The Stroop task presented face-voice pairs expressing congruent/incongruent emotions and participants actively judged the emotion of one modality while ignoring the other. A significant effect of cultural immersion was observed in the immigrants' behavioral performance, which showed greater interference from to-be-ignored faces, comparable with what was observed in North Americans. However, this effect was absent in their N400 data, which retained the same pattern as the Chinese. In the Oddball task, where immigrants passively viewed facial expressions with/without simultaneous vocal emotions, they exhibited a larger visual MMN for faces accompanied by voices, again mirroring patterns observed in Chinese. Correlation analyses indicated that the immigrants' living duration in Canada was associated with neural patterns (N400 and visual mismatch negativity) more closely resembling North Americans. Our data suggest that in multisensory emotion processing, adapting to a new culture first leads to behavioral accommodation followed by alterations in brain activities, providing new evidence on humans' neurocognitive plasticity in communication.
Experience-dependent changes in the development of face preferences in infant rhesus monkeys.
Parr, Lisa A; Murphy, Lauren; Feczko, Eric; Brooks, Jenna; Collantes, Marie; Heitz, Thomas R
2016-12-01
It is well known that early experience shapes the development of visual perception for faces in humans. However, the effect of experience on the development of social attention in non-human primates is unknown. In two studies, we examined the effect of cumulative social experience on developmental changes in attention to the faces of unfamiliar conspecifics or heterospecifics, and mom versus an unfamiliar female. From birth, infant rhesus monkeys preferred to look at conspecific compared to heterospecific faces, but this pattern reversed over time. In contrast, no consistent differences were found for attention to mom's face compared to an unfamiliar female. These results suggest differential roles of social experience in shaping the development of face preferences in infant monkeys. Results have important implications for establishing normative trajectories for the development of face preferences in an animal model of human social behavior. © 2016 Wiley Periodicals, Inc.
Rogers, Wendy A.
2015-01-01
Ample research in social psychology has highlighted the importance of the human face in human–human interactions. However, there is a less clear understanding of how a humanoid robot's face is perceived by humans. One of the primary goals of this study was to investigate how initial perceptions of robots are influenced by the extent of human-likeness of the robot's face, particularly when the robot is intended to provide assistance with tasks in the home that are traditionally carried out by humans. Moreover, although robots have the potential to help both younger and older adults, there is limited knowledge of whether the two age groups' perceptions differ. In this study, younger (N = 32) and older adults (N = 32) imagined interacting with a robot in four different task contexts and rated robot faces of varying levels of human-likeness. Participants were also interviewed to assess their reasons for particular preferences. This multi-method approach identified patterns of perceptions across different appearances as well as reasons that influence the formation of such perceptions. Overall, the results indicated that people's perceptions of robot faces vary as a function of robot human-likeness. People tended to over-generalize their understanding of humans to build expectations about a human-looking robot's behavior and capabilities. Additionally, preferences for humanoid robots depended on the task although younger and older adults differed in their preferences for certain humanoid appearances. The results of this study have implications both for advancing theoretical understanding of robot perceptions and for creating and applying guidelines for the design of robots. PMID:26294936
Neural bases of eye and gaze processing: The core of social cognition
Itier, Roxane J.; Batty, Magali
2014-01-01
Eyes and gaze are very important stimuli for human social interactions. Recent studies suggest that impairments in recognizing face identity, facial emotions or in inferring attention and intentions of others could be linked to difficulties in extracting the relevant information from the eye region including gaze direction. In this review, we address the central role of eyes and gaze in social cognition. We start with behavioral data demonstrating the importance of the eye region and the impact of gaze on the most significant aspects of face processing. We review neuropsychological cases and data from various imaging techniques such as fMRI/PET and ERP/MEG, in an attempt to best describe the spatio-temporal networks underlying these processes. The existence of a neuronal eye detector mechanism is discussed as well as the links between eye gaze and social cognition impairments in autism. We suggest impairments in processing eyes and gaze may represent a core deficiency in several other brain pathologies and may be central to abnormal social cognition. PMID:19428496
Neural correlates of own- and other-race face perception: spatial and temporal response differences.
Natu, Vaidehi; Raboy, David; O'Toole, Alice J
2011-02-01
Humans show an "other-race effect" for face recognition, with more accurate recognition of own- versus other-race faces. We compared the neural representations of own- and other-race faces using functional magnetic resonance imaging (fMRI) data in combination with a multi-voxel pattern classifier. Neural activity was recorded while Asians and Caucasians viewed Asian and Caucasian faces. A pattern classifier, applied to voxels across a broad range of ventral temporal areas, discriminated the brain activity maps elicited in response to Asian versus Caucasian faces in the brains of both Asians and Caucasians. Classification was most accurate in the first few time points of the block and required the use of own-race faces in the localizer scan to select voxels for classifier input. Next, we examined differences in the time-course of neural responses to own- and other-race faces and found evidence for a temporal "other-race effect." Own-race faces elicited a larger neural response initially that attenuated rapidly. The response to other-race faces was weaker at first, but increased over time, ultimately surpassing the magnitude of the own-race response in the fusiform "face" area (FFA). A similar temporal response pattern held across a broad range of ventral temporal areas. The pattern-classification results indicate the early availability of categorical information about own- versus other-race face status in the spatial pattern of neural activity. The slower, more sustained, brain response to other-race faces may indicate the need to recruit additional neural resources to process other-race faces for identification. Copyright © 2010 Elsevier Inc. All rights reserved.
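The multi-voxel pattern-classification step can be illustrated with a correlation-based nearest-centroid sketch, a standard MVPA baseline rather than necessarily the classifier used in the study; the voxel count, category labels, and noise levels below are invented:

```python
import numpy as np

def nearest_centroid_predict(train_patterns, train_labels, test_patterns):
    """Label each test pattern with the class whose mean training pattern
    it correlates with most strongly (a standard correlation-based
    MVPA baseline)."""
    classes = sorted(set(train_labels))
    label_arr = np.array(train_labels)
    centroids = [train_patterns[label_arr == c].mean(axis=0) for c in classes]
    predictions = []
    for pattern in test_patterns:
        r = [np.corrcoef(pattern, centroid)[0, 1] for centroid in centroids]
        predictions.append(classes[int(np.argmax(r))])
    return predictions

# Synthetic voxel patterns: two face categories with distinct prototypes
rng = np.random.default_rng(1)
n_voxels = 100
proto_a = rng.normal(0, 1, n_voxels)  # stand-in for one face category
proto_b = rng.normal(0, 1, n_voxels)  # stand-in for the other category
train = np.vstack([proto_a + rng.normal(0, 0.5, (20, n_voxels)),
                   proto_b + rng.normal(0, 0.5, (20, n_voxels))])
labels = ["A"] * 20 + ["B"] * 20
test = np.vstack([proto_a + rng.normal(0, 0.5, (5, n_voxels)),
                  proto_b + rng.normal(0, 0.5, (5, n_voxels))])
print(nearest_centroid_predict(train, labels, test))
```

The key property, as in the study, is that category information is read out from the spatial pattern of activity across many voxels rather than from any single voxel's amplitude.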
Looking at My Own Face: Visual Processing Strategies in Self–Other Face Recognition
Chakraborty, Anya; Chakrabarti, Bhismadev
2018-01-01
We live in an age of ‘selfies.’ Yet, how we look at our own faces has seldom been systematically investigated. In this study we test if the visual processing of the highly familiar self-face is different from other faces, using psychophysics and eye-tracking. This paradigm also enabled us to test the association between the psychophysical properties of self-face representation and visual processing strategies involved in self-face recognition. Thirty-three adults performed a self-face recognition task from a series of self-other face morphs with simultaneous eye-tracking. Participants were found to look longer at the lower part of the face for self-face compared to other-face. Participants with a more distinct self-face representation, as indexed by a steeper slope of the psychometric response curve for self-face recognition, were found to look longer at the upper part of the faces identified as ‘self’ vs. those identified as ‘other’. This result indicates that self-face representation can influence where we look when we process our own vs. others’ faces. We also investigated the association of autism-related traits with self-face processing metrics since autism has previously been associated with atypical self-processing. The study did not find any self-face specific association with autistic traits, suggesting that autism-related features may be related to self-processing in a domain-specific manner. PMID:29487554
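The "steeper slope" measure of self-face distinctness can be illustrated by fitting a logistic psychometric function to proportion-'self' responses along a self-other morph continuum. This is a sketch with invented data and a simple grid-search fit, not the authors' fitting procedure:

```python
import numpy as np

def fit_logistic(morph_level, p_self):
    """Least-squares grid-search fit of p(self) = 1/(1 + exp(-k*(x - x0))).
    Returns (x0, k); a steeper slope k indexes a more distinct self-face
    representation."""
    best_x0, best_k, best_err = 0.0, 0.0, np.inf
    for x0 in np.linspace(0, 100, 101):
        for k in np.linspace(0.01, 1.0, 100):
            pred = 1.0 / (1.0 + np.exp(-k * (morph_level - x0)))
            err = np.sum((pred - p_self) ** 2)
            if err < best_err:
                best_x0, best_k, best_err = x0, k, err
    return best_x0, best_k

# Morph continuum: 0% = other face, 100% = self face
levels = np.array([0, 20, 40, 50, 60, 80, 100], dtype=float)
sharp = np.array([0.02, 0.05, 0.20, 0.50, 0.80, 0.95, 0.98])    # distinct observer
shallow = np.array([0.20, 0.30, 0.42, 0.50, 0.58, 0.70, 0.80])  # less distinct
_, k_sharp = fit_logistic(levels, sharp)
_, k_shallow = fit_logistic(levels, shallow)
print(k_sharp > k_shallow)  # steeper slope for the more distinct observer
```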
Recognizing Disguised Faces: Human and Machine Evaluation
Dhamecha, Tejas Indulal; Singh, Richa; Vatsa, Mayank; Kumar, Ajay
2014-01-01
Face verification, though an easy task for humans, is a long-standing open research area. This is largely due to the challenging covariates, such as disguise and aging, which make it very hard to accurately verify the identity of a person. This paper investigates human and machine performance for recognizing/verifying disguised faces. Performance is also evaluated under familiarity and match/mismatch with the ethnicity of observers. The findings of this study are used to develop an automated algorithm to verify the faces presented under disguise variations. We use automatically localized feature descriptors which can identify disguised face patches and account for this information to achieve improved matching accuracy. The performance of the proposed algorithm is evaluated on the IIIT-Delhi Disguise database that contains images pertaining to 75 subjects with different kinds of disguise variations. The experiments suggest that the proposed algorithm outperforms a popular commercial system; the algorithm is also evaluated against human observers in matching disguised face images. PMID:25029188
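The idea of detecting disguised patches and discounting them during matching can be sketched with a toy patch-correlation score. This is only an analogue of the approach; the paper's actual feature descriptors and weighting scheme are not reproduced here, and the images and occlusion pattern are synthetic:

```python
import numpy as np

def robust_face_score(a, b, patch=8, keep=0.75):
    """Similarity between two aligned grayscale faces: correlate
    corresponding patches, then average only the best-matching fraction
    so that patches disrupted by disguise contribute less."""
    h, w = a.shape
    scores = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            pa = a[i:i + patch, j:j + patch].ravel()
            pb = b[i:i + patch, j:j + patch].ravel()
            if pa.std() == 0 or pb.std() == 0:
                continue  # flat patch: correlation undefined
            scores.append(np.corrcoef(pa, pb)[0, 1])
    scores = np.sort(np.array(scores))[::-1]
    return float(scores[: max(1, int(len(scores) * keep))].mean())

rng = np.random.default_rng(2)
face = rng.normal(0, 1, (32, 32))
same = face + rng.normal(0, 0.2, (32, 32))  # same face, slight variation
same[:8, :] = rng.normal(0, 1, (8, 32))     # "disguise" occludes the top patches
different = rng.normal(0, 1, (32, 32))      # unrelated face
print(robust_face_score(face, same) > robust_face_score(face, different))
```

Dropping the worst-matching patches lets the genuine pair score high despite the occluded region, which is the intuition behind accounting for disguised patches.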
Translation and articulation in biological motion perception.
Masselink, Jana; Lappe, Markus
2015-08-01
Recent models of biological motion processing focus on the articulational aspect of human walking investigated by point-light figures walking in place. However, in real human walking, the change in the position of the limbs relative to each other (referred to as articulation) results in a change of body location in space over time (referred to as translation). In order to examine the role of this translational component on the perception of biological motion we designed three psychophysical experiments of facing (leftward/rightward) and articulation discrimination (forward/backward and leftward/rightward) of a point-light walker viewed from the side, varying translation direction (relative to articulation direction), the amount of local image motion, and trial duration. In a further set of a forward/backward and a leftward/rightward articulation task, we additionally tested the influence of translational speed, including catch trials without articulation. We found a perceptual bias in translation direction in all three discrimination tasks. In the case of facing discrimination the bias was limited to short stimulus presentation. Our results suggest an interaction of articulation analysis with the processing of translational motion leading to best articulation discrimination when translational direction and speed match articulation. Moreover, we conclude that the global motion of the center-of-mass of the dot pattern is more relevant to processing of translation than the local motion of the dots. Our findings highlight that translation is a relevant cue that should be integrated in models of human motion detection.
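The decomposition of a walking stimulus into translation and articulation can be made concrete with a one-dot sketch; the parameters are invented, and real point-light walkers use many dots with richer joint kinematics:

```python
import numpy as np

def limb_dot_x(t, walk_speed, amp, freq, phase):
    """x-position of one point-light dot: translation (linear drift of the
    body through space) plus articulation (oscillation of the limb about
    the body's centre of mass)."""
    translation = walk_speed * t
    articulation = amp * np.sin(2 * np.pi * freq * t + phase)
    return translation + articulation

t = np.linspace(0.0, 2.0, 200)  # 2 s of walking
x = limb_dot_x(t, walk_speed=1.2, amp=0.3, freq=1.0, phase=0.0)

# Subtracting the centre-of-mass drift recovers the "walking in place" signal
articulation_only = x - 1.2 * t
print(np.allclose(articulation_only, 0.3 * np.sin(2 * np.pi * t)))  # True
```

A walker-in-place stimulus contains only the second term; the study's point is that the first term, carried by the global motion of the dot pattern's centre of mass, interacts with how the articulation is perceived.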
Zhou, Guomei; Cheng, Zhijie; Yue, Zhenzhu; Tredoux, Colin; He, Jibo; Wang, Ling
2015-01-01
Studies have shown that people are better at recognizing human faces of their own race than of other races, an effect often termed the Own-Race Advantage. The current study investigates whether there is an Own-Race Advantage in attention and its neural correlates. Participants were asked to search for a human face among animal faces. Experiment 1 showed a classic Own-Race Advantage in response time both for Chinese and Black South African participants. Using event-related potentials (ERPs), Experiment 2 showed a similar Own-Race Advantage in response time for both upright faces and inverted faces. Moreover, the latency of N2pc for own-race faces was earlier than that for other-race faces. These results suggested that own-race faces capture attention more efficiently than other-race faces.
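An N2pc latency comparison of the kind reported here can be illustrated with a toy peak-latency measurement on synthetic waveforms; the component timing and shape below are invented for illustration:

```python
import numpy as np

def peak_latency(waveform, times, t_min, t_max):
    """Latency (s) of the most negative deflection inside a search window,
    a simple way to compare N2pc timing between conditions."""
    mask = (times >= t_min) & (times <= t_max)
    return float(times[mask][np.argmin(waveform[mask])])

times = np.arange(0.0, 0.5, 0.002)  # 500 ms epoch sampled at 500 Hz

def synthetic_n2pc(center):
    # Negative-going Gaussian component centered at `center` seconds
    return -np.exp(-((times - center) ** 2) / (2 * 0.02 ** 2))

own_race = synthetic_n2pc(0.21)    # earlier component
other_race = synthetic_n2pc(0.25)  # later component
lat_own = peak_latency(own_race, times, 0.15, 0.35)
lat_other = peak_latency(other_race, times, 0.15, 0.35)
print(lat_own < lat_other)  # True: own-race N2pc peaks earlier
```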
Dog Experts' Brains Distinguish Socially Relevant Body Postures Similarly in Dogs and Humans
Kujala, Miiamaaria V.; Kujala, Jan; Carlson, Synnöve; Hari, Riitta
2012-01-01
We read conspecifics' social cues effortlessly, but little is known about our abilities to understand social gestures of other species. To investigate the neural underpinnings of such skills, we used functional magnetic resonance imaging to study the brain activity of experts and non-experts of dog behavior while they observed humans or dogs either interacting with, or facing away from a conspecific. The posterior superior temporal sulcus (pSTS) of both subject groups dissociated humans facing toward each other from humans facing away, and in dog experts, a distinction also occurred for dogs facing toward vs. away in a bilateral area extending from the pSTS to the inferior temporo-occipital cortex: the dissociation of dog behavior was significantly stronger in expert than control group. Furthermore, the control group had stronger pSTS responses to humans than dogs facing toward a conspecific, whereas in dog experts, the responses were of similar magnitude. These findings suggest that dog experts' brains distinguish socially relevant body postures similarly in dogs and humans. PMID:22720054
ERIC Educational Resources Information Center
Rossion, Bruno; Hanseeuw, Bernard; Dricot, Laurence
2012-01-01
A number of human brain areas showing a larger response to faces than to objects from different categories, or to scrambled faces, have been identified in neuroimaging studies. Depending on the statistical criteria used, the set of areas can be overextended or minimized, both at the local (size of areas) and global (number of areas) levels. Here…
Davila-Ross, Marina; Jesus, Goncalo; Osborne, Jade; Bard, Kim A
2015-01-01
The ability to flexibly produce facial expressions and vocalizations has a strong impact on the way humans communicate, as it promotes more explicit and versatile forms of communication. Whereas facial expressions and vocalizations are unarguably closely linked in primates, the extent to which these expressions can be produced independently in nonhuman primates is unknown. The present work, thus, examined if chimpanzees produce the same types of facial expressions with and without accompanying vocalizations, as do humans. Forty-six chimpanzees (Pan troglodytes) were video-recorded during spontaneous play with conspecifics at the Chimfunshi Wildlife Orphanage. ChimpFACS was applied, a standardized coding system to measure chimpanzee facial movements, based on FACS developed for humans. Data showed that the chimpanzees produced the same 14 configurations of open-mouth faces when laugh sounds were present and when they were absent. Chimpanzees, thus, produce these facial expressions flexibly without being morphologically constrained by the accompanying vocalizations. Furthermore, the data indicated that the facial expression plus vocalization and the facial expression alone were used differently in social play, i.e., when in physical contact with the playmates and when matching the playmates' open-mouth faces. These findings provide empirical evidence that chimpanzees produce distinctive facial expressions independently from a vocalization, and that their multimodal use affects communicative meaning, important traits for a more explicit and versatile way of communication. As it is still uncertain how human laugh faces evolved, the ChimpFACS data were also used to empirically examine the evolutionary relation between open-mouth faces with laugh sounds of chimpanzees and laugh faces of humans. The ChimpFACS results revealed that laugh faces of humans must have gradually emerged from laughing open-mouth faces of ancestral apes. This work examines the main evolutionary changes of laugh faces since the last common ancestor of chimpanzees and humans.
Proposal of Self-Learning and Recognition System of Facial Expression
NASA Astrophysics Data System (ADS)
Ogawa, Yukihiro; Kato, Kunihito; Yamamoto, Kazuhiko
We describe the realization of a more complicated function using information acquired from several simple, pre-equipped functions. We propose a self-learning and recognition system for human facial expressions, achieved through natural interaction between human and robot. A robot with this system can understand human facial expressions and behave according to them after the learning process is complete. The system is modelled after the process by which a baby learns its parents' facial expressions. By equipping the robot with a camera, the system can acquire face images, and with CdS sensors on the robot's head, the robot can obtain information about human actions. Using the information from these sensors, the robot can extract features of each facial expression. After self-learning is completed, when a person changes his or her facial expression in front of the robot, the robot performs actions corresponding to that facial expression.
Collins, Heather R; Zhu, Xun; Bhatt, Ramesh S; Clark, Jonathan D; Joseph, Jane E
2012-12-01
The degree to which face-specific brain regions are specialized for different kinds of perceptual processing is debated. This study parametrically varied demands on featural, first-order configural, or second-order configural processing of faces and houses in a perceptual matching task to determine the extent to which the process of perceptual differentiation was selective for faces regardless of processing type (domain-specific account), specialized for specific types of perceptual processing regardless of category (process-specific account), engaged in category-optimized processing (i.e., configural face processing or featural house processing), or reflected generalized perceptual differentiation (i.e., differentiation that crosses category and processing type boundaries). ROIs were identified in a separate localizer run or with a similarity regressor in the face-matching runs. The predominant principle accounting for fMRI signal modulation in most regions was generalized perceptual differentiation. Nearly all regions showed perceptual differentiation for both faces and houses for more than one processing type, even if the region was identified as face-preferential in the localizer run. Consistent with process specificity, some regions showed perceptual differentiation for first-order processing of faces and houses (right fusiform face area and occipito-temporal cortex and right lateral occipital complex), but not for featural or second-order processing. Somewhat consistent with domain specificity, the right inferior frontal gyrus showed perceptual differentiation only for faces in the featural matching task. The present findings demonstrate that the majority of regions involved in perceptual differentiation of faces are also involved in differentiation of other visually homogenous categories.